Ability to control mount propagation in volume config

Hello,

I’m in the process of writing a charm for a CSI driver for Kubernetes. It’s modeled after https://github.com/canonical/ceph-csi-operators/, which seems to work fine for almost everything, with one huge exception that currently makes it unusable: I can’t set mountPropagation: Bidirectional on my volume mount in the CSI plugin pod.

Without this flag, any mounts done in the pod are not visible outside that container, i.e. not visible on the host or in any other pod that requested the volume. I took a look at the Juju source, and it looks like this is simply not possible currently.
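For context, here is a sketch of the volumeMount my node plugin container needs (the volume and path names are just illustrative for my driver, not anything Juju-specific):

```python
# The volumeMount a CSI node plugin typically needs (names illustrative).
# Without mountPropagation, mounts performed inside the container stay
# private to its own mount namespace and never reach the host.
desired_volume_mount = {
    "name": "kubelet-dir",          # hypothetical volume name
    "mountPath": "/var/lib/kubelet",
    # The field that pod_spec currently cannot express:
    "mountPropagation": "Bidirectional",
}
```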

Any idea how I could solve this issue? I’m also pretty sure the same issue affects the ceph-csi operators.

Hi there, welcome to Charmhub :slight_smile:

The charm you reference there is using a slightly outdated pattern (though it’ll continue to work!). The only pain point here is that there is not much active work to add more supported fields to our own pod_spec definition.

Meanwhile, we have introduced a slightly different method of managing workloads on Kubernetes outlined in this post.

There are several differences here, and while support is still early, we do enable easy access to the Kubernetes API, which would allow you to change that field after Juju has created the pods. The full documentation for the Operator Framework (and examples of how to get started with the sidecar approach) is at Juju | The Charmed Operator Framework.
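To illustrate the idea (a sketch, not an official recipe; the container, volume, and resource names below are all assumptions for your driver), a charm can build a strategic-merge patch that adds the field to the pod template after Juju has created the StatefulSet:

```python
def mount_propagation_patch(container_name: str, mount_name: str) -> dict:
    """Build a strategic-merge patch that sets mountPropagation on one
    volumeMount of the named container. Strategic merge matches list
    entries by their "name" key, so the rest of the pod template is
    left untouched."""
    return {
        "spec": {
            "template": {
                "spec": {
                    "containers": [
                        {
                            "name": container_name,
                            "volumeMounts": [
                                {
                                    "name": mount_name,
                                    "mountPropagation": "Bidirectional",
                                }
                            ],
                        }
                    ]
                }
            }
        }
    }

# Applying it with lightkube (requires cluster credentials, e.g. from a
# charm hook; the application/namespace names are assumptions):
#
#   from lightkube import Client
#   from lightkube.resources.apps_v1 import StatefulSet
#   from lightkube.types import PatchType
#
#   Client().patch(
#       StatefulSet,
#       name="my-csi-plugin",
#       namespace="my-model",
#       obj=mount_propagation_patch("csi-plugin", "kubelet-dir"),
#       patch_type=PatchType.STRATEGIC,
#   )
```

Note that the StatefulSet controller will roll the pods when the template changes, so the patch is best applied once, early in the charm's lifecycle.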

Additionally, I wrote a charm for the Kubernetes Dashboard that makes some more involved changes to the underlying deployment using the Kubernetes API which might be of use to you.

Feel free to share your code, or elements of it and we can try and get you started!

Thanks, Jon

Hi!

Thanks for your reply, and especially the link to your charm. I’ve already spotted some useful ideas for my operator.

With regard to the new way to manage workloads: for my use case (a CSI node agent should run on every node in a cluster), a DaemonSet is the Kubernetes-native way to achieve that. I was under the impression that this is currently not possible with the new approach; only a StatefulSet is possible.

Now, I could work around that with some tricks using pod anti-affinity and scaling to the right number of replicas, but that seems very inconvenient. That’s why I started using the older pod_spec approach.
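To spell out the trick I mean (a sketch with an assumed label key/value, not something I’ve deployed yet): a hard pod anti-affinity rule on the app’s own label, keyed on the hostname topology, forces every replica onto a distinct node, and then the application is scaled to the node count:

```python
def one_pod_per_node_affinity(app_label: str) -> dict:
    """Pod anti-affinity that forces replicas onto distinct nodes,
    as a stand-in for a DaemonSet. The label selector must match the
    labels Juju puts on the app's pods (assumed here)."""
    return {
        "podAntiAffinity": {
            "requiredDuringSchedulingIgnoredDuringExecution": [
                {
                    "labelSelector": {
                        "matchLabels": {"app.kubernetes.io/name": app_label}
                    },
                    # One pod per hostname, i.e. per node:
                    "topologyKey": "kubernetes.io/hostname",
                }
            ]
        }
    }
```

With that in place one would still have to keep the replica count in step with the cluster size by hand, e.g. `juju scale-application my-csi-plugin 3` for a three-node cluster, which is exactly the inconvenience I mean.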

Is there a way around this issue?

Hi

As of right now, we don’t directly support DaemonSets in Juju or the Operator Framework, though the concept of having applications run on each node is something we’re considering implementing for both Kubernetes charms and machine charms.

Your suggestion about scaling and anti-affinities should work for now, at least for testing the underlying functionality of the charm!

Hope this helps! Jon