The Future of Charmed Operators on Kubernetes

With the release of Juju v2.9 RC7, we’re previewing the future of Charmed Operators on Kubernetes by introducing sidecar charms that are more consistent with how workloads are managed across other Juju substrates.

Rationale and History

With the first generation of charms on K8s, Charmed Operators ran in their own Pods, and instructed Juju to provision the workload on the Kubernetes substrate through a mechanism called pod.set_spec. Workloads were then provisioned into their own Pods, separate from the Charm code. This approach had some inherent limitations:

  • Limited control over processes in the workload
  • No IPC or local communication between the Charmed Operator and the workload
  • No file or socket sharing
  • Inability to store per-unit state

Charmed Operators implementing this pattern are exposed to more of the challenges associated with distributed computing, especially when, due to the design of the Kubernetes scheduler, the Charmed Operator does not run on the same Kubernetes node as some or all of the workload Pods.

Charms written in this way will continue to work going forward, but we strongly encourage developers to adopt the new sidecar-based approach, and help us make it the best way to operate workloads on Kubernetes or otherwise.

A Refined Approach

With the new approach, both the workload container and the charm container run in the same Pod, implementing the Sidecar Pattern. By definition, the Sidecar Pattern is designed to augment workloads with additional capabilities and features; in this case, the ability to effectively manage and operate complex workloads. This yields a number of advantages:

  • Charmed Operator and workload will always be scheduled on the same node
  • Charmed Operator and workload are co-located in same network namespace
  • Charmed Operator and workload can communicate with SHM or sockets
  • Files can be shared between Charmed Operator and workload more easily
  • Charmed Operator scales with the workload


To augment this approach we’ve developed Pebble: a lightweight, API-driven process supervisor designed for use with modern Charmed Operators.

Pebble enables you to declaratively configure processes to run in a container, and to control those processes throughout the workload lifecycle. It features a layering system that allows the running configuration to be amended in well-defined increments.
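
To make the layering idea concrete, here is a minimal sketch in Python. The layer structure mirrors Pebble's YAML layer format, but the service names, commands, and the `combine` helper are illustrative assumptions, not Pebble's actual implementation:

```python
# Sketch of Pebble-style configuration layers, expressed as Python dicts.
# The structure mirrors Pebble's YAML layer format; the combine helper
# below is a simplified illustration of how layers stack, not Pebble's
# actual merging code.

base_layer = {
    "summary": "snappass layer",
    "services": {
        "snappass": {
            "override": "replace",
            "command": "flask run --host 0.0.0.0",  # hypothetical command
            "startup": "enabled",
        },
    },
}

# A later layer can override or merge service definitions from lower layers.
override_layer = {
    "summary": "debug overrides",
    "services": {
        "snappass": {
            "override": "merge",
            "environment": {"FLASK_DEBUG": "1"},
        },
    },
}

def combine(lower, upper):
    """Naive illustration of layer combination: 'replace' swaps a service
    definition wholesale, while 'merge' folds new keys into the existing
    definition from the lower layer."""
    result = {"services": dict(lower.get("services", {}))}
    for name, svc in upper.get("services", {}).items():
        if svc.get("override") == "merge" and name in result["services"]:
            merged = dict(result["services"][name])
            merged.update({k: v for k, v in svc.items() if k != "override"})
            result["services"][name] = merged
        else:
            result["services"][name] = svc
    return result

combined = combine(base_layer, override_layer)
print(combined["services"]["snappass"]["environment"])  # {'FLASK_DEBUG': '1'}
```

The point of the layering model is that a charm can revise one aspect of a running service (an environment variable, a command flag) without restating, or even knowing, the full configuration beneath it.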

When writing a Charmed Operator that implements the Sidecar Pattern, no modifications are required to the base container images.

How It Works

Juju automatically injects Pebble into workload containers using an initContainer and Volume Mount. The entrypoint of the container is overridden so that Pebble occupies PID 1. Pebble is controlled by the Charmed Operator using a UNIX socket, which is mounted into both the Charmed Operator container, and the workload container. The Charmed Operator communicates over the socket with Pebble to manage running workloads.
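
Because Pebble exposes an HTTP API over that UNIX socket, the charm-side interaction can be sketched with the standard library alone. Treat the socket path and the `/v1/services` endpoint below as assumptions for illustration; in a real charm, the Operator Framework wraps this communication for you:

```python
# Minimal sketch: talking to Pebble over its UNIX socket using only the
# standard library. The socket path and /v1/services endpoint are
# assumptions for illustration; in practice the Operator Framework
# handles this on the charm's behalf.
import http.client
import json
import socket


class UnixSocketConnection(http.client.HTTPConnection):
    """HTTPConnection variant that connects over a UNIX socket, not TCP."""

    def __init__(self, socket_path):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.socket_path)
        self.sock = sock


def list_services(socket_path):
    """Ask the Pebble daemon for the state of its managed services."""
    conn = UnixSocketConnection(socket_path)
    conn.request("GET", "/v1/services")
    resp = conn.getresponse()
    return json.loads(resp.read())


# Example (requires a running Pebble daemon in the workload container):
# services = list_services("/charm/containers/snappass/pebble.socket")
```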

Example Sidecar Charms

To help you get started implementing Charmed Operators using this new approach, you can follow the conversion process of some existing charms. This list will be kept up to date as we progress.

Getting Started

You can use any existing bootstrapped Kubernetes cluster, provided the controller is at least version 2.9. See the documentation for instructions on how to upgrade your controller. If you do not have a cluster ready, you can use MicroK8s!

$ sudo snap install --classic microk8s
$ sudo usermod -aG microk8s $(whoami)
$ sudo microk8s enable storage dns
$ sudo snap alias microk8s.kubectl kubectl
$ newgrp microk8s

Once you’ve done that, you should be able to invoke microk8s commands without using sudo. If you can’t, try logging out and logging back in before continuing.

Next, let’s install and bootstrap Juju, then deploy an example Charmed Operator:

# Make sure we have the correct version of Juju installed
$ sudo snap install juju --classic --channel=2.9/edge
# Bootstrap a Juju controller on MicroK8s
$ juju bootstrap microk8s
# Install Charmcraft
$ sudo snap install charmcraft --edge

# Clone an example charm
$ git clone
# Build the charm
$ cd snappass-test
$ charmcraft build

# Create a model for our deployment
$ juju add-model snappass

# Deploy!
$ juju deploy ./snappass-test.charm \
    --resource snappass-image=benhoyt/snappass-test \
    --resource redis-image=redis
# Wait for the deployment to complete
$ watch -n1 --color "juju status --color"

You can now inspect your deployment with kubectl:

# List pods in the snappass namespace
$ kubectl -n snappass get pods
modeloperator-5cd65496c-6t6sq 1/1 Running 0 3m
snappass-test-0 3/3 Running 0 1m

Note that the snappass-test-0 pod indicates 3 running containers; in this case, these are:

  • Charm container
  • Snappass container
  • Redis container

You can see the Pebble configuration for the snappass and redis containers in the Charm code. The Charm container was injected automatically by Juju.
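
For orientation, the shape of that configuration looks something like the sketch below. The service name and command here are illustrative assumptions; the real values live in the snappass-test charm code. The Operator Framework calls are shown as comments so the layer-building logic stays self-contained:

```python
# Sketch of the Pebble layer a sidecar charm supplies for its workload
# container. The service name and command are illustrative assumptions;
# the real configuration lives in the snappass-test charm code.

def snappass_pebble_layer():
    """Build the Pebble layer the charm hands to the workload container."""
    return {
        "summary": "snappass layer",
        "description": "Pebble config layer for the snappass service",
        "services": {
            "snappass": {
                "override": "replace",
                "summary": "snappass",
                "command": "snappass",  # hypothetical entry point
                "startup": "enabled",
            },
        },
    }

# In the charm itself (using the Operator Framework), a pebble-ready
# handler would apply this layer and start the service, roughly:
#
#     def _on_snappass_pebble_ready(self, event):
#         container = event.workload
#         container.add_layer("snappass", snappass_pebble_layer(), combine=True)
#         container.autostart()
```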

Where to get help

We’re hard at work augmenting the existing Operator Framework and Juju documentation to include these new APIs. If you need help with your new Charm, write a post on the Charmhub Discourse, or reach out on the Charmhub Community Mattermost instance.


Thanks for the informative post. Can you clarify the rationale behind the new approach? You list these reasons:

  • Limited control over processes in the workload
  • No IPC or local communication between the Charmed Operator and the workload
  • No file or socket sharing
  • Inability to store per-unit state

This looks to me like a fundamental architectural difference of Pets vs. Cattle. In other words, Kubernetes very much tries to actively prevent you from doing each of those things, because it makes scaling out much easier when you can just terminate a malfunctioning pod and spin up a new one.

Kubernetes has become massively popular because this approach resonates with people. Existing Kubernetes users are likely to view these limitations as a good thing, because the cattle-based approach works well for them. They will likely view sidecar charms for Kubernetes as trying to fit a square peg into a round hole. What story do we have for convincing them that the pet-based approach is better?

Hi Kenneth,

Thanks for the considered response - I think it’s worth mentioning that Charms implementing this new pattern do not have to interfere with Pods being killed/restarted/rescheduled, but rather have the option of doing so.

While the Pets vs. Cattle argument has many merits, and is applicable to lots of cases, there are situations where it is less beneficial. The traditional example is a highly stateful workload, such as the nodes of a database cluster. If a process fails in such a deployment, it may not be desirable to have the entire Pod rescheduled; failure states might be handled more gracefully with the addition of some considered operations code in the Charm. Without this new pattern, such recoveries are harder to achieve.

Clearly, if one is aiming for a more “immutable” deployment, then letting the scheduler do its thing is preferable, but I(/we) think this provides a nice alternative. There is an upcoming Files API for Pebble which will allow files to be pushed, pulled, and modified in the workload container; for long-running stateful deployments, this may be beneficial for backup/restore and other maintenance or operations activities.

@jameinel and @manadart may have more to add here too! :slight_smile:

Cheers! :penguin:


When you say that it provides a nice alternative, do you see both sidecar charms and podspec-based charms being the path forward for K8s charming? Or is the plan to make sidecar charms work for immutable deployments as well?

So in that context I meant an alternative to defaulting to the Kubernetes scheduler to make decisions about the workload where appropriate.

The plan is for this to be the de-facto method for deploying Charms on Kubernetes. What’s present in Juju 2.9 RC7 is an early preview. There will be features landing in later 2.9.x releases, and onward into the Juju 3.0 series to build upon the current capability :slight_smile:

Hello Kenneth,

We’ve discussed this at length before and with better bandwidth, but we can certainly go over this again until these ideas are more clear.

Given these questions, there are apparently some misconceptions about what sidecars do in Kubernetes. So just to be clear:

  1. The sidecar pattern does not disable the Kubernetes scheduler

  2. The sidecar pattern does not remove immutability, unless the idea of immutability is stretched to fit some very particular personal notion.

These explanations from the Kubernetes community about sidecars might bring some light here:

In the juju world, these are exactly the sorts of things we use Charms for. So it’s nice to see that upstream Kubernetes development and juju are aligned here. We like that very much, so this is the future of Kubernetes and juju.

I’m really hoping we can count on you for that as well.

@niemeyer: You can certainly count on me for this. It’s not really about me, though. I think the broader Kubernetes community will have these same questions about how immutable deployments are handled in Juju, and we should have a solid set of answers to these questions. With that in mind, here’s specifically what I’m wondering about:

The sidecar pattern does not remove immutability

When I mention immutability in my above posts, what I mean is that after deployment, Pods are not modified. If you want to change something about the deployment, you spin up new Pods with the desired configuration change, and terminate the old ones. This article has a good definition of the term, and what I have in mind when I use it.

I’d also like to clarify what I mean when talking about sidecars. In Kubernetes, sidecars are not opposed to immutable deployments. The key point is that if you want to change something about a sidecar, you similarly spin up an updated Pod with the new sidecar, and terminate the old Pod. The Sidecar Pattern mentioned above is different. It is using sidecars as a means to mutably update the main container in a Pod, which is inherently at odds with the idea of immutable deployments, as defined above.

Given these two clarifications, the question that I’m asking is: for people that like deploying their services in an immutable fashion, will that be something we support going forward? If not, what reasons can we list for why they should switch to the new style of deployment? This sort of documentation is important for people that are comfortable with their existing Kubernetes tooling, and are wondering why they should try out Juju.

Of course you can do that with juju. You can do that today, and will be able to continue to do that tomorrow. Nothing forces you to change anything. If you want to deploy a container and never touch it, just do it.

This is also very dogmatic, though, and as soon as you assign any kind of read-write storage to a pod you’re already breaking that rule, because the only reason to have read-write storage associated with a pod is if you want to make modifications to that environment at runtime. In fact, even if you don’t assign storage, but the software being deployed accepts dynamic changes via its API, you are also modifying that software.

This is the software deployment version of functional programming. Purity is beautiful, until you actually need side effects to do what is, in fact, essential. But juju is there for you in either case. If you don’t want to mutate your pod, just don’t.