Set up your development environment

From zero to hero: Write your first Kubernetes charm > Set up your development environment

See previous: Study your application

In this chapter of the tutorial you will set up your development environment.


  1. Create your charm project directory
  2. Set up an Ubuntu VM with Multipass
  3. Install Charmcraft
  4. Get Ops
  5. Install MicroK8s
  6. Set up Juju
  7. Install Tox

Create your charm project directory

On your local machine, create your charm project directory. On Linux, for example, run:

mkdir ~/fastapi-demo

In your home directory, this will create a directory called fastapi-demo. It is the directory where you will put all your charm files.

Set up an Ubuntu VM with Multipass

The goal of this section is to help you get a Linux machine that supports installation from snap, so that you can install Charmcraft.

If you already have that, we still recommend this step as a way for you to experiment in an isolated development environment.

However, if you wish to continue directly on your existing Linux machine, you may do so – just ignore the code lines where we go through Multipass.

However, please keep in mind that the instructions in this tutorial have only been tested for the scenario with Multipass.

First, on your local machine, install Multipass: Linux | macOS | Windows.

Multipass is a tool that will help you quickly spin up an Ubuntu VM.

Read more: Multipass

Now, open a terminal window and use Multipass to launch an Ubuntu VM called charm-dev:

# Launch a VM "charm-dev" with 8 GB RAM, 2 CPU cores, and 20 GB disk:
multipass launch -n charm-dev -m 8g -c 2 -d 20G
# >>> Launched: charm-dev

As a crucial piece of the SDK – Charmcraft – requires a Linux machine that supports installation via snap, this step ensures you will have a suitable machine. As development can be messy, this also ensures that you can quickly bounce back in case anything goes wrong.

Read more: multipass launch, multipass delete

Now use Multipass to mount your local project directory to your VM, as below.

# Mount your local project directory into the VM:
multipass mount ~/fastapi-demo charm-dev:~/fastapi-demo

With this setup you can use your preferred editor to make changes on your local machine, and all these changes will be available inside the VM as well, at the mount point ~/fastapi-demo.
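If you want to confirm that the mount is working, one quick check (assuming the launch and mount commands above succeeded; the file name `hello.txt` is just an example) is to create a file on the host and list it from inside the VM:

```shell
# Create a test file in the project directory on the host:
touch ~/fastapi-demo/hello.txt

# List the mounted directory from inside the VM; 'hello.txt' should appear:
multipass exec charm-dev -- ls ~/fastapi-demo

# 'multipass info' also shows the active mounts for the VM:
multipass info charm-dev
```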

Read more: multipass mount

Finally, open a shell inside the VM:

multipass shell charm-dev
# >>> ubuntu@charm-dev:~$ 

Any command you type after the last terminal prompt (ubuntu@charm-dev:~$) will run inside this VM.

Read more: multipass shell

Install Charmcraft

On your Ubuntu VM, install the SDK’s CLI tool Charmcraft:

# Charmcraft relies on LXD. Configure LXD:
lxd init --auto

# Install Charmcraft:
sudo snap install charmcraft --classic

Charmcraft will give you access to a rich set of powerful CLI commands. You can use them to simplify the creation, building, and publication of your charm.
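To verify the installation and get a first look at those commands, you can run:

```shell
# Print the installed Charmcraft version:
charmcraft version

# List the available Charmcraft commands:
charmcraft --help
```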

Read more: Charmcraft (charmcraft)

Get Ops

The next thing you’ll need is the SDK’s Python operator framework Ops. You don’t need to do anything to get it at this point – just plan on using it. That is, be ready to require, use, and import it in your code as you would any other Python library.

Read more: Ops (ops)

Install MicroK8s

On your Ubuntu VM, install and configure MicroK8s:

If you’re not using a Multipass VM but rather your existing Linux:
In the instructions below, replace the ubuntu user with $USER.

# Install Microk8s from snap:
sudo snap install microk8s --channel 1.25-strict/stable

# Add the 'ubuntu' user to the 'snap_microk8s' group:
sudo usermod -a -G snap_microk8s ubuntu

# Give the 'ubuntu' user permissions to read the ~/.kube directory:
sudo chown -f -R ubuntu ~/.kube

# Activate the new group membership in the current shell session:
newgrp snap_microk8s

# Enable the necessary MicroK8s addons:
sudo microk8s enable hostpath-storage dns

# Set up a short alias for the Kubernetes CLI:
sudo snap alias microk8s.kubectl kubectl

MicroK8s is a minimal production Kubernetes, so now you have a small Kubernetes cloud (by default called microk8s) on your machine.
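Before moving on, it is worth confirming that the cluster is up. A quick sanity check, using the kubectl alias created above:

```shell
# Block until MicroK8s reports that it is ready:
microk8s status --wait-ready

# The 'kubectl' alias set up above talks to the MicroK8s cluster:
kubectl get nodes
# A single node in the 'Ready' state means the cluster is working.
```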

Read more: MicroK8s

Set up Juju

On your Ubuntu VM, install Juju, connect it to your MicroK8s cloud, and prepare a workspace (‘model’):

# Install 'juju':
sudo snap install juju --channel 3.1/stable
# >>> juju (3.1/stable) 3.1.2 from Canonical✓ installed

# Since the juju package is strictly confined, you also need to manually create a path:
mkdir -p ~/.local/share

# Register your "microk8s" cloud with juju:
# Not necessary: Juju recognises a MicroK8s cloud automatically, as you can see by running 'juju clouds'.
juju clouds
# >>> Cloud      Regions  Default    Type  Credentials  Source    Description
# >>> localhost  1        localhost  lxd   0            built-in  LXD Container Hypervisor
# >>> microk8s   1        localhost  k8s   1            built-in  A Kubernetes Cluster
# (If for any reason this doesn't happen, you can register it manually using 'juju add-k8s microk8s'.)

# Install a "juju" controller into your "microk8s" cloud. 
# We'll name ours "tutorial-controller".
juju bootstrap microk8s tutorial-controller

# Create a workspace, or 'model', on this controller. 
# We'll call ours "charm-model".
# Juju will create a Kubernetes namespace "charm-model"
juju add-model charm-model

# Check status:
juju status
# >>> Model         Controller           Cloud/Region        Version  SLA          Timestamp
# >>> charm-model  tutorial-controller  microk8s/localhost  3.0.2    unsupported  16:05:03+01:00

# >>> Model "admin/charm-model" is empty.

# There's your charm model!
# This is where you will deploy your charm.

If you are working with an existing microk8s installation, first check the confinement level of the microk8s snap:

snap info microk8s | grep installed

Juju cannot work with a microk8s snap that has the “classic” confinement level, as it communicates with the microk8s snap via a snap interface to retrieve the authentication credentials. If this is the case for you, manually share the microk8s credentials with Juju:

# Share the microk8s config with juju
sudo sh -c "microk8s config | tee /var/snap/juju/current/microk8s/credentials/client.config"

# Give the current user permission to this file
sudo chown -f -R $USER:$USER /var/snap/juju/current/microk8s/credentials/client.config

You should now be able to run juju bootstrap microk8s to install a juju controller into your microk8s cloud.

Read more: Strict Confinement

If you’re building a charm you’ve probably already used Juju, and you already know what all this is about – this model on MicroK8s is where you will use Juju to test-deploy and test-operate your charm, to make sure that it will work for anyone else wanting to use your charm as well.

Read more: Juju

Install Tox

On your Ubuntu VM, install Tox:

# tox depends on Python and pip:
sudo apt update; sudo apt install python3 python3-pip
# Get tox via pip
python3 -m pip install --user tox

As you probably already know if you’ve worked in Python, Tox is a tool for automated and standardized testing. It is a popular tool in general and also the tool of choice among charmers. You will use it in a bit to check that the charm you’ve built actually works as intended.
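Note that `pip install --user` places the `tox` entry point in `~/.local/bin`, which may not be on your `PATH` yet. A quick check:

```shell
# Make sure the user-level install location is on PATH:
export PATH="$HOME/.local/bin:$PATH"

# Confirm that Tox is installed and runnable:
tox --version
```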

Read more: Tox

Congratulations, your development environment is now ready!

Now, if you want to:

  • Continue the tutorial:
    • Use the project directory on your host machine to create and edit your charm files.
    • Use your Ubuntu VM to run Charmcraft or Juju commands.
    • To pause:
      Stop the Ubuntu VM by running multipass stop charm-dev.
    • To resume:
      Restart the Ubuntu VM: multipass start charm-dev.
      Reopen a shell into it: multipass shell charm-dev.
      Reaccess the charm project directory: cd ~/fastapi-demo.
  • Abandon the tutorial and clean up:
    • Delete the Ubuntu VM: multipass delete --purge charm-dev
    • Delete the charm project directory from your host machine.
    • Uninstall Multipass: Linux | macOS | Windows.

See next: Create a minimal Kubernetes charm

Contributors: @beliaev-maksim

I like this, but the process of setting up your environment would differ if you went for a VM charm on, let’s say, LXD. I’m trying to reuse this step for LXD but realise that the extra requirement on K8s makes this guide less reusable, and implicitly asks the reader/user to accept having to learn about K8s before starting development of charms.

Can this step be made neutral to K8s/LXD/VM – or do we diverge between LXD and VM charms even at this stage?

Oh, and another note: why is “lxd init --auto” in here for this purpose?


@erik-lonroth This file was intended as a chapter in the Write a Kubernetes Charm tutorial.


can you please put a header that this is a part of Zero to Hero ?

@beliaev-maksim How about a breadcrumb (see above)?

Note that when published on the website the side Navigation will show this doc nested under Tutorial > Write your first Kubernetes charm (or something like that; as opposed to Tutorial > Write your first machine charm).


I’m basically repeating all of the above for LXD as we speak.

This is what it looks like:

Spawn a new VM on LXD

lxc launch ubuntu:22.04 charm-dev --vm

Get a shell on the vm:

lxc shell charm-dev

Force IPv4 for apt if you don’t have working IPv6.

 root@charm-dev:~# echo 'Acquire::ForceIPv4 "true";' > /etc/apt/apt.conf.d/99force-ipv4

Install juju and charmcraft and initialize lxd

 root@charm-dev:~# sudo snap install juju --classic
 root@charm-dev:~# sudo snap install charmcraft --classic
 root@charm-dev:~# lxd init --auto

Validate that you can see your new LXD cloud (also called a remote in LXD terms):

lxc remote list

Juju has picked this cloud up and named it “localhost”.

 root@charm-dev:~# juju clouds --client
Only clouds with registered credentials are shown.
There are more clouds, use --all to see them.
You can bootstrap a new controller using one of these clouds...

Clouds available on the client:
Cloud      Regions  Default    Type  Credentials  Source    Description
localhost  1        localhost  lxd   1            built-in  LXD Container Hypervisor

Create a controller on the localhost cloud

 root@charm-dev:~# juju bootstrap localhost tutorial-controller
Creating Juju controller "tutorial-controller" on localhost/localhost
Looking for packaged Juju agent version 2.9.38 for amd64
Located Juju agent version 2.9.38-ubuntu-amd64 at
To configure your system to better support LXD containers, please see:
Launching controller instance(s) on localhost/localhost...
 - juju-3a5da3-0 (arch=amd64)                 
Installing Juju agent on bootstrap instance
Fetching Juju Dashboard 0.8.1
Waiting for address
Attempting to connect to fd42:e802:396b:494a:216:3eff:fe4b:8fc7:22
Attempting to connect to
Connected to fd42:e802:396b:494a:216:3eff:fe4b:8fc7
Running machine configuration script...
Bootstrap agent now started
Contacting Juju controller at to verify accessibility...
Bootstrap complete, controller "tutorial-controller" is now available
Controller machines are in the "controller" model
Initial model "default" added

Validate that the juju client can talk to your localhost cloud.

root@charm-dev:~# juju status
Model    Controller           Cloud/Region         Version  SLA          Timestamp
default  tutorial-controller  localhost/localhost  2.9.38   unsupported  15:26:30Z

Model "admin/default" is empty.

With that, you have a juju-cloud and juju-controller ready for development.


Some users, including myself, love the CLI autocompletion feature very much. However, this is not working in juju 3+ as it’s reported here. One temporary solution is the following:

sudo cp /snap/juju/current/bash_completions/juju /usr/share/bash-completion/completions/

It would be great if this part can be included in this tutorial.