Get started with the Juju OLM

Imagine your business requires a chat service, such as Mattermost (backed by PostgreSQL), or a real-time data processing engine, such as Kapacitor (backed by InfluxDB). Deploying and operating all of this by hand would take a lot of know-how, work, and time. Not so with Juju! In this tutorial you will learn how to achieve it in no time.

Prerequisites: A workstation, e.g., a laptop.

Create the test environment

The Juju OLM is currently supported only on Linux. However, you can also use it on macOS or Windows by quickly creating an Ubuntu virtual machine with Multipass. This is also a good option if you would like an isolated test environment on your existing Linux system.

First, install Multipass: Linux | macOS | Windows.

Then, open a terminal and use Multipass to launch an Ubuntu virtual machine and open a shell in it, as shown below. We’ve called ours tutorial-vm.

# Launch a VM named "tutorial-vm" with 8 GB RAM, 2 CPU cores,
# 20 GB disk, and Ubuntu 20.04 LTS (Focal Fossa):
$ multipass launch -n tutorial-vm -m 8g -c 2 -d 20G focal
Launched: tutorial-vm

# Open a shell inside the VM:
$ multipass shell tutorial-vm
ubuntu@tutorial-vm:~$

Congratulations, your Ubuntu virtual machine is ready! Any command you type after the last terminal prompt will run inside this virtual machine.

If for whatever reason you need to interrupt this tutorial, we recommend running multipass stop tutorial-vm to stop the instance. When you resume, run multipass start tutorial-vm.

Prepare your cloud

To continue this tutorial you’ll need a cloud. To keep things simple, in this tutorial we will use MicroK8s / LXD, a small Kubernetes / non-Kubernetes cloud that you can create on your local workstation.

Juju supports a wide range of clouds—public or private, Kubernetes or not. If you want to follow this tutorial with a different cloud, see Supported clouds.

Expand to see the instructions for MicroK8s

Note: When strictly confined MicroK8s is released, the group you need to be a member of will change to snap_microk8s.

Execute the code below to install and configure MicroK8s:

# Install MicroK8s from snap:
sudo snap install microk8s --classic --channel=1.24

# Add the 'ubuntu' user to the 'microk8s' group:
sudo usermod -a -G microk8s ubuntu

# Give the 'ubuntu' user ownership of the ~/.kube directory:
sudo chown -f -R ubuntu ~/.kube

# Activate the new 'microk8s' group membership in the current shell:
newgrp microk8s

# Enable the necessary MicroK8s addons:
microk8s enable hostpath-storage dns
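Before continuing, it can help to confirm that MicroK8s is actually up and the addon pods are running. The optional sanity check below uses standard MicroK8s commands; it is not part of the original steps:

```shell
# Block until MicroK8s reports itself as ready
# (this can take a minute or two on first boot):
microk8s status --wait-ready

# Optionally, confirm that the addon pods are up across all namespaces:
microk8s kubectl get pods -A
```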

Expand to see the instructions for LXD

LXD should already be installed on your Ubuntu system. Execute the code below to configure it:

lxd init --auto
lxc network set lxdbr0 ipv6.address none
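To double-check the result, you can inspect the network bridge that lxd init created. This optional check uses the standard lxc CLI:

```shell
# Show the LXD bridge configuration;
# ipv6.address should now be absent or set to none:
lxc network show lxdbr0
```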

Install the Juju CLI client

We will now install the Juju CLI client via snap:

sudo snap install juju --classic

If the installation was successful, you will see a message similar to the one below:

juju 2.9.32 from Canonical✓ installed

Register your cloud with Juju

We have our cloud. Now, let’s register it with Juju!

This always involves the same basic logic:

  1. Add your cloud to Juju.
  2. Add your cloud credentials to Juju.

However, the process differs a little bit depending on whether the cloud is Kubernetes or not. And, depending on the cloud, at least one of these steps may be implicit.

Expand to see the process for MicroK8s

For a Kubernetes cloud, both these logical steps are done in one go, via juju add-k8s. However, for the MicroK8s cloud all of this happens implicitly, as detailed below.

Beginning with Juju v.3.0-beta1 and until MicroK8s releases a strictly confined snap, these steps will have to be done explicitly via juju add-k8s --client <microk8s cloud name> for MicroK8s too.

Juju recognizes your MicroK8s cloud automatically. You can already see it if you run:

juju clouds
Cloud      Regions  Default    Type  Credentials  Source    Description
microk8s   1        localhost  k8s   1            built-in  A Kubernetes Cluster

The credentials are also retrieved automatically. You’re ready to start managing your MicroK8s cloud with Juju!
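If you'd like to verify this yourself, the credentials known to your Juju client can be listed explicitly (an optional check):

```shell
# List the credentials Juju has stored, grouped per cloud:
juju credentials
```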

Expand to see the process for LXD

For a non-Kubernetes cloud, these logical steps are achieved sequentially via juju add-cloud and juju add-credential / juju autoload-credentials. However, for the LXD cloud all of this happens implicitly, as described below.

Juju recognizes your LXD cloud automatically. You can already see it if you run:

juju clouds
Cloud      Regions  Default    Type  Credentials  Source    Description
localhost  1        localhost  lxd   1            built-in  LXD Container Hypervisor

In the juju clouds output, somewhat confusingly, both Cloud and Default are given as localhost. In the first case this refers to the name of your LXD cloud, whereas in the second it refers to the default cloud region, which in this case is just your local host (just as for the MicroK8s cloud, in case you noticed).

The credentials are also retrieved automatically. You’re ready to start managing your LXD cloud with Juju!

At this point what you have is just cloud infrastructure: an empty cloud. No resources are being used yet.

Install a Juju controller in your cloud

Time for things to get real! Let’s bootstrap a controller named tutorial-controller into our cloud! The process is the same for any cloud—just make sure to put in the name of the cloud you want.

juju bootstrap <cloud name> tutorial-controller
Expand to see the code with the MicroK8s cloud name
juju bootstrap microk8s tutorial-controller
Expand to see the code with the LXD cloud name
juju bootstrap localhost tutorial-controller

At this point you are beginning to use resources.

Create a workspace on your cloud

The next step is to create a workspace on your cloud. We do this by creating a model, as shown below. We will name ours tutorial-model.

juju add-model tutorial-model
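To confirm that the model was created and is now the current model, you can run the following optional checks:

```shell
# List the models on the current controller;
# the current model is marked with '*':
juju models

# Show the status of the (still empty) model:
juju status
```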

Deploy applications on your workspace

We’ve got our workspace. Let’s start deploying applications!

To deploy an application with Juju you need to:

  1. Find a suitable charmed operator, or ‘charm’, on Charmhub.

Charms are small pieces of software that contain all the information required to install and manage the application. Juju creates and manages the application based on the instructions contained in the charm. Charms are usually named after the application they represent. Charmhub is the official home of charms.

For a Kubernetes cloud, choose a Kubernetes charm, and for a non-Kubernetes cloud choose a non-Kubernetes charm.

Starting with juju v.3.0, charms that require xenial or bionic are no longer supported.

  2. Deploy the charm, as shown below. Juju will pull the charm from Charmhub and then install the application on your cloud.
juju deploy <charm name>
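If you'd like to learn more about a charm before deploying it, recent Juju clients can query Charmhub directly from the command line. An optional example, using the Mattermost charm deployed below:

```shell
# Show a charm's summary, channels, and supported platforms from Charmhub:
juju info mattermost-k8s

# Search Charmhub for charms matching a keyword:
juju find mattermost
```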
Expand to deploy Mattermost and PostgreSQL on MicroK8s

Let’s deploy mattermost-k8s, the Kubernetes charm for Mattermost, an application that provides a chat service:

juju deploy mattermost-k8s

When deployed, this outputs:

Located charm "mattermost-k8s" in charm-store, revision 20
Deploying "mattermost-k8s" from charm-store charm "mattermost-k8s", revision 20 in channel stable

You can observe the deployment status with the following command:

watch -c juju status --format short --color

But Mattermost requires a database service. Let’s deploy postgresql-k8s, the Kubernetes charm for PostgreSQL, also:

juju deploy postgresql-k8s

When deployed, this outputs:

Located charm "postgresql-k8s" in charm-store, revision 9
Deploying "postgresql" from charm-store charm "postgresql-k8s", revision 9 in channel stable

That’s it. Your applications are now deployed.

Expand to deploy Kapacitor and InfluxDB on LXD

Let’s deploy kapacitor, the non-Kubernetes charm for Kapacitor, an application that provides real time analytics:

juju deploy kapacitor

When deployed, this outputs:

Located charm "kapacitor" in charm-hub, revision 1
Deploying "kapacitor" from charm-hub charm "kapacitor", revision 1 in channel stable on xenial

You can observe the deployment status with the following command:

watch -c juju status --format short --color

But Kapacitor requires a database service. Let’s deploy influxdb, the non-Kubernetes charm for InfluxDB, also:

juju deploy influxdb

When deployed, this outputs:

Located charm "influxdb" in charm-hub, revision 24
Deploying "influxdb" from charm-hub charm "influxdb", revision 24 in channel stable on focal

That’s it. Your applications are now deployed.

Integrate your applications

We’ve deployed two applications that look like they need to work together. However, currently, they don’t know about each other; we need to integrate them. In Juju, integrating applications is a first-class operation that can be done straightforwardly via:

juju integrate <charm 1> <charm 2>

Before juju v.3.0, integrate was relate. Remnants of this persist in some command names, options, and outputs.
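After integrating, you can check that the relation is actually in place; juju status accepts a --relations flag for this (an optional check):

```shell
# Show applications together with the relations between them:
juju status --relations
```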

Expand to integrate Mattermost and PostgreSQL on MicroK8s
juju integrate mattermost-k8s postgresql-k8s:db

The :db suffix after postgresql-k8s specifies the application’s db endpoint.

You can again observe the deployment status with the following command:

watch -c juju status --format short --color

That’s it. Your applications now know about each other and are ready to work together.

Expand to integrate Kapacitor and InfluxDB on LXD
juju integrate kapacitor influxdb

You can again observe the deployment status with the following command:

watch -c juju status --format short --color

That’s it. Your applications now know about each other and are ready to work together.

Test your deployment

You’ve integrated your applications. You now have a functional working system. Time to try it out!

Expand to test your Mattermost deployment on MicroK8s

Execute the code below to get the IP address of your Mattermost app:

juju status

The output should look similar to the example below:

Model           Controller           Cloud/Region        Version  SLA          Timestamp
tutorial-model  tutorial-controller  microk8s/localhost  2.9.34   unsupported  12:10:16+02:00

App             Version                         Status  Scale  Charm           Channel  Rev  Address         Exposed  Message
mattermost-k8s  .../mattermost:v6.6.0-20.04...  active      1  mattermost-k8s  stable    21  10.152.183.185  no       
postgresql-k8s  .../postgresql@ed0e37f          active      1  postgresql-k8s  stable     4                  no       Pod configured

Unit               Workload  Agent  Address       Ports     Message
mattermost-k8s/0*  active    idle   10.1.179.151  8065/TCP  
postgresql-k8s/0*  active    idle   10.1.179.149  5432/TCP  Pod configured

Now, use the IP address and the port of mattermost-k8s to check that the application is running (in the example output these are 10.152.183.185 and 8065, respectively):

curl <IP address>:<port>/api/v4/system/ping

You should see the following:

{"AndroidLatestVersion":"","AndroidMinVersion":"","DesktopLatestVersion":"","DesktopMinVersion":"","IosLatestVersion":"","IosMinVersion":"","status":"OK"}

Congratulations, your Mattermost application is ready to use!

Note: If you get a 404 page, restart your Multipass VM and try again.

(OPTIONAL) Access MicroK8s from host machine

If you would like to reach your Mattermost instance from your host machine (e.g., in a web browser), you will need to set up IP routes.

By default, MicroK8s sets up a sub-network for the running apps, and since we run MicroK8s in a VM, we need to expose this sub-network to the host. First, let’s get the IP range of our service addresses. Within your Multipass instance, you can do this by either running

juju status

(see the example output earlier in this section)

or running

microk8s kubectl -n tutorial-model get svc

MicroK8s example output:

NAME                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
modeloperator              ClusterIP   10.152.183.131   <none>        17071/TCP        6d23h
mattermost-k8s-operator    ClusterIP   10.152.183.142   <none>        30666/TCP        6d23h
postgresql-k8s-operator    ClusterIP   10.152.183.240   <none>        30666/TCP        6d23h
postgresql-k8s-master      NodePort    10.152.183.37    <none>        5432:31056/TCP   6d23h
postgresql-k8s-standbys    NodePort    10.152.183.241   <none>        5432:32357/TCP   6d23h
postgresql-k8s             ClusterIP   10.152.183.103   <none>        5432/TCP         6d23h
postgresql-k8s-endpoints   ClusterIP   None             <none>        <none>           6d23h
mattermost-k8s             ClusterIP   10.152.183.185   <none>        8065/TCP         6d23h

From the example output above we can see that Mattermost is running as a Kubernetes service with IP 10.152.183.185, and that the MicroK8s network is 10.152.183.0/24 (10.152.183.XXX).

The next step is to get the IP address of the VM. You can get it either by running, inside your Multipass instance:

ip -br address show scope global

or by running, from your host:

multipass info <vm name>

Example output, where VM IP is 10.112.13.219:

Name:           vivid-kakapo
State:          Running
IPv4:           10.112.13.219
                10.1.179.128
Release:        Ubuntu 20.04.4 LTS
Image hash:     77f2f05e39e5 (Ubuntu 20.04 LTS)
Load:           0.36 0.51 0.45
Disk usage:     8.6G out of 19.2G
Memory usage:   1.3G out of 7.8G
Mounts:         --

Now we are ready to add a static route from our host to the MicroK8s sub-network, using the Multipass VM IP as a gateway. Run this on your host:

sudo ip route add <MicroK8s subnetwork> via <VM IP>

As in our example:

sudo ip route add 10.152.183.0/24 via 10.112.13.219
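To verify that the route took effect, you can list the host’s routing table and ping the service. This optional check uses the example addresses from above; substitute your own:

```shell
# Confirm the new route is present on the host:
ip route show 10.152.183.0/24

# Check that the Mattermost service is reachable from the host:
curl -s http://10.152.183.185:8065/api/v4/system/ping
```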

Now you can open your web browser and go to the Mattermost URL (see juju status for the Mattermost IP address and port), e.g.:

http://10.152.183.185:8065
Expand to test your Kapacitor deployment on LXD

Execute the code below to get the IP address of your Kapacitor instance:

juju status

Now, use the IP address as shown below to check that the application is running:

curl -G -I 'http://<IP address>:9092/kapacitor/v1/ping'

You should see the following:

curl -G -I 'http://10.233.51.11:9092/kapacitor/v1/ping'
HTTP/1.1 204 No Content
Content-Type: application/json; charset=utf-8
Request-Id: 25e54d3b-2ab3-11ed-8010-000000000000
X-Kapacitor-Version: 1.3.1
Date: Fri, 02 Sep 2022 11:34:04 GMT

A ping request does nothing but respond with a 204 HTTP code.

Congratulations, your Kapacitor application is ready to use!

Destroy your test environment

Once you are done, you can run the code below to stop and delete your Multipass test environment.

# Stop your instance
multipass stop tutorial-vm

# Delete your instance permanently
multipass delete --purge tutorial-vm 

You can also uninstall Multipass to remove any trace of this guide.

Next steps

This tutorial has introduced you to the basic things you can do with Juju. But there is a lot more to explore:

If you are wondering “How do I…?”, visit the Juju OLM How-to docs.
If you are wondering “What is…?”, visit the Juju OLM Reference docs.
If you are wondering “Why…?” or “So what?”, visit the Juju OLM Explanation docs.
When I ran microk8s kubectl get pods -n tutorial-model, I didn’t get IP details, so I used:

microk8s kubectl get pods -o wide -n tutorial-model

Did I do something wrong?


I will fix it

It’s better to run juju status and pick the IP of the Mattermost unit. Moreover, you need to add the port, which is 8065.

Then you can execute curl.


@beliaev-maksim I think the fix in terms of juju status is correct—thanks! I’m wondering why microk8s kubectl get pods -n tutorial-model failed, though. @fernape any thoughts?

It does not fail. The problem is that this command does not show the IP addresses of the pods. Adding the -o wide flag shows a different view of the get command, and then you see a list of IPs.

Anyway, juju is a better way to grab k8s data whenever possible.


When I try to deploy kapacitor on LXD on a jammy VM (the Multipass default), the container gets stuck and the process doesn’t finish, likely due to an incompatibility between xenial containers and jammy hosts, as discussed in the chat.

I had to launch a focal VM with multipass launch -n tutorial-vm -m 8g -c 2 -d 20G focal


Thanks, I’ve updated the text to what you had.
