Get started with the Juju OLM

Imagine your business requires a chat service, such as Mattermost (backed by PostgreSQL), or an identity service, such as Keystone (backed by MySQL). To deploy and operate all this by hand would take you a lot of know-how, work, and time. Not so with Juju! In this tutorial you will learn how to achieve this in no time.

Prerequisites: A workstation, e.g., a laptop.

Contents:

  1. Create the test environment
  2. Prepare your cloud
  3. Install the Juju CLI client
  4. Register your cloud with Juju
  5. Install a Juju controller in your cloud
  6. Create a workspace on your cloud
  7. Deploy applications on your workspace
  8. Integrate your applications
  9. Test your deployment
  10. Destroy the test environment
  11. Next steps

Create the test environment

When you’re learning, it’s good to do it in an isolated environment. Let’s start by creating an Ubuntu virtual machine with Multipass.

First, install Multipass: Linux | macOS | Windows.

Then, open a terminal and use Multipass to launch an Ubuntu virtual machine and open a shell in it, as shown below. We’ve called ours tutorial-vm.

# Launch a VM "tutorial-vm" with 8 GB RAM, 2 CPU cores,
# a 20 GB disk, and Ubuntu 22.04 LTS (Jammy Jellyfish):
$ multipass launch -n tutorial-vm -m 8g -c 2 -d 20G jammy
Launched: tutorial-vm

# Open a shell inside the VM:
$ multipass shell tutorial-vm
ubuntu@tutorial-vm:~$

Congratulations, your Ubuntu virtual machine is ready! Any command you type after this terminal prompt will run inside this virtual machine.

If for whatever reason you need to interrupt this tutorial, we recommend running multipass stop tutorial-vm to stop the instance. When you resume, run multipass start tutorial-vm.
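For example:

# Pause the VM (run on your host, not inside the VM):
multipass stop tutorial-vm

# Resume later and reopen a shell:
multipass start tutorial-vm
multipass shell tutorial-vm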

Prepare your cloud

To continue, you’ll need a cloud. To keep things simple, in this tutorial we will use MicroK8s / LXD, a small Kubernetes / non-Kubernetes cloud that you can create on your local workstation.

Juju supports a wide range of clouds—public or private, Kubernetes or not. If you want to follow this tutorial with a different cloud, see Supported clouds.

Expand to see the instructions for MicroK8s

When strictly confined MicroK8s is released, the group you need to be a member of changes to snap_microk8s.

Execute the code below to install and configure MicroK8s:

# Install MicroK8s from snap:
sudo snap install microk8s --classic --channel=1.24

# Add the 'ubuntu' user to the 'microk8s' group:
sudo usermod -a -G microk8s ubuntu

# Give the 'ubuntu' user permissions to read the ~/.kube directory:
sudo chown -f -R ubuntu ~/.kube

# Activate the new 'microk8s' group membership in the current shell:
newgrp microk8s

# Enable the necessary MicroK8s addons:
microk8s enable hostpath-storage dns
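
If you’d like to double-check that MicroK8s is up before continuing, you can optionally run:

# Wait until MicroK8s reports itself as ready:
microk8s status --wait-ready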

Expand to see the instructions for LXD

LXD should already be installed on your Ubuntu system. Execute the code below to configure it:

lxd init --auto
lxc network set lxdbr0 ipv6.address none
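
To optionally verify that the change took effect, you can inspect the bridge configuration:

# Confirm that 'ipv6.address' is now 'none' on the default bridge:
lxc network show lxdbr0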

Install the Juju CLI client

We will now install the Juju CLI client via snap:

sudo snap install juju --classic

If the installation was successful, you will see a message similar to the one below:

juju 2.9.42 from Canonical✓ installed
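
You can also confirm the installed client version at any time:

juju version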

Register your cloud with Juju

We have our cloud. Now, let’s register it with Juju!

This always involves the same basic logic:

  1. Add your cloud to Juju.
  2. Add your cloud credentials to Juju.

However, the process differs a little bit depending on whether the cloud is Kubernetes or not. And, depending on the cloud, at least one of these steps may be implicit.

Expand to see the process for MicroK8s

For a Kubernetes cloud, both these logical steps are done in one go, via juju add-k8s. However, for the MicroK8s cloud all of this happens implicitly, as detailed below.

Beginning with Juju v.3.0-beta1 and until MicroK8s releases a strictly confined snap, these steps will have to be done explicitly via juju add-k8s --client <microk8s cloud name> for MicroK8s too.

Juju recognizes your MicroK8s cloud automatically. You can already see it if you run:

juju clouds
Cloud      Regions  Default    Type  Credentials  Source    Description
microk8s   1        localhost  k8s   1            built-in  A Kubernetes Cluster

The credentials are also retrieved automatically. You’re ready to start managing your MicroK8s cloud with Juju!
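
If you’d like to double-check what was retrieved, you can optionally list the credentials known to your Juju client:

# Optional: inspect the automatically retrieved credentials:
juju credentials --client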

Expand to see the process for LXD

For a non-Kubernetes cloud, these logical steps are achieved sequentially via juju add-cloud and juju add-credential / juju autoload-credentials. However, for the LXD cloud all of this happens implicitly, as described below.

Juju recognizes your LXD cloud automatically. You can already see it if you run:

juju clouds
Cloud      Regions  Default    Type  Credentials  Source    Description
localhost  1        localhost  lxd   1            built-in  LXD Container Hypervisor

In the juju clouds output, somewhat confusingly, both Cloud and Default read localhost. In the first case, however, this is the name of your LXD cloud, whereas in the second it is the default cloud region, which in this case is just your local host (just as for the MicroK8s cloud, in case you noticed).

The credentials are also retrieved automatically. You’re ready to start managing your LXD cloud with Juju!
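
To optionally see more detail about the cloud definition that Juju registered, run:

# Optional: show the full definition of the 'localhost' cloud:
juju show-cloud localhost --client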

At this point, what you have is just cloud infrastructure: an empty cloud. No resources are being used yet.

Install a Juju controller in your cloud

Time for things to get real! Let’s bootstrap a controller named tutorial-controller into our cloud! The process is the same for any cloud—just make sure to put in the name of the cloud you want.

juju bootstrap <cloud name> tutorial-controller

Expand to see the code with the MicroK8s cloud name

juju bootstrap microk8s tutorial-controller

Expand to see the code with the LXD cloud name

juju bootstrap localhost tutorial-controller

At this point you are beginning to use resources.
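
You can verify that the bootstrap succeeded by listing your controllers:

juju controllers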

Create a workspace on your cloud

The next step is to create a workspace on your cloud. We do this by creating a model, as shown below. We will name ours tutorial-model.

juju add-model tutorial-model
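
You can optionally confirm that the model was created and is now the current model:

# Optional: list models; the current one is marked with an asterisk:
juju models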

Deploy applications on your workspace

We’ve got our workspace. Let’s start deploying applications!

To deploy an application with Juju you need to:

  1. Find a suitable charmed operator, or ‘charm’, on Charmhub.

Charms are small pieces of software that contain all the information required to install and manage the application. Juju creates and manages the application based on the instructions contained in the charm. Charms are usually named after the application they represent. Charmhub is the official home of charms.

For a Kubernetes cloud, choose a Kubernetes charm, and for a non-Kubernetes cloud choose a non-Kubernetes charm.

Starting with juju v.3.0, charms that require xenial or bionic are no longer supported.

  2. Deploy the charm, as shown below. Juju will pull the charm from Charmhub and then install the application on your cloud.

juju deploy <charm name>
Expand to deploy Mattermost and PostgreSQL on MicroK8s

Let’s deploy mattermost-k8s, the Kubernetes charm for Mattermost, an application that provides a chat service:

juju deploy mattermost-k8s

When deployed, this outputs:

Located charm "mattermost-k8s" in charm-store, revision 20
Deploying "mattermost-k8s" from charm-store charm "mattermost-k8s", revision 20 in channel stable

You can observe the deployment status with the following command:

watch -c juju status --format short --color

But Mattermost requires a database service. Let’s also deploy postgresql-k8s, the Kubernetes charm for PostgreSQL:

juju deploy postgresql-k8s

When deployed, this outputs:

Located charm "postgresql-k8s" in charm-store, revision 9
Deploying "postgresql" from charm-store charm "postgresql-k8s", revision 9 in channel stable

That’s it. Your applications are now deployed.

Expand to deploy Keystone and MySQL on LXD

Let’s deploy keystone, the non-Kubernetes charm for Keystone, an application that provides OpenStack’s Identity service:

juju deploy keystone --series focal

When deployed, this outputs:

Located charm "keystone" in charm-hub, revision 539
Deploying "keystone" from charm-hub charm "keystone", revision 539 in channel stable on focal

You can observe the deployment status with the following command:

watch -c juju status --format short --color

But Keystone requires a database service. Let’s also deploy mysql, the non-Kubernetes charm for MySQL:

Keystone is normally deployed with mysql-innodb-cluster, but this requires a 3-unit cluster to be instantiated. For simplicity, we will deploy with mysql.

juju deploy mysql --channel edge

When deployed, this outputs:

Located charm "mysql" in charm-hub, revision 114
Deploying "mysql" from charm-hub charm "mysql", revision 114 in channel edge on ubuntu@22.04/stable

That’s it. Your applications are now deployed.

Integrate your applications

We’ve deployed two applications that look like they need to work together. Currently, however, they don’t; we need to integrate them. In Juju, integrating applications is a first-class operation that can be done straightforwardly via:

juju integrate <charm 1> <charm 2>

Before juju v.3.0, integrate was relate. Remnants of this persist in some command names, options, and outputs.
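
Once you have created an integration, you can optionally confirm it by including the relations in the status output:

# Optional: show established relations alongside the status:
juju status --relations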

Expand to integrate Mattermost and PostgreSQL on MicroK8s
juju integrate mattermost-k8s postgresql-k8s:db

The :db after postgresql-k8s specifies the db endpoint.

You can again observe the deployment status with the following command:

watch -c juju status --format short --color

That’s it. Your applications now know about each other and are ready to work together.

Expand to integrate Keystone and MySQL on LXD
juju integrate keystone mysql

You can again observe the deployment status with the following command:

watch -c juju status --format short --color

That’s it. Your applications now know about each other and are ready to work together.

Test your deployment

You’ve integrated your applications. You now have a functional working system. Time to try it out!

Expand to test your Mattermost deployment on MicroK8s

Execute the code below to get the IP address of your Mattermost app:

juju status

The example output should look like this:

Model           Controller           Cloud/Region        Version  SLA          Timestamp
tutorial-model  tutorial-controller  microk8s/localhost  2.9.34   unsupported  12:10:16+02:00

App             Version                         Status  Scale  Charm           Channel  Rev  Address         Exposed  Message
mattermost-k8s  .../mattermost:v6.6.0-20.04...  active      1  mattermost-k8s  stable    21  10.152.183.185  no       
postgresql-k8s  .../postgresql@ed0e37f          active      1  postgresql-k8s  stable     4                  no       Pod configured

Unit               Workload  Agent  Address       Ports     Message
mattermost-k8s/0*  active    idle   10.1.179.151  8065/TCP  
postgresql-k8s/0*  active    idle   10.1.179.149  5432/TCP  Pod configured

Now, use the IP address and the port of mattermost-k8s to check that the application is running (in the example output above these are 10.152.183.185 and 8065, respectively):

curl <IP address>:<port>/api/v4/system/ping

You should see the following:

{"AndroidLatestVersion":"","AndroidMinVersion":"","DesktopLatestVersion":"","DesktopMinVersion":"","IosLatestVersion":"","IosMinVersion":"","status":"OK"}

Congratulations, your Mattermost application is ready to use!

Note: if you get a 404 page, restart your Multipass VM and try again.

(OPTIONAL) Access MicroK8s from the host machine

If you would like to reach your Mattermost instance from your host machine (e.g., in a web browser), you will need to set up IP routes.

By default, MicroK8s sets up a sub-network for the running apps, and since we run MicroK8s in a VM, we need to expose this sub-network to the host. First of all, let’s get the IP range of our service addresses. Within your Multipass instance, you can do this either by running

juju status

(see the example command output above)

or by running

microk8s kubectl -n tutorial-model get svc

MicroK8s example output:

NAME                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
modeloperator              ClusterIP   10.152.183.131   <none>        17071/TCP        6d23h
mattermost-k8s-operator    ClusterIP   10.152.183.142   <none>        30666/TCP        6d23h
postgresql-k8s-operator    ClusterIP   10.152.183.240   <none>        30666/TCP        6d23h
postgresql-k8s-master      NodePort    10.152.183.37    <none>        5432:31056/TCP   6d23h
postgresql-k8s-standbys    NodePort    10.152.183.241   <none>        5432:32357/TCP   6d23h
postgresql-k8s             ClusterIP   10.152.183.103   <none>        5432/TCP         6d23h
postgresql-k8s-endpoints   ClusterIP   None             <none>        <none>           6d23h
mattermost-k8s             ClusterIP   10.152.183.185   <none>        8065/TCP         6d23h

From the example output above we can see that Mattermost is running as a Kubernetes service with the IP 10.152.183.185 and that the MicroK8s network is 10.152.183.0/24 (10.152.183.XXX).

The next step is to get the IP address of the VM. You can get it either by running, inside the Multipass instance:

ip -br address show scope global

or by running, from your host:

multipass info <vm name>

Example output, where the VM IP is 10.112.13.219:

Name:           vivid-kakapo
State:          Running
IPv4:           10.112.13.219
                10.1.179.128
Release:        Ubuntu 20.04.4 LTS
Image hash:     77f2f05e39e5 (Ubuntu 20.04 LTS)
Load:           0.36 0.51 0.45
Disk usage:     8.6G out of 19.2G
Memory usage:   1.3G out of 7.8G
Mounts:         --

Now we are ready to add a static route from our host to the MicroK8s sub-network, using the Multipass VM IP as a gateway. Run this on your host:

sudo ip route add <MicroK8s subnetwork> via <VM IP>

As in our example:

sudo ip route add 10.152.183.0/24 via 10.112.13.219

Now you can open the Mattermost URL in your web browser (see juju status for the Mattermost IP address and port), e.g.:

http://10.152.183.185:8065
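
When you no longer need access from the host, you can remove the static route again (using the example values from above):

# Run on your host to remove the route:
sudo ip route del 10.152.183.0/24 via 10.112.13.219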
Expand to test your Keystone deployment on LXD

Execute the code below to get the IP address of your Keystone instance:

juju status

Now, use the IP address as shown below to check that the application is running:

curl http://<IP address>:5000/

You should see the following:

curl http://10.251.113.159:5000/
{"versions": {"values": [{"id": "v3.14", "status": "stable", "updated": "2020-04-07T00:00:00Z", "links": [{"rel": "self", "href": "http://10.251.113.159:5000/v3/"}], "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}]}]}}

Congratulations, your Keystone application is ready to use!

Destroy the test environment

Once you are done, you can run the code below to stop and delete your Multipass test environment.

# Stop your instance
multipass stop tutorial-vm

# Delete your instance permanently
multipass delete --purge tutorial-vm 

You can also uninstall Multipass to remove any trace of this guide.

Next steps

This tutorial has introduced you to the basic things you can do with Juju. But there is a lot more to explore:

If you are wondering…    visit…
“How do I…?”             Juju OLM How-to docs
“What is…?”              Juju OLM Reference docs
“Why…?”, “So what?”      Juju OLM Explanation docs
microk8s kubectl get pods -n tutorial-model

I didn’t get IP details so I used:

microk8s kubectl get pods -o wide -n tutorial-model

Did I do something wrong?

I will fix it

Better to run juju status and pick the IP of the Mattermost unit. Moreover, you need to add the port, which is 8065.

Then you can execute curl.

@beliaev-maksim I think the fix in terms of juju status is correct—thanks! Wondering why microk8s kubectl get pods -n tutorial-model failed though. @fernape any thoughts?

It does not fail. The problem is that this command does not show the IP addresses of the pods. Adding the -o wide flag shows a different view of the get command, and then you see a list of IPs.

Anyway, juju is a better way to grab k8s data whenever possible.

When I try to deploy kapacitor on LXD on a jammy VM (which is the default), the container gets stuck and the process doesn’t finish. Likely due to incompatibility between xenial containers on jammy hosts, as discussed in the chat.

I had to launch a focal VM with multipass launch -n tutorial-vm -m 8g -c 2 -d 20G focal

Thanks, I’ve updated the text to what you had.

Following the instructions on this page, it’s not exactly clear that the LXD part is not compatible with vanilla Jammy. Would it be possible to add a warning / note specifying that?

On my side, I enabled CGroups V1 to work around this incompatibility.

Secondly, this seems to be a temporary issue and might be fixed by Influx, but the installation of the InfluxDB charm fails because it cannot verify the repository:

unit-influxdb-2: 19:28:40 WARNING unit.influxdb/2.install W: GPG error: https://repos.influxdata.com/ubuntu focal InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY D8FF8E1F7DF8B07E
unit-influxdb-2: 19:28:40 WARNING unit.influxdb/2.install E: The repository 'https://repos.influxdata.com/ubuntu focal InRelease' is not signed.

I checked the charm’s source and the exposed GPG key; they are the same (https://repos.influxdata.com/influxdb.key), therefore the problem does not seem to be on the charm’s side.

Thank you

@gboutry Sorry for the late reply – I was hoping I could reply when this was already done, but now it’s been a while, sorry.

To answer your question: I am aware there are some issues with the series required by the charms in the LXD track not being supported in Juju 3. I’m looking into replacing it with other charms.

Hi @tmihoc, thanks for the response.

I browsed through Charmhub, and jenkins with jenkins-agent looks like a good candidate.

You can integrate these charms, scale the jenkins-agent, and poll the REST API to confirm it’s working. There is a Focal version that should work on a Jammy LXD installation.

Thanks for the suggestion – I’ll test and update!

I can do it if that’s okay with you

Sure. However, two notes:

(1) I tried the combination you suggest on my local LXD cloud, but the jenkins-agent was always blocked. I didn’t have time to investigate.

(2) I was hoping to keep the LXD track of the tutorial parallel to the MicroK8s track, that is, to use an application that needs a database plus the database itself. What I was going to test next is Landscape + PostgreSQL or Keystone + MySQL. What do you think?

I successfully deployed a Jenkins instance (along with an agent) a few days ago, and I was even able to create and run jobs on the CI. But I agree that having an equivalence between the two charm types would simplify the knowledge transfer.

I will look into these two candidates and choose the one with the least friction.

Just tried keystone with postgresql but (1) keystone goes straight to blocked and (2) I get an error when I try to integrate.

From what I know, Keystone needs a MySQL DB to work. Moreover, the Keystone charm needs a subordinate charm to route to MySQL, which makes it harder to deploy in the scope of a getting-started guide.

This worked:

juju integrate mysql keystone:shared-db

(Should have tried MySQL from the start, of course, but I got confused for a sec.)

landscape-server failed – wrong series:

juju deploy landscape-server
ERROR the charm defined series "bionic, xenial, trusty" not supported

I submitted the edit using keystone.

For information: as part of the Sunbeam project, the OpenStack team is writing k8s charms to deploy the OpenStack control plane to k8s.

Therefore, keystone-k8s is available as a beta for the moment. It could be used in the future in the k8s deployment as well.

Removed self-referencing anchor links from all of the headings as they led nowhere, making navigating the page quite confusing.