Get started on Kubernetes

Note: This doc has been deprecated. Please see instead: Get started with Juju.


Kubernetes, also known as K8s, provides a flexible architecture for cloud-native applications at scale. We believe that Juju, with its Charmed Operator Lifecycle Manager (OLM), is the simplest way to manage multi-container workloads on K8s. This guide takes you through the steps necessary to connect the two systems.

You don’t need to have previous knowledge of Juju or Charmed Operators to follow this guide.

Create the test environment

The instructions in this tutorial are based on an Ubuntu system. This section details how to quickly create a test environment on MS Windows and macOS using a virtual machine deployed by Multipass.

This section is also useful for those who are already running Ubuntu but would like an isolated test environment that can be removed easily.

Info: Multipass is a tool for quickly running virtual machines from any host operating system. This will allow you to create a fully-isolated test environment that won’t impact your host system.

Install Multipass

You can find system specific information on how to install Multipass at multipass.run.

Create a virtual machine

To launch a virtual machine called, for example, “microcloud”, with 8 GB of RAM, 2 CPUs, and a 20 GB disk, execute:

multipass launch -n microcloud -m 8g -c 2 -d 20G

Multipass will confirm the creation:

Launched: microcloud
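
If you want to double-check the resources that were allocated to the machine, Multipass can report them:

multipass info microcloud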

Once Multipass has downloaded the latest Long Term Support (LTS) version of Ubuntu, you can open a command line inside the machine with the shell command:

multipass shell microcloud

This gives you access to the shell (you may see a different prompt, depending on the version installed):

ubuntu@microcloud:~$

You are now ready to follow the rest of this guide in your newly created Ubuntu machine. All commands should be typed in this shell.

If for whatever reason you need to interrupt this tutorial, we recommend running multipass stop microcloud to stop the instance. When you resume, run multipass start microcloud.

Install the Juju CLI client

We will now install the Juju client via snap: this is the easiest and fastest way to get started with Juju.

Other installation methods: you can find a comprehensive list of all the ways to install Juju at Juju | How to install `juju`.

The following command will install the Juju client:

sudo snap install juju --classic

If the installation was successful, you will see a message similar to the one below:

juju 2.8.10 from Canonical✓ installed
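
You can confirm the installed version at any time by running:

juju version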

Gain access to a Kubernetes cluster

Juju supports a variety of Kubernetes distributions running on your laptop, private infrastructure, or public cloud. In this tutorial we will use MicroK8s. To install MicroK8s, execute:

sudo snap install microk8s --classic --channel=1.24
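
MicroK8s can take a little while to start. To wait until the cluster reports itself as ready before proceeding, you can run:

microk8s status --wait-ready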

Register the cluster with Juju

You don’t need to register MicroK8s explicitly; Juju can retrieve the configuration file for the cluster automatically.

If for any reason that can’t be done automatically, you can point Juju to the cluster’s configuration file manually: copy the configuration file from the cluster’s master node to your local machine, save it as $HOME/.kube/config, and then run juju add-k8s <k8s-name>.
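
For illustration only, here is a sketch of that manual path for a MicroK8s cluster, where my-k8s is an arbitrary cloud name (depending on your Juju version, you may also need to pass --client):

mkdir -p ~/.kube
microk8s config > ~/.kube/config
juju add-k8s my-k8s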

However, there is one extra step before Juju can communicate with your MicroK8s cluster: you have to enable the MicroK8s hostpath-storage and dns addons, as shown below:

microk8s enable hostpath-storage dns
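
You can verify that the addons are active by running:

microk8s status

Both hostpath-storage and dns should now be listed in the enabled section of the output.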

Done. Now verify that Juju has really recognized this cluster by running:

juju clouds 

If everything has gone OK, this should output:

Cloud      Regions  Default    Type  Credentials  Source    Description
microk8s   1        localhost  k8s   0            built-in  A Kubernetes Cluster

As you can see, you now have a MicroK8s cluster which has been named, by default, microk8s.

Note the mixed terminology: “cluster” and “cloud”. There is a subtle difference between these concepts, since Juju calls any substrate it can deploy to a “cloud”. However, for all intents and purposes, you now have both a MicroK8s cluster and a microk8s cloud.

Bootstrap a Juju controller into the cluster

To give Juju control over your cluster, let’s bootstrap a Juju controller into it, calling it overlord.

juju bootstrap microk8s overlord

The command above assumes that the config file is located at $HOME/.kube/config (or has been loaded manually) and that you can communicate with your cluster via kubectl. To check that you have configured kubectl correctly, execute: microk8s kubectl get nodes
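
You can also confirm that the controller is up by listing the controllers known to your client:

juju controllers

The overlord controller should appear in the list, marked as the current controller.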

Add a model

The next step is to create a workspace on your cluster. We do this by creating a model, as shown below. We will name ours tutorial-model.

juju add-model tutorial-model

In the Kubernetes context, models are namespaces.
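
You can see this correspondence for yourself: the model you just created should show up both in Juju and as a namespace in the cluster.

juju models

microk8s kubectl get namespaces

The output of the second command should include a namespace corresponding to tutorial-model.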

Deploy applications

The fundamental purpose of Juju is to deploy and manage software applications in a way that is easy and repeatable. All this is done with the help of charmed operators, or ‘charms’, which are bits of code that contain all the necessary intelligence to do these things. Charmed operators can live online in the official charm repository, Charmhub, or on your local filesystem (previously downloaded from the store or written locally).

As an example, we will deploy Mattermost, an open-source, self-hostable online chat service, using PostgreSQL as its database. We will deploy both applications from Charmhub, which is the default source for Juju.

To deploy Mattermost, execute:

juju deploy mattermost-k8s

When deployed, this outputs:

Located charm "mattermost-k8s" in charm-store, revision 20

Deploying "mattermost-k8s" from charm-store charm "mattermost-k8s", revision 20 in channel stable

You can observe the deployment status with the following command:

watch -c juju status --format short --color

Then, to deploy PostgreSQL, run:

juju deploy postgresql-k8s

When deployed, this outputs:

Located charm "postgresql-k8s" in charm-store, revision 9

Deploying "postgresql" from charm-store charm "postgresql-k8s", revision 9 in channel stable

That’s it. Your applications are now deployed.

Relate applications

At this point, both applications are deployed in the model on the Kubernetes cluster, but they don’t know about each other. Yet it is easy to see that one could really use the other: Mattermost needs a database, and PostgreSQL can satisfy that need.

In Juju, integrating applications is a first-class operation. To connect Mattermost to PostgreSQL, simply execute:

juju relate mattermost-k8s postgresql-k8s:db

Ambiguous relations: if the charmed operators in question are versatile enough, Juju may need to be supplied with more information as to how they should be joined. In this example, we had to specify which postgresql endpoint we wanted to connect to: postgresql-k8s:db. That’s because postgresql-k8s has multiple endpoints: syslog, db, and db_admin.
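
If you are ever unsure which relations have been established, juju status can list them alongside the applications:

juju status --relations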

Again, watch the deployment status with the following command until both charmed operators’ status is active.

watch -c juju status --format short --color

Once the applications are related, this will return the following:

- mattermost/0: (agent:idle, workload:active) 8065/TCP

- postgresql/0: (agent:idle, workload:active) 5432/TCP

Test your deployment

To access the application locally, execute:

microk8s kubectl get pods -n tutorial-model

From the output, retrieve the name of the Mattermost pod.

Now use it to do port-forwarding to your local machine:

microk8s kubectl port-forward -n tutorial-model <Mattermost pod name> 8065

Now, open your browser and access the page http://localhost:8065. You should see the Mattermost home page.
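
Alternatively, you can check from a second shell that the service is answering. A minimal check could be:

curl -I http://localhost:8065

An HTTP response here confirms that Mattermost is up and the port-forward is working.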

Congratulations, you just deployed and integrated cloud-native applications in a Kubernetes cluster!

Destroy your test environment

Once you are done, you can run multipass delete --purge microcloud to delete your instance. You can also uninstall Multipass to remove any trace of this guide.
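
If you would rather keep the virtual machine and only remove the Juju pieces, something like the following should work, using the model and controller names from this guide:

juju destroy-model tutorial-model --destroy-storage
juju destroy-controller overlord --destroy-all-models --destroy-storage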

Next steps

This tutorial has introduced you to the basic things you can do with Juju. Visit the Juju How-to docs to experiment with further features such as scaling or cross-model relations; the Juju Reference docs to learn more about the primitive notions and commands that make Juju work; or the Juju Explanation docs, just for fun.

Looks like something was omitted here?

Should be:

We believe that Juju is the simplest way to manage multi-container workloads on K8s.

Thanks; fixed it.

This command results in ERROR missing k8s name. The help text indicates that the name is required. Perhaps this has changed since the docs were created?

Spelling mistake: Kuebernetes => Kubernetes

juju deploy mariadb-k8s fails: ERROR cannot resolve URL "cs:mariadb-k8s": charm or bundle not found. Has the charm’s name changed? Or should another example be provided in its place?

The documentation material refers to a mariadb-k8s charm published to ~charmed-osm, so you need to include that in your deploy, i.e.

juju deploy cs:~charmed-osm/mariadb-k8s

@wallyworld So to clarify, should the documentation be updated to reference cs:~charmed-osm/mariadb-k8s instead? Or did I miss a step somewhere to automatically include ~charmed-osm or similar?

It just looks like a step was missed, i.e. this command was run

juju deploy mariadb-k8s

instead of

juju deploy cs:~charmed-osm/mariadb-k8s

Please, can you help me? The applications did not relate. The return is:

postgresql-k8s/2* error idle 12 hook failed: “db-relation-changed”

Thank you

Hello, @penacleiton. I think that we might be able to help you.

If you run juju debug-log --replay from the CLI, do you see any tracebacks or errors that look like they might be related to the postgresql charm?

It looks like it might have failed while trying to set up a database for one of the related charms. The logs often contain errors that will tell you what went wrong in a useful amount of detail.
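
As a tip, you can narrow the replayed log to the failing unit, which usually makes the traceback easier to spot:

juju debug-log --replay --include postgresql-k8s/2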

I’m very, very grateful for the help. I looked at the debug log, but I don’t know what this means. Please, can you clarify, if possible?

lientrel.py", line 75, in master_service_ip
    svc = self.get_k8s_service(self.master_service_name)
  File "/var/lib/juju/agents/unit-postgresql-k8s-2/charm/src/clientrel.py", line 88, in get_k8s_service
    self.k8s_auth()
  File "/var/lib/juju/agents/unit-postgresql-k8s-2/charm/src/clientrel.py", line 62, in k8s_auth
    kubernetes.config.load_incluster_config()
  File "/var/lib/juju/agents/unit-postgresql-k8s-2/charm/venv/kubernetes/config/incluster_config.py", line 118, in load_incluster_config
    InClusterConfigLoader(
  File "/var/lib/juju/agents/unit-postgresql-k8s-2/charm/venv/kubernetes/config/incluster_config.py", line 54, in load_and_set
    self._load_config()
  File "/var/lib/juju/agents/unit-postgresql-k8s-2/charm/venv/kubernetes/config/incluster_config.py", line 62, in _load_config
    raise ConfigException("Service host/port is not set.")
kubernetes.config.config_exception.ConfigException: Service host/port is not set.
unit-postgresql-k8s-2: 22:34:32 ERROR juju.worker.uniter.operation hook "db-relation-changed" (via hook dispatching script: dispatch) failed: exit status 1
unit-postgresql-k8s-2: 22:34:32 INFO juju.worker.uniter awaiting error resolution for "relation-changed" hook
unit-mattermost-k8s-2: 22:34:37 INFO juju.worker.uniter.operation ran "update-status" hook (via hook dispatching script: dispatch)
unit-postgresql-k8s-2: 22:34:52 INFO juju.worker.uniter awaiting error resolution for "relation-changed" hook
unit-postgresql-k8s-2: 22:34:53 ERROR unit.postgresql-k8s/2.juju-log db:9: Uncaught exception while in charm code:
Traceback (most recent call last):
  File "./src/charm.py", line 235, in <module>
    ops.main.main(PostgreSQLCharm, use_juju_for_storage=True)
  File "/var/lib/juju/agents/unit-postgresql-k8s-2/charm/venv/ops/main.py", line 402, in main
    _emit_charm_event(charm, dispatcher.event_name)
  File "/var/lib/juju/agents/unit-postgresql-k8s-2/charm/venv/ops/main.py", line 140, in _emit_charm_event
    event_to_emit.emit(*args, **kwargs)
  File "/var/lib/juju/agents/unit-postgresql-k8s-2/charm/venv/ops/framework.py", line 278, in emit
    framework._emit(event)
  File "/var/lib/juju/agents/unit-postgresql-k8s-2/charm/venv/ops/framework.py", line 722, in _emit
    self._reemit(event_path)
  File "/var/lib/juju/agents/unit-postgresql-k8s-2/charm/venv/ops/framework.py", line 767, in _reemit
    custom_handler(event)
  File "/var/lib/juju/agents/unit-postgresql-k8s-2/charm/src/clientrel.py", line 105, in on_db_relation_changed
    master_ip = self.master_service_ip
  File "/var/lib/juju/agents/unit-postgresql-k8s-2/charm/src/clientrel.py", line 75, in master_service_ip
    svc = self.get_k8s_service(self.master_service_name)
  File "/var/lib/juju/agents/unit-postgresql-k8s-2/charm/src/clientrel.py", line 88, in get_k8s_service
    self.k8s_auth()
  File "/var/lib/juju/agents/unit-postgresql-k8s-2/charm/src/clientrel.py", line 62, in k8s_auth
    kubernetes.config.load_incluster_config()
  File "/var/lib/juju/agents/unit-postgresql-k8s-2/charm/venv/kubernetes/config/incluster_config.py", line 118, in load_incluster_config
    InClusterConfigLoader(
  File "/var/lib/juju/agents/unit-postgresql-k8s-2/charm/venv/kubernetes/config/incluster_config.py", line 54, in load_and_set
    self._load_config()
  File "/var/lib/juju/agents/unit-postgresql-k8s-2/charm/venv/kubernetes/config/incluster_config.py", line 62, in _load_config
    raise ConfigException("Service host/port is not set.")
kubernetes.config.config_exception.ConfigException: Service host/port is not set.
unit-postgresql-k8s-2: 22:34:53 ERROR juju.worker.uniter.operation hook "db-relation-changed" (via hook dispatching script: dispatch) failed: exit status 1
unit-postgresql-k8s-2: 22:34:53 INFO juju.worker.uniter awaiting error resolution for "relation-changed" hook
unit-postgresql-k8s-2: 22:34:55 INFO juju.worker.uniter awaiting error resolution for "relation-changed" hook
unit-mattermost-k8s-2: 22:34:56 INFO juju.worker.uniter.operation ran "db-relation-departed" hook (via hook dispatching script: dispatch)
unit-mattermost-k8s-2: 22:34:56 INFO unit.mattermost-k8s/2.juju-log db:9: emitting database_relation_broken event for relation 9
unit-mattermost-k8s-2: 22:34:56 INFO unit.mattermost-k8s/2.juju-log db:9: cleaning up broken relation 9
unit-mattermost-k8s-2: 22:34:56 INFO juju.worker.uniter.operation ran "db-relation-broken" hook (via hook dispatching script: dispatch)
unit-postgresql-k8s-2: 22:34:56 INFO juju.worker.uniter awaiting error resolution for "relation-changed" hook
unit-postgresql-k8s-2: 22:35:25 INFO juju.worker.uniter awaiting error resolution for "relation-changed" hook
unit-postgresql-k8s-2: 22:35:26 INFO juju.worker.uniter.operation ran "db-relation-departed" hook (via hook dispatching script: dispatch)
unit-postgresql-k8s-2: 22:35:27 INFO juju.worker.uniter.operation ran "db-relation-broken" hook (via hook dispatching script: dispatch)
machine-12: 22:36:00 INFO juju.worker.deployer checking unit "postgresql-k8s/2"
machine-11: 22:36:00 INFO juju.worker.deployer checking unit "mattermost-k8s/2"
unit-postgresql-k8s-2: 22:36:00 WARNING juju.worker.uniter.operation we should run a leader-deposed hook here, but we can't yet
unit-mattermost-k8s-2: 22:36:00 WARNING juju.worker.uniter.operation we should run a leader-deposed hook here, but we can't yet
unit-mattermost-k8s-2: 22:36:01 INFO juju.worker.uniter.operation ran "leader-settings-changed" hook (via hook dispatching script: dispatch)
unit-postgresql-k8s-2: 22:36:01 INFO juju.worker.uniter.operation ran "leader-settings-changed" hook (via hook dispatching script: dispatch)
unit-mattermost-k8s-2: 22:36:02 INFO juju.worker.uniter.operation ran "stop" hook (via hook dispatching script: dispatch)
unit-postgresql-k8s-2: 22:36:02 INFO juju.worker.uniter.operation ran "pgdata-storage-detaching" hook (via hook dispatching script: dispatch)
unit-mattermost-k8s-2: 22:36:03 INFO juju.worker.uniter.operation ran "remove" hook (via hook dispatching script: dispatch)
machine-11: 22:36:03 INFO juju.worker.deployer checking unit "mattermost-k8s/2"
machine-11: 22:36:03 INFO juju.worker.deployer recalling unit "mattermost-k8s/2"
machine-11: 22:36:03 INFO juju.worker.deployer removing unit "mattermost-k8s/2"
unit-postgresql-k8s-2: 22:36:03 INFO juju.worker.uniter.operation ran "logs-storage-detaching" hook (via hook dispatching script: dispatch)
controller-0: 22:36:04 INFO juju.worker.provisioner stopping known instances [i-05f2ecf03c83f5e2d]
controller-0: 22:36:04 INFO juju.worker.provisioner removing dead machine "11"
unit-postgresql-k8s-2: 22:36:04 INFO juju.worker.uniter.operation ran "stop" hook (via hook dispatching script: dispatch)
unit-postgresql-k8s-2: 22:36:05 INFO juju.worker.uniter.operation ran "remove" hook (via hook dispatching script: dispatch)
unit-postgresql-k8s-2: 22:36:06 INFO juju.worker.uniter unit "postgresql-k8s/2" shutting down: agent should be terminated
machine-12: 22:36:06 INFO juju.worker.deployer unit workers for "postgresql-k8s/2" not running
machine-12: 22:36:06 INFO juju.worker.deployer unit workers for "postgresql-k8s/2" not running
machine-12: 22:36:06 INFO juju.worker.deployer unit workers for "postgresql-k8s/2" not running
machine-12: 22:36:06 INFO juju.worker.deployer checking unit "postgresql-k8s/2"
machine-12: 22:36:06 INFO juju.worker.deployer recalling unit "postgresql-k8s/2"
machine-12: 22:36:06 INFO juju.worker.deployer unit workers for "postgresql-k8s/2" not running
machine-12: 22:36:06 INFO juju.worker.deployer removing unit "postgresql-k8s/2"
controller-0: 22:36:06 INFO juju.worker.provisioner stopping known instances [i-0c313a27ed27393d1]
controller-0: 22:36:07 INFO juju.worker.provisioner removing dead machine "12"

The core part of the error is here:

kubernetes.config.config_exception.ConfigException: Service host/port is not set

(That’s part of a Traceback block, though it looks like the beginning of the block is cut off.)

That error might be a bug in the charm. It looks like it is trying to process information about the connection before it has an actual route.

The error might also be due to a network routing issue, which could probably be better reported and surfaced by the charm.

The good news, if it’s a charm bug, is that you might be able to fix it by running juju resolved postgresql-k8s/2. That will re-run the relation-changed hook, hopefully with complete networking information.

Let me know if that helps. (If not, I can dig into it further.)

Hello, Pengale. Thank you for your help. But “juju resolved postgresql-k8s” didn’t solve the issue. The error remains, as in the new printout below. And after running “juju deploy postgresql-k8s”, I lost access to the dashboard and Portainer. I’d be very thankful if you can still help me. Thanks.

kubernetes.config.config_exception.ConfigException: Service host/port is not set.
unit-postgresql-k8s-0: 22:41:30 ERROR juju.worker.uniter.operation hook "db-relation-changed" (via hook dispatching script: dispatch) failed: exit status 1
unit-postgresql-k8s-0: 22:41:30 INFO juju.worker.uniter awaiting error resolution for "relation-changed" hook
unit-microk8s-2: 22:41:56 INFO juju.worker.uniter.operation ran "update-status" hook (via hook dispatching script: dispatch)
unit-microk8s-1: 22:42:49 INFO juju.worker.uniter.operation ran "update-status" hook (via hook dispatching script: dispatch)
unit-postgresql-k8s-0: 22:42:52 INFO juju.worker.uniter awaiting error resolution for "relation-changed" hook
unit-postgresql-k8s-0: 22:42:53 ERROR unit.postgresql-k8s/0.juju-log db:2: Uncaught exception while in charm code:
Traceback (most recent call last):
  File "./src/charm.py", line 235, in <module>
    ops.main.main(PostgreSQLCharm, use_juju_for_storage=True)
  File "/var/lib/juju/agents/unit-postgresql-k8s-0/charm/venv/ops/main.py", line 402, in main
    _emit_charm_event(charm, dispatcher.event_name)
  File "/var/lib/juju/agents/unit-postgresql-k8s-0/charm/venv/ops/main.py", line 140, in _emit_charm_event
    event_to_emit.emit(*args, **kwargs)
  File "/var/lib/juju/agents/unit-postgresql-k8s-0/charm/venv/ops/framework.py", line 278, in emit
    framework._emit(event)
  File "/var/lib/juju/agents/unit-postgresql-k8s-0/charm/venv/ops/framework.py", line 722, in _emit
    self._reemit(event_path)
  File "/var/lib/juju/agents/unit-postgresql-k8s-0/charm/venv/ops/framework.py", line 767, in _reemit
    custom_handler(event)
  File "/var/lib/juju/agents/unit-postgresql-k8s-0/charm/src/clientrel.py", line 105, in on_db_relation_changed
    master_ip = self.master_service_ip
  File "/var/lib/juju/agents/unit-postgresql-k8s-0/charm/src/clientrel.py", line 75, in master_service_ip
    svc = self.get_k8s_service(self.master_service_name)
  File "/var/lib/juju/agents/unit-postgresql-k8s-0/charm/src/clientrel.py", line 88, in get_k8s_service
    self.k8s_auth()
  File "/var/lib/juju/agents/unit-postgresql-k8s-0/charm/src/clientrel.py", line 62, in k8s_auth
    kubernetes.config.load_incluster_config()
  File "/var/lib/juju/agents/unit-postgresql-k8s-0/charm/venv/kubernetes/config/incluster_config.py", line 118, in load_incluster_config
    InClusterConfigLoader(
  File "/var/lib/juju/agents/unit-postgresql-k8s-0/charm/venv/kubernetes/config/incluster_config.py", line 54, in load_and_set
    self._load_config()
  File "/var/lib/juju/agents/unit-postgresql-k8s-0/charm/venv/kubernetes/config/incluster_config.py", line 62, in _load_config
    raise ConfigException("Service host/port is not set.")
kubernetes.config.config_exception.ConfigException: Service host/port is not set.
unit-postgresql-k8s-0: 22:42:53 ERROR juju.worker.uniter.operation hook "db-relation-changed" (via hook dispatching script: dispatch) failed: exit status 1
unit-postgresql-k8s-0: 22:42:53 INFO juju.worker.uniter awaiting error resolution for "relation-changed" hook
unit-mattermost-k8s-0: 22:43:06 INFO juju.worker.uniter.operation ran "update-status" hook (via hook dispatching script: dispatch)

Please, can I restart the network without disabling metallb, dns, and the other add-ons? Thank you.

@penacleiton hmmm. We’re moving out of my areas of expertise here.

@neppel do you have any advice on further troubleshooting?

Ok, I’m sorry. As you can see, I’m new here. Do you know who could help me? Thank you anyway.

Hi @penacleiton! It’s good to know about your interest in Juju on k8s :smiley:

I would like to check whether you might be deploying the workloads to LXD instead of MicroK8s.

I saw the two lines below in your logs, and they made me think that the workloads are probably being deployed in an LXD model (because of the text machine at the beginning of each line, which looks like a deployment on VMs):

machine-12: 22:36:00 INFO juju.worker.deployer checking unit "postgresql-k8s/2"
machine-11: 22:36:00 INFO juju.worker.deployer checking unit "mattermost-k8s/2"

I also saw this error happen here when I deployed the charms to an LXD model and related them.
