Get started on Kubernetes

Kubernetes – also known as K8s – provides a flexible architecture for cloud-native applications at scale. The Juju Charmed Operator Lifecycle Manager (OLM) manages multi-container workloads on K8s. This guide takes you through the steps necessary to connect the two systems.

You don’t need to have previous knowledge of Juju or Charmed Operators to follow this guide.

Prerequisites

Install the Juju client

Juju is a single binary executable that is published and supported on multiple operating systems and CPU architectures.

OS        Method
Linux     snap install --classic juju
macOS     brew install juju
Windows   Download the signed Juju installer (md5, signature)

Alternative installations: You can find instructions on installing development releases or building from the source in the Juju installation page.
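Whichever method you used, a quick sanity check confirms the client is on your PATH. This snippet is safe to run anywhere; it prints a placeholder when the client is missing:

```shell
# Sanity check: print the client version if juju is on the PATH.
if command -v juju >/dev/null 2>&1; then
  JUJU_CLIENT="$(juju version)"
else
  JUJU_CLIENT="not installed"
fi
echo "juju client: $JUJU_CLIENT"
```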

Installing Kubectl

Kubectl is a command-line tool that allows you to run commands against Kubernetes clusters. If you have access to a cluster, you might already have it installed. If not, you can follow these instructions to download and install the binary.
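kubectl can report its own version without contacting any cluster, which makes it a convenient install check:

```shell
# "kubectl version --client" works without any cluster connection.
if command -v kubectl >/dev/null 2>&1; then
  KUBECTL_CLIENT="$(kubectl version --client 2>/dev/null | head -n 1)"
else
  KUBECTL_CLIENT="not installed"
fi
echo "kubectl client: $KUBECTL_CLIENT"
```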

Gaining access to a Kubernetes cluster

Juju supports a variety of Kubernetes distributions running on your laptop, private infrastructure, or public cloud. You can connect Juju to your existing K8s cluster or get one based on the recommendations below. If you are new to Kubernetes, we recommend following this guide with MicroK8s.

Use Case                                             Recommended Action
Local development, testing and experimentation       Install MicroK8s
Multi-node testing/production on a private cloud     Install Charmed Kubernetes
Multi-node testing/production on the public cloud    Install Charmed Kubernetes with the relevant integrator charm
Use a hosted Kubernetes distribution                 Enable the service via the provider
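For the MicroK8s route on Linux, the install typically boils down to a couple of snap commands. The sketch below wraps them in a helper function so nothing runs until you call it; it assumes snapd is present and you have sudo rights:

```shell
# Helper: install MicroK8s and wait for the node to come up.
# Call install_microk8s yourself when you are ready; it needs sudo.
install_microk8s() {
  sudo snap install microk8s --classic
  sudo microk8s status --wait-ready
  # Optionally let your user run microk8s without sudo (re-login afterwards):
  sudo usermod -a -G microk8s "$USER"
}
```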

We also have guides for creating clusters on the following Kubernetes distributions:

If your distribution is not listed above, you are still able to use Juju on any cluster to which you have sufficient access privileges, recorded in a kubeconfig entry.

Connecting Juju to your Kubernetes cluster

Register the cluster with Juju

Juju will look for the kube configuration files to find the cloud definition.

You can see what clouds Juju has found by running: juju clouds

If for any reason that can’t be done automatically, you can manually point Juju to the cluster’s configuration file.

Manually exposing the cluster's configuration file

Copy the cluster’s configuration file from the master node to your local machine and save it as $HOME/.kube/config, then run

juju add-k8s <k8s-name>
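As an alternative to copying the file into $HOME/.kube/config, juju add-k8s honours the standard KUBECONFIG environment variable. A sketch, with a hypothetical file path and cluster name — defined as a helper so you can adapt it before running:

```shell
# Helper: register a cluster from a non-default kubeconfig location.
register_cluster() {
  export KUBECONFIG="$HOME/clusters/staging/kubeconfig"  # hypothetical path
  juju add-k8s my-staging-cluster                        # name is your choice
}
```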

If Juju can see your cluster (juju clouds), move on to Create a Juju controller.

Some clouds require extra steps to configure the cluster to work with Juju:

a. When running MicroK8s

The cluster is registered with Juju automatically, but we have to enable the storage and dns addons.

microk8s enable storage dns

Then move on to Create a Juju controller.

b. When you’re already able to interact with your cluster via kubectl

Juju will automatically look in the standard kube configuration file.

Then move on to Create a Juju controller.

c. When you have used Juju to deploy Charmed Kubernetes
mkdir -p ~/.kube
juju scp kubernetes-master/0:/home/ubuntu/config ~/.kube/config
juju add-k8s <k8s-name>

Then move on to Create a Juju controller.

Create a Juju controller

The Juju controller is a central software agent that oversees applications managed with Juju. It is created via the juju bootstrap command.

juju bootstrap <cloud-name> <controller-name>

Here, <cloud-name> is microk8s or the <k8s-name> you registered earlier with juju add-k8s, and <controller-name> is any name you choose for the controller.

Config file: The command above assumes that the config file is located at $HOME/.kube/config (or that the cluster has been registered manually) and that you can communicate with your cluster via kubectl. To check that kubectl is configured correctly, run: kubectl get nodes
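Putting the MicroK8s path together (the controller name demo-controller is arbitrary), wrapped as a helper so you can run it once kubectl get nodes succeeds:

```shell
# Helper: bootstrap a controller on the microk8s cloud and confirm it.
bootstrap_demo() {
  kubectl get nodes                        # confirm kubectl can reach the cluster
  juju bootstrap microk8s demo-controller
  juju controllers                         # the new controller should be listed
}
```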

Then move on to Add a model.

Deploy workloads

Add a model

Before deploying applications with charmed operators, Juju users create a “model”. In the Kubernetes context, models are namespaces.

A model is a canvas on a particular cloud/k8s-cluster. The model is used to group applications that are being operated together for a common purpose on a common substrate. The model will capture the applications, their integration, configuration, and resource allocation.

Since each model is on a single substrate, and the service as a whole may span multiple clouds/k8s-clusters, it may require several models to provide the canvases for all the different applications in the service.

The model is a workspace for inter-related applications. It is an abstraction over applications, machines hosting them and other components such as persistent storage.

To add a model, use the juju add-model command:

juju add-model <model-name>

Inside the cluster, adding a Juju model creates a Kubernetes namespace with the same name. The namespace hosts all of the pods and other resources, except global resources.
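You can see the model-to-namespace mapping for yourself; the model name chat here is just an example:

```shell
# Helper: create a model and confirm the matching namespace appears.
add_model_demo() {
  juju add-model chat
  kubectl get namespace chat   # the namespace carries the model's name
}
```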

Deploy workloads

The fundamental purpose of Juju is to deploy and manage software applications in a way that is easy and repeatable. All of this is done with the help of charmed operators, which are bits of code that contain all the necessary intelligence to do these things. Charmed operators can exist online (in the Charm Store) or on your local filesystem (previously downloaded from the store or written locally).

As an example, we will deploy Mattermost, an open-source, self-hostable online chat service using PostgreSQL as its database.

Typically, applications are deployed using the online charmed operators. This ensures that you get the latest version of the charm. Deploying in this way is straightforward:

Deploy Mattermost:

juju deploy mattermost-k8s

When deployed, this outputs:

Located charm "mattermost-k8s" in charm-store, revision 20

Deploying "mattermost-k8s" from charm-store charm "mattermost-k8s", revision 20 in channel stable

You can observe the deployment status with the following command:

watch -c juju status --format short --color

Then deploy the PostgreSQL Charm:

juju deploy postgresql-k8s

When deployed, this outputs:

Located charm "postgresql-k8s" in charm-store, revision 9

Deploying "postgresql" from charm-store charm "postgresql-k8s", revision 9 in channel stable

At this point, both applications are deployed in the model located in the Kubernetes cluster, but they don’t know about each other. Next, we will relate the applications.

Relate Applications

Relations

Most applications rely on other applications to function correctly. For example, web apps typically require a database to connect to. Relations avoid the need for manual intervention when a charm’s environment changes: the charm is notified of changes and will re-configure and restart the application automatically.

Relations are a Juju abstraction that enables applications to inter-operate. They are a communication channel between charmed operators.

A certain charm knows that it requires, say, a database and, correspondingly, a database charm knows that it is capable of satisfying another charm’s requirements. The act of joining such mutually-dependent charmed operators causes code (hooks) to run in each charm in such a way that both charmed operators can effectively talk to one another. When charmed operators have joined logically in this manner they are said to have formed a relation.

Create a relation

Creating a relation is straightforward. The relate command (an alias for add-relation) is used to set up a relation between two applications:

juju relate mattermost-k8s postgresql-k8s:db

This will satisfy Mattermost’s database requirement where PostgreSQL provides the appropriate structures (e.g. tables) needed for Mattermost to run properly.

Ambiguous relations: If the charmed operators in question are versatile enough, Juju may need to be supplied with more information as to how they should be joined. In this example, we had to specify which postgresql-k8s endpoint we wanted to connect to: postgresql-k8s:db. That’s because postgresql-k8s has multiple endpoints: syslog, db and db_admin.
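If you hit an ambiguity error, naming the providing endpoint explicitly resolves it, and juju status --relations shows which endpoints are currently joined. A sketch, again as a helper:

```shell
# Helper: relate via an explicit endpoint, then inspect the result.
relate_demo() {
  juju relate mattermost-k8s postgresql-k8s:db   # endpoint named explicitly
  juju status --relations                        # shows provider/requirer pairs
}
```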

Again, watch the deployment status with the following command until both charmed operators’ status is active.

watch -c juju status --format short --color

Which will return the following when the applications are related:

- mattermost/0: (agent:idle, workload:active) 8065/TCP

- postgresql/0: (agent:idle, workload:active) 5432/TCP

Your deployment is dynamic: if you need to scale, deploy other applications, or move things around, Juju will adapt and keep the relations active without any special configuration.

Access the application

To access the application locally, get the Pod name:

export APP_NAME=$(kubectl get pod -l app.kubernetes.io/name=mattermost -n <model-name> -o name)

And create a port-forward:

kubectl port-forward -n <model-name> $APP_NAME 8065

Open your browser and access the page http://localhost:8065. You should see the Mattermost home page.
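The two steps above can be scripted. This sketch assumes the model is named chat and that curl is available; it cleans up the port-forward when it is done:

```shell
# Helper: port-forward to the Mattermost pod and probe it over HTTP.
check_mattermost() {
  APP_NAME=$(kubectl get pod -l app.kubernetes.io/name=mattermost -n chat -o name)
  kubectl port-forward -n chat "$APP_NAME" 8065 &
  PF_PID=$!
  sleep 2                                  # give the tunnel a moment to open
  curl -fsS http://localhost:8065 >/dev/null && echo "Mattermost is reachable"
  kill "$PF_PID"
}
# Call check_mattermost once juju status shows both workloads active.
```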

Congratulations, you just deployed and integrated cloud-native applications in a Kubernetes cluster!

Next steps

This guide presents the core of Juju’s functionality. Most applications follow a similar workflow: bootstrap a controller in your cluster, add a model, then deploy and relate applications. If you would like to see examples of what you can do with Juju, our tutorials page has application-specific guides. You can also try Juju on your own cloud or on localhost.

Looks like something was omitted here?

Should be:

We believe that Juju is the simplest way to manage multi-container workloads on K8s.

Thanks; fixed it.

This command results in ERROR missing k8s name. The help text indicates that the name is required. Perhaps this has changed since the docs were created?

Spelling mistake: Kuebernetes => Kubernetes

juju deploy mariadb-k8s fails: ERROR cannot resolve URL "cs:mariadb-k8s": charm or bundle not found. Has the charm’s name changed? Or should another example be provided in its place?

The documentation material refers to a mariadb-k8s charm published to ~charmed-osm, so you need to include that in your deploy, i.e.

juju deploy cs:~charmed-osm/mariadb-k8s

@wallyworld So to clarify, should the documentation be updated to reference cs:~charmed-osm/mariadb-k8s instead? Or did I miss a step somewhere to automatically include ~charmed-osm or similar?

It just looks like a step was missed, i.e. this command was run

juju deploy mariadb-k8s

instead of

juju deploy cs:~charmed-osm/mariadb-k8s

Please, can you help me? The applications did not relate. The return is:

postgresql-k8s/2* error idle 12 hook failed: “db-relation-changed”

Thank you

Hello, @penacleiton. I think that we might be able to help you.

If you run juju debug-log --replay from the CLI, do you see any tracebacks or errors that look like they might be related to the postgresql charm?

It looks like it might have failed while trying to set up a database for one of the related charms. The logs often contain errors that will tell you what went wrong in a useful amount of detail.

I’m very, very grateful for the help. I saw the debug-log, but I don’t know what it means. Please, can you clarify, if possible?

lientrel.py", line 75, in master_service_ip
    svc = self.get_k8s_service(self.master_service_name)
  File "/var/lib/juju/agents/unit-postgresql-k8s-2/charm/src/clientrel.py", line 88, in get_k8s_service
    self.k8s_auth()
  File "/var/lib/juju/agents/unit-postgresql-k8s-2/charm/src/clientrel.py", line 62, in k8s_auth
    kubernetes.config.load_incluster_config()
  File "/var/lib/juju/agents/unit-postgresql-k8s-2/charm/venv/kubernetes/config/incluster_config.py", line 118, in load_incluster_config
    InClusterConfigLoader(
  File "/var/lib/juju/agents/unit-postgresql-k8s-2/charm/venv/kubernetes/config/incluster_config.py", line 54, in load_and_set
    self._load_config()
  File "/var/lib/juju/agents/unit-postgresql-k8s-2/charm/venv/kubernetes/config/incluster_config.py", line 62, in _load_config
    raise ConfigException("Service host/port is not set.")
kubernetes.config.config_exception.ConfigException: Service host/port is not set.
unit-postgresql-k8s-2: 22:34:32 ERROR juju.worker.uniter.operation hook "db-relation-changed" (via hook dispatching script: dispatch) failed: exit status 1
unit-postgresql-k8s-2: 22:34:32 INFO juju.worker.uniter awaiting error resolution for "relation-changed" hook
unit-mattermost-k8s-2: 22:34:37 INFO juju.worker.uniter.operation ran "update-status" hook (via hook dispatching script: dispatch)
unit-postgresql-k8s-2: 22:34:52 INFO juju.worker.uniter awaiting error resolution for "relation-changed" hook
unit-postgresql-k8s-2: 22:34:53 ERROR unit.postgresql-k8s/2.juju-log db:9: Uncaught exception while in charm code:
Traceback (most recent call last):
  File "./src/charm.py", line 235, in <module>
    ops.main.main(PostgreSQLCharm, use_juju_for_storage=True)
  File "/var/lib/juju/agents/unit-postgresql-k8s-2/charm/venv/ops/main.py", line 402, in main
    _emit_charm_event(charm, dispatcher.event_name)
  File "/var/lib/juju/agents/unit-postgresql-k8s-2/charm/venv/ops/main.py", line 140, in _emit_charm_event
    event_to_emit.emit(*args, **kwargs)
  File "/var/lib/juju/agents/unit-postgresql-k8s-2/charm/venv/ops/framework.py", line 278, in emit
    framework._emit(event)
  File "/var/lib/juju/agents/unit-postgresql-k8s-2/charm/venv/ops/framework.py", line 722, in _emit
    self._reemit(event_path)
  File "/var/lib/juju/agents/unit-postgresql-k8s-2/charm/venv/ops/framework.py", line 767, in _reemit
    custom_handler(event)
  File "/var/lib/juju/agents/unit-postgresql-k8s-2/charm/src/clientrel.py", line 105, in on_db_relation_changed
    master_ip = self.master_service_ip
  File "/var/lib/juju/agents/unit-postgresql-k8s-2/charm/src/clientrel.py", line 75, in master_service_ip
    svc = self.get_k8s_service(self.master_service_name)
  File "/var/lib/juju/agents/unit-postgresql-k8s-2/charm/src/clientrel.py", line 88, in get_k8s_service
    self.k8s_auth()
  File "/var/lib/juju/agents/unit-postgresql-k8s-2/charm/src/clientrel.py", line 62, in k8s_auth
    kubernetes.config.load_incluster_config()
  File "/var/lib/juju/agents/unit-postgresql-k8s-2/charm/venv/kubernetes/config/incluster_config.py", line 118, in load_incluster_config
    InClusterConfigLoader(
  File "/var/lib/juju/agents/unit-postgresql-k8s-2/charm/venv/kubernetes/config/incluster_config.py", line 54, in load_and_set
    self._load_config()
  File "/var/lib/juju/agents/unit-postgresql-k8s-2/charm/venv/kubernetes/config/incluster_config.py", line 62, in _load_config
    raise ConfigException("Service host/port is not set.")
kubernetes.config.config_exception.ConfigException: Service host/port is not set.
unit-postgresql-k8s-2: 22:34:53 ERROR juju.worker.uniter.operation hook "db-relation-changed" (via hook dispatching script: dispatch) failed: exit status 1
unit-postgresql-k8s-2: 22:34:53 INFO juju.worker.uniter awaiting error resolution for "relation-changed" hook
unit-postgresql-k8s-2: 22:34:55 INFO juju.worker.uniter awaiting error resolution for "relation-changed" hook
unit-mattermost-k8s-2: 22:34:56 INFO juju.worker.uniter.operation ran "db-relation-departed" hook (via hook dispatching script: dispatch)
unit-mattermost-k8s-2: 22:34:56 INFO unit.mattermost-k8s/2.juju-log db:9: emitting database_relation_broken event for relation 9
unit-mattermost-k8s-2: 22:34:56 INFO unit.mattermost-k8s/2.juju-log db:9: cleaning up broken relation 9
unit-mattermost-k8s-2: 22:34:56 INFO juju.worker.uniter.operation ran "db-relation-broken" hook (via hook dispatching script: dispatch)
unit-postgresql-k8s-2: 22:34:56 INFO juju.worker.uniter awaiting error resolution for "relation-changed" hook
unit-postgresql-k8s-2: 22:35:25 INFO juju.worker.uniter awaiting error resolution for "relation-changed" hook
unit-postgresql-k8s-2: 22:35:26 INFO juju.worker.uniter.operation ran "db-relation-departed" hook (via hook dispatching script: dispatch)
unit-postgresql-k8s-2: 22:35:27 INFO juju.worker.uniter.operation ran "db-relation-broken" hook (via hook dispatching script: dispatch)
machine-12: 22:36:00 INFO juju.worker.deployer checking unit "postgresql-k8s/2"
machine-11: 22:36:00 INFO juju.worker.deployer checking unit "mattermost-k8s/2"
unit-postgresql-k8s-2: 22:36:00 WARNING juju.worker.uniter.operation we should run a leader-deposed hook here, but we can't yet
unit-mattermost-k8s-2: 22:36:00 WARNING juju.worker.uniter.operation we should run a leader-deposed hook here, but we can't yet
unit-mattermost-k8s-2: 22:36:01 INFO juju.worker.uniter.operation ran "leader-settings-changed" hook (via hook dispatching script: dispatch)
unit-postgresql-k8s-2: 22:36:01 INFO juju.worker.uniter.operation ran "leader-settings-changed" hook (via hook dispatching script: dispatch)
unit-mattermost-k8s-2: 22:36:02 INFO juju.worker.uniter.operation ran "stop" hook (via hook dispatching script: dispatch)
unit-postgresql-k8s-2: 22:36:02 INFO juju.worker.uniter.operation ran "pgdata-storage-detaching" hook (via hook dispatching script: dispatch)
unit-mattermost-k8s-2: 22:36:03 INFO juju.worker.uniter.operation ran "remove" hook (via hook dispatching script: dispatch)
machine-11: 22:36:03 INFO juju.worker.deployer checking unit "mattermost-k8s/2"
machine-11: 22:36:03 INFO juju.worker.deployer recalling unit "mattermost-k8s/2"
machine-11: 22:36:03 INFO juju.worker.deployer removing unit "mattermost-k8s/2"
unit-postgresql-k8s-2: 22:36:03 INFO juju.worker.uniter.operation ran "logs-storage-detaching" hook (via hook dispatching script: dispatch)
controller-0: 22:36:04 INFO juju.worker.provisioner stopping known instances [i-05f2ecf03c83f5e2d]
controller-0: 22:36:04 INFO juju.worker.provisioner removing dead machine "11"
unit-postgresql-k8s-2: 22:36:04 INFO juju.worker.uniter.operation ran "stop" hook (via hook dispatching script: dispatch)
unit-postgresql-k8s-2: 22:36:05 INFO juju.worker.uniter.operation ran "remove" hook (via hook dispatching script: dispatch)
unit-postgresql-k8s-2: 22:36:06 INFO juju.worker.uniter unit "postgresql-k8s/2" shutting down: agent should be terminated
machine-12: 22:36:06 INFO juju.worker.deployer unit workers for "postgresql-k8s/2" not running
machine-12: 22:36:06 INFO juju.worker.deployer unit workers for "postgresql-k8s/2" not running
machine-12: 22:36:06 INFO juju.worker.deployer unit workers for "postgresql-k8s/2" not running
machine-12: 22:36:06 INFO juju.worker.deployer checking unit "postgresql-k8s/2"
machine-12: 22:36:06 INFO juju.worker.deployer recalling unit "postgresql-k8s/2"
machine-12: 22:36:06 INFO juju.worker.deployer unit workers for "postgresql-k8s/2" not running
machine-12: 22:36:06 INFO juju.worker.deployer removing unit "postgresql-k8s/2"
controller-0: 22:36:06 INFO juju.worker.provisioner stopping known instances [i-0c313a27ed27393d1]
controller-0: 22:36:07 INFO juju.worker.provisioner removing dead machine "12"

The core part of the error is here:

kubernetes.config.config_exception.ConfigException: Service host/port is not set

(That’s part of a Traceback block, though it looks like the beginning of the block is cut off.)

That error might be a bug in the charm. It looks like it is trying to process information about the connection before it has an actual route.

The error might also be due to a network routing issue, which could probably be better reported and surfaced by the charm.

The good news, if it’s a charm bug, is that you might be able to fix it by running juju resolved postgresql-k8s/2. That will re-run the relation-changed hook, hopefully with complete networking information.

Let me know if that helps. (If not, I can dig into it further.)

Hello, Pengale. Thank you for your help, but juju resolved postgresql-k8s didn’t solve the issue. The error remains, as in the new log below. Also, after running juju deploy postgresql-k8s, I lost access to the dashboard and Portainer. I’d be very thankful if you can still help me. Thanks.

kubernetes.config.config_exception.ConfigException: Service host/port is not set.
unit-postgresql-k8s-0: 22:41:30 ERROR juju.worker.uniter.operation hook "db-relation-changed" (via hook dispatching script: dispatch) failed: exit status 1
unit-postgresql-k8s-0: 22:41:30 INFO juju.worker.uniter awaiting error resolution for "relation-changed" hook
unit-microk8s-2: 22:41:56 INFO juju.worker.uniter.operation ran "update-status" hook (via hook dispatching script: dispatch)
unit-microk8s-1: 22:42:49 INFO juju.worker.uniter.operation ran "update-status" hook (via hook dispatching script: dispatch)
unit-postgresql-k8s-0: 22:42:52 INFO juju.worker.uniter awaiting error resolution for "relation-changed" hook
unit-postgresql-k8s-0: 22:42:53 ERROR unit.postgresql-k8s/0.juju-log db:2: Uncaught exception while in charm code:
Traceback (most recent call last):
  File "./src/charm.py", line 235, in <module>
    ops.main.main(PostgreSQLCharm, use_juju_for_storage=True)
  File "/var/lib/juju/agents/unit-postgresql-k8s-0/charm/venv/ops/main.py", line 402, in main
    _emit_charm_event(charm, dispatcher.event_name)
  File "/var/lib/juju/agents/unit-postgresql-k8s-0/charm/venv/ops/main.py", line 140, in _emit_charm_event
    event_to_emit.emit(*args, **kwargs)
  File "/var/lib/juju/agents/unit-postgresql-k8s-0/charm/venv/ops/framework.py", line 278, in emit
    framework._emit(event)
  File "/var/lib/juju/agents/unit-postgresql-k8s-0/charm/venv/ops/framework.py", line 722, in _emit
    self._reemit(event_path)
  File "/var/lib/juju/agents/unit-postgresql-k8s-0/charm/venv/ops/framework.py", line 767, in _reemit
    custom_handler(event)
  File "/var/lib/juju/agents/unit-postgresql-k8s-0/charm/src/clientrel.py", line 105, in on_db_relation_changed
    master_ip = self.master_service_ip
  File "/var/lib/juju/agents/unit-postgresql-k8s-0/charm/src/clientrel.py", line 75, in master_service_ip
    svc = self.get_k8s_service(self.master_service_name)
  File "/var/lib/juju/agents/unit-postgresql-k8s-0/charm/src/clientrel.py", line 88, in get_k8s_service
    self.k8s_auth()
  File "/var/lib/juju/agents/unit-postgresql-k8s-0/charm/src/clientrel.py", line 62, in k8s_auth
    kubernetes.config.load_incluster_config()
  File "/var/lib/juju/agents/unit-postgresql-k8s-0/charm/venv/kubernetes/config/incluster_config.py", line 118, in load_incluster_config
    InClusterConfigLoader(
  File "/var/lib/juju/agents/unit-postgresql-k8s-0/charm/venv/kubernetes/config/incluster_config.py", line 54, in load_and_set
    self._load_config()
  File "/var/lib/juju/agents/unit-postgresql-k8s-0/charm/venv/kubernetes/config/incluster_config.py", line 62, in _load_config
    raise ConfigException("Service host/port is not set.")
kubernetes.config.config_exception.ConfigException: Service host/port is not set.
unit-postgresql-k8s-0: 22:42:53 ERROR juju.worker.uniter.operation hook "db-relation-changed" (via hook dispatching script: dispatch) failed: exit status 1
unit-postgresql-k8s-0: 22:42:53 INFO juju.worker.uniter awaiting error resolution for "relation-changed" hook
unit-mattermost-k8s-0: 22:43:06 INFO juju.worker.uniter.operation ran "update-status" hook (via hook dispatching script: dispatch)

Please, can I restart the network without disabling MetalLB, dns, and the other add-ons? Thank you.

@penacleiton hmmm. We’re moving out of my areas of expertise here.

@neppel do you have any advice on further troubleshooting?

Ok, I’m sorry. As you can see, I’m new here. Do you know who could help me? Thank you anyway.

Hi @penacleiton! It’s good to know about your interest in Juju on k8s :smiley:

I would like to confirm whether you are deploying the workloads to lxd instead of microk8s.

I saw the two lines below in your logs, and they made me think that the workloads are probably being deployed to a lxd model (because of the text “machine” at the beginning of each line, which looks like a deployment on VMs):

machine-12: 22:36:00 INFO juju.worker.deployer checking unit "postgresql-k8s/2"
machine-11: 22:36:00 INFO juju.worker.deployer checking unit "mattermost-k8s/2"

I also saw the error happening here when I deployed the charms to a lxd model and related them.
