N7 Get started on a localhost

Juju is an Open Source Charmed Operator Framework. It helps you move from configuration management to application management and has two main components:

* Charmed Operator Lifecycle Manager (OLM) - a hybrid-cloud application management and orchestration system that helps you from Day 0 to Day 2. Deploy, configure, scale, integrate, maintain and manage Kubernetes-native, container-native and VM-native applications – and the relations between them.
  * Charmed Operators, packaged as “Charms”, encapsulate a single application and all the code and know-how it takes to operate it on Kubernetes or machines.
* Charmed Operator SDK - a guide to help you build Charmed Operators for use with the Charmed OLM.

This guide introduces you to Juju and to deploying charmed operators on a local Ubuntu machine. If you are using MS Windows or macOS, you can still follow this tutorial by installing Multipass. By the end of this guide you will have a running web application backed by a database.

Alternatively, if you would like to start with Juju on Kubernetes, please follow the Getting Started on Kubernetes guide.

The following sections provide a basic overview of using Juju, from installation to deploying and relating charmed operators:

  1. Create the test environment
    (Required for MS Windows and macOS users; optional for Ubuntu users.)
  2. Install Juju CLI
  3. Create a local cloud
  4. Bootstrap the Juju OLM controller
  5. Deploy a charmed operator
  6. Relate charmed operators

1. Creating the test environment

The instructions in this tutorial are based on an Ubuntu system. This section details how to quickly create a test environment on MS Windows and macOS using a virtual machine deployed by Multipass.

This section is also useful for those who are already running Ubuntu but would like to create an isolated test environment that can be easily removed.

Info: Multipass is a tool for quickly running virtual machines from any host operating system. This will allow you to create a fully-isolated test environment that won’t impact your host system.

Install Multipass

You can find system-specific information on how to install Multipass at multipass.run.
It only takes a few minutes!
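
On Ubuntu hosts, for example, Multipass is usually installed from a snap (a minimal sketch, assuming snapd is already available on your host system):

sudo snap install multipass

Other operating systems use their own installers; multipass.run has the details.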

Create a virtual machine

To launch a virtual machine named microcloud with 8 GB of RAM, 2 CPUs and a 20 GB disk, execute:

multipass launch -n microcloud -m 8g -c 2 -d 20G 

Multipass will confirm the creation:

Launched: microcloud

Once Multipass has downloaded the latest Long Term Support (LTS) version of Ubuntu, you can open a command line inside the virtual machine with the shell command:

multipass shell microcloud

This gives you access to the shell:

multipass@microcloud:~$
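
If you later want to confirm the VM’s allocated resources or look up its IP address, Multipass can report them from the host (run this outside the VM shell; it is a convenience, not a required step):

multipass info microcloud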

You are now ready to follow the rest of this guide in your newly created Ubuntu machine. All commands should be typed in this shell. Once you are done, you can run ‘multipass delete microcloud’ followed by ‘multipass purge’, and uninstall Multipass, to remove any trace of this guide.

2. Install Juju CLI

We will install the Juju CLI via a snap package: this is the easiest and fastest way to get started with Juju. Installation instructions for several other systems are also available.

Other installation methods
You can find a comprehensive list of the ways to install Juju at Juju | Installing Juju.

The following command installs the Juju CLI:

sudo snap install juju --classic

If the installation was successful, you will see the following message:

juju 2.8.10 from Canonical✓ installed
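
You can also double-check that the client is on your PATH with the version command (the exact version string will vary with the channel you installed from):

juju version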

3. Create a local cloud

We will use LXD to create a cloud on the localhost. LXD should already be installed on your Ubuntu system.

Info: LXD is a system container and virtual machine manager. Juju uses it to instantiate the containers needed for the applications it deploys.

Need to install LXD? Visit the LXD docs for installation instructions.
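
A quick way to check whether LXD is already present, and which version you have, is to print its version (a simple sanity check, not a required step):

lxd --version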

Configure LXD

Juju speaks directly to the local LXD daemon, which requires your user to be a member of the lxd group. Add yourself to the group, then start a new shell session so that the membership takes effect:

sudo adduser $USER lxd
newgrp lxd

LXD then needs to be configured for its environment:

lxd init --auto
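
As a quick sanity check that your user can now talk to the LXD daemon, list the (still empty) set of containers; if this prints an empty table rather than a permission error, LXD is ready:

lxc list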

Verify that the localhost cloud is available

Our localhost cloud is now established. We can verify that by running juju clouds:

juju clouds

Juju should have detected the presence of LXD and added it as the localhost cloud:

[...]
localhost             1  localhost        lxd         LXD Container Hypervisor

4. Bootstrap the Juju controller

Juju uses an active software agent, called the controller, to manage applications. The controller is installed on a machine through the bootstrap process:

juju bootstrap localhost overlord

During the bootstrap process, Juju connects to the cloud, provisions a machine to host the controller, and then installs it:

Creating Juju controller "overlord" on localhost/localhost
Looking for packaged Juju agent version 2.6.8 for amd64
[...]
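
If bootstrap instead fails with an error about IPv6 (as reported in the comments below), disable IPv6 on LXD’s default bridge and run the bootstrap command again; Juju 2.x does not support IPv6 on LXD:

lxc network set lxdbr0 ipv6.address none
juju bootstrap localhost overlord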

5. Deploying a charmed operator: Hello Juju!

The first workload that you’ll deploy is a simple web application. You’ll deploy an application that uses the Flask microframework to send “Hello Juju!” via HTTP.

juju deploy hello-juju

The charmed operator name hello-juju is resolved into an actual charmed operator version by contacting charmhub.io. This charmed operator is then downloaded by the controller and used as the source of the application that is created with the same name:

Located charm "cs:hello-juju-4".
Deploying charm "cs:hello-juju-4".

The next step is to use juju expose to configure the necessary security groups and firewall rules and make the hello-juju application publicly available over the network:

juju expose hello-juju

Checking the deployment

Now that a workload is in place, use juju status to inspect what is happening:

juju status

This outputs:

Model    Controller  Cloud/Region         Version  SLA          Timestamp
default  overlord    localhost/localhost  2.8.10   unsupported  16:24:04+12:00

App         Version  Status  Scale  Charm       Store       Rev  OS      Notes
hello-juju           active      1  hello-juju  jujucharms    4  ubuntu  
  
Unit           Workload  Agent  Machine  Public address  Ports   Message
hello-juju/0*  active    idle   0        10.47.112.215   80/tcp  
 
Machine  State    DNS            Inst id        Series  AZ  Message
0        started  10.47.112.215  juju-646ac9-0  bionic      Running
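
If the tabular view feels like a lot to take in, note that juju status can also emit machine-readable output that is easier to filter with other tools (an optional aside, not needed for this tutorial):

juju status --format yaml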

Connecting to the application

The juju status output above provided the “Public address” of the hello-juju/0 unit as 10.47.112.215 and its Ports column as 80/tcp. Let’s connect!

curl 10.47.112.215

If the connection was successful, you will see:

Hello Juju!

6. Relating another application: adding PostgreSQL to the deployment

Relations are Juju’s defining concept and its main point of difference from other systems in its class. They enable the simplicity, security and stability offered by the whole project.

The hello-juju web server keeps a count of the greetings it has sent out and reports it via the /greetings endpoint.

curl 10.47.112.215/greetings

Which outputs:

{"greetings": 2}

By default, this state is maintained within a SQLite database that is set up by the hello-juju charmed operator itself. In this section, we will deploy a PostgreSQL database and relate it to our hello-juju charmed operator.

To deploy the PostgreSQL database, we will use the same deploy command:

juju deploy postgresql

Which outputs:

Located charm "cs:postgresql-199".
Deploying charm "cs:postgresql-199".

The PostgreSQL charm may take a few moments to deploy. You can check the progress of the deployment by running:

juju status

When ready, the postgresql application’s Status field will read active. Wait for the deployment to finish before following the next steps; relating the applications while postgresql is still settling can leave hello-juju in an error state (see the comments below).
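
One convenient way to wait is to poll juju status with the standard watch utility (any polling approach works; press Ctrl-C to stop once the status reads active):

watch -n 5 juju status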

To relate the two charmed operators, all we need to do is run:

juju relate postgresql:db hello-juju

The applications will auto-configure themselves. You now have a web application and a database deployed and ready to use, without having to deal with application-specific configuration!
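
To see the relation in action, query the greetings counter again; the count should now be served from PostgreSQL rather than the built-in SQLite database (the number you get back depends on how many requests you have made so far):

curl 10.47.112.215/greetings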

Find more information about relations and how they work.

Re-checking the deployment status

Now that the new application and a relation are in place, the juju status output has expanded. Add the --relations option to include relation information:

juju status --relations

This outputs:

Model    Controller  Cloud/Region         Version  SLA          Timestamp
default  overlord    localhost/localhost  2.8.10   unsupported  16:24:04+12:00

App         Version  Status  Scale  Charm       Store       Rev  OS      Notes
hello-juju           active      1  hello-juju  jujucharms    4  ubuntu  
postgresql  10.10    active      1  postgresql  jujucharms  199  ubuntu
  
Unit           Workload  Agent  Machine  Public address  Ports    Message
hello-juju/0*  active    idle   0        10.47.112.215   80/tcp  
postgresql/0*  active    idle   1        10.47.112.216   5432/tcp  Live master (10.10)
 
Machine  State    DNS            Inst id        Series  AZ  Message
0        started  10.47.112.215  juju-646ac9-0  bionic      Running
1        started  10.47.112.216  juju-646ac9-1  bionic      Running

Relation provider       Requirer                Interface    Type     Message
postgresql:coordinator  postgresql:coordinator  coordinator  peer
postgresql:db           hello-juju:db           pgsql        regular
postgresql:replication  postgresql:replication  pgpeer       peer
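
When you are finished experimenting, you can tear the whole environment down in one go; this destroys the controller together with every model and application it manages:

juju destroy-controller overlord --destroy-all-models

If you followed the Multipass route, deleting the microcloud VM (as described in step 1) removes everything as well.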

Yeah, this does feel like a wall of text, I wonder if a separate post on just juju status might be useful and we can just point people to that one every time?

This feels a bit weird and counterproductive to Juju modelling. It just happens that you can connect via the IP address without adjusting the firewall rules, but that doesn’t mean we should. Can we instead just walk through juju expose hello-juju and then we can drop the connection refused? error box.

Thanks for your suggestions, @simonrichardson!

That’s a great idea, this info should probably be in Command status as a way to explain the output. If you add a comment to that page, I will make sure it gets merged with the docs.


What do you think about:

Deploying a charmed operator: Hello Juju!

The first workload that you’ll deploy is a simple web application. You’ll deploy an application that uses the Flask microframework to send “Hello Juju!” via HTTP.

juju deploy hello-juju

The charmed operator name hello-juju is resolved into an actual charmed operator version by contacting charmhub.io. This charmed operator is then downloaded by the controller and used as the source of the application that was created with the same name:

Located charm "cs:hello-juju-4".
Deploying charm "cs:hello-juju-4".

To configure the necessary security groups and firewall rules, use juju expose:

juju expose hello-juju

I am still not entirely happy with the explanation for juju expose. It’s borderline “just run this command, you don’t need to know what it does, just trust me” and that’s bad for a getting started tutorial. “security groups” and “firewall” might spook some people. What do you think about:

The next step is to use juju expose to configure the necessary security groups and firewall rules and make the hello-world application publicly available over the network:

juju expose hello-juju

This is much nicer :+1:

This probably should read “Source”?

Doc improvement suggestions:

  • multipass.run could be a link, like you’ve done in other places in this doc for LXD and snapcraft
  • “It only takes a few minutes!” reads better than “It takes only a few minutes!”

Could you add a link for the alternative Juju CLI installation instructions?

There is a typo in installation

It would be nice to have an info box here with a one sentence explanation of what is LXD and why we need it, like you did for Multipass.
Maybe something along the lines of:

Info: LXD is a system container and virtual machine manager. Juju uses it to instantiate the containers need for the app.

I got a few errors that I had to work around when following these steps on a cleanish Ubuntu install:

lxd init --auto
juju bootstrap localhost overlord

This is what I got:

~$ lxd init --auto
Error: Failed to connect to local LXD: Get "http://unix.socket/1.0": dial unix /var/snap/lxd/common/lxd/unix.socket: connect: permission denied
~$ juju bootstrap localhost overlord
ERROR Permission denied, are you in the lxd group?

Please configure LXD by running:
	$ newgrp lxd
	$ lxd init

So I added my user to the lxd group and followed the next instructions:

myuser:~$ sudo adduser myuser lxd
Adding user `myuser' to group `lxd' ...
Adding user myuser to group lxd
Done.
myuser:~$ newgrp lxd
myuser:~$ lxd init --auto
myuser:~$ juju bootstrap localhost overlord
ERROR profile "default": juju does not support IPv6. Disable IPv6 in LXD via:
	lxc network set lxdbr0 ipv6.address none
and run the command again
myuser:~$ lxc network set lxdbr0 ipv6.address none

Only after all this did I get to the next point mentioned in the doc:

Creating Juju controller "overlord" on localhost/localhost

This might add a lot of complications to the docs, but omitting it entirely might make the reader think something is wrong with their setup.

@simonrichardson

  1. I think the IPv6 problem is expected and we don’t have a better way to deal with it, right?
  2. Is there a better way to deal with the group issue?

Thanks for your comments, @carol-lorini .

I think the doc should mention that between this step:

juju deploy postgresql

and this step:

juju relate postgresql:db hello-juju

You have to wait until the postgresql app’s status changes to active, which you can check using juju status. Otherwise, hello-juju enters an error state like the one below and stops working.

Unit           Workload  Agent  Machine  Public address  Ports     Message
hello-juju/0*  error     idle   0        10.157.24.172   80/tcp    hook failed: "db-relation-changed"

This happened to me the first time I ran this because I assumed juju relate would fail if the apps were not ready (and I was wrong). I think other people following this guide might make the same mistake, so adding something like ‘wait until postgresql is active’ would be helpful.

Correct, we don’t support IPv6 at the moment.

Not currently.

Probably; we could change this for v3 of Juju, but not for v2.9, as we don’t want to break backwards compatibility.

This shouldn’t be required if the group is set up correctly the first time around.

@simonrichardson Thanks for fixing this page.