N16 Deploying Applications

The fundamental purpose of Juju is to deploy and manage software applications in a way that is easy and repeatable. All this is done with the help of charmed operators, which are bits of code that contain all the necessary intelligence to do these things. Charmed operators can exist online (at charmhub.io) or on your local file system (previously downloaded from the store or written locally).

This page collects topics related to deploying applications:

Before deploying an application, a controller must first be created. See the Creating a controller page for guidance.
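
For example, assuming you have already added a cloud known to your client as ‘aws’ (the cloud and controller names below are placeholders), a controller could be created with something like:

juju bootstrap aws my-controller   # placeholder cloud and controller names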

Deploying from charmhub.io

Typically, applications are deployed from the public shared collection of operators at charmhub.io. This ensures that you get the latest version of the charmed operator, and have access to the stream of updates. Deploying in this way is straightforward:

juju deploy mysql

This will create a machine in your chosen backing cloud, within which the MySQL application will be deployed. If your model already contains a machine that has not been assigned an application, Juju will deploy MySQL to that machine instead.

Assuming that a Xenial series version of the charmed operator exists and was selected above, an equivalent command is:

juju deploy --series=xenial mysql

Once used, a charmed operator is cached in the controller’s database to minimise network traffic for subsequent uses.

Channels

charmhub.io offers charmed operators at different stages of development. These stages are called channels. Some users may want the very latest features or to take part in beta testing; others may want to install only the most reliable software. The channels are:

  • stable: (default) This is the latest, tested, working stable version of the charmed operator.
  • candidate: A release candidate. There is high confidence this will work fine, but there may be minor bugs.
  • beta: A beta testing milestone release.
  • edge: The very latest version - expect bugs!

Since each new version of a charmed operator is automatically versioned, these channels serve as pointers to a specific version number. Over time a beta version may become a candidate, or a candidate may become the new stable version.

The default channel is ‘stable’, but you can specify a different channel easily. Here, we choose the ‘beta’ channel:

juju deploy mysql --channel beta

If no version of the charmed operator has been published to the requested channel, Juju will fall back to the next ‘most stable’ channel; e.g. if you specify the ‘beta’ channel but no charmed operator version is set for it, Juju will try to deploy from the ‘candidate’ channel instead, and so on. This means that whenever you specify a channel, you will end up with the closest available approximation of your choice.

See Upgrading applications for how charmed operator upgrades work.

Providing applications with a custom name

A custom name, such as ‘db’, can be assigned to the application by providing an extra argument:

juju deploy mysql db

Custom application names must comply with a few syntax rules. They consist of lower-case letters (a-z), numbers (0-9), and single hyphens (-). The name must begin with a letter, and a hyphen must not be followed by a group consisting solely of numbers.

Valid: myappname, custom-app, app2
Invalid: myAppName, custom--app, app2-23, areacode-555-info

The resulting application can be referred to as a “named application”.

Deploying from a local charmed operator

It is possible to deploy applications using local charmed operators. See Deploying charmed operators offline for further guidance.
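
As a brief illustration, a charmed operator that has been downloaded or built locally can be deployed by pointing juju deploy at its path (the file name below is a placeholder):

juju deploy ./mysql.charm   # placeholder path to a locally built charm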

Deploying from a charmed operator bundle

Complex installations requiring multiple charmed operators can be achieved through the use of a bundle. See the Charmed operator bundles page for guidance.
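
For illustration, a bundle can be deployed by its charmhub.io name or from a local file (the file name below is a placeholder):

juju deploy ./my-bundle.yaml   # placeholder bundle file name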

Exposing deployed applications

Once an application is deployed, changes need to be made to the backing cloud’s firewall to permit network traffic to reach the application. This is done with the expose command.

After running a juju expose command, any ports opened by the application’s charmed operator become accessible from any IP address by default (see the note on Juju 2.9 below for how to restrict this).

Assuming the ‘wordpress’ application has been deployed (and a relation has been made to the deployed ‘mariadb’ database), we would expose it in this way:

juju expose wordpress

The partial output from the status command below informs us that the ‘wordpress’ application is currently exposed. In this case it is available via its public address of 54.224.246.234:

App        Version  Status  Scale  Charm      Store       Rev  OS      Notes
mariadb    10.1.36  active      1  mariadb    jujucharms    7  ubuntu  
wordpress           active      1  wordpress  jujucharms    5  ubuntu  exposed

Unit          Workload  Agent  Machine  Public address  Ports   Message
mariadb/0*    active    idle   1        54.147.127.19           ready
wordpress/0*  active    idle   0        54.224.246.234  80/tcp

Use the unexpose command to undo the changes:

juju unexpose wordpress

Since Juju 2.9, operators can explicitly specify the list of CIDRs and/or spaces that should be able to access the ports opened by the application. See Controlling application expose settings via the Juju CLI to learn more about this feature.
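
As a sketch of that feature, assuming a Juju 2.9 client and a subnet of 10.0.0.0/24 (a placeholder value), access could be restricted with something like:

juju expose wordpress --to-cidrs 10.0.0.0/24   # placeholder subnet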

Configuring at deployment time

Deployed applications usually start with a sane default configuration. However, for some applications it may be desirable (and quicker) to configure them at deployment time. This can be done whether a charmed operator is deployed from charmhub.io or from a local charmed operator. See Configuring applications for more on this.
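
As an illustrative sketch, configuration can be supplied with the --config option, either as key=value pairs or as a YAML file (the option name and file name below are placeholders that depend on the charmed operator):

juju deploy mediawiki --config name='my wiki'        # hypothetical option for this charm
juju deploy mediawiki --config mediawiki-config.yaml  # placeholder YAML file name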

Deploying to LXD containers

Applications can be deployed directly to new LXD containers in this way:

juju deploy etcd --to lxd

Here, etcd is deployed to a new container on a new machine.

It is equally possible to deploy to a new container that, in turn, resides on a pre-existing machine (see next section).
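
For example, assuming machine 0 already exists in the model, a new container could be created on it with:

juju deploy etcd --to lxd:0   # assumes machine 0 exists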

Deploying to specific machines

You can specify which machine (or container) an application is to be deployed to. See Deploying to specific machines for full coverage of this topic.
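
As a brief sketch (machine 1 is assumed to already exist in the model):

juju deploy mysql --to 1   # assumes machine 1 exists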

Deploying to specific availability zones

It is possible to dictate what availability zone (or zones) a machine must be installed in. See Deploying to specific availability zones for details.
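
For illustration, assuming the backing cloud offers a zone named ‘us-east-1a’ (a placeholder), a zone can be requested via a constraint:

juju deploy mysql --constraints zones=us-east-1a   # placeholder zone name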

Deploying to network spaces

Using network spaces you can create a more restricted network topology for applications at deployment time. See Deploying to network spaces for more information.
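
As a sketch, assuming a space named ‘db-space’ has already been defined in the model, an application can be bound to it at deployment time:

juju deploy mysql --bind db-space   # assumes the space db-space exists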

Scaling out deployed applications

A common enterprise requirement, once applications have been running for a while, is the ability to scale out (and scale back) one’s infrastructure. Fortunately, this is one of Juju’s strengths. The Scaling applications page offers in-depth guidance on the matter.
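
For example, on a machine model the ‘wordpress’ application deployed above could be scaled out by adding units (the unit count here is arbitrary):

juju add-unit wordpress -n 2   # adds two more units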

Trusting an application

Some applications may require access to the backing cloud in order to fulfill their purpose. In such cases, the model’s credential must be shared with the application, which can be done at deployment time. See the section Trusting an application with a credential for how this works.
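
As an illustrative sketch (the charm name below is a placeholder for one that needs cloud credentials), trust can be granted at deployment time or afterwards:

juju deploy aws-integrator --trust   # grant trust at deploy time; placeholder charm name
juju trust aws-integrator            # or grant trust after deployment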

Points to review:

Do we ALWAYS have to make these changes?

Typically, applications are deployed from the public shared collection of operators at charmhub.io. This ensures that you get the latest version of the charmed operator, and have access to the stream of updates. …


This will create a machine in your chosen backing cloud, where MySQL will be installed and configured.

While the second half of that statement is true, I feel like it is confusing the issue at this point, and I would drop it. If we did want to keep it, then something like

If your model has already added a machine but not assigned an application to it, then Juju will schedule MySQL to use that machine instead.


There are 2 problems here.

  1. If we do want to talk about deploying a specific series of an application, then the preferred way would be
  juju deploy --series=xenial mysql

We also need to revisit this point as we settle through the implications of the new work around bases, as they would change the selection logic. For 2.9, I don’t feel this is going to change, as we only support bases that can be mapped to a series. Longer term, we will need alternative language around this.

  2. cs: very explicitly does not target charmhub.io but ‘charmstore’, aka jaas.ai/mysql, and ‘cs:’ is the way for you to use the old mechanisms that worked in 2.8 and prior. So we definitely don’t want to say:

We do have support for ‘ch:’ meaning charmhub, but we are avoiding pointing users to it, since it clutters up the UI.

If the goal for this document is to only present the new style of interaction, then I would skip talking about cs:, and certainly wouldn’t mention xenial/foo, but instead target --series. However, I would likely avoid all of that at this point in the conversation.

So here you are making the (very common) mistake of conflating channels with risks, because we offer a short syntax for specifying risk. We should be cautious in how we present things.

Specifically a channel is actually made up of ‘track/risk/branch’, where ‘/branch’ is more than I would talk about now. The snapcraft doc:

Does a pretty good job of describing what we want out of channels.

I think it is a good idea to have a discussion about risk levels, and to show the user how to select them.

As for the track portion of channels, I’m not sure we have any great examples of charms that are currently using tracks to manage different releases. I believe the current expectation is that tracks will be used to separate out long-term supported versions of the charms, and will likely also represent versions of the underlying applications. (e.g., you might select a track based on what version of postgresql you want to run.)

This is true for machine charms, but is less expected for Kubernetes charms. If we are trying to focus the early docs on just the Kubernetes experience at this time, then I would probably move series into a different document.

If the goal is to be able to cover both topics simultaneously, then I would probably rework this structure, into something starting along the lines of

Often applications can be run on many different target platforms. This may be a specific architecture or a specific OS or series of that OS. For charms that support multiple series, they always declare a default series .... To select a series other than the default, you can specify it with .... Some users may want to use a specific series, which the charm may not explicitly support. To force the series version for testing purposes see also ...

These don’t feel like they warrant the space given to them. We have a full heading and a single sentence under that heading, directing you to a completely different page. Maybe something more like:

Alternative ways of deploying charms

It is possible to deploy applications…

And then each of the sentences describes other ways that deployments differ.

That said, deploying from a bundle is a fairly major way of getting the content that you want, and it is treated here as just a side note.

The warning here is no longer true in 2.9. We can instead direct people to:

We have the statement later about listing CIDRs, but

a) We probably shouldn’t use CIDR there just yet, as it hasn’t been defined in this document, and is definitely jargon.
b) That box should be expanded and replace the warning about exposed to all traffic.

So this is true, but only applies to non-Kubernetes models.

Again, this grouping feels weird: a top level section that just refers you to a different document. I would probably structure this as one section with either bullet points or single paragraphs talking about the various other documents that you might want to read.

Went with the second option as it fits the level of this guide IMO

  • Ok, this is a point we definitely need to discuss because it appears in several docs pages.

I think that for most guides, I wouldn’t like to use a specific series unless necessary. For this guide, yeah, we should present both options and explain what the difference is.

What is “the new work around bases”? I don’t think I quite get how series, bases, and deploy are linked.

For non-Kubernetes models, the machine that the charm and the application run on needs to be defined. Historically this was done only via ‘series’, which was a short string like “xenial”, “bionic”, or “centos7”. Generally a charm would know how to configure its application (e.g. postgresql) for a limited number of these. (There is very little reason to expect that an ubuntu bionic charm knows how to install, configure, and operate a postgresql running on windows/centos. And experience says that it isn’t always true that code that knows how to configure Postgresql on Xenial automatically does the right things on Bionic. For example, Trusty used Upstart, Xenial used systemd.)

Bases are an extension of ‘series’, pulling out pieces that feel overloaded (centos7 becomes os: centos, channel: 7; bionic becomes os: ubuntu, channel: 18.04).

The reason deploy has a field for series, is because if you do have a charm that has taken the time to support more than one, you then want a knob to allow the admin to decide which of those they want in their model.
