The Juju team is proud to release Juju 2.9.0. We aim to be your preferred tool for writing operators, software that manages software, whether your hosting infrastructure is Kubernetes, the cloud, or on-premises.
2.9 Highlights
New charmed operators on Kubernetes
Juju 2.9 offers much finer granularity of control for workloads on Kubernetes, with a new class of Kubernetes operator that places the charm as a sidecar to the workload. Sidecar charms gain direct access to low-level workload details: file monitoring, local socket communication and process management are now possible. Applications using this approach are deployed as Kubernetes StatefulSets.
This release adds preliminary support for “sidecar” charms, a new approach to Kubernetes charms in which the charm container runs in the same pod as the workload container (instead of in a separate pod). Included with this is a feature preview of Pebble, a lightweight process supervisor for use with sidecar charms.
You can try out the demonstration snappass-test sidecar charm by following the instructions in the README.
More details about this change are available in this post!
Kube Config Bootstrap Support
It is now possible to bootstrap to Kubernetes clusters defined in the user’s local Kubernetes config files. You can list the available clusters with juju clouds and then bootstrap with juju bootstrap <cloud>.
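For example, assuming your local kubeconfig defines a cluster that juju clouds lists as my-k8s-cloud (the cloud and controller names here are purely illustrative):
juju clouds
juju bootstrap my-k8s-cloud my-k8s-controller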
CharmHub
CharmHub is a brand new, all-shiny charm store, replacing the existing CharmStore. Don’t worry though: we’ll still keep backwards compatibility with the old charm store, so everything should work, with one exception. Previously, if you didn’t specify a scheme (cs:) for your charm/bundle deployments, they would always hit the existing charm store. That is no longer the case; if you want to target the existing charm store, you will require the cs: prefix.
The CharmHub API is still in progress, so some things like private charms aren’t currently supported, but are being actively worked on!
Juju is fully integrated with the CharmHub store and has gained some new commands, alongside the existing integration for deploying and refreshing your deployments.
juju find
Juju now has the ability to search the CharmHub store directly from the command line. The fuzzy search will find any potential matches and return the fully qualified charm/bundle name, making it a breeze to deploy a charm without leaving the command line.
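For example, a fuzzy search for anything related to “wordpress”:
juju find wordpress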
juju info
juju info displays vital information about a given charm/bundle, including the various channels, architectures and series available for deployment. In conjunction with juju find, the Juju command line is now more closely integrated with the store.
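For example, to see the channels, architectures and series available for a charm before deploying it:
juju info wordpress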
juju download
With the move to the new CharmHub backend, Juju can now download charms directly from the CharmHub store to your local file system. This should make it easier to customise a charm before deploying, or to support air-gapped deployment strategies.
juju download wordpress
For more information about the CharmHub feature, refer to this discourse post.
Architecture and series constraint selection for charms
An example of provisioning an arm64 machine and selecting the correct charm for that machine:
juju deploy ubuntu --constraints="arch=arm64"
Granular control of application expose parameters
Charm authors now have the ability to open port ranges for specific application endpoints. Operators have the power to control which application endpoints get exposed and who (spaces or CIDRs) should be able to access the ports opened for those endpoints.
Opening ports for specific endpoints
In previous versions of juju, when a charm requests a port to be opened, the port is implicitly opened for all defined application endpoints.
Opening and closing ports or port ranges is typically handled by the framework used by charm authors. Under the hood, such requests trigger an invocation of the open-port, opened-ports and close-port hook tools.
Starting with juju 2.9, the aforementioned hook tools support an optional --endpoints flag which allows the charm to constrain the opened port range to a comma-delimited list of application endpoints. If the flag is not provided, the command behaves in exactly the same manner as previous versions of juju, i.e. it opens the port for all application endpoints.
Opening ports
In the following example, we deploy percona-cluster and then proceed to open port 3306 for all endpoints and port 8080 for the db-admin endpoint.
$ juju deploy percona-cluster
$ juju run --unit percona-cluster/0 "open-port 3306/tcp"
$ juju run --unit percona-cluster/0 "open-port --endpoints db-admin 8080/tcp"
The opened-ports tool, when invoked without any argument, will by default display the unique set of port ranges opened by the charm across all endpoints. This is intentional: we don’t want to break any existing charms out there that may be parsing the output of this command and assuming that the output is always formatted in a particular way.
$ juju run --unit percona-cluster/0 "opened-ports"
3306/tcp
8080/tcp
If we pass the --endpoints flag to the above command, we will get back the extended opened-port-range report, which is broken down by endpoint. In the following output the * symbol represents all endpoints.
$ juju run --unit percona-cluster/0 "opened-ports --endpoints"
3306/tcp (*)
8080/tcp (db-admin)
The semantic meaning of “all endpoints”
If the charm needs to open a particular port range for all endpoints, it can do so in two ways:
- Via an open-port port/protocol invocation.
- Via an open-port --endpoints endpoint-list port/protocol invocation, where the endpoint-list value is a comma-delimited list of all application endpoints.
While both approaches are functionally equivalent (i.e. the port range gets opened for all endpoints), there is a slight difference with respect to the charm author’s stated intent.
The first command requests juju to open the port range for all endpoints, present and future. In other words, if the charm gets upgraded and the new version defines new endpoints, the port range will be automatically opened for the new endpoints as well.
On the other hand, the second command provides an explicit list of endpoints for which the port range should be opened. If a newer version of the charm defines new endpoints, the port range will not be automatically opened for them.
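To illustrate, assuming (purely for this example) that the charm only defined the endpoints db and db-admin, the following two invocations would open the same ports today, but only the first would also cover endpoints added by future charm revisions:
$ juju run --unit percona-cluster/0 "open-port 3306/tcp"
$ juju run --unit percona-cluster/0 "open-port --endpoints db,db-admin 3306/tcp"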
Closing ports
Analogous to the open-port tool, close-port also allows an endpoint list to be specified. As you might expect, if the --endpoints flag is omitted, the specified port range will be closed for all endpoints.
Continuing with the percona-cluster example from above, let’s now close port 3306 and list the open port ranges again; then close port 8080 (note that no --endpoints flag is specified) and list the open port ranges one more time:
$ juju run --unit percona-cluster/0 "close-port 3306/tcp"
$ juju run --unit percona-cluster/0 "opened-ports --endpoints"
8080/tcp (db-admin)
$ juju run --unit percona-cluster/0 "close-port 8080/tcp"
$ juju run --unit percona-cluster/0 "opened-ports --endpoints"
# No output is returned as no ports are opened
What if the charm needs to open a port range for all endpoints except one? The open/close port mechanism also supports this particular use-case:
$ juju run --unit percona-cluster/0 "open-port 3306/tcp"
$ juju run --unit percona-cluster/0 "opened-ports --endpoints"
3306/tcp (*)
$ juju run --unit percona-cluster/0 "close-port --endpoints db-admin 3306/tcp"
$ juju run --unit percona-cluster/0 "opened-ports --endpoints"
3306/tcp (access, cluster, db, ha, master, nrpe-external-master, shared-db, slave)
As you can see, the close-port invocation caused juju to replace the “all endpoints” entry with the list of all application endpoints except the db-admin one.
Supporting open/close ports for endpoints when writing new charms
Charm authors that are interested in leveraging the new --endpoints flag when invoking the aforementioned hook tools must first check whether the flag is supported by the environment where the charm is deployed. This can be done by inspecting the value of the JUJU_VERSION environment variable and deciding, based on its value (2.9+), whether this feature is available to them.
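A minimal sketch of such a check, as it might appear in a shell-based hook (the db-admin endpoint and port are taken from the example above; the version comparison relies on GNU sort):
# Only pass --endpoints when the agent reports Juju 2.9 or later.
if printf '2.9\n%s\n' "$JUJU_VERSION" | sort -C -V; then
    open-port --endpoints db-admin 8080/tcp
else
    open-port 8080/tcp
fi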
Granular control of application expose settings by operators
When an application was exposed in a previous version of juju, either via the CLI (e.g. juju expose APP) or via a bundle with an expose: true field, all of its opened port ranges were automatically made accessible to everyone (i.e. the ingress rules used 0.0.0.0/0 as the source CIDR for incoming traffic).
From juju 2.9 onwards, operators can opt to either expose all application ports or to only expose the ports that charms have opened for a particular list of application endpoints. Moreover, the operator can specify, both globally and on a per-endpoint basis, a set of source CIDRs (and/or spaces) that should be allowed to access the port ranges opened either globally or for the specified endpoint.
Controlling expose settings via the juju CLI
Exposing individual application endpoints
As of juju 2.9, the juju expose command now accepts the following optional flags:
- --endpoints: a comma-delimited list of endpoints to use for selecting the list of port ranges to be exposed.
- --to-cidrs: a comma-delimited list of CIDRs that should be able to access the selected port ranges.
- --to-spaces: a comma-delimited list of space names that should be able to access the selected port ranges.
Note that all of the above flags are optional. If none of these flags are provided (e.g. juju expose percona-cluster), juju will instead evaluate the equivalent command juju expose percona-cluster --to-cidrs 0.0.0.0/0,::/0, which makes all opened ports accessible to everyone, thus matching the behavior of older juju versions.
On the other hand, if a list of endpoints is specified but no other flags are provided (e.g. juju expose percona-cluster --endpoints db), juju will once again assume an implicit --to-cidrs 0.0.0.0/0,::/0 argument.
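The --to-spaces flag works in the same fashion; for instance (assuming, purely for illustration, that a space named internal has already been added to the model):
$ juju expose percona-cluster --endpoints db --to-spaces internal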
Contrary to older juju versions, where juju expose X was a one-off operation that simply marked the application as exposed, with juju 2.9 you can execute a sequence of juju expose X commands to specify expose settings for individual application endpoints. Note that each juju expose X --endpoints command will overwrite the previous expose settings for each referenced endpoint name.
Let’s take a look at a simple example where we expose all endpoints of percona-cluster and then override the expose settings for the db-admin endpoint to only allow access from the local network (10.0.0.0/24). Finally, we repeat the last expose command to specify a different set of CIDRs.
$ juju expose percona-cluster
$ juju expose percona-cluster --endpoints db-admin --to-cidrs 10.0.0.0/24
$ juju show-application percona-cluster
percona-cluster:
  ...
  exposed: true
  exposed-endpoints:
    "":
      expose-to-cidrs:
      - 0.0.0.0/0
      - ::/0
    db-admin:
      expose-to-cidrs:
      - 10.0.0.0/24
  ...
$ juju expose percona-cluster --endpoints db-admin --to-cidrs 192.168.0.0/24,192.168.1.0/24
$ juju show-application percona-cluster
percona-cluster:
  ...
  exposed: true
  exposed-endpoints:
    "":
      expose-to-cidrs:
      - 0.0.0.0/0
      - ::/0
    db-admin:
      expose-to-cidrs:
      - 192.168.0.0/24
      - 192.168.1.0/24
  ...
Un-exposing individual endpoints or the entire application
In a similar fashion, the juju unexpose command also supports an optional --endpoints flag which may be specified to completely remove the expose settings for a list of endpoints. Juju will automatically mark the application as unexposed when the last expose setting is removed.
If the command is invoked without the --endpoints flag, all expose settings will be deleted and the application will be marked as unexposed.
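Continuing the example above, the first command below removes the expose settings for the db-admin endpoint only, while the second unexposes the application entirely:
$ juju unexpose percona-cluster --endpoints db-admin
$ juju unexpose percona-cluster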
Providing expose settings via bundles
Specifying all the expose rules for each application in your model can be quite tedious. Fortunately, we can use a bundle to apply all rules in a single go! As the per-endpoint expose settings are deployment-specific, they must always be specified as part of an overlay.
Furthermore, when your bundle contains an overlay section with expose settings, the expose field (with a true value) is not allowed to be present anywhere inside the bundle, as this can cause security issues with older controllers. To understand why this is the case, let’s examine the following bundle and its overlay:
series: bionic
applications:
  percona-cluster:
    charm: cs:percona-cluster-291
    num_units: 1
    to:
    - "0"
    # This is not allowed and is only included for this example!
    expose: true
machines:
  "0": {}
--- # overlay.yaml
applications:
  percona-cluster:
    exposed-endpoints:
      "":
        expose-to-cidrs:
        - 0.0.0.0/0
        - ::/0
      db-admin:
        expose-to-cidrs:
        - 192.168.0.0/24
As you can see above, all endpoints of the percona-cluster application are exposed to the world, with the exception of ports opened for db-admin, which are only accessible from 192.168.0.0/24.
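Assuming the bundle and its overlay are saved as bundle.yaml and overlay.yaml respectively (the file names are illustrative, and the offending expose: true line removed), the overlay is applied at deploy time with:
$ juju deploy ./bundle.yaml --overlay ./overlay.yaml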
If we were to take this bundle, as-is, and deploy it to a 2.8 controller, the controller would ignore the exposed-endpoints section of the overlay and only consider the expose: true entry. This would make all ports (including the db-admin ones) accessible from 0.0.0.0/0, which is clearly not the operator’s intent!
To prevent this from happening, juju will refuse to deploy the bundle if it contains both an expose: true flag and an exposed-endpoints section, and will instead display an error:
ERROR cannot deploy bundle: the provided bundle has the following errors:
exposed-endpoints cannot be specified together with "exposed:true" in application "percona-cluster" as this poses a security risk when deploying bundles to older controllers
Furthermore, when exporting bundles via juju export-bundle, the controller will populate the exposed-endpoints section for applications that include endpoint-specific expose settings and omit setting the expose flag. On the other hand, if an application exposes all endpoints to 0.0.0.0/0, the bundle exporter will instead set the expose flag and omit the exposed-endpoints section.
This behavior follows the principle of least surprise. If the bundle is exported from a 2.9 controller and subsequently deployed to a 2.8 controller, the application will not be exposed to the entire world out of the box (as there is no expose: true field present). Instead, the operator will need to step in, evaluate the risk involved and manually expose the application if needed.
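To check which form the exporter chose for a given model, you can print the generated bundle and look for either an expose flag or an exposed-endpoints section:
$ juju export-bundle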
Further technical detail is available in this post.
Enhancements
Bundles with cross model offers now update existing offers
It’s now possible to deploy a bundle containing offers multiple times. Juju will detect when an offer already exists and update its details instead of emitting an “offer already exists” error.
juju list-operations now supports batched results
When listing operations, i.e. the results of running actions or executing scripts on machines/units, the number of results returned is limited to 50 by default. Using the --offset argument, the next batch of results may be requested. Use --limit to adjust the number of results per batch.
$ juju operations --offset=12 --limit=4
Displaying operation results 13 to 16.
Run the command again with --offset=16 --limit=4 to see the next batch.
Id Status Started Finished Task IDs Summary
101 completed 2015-02-14T06:06:06 2,3 operation 101
103 running 2014-02-14T06:06:06 4 operation 103
105 pending 6 operation 105
110 error operation 110
juju ssh/scp to leader
There’s now support for the <unit>/leader syntax (which is currently also supported for juju run) in the ssh, scp, debug-hook(s) and debug-code client commands.
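For example, assuming an application named mysql is deployed (the application name and file path are illustrative), you can target whichever unit currently holds leadership:
juju ssh mysql/leader
juju scp mysql/leader:/var/log/mysql/error.log .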
To ensure that the client can also support this feature when targeting older controllers, leader unit resolution is performed client-side using the output of the Status API call (filtered by the application name).
NOTE: given that leader resolution happens at the client, there is still the potential for a race where the leader might change while we are trying to connect to the previous leader. However, leaders are generally stable, so a) this is unlikely to happen, and b) the same race exists with the current manual way of ssh’ing to the leader (run juju status, find the leader, run juju ssh against it).
vSphere enhancements
Hardware version
You can now specify which VM hardware version to use when deploying vSphere machines via the force-vm-hardware-version model config option.
Examples:
Set the value for bootstrap:
$ juju bootstrap vsphere --config force-vm-hardware-version=17
Change the VM Hardware version for a new model:
$ juju add-model test --config force-vm-hardware-version=15
$ juju add-machine
Disk Provisioning
On vSphere, operators can now set a new model-level config option disk-provisioning-type
which dictates how template VM disks should be cloned when creating a new machine. Supported values are:
- thin - Sparse provisioning, only written blocks will take up disk space on the datastore
- thick - The entire size of the virtual disk will be deducted from the datastore, but unwritten blocks will not be zeroed out. This adds 2 potential pitfalls. See comments in provider/vsphere/internal/vsphereclient/client.go regarding DiskProvisioningType.
- thickEagerZero (default) - The entire size of the virtual disk is deducted from the datastore, and unwritten blocks are zeroed out. Improves latency when committing to disk, as no extra step needs to be taken before writing data.
For example:
juju bootstrap
juju model-config disk-provisioning-type=thin
juju add-machine
Azure enhancements
Spaces support
The Azure provider supports network spaces. juju subnets will show the subnets Juju knows about for a given model.
To add a space in one of the available subnets:
juju add-space foo 192.169.2.0/24
and now add a machine in that space:
juju add-machine --constraints="space=foo"
This will create a machine and use “subnet1 192.169.2.0/24” for the network configuration of the primary NIC machine-X-primary. If no space is specified, Juju will default to using a subnet called juju-internal-network in the configured vnet.
You can create another space:
juju add-space bar 192.168.3.0/24
and create a machine in both spaces:
juju add-machine --constraints="spaces=foo,bar"
A machine gets created with 2 NICs, machine-X-primary and machine-X-interface-1. Each NIC is bound to the subnet of its space.
If you do this instead (i.e. create a space with multiple subnets):
juju add-space foobar 192.169.2.0/24 192.169.3.0/24
juju add-machine --constraints="spaces=foobar"
Then the machine only gets one NIC and one of the subnets is chosen at random.
Availability sets
As long as at least two units are deployed, Azure guarantees 99.95% availability of the application overall. Exposed ports are automatically load-balanced across all units within the application. Using availability sets disables manual placement and the add-machine command.
Availability sets work differently from zones, but serve the same basic purpose. With zones, Juju directly ensures each unit of a distribution group is placed into a different zone. With sets, Juju places each unit of a distribution group into the same set, and Azure will then try to ensure that not all units in the set will become unavailable at the same time.
Availability sets are not enforced when unit placement (i.e. the --to option for the deploy or add-unit commands) is used.
When creating a new machine, the algorithm used for choosing the availability set is:
- if the machine is a controller, use the availability set name juju-controller
- if the machine has units assigned, create an availability set with a name based on the value of the tags.JujuUnitsDeployed tag in vmTags, if it exists
- otherwise, do not assign the machine to an availability set
Custom resource groups
By default, Juju will create a new resource group for each model, to hold the artifacts for that model.
Sometimes, permissions will be such that the user does not have rights to create new resource groups, but they can use existing groups. Juju models can be configured to use such pre-existing resource groups using the resource-group model config, eg
juju add-model test --config resource-group=myresgroup
It’s also possible to bootstrap using an existing resource group for the controller model, eg
juju bootstrap azure --config resource-group=myresgroup --no-default-model
Note that the --no-default-model option is required to ensure Juju does not create an additional “default” model in a new resource group, as would normally occur.
Note: When destroying a controller containing models with BYO resource groups, those resource groups will need to be cleaned up manually, since there’s a limitation in how Juju tears down a controller that prevents those groups from being cleaned up automatically.
Custom virtual network
It’s possible to use an existing virtual network when provisioning instances. Say there’s an existing resource group containing a virtual network
|- test-rg
   |- test-vn
      |- juju-controller-subnet 192.169.2.0/24
      |- subnet2 192.169.3.0/24
The network model config attribute can be used to specify that this network be used instead of Juju creating a new one:
juju add-model test --config network=test-rg/test-vn
Depending on spaces setup (see below), the machine can be provisioned with NICs on each subnet.
If the specified virtual network already has a security group, Juju will use that; otherwise it will create a new one called juju-internal-nsg.
Placement also works, eg to choose one of the subnets in the vnet:
juju deploy mysql --to subnet=subnet2
Note: For bootstrap, if space constraints are not used to specify a particular subnet, Juju will prefer a subnet called juju-controller-subnet for the controller’s primary NIC. If no such subnet exists, Juju will choose one at random, or else error. A simple option for specifying a particular non-default subnet (if spaces are not set up) is to use placement:
juju bootstrap azure --config network=test-rg/test-vn --to subnet=subnet2
You must also ensure that the subnet has a security group which is configured to allow incoming traffic to the following ports:
- 22 (for bootstrap to use ssh to provision the instance)
- 17070 (to allow Juju clients and agents to connect to the controller API server)
Full disk encryption
When provisioning an Azure instance, it is possible to ask for root disk encryption using a disk encryption set and a key vault.
You can either use your own disk encryption set or have Juju create a key vault and disk encryption set. Right now, the key size and algorithm are not user configurable; RSA with 4096 bits is used.
The provisioning is configured using a Juju storage pool.
The configurable parameters are:
- encrypted (must be true for Juju to act on it)
- disk-encryption-set-name (defaults to vault name prefix)
- vault-name-prefix
- vault-key-name (defaults to "disk-secret")
- vault-user-id
If just the disk encryption set name is configured, Juju will use that existing disk encryption set and not create any additional resources. It can reside in a different resource group by specifying resgroup_name/DES_name.
If a vault name prefix is also configured, Juju will create a vault called vault_name_prefix-resource_group_name. Vault names need to be unique across a subscription.
If vault-user-id is configured, that user will be given full access to manage the vault and keys created by Juju.
To add an instance with disk encryption, first create a storage pool and then use the root-disk-source constraint when adding the Juju machine, eg
juju create-storage-pool byo azure encrypted=true disk-encryption-set-name="mydisksecrets"
juju add-machine --constraints="root-disk-source=byo"
or
juju create-storage-pool des azure encrypted=true vault-name-prefix=mysecrets vault-user-id=27f86c31-bca0-4a37-b233-0e5030107696
juju add-machine --constraints="root-disk-source=des"
etc.
If you want the controller to also use disk encryption, the storage pool will need to be created at bootstrap time, eg
juju bootstrap azure \
--no-default-model \
--storage-pool name=foo \
--storage-pool type=azure \
--storage-pool encrypted=true \
--storage-pool vault-name-prefix=secret \
--bootstrap-constraints="root-disk-source=foo"
The same arg handling as is used for --config is used here; either a YAML file or multiple cmd args can be used with --storage-pool.
Manual provider spaces
As manual machines are added to a model, Juju will progressively discover their subnet connectivity, allowing traffic segregation via network spaces.
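A brief sketch of the workflow (the SSH target and CIDR are illustrative): after a manual machine is added, its subnets show up in the model and can then be grouped into spaces:
juju add-machine ssh:ubuntu@10.10.0.5
juju subnets
juju add-space dmz 10.10.0.0/24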
Unit and machine agent aggregation
Juju no longer runs separate agents for the machine and each of its units. Instead, a single machine agent handles both machine and unit concerns.
Other changes
- Juju has rectified some modeling issues for Kubernetes clusters, where previous versions were using incorrect data to model credentials. Because of this change, Juju 2.9 now creates different Kubernetes cloud credentials that are not backwards compatible with Juju releases pre 2.9.
- It’s now possible to use Ctrl-C to cancel an in-flight bootstrap.
Issues Resolved
This release includes fixes released across several milestones.
- LP # 1892029 - make juju-reboot work on centos
- LP # 1890167 - txn watcher sync error
- LP # 1899536 - error upgrading from 2.7 to 2.8: cannot get all link layer devices
- LP # 1898718 - reboot breaks microk8s deployments
- LP # 1895040 - upgrading to a charm with a peer relation gives “unknown relation”
- LP # 1897743 - start hook called after charm upgrade, no reboot
- LP # 1895598 - continuous rotation of K8s charm units
- LP # 1900937 - destroy k8s controller hangs
- LP # 1901439 - Juju destroy kubernetes model hangs on “destroying” forever
- LP # 1898966 - juju allows leadership code to run on non-leader
- LP # 1902319 - set-config compatibility issue with older controllers
- LP # 1902815 - purge reboot monitor state when unit removed from machine
- LP # 1904020 - [azure provider] Juju purges resource group VMs and other resources that do not belong to it
- LP # 1902548 - TestClientWatchAllAdminPermission Intermittent Failures
- LP # 1902814 - embedded CLI tries to discharge macaroons for external users
- LP # 1902945 - 2.9rc2: kubeflow deploy fails on microk8s
- LP # 1903202 - juju machine agent runtime panic
- LP # 1903557 - 2.9rc2: juju enable-ha times out with new nodes failing to connect to juju api
- LP # 1905827 - Incorrect memory constraints on LXD cluster
- LP # 1900002 - Juju fails to reset jujud password on windows if logged into desktop
- LP # 1903726 - bad log message when caas unit shuts down
- LP # 1910595 - juju migrate fails: agents failed phase “validating”
- LP # 1907685 - upgrade step for 2.8.1 ReplaceNeverSetWithUnset fails if statuses collection is large
- LP # 1911800 - juju migrate fails: agents failed phase “validating”
- LP # 1910989 - Juju injecting unwanted metadata in Resources deployed by K8s Charm
- LP # 1904619 - lxd profile failure during upgrade-charm with more than 1 unit on a machine
- LP # 1915844 - export-bundle does not export charmhub charms appropriately, leading to an unusable bundle
- LP # 1901069 - centos: juju.worker.proxyupdater error writing apt proxy config file: open /etc/apt/apt.conf.d/95-juju-proxy-settings: no such file or directory
- LP # 1915320 - juju 2.9 cannot deploy kubeflow-lite bundle
- LP # 1914074 - Inconsistency between juju bootstrap and juju deploy when dealing with Kubernetes configuration flags
- LP # 1914619 - Failed agent stuck in a loop
- LP # 1918486 - Juju 2.9 doesn’t work with Juju 2.8 Kubernetes credentials
- LP # 1921722 - Migration fails with different metadata name to charm URL name
- LP # 1915418 - network-get format yaml keys differ from format json
- LP # 1923141 - juju dashboard command returns wrong IP for microk8s
- LP # 1916621 - Ingress rules are removed if pod spec changes
Other issues are available from the individual milestone pages:
Known Issues
Juju 2.9.0 contains a small number of known issues that require workarounds to overcome.
- Juju clients pre 2.9 will no longer work fully with Juju 2.9. We recommend upgrading all clients to Juju 2.9.
- Model migration is broken for some k8s scenarios, including migrating a 2.8 model to a 2.9 controller
- On k8s installations based on minikube, kind or rancher, the Juju Dashboard cannot be served because there’s no direct access to the controller service out of the box. A fix is being investigated.
How do I install Juju?
The best way to get your hands on this release of Juju is to install it as a snap:
sudo snap install juju --channel=stable --classic
Other packages are available for a variety of platforms, including Windows and macOS. Refer to the full Juju installation documentation.
Those subscribed to a snap channel should be automatically upgraded. If you’re using the PPA or Homebrew for macOS, you should see an upgrade available.
How do I upgrade?
Upgrading Juju is a multi-step process:
- upgrade the client (the Juju binary that you interact with on the command-line)
- (recommended) verify that the client upgrade was successful
- (recommended) back up the controller
- upgrade the controller
- upgrade models
Upgrade the client
If you’ve installed Juju via the snap, your Juju will automatically be updated overnight. To force the update, run:
snap refresh juju --channel=stable
Check version
To prevent any confusion in later steps, ensure that you’re using the new Juju version.
juju version
What to do if Juju reports an old version
Check your $PATH environment variable and that you’ve installed the new version correctly.
On Linux and macOS, use the env command to inspect the environment. On MS Windows, use SET.
Backing up
The controller state can be saved to allow for recovery in case the upgrade process encounters a serious error. The juju create-backup command operates within the controller model and will download the controller’s state to the local machine.
juju switch controller
juju create-backup
Upgrading controllers
Juju includes a command that instructs the controller to upgrade itself to the latest version.
juju upgrade-controller
How to upgrade the controller
First, check that you’re interacting with the intended controller. The juju models command reports the current controller on the first line.
juju models
To verify what will occur, use the --dry-run option.
juju upgrade-controller --dry-run
When you’re happy, omit the flag:
juju upgrade-controller
Upgrade models
Your models remain at the same version until you have explicitly told Juju to upgrade them.
juju upgrade-model -m <model-name>
How to upgrade your model(s)
Check that you’re operating in the correct model. juju models highlights the current model with an asterisk:
juju models
To verify what will occur, use the --dry-run option.
juju upgrade-juju --dry-run
When you’re happy, omit the flag:
juju upgrade-juju
Further instructions
Juju’s documentation pages provide detailed information about the upgrade process:
Feedback Appreciated!
We encourage everyone to let us know how you’re using Juju. Please ask questions on Discourse, send a message to our Twitter account @juju_devops, tweet using the hashtag #juju_devops, and chat with us at #juju on Freenode via IRC or Matrix.