The Juju team is proud to release Juju 2.6.1!
This release brings together several features and major stability improvements for large deployments.
Kubernetes Support
Microk8s built-in
Just as for LXD, microk8s is now a built-in cloud for Juju. So long as you have installed the microk8s snap, there’s now a “microk8s” cloud available to use out of the box with no additional configuration or set up needed. Previously, you would have needed to pipe the output of microk8s.config into juju add-k8s.
By default, microk8s does not enable storage or DNS, so you’ll need to do this yourself:
microk8s.enable storage dns
Juju will warn you if you bootstrap and these features are not enabled.
$ juju clouds --local
Cloud      Regions  Default       Type   Description
aws        15       us-east-1     ec2    Amazon Web Services
azure      27       centralus     azure  Microsoft Azure
google     18       us-east1      gce    Google Cloud Platform
oracle     4        us-phoenix-1  oci    Oracle Cloud Infrastructure
localhost  1        localhost     lxd    LXD Container Hypervisor
microk8s   1        localhost     k8s    A Kubernetes Cluster
Support for GKE and AKS
If you want to bootstrap or deploy workloads to GKE or AKS clusters, these are now fully supported by the juju add-k8s command. You need to have installed the respective CLI tools for the cluster you want to use, as Juju needs them to extract the config and credential information.
For GKE, you’ll need to install gcloud.
sudo snap install google-cloud-sdk --classic
For AKS, you’ll need to install az.
sudo apt install az
After using the relevant CLI tool to log in to your account, and creating a cluster using either the CLI or web interface, you can then run:
juju add-k8s --gke mygke
or
juju add-k8s --aks myaks
You will be prompted for the necessary information such as project, region, account etc. Or you can specify any required details to avoid some of the interactive prompts, eg
juju add-k8s --gke --credential=myaccount --project=myproject mygke
See juju help add-k8s for more examples.
Bootstrap
You can now bootstrap directly to a k8s cluster. The cluster needs to be registered as a named cloud in Juju by first running the add-k8s command (more on this below). You can then bootstrap as you normally would on any other cloud. The one major difference is that there’s no “default” model created; you need to run add-model yourself to create a model to host your workloads.
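For example, a minimal sketch using the built-in microk8s cloud (the controller and model names here are purely illustrative):
juju bootstrap microk8s micro
juju add-model myworkloads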
Bootstrap is supported on:
- microk8s
- Azure Kubernetes Service (AKS)
- Google Kubernetes Engine (GKE)
- CDK (with the integrator charm) deployed to Amazon AWS, Microsoft Azure, Google GCE
- CDK or kubernetes-core deployed to LXD
The controller pods are deployed to a k8s namespace called controller-<ctrlname>.
Each model is deployed to its own namespace named after the model.
The OCI image for the controller operator, and any workload operators, is fetched from Docker Hub under the jujusolutions namespace. For testing purposes, or if you’re behind a firewall, it’s possible to configure bootstrap to pull the image from elsewhere, eg
juju bootstrap microk8s mycontroller --config caas-image-repo=myrepopath
where “myrepopath” is either a namespace on Docker Hub or a docker registry path to an image. If a namespace is specified, Juju will look for an operator image at <namespace>/jujud-operator.
Upgrades
Upgrades are supported for both the controller itself and any hosted models. The standard Juju upgrade command has been tweaked to make the process more obvious.
To upgrade the controller, run:
juju upgrade-controller
To upgrade a model, run:
juju upgrade-model -m <model>
The usual arguments such as --dry-run and --agent-version etc work as before.
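For example, to preview an upgrade without applying it (a sketch; the model name is illustrative):
juju upgrade-model -m mymodel --dry-run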
Using add-k8s
The add-k8s command is used to register a cluster and make it available to Juju. This can either be done with --local, which adds the cluster to the local cache so it can be used for bootstrap, or against a running controller, which makes the cluster available to that controller.
When adding a cluster, Juju needs to know the cloud type and region of the underlying cloud hosting the cluster. This is so that suitable storage can be set up. Juju will attempt to sniff the cloud details by querying the cluster, but this is not always possible. For clusters deployed using CDK, you may be asked to supply this information using the --region argument to add-k8s, eg
juju add-k8s --region ec2/us-east-1 mycluster
You need to specify the cloud type (ec2, gce, azure) not the name you may have given to the cloud.
Once Juju knows the type of cloud the cluster is running on, it will then look for a storage class to ensure suitable, opinionated storage is available for the application operators and workloads. There are three scenarios:
- a cluster default storage class exists and uses the opinionated storage type (eg ebs on ec2)
- a cluster default storage class exists and uses a different, non-recommended storage type
- no cluster default storage class exists
In the latter two cases, you’ll need to use the --storage argument to add-k8s to tell Juju which storage the cluster being added should use; in the last case (no default storage class), Juju will also create a storage class with the specified name, configured to use the opinionated storage for that cluster.
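For example (a hedged sketch; the cluster and storage class names are hypothetical), you could register a CDK-on-AWS cluster locally and nominate its storage in one step:
juju add-k8s --local --region ec2/us-east-1 --storage mystorageclass mycluster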
Storage
Storage support has been much improved in Juju 2.6. As noted above, Juju will detect (or require user input to specify) a suitable storage class at the time a cluster is added to Juju. This information is stored with the cluster definition in Juju and set as model default config options for operator-storage and workload-storage. Being model config options, these can be set to something different for any given model, and you can also set up Juju storage pools for more control over how storage is provisioned for any given application.
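For instance, you can inspect what was recorded for the current model (actual values will depend on your cluster):
juju model-config operator-storage
juju model-config workload-storage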
Given that opinionated storage is configured out of the box, you can now simply deploy a workload which requires storage with no additional storage directives if you just want the defaults to be used:
juju deploy cs:~juju/mariadb-k8s
If you add a model and want all workloads in that model to use a different storage class, you can override the workload-storage config for the entire model:
juju model-config workload-storage=mystorageclass
Storage pools
If you want more control over how storage is provisioned, you can make use of Juju storage pools. A storage pool can be set up to:
- define an existing, cluster wide k8s storage class
- create a new model-scoped storage class using a given provisioner and config attributes
Storage pools have a provider type which could be specified but which defaults to “kubernetes”. Other provider types are “rootfs” and “tmpfs” (see below). The examples below use the “kubernetes” provider type.
For example, we can make a storage pool to provision volumes from a specific, pre-existing storage class:
juju create-storage-pool mypool storage-class=anothersc
Then you can deploy an application using that pool:
juju deploy cs:~juju/mariadb-k8s --storage database=mypool
We can also ask Juju to create a model-scoped storage class which does not exist ahead of time but which will be configured to use the specified provisioner:
juju create-storage-pool mypool storage-class=bespoke storage-provisioner=microk8s.io/hostpath foo=bar
juju deploy cs:~juju/mariadb-k8s --storage database=mypool
In the above scenario, Juju will create a k8s storage class called <modelname>-bespoke configured like so:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: <modelname>-bespoke
provisioner: microk8s.io/hostpath
parameters:
  foo: bar
reclaimPolicy: Retain
volumeBindingMode: Immediate
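If you have kubectl access to the cluster, you can check that the class was created (this relies on your local tooling, not a Juju command):
kubectl get storageclass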
rootfs and tmpfs storage
rootfs and tmpfs storage types are used in vm models to map storage to the root disk of the workload machine, or to an in-memory filesystem. The semantics are similar in k8s models, and both of the above map to emptyDir storage.
When tmpfs is used, the emptyDir medium is set to “Memory”.
For example, to use a 2GB in-memory storage allocation:
juju deploy cs:~juju/mariadb-k8s --storage database=tmpfs,2G
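A similar, hypothetical example using rootfs storage backed by the workload pod’s root disk:
juju deploy cs:~juju/mariadb-k8s --storage database=rootfs,1G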
Additional charm pod configuration capabilities
Charms can now specify many additional k8s pod attributes, including:
- init containers
- security context
- service annotations
- custom resource definitions
The easiest way to describe what’s supported is by looking at an example of a charm pod spec. The example is representative but not complete (eg not all possible security context attributes are shown).
activeDeadlineSeconds: 10
serviceAccountName: serviceAccount
restartPolicy: OnFailure
terminationGracePeriodSeconds: 20
automountServiceAccountToken: true
securityContext:
  runAsNonRoot: true
  supplementalGroups: [1, 2]
hostname: host
subdomain: sub
priorityClassName: top
priority: 30
dnsPolicy: ClusterFirstWithHostNet
dnsConfig:
  nameservers: [ns1, ns2]
readinessGates:
  - conditionType: PodScheduled
containers:
  - name: gitlab
    image: gitlab/latest
    imagePullPolicy: Always
    command: ["sh", "-c"]
    args: ["doIt", "--debug"]
    workingDir: "/path/to/here"
    ports:
      - containerPort: 80
        name: fred
        protocol: TCP
      - containerPort: 443
        name: mary
    securityContext:
      runAsNonRoot: true
      privileged: true
    livenessProbe:
      initialDelaySeconds: 10
      httpGet:
        path: /ping
        port: 8080
    readinessProbe:
      initialDelaySeconds: 10
      httpGet:
        path: /pingReady
        port: www
    config:
      attr: foo=bar; name['fred']='blogs';
      foo: bar
      restricted: 'yes'
      switch: on
    files:
      - name: configuration
        mountPath: /var/lib/foo
        files:
          file1: |
            [config]
            foo: bar
  - name: gitlab-helper
    image: gitlab-helper/latest
    ports:
      - containerPort: 8080
        protocol: TCP
  - name: secret-image-user
    imageDetails:
      imagePath: staging.registry.org/testing/testing-image@sha256:deed-beef
      username: docker-registry
      password: hunter2
  - name: just-image-details
    imageDetails:
      imagePath: testing/no-secrets-needed@sha256:deed-beef
initContainers:
  - name: gitlab-init
    image: gitlab-init/latest
    imagePullPolicy: Always
    command: ["sh", "-c"]
    args: ["doIt", "--debug"]
    workingDir: "/path/to/here"
    ports:
      - containerPort: 80
        name: fred
        protocol: TCP
      - containerPort: 443
        name: mary
    config:
      foo: bar
      restricted: 'yes'
      switch: on
service:
  annotations:
    foo: bar
customResourceDefinitions:
  tfjobs.kubeflow.org:
    group: kubeflow.org
    version: v1alpha2
    scope: Namespaced
    names:
      plural: "tfjobs"
      singular: "tfjob"
      kind: TFJob
    validation:
      openAPIV3Schema:
        properties:
          tfReplicaSpecs:
            properties:
              Worker:
                properties:
                  replicas:
                    type: integer
                    minimum: 1
              PS:
                properties:
                  replicas:
                    type: integer
                    minimum: 1
              Chief:
                properties:
                  replicas:
                    type: integer
                    minimum: 1
                    maximum: 1
Charms can specify deployment type
k8s charms now have the ability to specify the type of deployment: stateful or stateless:
Setting | Description
---|---
stateful | use a k8s StatefulSet to manage the pods
stateless | use a k8s DeploymentController to manage the pods
Previously, charms with storage used a StatefulSet because they needed a stable network identity, whereas other charms were always deployed using a DeploymentController. However, this can prove problematic where the charm (or the reactive layers it imports) does not rely on application data via leader settings to store state, since pod restarts result in a new Juju unit and the removal of the old one, losing any unit-local state.
As well as deployment type, charms can also specify service type (this can be overridden at deploy time using the kubernetes-service-type config option). The default service type is cluster if not otherwise specified.
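For example, a sketch of overriding the service type at deploy time (the charm and value chosen here are illustrative):
juju deploy cs:~juju/mariadb-k8s --config kubernetes-service-type=loadbalancer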
There is new charm metadata that can be used within k8s charms:
deployment:
  type: stateless | stateful
  service: cluster | loadbalancer | external
The default deployment type is stateless. Charms which declare storage also need to say they want a stateful deployment, or else Juju will error early. A charm is free to say it wants a stateful deployment even without storage being required.
Charm OCI images
Charms declare what OCI images they need using resource metadata, and there’s a base reactive layer which handles the work of pulling the images as needed.
Charms in the charm store keep their OCI images in a docker repo hosted by the charm store itself. If you want to deploy a local charm, you need to specify a path for each image like so:
juju deploy /path/to/mariadb --resource mysql_image=mariadb:latest
The image resource value is any supported docker repo path URL.
The above works great for public images. For private images requiring a username and password, the resource value is a path to a file similar to:
registrypath: image-path
username: me
password: secret
juju deploy /path/to/mariadb --resource mysql_image=/path/to/imagedef.yaml
To upgrade an existing deployed charm, use upgrade-charm as normal. The charm will need to have been written to react properly to the upgrade hook and generate a new pod spec yaml to send. Juju filters out any yaml which is the same as what’s already deployed, so it’s safe for the charm to speculatively send a regenerated pod spec.
juju upgrade-charm /path/to/mariadb --resource mysql_image=mariadb:test
Miscellaneous Improvements
Integration with microk8s snap
If the Juju snap is installed and microk8s is already present, the jujud-operator and juju-db images will be cached in the microk8s repository so that subsequent bootstraps are much faster.
Better error surfacing
If there’s a syntax error in the pod spec YAML generated by the charm, or a semantic error due to an unsupported config value in the pod spec itself, Juju will reflect this in status as an error on the application.
Charms without pods
Some charms may require just the operator to be deployed and no workload pods. That use case is now catered for: juju status and status history now better reflect the state of the charm, and don’t incorrectly report “waiting for container”.
Scale to 0
We now allow an application’s scale to be set to 0 on the Juju side. Previously this was disallowed for reasons that are no longer relevant.
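For example, assuming an application named mariadb-k8s deployed in a k8s model, a sketch of scaling it to zero:
juju scale-application mariadb-k8s 0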
Multi Cloud Controllers
This release has initial, “early access” support for multi-cloud controllers. It’s an ongoing effort, and it will take time to get the user experience polished. You can, for example, bootstrap to MAAS, deploy OpenStack, add that OpenStack cloud to the same existing controller, and deploy workloads to that second cloud, all without having to waste a controller node in the OpenStack cloud.
To make use of the ability to add additional clouds to a controller you need to enable a feature flag:
juju controller-config features="[multi-cloud]"
Part of that effort is to fix the experience around how the various Juju CLI cloud commands operate.
The key change is that, like most other Juju commands, the cloud commands will operate (by default) on a running controller. So, just like add-model, these commands:
- list-clouds
- show-cloud
- add-cloud
- remove-cloud
- update-cloud
will use the current controller, or accept a -c or --controller argument to use a different one.
If you want to manage your local cloud metadata, you can use the above commands with the --local option.
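For example (the controller, cloud and file names here are hypothetical), you can target a specific controller or the local cache explicitly:
juju clouds -c mycontroller
juju add-cloud --local mycloud mycloud.yaml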
If you have no controllers running, you can use the juju list-clouds command directly to see what clouds are available for bootstrap. Or you can use juju add-cloud directly to add a new cloud and then bootstrap to that one.
Currently, interactive add-cloud is always local.
Mongo 4 from snap
Using the mongodb-snap feature flag, you can bootstrap a Juju controller and mongo will be installed as a snap, currently version 4.0.9. See snap info juju-db for details on the snap. This won’t yet work behind a firewall.
Deletion of stuck applications and units
This is a long requested feature which has finally landed.
If a unit ever got wedged, whether due to a hook error, an issue with storage, the agent being killed, or the machine being stopped, it was impossible to remove it from the Juju model. In turn, the application could not be removed and the model could not be destroyed.
There is now a --force option for the various remove commands, eg remove-application, remove-unit etc, as well as destroy-model. There’s also a --no-wait option. The way it works is that Juju will try to remove/destroy entities and allow for proper cleanup, but after a set time it will resort to forceful removal. If you don’t want Juju to wait because you know things are totally broken, you can use the --no-wait option.
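For example, a sketch of forcefully removing a stuck application without waiting for cleanup (the application name is illustrative):
juju remove-application myapp --force --no-wait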
Minor changes
vSphere Improvements
Improvements have been made to the vSphere provider.
- Constraints to specify root disk parameters, datastore and size, eg
  juju deploy myapp --constraints="root-disk-size=20G root-disk-source=mydatastore"
- Resource groups within a host or cluster can now be specified as an availability zone constraint, eg
  juju deploy myapp --constraints="zones=mycluster/mygroup"
  juju deploy myapp --constraints="zones=mycluster/myparent/mygroup"
- Better checks at deploy time that specified datastores exist and are valid.
Known Issues
If a model contains any manual machines, destroy-model will fail to complete. Any manual machines must first be removed using the remove-machine command. This issue will be fixed in the next point release.
This release also includes these important fixes:
- LP #1807960 - Juju vSphere cloud provider needs a constraint for datastore
- LP #1807961 - Juju vSphere cloud provider does not support resource pools or folders with vsphere 6.5/6.7
- LP #1817242 - juju bootstrap - openstack provider: failed to get list of flavour details
- LP #1827371 - Upgrade from 2.4.7 to 2.5.4 blocked on raft lease migration
Additional bugs fixed can be found on the milestone page.
Other news
Updated Website
The new user-facing home for Juju is jaas.ai. We hope that this shorter URL will make your day slightly easier.
New tutorials
The community has been busy adding new content for new users.
How do I get it?
The best way to get your hands on this release of Juju is to install it as a snap package:
sudo snap install juju --classic
Other packages are available for a variety of platforms. Please see the online documentation at https://docs.jujucharms.com/reference-install. Those subscribed to a snap channel should be automatically upgraded. If you’re using the PPA or Homebrew, you should see an upgrade available.
Feedback Appreciated!
We encourage everyone to let us know how you’re using Juju. Join us on Discourse at https://discourse.jujucharms.com/, send us a message on Twitter using the hashtag #jujucharms, and join us at #juju on freenode.