There are a lot of great new features coming to the Juju-k8s integration in Juju 2.6. Many have already appeared in earlier betas, and some have been backported to the later 2.5.x point releases. Here’s a summary of what’s new. Please see the existing documentation for the basics.
The biggest feature is that it’s now possible to bootstrap a controller directly to a k8s cluster - no external controller needed anymore. There are also additions to what a charm is able to specify in terms of the pod spec it needs for its workloads, lots of storage improvements, and a bunch of other enhancements and polish.
Microk8s built-in
Just as for LXD, microk8s is now a built-in cloud for Juju. So long as you have installed the microk8s snap, there’s now a “microk8s” cloud available to use out of the box with no additional configuration or setup needed. Previously you would have needed to run microk8s.config | juju add-k8s.
By default, microk8s does not enable storage or DNS, so you’ll need to do this yourself:
microk8s.enable storage dns
Juju will warn you if you bootstrap and these features are not enabled.
$ juju clouds --local
Cloud      Regions  Default       Type   Description
aws             15  us-east-1     ec2    Amazon Web Services
azure           27  centralus     azure  Microsoft Azure
google          18  us-east1      gce    Google Cloud Platform
oracle           4  us-phoenix-1  oci    Oracle Cloud Infrastructure
localhost        1  localhost     lxd    LXD Container Hypervisor
microk8s         1  localhost     k8s    A Kubernetes Cluster
Support for GKE and AKS
If you want to bootstrap or deploy workloads to GKE or AKS clusters, those are now fully supported by the juju add-k8s command. You need to have installed the respective CLI tool for the cluster you want to use, as Juju needs it to extract the config and credential information.
For GKE, you’ll need to install gcloud:
sudo snap install google-cloud-sdk --classic
For AKS, you’ll need to install the Azure CLI (az):
sudo apt install azure-cli
After using the relevant CLI tool to log in to your account, and creating a cluster using either the CLI or web interface, you can then run:
juju add-k8s --gke mygke
or
juju add-k8s --aks myaks
You will be prompted for the necessary information such as project, region, account etc. Or you can specify any required details to avoid some of the interactive prompts, eg
juju add-k8s --gke --credential=myaccount --project=myproject mygke
See juju help add-k8s for more examples.
Bootstrap
You can now bootstrap directly to a k8s cluster. The cluster needs to be registered as a named cloud in Juju by first running the add-k8s command (more on this below). You can then bootstrap as you normally would on any other cloud. The one major difference is that there’s no “default” model created; you need to run add-model yourself to create a model to host your workloads.
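For example, a minimal sequence against the built-in microk8s cloud might look like this (the controller and model names here are just placeholders):
juju bootstrap microk8s mk8s
juju add-model myapps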
Bootstrap is supported on:
- microk8s
- Azure Kubernetes Service (AKS)
- Google Kubernetes Engine (GKE)
- CDK (with the integrator charm) deployed to Amazon AWS, Microsoft Azure, Google GCE
- CDK or kubernetes-core deployed to LXD
The controller pods are deployed to a k8s namespace called controller-&lt;ctrlname&gt;.
Each model is deployed to its own namespace named after the model.
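As a quick sanity check you can list the namespaces with kubectl; assuming a controller called “mk8s” and a model called “myapps”, you’d expect to see controller-mk8s and myapps namespaces alongside the standard k8s ones:
kubectl get namespaces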
The OCI image for the controller operator, and any workload operators, is fetched from dockerhub under the jujusolutions namespace. For testing purposes, or if you’re behind a firewall, it’s possible to configure bootstrap to pull the image from elsewhere, eg
juju bootstrap microk8s mycontroller --config caas-image-repo=myrepopath
where “myrepopath” is either a namespace on docker hub or a docker registry path to an image. If a namespace is specified, Juju will look for an operator image at mynamespace/jujud-operator.
Upgrades
Upgrades are supported for both the controller itself and any hosted models. The standard Juju upgrade command has been tweaked to make the process more obvious.
To upgrade the controller, run:
juju upgrade-controller
To upgrade a model, run:
juju upgrade-model -m <model>
The usual arguments such as --dry-run and --agent-version etc work as before.
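For example, to preview an upgrade of a hosted model without applying it (the model name is illustrative):
juju upgrade-model -m myapps --dry-run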
Using add-k8s
The add-k8s command is used to register a cluster to make it available to Juju. This can be done either with --local, which adds the cluster to the local client cache so you can bootstrap onto it, or without, which adds the cluster to a running controller.
When adding a cluster, Juju needs to know the cloud type and region of the underlying cloud hosting the cluster. This is so that suitable storage can be set up. Juju will attempt to sniff the cloud details by querying the cluster, but this is not always possible. For clusters deployed using CDK, you may be asked to supply this information using the --region argument to add-k8s, eg
juju add-k8s --region ec2/us-east-1 mycluster
You need to specify the cloud type (ec2, gce, azure), not the name you may have given to the cloud.
Once Juju knows the type of cloud the cluster is running on, it will then look for a storage class to ensure suitable, opinionated storage is available for the application operators and workloads. There are three scenarios:
- a cluster default storage class exists and uses the opinionated storage type (eg ebs on ec2)
- a cluster default storage class exists and uses a different, non-recommended storage type
- no cluster default storage class exists
In the last two cases, you’ll need to use the --storage argument to add-k8s to tell Juju which storage class the cluster being added should use; and in the final case (no default storage class), Juju will create a storage class with the specified name, configured to use the opinionated storage for that cluster.
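Putting that together, adding a hypothetical CDK cluster on AWS which lacks a suitable default storage class might look like this (all names illustrative):
juju add-k8s --local --region ec2/us-east-1 --storage mystorageclass mycdkcluster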
Storage
Storage support has been much improved in Juju 2.6. As noted above, Juju will detect (or require user input to specify) a suitable storage class at the time a cluster is added to Juju. This information is stored with the cluster definition in Juju and set as model default config options for operator-storage and workload-storage. Being model config options, these can be set to something different for any given model, and you can also set up Juju storage pools for more control over how storage is provisioned for any given application.
Given that opinionated storage is configured out of the box, you can now simply deploy a workload which requires storage with no additional storage directives if you just want the defaults to be used:
juju deploy cs:~juju/mariadb-k8s
If you add a model and want all workloads in that model to use a different storage class, you can override the workload-storage config for the entire model:
juju model-config workload-storage=mystorageclass
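Since these are just model config options, you can also set them when creating the model, eg (class name illustrative):
juju add-model mymodel --config workload-storage=mystorageclass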
Storage pools
If you want more control over how storage is provisioned, you can make use of Juju storage pools. A storage pool can be set up to:
- define an existing, cluster wide k8s storage class
- create a new model-scoped storage class using a given provisioner and config attributes
Storage pools have a provider type which could be specified but which defaults to “kubernetes”. Other provider types are “rootfs” and “tmpfs” (see below). The examples below use the “kubernetes” provider type.
For example, we can make a storage pool to provision volumes from a specific, pre-existing storage class:
juju create-storage-pool mypool storage-class=anothersc
Then you can deploy an application using that pool:
juju deploy cs:~juju/mariadb-k8s --storage database=mypool
We can also ask Juju to create a model-scoped storage class which does not exist ahead of time but which will be configured to use the specified provisioner:
juju create-storage-pool mypool storage-class=bespoke storage-provisioner=microk8s.io/hostpath foo=bar
juju deploy cs:~juju/mariadb-k8s --storage database=mypool
In the above scenario, Juju will create a k8s storage class called &lt;modelname&gt;-bespoke configured like so:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: <modelname>-bespoke
provisioner: microk8s.io/hostpath
parameters:
  foo: bar
reclaimPolicy: Retain
volumeBindingMode: Immediate
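The resulting storage class is an ordinary k8s object, so if you want to check what was created you can inspect it with kubectl (substituting your model name):
kubectl get storageclass <modelname>-bespoke -o yaml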
rootfs and tmpfs storage
rootfs and tmpfs storage types are used in vm models to map storage to the root disk of the workload machine, or to an in-memory filesystem. The semantics are similar in k8s models, and both of the above map to emptyDir storage.
When tmpfs is used, the emptyDir medium is set to “Memory”.
For example, to use a 2GB in-memory storage allocation:
juju deploy cs:~juju/mariadb-k8s --storage database=tmpfs,2G
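rootfs storage is requested the same way, eg for a 1GB allocation:
juju deploy cs:~juju/mariadb-k8s --storage database=rootfs,1G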
Additional charm pod configuration capabilities
Charms can now specify many additional k8s pod attributes, including:
- init containers
- security context
- service annotations
- custom resource definitions
The easiest way to describe what’s supported is by looking at an example of a charm pod spec. The example is representative but not complete (eg not all possible security context attributes are shown).
activeDeadlineSeconds: 10
serviceAccountName: serviceAccount
restartPolicy: OnFailure
terminationGracePeriodSeconds: 20
automountServiceAccountToken: true
securityContext:
  runAsNonRoot: true
  supplementalGroups: [1, 2]
hostname: host
subdomain: sub
priorityClassName: top
priority: 30
dnsPolicy: ClusterFirstWithHostNet
dnsConfig:
  nameservers: [ns1, ns2]
readinessGates:
  - conditionType: PodScheduled
containers:
  - name: gitlab
    image: gitlab/latest
    imagePullPolicy: Always
    command: ["sh", "-c"]
    args: ["doIt", "--debug"]
    workingDir: "/path/to/here"
    ports:
      - containerPort: 80
        name: fred
        protocol: TCP
      - containerPort: 443
        name: mary
    securityContext:
      runAsNonRoot: true
      privileged: true
    livenessProbe:
      initialDelaySeconds: 10
      httpGet:
        path: /ping
        port: 8080
    readinessProbe:
      initialDelaySeconds: 10
      httpGet:
        path: /pingReady
        port: www
    config:
      attr: foo=bar; name['fred']='blogs';
      foo: bar
      restricted: 'yes'
      switch: on
    files:
      - name: configuration
        mountPath: /var/lib/foo
        files:
          file1: |
            [config]
            foo: bar
  - name: gitlab-helper
    image: gitlab-helper/latest
    ports:
      - containerPort: 8080
        protocol: TCP
  - name: secret-image-user
    imageDetails:
      imagePath: staging.registry.org/testing/testing-image@sha256:deed-beef
      username: docker-registry
      password: hunter2
  - name: just-image-details
    imageDetails:
      imagePath: testing/no-secrets-needed@sha256:deed-beef
initContainers:
  - name: gitlab-init
    image: gitlab-init/latest
    imagePullPolicy: Always
    command: ["sh", "-c"]
    args: ["doIt", "--debug"]
    workingDir: "/path/to/here"
    ports:
      - containerPort: 80
        name: fred
        protocol: TCP
      - containerPort: 443
        name: mary
    config:
      foo: bar
      restricted: 'yes'
      switch: on
service:
  annotations:
    foo: bar
customResourceDefinitions:
  tfjobs.kubeflow.org:
    group: kubeflow.org
    version: v1alpha2
    scope: Namespaced
    names:
      plural: "tfjobs"
      singular: "tfjob"
      kind: TFJob
    validation:
      openAPIV3Schema:
        properties:
          tfReplicaSpecs:
            properties:
              Worker:
                properties:
                  replicas:
                    type: integer
                    minimum: 1
              PS:
                properties:
                  replicas:
                    type: integer
                    minimum: 1
              Chief:
                properties:
                  replicas:
                    type: integer
                    minimum: 1
                    maximum: 1
Charms can specify deployment type
k8s charms now have the ability to specify the type of deployment: stateful or stateless:
Setting | Description
--- | ---
stateful | use a k8s StatefulSet to manage the pods
stateless | use a k8s DeploymentController to manage the pods
Previously, charms with storage used a StatefulSet because they needed a stable network identity, whereas other charms were always deployed using a DeploymentController. However, this proved problematic where the charm (or the reactive layers it imports) does not store its state in application data via leader settings, since a pod restart results in a new Juju unit and the removal of the old one, taking any unit-local state with it.
As well as deployment type, charms can also specify service type (this can be overridden at deploy time using the kubernetes-service-type config option). The default service type is cluster if not otherwise specified.
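As a sketch, overriding the service type at deploy time via that config option might look like this (reusing the mariadb-k8s charm from earlier; the exact value string is illustrative):
juju deploy cs:~juju/mariadb-k8s --config kubernetes-service-type=loadbalancer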
There’s new charm metadata that can be used within k8s charms:
deployment:
  type: stateless | stateful
  service: cluster | loadbalancer | external
The default deployment type is stateless. Charms which declare storage also need to say they want a stateful deployment, otherwise Juju will error out early. A charm is free to ask for a stateful deployment even without requiring storage.
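As a sketch, the relevant metadata.yaml snippet for a charm which declares storage, and therefore asks for a stateful deployment behind a load balancer, might look like:
# illustrative excerpt from a charm's metadata.yaml
deployment:
  type: stateful
  service: loadbalancer
storage:
  database:
    type: filesystem
    location: /var/lib/mysql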
Charm OCI images
Charms declare what OCI images they need using resource metadata, and there’s a base reactive layer which handles the work of pulling the images as needed.
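For reference, the resource declaration in a charm’s metadata.yaml looks something like this (the resource name here matches the deploy example below):
# illustrative excerpt from a charm's metadata.yaml
resources:
  mysql_image:
    type: oci-image
    description: Image used for the mariadb pod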
Charm store charms store their OCI images in a docker repo hosted by the charm store itself. If you want to deploy a local charm, you need to specify a path for each image like so:
juju deploy /path/to/mariadb --resource mysql_image=mariadb:latest
The image resource value is any supported docker repo path URL.
The above works great for public images. For private images requiring a username and password, the resource value is a path to a file similar to:
registrypath: image-path
username: me
password: secret
juju deploy /path/to/mariadb --resource mysql_image=/path/to/imagedef.yaml
To upgrade an existing deployed charm, use the upgrade-charm command as normal. The charm will need to have been written to react properly to the upgrade hook and generate a new pod spec YAML to send. Juju filters out any YAML which is the same as what’s already deployed, so it’s safe for the charm to speculatively send a regenerated pod spec.
juju upgrade-charm /path/to/mariadb --resource mysql_image=mariadb:test
Miscellaneous Improvements
Integration with microk8s snap
If the Juju snap is installed and microk8s is already present, the jujud-operator and juju-db images will be cached in the microk8s repository so that subsequent bootstrap is much faster.
Better error surfacing
If there’s a syntax error in the pod spec YAML generated by the charm, or a semantic error due to an unsupported config value in the pod spec itself, Juju will reflect this in status as an error on the application.
Charms without pods
Some charms may require just the operator to be deployed and no workload pods. That use case is now catered for in that juju status and status history now better reflect the state of the charm, and don’t incorrectly report “waiting for container”.
Scale to 0
We now allow an application’s scale to be set to 0 on the Juju side. Previously this was disallowed for reasons that are no longer relevant.
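For example, assuming a deployed application called mariadb-k8s:
juju scale-application mariadb-k8s 0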