Available in the 2.8 edge snap…
New extended volume support
It’s now possible to configure volumes backed by:
- config map
- secret
- host path
- empty dir
To do this, you’ll need to mark your YAML as version 3. This new version 3 also:
- renames the `config` block to `envConfig` (to better reflect its purpose)
- renames the `files` block to `volumeConfig`
- allows a file mode to be specified (demonstrated in the secret and config map entries of the full example below)
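Put together, the top level of a v3 spec looks roughly like this (a minimal sketch limited to the two renamed blocks and mirroring the layout of the full example below; `MY_SETTING`, `settings` and the file contents are placeholders):

```yaml
version: 3
# was "config" in v2
envConfig:
  MY_SETTING: some-value
# was "files" in v2
volumeConfig:
  - name: settings
    mountPath: /etc/app
    files:
      - path: app.conf
        content: |
          key = value
```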
With secret and config map volumes, the secret or config map must be defined elsewhere in the YAML handed to Juju - you can’t reference existing resources that were not created by the charm. If you leave out the `files` block, the entire secret or config map is mounted. `path` is optional - if not specified, the file is created with the same name as `key`. The `path` for each file is resolved relative to the overall mount point.
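For instance, a minimal sketch that mounts an entire secret by omitting the `files` block (the volume name `robot-secret` is a placeholder, and the secret `build-robot-secret` is assumed to be declared elsewhere in the same spec):

```yaml
volumeConfig:
  - name: robot-secret
    mountPath: /opt/robot-secret
    secret:
      name: build-robot-secret  # every key in the secret becomes a file under the mount path
```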
Here’s an example of what’s possible when creating the new volume types.
```yaml
version: 3
...
# renamed from config
envConfig:
  MYSQL_ROOT_PASSWORD: %(root_password)s
  MYSQL_USER: %(user)s
  MYSQL_PASSWORD: %(password)s
  MYSQL_DATABASE: %(database)s
  MY_NODE_NAME:
    field:
      path: spec.nodeName
      api-version: v1
  build-robot-secret:
    secret:
      name: build-robot-secret
      key: config.yaml
# Here's where the new volume types are set up
# This block was called "files" in v2
volumeConfig:
  # This is what was supported previously (simple text files)
  - name: configurations
    mountPath: /etc/mysql/conf.d
    files:
      - path: custom_mysql.cnf
        content: |
          [mysqld]
          skip-host-cache
          skip-name-resolve
          query_cache_limit = 1M
          query_cache_size = %(query-cache-size)s
          query_cache_type = %(query-cache-type)s
  # host path
  - name: myhostpath1
    mountPath: /var/log1
    hostPath:
      path: /var/log
      type: Directory
  - name: myhostpath2
    mountPath: /var/log2
    hostPath:
      path: /var/log
      # see https://kubernetes.io/docs/concepts/storage/volumes/#hostpath for other types
      type: Directory
  # empty dir
  - name: cache-volume
    mountPath: /empty-dir
    emptyDir:
      medium: Memory # defaults to disk
  - name: cache-volume222
    mountPath: /empty-dir222
    emptyDir:
      medium: Memory
  - name: cache-volume
    mountPath: /empty-dir1
    emptyDir:
      medium: Memory
  # secret
  - name: another-build-robot-secret
    mountPath: /opt/another-build-robot-secret
    secret:
      name: another-build-robot-secret
      defaultMode: 511
      files:
        - key: username
          path: my-group/username
          mode: 511
        - key: password
          path: my-group/password
          mode: 511
  # config map
  - configMap:
      name: log-config
      defaultMode: 511
      files:
        - key: log_level
          path: log_level
          mode: 511
```
The lifecycle of CRDs
This release introduces a lifecycle for CRDs: charmers can now decide when a CRD gets deleted by setting the appropriate label:

```json
{
  "juju-resource-lifecycle": "model | persistent"
}
```
- If no `juju-resource-lifecycle` label is set, the CRD is deleted together with the application.
- If `juju-resource-lifecycle` is set to `model`, the CRD is not deleted when the application is removed; it is only deleted when the model is destroyed.
- If `juju-resource-lifecycle` is set to `persistent`, the CRD is never deleted by Juju, even after the model is gone.
Deploy a charm with the spec below:
```yaml
version: 3
kubernetesResources:
  customResourceDefinitions:
    - name: tfjobs.kubeflow.org
      labels:
        foo: bar # deleted with the app
      spec:
        ...
    - name: tfjob1s.kubeflow.org1
      labels:
        foo: bar
        juju-resource-lifecycle: model # deleted with the model
      spec:
        ...
    - name: tfjob2s.kubeflow.org2
      labels:
        foo: bar
        juju-resource-lifecycle: persistent # never gets deleted
      spec:
        ...
```
```
$ juju deploy /tmp/charm-builds/mariadb-k8s/ --debug --resource mysql_image=mariadb -n1
$ mkubectl get crds -o json | jq '.items[] | .metadata | [.name,.labels]'
[
  "tfjob1s.kubeflow.org1",
  {
    "foo": "bar",
    "juju-app": "mariadb-k8s",
    "juju-resource-lifecycle": "model",
    "juju-model": "t1"
  }
]
[
  "tfjob2s.kubeflow.org2",
  {
    "foo": "bar",
    "juju-app": "mariadb-k8s",
    "juju-resource-lifecycle": "persistent",
    "juju-model": "t1"
  }
]
[
  "tfjobs.kubeflow.org",
  {
    "foo": "bar",
    "juju-app": "mariadb-k8s",
    "juju-model": "t1"
  }
]

$ juju remove-application mariadb-k8s -m k1:t1 --destroy-storage --force
removing application mariadb-k8s
- will remove storage database/0
$ mkubectl get crds -o json | jq '.items[] | .metadata | [.name,.labels]'
[
  "tfjob1s.kubeflow.org1",
  {
    "foo": "bar",
    "juju-app": "mariadb-k8s",
    "juju-resource-lifecycle": "model",
    "juju-model": "t1"
  }
]
[
  "tfjob2s.kubeflow.org2",
  {
    "foo": "bar",
    "juju-app": "mariadb-k8s",
    "juju-resource-lifecycle": "persistent",
    "juju-model": "t1"
  }
]

$ juju destroy-model t1 --destroy-storage -y --debug --force
$ mkubectl get crds -o json | jq '.items[] | .metadata | [.name,.labels]'
[
  "tfjob2s.kubeflow.org2",
  {
    "foo": "bar",
    "juju-app": "mariadb-k8s",
    "juju-resource-lifecycle": "persistent",
    "juju-model": "t1"
  }
]
```
The lifecycle of CRs
```
$ juju deploy /tmp/charm-builds/mariadb-k8s/ --debug --resource mysql_image=mariadb
$ mkubectl get crds tfjob1s.kubeflow.org1 -o json | jq ' .metadata | {name: .name,"juju-resource-lifecycle": (.labels | ."juju-resource-lifecycle")}'
{
  "name": "tfjob1s.kubeflow.org1",
  "juju-resource-lifecycle": "persistent"
}
$ mkubectl get tfjob1s.kubeflow.org1 -o json | jq '.items[] | .metadata | {name: .name,"juju-resource-lifecycle":(.labels | ."juju-resource-lifecycle")}'
{
  "name": "dist-mnist-for-e2e-test11",
  "juju-resource-lifecycle": null
}
{
  "name": "dist-mnist-for-e2e-test12",
  "juju-resource-lifecycle": "model"
}
{
  "name": "dist-mnist-for-e2e-test13",
  "juju-resource-lifecycle": "persistent"
}

$ juju remove-application mariadb-k8s -m k1:t1 --destroy-storage --force
removing application mariadb-k8s
- will remove storage database/0
$ mkubectl get tfjob1s.kubeflow.org1 -o json | jq '.items[] | .metadata | {name: .name,"juju-resource-lifecycle":(.labels | ."juju-resource-lifecycle")}'
{
  "name": "dist-mnist-for-e2e-test12",
  "juju-resource-lifecycle": "model"
}
{
  "name": "dist-mnist-for-e2e-test13",
  "juju-resource-lifecycle": "persistent"
}

$ juju destroy-model t1 --destroy-storage -y --debug --force
$ mkubectl get tfjob1s.kubeflow.org1 -o json | jq '.items[] | .metadata | {name: .name,"juju-resource-lifecycle":(.labels | ."juju-resource-lifecycle")}'
{
  "name": "dist-mnist-for-e2e-test13",
  "juju-resource-lifecycle": "persistent"
}
```
Webhook names can now be fixed
- The webhooks section changed from a `map` to a `slice`.
- By default, Juju prefixes the names of global webhook resources with the namespace. Charmers can now keep a name as-is by specifying an annotation like the following (the sketch after this list shows it in context):

```json
{
  "juju.io/disable-name-prefix": "true"
}
```
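In the charm’s v3 YAML, the webhook configurations are therefore written as a slice of named entries. A minimal sketch, showing only the `name` and `annotations` fields that are exercised in the output below (the rest of each entry is the usual webhook definition):

```yaml
kubernetesResources:
  mutatingWebhookConfigurations:
    - name: mutatingwebhook-will-keep
      annotations:
        juju.io/disable-name-prefix: "true"  # name is used as-is
      # ... webhook definition ...
    - name: mutatingwebhook-will-change
      # no annotation, so Juju prefixes the name with the namespace
      # ... webhook definition ...
```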
```
$ yml2json /tmp/charm-builds/mariadb-k8s/reactive/k8s_resources.yaml --pretty | jq '.kubernetesResources | .mutatingWebhookConfigurations[],.validatingWebhookConfigurations[] | {name: .name, annotations: .annotations}'
{
  "name": "mutatingwebhook-will-change",
  "annotations": null
}
{
  "name": "mutatingwebhook-will-keep",
  "annotations": {
    "juju.io/disable-name-prefix": "true"
  }
}
{
  "name": "validatingwebhook-will-keep",
  "annotations": {
    "juju.io/disable-name-prefix": "true"
  }
}
{
  "name": "validatingwebhook-will-change",
  "annotations": null
}
$ mkubectl get mutatingWebhookConfigurations,validatingWebhookConfigurations -n t1 -o json | jq '.items[].metadata | {name: .name, annotations: .annotations}'
{
  "name": "mutatingwebhook-will-keep",
  "annotations": {
    "juju.io/controller": "f8917560-4288-46b7-87e4-56fce849bf6b",
    "juju.io/disable-name-prefix": "true",
    "juju.io/model": "a271b010-7f50-4254-8b1d-eda1f0c62081"
  }
}
{
  "name": "t1-mutatingwebhook-will-change",
  "annotations": {
    "juju.io/controller": "f8917560-4288-46b7-87e4-56fce849bf6b",
    "juju.io/model": "a271b010-7f50-4254-8b1d-eda1f0c62081"
  }
}
{
  "name": "t1-validatingwebhook-will-change",
  "annotations": {
    "juju.io/controller": "f8917560-4288-46b7-87e4-56fce849bf6b",
    "juju.io/model": "a271b010-7f50-4254-8b1d-eda1f0c62081"
  }
}
{
  "name": "validatingwebhook-will-keep",
  "annotations": {
    "juju.io/controller": "f8917560-4288-46b7-87e4-56fce849bf6b",
    "juju.io/disable-name-prefix": "true",
    "juju.io/model": "a271b010-7f50-4254-8b1d-eda1f0c62081"
  }
}
```
Update strategy support
The update strategy can now be defined in the `service` section of the pod spec; a sketch follows the Kubernetes reference links below.
The detailed Kubernetes behaviour for each workload type is documented at:
- https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
- https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
- https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/
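A minimal sketch of the corresponding pod-spec fragment (the layout mirrors the `pod-spec-get` output shown in the examples below; the `partition` value is only an illustration):

```yaml
version: 3
service:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 10  # stateful apps; stateless and daemon apps use maxUnavailable instead
```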
stateful app:
```
$ yml2json /tmp/charm-builds/mariadb-k8s/metadata.yaml | jq .deployment
{
  "type": "stateful",
  "min-version": "1.10.1",
  "service": "omit"
}
$ juju run --unit mariadb-k8s/0 pod-spec-get | yml2json | jq .service
{
  "updateStrategy": {
    "rollingUpdate": {
      "partition": 10
    },
    "type": "RollingUpdate"
  }
}
```
stateless app:
```
$ yml2json /tmp/charm-builds/mariadb-k8s/metadata.yaml | jq .deployment
{
  "type": "stateless",
  "min-version": "1.10.1",
  "service": "omit"
}
$ juju run --unit mariadb-k8s/0 pod-spec-get | yml2json | jq .service
{
  "updateStrategy": {
    "rollingUpdate": {
      "maxUnavailable": 10
    },
    "type": "RollingUpdate"
  }
}
```
daemon app:
```
$ yml2json /tmp/charm-builds/mariadb-k8s/metadata.yaml | jq .deployment
{
  "type": "daemon",
  "min-version": "1.10.1",
  "service": "omit"
}
$ juju run --unit mariadb-k8s/0 pod-spec-get | yml2json | jq .service
{
  "updateStrategy": {
    "rollingUpdate": {
      "maxUnavailable": 10
    },
    "type": "RollingUpdate"
  }
}
```