How to make an initContainer work?

I’m looking for a working example of a charm that implements an initContainer. Currently I’ve created this pod spec:

spec = {
    'version': 3,
    'initContainers': [{
        'name': self.app.name + "-code-init",
        'imageDetails': {
            'imagePath': "busybox"
        },
        'args': ["sleep", "5"],
        'kubernetes': {},
        'ports': [{
            'containerPort': 3001,
            'name': 'init-http',
            'protocol': 'TCP'
        }]
    }],
    'containers': [{
        'name': self.app.name,
        'imageDetails': {
            'imagePath': "busybox"
        },
        'args': ["sleep", "3600"],
        'kubernetes': {},
        'ports': [{
            'containerPort': 3000,
            'name': 'app-http',
            'protocol': 'TCP'
        }]
    }]
}

self.model.pod.set_spec(spec)

But after deployment the operator fails with this error:

    subprocess.CalledProcessError: Command '('/var/lib/juju/tools/unit-python-application-operator-9/pod-spec-set', '--file', '/tmp/tmphxfdekcq-pod-spec-set/spec.yaml')' returned non-zero exit status 1.

During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "./src/charm.py", line 68, in <module>
    main(PythonApplicationOperatorCharm)
  File "/var/lib/juju/agents/unit-python-application-operator-9/charm/venv/ops/main.py", line 402, in main
    _emit_charm_event(charm, dispatcher.event_name)
  File "/var/lib/juju/agents/unit-python-application-operator-9/charm/venv/ops/main.py", line 140, in _emit_charm_event
    event_to_emit.emit(*args, **kwargs)
  File "/var/lib/juju/agents/unit-python-application-operator-9/charm/venv/ops/framework.py", line 278, in emit
    framework._emit(event)
  File "/var/lib/juju/agents/unit-python-application-operator-9/charm/venv/ops/framework.py", line 722, in _emit
    self._reemit(event_path)
  File "/var/lib/juju/agents/unit-python-application-operator-9/charm/venv/ops/framework.py", line 767, in _reemit
    custom_handler(event)
  File "./src/charm.py", line 61, in _on_config_changed
    self.model.pod.set_spec(spec)
  File "/var/lib/juju/agents/unit-python-application-operator-9/charm/venv/ops/model.py", line 926, in set_spec
    self._backend.pod_spec_set(spec, k8s_resources)
  File "/var/lib/juju/agents/unit-python-application-operator-9/charm/venv/ops/model.py", line 1154, in pod_spec_set
    self._run('pod-spec-set', *args)
  File "/var/lib/juju/agents/unit-python-application-operator-9/charm/venv/ops/model.py", line 1051, in _run
    raise ModelError(e.stderr)
ops.model.ModelError: b'ERROR json: unknown field "initContainers"\n'
2020-12-04 13:24:59 ERROR juju.worker.uniter.operation runhook.go:136 hook "config-changed" (via hook dispatching script: dispatch) failed: exit status 1

Can you help me get to the root cause?
I’m using k8s 1.19.3 (microk8s) with juju 2.8.6-groovy-amd64 and charmcraft 0.6.1+16.g86d77d9.

Hi @kirek007! Thank you for the question.

As of podspec v2, initContainers should be included in the containers list, with a field called “init”, set to True.

I apologize for the out-of-date docs. I’ll make sure that we update this part of our docs as part of the work we’re doing this cycle to improve all aspects of the documentation.

hth,
~ PeteVG

(Also, @wallyworld: it looks like we have some code to handle “legacy” container specs. Do you expect that to be working, or did we fully remove the initContainers field in 2.8.x?)

There are 3 versions of the spec YAML, and there’s support for parsing all 3. The example in the question had version set to 3, so Juju was expecting the v3 format.

containers:
  - name: test-init
    image: myinit/latest
    imagePullPolicy: Always
    init: true
...
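
Applied to the spec from the original question, the same fix in Python would look something like this - a sketch only, untested, with the names and images carried over from the question and the init container moved into the containers list:

spec = {
    'version': 3,
    'containers': [{
        # formerly the 'initContainers' entry; 'init': True marks it
        # as an init container in the v2/v3 spec
        'name': self.app.name + "-code-init",
        'imageDetails': {
            'imagePath': "busybox"
        },
        'init': True,
        'args': ["sleep", "5"],
    }, {
        'name': self.app.name,
        'imageDetails': {
            'imagePath': "busybox"
        },
        'args': ["sleep", "3600"],
        'ports': [{
            'containerPort': 3000,
            'name': 'app-http',
            'protocol': 'TCP'
        }]
    }]
}

self.model.pod.set_spec(spec)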

Thanks, I’ll check that and get back with a response soon :wink:

BTW, where can I find the complete v3 specification for a Juju pod?

The docs at the moment are spread across a few Discourse posts - they’re evolving to become easier to read but aren’t there yet. Until then, here’s a sample which contains just about everything it’s possible to do:

version: 3
containers:
  - name: gitlab
    image: gitlab/latest
    imagePullPolicy: Always
    command:
      - sh
      - -c
      - |
        set -ex
        echo "do some stuff here for gitlab container"
    args: ["doIt", "--debug"]
    workingDir: "/path/to/here"
    ports:
      - containerPort: 80
        name: fred
        protocol: TCP
      - containerPort: 443
        name: mary
    kubernetes:
      securityContext:
        runAsNonRoot: true
        privileged: true
      livenessProbe:
        initialDelaySeconds: 10
        httpGet:
          path: /ping
          port: 8080
      readinessProbe:
        initialDelaySeconds: 10
        httpGet:
          path: /pingReady
          port: www
      startupProbe:
        httpGet:
          path: /healthz
          port: liveness-port
        failureThreshold: 30
        periodSeconds: 10
    envConfig:
      attr: foo=bar; name["fred"]="blogs";
      foo: bar
      brackets: '["hello", "world"]'
      restricted: "yes"
      switch: on
      special: p@ssword's
      number: 5242880
      my-resource-limit:
        resource:
          container-name: container1
          resource: requests.cpu
          divisor: 1m
    volumeConfig:
      - name: configuration
        mountPath: /var/lib/foo
        files:
          - path: file1
            mode: 644
            content: |
              [config]
              foo: bar
      - name: myhostpath
        mountPath: /host/etc/cni/net.d
        hostPath:
          path: /etc/cni/net.d
          type: Directory
      - name: cache-volume
        mountPath: /empty-dir
        emptyDir:
          medium: Memory
      - name: log_level
        mountPath: /log-config/log_level
        configMap:
          name: log-config
          defaultMode: 511
          files:
            - key: log_level
              path: log_level
              mode: 511
      - name: mysecret2
        mountPath: /secrets
        secret:
          name: mysecret2
          defaultMode: 511
          files:
            - key: password
              path: my-group/my-password
              mode: 511
  - name: gitlab-helper
    image: gitlab-helper/latest
    ports:
      - containerPort: 8080
        protocol: TCP
  - name: secret-image-user
    imageDetails:
      imagePath: staging.registry.org/testing/testing-image@sha256:deed-beef
      username: docker-registry
      password: hunter2
  - name: just-image-details
    imageDetails:
      imagePath: testing/no-secrets-needed@sha256:deed-beef
  - name: gitlab-init
    image: gitlab-init/latest
    imagePullPolicy: Always
    init: true
    command:
      - sh
      - -c
      - |
        set -ex
        echo "do some stuff here for gitlab-init container"
    args: ["doIt", "--debug"]
    workingDir: "/path/to/here"
    ports:
      - containerPort: 80
        name: fred
        protocol: TCP
      - containerPort: 443
        name: mary
    envConfig:
      brackets: '["hello", "world"]'
      foo: bar
      restricted: "yes"
      switch: on
      special: p@ssword's
configMaps:
  mydata:
    foo: bar
    hello: world
service:
  annotations:
    foo: bar
  scalePolicy: serial
  updateStrategy:
    type: Recreate
    rollingUpdate:
      maxUnavailable: 10%
      maxSurge: 25%
serviceAccount:
  automountServiceAccountToken: true
  roles:
    - global: true
      rules:
        - apiGroups: [""]
          resources: ["pods"]
          verbs: ["get", "watch", "list"]
kubernetesResources:
  services:
    - name: my-service1
      labels:
        foo: bar
      spec:
        selector:
          app: MyApp
        ports:
          - protocol: TCP
            port: 80
            targetPort: 9376
    - name: my-service2
      labels:
        app: test
      annotations:
        cloud.google.com/load-balancer-type: "Internal"
      spec:
        selector:
          app: MyApp
        ports:
          - protocol: TCP
            port: 80
            targetPort: 9376
        type: LoadBalancer
  serviceAccounts:
    - name: k8sServiceAccount1
      automountServiceAccountToken: true
      roles:
        - name: k8sRole
          rules:
            - apiGroups: [""]
              resources: ["pods"]
              verbs: ["get", "watch", "list"]
            - nonResourceURLs: ["/healthz", "/healthz/*"] # '*' in a nonResourceURL is a suffix glob match
              verbs: ["get", "post"]
            - apiGroups: ["rbac.authorization.k8s.io"]
              resources: ["clusterroles"]
              verbs: ["bind"]
              resourceNames: ["admin", "edit", "view"]
        - name: k8sClusterRole
          global: true
          rules:
            - apiGroups: [""]
              resources: ["pods"]
              verbs: ["get", "watch", "list"]
  pod:
    annotations:
      foo: baz
    labels:
      foo: bax
    restartPolicy: OnFailure
    activeDeadlineSeconds: 10
    terminationGracePeriodSeconds: 20
    securityContext:
      runAsNonRoot: true
      supplementalGroups: [1, 2]
    readinessGates:
      - conditionType: PodScheduled
    dnsPolicy: ClusterFirstWithHostNet
    hostNetwork: true
    hostPID: true
    priorityClassName: system-cluster-critical
    priority: 2000000000
  secrets:
    - name: build-robot-secret
      type: Opaque
      stringData:
        config.yaml: |-
          apiUrl: "https://my.api.com/api/v1"
          username: fred
          password: shhhh
    - name: another-build-robot-secret
      type: Opaque
      data:
        username: YWRtaW4=
        password: MWYyZDFlMmU2N2Rm
  customResourceDefinitions:
    - name: tfjobs.kubeflow.org
      labels:
        foo: bar
        juju-global-resource-lifecycle: model
      spec:
        group: kubeflow.org
        scope: Cluster
        names:
          kind: TFJob
          singular: tfjob
          plural: tfjobs
        version: v1
        versions:
          - name: v1
            served: true
            storage: true
          - name: v1beta2
            served: true
            storage: false
        conversion:
          strategy: None
        preserveUnknownFields: false
        additionalPrinterColumns:
          - name: Worker
            type: integer
            description: Worker attribute.
            jsonPath: .spec.tfReplicaSpecs.Worker
        validation:
          openAPIV3Schema:
            properties:
              spec:
                properties:
                  tfReplicaSpecs:
                    properties:
                      Worker:
                        properties:
                          replicas:
                            type: integer
                            minimum: 1
                      PS:
                        properties:
                          replicas:
                            type: integer
                            minimum: 1
                      Chief:
                        properties:
                          replicas:
                            type: integer
                            minimum: 1
                            maximum: 1
  customResources:
    tfjobs.kubeflow.org:
      - apiVersion: "kubeflow.org/v1"
        kind: "TFJob"
        metadata:
          name: "dist-mnist-for-e2e-test"
        labels:
          foo: bar
          juju-global-resource-lifecycle: model
        spec:
          tfReplicaSpecs:
            PS:
              replicas: 2
              restartPolicy: Never
              template:
                spec:
                  containers:
                    - name: tensorflow
                      image: kubeflow/tf-dist-mnist-test:1.0
            Worker:
              replicas: 4
              restartPolicy: Never
              template:
                spec:
                  containers:
                    - name: tensorflow
                      image: kubeflow/tf-dist-mnist-test:1.0
  ingressResources:
    - name: test-ingress
      labels:
        foo: bar
      annotations:
        nginx.ingress.kubernetes.io/rewrite-target: /
      spec:
        rules:
          - http:
              paths:
                - path: /testpath
                  backend:
                    serviceName: test
                    servicePort: 80
  mutatingWebhookConfigurations:
    - name: example-mutatingwebhookconfiguration
      labels:
        foo: bar
      annotations:
        juju.io/disable-name-prefix: "true"
      webhooks:
        - name: "example.mutatingwebhookconfiguration.com"
          failurePolicy: Ignore
          clientConfig:
            service:
              name: apple-service
              namespace: apples
              path: /apple
            caBundle: "YXBwbGVz"
          namespaceSelector:
            matchExpressions:
              - key: production
                operator: DoesNotExist
          rules:
            - apiGroups:
                - ""
              apiVersions:
                - v1
              operations:
                - CREATE
                - UPDATE
              resources:
                - pods
  validatingWebhookConfigurations:
    - name: pod-policy.example.com
      labels:
        foo: bar
      annotations:
        juju.io/disable-name-prefix: "true"
      webhooks:
        - name: "pod-policy.example.com"
          rules:
            - apiGroups: [""]
              apiVersions: ["v1"]
              operations: ["CREATE"]
              resources: ["pods"]
              scope: "Namespaced"
          clientConfig:
            service:
              namespace: "example-namespace"
              name: "example-service"
            caBundle: "YXBwbGVz"
          admissionReviewVersions: ["v1", "v1beta1"]
          sideEffects: None
          timeoutSeconds: 5
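
In a charm, a spec like the one above is typically built as a Python dict (as in the original question) or loaded from a YAML file bundled with the charm, then handed to Juju from a hook handler. A minimal sketch, assuming the spec ships as a file named templates/pod_spec.yaml (that path is hypothetical):

import yaml

def _on_config_changed(self, event):
    # Load the v3 pod spec from a bundled YAML file and hand it to Juju.
    # 'templates/pod_spec.yaml' is a hypothetical path; building the dict
    # inline in Python works just as well.
    with open('templates/pod_spec.yaml') as f:
        spec = yaml.safe_load(f)
    self.model.pod.set_spec(spec)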

Hi, is this information still valid? I mean, are init containers supported and working with Juju pod spec v3? I find that if I add an init container to my Juju pod spec, the application pod is not created. Most likely this is because the entire pod is terminated when the init container terminates. I infer this because if I remove init: true from the init container and add a busy loop to the end of its command, both the application container and the init container are created without error - but of course the initialization requirements of the application container are then not met.

The info should still be valid, but it’s possible something has accidentally broken.
We’ll run some tests and provide an update…
(most likely very early next week as it’s reached EOW)

Hi @bthomas
I just tested today and found there is a bug in the init container handling.
I am working on a fix and will let you know here once it has landed.
Thanks for reporting this issue.

Hi @bthomas
This bug is now fixed on 2.8/edge and 2.9/candidate (2.9-rc7).
Please feel free to let us know if you still have any issues.
Thanks

Hi @kelvin.liu. Thank you for your efforts. I can confirm that the issue seems to have been fixed, and I am now seeing both the init container and the charm application container created successfully.
