Mounting Volumes

I’m trying to build a Filebeat Kubernetes-series charm. Following the docs for deploying Filebeat on Kubernetes, it seems that I need to mount some volumes so Filebeat can read log files from other containers. I’m trying to do this by defining volumes in the spec template and volumeMounts in the container definition, but the volumes don’t seem to be mounted when I deploy the charm. The only way I’ve been able to mount anything is by defining it in the files section of my spec file. Is mounting volumes the other way currently supported?

Here is the documentation that I’m following:
https://www.elastic.co/guide/en/beats/filebeat/6.0/running-on-kubernetes.html
The Kubernetes manifest that the docs use:
https://github.com/elastic/beats/blob/master/deploy/kubernetes/filebeat-kubernetes.yaml

And my spec_template.yaml:

    containers:
    - name: %(name)s
      imageDetails:
        imagePath: %(docker_image_path)s
        username: %(docker_image_username)s
        password: %(docker_image_password)s
      args: [
        "-c", "/etc/filebeat.yml",
        "-e",
      ]
      env:
      - name: ELASTICSEARCH_HOST
        value: %(elasticsearch_host)s
      - name: ELASTICSEARCH_PORT
        value: %(elasticsearch_port)s
      securityContext:
        runAsUser: 0
      ports:
      - containerPort: %(advertised-port)s
        protocol: TCP
      files:
      - name: config
        mountPath: /etc/filebeat
        files:
          filebeat.yml: |
            filebeat.inputs:
            - type: docker
              enabled: true
              containers:
                ids:
                - '*'
            processors:
            - add_docker_metadata: ~
            output.elasticsearch:
              hosts: ["%(elasticsearch_host)s:%(elasticsearch_port)s"]
            logging.to_files: true
      volumeMounts:
      - name: prospectors
        mountPath: /usr/share/filebeat/prospectors.d
        readOnly: true
      - name: data
        mountPath: /usr/share/filebeat/data
      - name: varlibdockercontainers
        mountPath: /var/lib/docker/containers
        readOnly: true
    volumes:
    - name: varlibdockercontainers
      hostPath:
        path: /var/lib/docker/containers
    - name: prospectors
      configMap:
        defaultMode: 0600
        name: filebeat-prospectors
    - name: data
      emptyDir: {}

To provision persistent volume storage, you need to use the standard Juju storage primitives; the YAML sent by the charm is not used for this, as you have found. Here’s a link to get started:

The doc is slightly out of date (an update is in progress): with recent versions of Juju you no longer need to create a storage pool unless you want to. Juju will use the cluster’s default storage class if one is defined. If not, you will need to create a storage pool to tell Juju how storage should be provisioned.
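
If you’re not sure whether your cluster defines a default storage class, you can check with kubectl; the default one is flagged as “(default)” next to its name:

    kubectl get storageclass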

If you want a really simple example, look at the demo mariadb charm. In its metadata.yaml, the charm declares that it wants some filesystem storage:

"storage":
  "database":
    "type": "filesystem"
    "location": "/var/lib/mysql"

When you deploy the charm, Juju will ask k8s to create a persistent volume, which will be mounted into the pod at the specified location. By default the claim will be for 1GB, but you can request more, e.g.

    juju deploy cs:~juju/mariadb-k8s --storage database=10G
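
To confirm what was provisioned, you can also look at the resulting claim on the k8s side. A minimal sketch, assuming a model named k8s-model (by default the namespace takes the model’s name):

    # List the persistent volume claims Juju asked k8s to create.
    kubectl -n k8s-model get pvc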

To pass parameters to the k8s provisioner, set up a Juju storage pool, e.g.

    juju create-storage-pool k8s-ebs storage-class=juju-ebs storage-provisioner=kubernetes.io/aws-ebs parameters.type=gp2
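
The pool name can then be referenced in the storage constraint when deploying, in the form pool,size:

    # Provision the database storage via the k8s-ebs pool defined above.
    juju deploy cs:~juju/mariadb-k8s --storage database=k8s-ebs,10G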

In Juju 2.6 beta1 you can also make use of the k8s emptyDir storage backend, which maps to the standard Juju tmpfs (in-memory) and rootfs (pod root disk) storage types, e.g.

    juju deploy cs:~juju/mariadb-k8s --storage database=tmpfs,100M

or

    juju deploy cs:~juju/mariadb-k8s --storage database=rootfs,1G
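
Either way, once the unit comes up you can list the storage Juju has attached, and its status, with:

    # Show storage instances in the current model.
    juju storage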