PVC Write permissions

I just deployed a small charm with fsGroup set and, via some debugging, confirmed that the values are present in the statefulset Spec.Template when the statefulset create API is called.

We’ll have to do some investigation as to why aspects of the Spec.Template are not being applied as requested.

{
...
  "containers": [
    {
      "name": "mariadb-k8s",
      "image": "mariadb",
      "ports": [
        {
          "containerPort": 3306,
          "protocol": "TCP"
        }
      ],
      "env": [
        {
          "name": "MYSQL_DATABASE",
          "value": "database"
        },
        {
          "name": "MYSQL_PASSWORD",
          "value": "password"
        },
        {
          "name": "MYSQL_ROOT_PASSWORD",
          "value": "root"
        },
        {
          "name": "MYSQL_USER",
          "value": "admin"
        },
        {
          "name": "NODE_NAME",
          "valueFrom": {
            "fieldRef": {
              "apiVersion": "v1",
              "fieldPath": "spec.nodeName"
            }
          }
        }
      ],
      "resources": {},
      "volumeMounts": [
        {
          "name": "juju-data-dir",
          "mountPath": "/var/lib/juju"
        },
        {
          "name": "juju-data-dir",
          "mountPath": "/usr/bin/juju-run",
          "subPath": "tools/jujud"
        },
        {
          "name": "mariadb-k8s-configurations-config",
          "mountPath": "/etc/mysql/conf.d"
        }
      ],
      "securityContext": {
        "runAsNonRoot": false,
        "readOnlyRootFilesystem": false,
        "allowPrivilegeEscalation": true
      }
    }
  ],
  "serviceAccountName": "mariadb-k8s",
  "automountServiceAccountToken": true,
  "securityContext": {
    "fsGroup": 2
  }
}

Hmm, fair enough, and thanks for checking. The other thing I see people do is set up an init container and chown the mount directory before the main container comes up. I tried that earlier as well, but it appears the volumes specified in metadata.yaml aren’t mounted in init containers. Is there any way to get that PVC mounted in the init container?
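
For reference, the pattern I mean looks roughly like this in raw Kubernetes terms, written as a pod-spec fragment in dict form to match the dumps above (the image, uid/gid, volume name and path are just illustrative):

# Illustrative only: the conventional raw-Kubernetes workaround of an init
# container that chowns the volume before the workload container starts.
# The volume name, image and mount path are made up for the example.
init_chown_fragment = {
    "initContainers": [
        {
            "name": "fix-perms",
            "image": "busybox",
            "command": ["sh", "-c", "chown -R 1001:1001 /var/lib/mysql"],
            "volumeMounts": [
                {"name": "database", "mountPath": "/var/lib/mysql"},
            ],
        }
    ]
}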

Currently, any Juju storage PV is only mounted in the workload container.
We could look at whether it makes sense to also mount them in an init container to allow the sort of initialisation you are talking about, but then there are issues like reclaim policy to consider. I’d also like to understand why the fsGroup is not being applied.

Yeah,

I don’t think either of these methods is particularly unusual for containers running as non-root, but setting fsGroup, runAsUser, etc. seems to be the correct way of going about it. Chowning a bunch of directories in an init container is particularly inelegant.

This also obviously affects any non-root charm in the charm store. I saw it with, I believe, grafana the other day when testing, so it’ll be good to get this figured out and applied to the various K8S charms that currently mount volumes with the wrong permissions.

So I did a bit more digging, @wallyworld.

I deployed my charm on microk8s and it worked without modification, which is what I was expecting, as y’all deploy charms with PVCs without bother.

So I then edited the statefulset and realised there are two securityContexts, and this setting only seems to stick in the one outside of the containers: block, at the spec level.

As soon as I added:

      securityContext:
        fsGroup: 1001

To the outer context, the pod came to life and the permissions error I was facing went away.
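
To spell out what I mean by the two contexts, this is roughly how they sit in the statefulset’s pod template, in dict form (abridged; the container name is illustrative). The container-level securityContext has no fsGroup field at all; only the pod-level one does.

# Abridged sketch of the two securityContext blocks in the pod template.
template_spec = {
    "securityContext": {          # pod level: this is where fsGroup lives
        "fsGroup": 1001,
    },
    "containers": [
        {
            "name": "my-workload",
            "securityContext": {  # container level: no fsGroup field exists here
                "runAsNonRoot": False,
                "allowPrivilegeEscalation": True,
            },
        },
    ],
}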

So, can you set higher-level Kubernetes resources?

Thanks

Tom

You are referring to the PodSecurityContext value on the statefulset spec template, right?

type StatefulSetSpec struct {
	Template v1.PodTemplateSpec
}

type PodTemplateSpec struct {
	metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
	Spec PodSpec `json:"spec,omitempty" protobuf:"bytes,2,opt,name=spec"`
}

type PodSpec struct {
	Volumes []Volume `json:"volumes,omitempty" patchStrategy:"merge,retainKeys" patchMergeKey:"name" protobuf:"bytes,1,rep,name=volumes"`
	InitContainers []Container `json:"initContainers,omitempty" patchStrategy:"merge" patchMergeKey:"name" protobuf:"bytes,20,rep,name=initContainers"`
	Containers []Container `json:"containers" patchStrategy:"merge" patchMergeKey:"name" protobuf:"bytes,2,rep,name=containers"`
	// SecurityContext holds pod-level security attributes and common container settings.
	// Optional: Defaults to empty.  See type description for default values of each field.
	// +optional
	SecurityContext *PodSecurityContext `json:"securityContext,omitempty" protobuf:"bytes,14,opt,name=securityContext"`
}

The SecurityContext in the PodSpec above is what gets populated from the securityContext passed to Juju:

version: 3
containers:
  - name: something
    image: something/latest
    ports:
      - containerPort: 80
        name: web
kubernetesResources:
  pod:
    securityContext:
      fsGroup: 1001

So when you say “outside the containers”, I think we are both referring to the pod-level security context? This is what I confirmed Juju was setting with the fsGroup and passing to the statefulset create() API, but it was not showing up later when inspecting the pods that got stood up. It’s not clear at this stage why that is happening. I may have had a typo in the fsGroup value that caused it to be ignored, but I haven’t had a chance to retest.
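
One way to double-check what actually lands on the statefulset’s pod template is to read it back with the Kubernetes Python client (just a sketch; the statefulset name is from the mariadb example and the namespace is whatever your model maps to):

from kubernetes import client, config

# Sketch: read back the pod-level security context on the statefulset.
# "mariadb-k8s" and "my-model" are placeholders for the real names.
config.load_kube_config()
apps = client.AppsV1Api()
ss = apps.read_namespaced_stateful_set("mariadb-k8s", "my-model")
print(ss.spec.template.spec.security_context)  # expect fsGroup etc. here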

Ah yeah, sorry, I think we’re on the same page. I was thrown by the container-level securityContext entries and the mentions of pods. My fault!

Because this is liable to get lost over Christmas and New Year, and it blocks deployments of K8S charms onto a few different implementations, I’ve filed a bug on Launchpad: Bug #1909153 “fsGroup unsettable in juju k8s”.

Thanks for filing the bug, @magicaltrout. I triaged and tagged it as something we will fix if possible this cycle.

@magicaltrout support for a pod-level security context should already be there, as per:

version: 3
containers:
  - name: something
    image: something/latest
    ports:
      - containerPort: 80
        name: web
kubernetesResources:
  pod:
    securityContext:
      fsGroup: 1001

Is this not working for you?

I can test again later; I don’t believe I found a combination that works:

Isn’t that the same as your spec?

Yeah, looks the same. When I last tested, I confirmed the k8s pod spec template that got set on the statefulset had the correct attributes set. But maybe I should re-test to be sure.

I just did another test with Juju version 2.8.7.
My charm YAML:

version: 3
containers:
  - name: mariadb-k8s
    ports:
    - containerPort: 3361
      protocol: TCP
...
kubernetesResources:
  pod:
    securityContext:
      runAsUser: 1001
      runAsGroup: 1001
      runAsNonRoot: true
      fsGroup: 1001

I deployed the charm and then queried both the statefulset template and the workload pod, and things were as expected:

$ kubectl get pods/mariadb-k8s-0 -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    juju.io/charm-modified-version: "0"
    juju.io/controller: 4bdfb2d4-422c-4a15-896c-209e3c65b714
    juju.io/model: ac39a35e-9df1-450e-87da-66a0fb564614
    juju.io/unit: mariadb-k8s/0
...
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext:
    fsGroup: 1001
    runAsGroup: 1001
    runAsNonRoot: true
    runAsUser: 1001
...
$ kubectl get statefulset.apps/mariadb-k8s -o yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  annotations:
    juju-app-uuid: 7d695a9d
    juju.io/charm-modified-version: "0"
    juju.io/controller: 4bdfb2d4-422c-4a15-896c-209e3c65b714
    juju.io/model: ac39a35e-9df1-450e-87da-66a0fb564614
  creationTimestamp: "2021-01-05T01:12:30Z"
  generation: 1
...
spec:
  podManagementPolicy: Parallel
  replicas: 1
  revisionHistoryLimit: 0
  selector:
    matchLabels:
      juju-app: mariadb-k8s
  serviceName: mariadb-k8s-endpoints
  template:
...
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        fsGroup: 1001
        runAsGroup: 1001
        runAsNonRoot: true
        runAsUser: 1001
...

Weird. Thanks for checking, @wallyworld. I’ll test tomorrow and let you know what I see.

Okay, as it works for you I assume I’m doing something stupid:

https://gitlab.com/spiculedata/juju/solr-k8s-charm/-/blob/master/src/charm.py#L73-90

This is my charm test spec.

Pod log:

Error executing 'postInstallation': EACCES: permission denied, mkdir '/bitnami/solr'

There is no fsGroup set at pod or statefulset level :man_shrugging:
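
A quick way to double-check from inside the container whether the group was applied (just a sketch; /bitnami is the mount point from the error above):

import os

# If fsGroup had been applied, the volume's gid should appear both on the
# mount point and in the process's supplementary groups.
st = os.stat("/bitnami")
print("mount uid/gid:", st.st_uid, st.st_gid)
print("process uid/gid:", os.getuid(), os.getgid())
print("supplementary groups:", os.getgroups())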

I can’t figure it out. This is just a standard 1.19 K8S OVH cluster; happy to give you access to it if it makes life any easier. I guess it’s probably more likely me doing something foolish in the bootstrap process.

I pulled the solr charm and installed it.

I dumped the Juju db to see the pod spec that was sent in:

    containers:
    - envConfig:
        SOLR_JAVA_MEM: -Xms2g -Xmx3g
        SOLR_LOG_LEVEL: INFO
      imageDetails:
        imagePath: bitnami/solr:8.7.0-debian-10-r31
      imagePullPolicy: Always
      name: ddd
      ports:
      - containerPort: 8983
        name: solr
        protocol: TCP
    kubernetesResources:
      pod:
        securityContext:
          fsGroup: 1001
          runAsGroup: 1001
          runAsUser: 1001
    version: 3
    kubernetesResources:
      secrets:
      - data: {}
        name: charm-secrets
        type: Opaque
      services:
      - name: solr-port
        spec:
          clusterIP: ''
          ports:
          - name: solr
            port: 8983
            protocol: TCP
          type: NodePort

There are two kubernetesResources sections. It does still parse OK, but comes out as:

{
  "kubernetesResources": {
    "secrets": [
      {
        "data": {}, 
        "type": "Opaque", 
        "name": "charm-secrets"
      }
    ], 
    "services": [
      {
        "name": "solr-port", 
        "spec": {
          "clusterIP": "", 
          "type": "NodePort", 
          "ports": [
            {
              "protocol": "TCP", 
              "name": "solr", 
              "port": 8983
            }
          ]
        }
      }
    ]
  }, 
  "version": 3, 
  "containers": [
    {
      "imagePullPolicy": "Always", 
      "imageDetails": {
        "imagePath": "bitnami/solr:8.7.0-debian-10-r31"
      }, 
      "ports": [
        {
          "protocol": "TCP", 
          "containerPort": 8983, 
          "name": "solr"
        }
      ], 
      "envConfig": {
        "SOLR_JAVA_MEM": "-Xms2g -Xmx3g", 
        "SOLR_LOG_LEVEL": "INFO"
      }, 
      "name": "ddd"
    }
  ]
}
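
Assuming the YAML parser behaves like PyYAML does with duplicate mapping keys (last occurrence wins — I haven’t checked exactly which parser is involved here), this small example shows why the first kubernetesResources block, the one carrying the pod securityContext, silently disappears:

import yaml

# With duplicate mapping keys, PyYAML silently keeps only the last
# occurrence, so the earlier kubernetesResources block is dropped.
doc = """
kubernetesResources:
  pod:
    securityContext:
      fsGroup: 1001
kubernetesResources:
  secrets:
  - name: charm-secrets
"""
print(yaml.safe_load(doc))
# -> {'kubernetesResources': {'secrets': [{'name': 'charm-secrets'}]}}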

I fixed the charm and got the desired result. Here’s the diff:

diff --git a/src/charm.py b/src/charm.py
index 82b7aad..da5e687 100755
--- a/src/charm.py
+++ b/src/charm.py
@@ -72,15 +72,6 @@ class SolrCharm(CharmBase):
             ]
         spec = {
                 "version": 3,
-                "kubernetesResources":{
-                    "pod":{
-                        "securityContext":{
-                            "fsGroup": 1001,
-                            "runAsUser": 1001,
-                            "runAsGroup":1001,
-                        }
-                    }
-                },
                 "containers": [
                     {
                         "name": self.app.name,
@@ -108,7 +99,7 @@ class SolrCharm(CharmBase):
                     #        {
                     #            "name":"binami",
                     #           "mountPath":"/bitnami",
-                    #            
+                    #
                     #        }
                     #    ]
                     #}
@@ -135,6 +126,13 @@ class SolrCharm(CharmBase):
         resources = {
             "secrets": [{"name": "charm-secrets", "type": "Opaque", "data": secrets_data}],
             "services": services,
+            "pod": {
+                "securityContext": {
+                    "fsGroup": 1001,
+                    "runAsUser": 1001,
+                    "runAsGroup": 1001,
+                }
+            }
         }
         logger.info(f"Pod resources <<EOM\n{yaml.dump(resources)}\nEOM")

The issue is that the pod security context info was being added to the wrong section.

Thanks a lot @wallyworld! This is all black magic; I’m glad someone who knows what they are doing could figure it out.

Looking forward to getting this cleaned up and shipped in some state.

As an aside: as containers move to running as non-root more frequently, I wonder what effect this will have on charms and charm developers. Clearly the fsGroup / runAsUser stuff doesn’t affect your test suites, but it does hit this K8S/Ceph implementation. I guess the permissions you see on microk8s are the more common case, but it’s something developers need to be wary of, because cool charms that would otherwise work out of the box will blow up with permissions errors like this one that only certain people will see.

Yeah, there’s a bit to get your head around. Not sure if this helps.

If there are gaps, any suggestions for improvement are gratefully accepted. It’s hard to know what is unclear to people, having been immersed in it for a while.

Sorry for the delay in testing this; it’s been a busy couple of weeks. Looks like those changes did the trick. Thanks for testing for me, Ian!

bugg@DESKTOP-JIMKO29:~/Projects/solr-k8s-charm$ kubectl -n zk logs solr-0

Welcome to the Bitnami solr container
Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-solr
Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-solr/issues

nami    INFO  Initializing solr
solr    INFO
solr    INFO  ########################################################################
solr    INFO   Installation parameters for solr:
solr    INFO     Solr URL: http://localhost:8983/solr
solr    INFO   This installation requires no credentials.
solr    INFO  ########################################################################
solr    INFO
nami    INFO  solr successfully initialized
INFO  ==> Starting gosu...
OpenJDK 64-Bit Server VM warning: Failed to reserve shared memory. (error = 12)
OpenJDK 64-Bit Server VM warning: Failed to reserve shared memory. (error = 12)
OpenJDK 64-Bit Server VM warning: Failed to reserve shared memory. (error = 12)
OpenJDK 64-Bit Server VM warning: Failed to reserve shared memory. (error = 12)
2021-01-12 23:19:49.560 INFO  (main) [   ] o.e.j.u.log Logging initialized @1501ms to org.eclipse.jetty.util.log.Slf4jLog
2021-01-12 23:19:49.633 WARN  (main) [   ] o.e.j.x.XmlConfiguration Ignored arg: <Arg name="threadpool">
<New id="threadpool" class="com.codahale.metrics.jetty9.InstrumentedQueuedThreadPool"><Arg name="registry">
    <Call id="solrJettyMetricRegistry" name="getOrCreate" class="com.codahale.metrics.SharedMetricRegistries"><Arg>solr.jetty</Arg></Call>
  </Arg></New>
  </Arg>
2021-01-12 23:19:49.744 INFO  (main) [   ] o.e.j.s.Server jetty-9.4.27.v20200227; built: 2020-02-27T18:37:21.340Z; git: a304fd9f351f337e7c0e2a7c28878dd536149c6c; jvm 1.8.0_275-b01
2021-01-12 23:19:49.769 INFO  (main) [   ] o.e.j.d.p.ScanningAppProvider Deployment monitor [file:///opt/bitnami/solr/server/contexts/] at interval 0
2021-01-12 23:19:50.140 INFO  (main) [   ] o.e.j.w.StandardDescriptorProcessor NO JSP Support for /solr, did not find org.apache.jasper.servlet.JspServlet
2021-01-12 23:19:50.158 INFO  (main) [   ] o.e.j.s.session DefaultSessionIdManager workerName=node0
2021-01-12 23:19:50.158 INFO  (main) [   ] o.e.j.s.session No SessionScavenger set, using defaults
2021-01-12 23:19:50.161 INFO  (main) [   ] o.e.j.s.session node0 Scavenging every 660000ms
2021-01-12 23:19:50.283 INFO  (main) [   ] o.a.s.s.SolrDispatchFilter Using logger factory org.apache.logging.slf4j.Log4jLoggerFactory
2021-01-12 23:19:50.290 INFO  (main) [   ] o.a.s.s.SolrDispatchFilter  ___      _       Welcome to Apache Solr? version 8.7.0
2021-01-12 23:19:50.290 INFO  (main) [   ] o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in standalone mode on port 8983
2021-01-12 23:19:50.293 INFO  (main) [   ] o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: /opt/bitnami/solr
2021-01-12 23:19:50.294 INFO  (main) [   ] o.a.s.s.SolrDispatchFilter |___/\___/_|_|    Start time: 2021-01-12T23:19:50.294Z
2021-01-12 23:19:50.295 INFO  (main) [   ] o.a.s.s.SolrDispatchFilter Log level override, property solr.log.level=INFO
2021-01-12 23:19:50.303 INFO  (main) [   ] o.a.s.c.SolrPaths Using system property solr.solr.home: /opt/bitnami/solr/server/solr
2021-01-12 23:19:50.310 INFO  (main) [   ] o.a.s.c.SolrXmlConfig Loading container configuration from /opt/bitnami/solr/server/solr/solr.xml
2021-01-12 23:19:50.465 INFO  (main) [   ] o.a.s.c.SolrXmlConfig MBean server found: com.sun.jmx.mbeanserver.JmxMBeanServer@47542153, but no JMX reporters were configured - adding default JMX reporter.
2021-01-12 23:19:51.122 INFO  (main) [   ] o.a.s.h.c.HttpShardHandlerFactory Host whitelist initialized: WhitelistHostChecker [whitelistHosts=null, whitelistHostCheckingEnabled=true]
2021-01-12 23:19:51.315 WARN  (main) [   ] o.e.j.u.s.S.config Trusting all certificates configured for Client@2ece4966[provider=null,keyStore=null,trustStore=null]
2021-01-12 23:19:51.315 WARN  (main) [   ] o.e.j.u.s.S.config No Client EndPointIdentificationAlgorithm configured for Client@2ece4966[provider=null,keyStore=null,trustStore=null]
2021-01-12 23:19:51.534 WARN  (main) [   ] o.e.j.u.s.S.config Trusting all certificates configured for Client@841e575[provider=null,keyStore=null,trustStore=null]
2021-01-12 23:19:51.534 WARN  (main) [   ] o.e.j.u.s.S.config No Client EndPointIdentificationAlgorithm configured for Client@841e575[provider=null,keyStore=null,trustStore=null]
2021-01-12 23:19:51.590 WARN  (main) [   ] o.a.s.c.CoreContainer Not all security plugins configured!  authentication=disabled authorization=disabled.  Solr is only as secure as you make it. Consider configuring authentication/authorization before exposing Solr to users internal or external.  See https://s.apache.org/solrsecurity for more info
2021-01-12 23:19:51.807 INFO  (main) [   ] o.a.s.c.TransientSolrCoreCacheDefault Allocating transient cache for 2147483647 transient cores
2021-01-12 23:19:51.810 INFO  (main) [   ] o.a.s.h.a.MetricsHistoryHandler No .system collection, keeping metrics history in memory.
2021-01-12 23:19:51.908 INFO  (main) [   ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.node' (registry 'solr.node') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@47542153
2021-01-12 23:19:51.911 INFO  (main) [   ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.jvm' (registry 'solr.jvm') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@47542153
2021-01-12 23:19:51.921 INFO  (main) [   ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.jetty' (registry 'solr.jetty') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@47542153
2021-01-12 23:19:51.958 INFO  (main) [   ] o.a.s.c.CorePropertiesLocator Found 0 core definitions underneath /opt/bitnami/solr/server/solr
2021-01-12 23:19:52.013 INFO  (main) [   ] o.e.j.s.h.ContextHandler Started o.e.j.w.WebAppContext@4d5650ae{/solr,file:///opt/bitnami/solr/server/solr-webapp/webapp/,AVAILABLE}{/opt/bitnami/solr/server/solr-webapp/webapp}
2021-01-12 23:19:52.023 INFO  (main) [   ] o.e.j.s.AbstractConnector Started ServerConnector@776b83cc{HTTP/1.1, (http/1.1, h2c)}{0.0.0.0:8983}
2021-01-12 23:19:52.023 INFO  (main) [   ] o.e.j.s.Server Started @3966ms

Awesome, glad you got it all working.