Unable to deploy charms behind corporate proxy

Hi there!

I’m trying to bootstrap Juju on microk8s and deploy charms behind a corporate proxy. I’ve checked several threads, but I’m still unable to get “juju deploy” working:

$ juju deploy kubeflow
ERROR resolving with preferred channel: Post "https://api.charmhub.io/v2/charms/refresh": x509: 
certificate signed by unknown authority

I’m on Ubuntu 22.04.2 LTS

I’ve followed several steps - picked from various topics in the forum. Can anyone help me figure this out?

I prepared a bootstrap.yaml with all my proxy vars AND several CA certs (the full corporate cert chain), since my proxy intercepts SSL:

apt-http-proxy: "my.corporate.proxy:port"
apt-https-proxy: "my.corporate.proxy:port"
apt-no-proxy: "my.corporate.domain,local,localhost,::1,127.0.0.0/8,10.0.0.0/8,192.168.0.0/16,172.16.0.0/16"
juju-http-proxy: "my.corporate.proxy:port"
juju-https-proxy: "my.corporate.proxy:port"
juju-no-proxy: "my.corporate.domain,local,localhost,::1,127.0.0.0/8,10.0.0.0/8,192.168.0.0/16,172.16.0.0/16"
snap-http-proxy: "my.corporate.proxy:port"
snap-https-proxy: "my.corporate.proxy:port"
cloudinit-userdata: |
  ca_certs:
    trusted:
    - |
      -----BEGIN CERTIFICATE-----
      -----END CERTIFICATE-----
    - |
      -----BEGIN CERTIFICATE-----
      -----END CERTIFICATE-----
    - |
      -----BEGIN CERTIFICATE-----
      -----END CERTIFICATE-----
    - |
      -----BEGIN CERTIFICATE-----
      -----END CERTIFICATE-----
    - |
      -----BEGIN CERTIFICATE-----
      -----END CERTIFICATE-----

The bootstrap apparently completes OK with:

$ juju bootstrap --config=bootstrap.yaml --model-default=bootstrap.yaml microk8s micro
Creating Juju controller "micro" on microk8s/localhost
Bootstrap to Kubernetes cluster identified as microk8s/localhost
Fetching Juju Dashboard 0.8.1
Creating k8s resources for controller "controller-micro"
Starting controller pod
Bootstrap agent now started
Contacting Juju controller at 10.152.183.63 to verify accessibility...
Bootstrap complete, controller "micro" is now available in namespace "controller-micro"
Now you can run
    juju add-model <model-name>
to create a new model to deploy k8s workloads.

Some additional context:

Snap proxies are properly set up via /etc/environment and /var/snap/microk8s/current/args/containerd-env:

HTTP_PROXY=my.corporate.proxy:port
HTTPS_PROXY=my.corporate.proxy:port
NO_PROXY=my.corporate.domain,local,localhost,::1,127.0.0.0/8,10.0.0.0/8,192.168.0.0/16,172.16.0.0/16
http_proxy=my.corporate.proxy:port
https_proxy=my.corporate.proxy:port
no_proxy=my.corporate.domain,local,localhost,::1,127.0.0.0/8,10.0.0.0/8,192.168.0.0/16,172.16.0.0/16

Both microk8s and juju are installed and running just fine (snap install --classic microk8s && snap install --classic juju):

snap list | grep -e microk8s -e juju
juju                       2.9.42            22345  2.9/stable       canonical**    classic
microk8s                   v1.26.4           5219   1.26/stable      canonical**    classic

microk8s is working just fine, with extra add-ons enabled:

$ kubectl get all --all-namespaces
NAMESPACE            NAME                                            READY   STATUS    RESTARTS      AGE
minio-operator       pod/minio-operator-67dcf6dd7c-xn7fl             0/1     Pending   0             109m
kube-system          pod/kubernetes-dashboard-dc96f9fc-bw56h         1/1     Running   2 (78m ago)   116m
kube-system          pod/dashboard-metrics-scraper-7bc864c59-4xxmz   1/1     Running   2 (78m ago)   116m
kube-system          pod/hostpath-provisioner-69cd9ff5b8-tx2w4       1/1     Running   2 (78m ago)   116m
istio-system         pod/istiod-558cdbcbff-7s8wx                     1/1     Running   1 (78m ago)   110m
minio-operator       pod/console-66c4b79fbd-zg26q                    1/1     Running   1 (78m ago)   109m
container-registry   pod/registry-77c7575667-q66vh                   1/1     Running   2 (78m ago)   116m
kube-system          pod/calico-kube-controllers-79568db7f8-l678k    1/1     Running   2 (78m ago)   125m
kube-system          pod/coredns-6f5f9b5d74-5fff8                    1/1     Running   2 (78m ago)   117m
kube-system          pod/calico-node-8vztk                           1/1     Running   2 (78m ago)   125m
kube-system          pod/metrics-server-6f754f88d-xgpng              1/1     Running   2 (78m ago)   116m
istio-system         pod/istio-ingressgateway-5f57c68988-qwpnc       1/1     Running   1 (78m ago)   110m
istio-system         pod/istio-egressgateway-6b957f5b7d-kvjpg        1/1     Running   1 (78m ago)   110m
minio-operator       pod/minio-operator-67dcf6dd7c-qz2nc             1/1     Running   4 (77m ago)   109m
minio-operator       pod/microk8s-ss-0-0                             1/1     Running   4 (77m ago)   109m

NAMESPACE            NAME                                TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                                      AGE
default              service/kubernetes                  ClusterIP      10.152.183.1     <none>        443/TCP                                                                      126m
kube-system          service/kube-dns                    ClusterIP      10.152.183.10    <none>        53/UDP,53/TCP,9153/TCP                                                       117m
container-registry   service/registry                    NodePort       10.152.183.227   <none>        5000:32000/TCP                                                               117m
kube-system          service/metrics-server              ClusterIP      10.152.183.49    <none>        443/TCP                                                                      117m
kube-system          service/kubernetes-dashboard        ClusterIP      10.152.183.17    <none>        443/TCP                                                                      117m
kube-system          service/dashboard-metrics-scraper   ClusterIP      10.152.183.152   <none>        8000/TCP                                                                     117m
istio-system         service/istiod                      ClusterIP      10.152.183.214   <none>        15010/TCP,15012/TCP,443/TCP,15014/TCP                                        111m
istio-system         service/istio-egressgateway         ClusterIP      10.152.183.128   <none>        80/TCP,443/TCP                                                               110m
istio-system         service/istio-ingressgateway        LoadBalancer   10.152.183.67    <pending>     15021:30010/TCP,80:31753/TCP,443:30371/TCP,31400:30318/TCP,15443:30661/TCP   110m
minio-operator       service/operator                    ClusterIP      10.152.183.220   <none>        4222/TCP,4221/TCP                                                            109m
minio-operator       service/console                     ClusterIP      10.152.183.104   <none>        9090/TCP,9443/TCP                                                            109m
minio-operator       service/minio                       ClusterIP      10.152.183.183   <none>        80/TCP                                                                       109m
minio-operator       service/microk8s-console            ClusterIP      10.152.183.234   <none>        9090/TCP                                                                     109m
minio-operator       service/microk8s-hl                 ClusterIP      None             <none>        9000/TCP                                                                     109m

NAMESPACE     NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   daemonset.apps/calico-node   1         1         1       1            1           kubernetes.io/os=linux   126m

NAMESPACE            NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
kube-system          deployment.apps/calico-kube-controllers     1/1     1            1           126m
kube-system          deployment.apps/coredns                     1/1     1            1           117m
kube-system          deployment.apps/dashboard-metrics-scraper   1/1     1            1           117m
kube-system          deployment.apps/metrics-server              1/1     1            1           117m
kube-system          deployment.apps/kubernetes-dashboard        1/1     1            1           117m
kube-system          deployment.apps/hostpath-provisioner        1/1     1            1           117m
container-registry   deployment.apps/registry                    1/1     1            1           117m
istio-system         deployment.apps/istiod                      1/1     1            1           111m
minio-operator       deployment.apps/console                     1/1     1            1           109m
istio-system         deployment.apps/istio-ingressgateway        1/1     1            1           110m
istio-system         deployment.apps/istio-egressgateway         1/1     1            1           110m
minio-operator       deployment.apps/minio-operator              1/2     2            1           109m

NAMESPACE            NAME                                                  DESIRED   CURRENT   READY   AGE
kube-system          replicaset.apps/calico-kube-controllers-79568db7f8    1         1         1       125m
kube-system          replicaset.apps/coredns-6f5f9b5d74                    1         1         1       117m
kube-system          replicaset.apps/dashboard-metrics-scraper-7bc864c59   1         1         1       116m
kube-system          replicaset.apps/metrics-server-6f754f88d              1         1         1       116m
kube-system          replicaset.apps/kubernetes-dashboard-dc96f9fc         1         1         1       116m
kube-system          replicaset.apps/hostpath-provisioner-69cd9ff5b8       1         1         1       116m
container-registry   replicaset.apps/registry-77c7575667                   1         1         1       116m
istio-system         replicaset.apps/istiod-558cdbcbff                     1         1         1       110m
minio-operator       replicaset.apps/console-66c4b79fbd                    1         1         1       109m
istio-system         replicaset.apps/istio-ingressgateway-5f57c68988       1         1         1       110m
istio-system         replicaset.apps/istio-egressgateway-6b957f5b7d        1         1         1       110m
minio-operator       replicaset.apps/minio-operator-67dcf6dd7c             2         2         1       109m

NAMESPACE        NAME                             READY   AGE
minio-operator   statefulset.apps/microk8s-ss-0   1/1     109m

Can anyone help me spot the problem here?

Hi @alienmind,

Thank you for the question. I am going to make some assumptions in this post so please correct anything I may have got wrong.

When you bootstrap with that config you are modifying the configuration for that Juju model, and all applications/units deployed into that model subsequently inherit it.

Because of this, when you deploy an application to a new model (one that is not the model running the controller), these values will need to be set again for that model, based on the context that model runs in. In your case this would be Kubernetes.
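
As a rough sketch (substitute your own proxy address, ranges, and model name), the same proxy values can be set on the workload model once it exists:

juju add-model kubeflow
juju model-config -m kubeflow \
  juju-http-proxy="my.corporate.proxy:port" \
  juju-https-proxy="my.corporate.proxy:port" \
  juju-no-proxy="my.corporate.domain,localhost,127.0.0.0/8,10.0.0.0/8,192.168.0.0/16"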

Please see:

Regards tlm

Thanks for getting back to me, Tim.

Your assumptions are correct.

The error is triggered while accessing the Charm Store, which, according to the links above, is:

Required by the controller so that charms can be deployed on the machines. See [Deploying charms offline]

What I’m trying to do is make the controller trust an intermediate proxy that is intercepting SSL.

Therefore I need to add certain ca.crt entries to the controller. I’ve prepared a bootstrap.yaml file with the relevant entries (cloudinit-userdata.ca_certs.trusted - see above).

However the controller seems to be ignoring the entries when running juju deploy.

Looking at Juju | Command 'juju model-config' there seem to be options for http_proxy / https_proxy / no_proxy, but I don’t see any particular config option for adding a custom ca.crt, which seems to be the missing piece when going through an intercepting proxy.

Is there any other workaround I could try?

Hi alienmind,

Did you manage to resolve it?

I tried digging for quite a while but could not get it to behave. I even added the root CA to the controller container and the api-server container and updated the CA certs on those containers. I tried different variations of the cloud-init (with one cert, multiple certs, etc.). No luck though :confused:

I keep getting tls: failed to verify certificate: x509: certificate signed by unknown authority

Any help would greatly be appreciated. In my case it is ZScaler that will intercept the SSL.

No, I’m afraid I didn’t fix it

That’s unfortunate @alienmind. @tlm I tried checking the model (kubeflow) and also pushing the cloud-init with the certs in there again, without much luck.

Is there anything else we can try @tlm?

Thanks for the responses and help :slight_smile:

Hi @alienmind & @dum-pster,

My apologies for missing the earlier responses; the notifications must have passed me by.

In any event, we don’t currently have a mechanism to extend Juju’s TLS trust store, as we predominantly rely on what the host ships with. It’s odd that you have both tried to add new TLS certs to the system and Juju has not trusted them.

Can I ask what method/steps you used to do this?

For reference, as Juju is written in Go, we rely on Go’s implementation for loading the system certificate store. On Ubuntu, Go will pull this information from /etc/ssl/certs/ca-certificates.crt. Are you able to confirm your CA cert lives in this location?
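
Something along these lines (the file name and grep pattern are placeholders for your own CA) should confirm it ended up in that bundle:

# copy the corporate root CA to where update-ca-certificates picks it up
sudo cp corporate-root.crt /usr/local/share/ca-certificates/corporate-root.crt
sudo update-ca-certificates
# print the subject of every cert in the bundle Go reads and look for yours
openssl crl2pkcs7 -nocrl -certfile /etc/ssl/certs/ca-certificates.crt \
  | openssl pkcs7 -print_certs -noout | grep -i corporate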

There is also an environment variable you can try, SSL_CERT_FILE, to load a custom CA file.
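
For example, to try it on the client side (the path is just a placeholder for wherever your CA file lives):

export SSL_CERT_FILE=/usr/local/share/ca-certificates/corporate-root.crt
juju deploy kubeflow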

Are you also able to confirm that the intermediate proxy is returning the full trust chain to the client, all the way up to the root? That could be causing issues as well.
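
One way to inspect the chain the proxy presents (the proxy address is a placeholder; -proxy needs OpenSSL 1.1.0 or later):

openssl s_client -connect api.charmhub.io:443 \
  -proxy my.corporate.proxy:port -showcerts </dev/null 2>/dev/null | grep -E ' s:| i:'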

@tlm and @alienmind I finally got it to work, but it took trying a number of things. I’ll add my steps after answering your questions.

I started by installing microk8s and juju through snaps on my WSL 2 Ubuntu. To do that I already had to add the root CAs as trusted CAs through /usr/local/share/ca-certificates/, followed by an update-ca-certificates to include them in the /etc/ssl/certs/ca-certificates.crt chain. Once that was done I had fairly consistent behaviour in WSL. Next, I added dns, storage, ingress, and metallb to the one-node cluster.
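
Roughly like this (the CA file name and MetalLB address range are just examples from my setup):

sudo cp corporate-root-ca.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates
microk8s enable dns
microk8s enable storage
microk8s enable ingress
microk8s enable metallb:10.64.140.43-10.64.140.49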

So far so good. I also added GPU support because I wanted to test CUDA. I had to take some steps to get that sorted, but all was good up to this point.

Next, I bootstrapped the Juju controller, added the model, and tried to deploy. During that deploy the root CA was not picked up, so I tried:

  • the cloudinit-userdata
  • opening a shell into the bootstrapped controller and explicitly adding the certs in there (as I did on the Ubuntu-on-WSL OS)

None of that really worked. I checked the controller model and the kubeflow model; both did have the correct cloudinit-userdata, and I knew it should work if those certs were in the trusted CA chain, but it really seems like they are not added.
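
For reference, this is roughly how I checked the model config and got a shell in the controller (the pod/container names may differ on your side):

# confirm the cloudinit-userdata made it into the model configs
juju model-config -m controller cloudinit-userdata
juju model-config -m kubeflow cloudinit-userdata
# open a shell in the controller pod to add the certs manually
microk8s kubectl exec -it -n controller-micro controller-0 -c api-server -- bash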

I’ll try to add my current workaround in the next post.


My current workaround uses

  • cert-manager,
  • trust-manager,
  • and gatekeeper

to take a configmap created in the cert-manager namespace that contains my trusted CA chain and sync it into every namespace labelled trust=enabled. The intention is to override /etc/ssl/certs/ with this chain. Gatekeeper is used to add the label to the namespaces I want (before they are created), so I added a rule for controller-micro and one for kubeflow that assigns the label that trust-manager and cert-manager use to decide where to deploy the configmap. Gatekeeper also adds a volume and a volume mount to the pods in those namespaces to override the contents of /etc/ssl/certs/, and with that in place the deploy seems to work. (I now have some upgrades to istio-pilot to take care of, but that should be okay.)

Here are some steps, should you want to try it @alienmind. I might have overlooked some and, of course, use at your own risk :slight_smile:

microk8s enable helm
microk8s helm repo add jetstack https://charts.jetstack.io --force-update
microk8s helm upgrade -i -n cert-manager cert-manager jetstack/cert-manager --set installCRDs=true --wait --create-namespace
microk8s helm upgrade -i -n cert-manager trust-manager jetstack/trust-manager --wait
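
Gatekeeper also needs to be installed for the mutations below (see the corrections further down the thread); one way, assuming the upstream Helm chart, is:

microk8s helm repo add gatekeeper https://open-policy-agent.github.io/gatekeeper/charts
microk8s helm upgrade -i -n gatekeeper-system gatekeeper gatekeeper/gatekeeper --create-namespace --wait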

Time to create that configmap. I used a copy of my chain but use what works for you:

sudo cp /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates-shorter.crt
microk8s kubectl create configmap -n cert-manager trusted-certs --from-file=/etc/ssl/certs/ca-certificates-shorter.crt

That configmap can now be used as a source in a trust-manager Bundle, though many different sources are possible:

microk8s kubectl apply -f - <<EOF
apiVersion: trust.cert-manager.io/v1alpha1
kind: Bundle
metadata:
  name: "trusted.ca.certs.com"  # The bundle name will also be used for the target
spec:
  sources:
  # Include a bundle of publicly trusted certificates which can be
  # used to validate most TLS certificates on the internet, such as
  # those issued by Let's Encrypt, Google, Amazon and others.
  - useDefaultCAs: true
  # A ConfigMap in the trust-manager namespace
  - configMap:
      name: "trusted-certs"
      key: "ca-certificates-shorter.crt"
  target:
    # Sync the bundle to a ConfigMap called `trusted.ca.certs.com` in every namespace which
    # has the label "trust=enabled".
    # Each ConfigMap will include a PEM-formatted bundle under the key "ca-certificates.crt"
    # (the key the volume below expects); we also request a binary JKS bundle named "bundle.jks".
    configMap:
      key: "ca-certificates.crt"
    additionalFormats:
      jks:
        key: "bundle.jks"
    namespaceSelector:
      matchLabels:
        trust: "enabled"
EOF

Next, we’ll need Gatekeeper to add the labels to the namespaces that you want. I created two files:

  • for the controller namespace trusted-annotation-controller.yml:
apiVersion: mutations.gatekeeper.sh/v1
kind: AssignMetadata
metadata:
  name: trusted-annotation-controller-micro
spec:
  match:
    scope: Cluster
    name: controller-micro
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  location: "metadata.labels.trust"
  parameters:
    assign:
      value:  "enabled"
  • and for the kubeflow namespace trusted-annotation-kubeflow.yml:
apiVersion: mutations.gatekeeper.sh/v1
kind: AssignMetadata
metadata:
  name: trusted-annotation-kubeflow
spec:
  match:
    scope: Cluster
    name: kubeflow
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  location: "metadata.labels.trust"
  parameters:
    assign:
      value:  "enabled"

Once these are applied, when the namespaces are created the label will be added, which will in turn be picked up by cert-manager and trust-manager, and the configmaps will be added subsequently.

microk8s kubectl apply -f trusted-annotation-controller.yml
microk8s kubectl apply -f trusted-annotation-kubeflow.yml
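
Once the namespaces exist you can verify the label and the synced configmap (names match the manifests above):

microk8s kubectl get namespace controller-micro kubeflow --show-labels
microk8s kubectl get configmap trusted.ca.certs.com -n kubeflow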

Now these configmaps need to be mapped to a volume and volume mount that each pod in the namespace will use. Gatekeeper can again help with that:

This is the volume, again stored in a yml file, trusted-gatekeeper-volume.yml:

apiVersion: mutations.gatekeeper.sh/v1
kind: Assign
metadata:
  name: trusted-ca-volume
spec:
  applyTo:
  - groups: [""]
    kinds: ["Pod"]
    versions: ["v1"]
  match:
    scope: Namespaced
    kinds:
    - apiGroups: ["*"]
      kinds: ["Pod"]
    namespaces: ["controller-micro","kubeflow"]
    excludedNamespaces: ["kube-system", "public", "kyverno","gatekeeper-system","kube-public","kube-node-lease","default","gpu-operator-resources","ingress"]
  location: "spec.volumes[name:ca-certificates-only]"
  parameters:
    assign:
      value:
        name: ca-certificates-only
        configMap:
          name: 'trusted.ca.certs.com'
          defaultMode: 0644
          optional: false
          items:
          - key: 'ca-certificates.crt'
            path: 'ca-certificates.crt'

This is the volume mount, trusted-gatekeeper-volume-mount.yml:

apiVersion: mutations.gatekeeper.sh/v1
kind: Assign
metadata:
  name: trust-ca-volumemount
spec:
  applyTo:
  - groups: [""]
    kinds: ["Pod"]
    versions: ["v1"]
  match:
    scope: Namespaced
    kinds:
    - apiGroups: ["*"]
      kinds: ["Pod"]
    namespaces: ["controller-micro","kubeflow"]
    excludedNamespaces: ["kube-system", "public", "kyverno","gatekeeper-system","kube-public","kube-node-lease","default","gpu-operator-resources","ingress"]
  location: "spec.containers[name:*].volumeMounts[name:ca-certificates-only]"
  parameters:
    assign:
      value:
        mountPath: /etc/ssl/certs/
        name: ca-certificates-only
        readOnly: true

Apply and happy bootstrapping and deploying :slight_smile:

 microk8s kubectl apply -f trusted-gatekeeper-volume.yml
 microk8s kubectl apply -f trusted-gatekeeper-volume-mount.yml
 juju bootstrap microk8s micro
 juju add-model kubeflow
 juju deploy cs:kubeflow-lite
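
To double-check that the mount actually overrides the bundle inside the controller pod (the container name may differ on your setup):

microk8s kubectl exec -n controller-micro controller-0 -c api-server -- ls -l /etc/ssl/certs/
microk8s kubectl exec -n controller-micro controller-0 -c api-server -- grep -c 'BEGIN CERTIFICATE' /etc/ssl/certs/ca-certificates.crt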

I combined a couple of blog posts, but this eventually worked.


Thanks for the thorough guide @dum-pster! I’ll definitely give it a try to double-check and get back to you.

Hey everyone, I am having the same issue (SSL Inspection - Custom Certificate - #2 by kimwnasptd). Using the workaround described by @dum-pster has not worked for me yet, as there are some issues with the provided manifests.

I think there should be an easier way to make juju aware of the custom CA certificate.

Agree with @lukas!

I tried the workaround again and, whilst there were some issues, it works for me. This is what I had to correct / look out for:

  • make sure to install gatekeeper

  • the indentation in the spec of the controller AssignMetadata manifest is wrong, so it has to be updated

  • the name of the controller namespace in that spec has to be updated to whatever name you choose

  • there is an error in the trusted-gatekeeper-volume.yml manifest at line 15: the array needs to be closed (a copy-paste mistake)

  • the same copy-paste error is in trusted-gatekeeper-volume-mount.yml … :confused:

  • double-check the namespace name for the controller in the gatekeeper volume manifests