Add an nginx-ingress controller to a Charmed Kubernetes deployment

I would like to add another nginx-ingress controller that binds to specific kubernetes-worker nodes in a Charmed Kubernetes environment, so that traffic from outside can reach two different IP groups. Is there a specific command for this, or does this have to be hard-coded in another charm?

Ping @k8s-charmers. Does anyone have experience with this?

I think you could do this by adding a second group of workers with a different Juju application name. I believe the ingress daemonset will only run on the nodes of a single Juju application, so you could have one ingress controller per Juju application that’s using the kubernetes-worker charm. I haven’t actually tried this myself (yet), but I would be interested to hear about your results if you try it. If you encounter any problems we could help sort those out.
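
For illustration, a rough sketch of that idea, assuming the kubernetes-worker charm's ingress config option behaves the way I expect (worth double-checking with juju config on your charm revision):

juju deploy cs:~containers/kubernetes-worker ingress-workers
# enable the nginx ingress daemonset only on the new application's nodes
juju config ingress-workers ingress=true
juju config kubernetes-worker ingress=false

If the daemonset really is scheduled per Juju application, each worker application would then run its own ingress controller on its own nodes.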

So I have deployed another kubernetes worker with

juju deploy cs:~containers/kubernetes-worker-661 worker4

and a new load balancer with

juju deploy cs:~containers/kubeapi-load-balancer-715 balancer2

and added the following relations:

Relation provider                    Requirer                             Interface              Type
balancer2:loadbalancer               kubernetes-master:loadbalancer       public-address         regular
balancer2:website                    worker4:kube-api-endpoint            http                   regular
kubernetes-master:kube-control       worker4:kube-control                 kube-control           regular
openstack-integrator:clients         worker4:openstack                    openstack-integration  regular
vault:certificates                   worker4:certificates                 tls-certificates       regular
worker4:cni                          flannel:cni                          kubernetes-cni         subordinate
worker4:container-runtime            containerd:containerd                container-runtime      subordinate
worker4:coordinator                  worker4:coordinator                  coordinator            peer
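
For reference, those correspond to juju add-relation commands along these lines (the coordinator peer relation is created automatically):

juju add-relation balancer2:loadbalancer kubernetes-master:loadbalancer
juju add-relation balancer2:website worker4:kube-api-endpoint
juju add-relation kubernetes-master:kube-control worker4:kube-control
juju add-relation openstack-integrator:clients worker4:openstack
juju add-relation vault:certificates worker4:certificates
juju add-relation worker4:cni flannel:cni
juju add-relation worker4:container-runtime containerd:containerd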

But juju status shows worker4 as ‘Waiting for kubelet,kube-proxy to start.’ and kubernetes-master/0 as ‘hook failed: “kube-api-endpoint-relation-broken”’, while kubernetes-master/1 is running without problems. The relevant passages of the first master’s log under /var/log/juju read:

2020-05-09 14:29:00 INFO juju-log kube-api-endpoint:47: Executing ['kubectl', '--kubeconfig=/root/.kube/config', 'get', 'service', '--namespace', 'kube-system', 'kube-dns', '--output', 'json']
2020-05-09 14:29:03 DEBUG kube-api-endpoint-relation-broken ERROR permission denied

whereas on the second master node it says:

2020-05-09 14:33:29 INFO juju-log kube-api-endpoint:60: Executing ['kubectl', '--kubeconfig=/root/.kube/config', 'get', 'po', '-n', 'kube-system', '-o', 'json', '--request-timeout', '10s']
2020-05-09 14:33:30 INFO juju-log kube-api-endpoint:60: Checking system pods status: coredns-799df594b4-8wjdf=Running, csi-cinder-controllerplugin-0=Running, csi-cinder-nodeplugin-gnhw9=Running, csi-cinder-nodeplugin-q6ff5=Running, csi-cinder-nodeplugin-qmmjh=Running, kube-state-metrics-b7fcfd576-8rr6t=Running, kube-state-metrics-b7fcfd576-l7f8l=Failed, metrics-server-v0.3.6-6dc669d66-zntrn=Running, openstack-cloud-controller-manager-2s55t=Running, openstack-cloud-controller-manager-gqtcn=Running, openstack-cloud-controller-manager-k9jq8=Running
2020-05-09 14:33:30 INFO juju-log kube-api-endpoint:60: status-set: active: Kubernetes master running.

What is more, when I ssh into the first master node and execute

kubectl get svc -n kube-system kube-dns --output json --kubeconfig=/root/.kube/config

manually, it actually returns the expected output, without the permission denied error. I have also tried to restart the relation that is causing the error, but to no avail.
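
For reference, retrying the failed hook looks roughly like this with the standard Juju commands (a sketch; the unit name is taken from juju status):

juju resolved kubernetes-master/0
# re-check the unit's log for the failing relation hook
juju debug-log --include unit-kubernetes-master-0 --replay | grep kube-api-endpoint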

Another approach would be to add units to the existing kubeapi-load-balancer and kubernetes-worker applications. But then the question becomes how to bind specific units together, since relations are defined between applications rather than between units. Is there a source on this I could consult?
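
For context, adding the units themselves would just be, e.g.:

juju add-unit kubernetes-worker -n 1
juju add-unit kubeapi-load-balancer -n 1

but as far as I understand, a relation always applies to every unit of an application, so there does not seem to be a per-unit variant of juju add-relation that could bind specific workers to a specific load balancer.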