Configuring ingress for sidecar charms

As mentioned in The Future of Charmed Operators on Kubernetes, there’s a new approach to charming, available in preview since Juju 2.9-rc7, where the workload container and the charm run in the same pod. Because we’re no longer using pod-spec-set to configure our workloads, we needed a way to configure an ingress resource to make our workload reachable from outside the k8s cluster.

To do this, we’ve created an nginx-ingress-integrator charm. Here’s an example of this charm in action:

# First add our Juju model
juju add-model gunicorn
# Ensure you’re running Juju 2.9-rc11 or later. We'll use the 
# name "ingress" in this model for brevity.
juju deploy nginx-ingress-integrator ingress
# Let’s also deploy a workload that we can relate it to.
git clone -b pebble https://git.launchpad.net/~mthaddon/charm-k8s-gunicorn/+git/charm-k8s-gunicorn
cd charm-k8s-gunicorn
charmcraft build
juju deploy ./gunicorn.charm --resource gunicorn-image=gunicorncharmers/gunicorn-app:20.0.4-20.04_edge
# Now relate the two charms
juju relate ingress gunicorn
# Now inspect our ingress resource
microk8s kubectl describe ingress -n gunicorn

You would then get output similar to the following:

Name:             gunicorn-ingress
Namespace:        gunicorn
Address:          
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host          Path  Backends
  ----          ----  --------
  foo.internal  
                /   gunicorn-service:80 (10.1.234.12:80)
Annotations:    nginx.ingress.kubernetes.io/rewrite-target: /
                nginx.ingress.kubernetes.io/ssl-redirect: false
Events:         <none>

Assuming you’re using MicroK8s, you can test this end to end by installing the ingress controller with microk8s enable ingress. You should then check the ingress class by running microk8s.kubectl describe pod -n ingress | grep ingress-class. In my case this was set to “public”, so I ran juju config ingress ingress-class=public, and then I can connect to my application through the ingress as follows:

curl -H "Host: foo.internal" http://127.0.0.1

This charm provides a library that developers can use to implement an ingress relation easily. You can integrate it into your charm by running charmcraft list-lib nginx-ingress-integrator followed by charmcraft fetch-lib charms.nginx_ingress_integrator.v0.ingress. As a trivial example, you would then implement the relation as follows:

from charms.nginx_ingress_integrator.v0.ingress import IngressRequires

# In __init__:
self.ingress = IngressRequires(
    self, 
    {
        "service-hostname": self.config["external_hostname"],
        "service-name": self.app.name,
        "service-port": 80,
    }
)

# In config-changed handler
self.ingress.update_config({"service-hostname": self.config["external_hostname"]})

The charm supports the following via the relation:

  • Rate limiting (with a whitelist of CIDRs to exclude)
  • Setting maximum allowed body size for file uploads
  • Configuring retrying of errors against the next server
  • A session cookie to use for cookie-based session affinity, and the age of that cookie
  • The TLS certificate to use for your service if applicable

All of these options can also be configured at deploy time. In addition, there’s an ingress-class option for the case where your cluster has multiple ingress controllers, allowing you to target the correct one.
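As a rough sketch of deploy-time configuration, it might look something like this (ingress-class is the option used earlier in this post; the other option names are illustrative guesses based on the descriptions above, so check juju config nginx-ingress-integrator for the exact names):

# "ingress-class" is the option used earlier in this post; "max-body-size"
# and "tls-secret-name" are illustrative - confirm the exact option names
# with "juju config nginx-ingress-integrator".
juju deploy nginx-ingress-integrator ingress --config ingress-class=public
juju config ingress max-body-size=20m tls-secret-name=my-tls-secret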

Please give it a try, add support for the relation in your sidecar charm, and let me know if you have any issues with it or questions about how it works. I’m looking for early feedback, as well as suggestions for other features you’d want it to support.


Thanks for the post, it’s really helpful. I hit one small papercut: s/service_hostname/service-hostname/ in the example code for the config-changed handler.

Ah right, updated.

Just as a side note, we’re trying to document all microk8s commands with spaces rather than a dot, e.g. microk8s kubectl vs microk8s.kubectl, as the “dot” form only works on Linux (not Windows or macOS).

Thanks!

Thanks for this writeup and excellent charm for nginx-ingress integration/passthrough, @mthaddon!

I’ve been writing a Bitwarden sidecar charm and was able to get direct ingress working perfectly on the first try. However, I’ve been trying to use nginx-ingress as an HTTPS reverse proxy for ingress in MicroK8s.

I’ve found the Kubernetes documentation for nginx-ingress and TLS (using the tls-secret-name config in the spec and adding my RSA cert/key to the namespace with kubectl create secret), but it doesn’t seem to be working as intended.

https://kubernetes.github.io/ingress-nginx/user-guide/tls/
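Roughly, the secret was created along these lines (file names are from my local setup, and the secret name is what I pass as tls-secret-name):

# Create a TLS secret in the model's namespace; its name ("bitwarden" here)
# must match the tls-secret-name sent over the relation.
microk8s kubectl create secret tls bitwarden -n test \
  --cert=bitwarden.juju.crt --key=bitwarden.juju.key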

My charm provides the following:

    def _setup_ingress(self):
        ingress_config = {
            "service-hostname": self._stored.dns_domain,
            "service-name": self.app.name,
            "service-port": self._stored.service_port,
        }
        if self._stored.tls_secret_name:
            ingress_config["tls-secret-name"] = self._stored.tls_secret_name
        logger.debug(f"Setting up ingress with env: {ingress_config}")
        if self.ingress:
            self.ingress.update_config(ingress_config)
        else:
            self.ingress = IngressRequires(self, ingress_config)

    def _get_or_generate_admin_token(self):
        admin_token_config = self.config.get("admin_token", None)
        if not admin_token_config:
            return self._generate_new_admin_token()
        return admin_token_config

Within my charm’s unit logs, I see I’m passing the proper bits through the relation:

unit-bitwarden-k8s-0: 17:44:25 DEBUG unit.bitwarden-k8s/0.juju-log Setting up ingress with env: {'service-hostname': 'bitwarden.juju', 'service-name': 'bitwarden-k8s', 'service-port': 8000, 'tls-secret-name': 'bitwarden'}

I’m also seeing the ingress matching the upstream documentation requirements:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
  creationTimestamp: "2021-06-15T04:09:18Z"
  generation: 3
  labels:
    app.juju.is/created-by: nginx-ingress-integrator
  name: bitwarden-k8s-ingress
  namespace: test
  resourceVersion: "372640"
  selfLink: /apis/networking.k8s.io/v1/namespaces/test/ingresses/bitwarden-k8s-ingress
  uid: e751924b-f1aa-4cb8-b4db-f3b6bbeee776
spec:
  ingressClassName: public
  rules:
  - host: bitwarden.juju
    http:
      paths:
      - backend:
          service:
            name: bitwarden-k8s-service
            port:
              number: 8000
        path: /
        pathType: ImplementationSpecific
  tls:
  - hosts:
    - bitwarden.juju
    secretName: bitwarden
status:
  loadBalancer:
    ingress:
    - ip: 127.0.0.1

$ kubectl describe ing -n test
Name:             bitwarden-k8s-ingress
Namespace:        test
Address:          127.0.0.1
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
  bitwarden terminates bitwarden.juju
Rules:
  Host            Path  Backends
  ----            ----  --------
  bitwarden.juju  
                  /   bitwarden-k8s-service:8000 (10.1.146.255:8000)
Annotations:      nginx.ingress.kubernetes.io/rewrite-target: /
Events:
  Type    Reason  Age                From                      Message
  ----    ------  ----               ----                      -------
  Normal  UPDATE  17h (x4 over 18h)  nginx-ingress-controller  Ingress test/bitwarden-k8s-ingress
  Normal  CREATE  35m                nginx-ingress-controller  Ingress test/bitwarden-k8s-ingress
  Normal  CREATE  19m                nginx-ingress-controller  Ingress test/bitwarden-k8s-ingress
  Normal  UPDATE  19m                nginx-ingress-controller  Ingress test/bitwarden-k8s-ingress
  Normal  CREATE  16m                nginx-ingress-controller  Ingress test/bitwarden-k8s-ingress
  Normal  UPDATE  16m                nginx-ingress-controller  Ingress test/bitwarden-k8s-ingress


bitwarden-k8s-service      ClusterIP   10.152.183.33    <none>        8000/TCP,80/TCP,443/TCP   18h

Interestingly, the ingress is listening on port 8000 and providing passthrough HTTP traffic; however, with the TLS certificate I expected the ingress to listen on port 443 and provide an HTTPS reverse proxy.

I’ve added the ingress service IP to /etc/hosts as bitwarden.juju; however, it is only listening on the port announced by the bitwarden relation. The service shows port 443, but nothing is actually listening there.

Do you know whether this is a known, supported feature, and whether the issue is more likely MicroK8s or my implementation?

Without HTTPS termination my application won’t be useful, and the app’s upstream documentation suggests an HTTPS reverse proxy rather than processing HTTPS within the app container.

Do we know of any charms that are performing HTTPS ingress with the nginx-ingress-integrator?

I should note that I’m creating a self-signed cert, and there are no errors about the secret in the ingress controller logs. I do wonder if I need to attach a CA to the secret or have it signed by the k8s authority. Is there a pattern being developed (similar to Vault certificates) for k8s charming?
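For reference, the self-signed cert was generated with something along these lines (flags reproduced from memory, with the CN matching the service hostname):

# Generate a self-signed cert/key pair for bitwarden.juju (illustrative flags)
openssl req -x509 -nodes -newkey rsa:4096 -days 365 \
  -keyout bitwarden.juju.key -out bitwarden.juju.crt \
  -subj "/CN=bitwarden.juju"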

Hi Drew,

If you’re deploying on MicroK8s the ingress IP is 127.0.0.1 (you’ll need the “ingress” addon enabled in MicroK8s). I think what you’re seeing is that there are two pieces here:

  • A kubernetes “service” that refers to the services running in the pods, and uses (in your case) port 8000.
  • A kubernetes “ingress” definition that gets picked up by an ingress controller (in your case configured by the “ingress” addon in MicroK8s) and instructs that ingress controller how/where to route traffic.

In Juju status the ingress charm reports the service IP, partly because that’s what old pod-spec-set charms used to report, but also because there’s no way (as far as I’m aware) for the charm to know the IP(s) of the ingress controller.
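If it helps to see the two side by side, something like this (using your model’s namespace) should show the Service with its cluster IP and port next to the Ingress with the 127.0.0.1 address published by the MicroK8s ingress addon:

# The Service carries the cluster IP that Juju reports; the Ingress carries
# the address published by the ingress controller.
microk8s kubectl get service -n test
microk8s kubectl get ingress -n test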

Having said that, I’ve not tested TLS myself, but I know @pjdc has. Unfortunately I’m affected by https://github.com/ubuntu/microk8s/issues/2255, so I don’t currently have end-to-end ingress working on my MicroK8s install and can’t confirm this either way.

Hope that helps.

Tom

Thanks for the feedback, Tom.

I did get a chance to try a few more ideas, attempting to use both the ingress address and 127.0.0.1, with /etc/hosts entries pointing to my "dns_domain" of bitwarden.juju.

It appears that the ingress is performing a direct passthrough from the ingress address advertised in juju status:

nginx-ingress-integrator/0*  active    idle   10.1.146.217         Ingress with service IP(s): 10.152.183.206

However, nothing is listening on localhost/127.0.0.1 on 80, 443, the original service port 8000, or the updated service port 8143; all attempts with curl or telnet/nc get connection refused. Only the service port is listening on the advertised 10.152.183.206 IP.

You can see in my ingress definition below that the bitwarden.juju hostname should be TLS-encapsulated with the bitwarden secret (SSL cert/key). (For the log below, this secret was renamed to bitwarden-tls.)

drew@grimoire:~$ kubectl describe ingress -n demo bitwarden-k8s-ingress
Name:             bitwarden-k8s-ingress
Namespace:        demo
Address:          127.0.0.1
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
  bitwarden-tls terminates bitwarden.juju
Rules:
  Host            Path  Backends
  ----            ----  --------
  bitwarden.juju  
                  /   bitwarden-k8s-service:8143 (10.1.146.216:8143)
Annotations:      nginx.ingress.kubernetes.io/rewrite-target: /
Events:           <none>

(I changed the port from 8000 to 8143 for a new demo.)

The ingress controller logs show the following; most notably, the secret is being added to the local store without error. Interestingly, no ports are listed in the ingress status for bitwarden-k8s-ingress, just an IP of 127.0.0.1. I’m not sure whether that’s intended or something I should be setting. I was imagining that the nginx-ingress-integrator provided the glue from the inbound external IP to this localhost IP, but maybe that’s incorrect.

-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:       v0.44.0
  Build:         f802554ccfadf828f7eb6d3f9a9333686706d613
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.19.6

-------------------------------------------------------------------------------

I0707 16:56:35.115393       7 flags.go:208] "Watching for Ingress" class="public"
W0707 16:56:35.115429       7 flags.go:211] Only Ingresses with class "public" will be processed by this Ingress controller
W0707 16:56:35.115640       7 client_config.go:614] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0707 16:56:35.115793       7 main.go:241] "Creating API client" host="https://10.152.183.1:443"
I0707 16:56:35.127872       7 main.go:285] "Running in Kubernetes cluster" major="1" minor="21+" git="v1.21.1-3+ba118484dd39df" state="clean" commit="ba118484dd39df570e55e47f082e523cda7583e5" platform="linux/amd64"
I0707 16:56:35.255229       7 main.go:105] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
I0707 16:56:35.256025       7 main.go:115] "Enabling new Ingress features available since Kubernetes v1.18"
I0707 16:56:35.277074       7 nginx.go:254] "Starting NGINX Ingress controller"
I0707 16:56:35.280073       7 event.go:282] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress", Name:"nginx-ingress-tcp-microk8s-conf", UID:"3929ebf1-90f6-4c1c-9637-86f60166bcb7", APIVersion:"v1", ResourceVersion:"546", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress/nginx-ingress-tcp-microk8s-conf
I0707 16:56:35.280092       7 event.go:282] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress", Name:"nginx-ingress-udp-microk8s-conf", UID:"c2289e42-da48-4127-8a04-e3053e3837b8", APIVersion:"v1", ResourceVersion:"547", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress/nginx-ingress-udp-microk8s-conf
I0707 16:56:35.280404       7 event.go:282] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress", Name:"nginx-load-balancer-microk8s-conf", UID:"f531b784-4575-4c17-99da-012144f35ad2", APIVersion:"v1", ResourceVersion:"545", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress/nginx-load-balancer-microk8s-conf
I0707 16:56:36.477474       7 nginx.go:296] "Starting NGINX process"
I0707 16:56:36.477522       7 leaderelection.go:243] attempting to acquire leader lease ingress/ingress-controller-leader-public...
I0707 16:56:36.477825       7 controller.go:146] "Configuration changes detected, backend reload required"
I0707 16:56:36.487218       7 leaderelection.go:253] successfully acquired lease ingress/ingress-controller-leader-public
I0707 16:56:36.487347       7 status.go:84] "New leader elected" identity="nginx-ingress-microk8s-controller-gj8j2"
I0707 16:56:36.541352       7 controller.go:163] "Backend successfully reloaded"
I0707 16:56:36.541429       7 controller.go:174] "Initial sync, sleeping for 1 second"
I0707 16:56:36.541521       7 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress", Name:"nginx-ingress-microk8s-controller-gj8j2", UID:"73f5ccb3-94c0-4489-b2c0-3625ca757911", APIVersion:"v1", ResourceVersion:"5293", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
I0707 17:28:51.981207       7 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"demo", Name:"bitwarden-k8s-ingress", UID:"79a6c020-7b26-4fd4-b532-09c92b61eb62", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"8119", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I0707 17:28:51.981504       7 backend_ssl.go:66] "Adding secret to local store" name="demo/bitwarden-tls"
I0707 17:28:55.273512       7 controller.go:146] "Configuration changes detected, backend reload required"
I0707 17:28:55.327323       7 controller.go:163] "Backend successfully reloaded"
I0707 17:28:55.327707       7 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress", Name:"nginx-ingress-microk8s-controller-gj8j2", UID:"73f5ccb3-94c0-4489-b2c0-3625ca757911", APIVersion:"v1", ResourceVersion:"5293", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
I0707 17:29:36.498697       7 status.go:281] "updating Ingress status" namespace="demo" ingress="bitwarden-k8s-ingress" currentValue=[] newValue=[{IP:127.0.0.1 Hostname: Ports:[]}]
I0707 17:29:36.513918       7 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"demo", Name:"bitwarden-k8s-ingress", UID:"79a6c020-7b26-4fd4-b532-09c92b61eb62", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"8188", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
W0708 06:16:38.818165       7 reflector.go:436] k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: watch of *v1.ConfigMap ended with: very short watch: k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Unexpected watch close - watch lasted less than a second and no items received
W0708 06:16:38.818165       7 reflector.go:436] k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: watch of *v1.Endpoints ended with: very short watch: k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Unexpected watch close - watch lasted less than a second and no items received
W0708 06:16:38.818212       7 reflector.go:436] k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: watch of *v1.Secret ended with: very short watch: k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Unexpected watch close - watch lasted less than a second and no items received
W0708 06:16:38.818229       7 reflector.go:436] k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: watch of *v1.Service ended with: very short watch: k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Unexpected watch close - watch lasted less than a second and no items received
W0708 06:16:38.818635       7 reflector.go:436] k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: watch of *v1beta1.Ingress ended with: very short watch: k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Unexpected watch close - watch lasted less than a second and no items received
E0708 06:16:39.192151       7 leaderelection.go:325] error retrieving resource lock ingress/ingress-controller-leader-public: Get "https://10.152.183.1:443/api/v1/namespaces/ingress/configmaps/ingress-controller-leader-public": dial tcp 10.152.183.1:443: connect: connection refused
E0708 06:16:39.642374       7 reflector.go:138] k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Secret: failed to list *v1.Secret: Get "https://10.152.183.1:443/api/v1/secrets?fieldSelector=%2Ctype%21%3Dhelm.sh%2Frelease.v1&resourceVersion=18475": dial tcp 10.152.183.1:443: connect: connection refused
E0708 06:16:39.642773       7 reflector.go:138] k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1beta1.Ingress: failed to list *v1beta1.Ingress: Get "https://10.152.183.1:443/apis/networking.k8s.io/v1beta1/ingresses?resourceVersion=18475": dial tcp 10.152.183.1:443: connect: connection refused
E0708 06:16:39.662844       7 reflector.go:138] k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://10.152.183.1:443/api/v1/configmaps?labelSelector=OWNER%21%3DTILLER&resourceVersion=69476": dial tcp 10.152.183.1:443: connect: connection refused
E0708 06:16:40.088085       7 reflector.go:138] k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: Get "https://10.152.183.1:443/api/v1/endpoints?resourceVersion=18536": dial tcp 10.152.183.1:443: connect: connection refused
E0708 06:16:40.341479       7 reflector.go:138] k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.152.183.1:443/api/v1/services?resourceVersion=18475": dial tcp 10.152.183.1:443: connect: connection refused
E0708 06:16:41.259415       7 reflector.go:138] k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1beta1.Ingress: failed to list *v1beta1.Ingress: Get "https://10.152.183.1:443/apis/networking.k8s.io/v1beta1/ingresses?resourceVersion=18475": dial tcp 10.152.183.1:443: connect: connection refused
E0708 06:16:54.202586       7 leaderelection.go:325] error retrieving resource lock ingress/ingress-controller-leader-public: Get "https://10.152.183.1:443/api/v1/namespaces/ingress/configmaps/ingress-controller-leader-public": dial tcp 10.152.183.1:443: connect: connection refused
E0709 00:31:51.733534       7 leaderelection.go:325] error retrieving resource lock ingress/ingress-controller-leader-public: Get "https://10.152.183.1:443/api/v1/namespaces/ingress/configmaps/ingress-controller-leader-public": dial tcp 10.152.183.1:443: connect: connection refused
I0709 15:38:51.109731       7 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"demo", Name:"bitwarden-k8s-ingress", UID:"79a6c020-7b26-4fd4-b532-09c92b61eb62", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"228683", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I0709 15:38:54.388141       7 controller.go:146] "Configuration changes detected, backend reload required"
I0709 15:38:54.561115       7 controller.go:163] "Backend successfully reloaded"
I0709 15:38:54.561264       7 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress", Name:"nginx-ingress-microk8s-controller-gj8j2", UID:"73f5ccb3-94c0-4489-b2c0-3625ca757911", APIVersion:"v1", ResourceVersion:"5293", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration

Here are the upstream references I’ve been investigating:
https://kubernetes.github.io/ingress-nginx/user-guide/tls/

The "default-http-backend" not found error appears to be normal/ignorable per this thread:
https://github.com/nginxinc/kubernetes-ingress/issues/966

I believe the errors in the log above are from my kubectl describe ingress -n ingress command during troubleshooting.

I reviewed the microk8s bug you mentioned and it seems I’m also being hit by the same thing.

If I deploy my charm into a MicroK8s instance on a fresh Multipass VM, the nginx-ingress-integrator and curl --insecure https://localhost work fine.

I was also hit by this “nothing responding on 80/443” issue. Similarly, I’d installed and reinstalled MicroK8s and messed around with the snap channel I was installing MicroK8s from.

FWIW, I was able to work around this by uninstalling MicroK8s, doing an iptables-legacy -F of all the MicroK8s/Calico rules that were left behind (incidentally, there’s a now stalebot-closed issue about them getting left behind), reinstalling MicroK8s, and then redeploying my charms, after which 80/443 worked happily.
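In rough commands, that was something like the following (from memory; adjust to your setup):

# Remove MicroK8s, flush the leftover MicroK8s/Calico iptables rules,
# then reinstall and redeploy the charms.
sudo snap remove microk8s
sudo iptables-legacy -F
sudo snap install microk8s --classic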
