Different Pod and Service Ports

I’m working on a charm which should use a different port for its Service and its Pod.

In a Service YAML, that would look like this:

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

In the case of my k8s charm, as far as I know, I can only state a port number in the containers section like this:

            'containers': [
                {
                    'name': 'kubernetes-dashboard',
                    'imageDetails': dashboard_image_details,
                    'imagePullPolicy': 'Always',
                    'ports': [
                        {
                            'name': 'dashboard',
                            'containerPort': 8443,
                            'protocol': 'TCP',
                        },
                    ],
                },
            ],

This port is then used for both the pod and the Service. Is there a way I can define those two separately?

Why does it need to use a different port? I’m guessing there isn’t anything else already running in the container on 8443. But perhaps the application can’t be configured to listen on a different port?

Just to expand on this: when exposing an application, shouldn’t the admin decide how it is exposed, rather than the charm deciding the final ingress port? There could be a default value that can be configured, but it does seem odd to have a default exposure on 8443.

Sometimes the target port is hardcoded in the entrypoint of the Docker image.
In that case, we can create a Service like this:

kubernetesResources:
  services:
    - name: my-service1
      labels:
        foo: bar
      spec:
        selector:
          app: MyApp
        ports:
          - protocol: TCP
            port: 80
            targetPort: 9376

The spec in kubernetesResources can be set like this:

layer.caas_base.pod_spec_set(spec, k8s_resources=kubernetes_resources)
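Putting the two pieces together, here is a minimal sketch of the charm code. The container name, selector, and image details are placeholders rather than the exact charm code, and the `pod_spec_set` call is shown commented out so the sketch runs standalone:

```python
# Sketch: a pod spec plus an extra Service under kubernetesResources.
# Names and selectors here are illustrative placeholders.

spec = {
    'containers': [{
        'name': 'my-app',
        'imagePullPolicy': 'Always',
        'ports': [{
            'name': 'backend',
            'containerPort': 8443,  # port the image is hardcoded to listen on
            'protocol': 'TCP',
        }],
    }],
}

kubernetes_resources = {
    'services': [{
        'name': 'my-service1',
        'labels': {'foo': 'bar'},
        'spec': {
            'selector': {'app': 'MyApp'},
            'ports': [{
                'protocol': 'TCP',
                'port': 443,         # port the Service exposes
                'targetPort': 8443,  # port the container listens on
            }],
        },
    }],
}

# In a reactive charm this would then be applied with:
# from charms import layer
# layer.caas_base.pod_spec_set(spec, k8s_resources=kubernetes_resources)
```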

Hey, this is exactly the case: the image expects 8443 and the APIs that consume the service expect 443. I tried your solution the following way:

            'kubernetesResources': {
                'services': [{
                    'name': 'kubernetes-dashboard',
                    'spec': {
                        'selector': {
                            'juju-app': 'kubernetes-dashboard',
                        },
                        'ports': [{
                            'protocol': 'TCP',
                            'port': 443,
                            'targetPort': 8443,
                        }],
                    },
                }],
            },

I was able to deploy the charm, but the same kubernetes-dashboard service with 8443 is created, ignoring this part of the spec completely.

it does seem odd to have a default exposure on 8443

The Service is the exposure point, and the container is the backend which is not expected to be directly accessed. It seems like a fairly common use-case to have the backends running on non-standard, unprivileged ports with a Service or load balancer or what-have-you exposing the official ingress point on a standard, privileged port.

Before K8s, this would have been handled in Juju by some charm representing that ingress point, such as haproxy or kubeapi-load-balancer, and that charm would have some config to allow the admin to control the exposed port while the relation to the backend would allow it to specify its internal port. Now that Juju is managing the Service for K8s applications, it seems like it needs to be able to handle the distinction.

The services under kubernetesResources allow the charm to create extra Services.
In this case, we will have two Services created.
The main Service always maps 8443 to 8443.
The extra one maps 443 to 8443, but you have to pass the address of this Service to the other side by hand.
This is a current limitation, because we don’t model Services/ingress etc. properly yet.
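As a small self-contained model of the two resulting Services (the extra Service name matches the earlier snippet; the in-cluster DNS form shown is the usual Kubernetes convention and an assumption here, not Juju output):

```python
# Toy model of the behaviour described above: the main Service mirrors
# the containerPort, while the extra Service under kubernetesResources
# carries the 443 -> 8443 mapping.

namespace = 'kubernetes-dashboard'

main_service = {'name': 'kubernetes-dashboard', 'port': 8443, 'targetPort': 8443}
extra_service = {'name': 'my-service1', 'port': 443, 'targetPort': 8443}

# Consumers must be handed the extra Service's address manually, e.g. the
# usual in-cluster DNS form <service>.<namespace>.svc (an assumption here):
extra_address = f"{extra_service['name']}.{namespace}.svc:{extra_service['port']}"
print(extra_address)  # my-service1.kubernetes-dashboard.svc:443
```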

Is this planned to change in future releases?

It’s definitely an important feature, but it’s currently not on the roadmap.
Would you file a bug so the team can prioritise it and plan for it?
Thank you

Done: https://bugs.launchpad.net/juju/+bug/1902000