A while ago @jnsgr introduced the Juju terraform provider, and since then we've all been celebrating the versatility of HashiCorp's Terraform backed by our almighty Juju.
Recently our friends in the CommSys team raised an interesting question: "Is it possible for the terraform juju provider to talk to a controller within a k8s cluster without utilizing the nice and useful proxy-config? What about clusters that don't provide that proxy?" In this post we demonstrate a simple way to access a controller in any type of k8s cluster, with a bit of support from the environment. The keyword is: load balancer.
Normally, if the terraform juju provider is working with a k8s cluster that does provide the proxy we mentioned above (such as microk8s), everything's good and you wouldn't even notice anything different than usual: the terraform juju provider uses Juju's HTTP client under the hood, which in turn knows how to utilize the proxy-config in your local controllers.yaml that's generated at bootstrap time. However, you need another way if there's no such proxy support from the cluster itself.
One workaround, explained in this comment, is to use kubectl port-forward to open a custom proxy that can be accessed locally and then just go for controller_addresses = "0.0.0.0:17070" in the terraform plan, which is pretty much what the Juju client does with the proxy-config anyway:
proxy-config:
  type: kubernetes-port-forward
  config:
    api-host: https://127.0.0.1:16443
    ca-cert: ....
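For context, the manual version of that workaround looks something like the following (the namespace and service names here are taken from the bootstrap example later in this post, so adjust them to your own controller):
# forward the controller API port to the local machine; --address 0.0.0.0 matches the
# controller_addresses = "0.0.0.0:17070" setting mentioned above
$ microk8s.kubectl port-forward -n controller-k8s29 --address 0.0.0.0 svc/controller-service 17070:17070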
Now, this is a fine and valid solution, but it feels like a hack, and I'm sure people who are more experienced than me in the k8s ecosystem could say more about the pros and cons of this method.
Here we explore another approach: bring up a load balancer in the same namespace and use that to communicate with the Juju controller. This is a general solution for any k8s cluster, without fiddling with details like ports, since load balancer functionality is commonly available in k8s clusters, often through an integration or plugin supported by the cloud provider, e.g., MetalLB, HAProxy, etc.
Let's make a quick example using microk8s (only because I'm familiar with it; you may use any cluster to follow along).
So first off, we need a bootstrapped controller (like pylibjuju, the terraform juju provider is a diva like that and won't bootstrap one for you):
$ juju bootstrap microk8s k8s29
A new namespace will be created on microk8s under the name controller-k8s29. And once the bootstrap is complete, we'll be able to see the controller-service in that namespace:
$ microk8s.kubectl get services -n controller-k8s29 controller-service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
controller-service ClusterIP 10.152.183.200 <none> 17070/TCP 2m
This is the service that we want to expose, so we need to create a load balancer service that'll hook us up. To do this, we need two things: 1) a plugin that can back load balancer services, and 2) the service itself. For (1), we'll need to find out which plugin to use depending on the cloud provider we're using; here I used MetalLB, which is a popular plugin available on microk8s as an addon. For (2), let's use the following service manifest to bring up the actual service:
apiVersion: v1
kind: Service
metadata:
  name: controller-service-lb
  namespace: controller-k8s29
spec:
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: api-server
    port: 17070
    protocol: TCP
    targetPort: 17070
  selector:
    app.kubernetes.io/name: controller
  sessionAffinity: None
  type: LoadBalancer
The important parts of this are: the namespace (controller-k8s29), the port (17070), and the type of the service (LoadBalancer). So write this into a file and go microk8s.kubectl apply -f <file>.
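On microk8s, those two steps could look roughly like this (the MetalLB address range below is just an assumption, pick one that fits your network, and the manifest filename is whatever you saved the YAML above as):
# (1) enable the MetalLB addon with an address pool for load balancer IPs
$ microk8s enable metallb:10.64.140.43-10.64.140.49
# (2) create the LoadBalancer service from the manifest above
$ microk8s.kubectl apply -f controller-service-lb.yaml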
Once this is up, we should be able to see both services running in our namespace (i.e. the controller service and the load balancer we'll connect to):
$ microk8s.kubectl get services -n controller-k8s29
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
modeloperator ClusterIP 10.152.183.56 <none> 17071/TCP 5m57s
controller-service LoadBalancer 10.152.183.200 10.64.140.44 17070:30744/TCP 7m
controller-service-lb LoadBalancer 10.152.183.116 10.64.140.45 17070:32429/TCP 20s
That external IP of the load balancer service (10.64.140.45) is the one we'll talk to using our clients (it can be the HTTP client in terraform, or even the juju CLI). The rest (i.e. the internals that hook up the services) is all handled by the cluster and the load balancer plugin in use.
Now for a basic test, we should be able to curl that address:
$ curl -k https://10.64.140.45:17070
As a small in-between experiment using the juju CLI, we should be able to edit ~/.local/share/juju/controllers.yaml, remove the proxy-config field (to avoid using the proxy that the client would normally use), and add the address of the load balancer (10.64.140.45:17070) to the api-endpoints field. After this, the client should work just as normal, talking to the load balancer service instead of the controller directly; e.g., just try juju add-model foo.
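A rough sketch of what the relevant entry in controllers.yaml might end up looking like after that edit (other fields trimmed, and the exact layout may differ slightly between Juju versions):
controllers:
  k8s29:
    api-endpoints: ['10.64.140.45:17070']
    ca-cert: |
      -----BEGIN CERTIFICATE-----
      ...
      -----END CERTIFICATE-----
    # proxy-config: <removed>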
Finally, to have the terraform provider talk to this, very similarly to our little experiment above, just point controller_addresses to the load balancer in the terraform plan, and under the hood the Juju HTTP client should be able to reach the Juju controller through the load balancer just fine. Note that in addition to controller_addresses, we'll also need to specify in the terraform plan some credentials to authenticate with the controller. So we'll need the username, password and ca_certificate fields, all of which can easily be obtained by running juju show-controller --show-password.
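If you'd rather script that, the password can be pulled out of the JSON output too; a sketch, assuming the controller name k8s29 from this walkthrough and that the account details live under the account key:
$ juju show-controller --show-password --format=json | jq -r '.k8s29.account.password'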
I suppose one annoying bit there is having to put the ca-cert into a file and use the file() function in the terraform plan (unless you want to have the certificate itself within the plan). This** might be helpful with that.
In the end, the provider block in the terraform plan will look like this:
provider "juju" {
controller_addresses = "10.64.140.45:17070"
username = "admin"
password = "e887762e2972504767847........."
ca_certificate = file("./ca-cert.pem")
}
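As a quick smoke test of the whole setup (the model name here is only an example), a minimal resource on top of that provider block could be:
resource "juju_model" "lb_test" {
  name = "lb-test"
}
A terraform init followed by terraform apply should then create the model through the load balancer, much like the juju add-model experiment above.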
So, this way, the terraform juju provider can talk to a Juju controller on a k8s cluster just fine, without going through a proxy that may or may not be provided by the particular cluster in use, and we can continue to rock the cool Terraform/Juju duo on any Kubernetes cluster that can deploy a load balancer service.
Cheers everyone!
** juju show-controller --show-password --format=json | jq -r '.k8s29.details."ca-cert"' >> ca-cert.pem