Problems using MicroK8s to create a high availability Kubernetes cluster within a private cloud

Hey guys!

I need some help. I have two root servers, a load-balancing public IP address pointing to one of them, and a local VLAN that both servers are on. The goal is to create a highly available Kubernetes cluster with two nodes.

First I tried MicroK8s and its clustering functionality. Both servers were configured to forward ports 80 and 443 to the ingress IP, which does not work because ARP cannot be resolved on the server itself. So I tried installing nginx on both servers to forward to the ingress, but only one host was able to reach the ingress.
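For reference, the port forwarding I describe above was done roughly like this (a sketch only; eth0 as the public interface and 10.64.140.43 as the in-cluster ingress IP are hypothetical placeholders):

```shell
# Forward inbound 80/443 arriving on the public interface to the ingress service IP
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80  -j DNAT --to-destination 10.64.140.43:80
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 10.64.140.43:443
# Masquerade so return traffic from the ingress goes back out through this host
sudo iptables -t nat -A POSTROUTING -d 10.64.140.43 -j MASQUERADE
```

The DNAT rules are where it breaks for me: the packets are rewritten, but the ARP lookup for the ingress IP fails on the node that holds the failover IP.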

My question is now: can I (somehow) achieve an HA Kubernetes cluster on two root servers with Juju? Is this possible? Would I use the localhost (LXD) provider?


@kos.tsakalozos or @tvansteenburgh are you guys able to provide some input? If there’s a best practice recommendation, we can then see how to drive that with Juju

Currently I am thinking about using LXD clustering and Juju:
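Something along these lines is what I have in mind (a sketch under the assumption that the localhost provider is pointed at the LXD cluster; the controller name and bundle choice are mine, not a recommendation from the docs):

```shell
# Bootstrap a Juju controller on the local LXD cluster
juju bootstrap localhost lxd-controller

# Deploy a Kubernetes bundle into LXD containers spread across the cluster
juju deploy charmed-kubernetes

# Watch the units come up
juju status
```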

If this works I would buy a third node, but first I need to know whether the failover would work and how to set it up.

I hope that the problem won’t stay the same (failover IP -> server 1 or server 2 -> internal virtual IP).

You can find a dqlite-based HA build of MicroK8s in the temporary channel edge/test-dqlite (sudo snap install microk8s --classic --channel=edge/test-dqlite). You will need at least 3 nodes to set up the cluster, and you cannot have the instances running on LXD.
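Forming the cluster follows the usual MicroK8s add-node/join flow (the IP and token below are placeholders; add-node prints the exact join command to run):

```shell
# On every node
sudo snap install microk8s --classic --channel=edge/test-dqlite

# On the first node: generate a join token (prints a ready-made join command)
microk8s add-node

# On each additional node (at least 3 nodes total for dqlite HA),
# run the command printed above, of the form:
microk8s join <first-node-ip>:25000/<token>
```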

When setting up the workload, have you looked at any load balancer? We have MetalLB as an addon; you may want to give it a try (microk8s enable metallb).
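Enabling the addon asks for (or, in recent revisions, accepts inline) the address pool MetalLB may hand out; the range here is just an example from a private VLAN:

```shell
# Enable MetalLB with an example address pool on the private VLAN
microk8s enable metallb:10.0.0.100-10.0.0.110

# Services of type LoadBalancer should then get an external IP from that pool
microk8s kubectl get svc -A
```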

I used MetalLB with MicroK8s. Some more information regarding my previous setup:
The public failover IP points to server 1 or server 2. Each server has a public network interface (the same iface as the failover IP) and a second iface on a private VLAN. I installed MicroK8s on both servers and used server 1 as the master node. The slave node was connected via the private VLAN, and the MetalLB IP range was also placed inside that VLAN. NAT (ports 80, 443) from the failover IP to the ingress IP was created but unsuccessful. The reason: if the MetalLB speaker is on the same node that has the failover IP assigned, ARP won’t work (no route to host). I saw with tcpdump that there was traffic on 443 and that it was forwarded to the internal network, but the ARP request was not answered (“who has ingress IP?” with no response). arping from the second node successfully resolves to the first node.

Later I tried it with nginx as a layer-7 forwarder to the ingress. But I don’t like the idea of having one nginx per node (with certificates) plus an nginx ingress inside Kubernetes. And it only worked partly: I couldn’t connect from server 2 to the ingress IP…