[Windows] Juju and MicroK8s bootstrap error

Dear all,

First of all, sorry if this question/issue does not fit here; just let me know if another place would be better.

I’m currently writing a blog post on reproducing Mark Shuttleworth’s demo from DockerCon 2021, but on Windows.
While WSL2 works great (its setup is also in the blog post), the Windows side with MicroK8s for Windows fails with an error during the juju bootstrap command.

The goal is to keep everything “simple”, and leverage the existing tools from Windows side (juju.exe and microk8s.exe).

So my question is: is there another way to bootstrap?

Please note that the following approaches have also been tested:

  • microk8s juju: while the command itself works fine, trying to deploy the same app as Mark did in his demo, Mattermost, fails because the bundled juju client is v2.8 and the charm needs at least v2.9
  • doing everything through multipass.exe exec: as stated above, the goal is to keep the tooling to a minimum, so while this would work, it defeats the purpose of using juju.exe (just my opinion here, I could be very wrong)

Hope this makes sense and thanks in advance for your feedback/help.

Hmm, I’m no expert on WSL, or even using Juju on Windows.

There is a juju add-k8s command which you could use to register the k8s credentials specified in your KUBECONFIG with Juju, which might enable you to bootstrap.

Something like:

    microk8s.exe config | juju add-k8s micro
    juju.exe bootstrap micro

I’m surprised that you can’t just run juju.exe bootstrap microk8s; my impression was that if you can access the cluster with kubectl, then juju bootstrap should work fine.

Give that a go and let me know? Some other folk around here might have a better idea… perhaps @manadart or @wallyworld?

The problem appears to be that when we detect the “microk8s” cloud name, we run some *nix-specific commands to check that you are in the right group.
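For illustration only, here is a minimal Python sketch of that kind of group check (an assumption about its shape, not Juju’s actual Go code): the POSIX group database simply doesn’t exist on Windows, so any code path hard-wired to it fails there.

```python
# Hypothetical sketch (not Juju's actual code) of a *nix-specific group
# check: looking up group membership relies on the POSIX group database,
# which is unavailable on Windows.
import sys


def can_check_posix_groups() -> bool:
    """Return True only on platforms where a POSIX group lookup is possible."""
    if sys.platform == "win32":
        return False  # the grp module does not even import on Windows
    import grp  # noqa: F401  # POSIX-only module
    return True


def user_in_group(user: str, group_name: str) -> bool:
    """Check membership in a named group via the POSIX group database."""
    import grp
    try:
        return user in grp.getgrnam(group_name).gr_mem
    except KeyError:
        return False  # group does not exist on this machine
```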

This should be filed as a bug.

As a work-around, @jnsgruk’s suggestion might work, i.e. add the cloud under another name so that Juju simply tries to access the cluster with the registered credentials.

First of all, thanks a lot for the very rapid feedback (Alex Chalkias was right to send me here :stuck_out_tongue: ).

Here, WSL works just fine; it’s really the “Windows” side, where MicroK8s for Windows creates a VM (thanks to Multipass) and we then interact with MicroK8s directly from PowerShell.

As @manadart said, the issue is indeed with the commands executed on the “host” of MicroK8s, which is the VM. I will definitely try your solution, @jnsgruk.
Again, thanks a lot for the prompt replies, it really helps.

The solution worked perfectly, thanks again a million. However, I’ve now hit another blocking point, and this time it seems more “sneaky”:

The controller pod goes into a CrashLoopBackOff, and comparing it with the WSL2 one, I can see the mount paths are sort of Windows-style:

    Mounts:
          /var/lib/juju from storage (rw)
          /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ncx4p (ro)
          \var\lib\juju\agents\controller-0\template-agent.conf from controller-agent-conf (rw,path="template-agent.conf")
          \var\lib\juju\bootstrap-params from controller-bootstrap-params (ro,path="bootstrap-params")
          \var\lib\juju\shared-secret from controller-shared-secret (ro,path="shared-secret")
          \var\lib\juju\template-server.pem from controller-server-pem (ro,path="template-server.pem")

And the events show the “unbound pvc” error too:

    Events:
      Type     Reason            Age                 From               Message
      ----     ------            ----                ----               -------
      Warning  FailedScheduling  16m                 default-scheduler  0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
      Normal   Scheduled         16m                 default-scheduler  Successfully assigned controller-microk8s-localhost/controller-0 to microk8s-vm
      Warning  FailedScheduling  16m                 default-scheduler  0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
      Normal   Pulling           16m                 kubelet            Pulling image "jujusolutions/juju-db:4.0"
      Normal   Pulled            15m                 kubelet            Successfully pulled image "jujusolutions/juju-db:4.0" in 7.269073067s
      Normal   Pulling           15m                 kubelet            Pulling image "jujusolutions/jujud-operator:2.9.3"
      Normal   Pulled            15m                 kubelet            Successfully pulled image "jujusolutions/jujud-operator:2.9.3" in 9.668356325s
      Normal   Started           15m (x2 over 15m)   kubelet            Started container api-server
      Normal   Created           15m (x2 over 15m)   kubelet            Created container api-server
      Normal   Pulled            15m (x2 over 15m)   kubelet            Container image "jujusolutions/juju-db:4.0" already present on machine
      Normal   Created           15m (x3 over 15m)   kubelet            Created container mongodb
      Normal   Started           15m (x3 over 15m)   kubelet            Started container mongodb
      Warning  BackOff           15m (x3 over 15m)   kubelet            Back-off restarting failed container
      Normal   Pulled            15m (x2 over 15m)   kubelet            Container image "jujusolutions/jujud-operator:2.9.3" already present on machine
      Warning  BackOff           50s (x92 over 15m)  kubelet            Back-off restarting failed container

Finally, just to be sure I’m providing all the info, here is the MicroK8s status:

    microk8s is running
    high-availability: no
      datastore master nodes: 127.0.0.1:19001
      datastore standby nodes: none
    addons:
      enabled:
        dns                  # CoreDNS
        ha-cluster           # Configure high availability on the current node
        storage              # Storage class; allocates storage from host directory

Thanks again for the help :slight_smile:

Adding another quick bit of feedback: I connected to the microk8s-vm directly, installed juju (from the snap), tried to bootstrap again, and it worked fine.

As guessed, the paths are now “correct”:

    Mounts:
          /var/lib/juju from storage (rw)
          /var/lib/juju/agents/controller-0/template-agent.conf from controller-agent-conf (rw,path="template-agent.conf")
          /var/lib/juju/bootstrap-params from controller-bootstrap-params (ro,path="bootstrap-params")
          /var/lib/juju/shared-secret from controller-shared-secret (ro,path="shared-secret")
          /var/lib/juju/template-server.pem from controller-server-pem (ro,path="template-server.pem")
          /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-d85wk (ro)

So a bug in the juju.exe client, I guess?
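The broken entries look exactly like POSIX paths run through Windows path normalisation. A tiny Python sketch reproduces the shape (this is my assumption about the mechanism only; Juju itself is written in Go, where filepath.Clean on Windows behaves the same way):

```python
# Sketch of the suspected mechanism (an assumption, not Juju's actual code):
# normalising a POSIX mount path with Windows path rules rewrites every
# separator to a backslash, matching the broken Mounts entries above.
import ntpath      # Windows path rules, importable on any OS
import posixpath   # POSIX path rules

path = "/var/lib/juju/agents/controller-0/template-agent.conf"

# With POSIX rules the path is already normalised and comes back unchanged...
print(posixpath.normpath(path))
# /var/lib/juju/agents/controller-0/template-agent.conf

# ...but Windows rules produce the backslash form seen in the bad pod spec:
print(ntpath.normpath(path))
# \var\lib\juju\agents\controller-0\template-agent.conf
```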

Hey @wslcorsair, glad it (sort of) worked.

Yeah, I guess that is a bug. The bug tracker for Juju is on Launchpad.

My guess is that there should probably be two bugs filed in this case:

  • One against juju bootstrap for Windows, where bootstrapping microk8s checks for Unix group membership
  • One for the paths issue you describe above.

Let me know if you need any other assistance!
