Trouble creating a microk8s controller

Hi Community,

I’m a beginner with Juju. I have completed a couple of examples and now I’m trying to deploy the GitLab charm from the CLI from scratch. However, when creating a controller I ran into trouble that I had not experienced before.

I have set up two servers running MicroK8s, one as the master and the other as a worker node. I deployed this application successfully on MicroK8s:

kubectl create deployment web --image=gcr.io/google-samples/hello-app:1.0 --port=8080
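
To confirm the deployment is running (assuming kubectl here points at the MicroK8s cluster, for example via the microk8s kubectl wrapper), something like this should show it:

    kubectl get deployment web
    kubectl get pods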

Then I tried to create a controller with juju bootstrap microk8s gitlab-mk8s, and this is what happened:

    pedro@ubuntu-2gb-hel1-2:~$ juju bootstrap microk8s gitlab-mk8s
    Creating Juju controller "gitlab-mk8s" on microk8s/localhost
    Fetching Juju Dashboard 0.3.0
    Creating k8s resources for controller "controller-gitlab-mk8s"
    ERROR failed to bootstrap model: creating controller stack for controller: creating statefulset for controller: timed out waiting for controller pod: unschedulable: 0/2 nodes are available: 2 Insufficient memory.
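
To see what the scheduler is basing this on, I believe commands along these lines show the node's allocatable memory and the pending controller pod (controller-gitlab-mk8s is the namespace mentioned in the bootstrap output above):

    microk8s kubectl describe nodes | grep -A 6 Allocatable
    microk8s kubectl get pods -n controller-gitlab-mk8s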

I think I have enough memory.
Is there anyone who could guide me to a solution?

Welcome!

I don’t see any specific memory guidelines on https://juju.is/docs/microk8s-cloud, but I do see in MicroK8s - Introduction to MicroK8s that the recommendation is at least 20G of disk space and 4G of memory.

How much memory do you have in this case (free -m)? I see “2gb” in the shell prompt and was wondering if that means the instance has 2G of memory. It would also be good to know which versions of Juju and MicroK8s you’re using.
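
If it's easier, something like this should collect all of that in one go:

    free -m
    snap list juju microk8s
    juju version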

Hello Tom,
Thanks for your reply. Here is the information you asked for.

Memory (you were right about the 2G):

                           total        used        free      shared  buff/cache   available
    Mem:                    1939        1111         131           2         695         750
    Swap:                      0           0           0

MicroK8s and Juju versions (snap list):

    Name      Version  Rev    Tracking       Publisher   Notes
    juju      2.8.7    14932  latest/stable  canonical✓  classic
    microk8s  v1.19.3  1791   1.19/stable    canonical✓  classic

I did some research on my side as well, and this is what I found:

The value for memory.available is derived from the cgroupfs instead of tools like free -m. This is important because free -m does not work in a container, and if users use the node allocatable feature, out of resource decisions are made local to the end user Pod part of the cgroup hierarchy as well as the root node. This script reproduces the same set of steps that the kubelet performs to calculate memory.available. The kubelet excludes inactive_file (i.e. # of bytes of file-backed memory on inactive LRU list) from its calculation as it assumes that memory is reclaimable under pressure.
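
For reference, this is roughly what that script does; my reconstruction based on the description above, reading the root memory cgroup the same way the kubelet is described to (cgroup v1 paths assumed):

    #!/usr/bin/env bash
    # Approximate the kubelet's memory.available calculation (cgroup v1).

    # Total memory on the node, from /proc/meminfo (kB -> bytes).
    memory_capacity_in_kb=$(grep MemTotal /proc/meminfo | awk '{print $2}')
    memory_capacity_in_bytes=$((memory_capacity_in_kb * 1024))

    # Current usage and inactive file-backed memory from the root memory cgroup.
    memory_usage_in_bytes=$(cat /sys/fs/cgroup/memory/memory.usage_in_bytes)
    memory_total_inactive_file=$(grep total_inactive_file /sys/fs/cgroup/memory/memory.stat | awk '{print $2}')

    # Working set = usage minus inactive_file, which is assumed reclaimable under pressure.
    memory_working_set=$((memory_usage_in_bytes - memory_total_inactive_file))
    if [ "$memory_working_set" -lt 0 ]; then memory_working_set=0; fi

    memory_available_in_bytes=$((memory_capacity_in_bytes - memory_working_set))

    echo "memory.capacity_in_bytes: $memory_capacity_in_bytes"
    echo "memory.usage_in_bytes: $memory_usage_in_bytes"
    echo "memory.total_inactive_file: $memory_total_inactive_file"
    echo "memory.working_set: $memory_working_set"
    echo "memory.available_in_bytes: $memory_available_in_bytes"
    echo "memory.available_in_kb: $((memory_available_in_bytes / 1024))"
    echo "memory.available_in_mb: $((memory_available_in_bytes / 1024 / 1024))"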

See the result of the script:

    memory.capacity_in_bytes: 2033942528
    memory.usage_in_bytes: 1756598272
    memory.total_inactive_file: 345206784
    memory.working_set: 1411391488
    memory.available_in_bytes: 622551040
    memory.available_in_kb: 607960
    memory.available_in_mb: 593

I must say I’m still a little confused because at first I thought the same as you (free -m / free -h), and then I found this.

Maybe you can clarify this for me.
In any case, I will try increasing the RAM and see what happens, and then I will let you know.

I think what you’ve found is documentation explaining the precise mechanism Kubernetes uses to determine available memory. The numbers for those memory.* values differ slightly from free -m, so it’s just letting you see exactly what’s going on under the hood. In any case, it sounds like retrying with an instance that has more RAM should help.
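
Once you’ve resized, it’s probably worth confirming what the node reports before bootstrapping again, for example:

    free -m
    microk8s kubectl describe nodes | grep -A 6 Allocatable
    juju bootstrap microk8s gitlab-mk8s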

What about using the Juju controller as a service?

Sorry, I’m not sure I understand the question. Can you clarify? Do you mean “how many extra resources will I need for running a Juju controller on top of the 4G of RAM required for MicroK8s itself?” Let me know if I’ve misunderstood.