Error while bootstrapping on microk8s

Hello folks,

While bootstrapping on microk8s I got the following error:

$  juju bootstrap microk8s mk8s
Creating Juju controller "mk8s" on microk8s/localhost
Bootstrap to Kubernetes cluster identified as microk8s/localhost
Fetching Juju Dashboard 0.3.0
Creating k8s resources for controller "controller-mk8s"
Downloading images
Starting controller pod
Bootstrap agent now started
Contacting Juju controller at 10.152.183.225 to verify accessibility...
ERROR an error occurred forwarding 36469 -> 17070: error forwarding port 17070 to pod 51155c3ae675bbeb5fd600d3b48ab53002de73a17dab77f99755151c8cb2ef1e, uid : failed to execute portforward in network namespace "/var/run/netns/cni-52b0a8a0-ac29-03ca-d3b4-0d3f626308cf": socat command returns error: exit status 1, stderr: "2021/03/10 15:38:57 socat[298445] E connect(5, AF=2 127.0.0.1:17070, 16): Connection refused\n"

Bootstrap complete, controller "mk8s" is now available in namespace "controller-mk8s"

Now you can run
        juju add-model <model-name>
to create a new model to deploy k8s workloads.

I’m running:

$ snap info microk8s | grep installed | awk '{print $1 $2}'
installed:v1.20.2
$ snap info juju | grep installed | awk '{print $1 $2}'
installed:2.9-rc6

on Kubuntu 20.04.

After getting that error, I uninstalled everything and reinstalled by executing:

$ juju unregister mk8s
$ sudo snap remove juju
$ sudo snap remove microk8s
$ sudo snap install --beta juju --classic
$ sudo snap install microk8s --classic
$ microk8s.enable dns storage
$ juju bootstrap microk8s mk8s

And the original error persists.
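
One way to check whether the controller actually comes up despite the error is to watch its pod; the namespace name here is taken from the bootstrap output above:

# -w streams pod status updates until interrupted
$ microk8s kubectl get pods -n controller-mk8s -w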

Yes, I’m also experiencing this, although so far, even when the error appears, the functionality still seems to work, which is a little confusing…
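
A rough sanity check, assuming the default controller model created at bootstrap, is that the controller answers normally right after the error is printed:

$ juju controllers
$ juju status -m controller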

There’s some internal k8s logging which needs to be suppressed; the next 2.9 RC should have this fixed. Unfortunately, the k8s client logs (expected) transient connectivity issues as errors, so Juju needs some extra code to filter those out.

Thanks @wallyworld - PR for anyone interested: https://github.com/juju/juju/pull/12735

+1
I also see this error, although everything seems to be normal after that.

@jose, were you able to continue your deployment successfully after this?

Hi @pedroleaoc

Yes, I was able to continue my deployment. As @jnsgruk said, even when the error appears the functionality still seems to work, but it was confusing.

Sorry about that… we are aiming to improve the bootstrapping process in v2.9. You are invited to give it a try.

In the meantime, please feel free to continue asking questions on this forum, especially if it’s about bootstrapping!

Pedro, nothing to be sorry about!
Thank you very much for your help.

I am getting another, similar error message that seems harmless but is confusing:

$ juju deploy ./graylog.charm --config admin-password=Pepe --resource graylog-image=graylog/graylog:3.3.8-1
Located local charm "graylog", revision 0
Deploying "graylog" from local charm "graylog", revision 0
ERROR error copying from local connection to remote stream: read tcp4 127.0.0.1:41493->127.0.0.1:32816: read: connection reset by peer

Thanks everyone for the feedback.

In Juju 2.9 we have introduced a proxy mechanism for the Juju client to talk to the controller through the Kubernetes API using port-forwarding. This is similar to how some CI/CD projects work with Kubernetes, and also to how Helm v2 worked.
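
To make that concrete, here is a rough manual equivalent of what the client now does under the hood. The pod name controller-0 is an assumption (verify it with kubectl get pods -n controller-mk8s), and 17070 is the controller API port seen in the error above:

# Forward a local port to the controller's API port via the Kubernetes API
# (pod/controller-0 is assumed; check the actual pod name first)
$ microk8s kubectl -n controller-mk8s port-forward pod/controller-0 17070:17070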

Because of the way the Kubernetes port-forwarding mechanism works, it isn’t able to surface the above errors through the usual means, so it outputs them via klog. Juju intercepts klog so that these error messages can be processed in normal Juju logs. Originally with this PR we had the log messages going to debug, but we found a problem where the proxy mechanism required socat and the resulting error message was being silenced by our choice to log at debug.
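
If you want to see where those intercepted messages end up, they should appear in the controller model’s log stream; this is just the standard debug-log command, filtered down to errors:

$ juju debug-log -m controller --level ERROR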

We opted to send klog errors to Juju’s error stream. This had the undesired side effect of also surfacing the log messages above. The connection errors above are expected and nothing to worry about: the Kubernetes client and the Juju proxy mechanism handle connection resets, so there is no actual problem with the connection, just a bad log message.

We have landed a new PR that will filter these messages out in the final 2.9 release, as they are not helpful. More importantly, we aim to address this further by tying the lifecycle of our TCP connections to that of the proxying, which should remove almost all of the reset messages.

I hope the above makes sense, and that the new proxying mechanism lets users bootstrap to Kubernetes clusters where that wasn’t previously possible.
