While bootstrapping on microk8s I got the following error:
$ juju bootstrap microk8s mk8s
Creating Juju controller "mk8s" on microk8s/localhost
Bootstrap to Kubernetes cluster identified as microk8s/localhost
Fetching Juju Dashboard 0.3.0
Creating k8s resources for controller "controller-mk8s"
Downloading images
Starting controller pod
Bootstrap agent now started
Contacting Juju controller at 10.152.183.225 to verify accessibility...
ERROR an error occurred forwarding 36469 -> 17070: error forwarding port 17070 to pod 51155c3ae675bbeb5fd600d3b48ab53002de73a17dab77f99755151c8cb2ef1e, uid : failed to execute portforward in network namespace "/var/run/netns/cni-52b0a8a0-ac29-03ca-d3b4-0d3f626308cf": socat command returns error: exit status 1, stderr: "2021/03/10 15:38:57 socat[298445] E connect(5, AF=2 127.0.0.1:17070, 16): Connection refused\n"
Bootstrap complete, controller "mk8s" is now available in namespace "controller-mk8s"
Now you can run
juju add-model <model-name>
to create a new model to deploy k8s workloads.
There’s some internal k8s logging which needs to be suppressed - the next 2.9 RC should have this fixed. Unfortunately, the k8s client logs expected, transient connectivity issues as errors, so Juju needs some extra code to filter those out.
I am getting another similar error message that seems harmless but is confusing:
$ juju deploy ./graylog.charm --config admin-password=Pepe --resource graylog-image=graylog/graylog:3.3.8-1
Located local charm "graylog", revision 0
Deploying "graylog" from local charm "graylog", revision 0
ERROR error copying from local connection to remote stream: read tcp4 127.0.0.1:41493->127.0.0.1:32816: read: connection reset by peer
In Juju 2.9 we have introduced a proxy mechanism that lets the Juju client talk to the controller through the Kubernetes API using port-forwarding. This is similar to how some CI/CD projects interact with Kubernetes, and also to how Helm v2 worked.
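For a concrete picture of that kind of proxying, here is a minimal sketch using the upstream client-go port-forward API. This is illustrative only, not Juju's actual implementation: the kubeconfig handling and the pod name "controller-0" are assumptions (the namespace matches the bootstrap output above).

```go
package main

import (
	"fmt"
	"net/http"
	"os"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/portforward"
	"k8s.io/client-go/transport/spdy"
)

func main() {
	// Load the local kubeconfig (default ~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Build the URL for the controller pod's "portforward" subresource.
	// The pod name is a placeholder for whatever pod runs the controller.
	req := clientset.CoreV1().RESTClient().Post().
		Resource("pods").
		Namespace("controller-mk8s").
		Name("controller-0").
		SubResource("portforward")

	// The forwarder speaks SPDY to the Kubernetes API server, which
	// relays the streams on to the pod.
	transport, upgrader, err := spdy.RoundTripperFor(config)
	if err != nil {
		panic(err)
	}
	dialer := spdy.NewDialer(upgrader, &http.Client{Transport: transport}, http.MethodPost, req.URL())

	stopCh := make(chan struct{})
	readyCh := make(chan struct{})

	// "0:17070" asks for a random local port forwarded to the Juju
	// controller's API port (17070) inside the pod.
	fw, err := portforward.New(dialer, []string{"0:17070"}, stopCh, readyCh, os.Stdout, os.Stderr)
	if err != nil {
		panic(err)
	}
	go func() {
		<-readyCh
		ports, _ := fw.GetPorts()
		fmt.Printf("forwarding 127.0.0.1:%d -> 17070\n", ports[0].Local)
	}()

	// ForwardPorts blocks until stopCh is closed or a fatal error occurs.
	if err := fw.ForwardPorts(); err != nil {
		panic(err)
	}
}
```

Note that per-stream copy errors (like the connection resets above) do not come back from ForwardPorts; client-go reports them through klog, which is what the next paragraph deals with.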
Because of the way the Kubernetes port-forwarding mechanism works, it isn’t able to surface the above errors through the usual means, so it outputs them via klog. Juju intercepts klog so that these error messages can be processed in Juju’s normal logs. Originally, with this PR, we had the log messages going to the debug level, but we found a problem where the proxy mechanism required socat and the resulting error message was being silenced by that choice.
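As a rough sketch of what intercepting klog can look like (again illustrative, not Juju's code): klog's direct-to-stderr output is disabled and everything is redirected into a custom io.Writer that feeds the application's own logger. The adapter type here is a hypothetical stand-in.

```go
package main

import (
	"bytes"
	"flag"
	"log"

	"k8s.io/klog/v2"
)

// logAdapter is a hypothetical writer that feeds klog output into the
// application's logger; here it just prefixes and reprints each line.
type logAdapter struct{}

func (logAdapter) Write(p []byte) (int, error) {
	log.Printf("kubernetes.klog: %s", bytes.TrimSpace(p))
	return len(p), nil
}

func main() {
	// Disable klog's direct-to-stderr behaviour so SetOutput takes effect.
	fs := flag.NewFlagSet("klog", flag.ExitOnError)
	klog.InitFlags(fs)
	_ = fs.Set("logtostderr", "false")
	klog.SetOutput(logAdapter{})

	klog.Error("socat command returns error: exit status 1") // sample message
	klog.Flush()                                             // klog buffers; flush before exit
}
```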
We opted to send klog errors to Juju’s error stream. This had the undesired side effect of also surfacing the log messages above. The connection errors above are expected and nothing to worry about: the Kubernetes client and the Juju proxy mechanism handle connection resets, so there is no actual problem with the connection, just a misleading log message.
We have merged a new PR that will filter these messages out in the final 2.9 release, as they are not helpful. More importantly, we aim to address this further by tying the lifecycle of our TCP connections to that of the proxying, which should remove almost all of the reset messages.
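For illustration, that kind of filtering can be as simple as wrapping the error stream in a writer that swallows the known-benign messages. This is a hedged sketch with my own choice of patterns, not the code from the PR:

```go
package main

import (
	"io"
	"os"
	"strings"
)

// benignPatterns are substrings of port-forward errors that are expected
// and recovered from automatically, so they need not reach users.
var benignPatterns = []string{
	"error copying from local connection to remote stream",
	"error copying from remote stream to local connection",
	"connection reset by peer",
}

// filteredWriter drops lines containing a benign pattern and passes
// everything else through to the wrapped writer.
type filteredWriter struct {
	next io.Writer
}

func (w filteredWriter) Write(p []byte) (int, error) {
	for _, pat := range benignPatterns {
		if strings.Contains(string(p), pat) {
			// Report success so the logger keeps working normally.
			return len(p), nil
		}
	}
	return w.next.Write(p)
}

func main() {
	out := filteredWriter{next: os.Stderr}
	io.WriteString(out, "error copying from local connection to remote stream: read tcp4 ...\n") // dropped
	io.WriteString(out, "a genuine error worth showing\n")                                       // passed through
}
```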
I hope the above makes sense, and that the new proxying mechanism lets users bootstrap to Kubernetes clusters where that wasn’t previously possible.