Hey there, thanks for reading:
I’ve spun up an Ubuntu 22.04 instance with Docker and K3s. On top of that I’ve installed Juju and run the following:
juju add-k8s localkubeflow --client
This completes without errors, and the new cloud shows up as expected:
$ juju clouds | grep kubeflow
localkubeflow 0 k8s 1 local A Kubernetes Cluster
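In case it’s relevant, Juju picks the cluster details up from a kubeconfig (~/.kube/config, the KUBECONFIG variable, or stdin). With K3s that typically means copying the generated config first, roughly like this; the /etc/rancher/k3s/k3s.yaml path is the K3s default and an assumption on my part:
$ # assumed: make the K3s kubeconfig visible to juju/kubectl as the current user
$ mkdir -p ~/.kube
$ sudo cat /etc/rancher/k3s/k3s.yaml > ~/.kube/config
$ juju add-k8s localkubeflow --client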
The next step is where it fails. I run:
juju bootstrap localkubeflow --config controller-service-type=cluster --debug --verbose
It seems to get as far as contacting the API server:
21:57:15 INFO juju.juju api.go:86 connecting to API addresses: [10.43.200.201:17070]
21:57:24 DEBUG juju.api apiclient.go:567 starting proxier for connection
21:57:24 DEBUG juju.api apiclient.go:571 tunnel proxy in use at localhost on port 38979
21:57:24 INFO juju.api apiclient.go:996 cannot resolve "localhost": lookup localhost: i/o timeout
21:57:24 INFO cmd controller.go:143 Still waiting for API to become available: cannot resolve "localhost": lookup localhost: i/o timeout
I noticed that localhost error, but that doesn’t seem to stop it. It tries 127.0.0.1 until it eventually times out. It looks like this, on repeat:
21:57:27 INFO juju.juju api.go:86 connecting to API addresses: [10.43.200.201:17070]
21:57:27 DEBUG juju.api apiclient.go:567 starting proxier for connection
21:57:27 DEBUG juju.api apiclient.go:571 tunnel proxy in use at localhost on port 45403
21:57:27 DEBUG juju.api apiclient.go:747 looked up localhost -> [127.0.0.1]
21:57:31 INFO cmd controller.go:143 Still waiting for API to become available: unable to connect to API: dial tcp 127.0.0.1:45403: connect: connection refused
21:57:34 INFO juju.juju api.go:86 connecting to API addresses: [10.43.200.201:17070]
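Two things in those logs seemed worth checking on the host, so for what it’s worth, this is roughly what I’d run to rule them out; the namespace name is an assumption based on the usual controller-<controller-name> pattern:
$ # can the host resolve "localhost" at all? (the i/o timeout above suggests maybe not)
$ getent hosts localhost
$ grep localhost /etc/hosts
$ # is the controller service (the 10.43.200.201 address) actually up?
$ kubectl get svc,pods -n controller-localkubeflow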
I’ve also tried bootstrapping with --keep-broken, which lets me get a shell in the controller container (roughly as shown below), but I’m not sure where to go from there. Please help!
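Roughly how I get that shell (the namespace and pod follow the usual controller-<name>/controller-0 pattern; kubectl will say if it defaults to a particular container):
$ kubectl get pods -n controller-localkubeflow
$ kubectl exec -it -n controller-localkubeflow controller-0 -- /bin/bash   # add -c <container> to pick a specific container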
nvinuesa (Nicolas Vinuesa), 2 October 2024 07:24
Hi Jeremy,
Once in the container you can check the logs in /var/log/juju, but in this particular case I don’t think you will find much useful information there.
Can you share the contents of ~/.kube, please?
I’m also cc’ing @hpidcock, who may have better insight in this space.
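If it’s easier, kubectl can print the client config with the certificate data already redacted (it shows DATA+OMITTED / REDACTED by default), so the plain view is safe to paste:
$ kubectl config view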
Hey Nicolas,
Thanks for the help!
Here’s the kubeconfig:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: OMITTED
    server: https://127.0.0.1:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    client-certificate-data: OMITTED
    client-key-data: OMITTED
There’s also a bunch of cache files:
~/.kube$ find .
.
./config
./cache
./cache/http
./cache/http/.diskv-temp
./cache/http/b4752430e41333ef78276ae689e53c4499fb52097a4291779131865a314a01a8
./cache/http/6e0a83a71913c099b13818854788e45f43c924d79e18500554270c42afc2ec46
./cache/discovery
./cache/discovery/127.0.0.1_6443
./cache/discovery/127.0.0.1_6443/networking.k8s.io
./cache/discovery/127.0.0.1_6443/networking.k8s.io/v1
./cache/discovery/127.0.0.1_6443/networking.k8s.io/v1/serverresources.json
./cache/discovery/127.0.0.1_6443/apiextensions.k8s.io
./cache/discovery/127.0.0.1_6443/apiextensions.k8s.io/v1
./cache/discovery/127.0.0.1_6443/apiextensions.k8s.io/v1/serverresources.json
./cache/discovery/127.0.0.1_6443/coordination.k8s.io
./cache/discovery/127.0.0.1_6443/coordination.k8s.io/v1
./cache/discovery/127.0.0.1_6443/coordination.k8s.io/v1/serverresources.json
./cache/discovery/127.0.0.1_6443/v1
./cache/discovery/127.0.0.1_6443/v1/serverresources.json
./cache/discovery/127.0.0.1_6443/authorization.k8s.io
./cache/discovery/127.0.0.1_6443/authorization.k8s.io/v1
./cache/discovery/127.0.0.1_6443/authorization.k8s.io/v1/serverresources.json
./cache/discovery/127.0.0.1_6443/rbac.authorization.k8s.io
./cache/discovery/127.0.0.1_6443/rbac.authorization.k8s.io/v1
./cache/discovery/127.0.0.1_6443/rbac.authorization.k8s.io/v1/serverresources.json
./cache/discovery/127.0.0.1_6443/admissionregistration.k8s.io
./cache/discovery/127.0.0.1_6443/admissionregistration.k8s.io/v1
./cache/discovery/127.0.0.1_6443/admissionregistration.k8s.io/v1/serverresources.json
./cache/discovery/127.0.0.1_6443/scheduling.k8s.io
./cache/discovery/127.0.0.1_6443/scheduling.k8s.io/v1
./cache/discovery/127.0.0.1_6443/scheduling.k8s.io/v1/serverresources.json
./cache/discovery/127.0.0.1_6443/authentication.k8s.io
./cache/discovery/127.0.0.1_6443/authentication.k8s.io/v1
./cache/discovery/127.0.0.1_6443/authentication.k8s.io/v1/serverresources.json
./cache/discovery/127.0.0.1_6443/k3s.cattle.io
./cache/discovery/127.0.0.1_6443/k3s.cattle.io/v1
./cache/discovery/127.0.0.1_6443/k3s.cattle.io/v1/serverresources.json
./cache/discovery/127.0.0.1_6443/autoscaling
./cache/discovery/127.0.0.1_6443/autoscaling/v1
./cache/discovery/127.0.0.1_6443/autoscaling/v1/serverresources.json
./cache/discovery/127.0.0.1_6443/autoscaling/v2
./cache/discovery/127.0.0.1_6443/autoscaling/v2/serverresources.json
./cache/discovery/127.0.0.1_6443/flowcontrol.apiserver.k8s.io
./cache/discovery/127.0.0.1_6443/flowcontrol.apiserver.k8s.io/v1beta3
./cache/discovery/127.0.0.1_6443/flowcontrol.apiserver.k8s.io/v1beta3/serverresources.json
./cache/discovery/127.0.0.1_6443/flowcontrol.apiserver.k8s.io/v1beta2
./cache/discovery/127.0.0.1_6443/flowcontrol.apiserver.k8s.io/v1beta2/serverresources.json
./cache/discovery/127.0.0.1_6443/batch
./cache/discovery/127.0.0.1_6443/batch/v1
./cache/discovery/127.0.0.1_6443/batch/v1/serverresources.json
./cache/discovery/127.0.0.1_6443/servergroups.json
./cache/discovery/127.0.0.1_6443/helm.cattle.io
./cache/discovery/127.0.0.1_6443/helm.cattle.io/v1
./cache/discovery/127.0.0.1_6443/helm.cattle.io/v1/serverresources.json
./cache/discovery/127.0.0.1_6443/policy
./cache/discovery/127.0.0.1_6443/policy/v1
./cache/discovery/127.0.0.1_6443/policy/v1/serverresources.json
./cache/discovery/127.0.0.1_6443/discovery.k8s.io
./cache/discovery/127.0.0.1_6443/discovery.k8s.io/v1
./cache/discovery/127.0.0.1_6443/discovery.k8s.io/v1/serverresources.json
./cache/discovery/127.0.0.1_6443/metrics.k8s.io
./cache/discovery/127.0.0.1_6443/metrics.k8s.io/v1beta1
./cache/discovery/127.0.0.1_6443/metrics.k8s.io/v1beta1/serverresources.json
./cache/discovery/127.0.0.1_6443/storage.k8s.io
./cache/discovery/127.0.0.1_6443/storage.k8s.io/v1
./cache/discovery/127.0.0.1_6443/storage.k8s.io/v1/serverresources.json
./cache/discovery/127.0.0.1_6443/certificates.k8s.io
./cache/discovery/127.0.0.1_6443/certificates.k8s.io/v1
./cache/discovery/127.0.0.1_6443/certificates.k8s.io/v1/serverresources.json
./cache/discovery/127.0.0.1_6443/events.k8s.io
./cache/discovery/127.0.0.1_6443/events.k8s.io/v1
./cache/discovery/127.0.0.1_6443/events.k8s.io/v1/serverresources.json
./cache/discovery/127.0.0.1_6443/node.k8s.io
./cache/discovery/127.0.0.1_6443/node.k8s.io/v1
./cache/discovery/127.0.0.1_6443/node.k8s.io/v1/serverresources.json
./cache/discovery/127.0.0.1_6443/apiregistration.k8s.io
./cache/discovery/127.0.0.1_6443/apiregistration.k8s.io/v1
./cache/discovery/127.0.0.1_6443/apiregistration.k8s.io/v1/serverresources.json
./cache/discovery/127.0.0.1_6443/apps
./cache/discovery/127.0.0.1_6443/apps/v1
./cache/discovery/127.0.0.1_6443/apps/v1/serverresources.json
I’ll need a few minutes to check the logs, but I’ll post an update as soon as I can.
Logs on the controller all appear to be empty.
juju@controller-0:/var/log/juju$ ls
audit.log machine-lock.log slow-query.log
juju@controller-0:/var/log/juju$ cat audit.log
juju@controller-0:/var/log/juju$ cat machine-lock.log
juju@controller-0:/var/log/juju$ cat slow-query.log
juju@controller-0:/var/log/juju$
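Since the files under /var/log/juju are empty, my next guess is the container’s stdout; something like this is what I’d try, with the container name being a guess on my part:
$ kubectl logs -n controller-localkubeflow controller-0 -c api-server --tail=100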
I’ve tried overwriting the 127.0.0.1 in the kubeconfig on a fresh rebuild, to see if that’s where Juju sources the address from, but no luck. It still tries 127.0.0.1 on a random port, so I’m still not sure what to do next.
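For completeness, this is how I’ve been checking what endpoint Juju actually recorded for the cloud (both are standard client-side locations, so adjust if your setup differs):
$ juju show-cloud localkubeflow --client
$ cat ~/.local/share/juju/clouds.yaml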