Impossible to install Kubeflow with microk8s and Ubuntu 22.04

For two days I have been trying to install Kubeflow with microk8s, without success. I followed the process described in the Charmed Kubeflow documentation.

I am using Ubuntu 22.04.1 LTS (on a freshly installed VM) and ran this script:

sudo snap install microk8s --channel=1.22/stable --classic
sudo snap install juju --classic

microk8s enable dns storage dashboard
microk8s status --wait-ready

juju bootstrap microk8s --debug

But the juju bootstrap never ends.

+ juju bootstrap microk8s --debug
17:06:59 INFO  juju.cmd supercommand.go:56 running juju [2.9.38 6d211be0d72d6f4d625c61c7c4ddb4e9325226c8 gc go1.18.9]
17:06:59 DEBUG juju.cmd supercommand.go:57   args: []string{"/snap/juju/21790/bin/juju", "bootstrap", "microk8s", "--debug"}
17:07:00 DEBUG juju.caas.kubernetes.provider.proxy setup.go:177 polling caas credential rbac secret, in 1 attempt, token for secret "juju-credential-microk8s" not found
17:07:01 DEBUG juju.kubernetes.provider provider.go:140 opening model "add-cloud".
17:07:01 DEBUG juju.kubernetes.provider metadata.go:169 use the default Storage class "microk8s-hostpath" for operator storage class because it also matches Juju preferred config {hostpath microk8s.io/hostpath map[] false WaitForFirstConsumer}
17:07:01 DEBUG juju.kubernetes.provider metadata.go:190 use the default Storage class "microk8s-hostpath" for nominated storage class
17:07:01 INFO  cmd cloudcredential.go:47 updating credential store
17:07:01 DEBUG juju.cmd.juju.commands bootstrap.go:1313 authenticating with region "" and credential "microk8s" ()
17:07:01 DEBUG juju.cmd.juju.commands bootstrap.go:1461 provider attrs: map[operator-storage: workload-storage:]
17:07:02 INFO  cmd authkeys.go:114 Adding contents of "/home/pprados/.local/share/juju/ssh/juju_id_rsa.pub" to authorized-keys
17:07:02 INFO  cmd authkeys.go:114 Adding contents of "/home/pprados/.ssh/id_rsa.pub" to authorized-keys
17:07:02 DEBUG juju.cmd.juju.commands bootstrap.go:1536 preparing controller with config: map[agent-metadata-url: agent-stream:released apt-ftp-proxy: apt-http-proxy: apt-https-proxy: apt-mirror: apt-no-proxy: authorized-keys:ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDEKGCeBvySZHsytc16erRMe+BmX2l6McMcwJBfLuY6mSbxpMR/5Q4OaqBoSMjjFoi+OE0RkSx04JiC6RSXfTpBAUSXuvcrJZQPMWJsycpqtXso0cGCulGagGX9gJqirr/DxuwBCbXU1ADWfO3LGPj1VHoBeIhGDdLjnC4pJnoRsC2e3tqmql8Q+lxd0tY0elPbD+FksnUFgYZWNyegw7XB/GhMs2xCl6xVY2cXMb/+9PK7UB2il8kRwduI19GmefwONdRY5zTHzcdkn/B4/+zopOQvCuvAOdiGv+dGxlr1ffDNJEocZTkI3c3HUe2YiEhIPXN9taW227vWpNbwsD2n juju-client-key
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDRqYXDfmsOdimyqWWVSESpWnXBa9VOBHZZuuoXVUU43JUP5OPX9e8y4cB96ljrxuH02vGyon13GaOp9/cJ002o8d+Ci+ASFQXrvltk5D140y1G+IP0Omcjen+g9C8Qo/wfdRX5ot/9FvhDQiFkUdtrecJTD01USpsEKdUou4UNNYVgD7Sya0bFQNtivvkRzm+qSLko1adiCjK7JA3IXmjrqK7gI3XGczsxiXbWGATLiJFCgpmuQOeqlZNNu7flRcd9BT8Aa367zSGQEDW1GbJoVfRp0U05HWV+RqI4cY6y0YWxfxPhxFWsXgr9sJJ1iOfdbQN4vALw85jGABk9cViLAmYian7Ds5meMmVm2oNMM6riWEvUAz8GJpUURKcK1vK7s+nKTXsHUCwcOGueMclL9tdCBafwmceANM8BEiF888S90ehf++Bmi6rY6b52rVW7IpVUqfaIFPDPNL7aemk78I8SaUUHU3ow4fQzBmuOc4+tWA5xcvfZqApQABXio5mTiRem2MpSNrhC+5flzxF/m8T7R3ofSOiHWJZ1bep/KJs7FvwwLVSJgDaxmtwuPvMcMzL1hQ9sD4gHsv6URk0mtftcG1fJj4xCFRkuzmSOUQLjb6uz7wBguvHXqIUoOWuOrns1lvSNF4ihbv86PnAPfsDMSgRCPjCbLDpigXEwEw== philippe@prados.fr
 automatically-retry-hooks:true backup-dir: charmhub-url:https://api.charmhub.io cloudinit-userdata: container-image-metadata-url: container-image-stream:released container-inherit-properties: container-networking-method: default-series: default-space: development:false disable-network-management:false disable-telemetry:false egress-subnets: enable-os-refresh-update:true enable-os-upgrade:true fan-config: firewall-mode:instance ftp-proxy: http-proxy: https-proxy: ignore-machine-addresses:false image-metadata-url: image-stream:released juju-ftp-proxy: juju-http-proxy: juju-https-proxy: juju-no-proxy:127.0.0.1,localhost,::1 logforward-enabled:false logging-config: logging-output: lxd-snap-channel:5.0/stable max-action-results-age:336h max-action-results-size:5G max-status-history-age:336h max-status-history-size:5G name:controller net-bond-reconfigure-delay:17 no-proxy:127.0.0.1,localhost,::1 num-container-provision-workers:4 num-provision-workers:16 operator-storage:microk8s-hostpath provisioner-harvest-mode:destroyed proxy-ssh:false resource-tags: snap-http-proxy: snap-https-proxy: snap-store-assertions: snap-store-proxy: snap-store-proxy-url: ssl-hostname-verification:true test-mode:false transmit-vendor-metrics:true type:kubernetes update-status-hook-interval:5m uuid:ddc39ce0-51de-40d3-80f6-d3e728175ae3 workload-storage:microk8s-hostpath]
17:07:02 DEBUG juju.kubernetes.provider provider.go:140 opening model "controller".
17:07:02 INFO  cmd bootstrap.go:856 Creating Juju controller "microk8s-localhost" on microk8s/localhost
17:07:02 INFO  juju.cmd.juju.commands bootstrap.go:927 combined bootstrap constraints: 
17:07:02 INFO  cmd bootstrap.go:977 Bootstrap to Kubernetes cluster identified as microk8s/localhost
17:07:04 DEBUG juju.environs.simplestreams simplestreams.go:417 searching for signed metadata in datasource "gui simplestreams"
17:07:04 DEBUG juju.environs.simplestreams simplestreams.go:452 looking for data index using path streams/v1/index2.sjson
17:07:04 DEBUG juju.environs.simplestreams simplestreams.go:464 looking for data index using URL https://streams.canonical.com/juju/gui/streams/v1/index2.sjson
17:07:04 DEBUG juju.environs.simplestreams simplestreams.go:467 streams/v1/index2.sjson not accessed, actual error: [{github.com/juju/juju/environs/simplestreams.(*urlDataSource).Fetch:192: "https://streams.canonical.com/juju/gui/streams/v1/index2.sjson" not found}]
17:07:04 DEBUG juju.environs.simplestreams simplestreams.go:468 streams/v1/index2.sjson not accessed, trying legacy index path: streams/v1/index.sjson
17:07:04 DEBUG juju.environs.simplestreams simplestreams.go:487 read metadata index at "https://streams.canonical.com/juju/gui/streams/v1/index.sjson"
17:07:04 DEBUG juju.environs.simplestreams simplestreams.go:1019 finding products at path "streams/v1/com.canonical.streams-released-dashboard.sjson"
17:07:04 INFO  cmd bootstrap.go:871 Fetching Juju Dashboard 0.8.1
17:07:04 DEBUG juju.kubernetes.provider k8s.go:474 controller pod config: 
&{Tags:map[] Bootstrap:0xc0006c8a80 DisableSSLHostnameVerification:false ProxySettings:{Http: Https: Ftp: NoProxy:127.0.0.1,localhost,::1 AutoNoProxy:} Controller:map[agent-logfile-max-backups:2 agent-logfile-max-size:100M api-port:17070 api-port-open-delay:2s audit-log-capture-args:false audit-log-exclude-methods:[ReadOnlyMethods] audit-log-max-backups:10 audit-log-max-size:300M auditing-enabled:true batch-raft-fsm:false ca-cert:-----BEGIN CERTIFICATE-----
MIIEEjCCAnqgAwIBAgIUTObjs8mz/1yBuQ8qniMxu7Yr+0IwDQYJKoZIhvcNAQEL
BQAwITENMAsGA1UEChMESnVqdTEQMA4GA1UEAxMHanVqdS1jYTAeFw0yMzAxMjQx
NjAyMDJaFw0zMzAxMjQxNjA3MDJaMCExDTALBgNVBAoTBEp1anUxEDAOBgNVBAMT
B2p1anUtY2EwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQC+TbSrDWo+
jpZFCcVrJ2MW2gNzbpLK85KPzh8EnrAPQGMj5evCz5g8WagZKMXTiNtfK1wxI6BS
mrYdA1EveMPvubrqsGxcRgzirYxjXsvmtDQ8qW4WlvvsZmYeuM0Jr85YwuS1/4jz
sbm4Uj4JRKwqps66MvWS7cKDMU3AqaI/WB1zeDDM4mtE3zAp6BeqPukR83JT9SZq
WfYoy7oBVORGqx2/7nCXUZxVvGlqe76hnDypsv+r0BRo7eGGG7vh2mfKmSM4pDPB
ers7X+AY6Qx2RLM1dQE6gspE2cBqO//nLXM9ydam9Wpxt6oATjC4tt63SKDA6kky
b9YTxQ5Dw22Nq+X8o6jeXxRuBv+CnXyZAZPM9EtpHEcZjTpOH480HrJZQEMgjpfB
gpQMFjF2BBZ1UOePTXaPCKjgF7FOl1ZmOAADFBW9vzAxCZDu6xhihcbQEr0qvcNX
a1c/krPUW5bTiS3aMHBy2ifkQOhyqEB4tmyplcvoGoJH/6jHkVbd9RUCAwEAAaNC
MEAwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFBv7
BpSNybSdVFThhMtFn0MIsh4bMA0GCSqGSIb3DQEBCwUAA4IBgQA8ktptUe76jhHk
/3cGbSVHnzT1ga7ejgEuIwX6FZ/Ey579NHc47PtJmiosAb3yGn5vrVCZ1jGe/O8N
fwg7f8q9p5OD2lAJnDCzxhF+WGmdrm8iAJlFq+1N5i59htAnvOqYahcC/IhQL5C8
xslgxo+YG1AT2dUMCaBFPPQWWiFhA9kOvZMGM+mG+JltLrOZWiXhCTPUd4S/8JCB
0b0TtsdPM0gsflrd9LSuvcyw+80h/+PaMMXS/MTrOGquOr+W2YObKVazNKZozi4e
bHRb6Uw12vHC+0GNk+YotrPmh0xwww21QkjQULka9ThVzlMlgJvEhrq3w21Pl8jY
TsqhtIio2pv2PKQQ7quZ+DNQd+oTnYIAJt/vtLHTNGSirmQ4tiKi0NDm7dv5I2UC
8++1upb6YdBMhco9aVyZyOIxHc+eFHf+mlCJRgwVvMgU4AxeVlDLV86nq0qsxLSN
VyeyJMK3ud0ylflHypk/wVvbhn1Z7gJktWs/4e0XvyvVw/2b5t0=
-----END CERTIFICATE-----
 charmstore-url:https://api.jujucharms.com/charmstore controller-name:microk8s-localhost controller-uuid:8c022a7d-85a7-45a2-84bf-dd34d50338ee juju-db-snap-channel:4.4/stable max-agent-state-size:524288 max-charm-state-size:2097152 max-debug-log-duration:24h0m0s max-prune-txn-batch-size:1000000 max-prune-txn-passes:100 max-txn-log-size:10M metering-url:https://api.jujucharms.com/omnibus/v3 migration-agent-wait-time:15m model-logfile-max-backups:2 model-logfile-max-size:10M model-logs-size:20M mongo-memory-profile:default non-synced-writes-to-raft-log:false prune-txn-query-count:1000 prune-txn-sleep-time:10ms set-numa-control-policy:false state-port:37017] APIInfo:0xc000910000 ControllerTag:controller-8c022a7d-85a7-45a2-84bf-dd34d50338ee ControllerName:microk8s-localhost JujuVersion:2.9.38 DataDir:/var/lib/juju LogDir:/var/log/juju MetricsSpoolDir:/var/lib/juju/metricspool ControllerId:0 AgentEnvironment:map[PROVIDER_TYPE:kubernetes]}
17:07:04 INFO  cmd bootstrap.go:395 Creating k8s resources for controller "controller-microk8s-localhost"
17:07:04 DEBUG juju.kubernetes.provider bootstrap.go:628 creating controller service: 
&Service{ObjectMeta:{controller-service  controller-microk8s-localhost    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[app.kubernetes.io/managed-by:juju app.kubernetes.io/name:controller] map[controller.juju.is/id:8c022a7d-85a7-45a2-84bf-dd34d50338ee] [] []  []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:api-server,Protocol:,Port:17070,TargetPort:{0 17070 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app.kubernetes.io/name: controller,},ClusterIP:,Type:ClusterIP,ExternalIPs:[],SessionAffinity:,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:nil,ClusterIPs:[],IPFamilies:[],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
17:07:04 DEBUG juju.caas.kubernetes.provider.proxy setup.go:177 polling caas credential rbac secret, in 1 attempt, token for secret "controller-proxy" not found
17:07:07 DEBUG juju.kubernetes.provider configmap.go:84 updating configmap "controller-configmap"
17:07:07 DEBUG juju.kubernetes.provider configmap.go:84 updating configmap "controller-configmap"
17:07:08 DEBUG juju.kubernetes.provider bootstrap.go:1209 mongodb container args:
printf 'args="--dbpath=/var/lib/juju/db --sslPEMKeyFile=/var/lib/juju/server.pem --sslPEMKeyPassword=ignored --sslMode=requireSSL --port=37017 --journal --replSet=juju --quiet --oplogSize=1024 --auth --keyFile=/var/lib/juju/shared-secret --storageEngine=wiredTiger --bind_ip_all"\nipv6Disabled=$(sysctl net.ipv6.conf.all.disable_ipv6 -n)\nif [ $ipv6Disabled -eq 0 ]; then\n  args="${args} --ipv6"\nfi\nexec mongod ${args}\n'>/root/mongo.sh && chmod a+x /root/mongo.sh && /root/mongo.sh
17:07:08 DEBUG juju.kubernetes.provider k8s.go:2008 selecting units "app.kubernetes.io/name=controller" to watch
17:07:08 DEBUG juju.kubernetes.provider.watcher k8swatcher.go:114 fire notify watcher for controller-0
17:07:09 DEBUG juju.kubernetes.provider.watcher k8swatcher.go:114 fire notify watcher for controller
17:07:09 DEBUG juju.kubernetes.provider.watcher k8swatcher.go:114 fire notify watcher for controller-0

What is my mistake?

Thanks

So the Juju output just stops there? Can you provide some debug info from the k8s cluster? The bootstrap process creates a namespace called controller-&lt;name&gt;, where name is the name of the controller. Can you paste the YAML output of what's in that namespace? And the logs of what's going on inside the two controller pod containers?
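For reference, the requested information can be collected with microk8s's bundled kubectl. The namespace and pod name below are taken from the log above (controller-microk8s-localhost, controller-0); a minimal sketch:

```shell
# Dump every resource in the controller namespace as YAML
microk8s kubectl get all -n controller-microk8s-localhost -o yaml

# List the pods to confirm the controller pod name (controller-0 in the log above)
microk8s kubectl get pods -n controller-microk8s-localhost

# Show events for the controller pod, then logs from both of its containers
# (--all-containers avoids having to know the container names)
microk8s kubectl describe pod controller-0 -n controller-microk8s-localhost
microk8s kubectl logs controller-0 -n controller-microk8s-localhost --all-containers
```

The `describe` output is usually the most telling when a pod hangs: image pull failures, unbound PersistentVolumeClaims, and failed probes all show up in its Events section.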

Hi, I have the same problem: https://github.com/canonical/bundle-kubeflow/issues/537

I found the problem: my firewall was using the 'public' zone for all interfaces, which blocked the cluster's internal traffic. :slightly_smiling_face:
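For anyone hitting the same symptom with firewalld: one common workaround is to trust the cluster's internal traffic so the 'public' zone no longer drops it. A sketch, assuming microk8s's default pod and service subnets (10.1.0.0/16 and 10.152.183.0/24) and the Calico VXLAN interface; verify these against your own setup before applying:

```shell
# Inspect the current zone assignment first
sudo firewall-cmd --get-active-zones

# Trust microk8s's internal subnets (defaults; adjust if you changed them)
sudo firewall-cmd --permanent --zone=trusted --add-source=10.1.0.0/16
sudo firewall-cmd --permanent --zone=trusted --add-source=10.152.183.0/24

# Move the Calico VXLAN interface out of the public zone
sudo firewall-cmd --permanent --zone=trusted --add-interface=vxlan.calico

sudo firewall-cmd --reload
```

After reloading, re-running `juju bootstrap microk8s --debug` should be able to reach the controller pod.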