ERROR preferred storage "microk8s.io/hostpath" not available

I've seen this post with a similar error, but the solution there didn't work for me. Should I just wait a day and try again?
cloudop@cloudoperator:~$ juju bootstrap microk8s --bootstrap-constraints cores=2 --constraints mem=4G con-juju --verbose --debug --agent-version="2.9.22"
02:53:05 INFO juju.cmd supercommand.go:56 running juju [2.9.25 0 695d9bc09df0725a168df52707336bb6a3a92ff7 gc go1.17.6]
02:53:05 DEBUG juju.cmd supercommand.go:57 args: []string{"/snap/juju/18221/bin/juju", "bootstrap", "microk8s", "--bootstrap-constraints", "cores=2", "--constraints", "mem=4G", "con-juju", "--verbose", "--debug", "--agent-version=2.9.22"}
02:53:07 DEBUG juju.kubernetes.provider provider.go:140 opening model "add-cloud".
ERROR preferred storage "microk8s.io/hostpath" not available
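As far as I can tell, Juju is looking for a storage class backed by the microk8s.io/hostpath provisioner. For what it's worth, this is roughly how I've been double-checking on one of the nodes that such a class exists (just a sketch; the jsonpath is only there to print name and provisioner side by side):

# List each storage class with its provisioner, to confirm one
# backed by microk8s.io/hostpath is actually registered.
microk8s kubectl get storageclass \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.provisioner}{"\n"}{end}'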
ryzengrind@calm-fox:~$ microk8s enable storage; microk8s status --wait-ready
DEPRECATION WARNING: 'storage' is deprecated and will soon be removed. Please use 'hostpath-storage' instead.
Enabling default storage class.
WARNING: Hostpath storage is not suitable for production environments.
deployment.apps/hostpath-provisioner created
storageclass.storage.k8s.io/microk8s-hostpath created
serviceaccount/microk8s-hostpath created
clusterrole.rbac.authorization.k8s.io/microk8s-hostpath created
clusterrolebinding.rbac.authorization.k8s.io/microk8s-hostpath created
Storage will be available soon.
microk8s is running
high-availability: yes
datastore master nodes: 192.168.20.232:19001 192.168.20.225:19001 192.168.20.228:19001
datastore standby nodes: none
addons:
enabled:
dns # CoreDNS
ha-cluster # Configure high availability on the current node
helm3 # Helm 3 - Kubernetes package manager
hostpath-storage # Storage class; allocates storage from host directory
disabled:
dashboard # The Kubernetes dashboard
dashboard-ingress # Ingress definition for Kubernetes dashboard
helm # Helm 2 - the package manager for Kubernetes
host-access # Allow Pods connecting to Host services smoothly
ingress # Ingress controller for external access
linkerd # Linkerd is a service mesh for Kubernetes and other frameworks
metallb # Loadbalancer for your Kubernetes cluster
metrics-server # K8s Metrics Server for API access to service metrics
openebs # OpenEBS is the open-source storage solution for Kubernetes
portainer # Portainer UI for your Kubernetes cluster
prometheus # Prometheus operator for monitoring and logging
rbac # Role-Based Access Control for authorisation
registry # Private image registry exposed on localhost:32000
traefik # traefik Ingress controller for external access
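Since the enable step only reports "Storage will be available soon.", I wonder whether the provisioner was still coming up when I retried the bootstrap. This is roughly how I've been waiting for it before trying again (the deployment name matches the enable output above; I'm assuming it lands in kube-system):

# Block until the hostpath provisioner created by the addon has rolled out.
microk8s kubectl -n kube-system rollout status deployment/hostpath-provisioner --timeout=120s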
ryzengrind@calm-fox:~$ snap info microk8s
name: microk8s
summary: Kubernetes for workstations and appliances
publisher: Canonical✓
store-url: https://snapcraft.io/microk8s
contact: https://github.com/ubuntu/microk8s
license: unset
description: |
MicroK8s is a small, fast, secure, single node Kubernetes that installs on
just about any Linux box. Use it for offline development, prototyping,
testing, or use it on a VM as a small, cheap, reliable k8s for CI/CD. It's
also a great k8s for appliances - develop your IoT apps for k8s and deploy
them to MicroK8s on your boxes.
commands:
- microk8s.add-node
- microk8s.cilium
- microk8s.config
- microk8s.ctr
- microk8s.dashboard-proxy
- microk8s.dbctl
- microk8s.disable
- microk8s.enable
- microk8s.helm
- microk8s.helm3
- microk8s.inspect
- microk8s.istioctl
- microk8s.join
- microk8s.juju
- microk8s.kubectl
- microk8s.leave
- microk8s.linkerd
- microk8s
- microk8s.refresh-certs
- microk8s.remove-node
- microk8s.reset
- microk8s.start
- microk8s.status
- microk8s.stop
services:
microk8s.daemon-apiserver: simple, enabled, inactive
microk8s.daemon-apiserver-kicker: simple, enabled, active
microk8s.daemon-cluster-agent: simple, enabled, active
microk8s.daemon-containerd: notify, enabled, active
microk8s.daemon-control-plane-kicker: simple, enabled, inactive
microk8s.daemon-controller-manager: simple, enabled, inactive
microk8s.daemon-etcd: simple, enabled, inactive
microk8s.daemon-flanneld: simple, enabled, inactive
microk8s.daemon-k8s-dqlite: simple, enabled, active
microk8s.daemon-kubelet: simple, enabled, inactive
microk8s.daemon-kubelite: simple, enabled, active
microk8s.daemon-proxy: simple, enabled, inactive
microk8s.daemon-scheduler: simple, enabled, inactive
microk8s.daemon-traefik: simple, enabled, inactive
snap-id: EaXqgt1lyCaxKaQCU349mlodBkDCXRcg
tracking: latest/beta
refresh-date: 5 days ago, at 04:39 UTC
channels:
1.23/stable: v1.23.3 2022-01-27 (2954) 192MB classic
1.23/candidate: v1.23.3 2022-01-27 (2954) 192MB classic
1.23/beta: v1.23.3 2022-01-27 (2954) 192MB classic
1.23/edge: v1.23.4 2022-02-23 (3024) 192MB classic
latest/stable: v1.23.3 2022-02-05 (2954) 192MB classic
latest/candidate: v1.23.3 2022-01-27 (2955) 192MB classic
latest/beta: v1.23.3 2022-01-27 (2955) 192MB classic
latest/edge: v1.23.4 2022-03-09 (3043) 194MB classic
1.22/stable: v1.22.6 2022-02-06 (2952) 168MB classic
1.22/candidate: v1.22.6 2022-01-27 (2952) 168MB classic
1.22/beta: v1.22.6 2022-01-27 (2952) 168MB classic
1.22/edge: v1.22.7 2022-02-21 (3012) 168MB classic
1.21/stable: v1.21.9 2022-02-06 (2953) 167MB classic
1.21/candidate: v1.21.9 2022-01-27 (2953) 167MB classic
1.21/beta: v1.21.9 2022-01-27 (2953) 167MB classic
1.21/edge: v1.21.10 2022-02-21 (3013) 167MB classic
1.20/stable: v1.20.13 2021-12-08 (2762) 189MB classic
1.20/candidate: v1.20.13 2021-12-08 (2762) 189MB classic
1.20/beta: v1.20.13 2021-12-08 (2762) 189MB classic
1.20/edge: v1.20.14 2022-01-11 (2852) 185MB classic
1.19/stable: v1.19.15 2021-09-30 (2535) 184MB classic
1.19/candidate: v1.19.15 2021-09-29 (2535) 184MB classic
1.19/beta: v1.19.15 2021-09-29 (2535) 184MB classic
1.19/edge: v1.19.16 2022-01-07 (2823) 180MB classic
1.18/stable: v1.18.20 2021-07-18 (2272) 169MB classic
1.18/candidate: v1.18.20 2021-07-12 (2272) 169MB classic
1.18/beta: v1.18.20 2021-07-12 (2272) 169MB classic
1.18/edge: v1.18.20 2021-11-03 (2651) 169MB classic
1.17/stable: v1.17.17 2021-04-03 (1923) 149MB classic
1.17/candidate: v1.17.17 2021-04-03 (1923) 149MB classic
1.17/beta: v1.17.17 2021-04-03 (1923) 149MB classic
1.17/edge: v1.17.17 2021-01-13 (1923) 149MB classic
1.16/stable: v1.16.15 2020-09-11 (1672) 150MB classic
1.16/candidate: v1.16.15 2020-09-03 (1672) 150MB classic
1.16/beta: v1.16.15 2020-09-03 (1672) 150MB classic
1.16/edge: v1.16.15 2020-09-02 (1672) 150MB classic
1.15/stable: v1.15.11 2020-03-27 (1305) 144MB classic
1.15/candidate: v1.15.11 2020-03-27 (1305) 144MB classic
1.15/beta: v1.15.11 2020-03-27 (1305) 144MB classic
1.15/edge: v1.15.11 2020-03-26 (1305) 144MB classic
1.14/stable: v1.14.10 2019-12-20 (1121) 164MB classic
1.14/candidate: ↑
1.14/beta: ↑
1.14/edge: v1.14.10 2020-03-26 (1306) 164MB classic
1.13/stable: v1.13.6 2019-05-20 (580) 173MB classic
1.13/candidate: ↑
1.13/beta: ↑
1.13/edge: ↑
1.12/stable: v1.12.9 2019-06-07 (613) 197MB classic
1.12/candidate: ↑
1.12/beta: ↑
1.12/edge: ↑
1.11/stable: –
1.11/candidate: –
1.11/beta: –
1.11/edge: v1.11.1 2018-07-24 (138) 164MB classic
1.10/stable: –
1.10/candidate: –
1.10/beta: –
1.10/edge: v1.11.1 2018-07-24 (137) 164MB classic
installed: v1.23.3 (2955) 192MB classic
cloudop@cloudoperator:~$ juju status
ERROR No controllers registered.
Please either create a new controller using "juju bootstrap" or connect to
another controller that you have been given access to using "juju register".
cloudop@cloudoperator:~$ juju debug-log
ERROR No controllers registered.
Please either create a new controller using "juju bootstrap" or connect to
another controller that you have been given access to using "juju register".
cloudop@cloudoperator:~$ juju clouds
Only clouds with registered credentials are shown.
There are more clouds, use --all to see them.
You can bootstrap a new controller using one of these clouds...
Clouds available on the client:
Cloud       Regions  Default    Type  Credentials  Source    Description
localhost   1        localhost  lxd   0            built-in  LXD Container Hypervisor
maas-cloud  1        default    maas  1            local     Metal As A Service
maas-mk8s   1        default    maas  1            local     Metal As A Service
microk8s    1        localhost  k8s   0            built-in  A Kubernetes Cluster
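One thing I notice above is that the built-in microk8s cloud shows 0 credentials on this client, and the juju commands and the microk8s commands are coming from different hosts. If it matters, I assume I could also register the cluster explicitly instead of relying on the built-in cloud, something along these lines (rpi-mk8s is just a placeholder name):

# On one of the MicroK8s nodes: export the cluster's kubeconfig.
microk8s config > microk8s-kubeconfig.yaml
# On the Juju client host, after copying the file over: register the cluster and bootstrap against it.
juju add-k8s rpi-mk8s --client < microk8s-kubeconfig.yaml
juju bootstrap rpi-mk8s con-juju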
Any insight into how I can bootstrap a Juju controller on my arm64 RPi cluster would be greatly appreciated. The storage class itself does appear to exist, with exactly the provisioner the error complains about:
cloudop@cloudoperator:~$ kubectl get storageclass
NAME                          PROVISIONER            RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
microk8s-hostpath (default)   microk8s.io/hostpath   Delete          Immediate           false                  26m
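If it helps narrow things down, the next thing I plan to try is pointing the bootstrap at that storage class explicitly (I'm assuming the operator-storage and workload-storage config keys are still honoured at bootstrap time on 2.9):

juju bootstrap microk8s con-juju \
  --bootstrap-constraints cores=2 --constraints mem=4G \
  --config operator-storage=microk8s-hostpath \
  --config workload-storage=microk8s-hostpath \
  --agent-version=2.9.22 --debug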