Ceph-mon and ceph-osd on MAAS

I’m having trouble setting up storage for Charmed Kubernetes.

I’m following this guide: Storage | Ubuntu

ceph-mon deploys fine with:

juju deploy -n 3 ceph-mon

Model       Controller       Cloud/Region        Version  SLA          Timestamp
ceph-model  maas-controller  maas-cloud/default  2.9.37   unsupported  18:46:12Z

App       Version  Status   Scale  Charm     Channel  Rev  Exposed  Message
ceph-mon  10.2.11  waiting      3  ceph-mon  stable    73  no       Monitor bootstrapped but waiting for number of OSDs to reach expected-osd-count (3)

Unit         Workload  Agent  Machine  Public address  Ports  Message
ceph-mon/0*  waiting   idle   0        10.11.30.49            Monitor bootstrapped but waiting for number of OSDs to reach expected-osd-count (3)
ceph-mon/1   waiting   idle   1        10.11.30.50            Monitor bootstrapped but waiting for number of OSDs to reach expected-osd-count (3)
ceph-mon/2   waiting   idle   2        10.11.30.51            Monitor bootstrapped but waiting for number of OSDs to reach expected-osd-count (3)

Machine  State    Address      Inst id        Series  AZ       Message
0        started  10.11.30.49  ruling-gecko   xenial  default  Deployed
1        started  10.11.30.50  eager-magpie   xenial  default  Deployed
2        started  10.11.30.51  moving-poodle  xenial  default  Deployed

Relation provider  Requirer      Interface  Type  Message
ceph-mon:mon       ceph-mon:mon  ceph       peer

Then, when I run this, the units seem to hang with “allocating” and “agent initializing”:

juju deploy -n 3 ceph-osd --storage osd-devices=maas,32G,2

Model       Controller       Cloud/Region        Version  SLA          Timestamp
ceph-model  maas-controller  maas-cloud/default  2.9.37   unsupported  18:52:22Z

App       Version  Status   Scale  Charm     Channel  Rev  Exposed  Message
ceph-mon  10.2.11  waiting      3  ceph-mon  stable    73  no       Monitor bootstrapped but waiting for number of OSDs to reach expected-osd-count (3)
ceph-osd           waiting    0/3  ceph-osd  stable   513  no       agent initializing

Unit         Workload  Agent       Machine  Public address  Ports  Message
ceph-mon/0*  waiting   idle        0        10.11.30.49            Monitor bootstrapped but waiting for number of OSDs to reach expected-osd-count (3)
ceph-mon/1   waiting   idle        1        10.11.30.50            Monitor bootstrapped but waiting for number of OSDs to reach expected-osd-count (3)
ceph-mon/2   waiting   idle        2        10.11.30.51            Monitor bootstrapped but waiting for number of OSDs to reach expected-osd-count (3)
ceph-osd/0   waiting   allocating  3        10.11.30.52            agent initializing
ceph-osd/1*  waiting   allocating  4        10.11.30.53            agent initializing
ceph-osd/2   waiting   allocating  5        10.11.30.54            agent initializing

Machine  State    Address      Inst id        Series  AZ       Message
0        started  10.11.30.49  ruling-gecko   xenial  default  Deployed
1        started  10.11.30.50  eager-magpie   xenial  default  Deployed
2        started  10.11.30.51  moving-poodle  xenial  default  Deployed
3        started  10.11.30.52  fit-pup        xenial  default  Deployed
4        started  10.11.30.53  well-frog      xenial  default  Deployed
5        started  10.11.30.54  clean-feline   xenial  default  Deployed

Relation provider  Requirer      Interface  Type  Message
ceph-mon:mon       ceph-mon:mon  ceph       peer

Storage Unit  Storage ID     Type   Pool  Mountpoint  Size   Status    Message
ceph-osd/0    osd-devices/0  block  maas              30GiB  attached
ceph-osd/0    osd-devices/1  block  maas              30GiB  attached
ceph-osd/1    osd-devices/2  block  maas              30GiB  attached
ceph-osd/1    osd-devices/3  block  maas              30GiB  attached
ceph-osd/2    osd-devices/4  block  maas              30GiB  attached
ceph-osd/2    osd-devices/5  block  maas              30GiB  attached
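
The volumes are reported as attached, so the hang looks like it is in the unit agents rather than in storage provisioning. I assume the status history would help narrow down where the units stall:

juju show-status-log ceph-osd/0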

I think my problem is closely related to this:

or this:

The ceph-osd units get the requested block storage:

ubuntu@fit-pup:~$ sudo lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda    253:0    0 59.6G  0 disk
├─vda1 253:1    0    1M  0 part
└─vda2 253:2    0 59.6G  0 part /
vdb    253:16   0 29.8G  0 disk
vdc    253:32   0 29.8G  0 disk
ubuntu@fit-pup:~$ sudo fdisk -l /dev/vd[bc]
Disk /dev/vdb: 29.8 GiB, 32000000000 bytes, 62500000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/vdc: 29.8 GiB, 32000000000 bytes, 62500000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

I ran this in the hope that it would help, but no luck:

juju config ceph-osd osd-devices='/dev/vdb /dev/vdc'
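
Reading the key back should at least confirm that the setting is stored on the model side, even if the stuck unit agents never act on it:

juju config ceph-osd osd-devices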

When I run this, it just hangs:

juju run-action ceph-osd/0 --wait list-disks
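
Presumably the action just sits queued because the unit agents never finish initializing; assuming Juju 2.9’s operations listing works here, that could be confirmed with:

juju operations --apps ceph-osd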

The only workaround I’ve found so far is this:

juju deploy -n 3 ceph-osd --constraints "root-disk=512G"

juju add-storage ceph-osd/0 osd-devices=256G,1
juju add-storage ceph-osd/1 osd-devices=256G,1
juju add-storage ceph-osd/2 osd-devices=256G,1

But this method forces the units to create the storage on loop devices.
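
My understanding is that the maas storage pool can only be satisfied when MAAS provisions the machine, which would explain why add-storage against an already-deployed machine falls back to loop devices. The storage pools available on the model can be listed with:

juju storage-pools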

The debug log doesn’t tell me anything useful either:

juju debug-log --include ceph-osd --tail

unit-ceph-osd-2: 18:52:20 INFO juju.worker.apicaller [1f6be2] password changed for "unit-ceph-osd-2"
unit-ceph-osd-2: 18:52:20 INFO juju.worker.apicaller [1f6be2] "unit-ceph-osd-2" successfully connected to "10.11.30.21:17070"
unit-ceph-osd-2: 18:52:20 INFO juju.worker.migrationminion migration phase is now: NONE
unit-ceph-osd-2: 18:52:20 INFO juju.worker.logger logger worker started
unit-ceph-osd-2: 18:52:20 INFO juju.worker.upgrader no waiter, upgrader is done
unit-ceph-osd-2: 18:52:20 ERROR juju.worker.meterstatus error running "meter-status-changed": charm missing from disk
unit-ceph-osd-2: 18:52:20 INFO juju.worker.uniter unit "ceph-osd/2" started
unit-ceph-osd-2: 18:52:20 INFO juju.worker.uniter resuming charm install
unit-ceph-osd-2: 18:52:20 INFO juju.worker.uniter.charm downloading ch:amd64/xenial/ceph-osd-513 from API server
unit-ceph-osd-2: 18:52:21 INFO juju.worker.uniter hooks are retried true
unit-ceph-osd-1: 18:51:58 INFO juju Starting unit workers for "ceph-osd/1"
unit-ceph-osd-1: 18:51:58 INFO juju.worker.apicaller [1f6be2] "unit-ceph-osd-1" successfully connected to "10.11.30.21:17070"
unit-ceph-osd-1: 18:51:58 INFO juju.worker.apicaller [1f6be2] password changed for "unit-ceph-osd-1"
unit-ceph-osd-1: 18:51:58 INFO juju.worker.apicaller [1f6be2] "unit-ceph-osd-1" successfully connected to "10.11.30.21:17070"
unit-ceph-osd-1: 18:51:58 INFO juju.worker.migrationminion migration phase is now: NONE
unit-ceph-osd-1: 18:51:58 INFO juju.worker.logger logger worker started
unit-ceph-osd-1: 18:51:58 ERROR juju.worker.meterstatus error running "meter-status-changed": charm missing from disk
unit-ceph-osd-1: 18:51:58 INFO juju.worker.upgrader no waiter, upgrader is done
unit-ceph-osd-1: 18:51:59 INFO juju.worker.uniter unit "ceph-osd/1" started
unit-ceph-osd-1: 18:51:59 INFO juju.worker.uniter resuming charm install
unit-ceph-osd-1: 18:51:59 INFO juju.worker.uniter.charm downloading ch:amd64/xenial/ceph-osd-513 from API server
unit-ceph-osd-1: 18:51:59 INFO juju.worker.uniter hooks are retried true
unit-ceph-osd-0: 18:52:15 INFO juju Starting unit workers for "ceph-osd/0"
unit-ceph-osd-0: 18:52:15 INFO juju.worker.apicaller [1f6be2] "unit-ceph-osd-0" successfully connected to "10.11.30.21:17070"
unit-ceph-osd-0: 18:52:15 INFO juju.worker.apicaller [1f6be2] password changed for "unit-ceph-osd-0"
unit-ceph-osd-0: 18:52:15 INFO juju.worker.apicaller [1f6be2] "unit-ceph-osd-0" successfully connected to "10.11.30.21:17070"
unit-ceph-osd-0: 18:52:16 INFO juju.worker.migrationminion migration phase is now: NONE
unit-ceph-osd-0: 18:52:16 INFO juju.worker.logger logger worker started
unit-ceph-osd-0: 18:52:16 INFO juju.worker.upgrader no waiter, upgrader is done
unit-ceph-osd-0: 18:52:16 ERROR juju.worker.meterstatus error running "meter-status-changed": charm missing from disk
unit-ceph-osd-0: 18:52:16 INFO juju.worker.uniter unit "ceph-osd/0" started
unit-ceph-osd-0: 18:52:16 INFO juju.worker.uniter resuming charm install
unit-ceph-osd-0: 18:52:16 INFO juju.worker.uniter.charm downloading ch:amd64/xenial/ceph-osd-513 from API server
unit-ceph-osd-0: 18:52:17 INFO juju.worker.uniter hooks are retried true
unit-ceph-osd-2: 18:52:20 INFO juju Starting unit workers for "ceph-osd/2"
unit-ceph-osd-2: 18:52:20 INFO juju.worker.apicaller [1f6be2] "unit-ceph-osd-2" successfully connected to "10.11.30.21:17070"
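
Since the unit logs stop right after "hooks are retried true", I assume the next step is to look at the machine agent logs directly and to bump the unit log level, something along these lines:

juju ssh 3 -- sudo tail -n 100 /var/log/juju/machine-3.log
juju model-config logging-config="<root>=WARNING;unit=DEBUG"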

How can I troubleshoot this? What am I doing wrong?

Tagging @wallyworld

Another workaround is manually adding the machines and storage, then deploying ceph-osd:

juju add-model --controller maas-controller ceph-model maas-cloud
juju add-machine -n 3 --disks="maas,256G,2"
juju deploy -n 1 ceph-osd --to 0 --config osd-devices="/dev/vdb /dev/vdc" --config ceph-cluster-network=192.168.21.0/24 --bind "cluster=home"
juju add-unit -n 1 ceph-osd --to 1
juju add-unit -n 1 ceph-osd --to 2
juju deploy -n 3 ceph-mon --config ceph-cluster-network=192.168.21.0/24 --bind "cluster=home"
juju add-relation ceph-osd ceph-mon

This seems to work as intended:

Model       Controller       Cloud/Region        Version  SLA          Timestamp
ceph-model  maas-controller  maas-cloud/default  2.9.37   unsupported  20:36:48Z

App       Version  Status  Scale  Charm     Channel  Rev  Exposed  Message
ceph-mon  15.2.17  active      3  ceph-mon  stable    73  no       Unit is ready and clustered
ceph-osd  15.2.17  active      3  ceph-osd  stable   513  no       Unit is ready (2 OSD)

Unit         Workload  Agent  Machine  Public address  Ports  Message
ceph-mon/0*  active    idle   3        10.11.30.50            Unit is ready and clustered
ceph-mon/1   active    idle   4        10.11.30.52            Unit is ready and clustered
ceph-mon/2   active    idle   5        10.11.30.54            Unit is ready and clustered
ceph-osd/0*  active    idle   0        10.11.30.49            Unit is ready (2 OSD)
ceph-osd/1   active    idle   1        10.11.30.51            Unit is ready (2 OSD)
ceph-osd/2   active    idle   2        10.11.30.53            Unit is ready (2 OSD)

Machine  State    Address      Inst id        Series  AZ       Message
0        started  10.11.30.49  sharp-ram      focal   default  Deployed
1        started  10.11.30.51  needed-racer   focal   default  Deployed
2        started  10.11.30.53  easy-guinea    focal   default  Deployed
3        started  10.11.30.50  alert-fowl     focal   default  Deployed
4        started  10.11.30.52  saving-rodent  focal   default  Deployed
5        started  10.11.30.54  secure-weevil  focal   default  Deployed

Relation provider  Requirer      Interface  Type     Message
ceph-mon:mon       ceph-mon:mon  ceph       peer
ceph-mon:osd       ceph-osd:mon  ceph-osd   regular

Storage Unit  Storage ID  Type  Pool  Mountpoint  Size  Status  Message

However, the storage section doesn’t report the osd-devices.

On the other hand, the volumes do sort of show up with:

root@juju:~# juju list-storage --volume
Machine  Unit  Storage ID  Volume ID  Provider ID  Device  Size    State     Message
0                          0          volume-0             238GiB  attached
0                          1          volume-1             238GiB  attached
1                          2          volume-2             238GiB  attached
1                          3          volume-3             238GiB  attached
2                          4          volume-4             238GiB  attached
2                          5          volume-5             238GiB  attached

Though this also means I can’t do any storage maintenance like juju detach-storage, since the volumes have no Storage ID or unit for the command to target.
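
For reference, this is the kind of maintenance I would like to be able to do eventually, once the volumes show up under a storage ID like osd-devices/0:

juju detach-storage osd-devices/0
juju remove-storage osd-devices/0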