As you can see, there’s no /dev/sdb… just a /dev/sda, and it’s being used as my MAAS nodes’ root drive, so it’s not available to Ceph.
How would I go about solving this? Do I use fdisk and partition sda, or do I need a completely new drive? How do I get a /dev/sdb for Ceph, or can I get it done on /dev/sda?
And would I have to delete my entire model and start again for this?
If you look in the ceph-osd unit logs I bet you will find the reason why the devices are being rejected. You may have to use the zap-disk action, but look in the logs first:
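Something along these lines should surface the rejection reason (Juju 2.9-style syntax; the unit number and device path are placeholders, so adjust them to your deployment):

# replay the unit log for one of the OSD units and look for messages about the device
juju debug-log --replay --include ceph-osd/0
# or read the unit log directly on the machine
juju ssh ceph-osd/0 -- sudo tail -n 200 /var/log/juju/unit-ceph-osd-0.log
# only if the log shows the disk being skipped because it isn't clean: zap it (destroys data on that disk)
juju run-action --wait ceph-osd/0 zap-disk devices=/dev/sdb i-really-mean-it=true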
The status for the mon can be waiting, that’s fine. The OSD doesn’t have access to the full auth list, which is why I asked for it on the mon. If ceph-mon/0 isn’t around, can you run the command targeting a different ceph-mon unit?
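Assuming the command in question was pulling the auth list with ceph auth list, targeting another mon unit is just a matter of changing the unit name:

# run against whichever mon unit juju status shows as up
juju ssh ceph-mon/1 -- sudo ceph auth list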
It looks like it should have the permissions. Another thing I’m noticing is that it’s trying to do an upgrade. Did you install a different version first (Octopus, i.e. focal’s distro packages) and then upgrade to Pacific / focal-xena?
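One quick way to double-check which Ceph release actually landed on the units (unit names here are just examples):

# the configured package source for each application
juju config ceph-mon source
juju config ceph-osd source
# the Ceph version installed on a unit (Pacific is 16.x, Octopus is 15.x)
juju ssh ceph-mon/0 -- ceph --version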
But I ended up not having a block device for ceph-osd, so I continued on with the documentation, and after everything was done I used a WD MyCloud device to add a block device to my nodes.
Edit: Not sure if it means anything, but when I run fdisk -l, the block device /dev/sdb shows the disk label “dos”.
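For reference, here’s a read-only way to double-check that label (a “dos” label is just an MBR partition table); device path as above:

sudo fdisk -l /dev/sdb
# PTTYPE = partition table type, FSTYPE = any existing filesystem signature
lsblk -o NAME,SIZE,TYPE,PTTYPE,FSTYPE /dev/sdb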
So, I think there’s an issue with your deployment that has nothing to do with the OSD config. The OSD charm will happily ignore bad devices, so the fact that you’re getting hook errors is a bad sign, and the trace suggests it’s related to an attempted upgrade. To summarize, all of the following options should work fine, and if the named thing exists, the charm will use it as an OSD:
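For instance, on the individual-charm route the device list lives in the ceph-osd charm’s osd-devices config option, so something like this (device path purely illustrative) is all the charm needs to pick it up:

# set (or just print) the device list on the deployed application
juju config ceph-osd osd-devices='/dev/sdb'
juju config ceph-osd osd-devices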
Can you provide the output of juju status ceph-mon and juju status ceph-osd, and confirm which commit of openstack-base you’re using? Also, it would be great if you could confirm that the bundle I just linked is the one you’re using.
I installed OpenStack via individual charms, and as per the documentation it uses OpenStack Xena (cloud:focal-xena). The link you sent is the exact same one the documentation uses for installing OpenStack via the charm bundle, so it should be the same (except that the charm bundle expects an OSD count of 3, whereas the individual charm installation expects 4).
Also, in the yaml files for ceph-osd and ceph-mon the source is cloud:focal-xena, so I’m sure I didn’t use a different version before.
I’m about to drop offline for the day, but I wanted to confirm: are you using a bundle to deploy everything? The charm-deployment-guide page linked in your first post documents two distinct ways to approach deploying OpenStack via the Charms:
1. With individual charms, where you’d run things like juju deploy -n 4 --config ceph-osd.yaml --constraints tags=compute ceph-osd
2. Via a bundle, where you’d run something like juju deploy ./bundle.yaml
Nope! I’m installing it via individual charms, not through the charm bundle, so I ran things like option 1.
I’m doing the individual charm method because I’m trying to learn how it all works, and this is a proof-of-concept project I’m working on. So yeah, I was hoping to get this working without using the charm bundle. I’ve had to reinstall everything multiple times… I’m guessing I might have to do it again, but I was hoping to solve this without starting from scratch.
Thanks for your time, I appreciate it. If you have any other suggestions, do let me know whenever you’re free next, haha.