juju charm-ceph-osd rev 564: juju run remove-disk fails with "RADOS object not found"

On juju v3.2.3, the remove-disk action fails with "RADOS object not found".

The add-disk action succeeded, but remove-disk fails. Any update would be much appreciated, as I am stuck here. Thank you for the support!

root@c1:~# juju status
Model  Controller      Cloud/Region    Version  SLA          Timestamp
ceph   manual-default  manual/default  3.2.3    unsupported  19:38:50Z

App       Version  Status  Scale  Charm     Channel        Rev  Exposed  Message
ceph-mon  17.2.6   active  3      ceph-mon  quincy/stable  183  no       Unit is ready and clustered
ceph-osd  17.2.6   active  3      ceph-osd  quincy/stable  564  no       Unit is ready (1 OSD)

Remove disk: Fails

root@c1:~# juju run ceph-osd/1 remove-disk osd-ids=osd.3 purge=true
Running operation 7 with 1 task
  - task 8 on unit-ceph-osd-1

Waiting for task 8...
2023-09-25T19:32:25.179+0000 7fed1d95b640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.osd-removal.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
2023-09-25T19:32:25.179+0000 7fed1d95b640 -1 AuthRegistry(0x7fed1805fcf0) no keyring found at /etc/ceph/ceph.client.osd-removal.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
2023-09-25T19:32:25.183+0000 7fed1d95b640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.osd-removal.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
2023-09-25T19:32:25.183+0000 7fed1d95b640 -1 AuthRegistry(0x7fed18065788) no keyring found at /etc/ceph/ceph.client.osd-removal.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
2023-09-25T19:32:25.183+0000 7fed1d95b640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.osd-removal.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
2023-09-25T19:32:25.183+0000 7fed1d95b640 -1 AuthRegistry(0x7fed1d959f80) no keyring found at /etc/ceph/ceph.client.osd-removal.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
[errno 2] RADOS object not found (error connecting to the cluster)
Traceback (most recent call last):
  File "/var/lib/juju/agents/unit-ceph-osd-1/charm/actions/remove-disk", line 367, in <module>
    main()
  File "/var/lib/juju/agents/unit-ceph-osd-1/charm/actions/remove-disk", line 347, in main
    action_osd.remove(purge, timeout, force)
  File "/var/lib/juju/agents/unit-ceph-osd-1/charm/actions/remove-disk", line 213, in remove
    reweight_osd(self.osd_id)
  File "/var/lib/juju/agents/unit-ceph-osd-1/charm/actions/remove-disk", line 122, in reweight_osd
    subprocess.check_call(['ceph', '--id', 'osd-removal',
  File "/usr/lib/python3.10/subprocess.py", line 369, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['ceph', '--id', 'osd-removal', 'osd', 'crush', 'reweight', 'osd.3', '0']' returned non-zero exit status 1.
ERROR the following task failed:
 - id "8" with return code 1
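The traceback shows the action running ceph with --id osd-removal, and the client aborting because no keyring exists at any of the paths it searches. As a quick local check (just my own diagnostic sketch, using the exact paths from the log above), this confirms which keyring files are present on the osd unit:

```shell
# Check for the keyring files the remove-disk action searches,
# in the same order as the error message above.
for f in /etc/ceph/ceph.client.osd-removal.keyring \
         /etc/ceph/ceph.keyring \
         /etc/ceph/keyring \
         /etc/ceph/keyring.bin; do
  if [ -e "$f" ]; then
    echo "present: $f"
  else
    echo "missing: $f"
  fi
done
```

On my unit all four come back "missing", which matches the "no keyring found ... disabling cephx" lines in the log.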

Add disk: Success

root@c1:~# juju run ceph-osd/1 add-disk osd-devices=/dev/sdc
Running operation 5 with 1 task
  - task 6 on unit-ceph-osd-1

Waiting for task 6...
  Physical volume "/dev/sdc" successfully created.
  Volume group "ceph-b12e1e3b-dd93-4c3c-9594-7dd847e17b08" successfully created
  Logical volume "osd-block-b12e1e3b-dd93-4c3c-9594-7dd847e17b08" created.
partx: /dev/sdc: failed to read partition table
  Failed to find physical volume "/dev/sdc".
  Failed to find physical volume "/dev/sdc".
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new b12e1e3b-dd93-4c3c-9594-7dd847e17b08
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-3
--> Executable selinuxenabled not in PATH: /var/lib/juju/tools/unit-ceph-osd-1:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
Running command: /usr/bin/chown -h ceph:ceph /dev/ceph-b12e1e3b-dd93-4c3c-9594-7dd847e17b08/osd-block-b12e1e3b-dd93-4c3c-9594-7dd847e17b08
Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Running command: /usr/bin/ln -s /dev/ceph-b12e1e3b-dd93-4c3c-9594-7dd847e17b08/osd-block-b12e1e3b-dd93-4c3c-9594-7dd847e17b08 /var/lib/ceph/osd/ceph-3/block
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-3/activate.monmap
 stderr: 2023-09-25T19:10:48.123+0000 7f77e5be0640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
2023-09-25T19:10:48.123+0000 7f77e5be0640 -1 AuthRegistry(0x7f77e005fe00) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
 stderr: got monmap epoch 1
--> Creating keyring file for osd.3
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3/keyring
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3/
Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 3 --monmap /var/lib/ceph/osd/ceph-3/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-3/ --osd-uuid b12e1e3b-dd93-4c3c-9594-7dd847e17b08 --setuser ceph --setgroup ceph
 stderr: 2023-09-25T19:10:48.527+0000 7fc140651800 -1 bluestore(/var/lib/ceph/osd/ceph-3/) _read_fsid unparsable uuid
--> ceph-volume lvm prepare successful for: ceph-b12e1e3b-dd93-4c3c-9594-7dd847e17b08/osd-block-b12e1e3b-dd93-4c3c-9594-7dd847e17b08
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3
Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-b12e1e3b-dd93-4c3c-9594-7dd847e17b08/osd-block-b12e1e3b-dd93-4c3c-9594-7dd847e17b08 --path /var/lib/ceph/osd/ceph-3 --no-mon-config
Running command: /usr/bin/ln -snf /dev/ceph-b12e1e3b-dd93-4c3c-9594-7dd847e17b08/osd-block-b12e1e3b-dd93-4c3c-9594-7dd847e17b08 /var/lib/ceph/osd/ceph-3/block
Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-3/block
Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3
Running command: /usr/bin/systemctl enable ceph-volume@lvm-3-b12e1e3b-dd93-4c3c-9594-7dd847e17b08
 stderr: Created symlink /<email address hidden> → /lib/systemd/system/ceph-volume@.service.
Running command: /usr/bin/systemctl enable --runtime ceph-osd@3
 stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@3.service → /lib/systemd/system/ceph-osd@.service.
Running command: /usr/bin/systemctl start ceph-osd@3
--> ceph-volume lvm activate successful for osd ID: 3
--> ceph-volume lvm create successful for: ceph-b12e1e3b-dd93-4c3c-9594-7dd847e17b08/osd-block-b12e1e3b-dd93-4c3c-9594-7dd847e17b08


root@c1:~# juju ssh ceph-mon/leader sudo ceph osd tree
ID  CLASS  WEIGHT   TYPE NAME     STATUS  REWEIGHT  PRI-AFF
-1         0.07794  root default
-5         0.01949      host vm1
 2    hdd  0.01949          osd.2      up   1.00000  1.00000
-7         0.03897      host vm2
 1    hdd  0.01949          osd.1      up   1.00000  1.00000
 3    hdd  0.01949          osd.3      up   1.00000  1.00000
-3         0.01949      host vm3
 0    hdd  0.01949          osd.0      up   1.00000  1.00000
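So osd.3 is still up and in the tree. In case it helps triage, here is the manual workaround I am considering, written as a dry-run sketch (these are my assumptions, not charm behaviour: client.osd-removal is the cephx user named in the traceback, and the equivalent OSD-removal steps are run via the mon leader's admin credentials instead):

```shell
# DRY_RUN=1 (the default) only prints each command instead of running it.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

# Option A: inspect the cephx user the action tries to use, to see
# whether its key can be re-exported to the osd unit.
run juju ssh ceph-mon/leader "sudo ceph auth get client.osd-removal"

# Option B: remove osd.3 by hand from the mon, mirroring what the
# action's traceback shows it was attempting (crush reweight first).
run juju ssh ceph-mon/leader "sudo ceph osd crush reweight osd.3 0"
run juju ssh ceph-mon/leader "sudo ceph osd out osd.3"
run juju ssh ceph-mon/leader "sudo ceph osd purge osd.3 --yes-i-really-mean-it"
```

I have not run the destructive steps yet; I would still prefer the remove-disk action to work, since the manual path bypasses whatever bookkeeping the charm does.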