Ceph device mapper (/dev/mapper/ceph-) still present after removing OSD, help please!

After removing an OSD following the instructions in Removing OSDs - Pre-Quincy - Documentation - Ubuntu Community Hub, my OSD tree shows that it was successfully removed. But I can still see the device mapper entry that was used for that unit: /dev/mapper/ceph--df9c0ef1--fbd7--46a0--bb7f--c54b44d2ed54-osd--block--df9c0ef1--fbd7--46a0--bb7f--c54b44d2ed54.

My questions are:

1.- Is that normal behavior?
2.- Will I be able to run juju run-action ceph-osd/1 add-disk osd-devices?
3.- Or should I remove that /dev/mapper/ entry first?

I hope someone can help me with this.

Thanks.

geoint@MAAS-01:~$ juju ssh ceph-mon/leader sudo ceph osd tree
ID   CLASS  WEIGHT     TYPE NAME               STATUS  REWEIGHT  PRI-AFF
 -1         183.32373  root default                                     
-21          21.83060      host calm-stag                               
  6    hdd   21.83060          osd.6               up   1.00000  1.00000
 -7          21.38379      host clean-hog                               
  4    hdd   21.38379          osd.4               up   1.00000  1.00000
-13          21.38379      host exotic-goblin                           
  5    hdd   21.38379          osd.5               up   1.00000  1.00000
 -5                 0      host key-ox                                  
 -9          21.38379      host liked-hermit                            
  1    hdd   21.38379          osd.1               up   1.00000  1.00000
-17          21.83060      host pumped-bat                              
  7    hdd   21.83060          osd.7               up   1.00000  1.00000
-15                 0      host sharp-grouse                            
-19          32.74359      host sharp-heron                             
  8    hdd   32.74359          osd.8               up   1.00000  1.00000
-11          21.38379      host stable-liger                            
  0    hdd   21.38379          osd.0               up   1.00000  1.00000
 -3          21.38379      host star-koala                              
  3    hdd   21.38379          osd.3               up   1.00000  1.00000



Disk /dev/mapper/ceph--df9c0ef1--fbd7--46a0--bb7f--c54b44d2ed54-osd--block--df9c0ef1--fbd7--46a0--bb7f--c54b44d2ed54: 21.39 TiB, 23511720525824 bytes, 45921329152 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
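
For anyone hitting the same thing: that leftover /dev/mapper entry is the LVM logical volume that ceph-volume created for the old OSD. A quick way to confirm what is behind it, using the standard lvm2 tooling on the OSD unit, would be something like:

# List ceph-related device-mapper entries and the LVM volumes behind them
sudo dmsetup ls | grep ceph
sudo lvs -o lv_name,vg_name,lv_size,devices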

I ran

juju run-action ceph-osd/3 add-disk osd-devices='/dev/sdb /dev/sdc' --wait

but I still got the same result:

No block devices detected using current configuration
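
If I understand the charm correctly, it (via ceph-volume) will skip disks that still carry old LVM or Ceph signatures, which would explain the "No block devices detected" message here. A non-destructive way to check what is still on the disk, run on the OSD unit:

# Dry run: list any filesystem/LVM signatures still present on the disk (nothing is erased)
sudo wipefs -n /dev/sdb
# Show whether the old ceph LV still sits on top of the disk
lsblk /dev/sdb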

Can someone please help me with this problem?

It’s not normal for the mapper devices to still be there - the remove-disk action should’ve taken care of it.

Speaking of which, what was the output from the remove-disk action? Did it show any errors, or did it notify that (at least) 1 OSD was removed?
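
If you still have the action id from that run, the result can usually be pulled back afterwards, e.g.:

juju show-action-output <action-id>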

I don't have the output, but the OSD was removed; it is not listed in the OSD tree anymore. Should I manually remove /dev/mapper/ceph--df9c0ef1--fbd7--46a0--bb7f--c54b44d2ed54-osd--block--df9c0ef1--fbd7--46a0--bb7f--c54b44d2ed54?

Should I try this? https://thomas.trocha.com/blog/remove-device-mappers-from-disk/
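
For reference, if the manual route is taken, going through LVM is usually tidier than dmsetup alone. A rough sketch, assuming the volume group name decoded from the mapper path above (verify it with vgs first, and run the commands on the OSD unit):

# Confirm the stale volume group name first
sudo vgs
# Remove the logical volume, the volume group, and the physical volume label
sudo lvremove ceph-df9c0ef1-fbd7-46a0-bb7f-c54b44d2ed54
sudo vgremove ceph-df9c0ef1-fbd7-46a0-bb7f-c54b44d2ed54
sudo pvremove /dev/sdX   # replace /dev/sdX with the disk that backed the old OSD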

You should be able to do that, but I worry that if it wasn't removed by the commands issued by the charm, there may be something else going on.

Something else like what? What do you think is the worst-case scenario?

There is also something strange: osd.2 does not show up in sudo ceph osd tree after removing the disk, as you can see in the original post, but if I run ceph-volume lvm list, osd.2 is still listed.

geoint@MAAS-01:~$ juju ssh ceph-osd/3 sudo ceph-volume lvm list

====== osd.2 =======

[block] /dev/ceph-91050339-f650-49a6-ae51-7214be569906/osd-block-91050339-f650-49a6-ae51-7214be569906

  block device              /dev/ceph-91050339-f650-49a6-ae51-7214be569906/osd-block-91050339-f650-49a6-ae51-7214be569906
  block uuid                MBLs8z-1ggn-TEx5-cBDB-UGRu-oaPF-Rmwd1x
  cephx lockbox secret      
  cluster fsid              bf2cbfe8-9b3c-11ec-81ad-3fc481233260
  cluster name              ceph
  crush device class        None
  encrypted                 0
  osd fsid                  91050339-f650-49a6-ae51-7214be569906
  osd id                    2
  osdspec affinity          
  type                      block
  vdo                       0
  devices                   /dev/sdb

Connection to 10.2.101.136 closed.
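
As far as I understand, ceph-volume lvm list works off the LVM tags stored on the logical volumes, so a stale LV like this keeps showing up even though the OSD is gone from the CRUSH map. The tags it reads can be inspected with, for example:

sudo lvs -o lv_name,vg_name,lv_tags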

What about ceph-volume lvm zap, as shown here? https://www.ntnu.no/wiki/display/skyhigh/Re-creating+an+OSD
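
For reference, doing that directly with ceph-volume on the OSD unit would look roughly like the line below; the --destroy flag also removes the stale VG/LV and the PV label, which is what clears the /dev/mapper entry:

# On the OSD host: wipe the old LVM metadata and Ceph data from the disk
sudo ceph-volume lvm zap /dev/sdb --destroy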

I solved it by running

juju run-action --wait ceph-osd/3 zap-disk devices='/dev/sdb' i-really-mean-it=yes

and then

juju run-action --wait ceph-osd/3 add-disk osd-devices='/dev/sdb'
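
After that, checking the OSD tree from the monitor again should show the disk back in the cluster:

juju ssh ceph-mon/leader sudo ceph osd tree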