Cinder-ceph: adding a second Ceph backend

Hi all,
I have a small PoC OpenStack cluster deployed with Ceph using Juju.
Now I would like to test adding new servers with ceph-osd units backed by SSDs.

1. During the initial deployment I used HDDs for the Ceph OSDs.

2. If I now add a new server with SSDs by running juju add-unit ceph-osd, those SSDs may be added to the existing pools (I am not sure), and then I may not be able to see pure SSD performance.

3. So I would like to add the new SSD nodes as ceph-osd units backing a new pool, named something like ssdpool, and create a new VM on that pool.
Cinder should then have two backends for VM creation: the default cinder-ceph backend that existed before adding the new nodes, and the new ssdpool backend.

4. Please let me know how I can do this using Juju.

Hi @agrahar, in order to get Cinder to map to an additional RBD pool, you can deploy an additional unit of the cinder-ceph charm with a different name (such as cinder-ceph-ssd) and set a different rbd-pool-name config to your SSD pool. Note that a new cinder-ceph deployment will also need all of the required relations to be established.
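The steps above could be sketched roughly as follows. This is a minimal sketch, not a tested recipe: the application name cinder-ceph-ssd and the pool name ssdpool are assumptions, and the exact relation endpoints in your model may differ, so check them against your existing cinder-ceph deployment.

```shell
# Deploy a second, named instance of the cinder-ceph subordinate charm.
# "cinder-ceph-ssd" is an assumed application name; choose your own.
juju deploy cinder-ceph cinder-ceph-ssd

# Point the new backend at a dedicated RBD pool ("ssdpool" is assumed).
juju config cinder-ceph-ssd rbd-pool-name=ssdpool

# Re-establish the same relations the original cinder-ceph has.
juju add-relation cinder-ceph-ssd cinder
juju add-relation cinder-ceph-ssd ceph-mon
```

You can compare against the relations of the existing backend with juju status cinder-ceph --relations and mirror them for the new application.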

Hi @szeestraten,
Thank you for the quick response. I understand it like this; please correct me if I am wrong.
cinder-ceph is a subordinate charm.
1. juju deploy cinder-ceph cinder-ceph-ssd-backend

2. juju add-relation, for example:
juju add-relation cinder-ceph-ssd-backend cinder

3. juju config cinder-ceph-ssd-backend rbd-pool-name=ssdpool

4. May I know how I can create a new pool called ssdpool with the new OSDs?

Hi @agrahar, those steps look pretty good. You’ll also have to add a relation between cinder-ceph-ssd-backend and ceph-mon, I think. You can double-check the relations on your current deployment with juju status cinder-ceph --relations to see what is required.

Regarding creating a new pool and CRUSH rules in Ceph, that is more of a Ceph thing, so I recommend reading the Ceph docs and perhaps doing it directly with the Ceph CLI. However, the ceph-mon charm has most of the actions you want, so make sure to look at its README.
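For reference, one hedged way to do this with the Ceph CLI (Luminous or later) is to rely on OSD device classes, so the SSD OSDs are selected automatically. The rule and pool names below, and the PG count of 64, are assumptions to adapt to your cluster:

```shell
# Create a replicated CRUSH rule that only selects OSDs with the
# "ssd" device class, with host as the failure domain.
ceph osd crush rule create-replicated ssd-rule default host ssd

# Create the pool on that rule and enable it for RBD use.
ceph osd pool create ssdpool 64 64 replicated ssd-rule
ceph osd pool application enable ssdpool rbd
```

Newly added SSD OSDs are normally auto-classed as ssd (check with ceph osd tree), so the new pool should land only on them while the existing HDD-backed pools are untouched.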

Hi @szeestraten,
Thank you, I will look into it.