Question: how to customize the failure domain from Ceph charms

Hello,

I am deploying a 5-node cluster with 4 OSDs per node using the Ceph charms (ceph-osd, ceph-mon, ceph-radosgw, ceph-fs). The objective is to use pools with erasure coding (k=5, m=2) in order to maximize usable storage without giving up too much reliability.

Since this configuration splits each object into 7 chunks (k + m = 5 + 2) but I only have 5 hosts, I am aware that the failure domain should be “osd” instead of “host”; otherwise CRUSH cannot place all 7 chunks in distinct failure domains. However, I cannot find a way to set the failure domain to “osd” in the charm settings. Is this possible?

The only related option I can find is the boolean “customize-failure-domain” (in the ceph-osd and ceph-mon charms), which lets me either inherit the zone map generated by MAAS (true) or force “host” as the failure domain (false).
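For reference, this is the only knob I see (a sketch assuming the application is named ceph-osd); it just toggles between the two behaviours above and does not accept “osd”:

```sh
# Only switches between inheriting the MAAS zone map (true)
# and forcing "host" as the failure domain (false); "osd" is not an option.
juju config ceph-osd customize-failure-domain=true
```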

Could someone clarify how I can set the failure domain to “osd” from the charms themselves?

P.S. If this is not the right place for these questions, I would appreciate it if you could point me to the appropriate channel.

Thank you so much

Hello Santiago,

The ceph-mon charm has an action that can help you: create-erasure-profile. With it, you can specify both the erasure coding parameters (k and m) and the failure domain (“osd” in your case).
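A minimal sketch, assuming Juju 2.x action syntax (on Juju 3.x the equivalent is `juju run ceph-mon/leader ...`) and an application named ceph-mon; the profile name ec-52-osd is just an example, and you can confirm the exact parameter names with `juju actions ceph-mon --schema`:

```sh
# Create an EC profile with k=5, m=2 and per-OSD chunk placement.
juju run-action --wait ceph-mon/leader create-erasure-profile \
    name=ec-52-osd \
    failure-domain=osd \
    data-chunks=5 \
    coding-chunks=2
```

A pool created against this profile will then spread its 7 chunks across OSDs rather than hosts.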

Thank you very much for the response, Luciano. This action will be useful for creating new pools.

However, I gather from your answer that there is no way to customize the failure domain of the pools that the charms create automatically (e.g. default.rgw.buckets.data for radosgw). Is that correct?

I was looking to customize the profile of that particular pool because I don’t see a way for RADOS Gateway to host buckets in other pools. Am I forced to use that pool, or am I missing something?

For the default.rgw.buckets.data pool, you can set the ec-profile-name config option on the ceph-radosgw charm to select the erasure code profile (and thus the failure domain). It can point to a profile created by the ceph-mon action above, or to one you created yourself through ceph commands.
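For example (a sketch that reuses the hypothetical ec-52-osd profile from above and assumes the application is named ceph-radosgw; pool-type tells the charm to create its data pool as erasure-coded):

```sh
# Point the radosgw data pool at the pre-created EC profile.
juju config ceph-radosgw pool-type=erasure-coded ec-profile-name=ec-52-osd

# Alternatively, create the profile yourself on a MON node with the ceph CLI
# before setting ec-profile-name:
#   ceph osd erasure-code-profile set ec-52-osd k=5 m=2 crush-failure-domain=osd
```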

Thank you very much, Luciano. Creating the profile first and then deploying radosgw with the ec-profile-name configuration option is the perfect solution.

With this I get the expected result without any subsequent manual configuration.

Have a nice weekend!