Deploying the ceph-mon charm into containers is the correct thing to do. As you noted, both charms want to write to the ceph.conf file, which causes problems because it is a shared resource and each charm effectively wants to own the file.
To do this with the ceph charms, you’ll want to use juju network spaces. You create your network spaces within MAAS and juju will adopt them (you may need to run juju reload-spaces, but it’s best to iron out the network spaces you need before bootstrapping juju). Once the network spaces are configured in both MAAS and juju, you can use them in your charm deployment. When you do, juju will create the necessary bridges on the host to provide an interface to the container for that network.
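As a rough sketch of that workflow, assuming a MAAS CLI profile named admin and placeholder fabric/VLAN IDs and space names (adjust all of these to your environment):

```shell
# Create the space in MAAS and attach the relevant VLAN to it.
maas admin spaces create name=my-public-space
maas admin vlan update 0 0 space=my-public-space

# If the controller was bootstrapped before the spaces existed,
# ask juju to re-read them from MAAS, then verify.
juju reload-spaces
juju spaces
```

juju spaces should list my-public-space (and any others) before you reference them in a deployment.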
The ceph-mon and ceph-osd charms have two network space bindings, which represent the public network and the cluster network. These are conveniently called ‘public’ and ‘cluster’ respectively.
To use the spaces in your deployment, pass the --bind flag to the juju deploy command, e.g.

juju deploy -n 3 --to lxd:0,lxd:1,lxd:2 --config source=cloud:focal-victoria ceph-mon --bind "public=my-public-space cluster=my-cluster-space"
where my-public-space and my-cluster-space are the names of the network spaces defined in both juju and MAAS.
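The ceph-osd side takes the same bindings. A sketch of the matching commands (the unit count and device names here are assumptions, not a recommendation):

```shell
# Deploy ceph-osd onto the bare-metal machines with the same space bindings.
juju deploy -n 3 ceph-osd \
  --config source=cloud:focal-victoria \
  --config osd-devices='/dev/sdb /dev/sdc' \
  --bind "public=my-public-space cluster=my-cluster-space"

# Relate the OSDs to the monitors.
juju add-relation ceph-osd:mon ceph-mon:osd
```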
You’ll want to use the bindings for both the ceph-osd and ceph-mon charms. I find it convenient (and less error prone) to use a bundle file and iterate on that instead of issuing commands by hand. It’s a one-shot deployment that stores all of your config, network bindings, etc.
One of the available sample bundles unfortunately does not use spaces and simply uses a flat networking arrangement. However, adding the spaces to the configuration is not all that difficult.
The following is an example which might be applied to your environment. However, please do note that I have not actually deployed this configuration; it is intended only as an example. You will need to ensure the osd-devices configuration matches the devices in your environment.
series: focal
machines:
  "0": {}
  "1": {}
  "2": {}
applications:
  ceph-osd:
    charm: ceph-osd
    num_units: 3
    to: ["0", "1", "2"]
    bindings:
      public: my-public-space
      cluster: my-cluster-space
    options:
      source: cloud:focal-victoria
      osd-devices: /dev/sdb /dev/sdc ...
  ceph-mon:
    charm: ceph-mon
    num_units: 3
    to: ["lxd:0", "lxd:1", "lxd:2"]
    bindings:
      public: my-public-space
      cluster: my-cluster-space
    options:
      source: cloud:focal-victoria
relations:
  - - ceph-osd:mon
    - ceph-mon:osd
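Assuming you save the bundle as ceph.yaml (the filename is just an example), deploying it is a single command, and juju status will then show the mons being placed into their containers:

```shell
juju deploy ./ceph.yaml
juju status ceph-mon ceph-osd
```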
Hope this helps.