Juju skips addresses when using LXD for ceph-mon

I’m installing OpenStack by following https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/install-openstack.html.
My Ceph cluster has a public network and a cluster network, so when I install ceph-osd, all 3 nodes (not LXD containers) have 3 network interfaces that I configured with MAAS.
When installing ceph-mon with
juju deploy -n 3 --to lxd:0,lxd:1,lxd:2 --config source=cloud:focal-victoria ceph-mon

Juju creates 3 LXD containers with only one network interface, so the monitors can’t form a cluster because they never see the OSDs.
If I deploy ceph-mon on the same machines as the OSDs, the ceph.conf file is rewritten by ceph-mon, removing the public network and cluster network settings.

Deploying the ceph-mon charm into containers is the correct thing to do. As you noted, both charms will want to write to the ceph.conf file, which causes problems since it is a common resource and each charm effectively wants to own it.

To do this with the Ceph charms, you’ll want to use Juju network spaces. You create your network spaces within MAAS and Juju will adopt them (though you may need to run a juju reload-spaces command; it’s best to iron out the network spaces you need prior to bootstrapping Juju). Once you have the network spaces configured in both MAAS and Juju, you can use them in your charm deployment. When using network spaces in your deployment, Juju will create the necessary bridges on the host to provide an interface to the container for that network.

The ceph-mon and ceph-osd charms have two network space bindings, which represent the public network and the cluster network. These are conveniently called ‘public’ and ‘cluster’ respectively.
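For illustration, the per-application binding stanza in a bundle looks like this (a sketch only; my-public-space and my-cluster-space are placeholder space names, not names from your environment):

```yaml
ceph-mon:
  charm: cs:ceph-mon
  num_units: 3
  bindings:
    public: my-public-space    # Ceph public network endpoint
    cluster: my-cluster-space  # Ceph cluster (replication) network endpoint
```

The same two endpoint names apply to ceph-osd, so both applications can share identical bindings.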

To use the spaces in your deployment, you can use the --bind flag to the juju deploy command, i.e.

juju deploy -n 3 --to lxd:0,lxd:1,lxd:2 --config source=cloud:focal-victoria ceph-mon --bind "public=my-public-space cluster=my-cluster-space"

where my-public-space and my-cluster-space are the names of your network spaces defined in both Juju and MAAS.

You’ll want to use the bindings for both the ceph-osd and ceph-mon charms. I find it convenient (and less error prone) to use a bundle file and iterate over this instead of issuing commands by hand. It’s a one-shot deployment that you can use to store all of your config, network bindings, etc.

One of the sample bundles available unfortunately does not use spaces and simply uses a flat networking arrangement. However, adding the spaces to the configuration is not too difficult.

The following is an example which might be applied to your environment. However, please do note that I have not actually deployed this configuration and it is intended to be an example. You will need to ensure your osd-devices configuration matches what you have in your environment.

    series: focal
    applications:
      ceph-mon:
        charm: cs:ceph-mon
        num_units: 3
        bindings:
          public: my-public-space
          cluster: my-cluster-space
        options:
          expected-osd-count: 3
          source: cloud:focal-victoria
        to:
        - lxd:0
        - lxd:1
        - lxd:2
      ceph-osd:
        charm: cs:ceph-osd
        num_units: 3
        bindings:
          public: my-public-space
          cluster: my-cluster-space
        options:
          osd-devices: /dev/sdb /dev/sdc ...
          source: cloud:focal-victoria
        to:
        - '0'
        - '1'
        - '2'
    relations:
    - - ceph-osd:mon
      - ceph-mon:osd

Hope this helps.

Thank you a lot.
I tried your configuration but it doesn’t work for me.
I have defined two MAAS spaces: test-cloud-cluster and test-cloud-public and I see them correctly:
juju spaces
Name                Space ID  Subnets
alpha               0
cloud-cluster       3
cloud-public        2
invalsi             1
test-cloud-cluster  4
test-cloud-public   5

but, running
juju deploy -n 1 --config config.yaml --constraints tags=compute ceph-osd --bind "public=test-cloud-public cluster=test-cloud-cluster"

juju status
Model           Controller       Cloud/Region          Version  SLA          Timestamp
openstack-test  maas-controller  invalsi-maas/default  2.8.7    unsupported  17:32:52+01:00

App       Version  Status   Scale  Charm     Store       Rev  OS      Notes
ceph-osd           waiting  0/1    ceph-osd  jujucharms  306  ubuntu

Unit        Workload  Agent       Machine  Public address  Ports  Message
ceph-osd/0  waiting   allocating  0                               waiting for machine

Machine  State  DNS  Inst id  Series  AZ  Message
0        down        pending  focal       matching subnets to zones: cannot use space "alpha" as deployment target: no subnets

My config.yaml:

    osd-devices: /dev/vdb
    source: cloud:focal-victoria

Regarding the issues you had with the bundle, you’ll need to adapt it for your environment, with your space names, OSD devices, etc. Bundles are quite powerful and the documentation is worth a read: a bundle can encapsulate the various constraints, the network space bindings, config, and so on. I understand you’re just trying to get it deployed, but you might find it easier to iterate over a bundle and use that for deployment.

The ‘cannot use space “alpha” as deployment target: no subnets’ error seems to relate to this post, in that you need a default space binding as well. You should be able to specify this in the bundle with the space name being “”, and per that post there’s a model config parameter as well.
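As a sketch of that default binding (my-default-space is a placeholder and this is untested; it only illustrates the syntax), the bindings map gains an empty-string endpoint that covers every endpoint not explicitly bound:

```yaml
ceph-osd:
  charm: cs:ceph-osd
  num_units: 1
  bindings:
    "": my-default-space         # default binding for all otherwise-unbound endpoints
    public: test-cloud-public
    cluster: test-cloud-cluster
```

The model-level setting mentioned above can reportedly be set via juju model-config, but check the documentation for your Juju version before relying on it.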

An additional point based on the config.yaml that you’ve posted is that the ceph-public-network and ceph-cluster-network charm options will override the network bindings that you specify. It would work in your case as the networks are the same, but you shouldn’t really need to use those config options if you are using spaces.
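In other words, when spaces carry the network placement, the charm config can stay minimal. A sketch of a config.yaml in the same flat form you posted, leaving network selection entirely to the bindings (device name as in your original):

```yaml
# config.yaml (sketch): rely on --bind for network placement
osd-devices: /dev/vdb
source: cloud:focal-victoria
# ceph-public-network / ceph-cluster-network deliberately omitted;
# the spaces given to --bind determine those networks
```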