Deploy Openstack-base bundle by Juju

I am trying to deploy the latest openstack-base bundle. Everything works except the Ceph cluster: only one ceph-osd comes up. I have tried several times, always with the same result. I also tried creating the OSDs manually with ceph-volume; that worked, but they were not integrated with the ceph-mon cluster.
juju status:

Unit                         Workload  Agent   Machine  Public address                        Ports              Message
ceph-mon/0                   waiting   idle    0/lxd/0  2a01:e0a:21d:f8e0:216:3eff:feeb:127                      Monitor bootstrapped but waiting for number of OSDs to reach expected-osd-count (3)
ceph-mon/1*                  waiting   idle    1/lxd/0  2a01:e0a:21d:f8e0:216:3eff:fe1a:83f1                     Monitor bootstrapped but waiting for number of OSDs to reach expected-osd-count (3)
ceph-mon/2                   waiting   idle    2/lxd/0  2a01:e0a:21d:f8e0:216:3eff:fe44:6d6                      Monitor bootstrapped but waiting for number of OSDs to reach expected-osd-count (3)
ceph-osd/0                   waiting   failed  0                                            Incomplete relation: monitor
ceph-osd/1*                  active    idle    1                                            Unit is ready (1 OSD)
ceph-osd/2                   waiting   failed  2                                            Incomplete relation: monitor
ceph-radosgw/0*              blocked   idle    0/lxd/1  2a01:e0a:21d:f8e0:216:3eff:fef1:c5b2  80/tcp             Missing relations: mon
cinder/0*                    active    idle    1/lxd/1  2a01:e0a:21d:f8e0:216:3eff:fe6b:32a   8776/tcp           Unit is ready
  cinder-ceph/0*             waiting   idle             2a01:e0a:21d:f8e0:216:3eff:fe6b:32a                      Incomplete relations: ceph
  cinder-mysql-router/0*     active    idle             2a01:e0a:21d:f8e0:216:3eff:fe6b:32a                      Unit is ready
glance/0*                    active    idle    2/lxd/1  2a01:e0a:21d:f8e0:216:3eff:fe54:6dcb  9292/tcp           Unit is ready
  glance-mysql-router/0*     active    idle             2a01:e0a:21d:f8e0:216:3eff:fe54:6dcb                     Unit is ready
keystone/0*                  active    idle    0/lxd/2  2a01:e0a:21d:f8e0:216:3eff:fe63:2b72  5000/tcp           Unit is ready
  keystone-mysql-router/0*   active    idle             2a01:e0a:21d:f8e0:216:3eff:fe63:2b72                     Unit is ready
mysql-innodb-cluster/0       active    idle    0/lxd/3  2a01:e0a:21d:f8e0:216:3eff:fe7c:578d                     Unit is ready: Mode: R/O
mysql-innodb-cluster/1*      active    idle    1/lxd/2  2a01:e0a:21d:f8e0:216:3eff:fe1a:6ce                      Unit is ready: Mode: R/W
mysql-innodb-cluster/2       active    idle    2/lxd/2  2a01:e0a:21d:f8e0:216:3eff:fe87:ff47                     Unit is ready: Mode: R/O
neutron-api/0*               active    idle    1/lxd/3  2a01:e0a:21d:f8e0:216:3eff:fef2:dd7d  9696/tcp           Unit is ready
  neutron-api-plugin-ovn/0*  active    idle             2a01:e0a:21d:f8e0:216:3eff:fef2:dd7d                     Unit is ready
  neutron-mysql-router/0*    active    idle             2a01:e0a:21d:f8e0:216:3eff:fef2:dd7d                     Unit is ready
nova-cloud-controller/0*     active    idle    0/lxd/4  2a01:e0a:21d:f8e0:216:3eff:fefd:5ebd  8774/tcp,8775/tcp  Unit is ready
  nova-mysql-router/0*       active    idle             2a01:e0a:21d:f8e0:216:3eff:fefd:5ebd                     Unit is ready
nova-compute/0               waiting   idle    0                                            Incomplete relations: storage-backend
  ntp/2                      active    idle                            123/udp            chrony: Ready
  ovn-chassis/2              active    idle                                               Unit is ready
nova-compute/1*              waiting   idle    1                                            Incomplete relations: storage-backend
  ntp/0*                     active    idle                            123/udp            chrony: Ready
  ovn-chassis/0*             active    idle                                               Unit is ready
nova-compute/2               waiting   idle    2                                            Incomplete relations: storage-backend
  ntp/1                      active    idle                            123/udp            chrony: Ready
  ovn-chassis/1              active    idle                                               Unit is ready
openstack-dashboard/0*       active    idle    1/lxd/4  2a01:e0a:21d:f8e0:216:3eff:fe18:a9b5  80/tcp,443/tcp     Unit is ready
  dashboard-mysql-router/0*  active    idle             2a01:e0a:21d:f8e0:216:3eff:fe18:a9b5                     Unit is ready
ovn-central/0                active    idle    0/lxd/5                         6641/tcp,6642/tcp  Unit is ready
ovn-central/1*               active    idle    1/lxd/5  2a01:e0a:21d:f8e0:216:3eff:fe9a:ae00  6641/tcp,6642/tcp  Unit is ready (leader: ovnnb_db, ovnsb_db northd: active)
ovn-central/2                active    idle    2/lxd/3  2a01:e0a:21d:f8e0:216:3eff:fee3:9a20  6641/tcp,6642/tcp  Unit is ready
placement/0*                 active    idle    2/lxd/4  2a01:e0a:21d:f8e0:216:3eff:fefe:d014  8778/tcp           Unit is ready
  placement-mysql-router/0*  active    idle             2a01:e0a:21d:f8e0:216:3eff:fefe:d014                     Unit is ready
rabbitmq-server/0*           active    idle    2/lxd/5  2a01:e0a:21d:f8e0:216:3eff:fe0e:291b  5672/tcp           Unit is ready
vault/0*                     active    idle    0/lxd/6  2a01:e0a:21d:f8e0:216:3eff:fe23:1c55  8200/tcp           Unit is ready (active: true, mlock: disabled)
  vault-mysql-router/0*      active    idle             2a01:e0a:21d:f8e0:216:3eff:fe23:1c55                     Unit is ready

What is the output of:

juju config ceph-osd osd-devices

At least one of these devices needs to exist on both machines 0 and 2.
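If the configured paths don't match real disks on those machines, you can repoint the charm; as a sketch (the `/dev/sdb` device name is a placeholder, check `lsblk` on the machines first):

```shell
# Show which block devices the ceph-osd charm is configured to use
juju config ceph-osd osd-devices

# Point the charm at an existing, unused disk on each machine
# (/dev/sdb is a placeholder for whatever lsblk shows on your hardware)
juju config ceph-osd osd-devices='/dev/sdb'

# Then ask the stuck units to retry their hooks
juju resolved ceph-osd/0
juju resolved ceph-osd/2
```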

Also check the logs:

juju debug-log --replay --no-tail --include=ceph-osd/0
juju debug-log --replay --no-tail --include=ceph-osd/2

The failed status on those two ceph-osd machines is concerning. Can you include a version of juju status that shows the rest of the output? I suspect the machines didn’t come up correctly, or possibly that they can’t connect to the Juju controller.

I tried this bundle on several servers and it’s still the same problem. This is another fresh deployment, but the problem remains the same.
On the Telemetry bundle this is the result.
juju --version: 2.8.1-focal-amd64


From the status output I can see that from Juju’s perspective some of the units have an IPv4 address and some have an IPv6 address.

This is probably because Juju is connected to a substrate with a dual stacked IPv4/IPv6 network.

While enabling IPv4 and IPv6 is supported by most of the charm payloads it gets a bit complicated when used on the network for communication between the Juju controller and machines/units.

To make the initial deployment easier I would recommend using one or the other, and then trying more advanced configurations.

Once you have done a successful deployment you can explore spaces and see if you can use them to accomplish what you want. There is a spaces overlay that can be used with the base bundle that you need to adapt to the spaces you configure.
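As a sketch, a spaces overlay binds application endpoints to named spaces; something like the following, where `internal-space` and `public-space` are placeholders for spaces you have defined in MAAS and Juju:

```yaml
# Hypothetical fragment of a spaces overlay for the base bundle.
# Replace internal-space/public-space with your own Juju spaces.
applications:
  keystone:
    bindings:
      "": internal-space        # default binding for all endpoints
      public: public-space
      admin: internal-space
```

You would then deploy with `juju deploy ./bundle.yaml --overlay ./spaces-overlay.yaml`.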

I fixed the problem: I migrated to the snap-installed Juju, on the current build, and the deployment works. I will try the others again and move on.

Thank you so much.



I want to deploy bundle/openstack-base-70 with Juju and the deployment is stuck with a “certificates missing” warning. Any ideas?

Hi @bnicolae

You need to Initialise and unseal the vault :innocent:

After that everything should start moving again.
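The usual sequence looks roughly like this (the unit IP, unseal keys and root token below are placeholders for the values your own deployment produces):

```shell
# Point the vault CLI at the vault unit (substitute the unit's real IP)
export VAULT_ADDR="http://<vault-unit-ip>:8200"

# Generate the unseal keys and the initial root token
vault operator init -key-shares=5 -key-threshold=3

# Unseal with any 3 of the 5 keys printed above
vault operator unseal <unseal-key-1>
vault operator unseal <unseal-key-2>
vault operator unseal <unseal-key-3>

# Hand a token to the charm so Juju can manage vault from here on
juju run-action --wait vault/0 authorize-charm token=<root-token>
```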


I thought the vault was the one with status “waiting” that will be deployed inside OpenStack. Anyway, I installed vault on the MAAS machine and tried to initialise it, but I get a “connection refused” error. I’m following the documentation from Ubuntu and it doesn’t say anything about Vault.

Ah, you’re simply being a little too hasty.
The fact that there are missing relations is actually fine; it’s because the LXD containers that run vault and the other services haven’t been created yet.
Once those units are started, the charms will be installed there too and you’ll see the status of the blocked units change.

If you manually installed vault on the MAAS server, your configuration won’t know about it because they’re unrelated. @timClicks wrote an awesome post about what Juju relations are and the purpose they serve. … It’s literally like magic :smiley:

Thanks Sean,

Actually, after 24h the status is the same: all the units are waiting for machines, but the machines themselves are OK. I started installing OpenStack manually, component by component. ceph-osd, nova-compute and neutron-gateway installed successfully; they were all installed directly on the OS via the APT repo. But when I tried putting neutron-api inside an LXD container, it’s the same situation: waiting for machine. I think something is preventing applications from spawning inside LXD, because in the previous setup NTP and ceph were the only ones that installed OK, both outside LXD.

neutron-api/2 waiting allocating 0/lxd/1 waiting for machine

I see. If it were me, at the 24h mark I would’ve just destroyed the model and rebuilt it.

I did that. I suspect the proxy settings. It was a real challenge to bootstrap the controller behind a proxy, but I did it. Now I need to figure out what happens when I run `juju deploy --to lxd` in order to troubleshoot the issue.

I think I figured it out. I’ve added all the proxy settings to model-defaults and now the containers are OK. Vault is here and is active. I initialised and unsealed it, but there are still a few components complaining about certificates (neutron-api-plugin-ovn, ovn-chassis, ovn-central). The dashboard is OK; I can access it.
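For anyone hitting the same thing, the model-defaults proxy keys look roughly like this (the proxy URL is a placeholder for your own proxy):

```shell
# Set proxy defaults for all new models (proxy.example.com is a placeholder).
# juju-http-proxy/juju-https-proxy cover agent and container traffic;
# apt-http-proxy/apt-https-proxy cover package installs inside the machines.
juju model-defaults \
    juju-http-proxy=http://proxy.example.com:3128 \
    juju-https-proxy=http://proxy.example.com:3128 \
    apt-http-proxy=http://proxy.example.com:3128 \
    apt-https-proxy=http://proxy.example.com:3128 \
    juju-no-proxy=localhost,127.0.0.1,::1
```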

‘certificates’ awaiting server certificate data, ‘ovsdb’ incomplete

juju run-action --wait vault/0 generate-root-ca

Check this out here :slight_smile:


Yes, that worked. My problem was solved by following the Certificate Lifecycle Management documentation. Thanks.