Where are bindings "alpha" coming from?

Sorry for bumping this topic, but this post and the Launchpad bug mentioned in it were the only two sources of information I could find about this issue. I struggled with it for many hours yesterday and thought I could shed some light on what helped me out.

Note: I did not have this problem with MAAS 2.7.x, but I recently upgraded to MAAS 3.0.

Short history: I have a 1.5-year-old Juju model running OpenStack on a MAAS cloud, deployed entirely without spaces on MAAS 2.7.x. During the last year (after the initial deploy) we added spaces to MAAS and set default-space in the model config, as some charms needed it to be configured properly. As a result, most endpoints of the charms from the initial deploy are still bound to the alpha space, which has no subnets:

$ juju spaces
Name                          Space ID  Subnets         
alpha                         0                         
ceph-cluster-network-space    4         192.168.x.x/24
external01-ha                 5         10.x.x.x/20    
oam-space                     1         172.x.1.x/24   
                                        172.x.2.x/24   
                                        172.x.3.x/24   
os-public-network-space       3         10.y.y.y/24   

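For reference, setting the model's default space (mentioned above) is just a model-config change. As I found out, though, it only affects what gets deployed afterwards; it does not touch the bindings of applications that are already deployed. The space name oam-space below is simply the one from my environment:

$ juju model-config default-space              # show the model's current default space
$ juju model-config default-space=oam-space    # point future deployments at a space that has subnets
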
Typically, an application deployed a long time ago would look like this:

$ juju show-application nova-compute
nova-compute:
  charm: nova-compute
  series: bionic
  channel: stable
  constraints:
    tags:
    - kraken-compute
  principal: true
  exposed: false
  remote: false
  endpoint-bindings:
    "": alpha
    amqp: alpha
    ceph: alpha
    ceph-access: alpha
    cloud-compute: alpha
    cloud-credentials: alpha
    compute-peer: alpha
    ephemeral-backend: alpha
    image-service: alpha
    internal: alpha
    ironic-api: alpha
    lxd: alpha
    migration: os-public-network-space
    neutron-plugin: alpha
    nova-ceilometer: alpha
    nrpe-external-master: alpha
    secrets-storage: alpha

The effect is that it is impossible to run juju add-unit on this application, regardless of the value of juju model-config default-space. It also did not help to change only the application's "default endpoint" ("") while leaving all the others untouched, which can be done like this (you may need the --force flag for this command):

$ juju bind nova-compute oam-space amqp=alpha ceph=alpha ceph-access=alpha cloud-compute=alpha cloud-credentials=alpha compute-peer=alpha ephemeral-backend=alpha image-service=alpha internal=alpha ironic-api=alpha lxd=alpha migration=os-public-network-space neutron-plugin=alpha nova-ceilometer=alpha nrpe-external-master=alpha secrets-storage=alpha
Leaving endpoint in "os-public-network-space": migration
Leaving endpoints in "alpha": amqp, ceph, ceph-access, cloud-compute, cloud-credentials, compute-peer, ephemeral-backend, image-service, internal, ironic-api, lxd, neutron-plugin, nova-ceilometer, nrpe-external-master, secrets-storage

The solution was to repeat the command above, but this time changing all the endpoints, one by one, to spaces with subnets available to my machines. Be careful here, though! Since this was a production system, I first ran e.g. juju show-unit nova-compute/10, inspected the IP addresses used by each endpoint, and then bound every endpoint to the space it was actually using, so as not to accidentally change the space/subnet used by an existing relation.
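
In my case that boiled down to something like the sketch below (the space names are the ones from my environment, taken from the desired end state shown further down; as mentioned, you may need --force, and you should verify every endpoint against show-unit before rebinding):

# check which address (and hence which space/subnet) each endpoint actually uses
$ juju show-unit nova-compute/10

# rebind the default endpoint and every named endpoint to a space that has subnets,
# keeping migration where it already was
$ juju bind nova-compute oam-space amqp=oam-space ceph=oam-space ceph-access=oam-space cloud-compute=oam-space cloud-credentials=oam-space compute-peer=oam-space ephemeral-backend=oam-space image-service=oam-space internal=oam-space ironic-api=oam-space lxd=oam-space migration=os-public-network-space neutron-plugin=oam-space nova-ceilometer=oam-space nrpe-external-master=oam-space secrets-storage=oam-space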

After rebinding all the endpoints to the correct spaces:

$ juju show-application nova-compute
nova-compute:
  charm: nova-compute
  series: bionic
  channel: stable
  constraints:
    tags:
    - kraken-compute
  principal: true
  exposed: false
  remote: false
  endpoint-bindings:
    "": oam-space
    amqp: oam-space
    ceph: oam-space
    ceph-access: oam-space
    cloud-compute: oam-space
    cloud-credentials: oam-space
    compute-peer: oam-space
    ephemeral-backend: oam-space
    image-service: oam-space
    internal: oam-space
    ironic-api: oam-space
    lxd: oam-space
    migration: os-public-network-space
    neutron-plugin: oam-space
    nova-ceilometer: oam-space
    nrpe-external-master: oam-space
    secrets-storage: oam-space

…it was finally possible to run juju add-unit nova-compute. In fact, it was also possible to resume deployment for machines stuck in the cannot use space "alpha" as deployment target state, so for the unit below I could simply run juju retry-provisioning 68, which completed the addition of unit nova-compute/10:

$ juju status nova-compute/10
Model              Controller             Cloud/Region         Version  SLA          Timestamp
maas-kraken-model  kraken-lxd-controller  maas-kraken/default  2.9.16   unsupported  09:38:26+01:00

App           Version  Status   Scale  Charm         Store       Channel  Rev  OS      Message
nova-compute           waiting    0/1  nova-compute  charmstore  stable   334  ubuntu  waiting for machine

Unit             Workload  Agent       Machine  Public address  Ports  Message
nova-compute/10  waiting   allocating  68                              waiting for machine

Machine  State  DNS  Inst id  Series  AZ  Message
68       down        pending  bionic      matching subnets to zones: cannot use space "alpha" as deployment target: no subnets

$ juju retry-provisioning 68
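
If more machines are stuck with the same error, they are easy to spot in the machine listing and can be retried the same way (this is just a plain grep over the status output, nothing Juju-specific):

# list any other machines whose provisioning failed on the alpha space
$ juju machines | grep 'cannot use space'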