Juju, bundles and network spaces (openstack in particular)

Hej,

I hope somebody can point me in the right direction, or provide some insights.

I’m trying to deploy OpenStack, at the moment a clean Wallaby. I’d like to segment the traffic between the services, and in my view ‘network spaces’ should help here.

However, I’ve not been able to find a lot of documentation on this.

Some charms are good at stating what ‘bindings’ they have, others not so much. Then I found some discussion on this discourse (but of course that tab was lost to the browser history), where it seemed that you could ‘bind’ a space to any interface, or maybe it was a relation, I can’t remember.

Now, even if the charms would state what bindings they have/use, how can we know how to tie them together via these spaces? Is it the ‘relations’ section in the bundle that holds this information?

Yet another question that comes to mind: how are the public addresses on a unit selected, and by what? At the moment I’m doing another deployment with a single network space (bindings here: base-bindings.yaml - Pastebin.com ). That space maps to a private subnet. The hosts (MAAS provided) have several NICs and VLANs associated with them: definitely one that matches the selected network space, but also others, and one in particular with public IPs. As the model is deployed with space X (and corresponding subnet X), why/what picks that some units should grab IPs from the subnet that holds public IPs? Is that done by default if public IPs are available?

Anyways, those are my thoughts. Hope that somebody can provide some enlightenment.

Have a nice Midsommar!

BR/Patrik

Great question, @pal-arlos!

Spaces, bindings, and relations are all interrelated as you’ve found. Let’s see if I can provide a base description of how they work and try to answer some of your questions.

Spaces are defined in juju as one or more L3 networks (MAAS subnets) that provide a logical grouping of contiguous networks, with each space serving a separate function (like openstack-internal traffic, external (API and/or FIP) traffic, overlay SDN traffic, ceph-replication traffic, or admin/operating-system traffic).

Note that you’ll find the space is defined at the L2 (VLAN) layer within MAAS, but you must define an IP on the NIC/bond/VLAN device in the physical host configuration in MAAS for juju to recognize that the server has a “leg” in a given space. (This is due to the slight L2 vs. L3 disconnect between juju and MAAS, and stems from juju supporting cloud providers other than MAAS.)

You can see which subnets are defined for each space within juju by running juju spaces once you’ve bootstrapped a controller on the MAAS cloud.

To determine which spaces are supported by a charm, if there is no specific documentation, look at its metadata.yaml: each “requires,” “provides,” and “extra-bindings” key is a network-binding point for which you can specify the space/subnet used for communicating with remote units related over that interface.
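As a sketch, a hypothetical charm’s metadata.yaml might declare binding points like this (the interface names below are illustrative, not from any particular charm):

```yaml
# Illustrative metadata.yaml excerpt (hypothetical charm)
provides:
  shared-db:              # remote units consume our database over this relation
    interface: mysql-shared
requires:
  identity-service:       # we consume credentials from keystone over this relation
    interface: keystone
extra-bindings:
  public:                 # carries no relation data; exists purely as a network binding point
```

Each of the keys shared-db, identity-service, and public can then appear under a bindings: section in your bundle.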

For instance, if I want my nova-cloud-controller to talk to mysql on the “internal” space, I’d define in the bundle:

applications:
  mysql:
    bindings:
      "": default-space
      shared-db: internal-space
  nova-cloud-controller:
    bindings:
      "": default-space
      shared-db: internal-space
relations:
- - nova-cloud-controller:shared-db
  - mysql:shared-db

Note that this example does not include charm names, other critical options, or bindings; it is just a demonstration to highlight spaces/bindings.

In the above example, when the mysql application encounters a relation-joined/relation-changed on its shared-db interface, it will query its interfaces (based on charm code) for an IP address on the unit (metal/lxd/kvm) that is within the internal-space subnet(s) and announce that particular IP over the relation to the remote unit (such as nova-cloud-controller in this case).

So, for each relation in your bundle, you can define which subnets’ IPs will be used for communication between the two (or more) applications within that relation based on the bindings used on both ends. It’s important that you set the bindings of both the provides and requires end of the relation to the same space, as demonstrated above.

This hopefully clearly answers your questions about “how to tie them together via spaces?” It is the bindings definition within the applications section of the bundle that defines which IP a charm will announce to related applications on that relation’s interface.

Regarding how to define your public IPs: the OpenStack charms use three standard extra-bindings, named “public,” “internal,” and “admin,” which let you choose the space (and therefore the IPs) used for each of these three service endpoints when registering with Keystone’s service endpoint database. These three bindings also relate directly to the optional “os-public-hostname,” “os-internal-hostname,” and “os-admin-hostname” configuration options on OpenStack API charms if you’re utilizing DNS and virtual IPs for the service.
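As a sketch, an overlay for an API charm such as keystone could bind the three endpoints like this (the space names are assumptions, matching whatever you’ve defined in MAAS/juju):

```yaml
applications:
  keystone:
    bindings:
      "": default-space         # fallback for any binding not listed below
      public: external-space    # endpoint registered as the public URL
      internal: internal-space  # endpoint registered as the internal URL
      admin: admin-space        # endpoint registered as the admin URL
```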

If not all of your servers have interfaces on all spaces, it is important to tag your hosts within MAAS and use those tags with “to:” directives in your bundle, to ensure that services needing “public” IP connectivity are deployed to machines with public interfaces. Typically this looks like the following (assuming machines tagged “compute” don’t have external network IPs, and those tagged “control” do):

machines:
  "0":
    constraints: tags=compute
  "1":
    constraints: tags=control
applications:
  neutron-gateway:
    num_units: 1
    to:
    - "1"
  nova-compute:
    num_units: 1
    to:
    - "0"
  nova-cloud-controller:
    num_units: 1
    to:
    - lxd:1
    bindings:
      "": default-space
      public: external-space

For a list of which relations are made in a deployment and which would be available for assignment of bindings, deploy your cloud with one space, and then run juju export-bundle to view all bindings of each application. Also, running juju status --relations will give you a list of relations and the “interfaces” (which are the same as the binding names) used for those relations on each application.

I hope this explanation has cleared up some of your questions.

Cheers,
-Drew


Hej @afreiberger,

thank you for your answer, it made things much clearer.

juju export-bundle and juju status --relations were useful. So a ‘modus operandi’ when dealing with a new bundle would be: do a clean single-space (default) deployment, then run export-bundle to identify the bindings. Place these applications and their corresponding bindings in a file, and then adapt it so as to set the bindings on the corresponding subnets/spaces. Redeploy the bundle, but now provide the ‘bindings’ file as an argument:

juju deploy bundle.yaml --overlay bindingsFile.yaml
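For example, a bindingsFile.yaml along these lines (application and space names here are just examples):

```yaml
# bindingsFile.yaml (illustrative; use the applications from your export-bundle output)
applications:
  keystone:
    bindings:
      "": space-x
  mysql-innodb-cluster:
    bindings:
      "": space-x
```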

In my test case, I noticed that despite deploying on only ‘one’ space,

juju deploy stable/openstack-base/bundle.yaml --overlay stable/overlay/openstack-base-spaces-overlay.yaml

a second is also used: the alpha space. AFAIK, this is the default space juju creates. As there were applications that were deployed but not explicitly defined in the bundle, they get deployed with the ‘default’ space, i.e. alpha. This is interesting, as my alpha space isn’t associated with any subnet.

Using this information, I tried another bundle deployment, this time explicitly stating all bindings and replacing any usage of alpha with the same space as all other applications. Two out of three times this worked; when it failed, it was because one of the three mysql-innodb-cluster nodes could not reach the cluster…

For the successful cases, i.e. bundle deployed, OpenStack seems to be operational (verified by adding images, creating a network and router, and launching VMs). In the case where the host had an IP on a space NOT used by the deployment, provider networks do not work. If that space and IP are removed, provider networks work and VMs can be reached via floating IPs.

This is a bit strange: from the explanation it’s clear that we need to ensure that the host has an IP on the space that we want the deployment to use, but it also seems that we have to remove any other spaces to prevent the deployment (model?) from picking IP addresses from those spaces?

I guess that some questions are juju/network-space specific and others are perhaps charm-specific, but from a user’s perspective it’s not clear where that border lies.

BR/Patrik

You will need to make sure that the bond or bridge you assign to your provider trunk does not have an IP address on it, as the traffic will be picked up directly by OVS before it hits the IP layer. That may have caused your provider-network-with-IP-address complication. Provider trunks are defined by the data-port and bridge-mappings options of the neutron-openvswitch, neutron-gateway, or ovn-chassis charms, not via space bindings.
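For reference, a minimal sketch of the provider-trunk configuration on neutron-gateway (the bridge and NIC names are assumptions for your environment):

```yaml
applications:
  neutron-gateway:
    options:
      bridge-mappings: physnet1:br-ex  # maps the neutron provider network label to the OVS bridge
      data-port: br-ex:eno2            # eno2 carries the provider trunk; leave it without an IP address
```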

And yes, the alpha space is used as a placeholder by juju when bindings are not defined. Each application can have a default binding defined within the bundle with:

applications:
  myapp:
    bindings:
      "": default-space

Ultimately, the way the charms reference the bindings is that there are charm-helpers routines that allow the charmer to call network-get <interface-name>, such as network-get public or network-get shared-db, which will return the IP addresses on the given “unit” that are associated with that space.

You can get a sense for this by running juju run --unit keystone/0 'network-get public'
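The output is YAML along these lines (the addresses and interface names below are made up purely for illustration):

```yaml
# Illustrative network-get output; real values depend on your deployment
bind-addresses:
- mac-address: "52:54:00:12:34:56"
  interface-name: eth0
  addresses:
  - hostname: ""
    address: 10.20.0.15
    cidr: 10.20.0.0/24
ingress-addresses:
- 10.20.0.15
egress-subnets:
- 10.20.0.15/32
```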

The implementation and use of the bindings is very specific to the charm being deployed, and it is up to the charm authors to utilize the bindings appropriately within the charm code.