ovn-chassis: nodes bonding configuration

Hi

We were trying to configure a Charmed Openstack cluster with 9 machines:

  • Machine 1 is the head node (MAAS + JUJU)
  • Machine 2,3,4 will be used as the control-plane for core Openstack charms
  • Machine 5,6,7,8 are intended to be used as pure worker nodes (hypervisors): nova-compute + ovn-chassis charms

We are having trouble deciding how to design the network architecture of their interfaces: the idea is to have 2 separate bonds on each of them, to separate management traffic from data traffic.

Currently the interfaces are configured as:

  • bond0
    • VLAN MGMT (bond0.mgmt) with an OVS bridge (br-mgmt) on top of it: internal MGMT network
    • VLAN GUEST (bond0.guest) with an OVS bridge (br-guest) on top of it: internal VM network
    • VLAN EXT (bond0.ext) with an OVS bridge (br-ext) on top of it: provider network
    • VLAN API (bond0.api) with an OVS bridge (br-api) on top of it: internal Openstack network
  • bond1
    • VLAN DATA (bond1.data) with an OVS bridge (br-data) on top of it: internal DATA backend network

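For reference, the bond0 part of the layout above could be expressed in netplan roughly like this. This is only a sketch: the physical NIC names (ens1f0/ens1f1), the VLAN IDs (100/200), and the bond mode are placeholders, and in a MAAS-managed deployment this would normally be configured through MAAS rather than written by hand:

```yaml
network:
  version: 2
  ethernets:
    ens1f0: {}          # placeholder NIC names
    ens1f1: {}
  bonds:
    bond0:
      interfaces: [ens1f0, ens1f1]
      parameters:
        mode: 802.3ad   # assumed bond mode
  vlans:
    bond0.100:          # MGMT VLAN; ID 100 is a placeholder
      id: 100
      link: bond0
    bond0.200:          # GUEST VLAN; ID 200 is a placeholder
      id: 200
      link: bond0
  bridges:
    br-mgmt:
      interfaces: [bond0.100]
      openvswitch: {}   # makes this an OVS bridge instead of a Linux bridge
    br-guest:
      interfaces: [bond0.200]
      openvswitch: {}
```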
Our main concern is whether that approach is correct, or whether it's better to have only 1 OVS bridge with all the VLANs on it.

The other question is how to configure the provider network so that the VMs can later be reached from the outside and get access to the Internet.

In the example architectures we found, the traffic from a VM is supposed to follow this path:

VM > Compute Node (br-guest) > (br-guest) Network Node (br-ext) > Internet

Based on the documentation we found, the recommendation is that the bundle for neutron-api + ovn-chassis should be configured this way:

  neutron-api:
    charm: ch:neutron-api
    channel: 2023.2/stable
    num_units: 1
    to:
    - lxd:1
    options:
      neutron-security-groups: true
      flat-network-providers: physnet
      enable-ml2-port-security: true
  ovn-chassis:
    charm: ch:ovn-chassis
    channel: 23.09/stable
    options:
      ovn-bridge-mappings: physnet:br-ext
      bridge-interface-mappings: br-ext:bond0
    bindings:
      "": mgmt

Based on that config, we have some questions:

  • How is a “Network Node” supposed to be configured? Are all the machines configured the same way?
  • Should ovn-chassis be applied to every node in the cluster, or just some of them?
  • If it’s applied to every node, do we need to add some extra configuration to provide connectivity to the edge router?

Sorry if these are “noobish” questions, but we are struggling a little bit to understand how the bundle is transformed into the operational architecture.

Best regards!

Not entirely the same here, but we have bond0 for the OVS bridge and bond1 for Ceph. Otherwise similar to your setup above:

  flat-network-providers: physnet1
  ovn-bridge-mappings: physnet1:br-ex
  bridge-interface-mappings: br-ex:bond0

For a dedicated network gateway host, you can use the ovn-dedicated-chassis charm on Charmhub.

There is also the prefer-chassis-as-gw boolean config option to set certain chassis as the preferred gateways; otherwise they are all used.
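Put together, a dedicated gateway chassis in the bundle might look something like this. Treat it as a sketch: the application name, channel, unit count, and bridge/physnet names are assumptions carried over from the snippets above, not a tested configuration:

```yaml
  ovn-dedicated-chassis:
    charm: ch:ovn-dedicated-chassis
    channel: 23.09/stable      # assumed to match the ovn-chassis channel above
    num_units: 2               # placeholder; size to your gateway needs
    options:
      ovn-bridge-mappings: physnet1:br-ex
      bridge-interface-mappings: br-ex:bond0
      prefer-chassis-as-gw: true   # mark these units as preferred gateways
```

With this in place, north/south traffic is steered through the dedicated chassis, while the ovn-chassis units on the hypervisors keep handling east/west traffic.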