In re-reading https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/app-ovn.html#external-connectivity, I think I may have made a mistake in my deployment.
My current ovn-chassis configuration is as follows:
```yaml
ovn-chassis:
  charm: cs:ovn-chassis-3
  channel: stable
  options:
    bridge-interface-mappings: br-provider:bond0
    ovn-bridge-mappings: dcfabric:br-provider
relations:
- - ovn-chassis:nova-compute
  - nova-compute:neutron-plugin
- - ovn-chassis:certificates
  - vault:certificates
- - ovn-chassis:ovsdb
  - ovn-central:ovsdb
```
Currently every nova-compute node has bond0 present, with no Layer3 addresses assigned to the bond, as described in: https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/configure-bridge.html
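For reference, a minimal netplan sketch of that interface layout (the file name, bond member interfaces, and bonding mode here are placeholders; every site's will differ):

```yaml
# /etc/netplan/01-bond0.yaml -- hypothetical example
network:
  version: 2
  ethernets:
    enp3s0f0: {}
    enp3s0f1: {}
  bonds:
    bond0:
      interfaces: [enp3s0f0, enp3s0f1]
      parameters:
        mode: 802.3ad
      # note: no 'addresses:' stanza -- the bond stays Layer2-only so the
      # ovn-chassis charm can plug it into br-provider
```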
The guide warns:

> It is not necessary nor recommended to add mapping for external Layer3 networks to all chassis. Doing so will create a scaling problem at the physical network layer that needs to be resolved with globally shared Layer2 (does not scale) or tunneling at the top-of-rack switch layer (adds complexity) and is generally not a recommended configuration.
Since a number of compute nodes have already been deployed, what is the best path forward?
Is it as simple as updating the ovn-chassis subordinate charm configuration to use the MAC addresses of some of the existing bond0 interfaces, or are there gotchas to watch for when changing bridge-interface-mappings in a live environment?
```
juju config ovn-chassis \
    bridge-interface-mappings='br-provider:00:00:5e:00:00:41 br-provider:00:00:5e:00:00:42 br-provider:00:00:5e:00:00:43'
```

(Note the whole mapping list is a single space-separated value; backslash continuations inside the single-quoted string would be passed through literally.)
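If I go this route, I assume I could verify which chassis actually picked up the bridge with something like the following (the `juju run --application` form shown is Juju 2.x; Juju 3.x renamed it `juju exec`):

```shell
# Check on each compute unit whether br-provider exists and has bond0 attached.
# Only units whose bond0 MAC matches one of the configured mappings should
# report the port; the others should error out or show no bridge.
juju run --application ovn-chassis 'ovs-vsctl list-ports br-provider'
```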
Alternatively, does it make sense to use the `prefer-chassis-as-gw` configuration option of the ovn-chassis charm?
If this is the preferred mechanism, how do you scope a subordinate charm to a specific subset of machines?
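My current understanding (an assumption on my part, not something I have tested) is that you scope it by deploying a second, differently named ovn-chassis application and relating only that one to a principal charm placed on the gateway machines. The `nova-compute-gw` principal below is a hypothetical second application targeted at that subset of machines:

```shell
# Hypothetical sketch: a separate ovn-chassis application for gateway duty,
# scoped to whatever machines the 'nova-compute-gw' principal lands on.
juju deploy cs:ovn-chassis-3 ovn-chassis-gateway \
    --config prefer-chassis-as-gw=true \
    --config bridge-interface-mappings='br-provider:bond0' \
    --config ovn-bridge-mappings='dcfabric:br-provider'
juju relate ovn-chassis-gateway:nova-compute nova-compute-gw:neutron-plugin
juju relate ovn-chassis-gateway:certificates vault:certificates
juju relate ovn-chassis-gateway:ovsdb ovn-central:ovsdb
```

Is that the intended pattern, or is there a way to scope a single subordinate application?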