ovn-dedicated-chassis docs - index

Overview

The ovn-dedicated-chassis charm provides the Open Virtual Network (OVN) local controller, Open vSwitch Database and Switch. It is used in conjunction with the ovn-central charm.

Open vSwitch bridges for integration, and for external Layer2 and Layer3 connectivity, are managed by the charm.

On successful deployment the unit will be enlisted as a Chassis in the OVN network.

The ovn-dedicated-chassis charm is a principal charm that sets up a software gateway on a dedicated host. Alternatively, the subordinate ovn-chassis charm can be used.

Note: The OVN charms are supported starting with OpenStack Train.

Usage

The OpenStack Base bundle gives an example of how you can deploy OpenStack and OVN with Vault to automate certificate lifecycle management.

OVN makes use of Public Key Infrastructure (PKI) to authenticate and authorize control plane communication. The charm therefore requires a Certificate Authority to be present in the model as represented by the certificates relation.

Refer to Open Virtual Network (OVN) in the OpenStack Charms Deployment Guide for details, including deployment steps.


Network spaces

This charm supports the use of Juju network spaces.

By binding the ovsdb endpoint you can influence which interface will be used for communication with the OVN Southbound DB as well as overlay traffic.

juju deploy ovn-dedicated-chassis --bind "ovsdb=internal-space"

By binding the data extra-binding you can influence which interface will be used for overlay traffic.

juju deploy ovn-dedicated-chassis --bind "data=overlay-space"

Port configuration

Chassis port configuration is composed of a mapping of physical network names to bridge names (ovn-bridge-mappings) and of individual interfaces to bridge names (bridge-interface-mappings). There must be a match in both configuration options before the charm will configure bridges and interfaces on a unit.
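As a sketch, the two options might be set together like this; the physical network name `physnet1`, bridge name `br-provider`, and interface `bond0` are illustrative placeholders for values from your own environment:

```shell
# Map the physical network name 'physnet1' to the OVS bridge 'br-provider',
# and attach the host interface 'bond0' to that bridge. The charm only acts
# when a bridge name appears in both options.
juju config ovn-dedicated-chassis \
    ovn-bridge-mappings="physnet1:br-provider" \
    bridge-interface-mappings="br-provider:bond0"
```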

The physical network name can be referenced when the administrator programs the OVN logical flows, either by talking directly to the Northbound database, or by interfacing with a Cloud Management System (CMS).
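To illustrate how the physical network name is referenced, an administrator talking directly to the Northbound database could attach a logical switch to `physnet1` via a localnet port. This is a hand-run sketch (the switch and port names are illustrative); in practice a CMS such as Neutron issues the equivalent calls:

```shell
# Create a logical switch and a localnet port that references the
# physical network name 'physnet1' from ovn-bridge-mappings.
ovn-nbctl ls-add public
ovn-nbctl lsp-add public ln-public
ovn-nbctl lsp-set-type ln-public localnet
ovn-nbctl lsp-set-addresses ln-public unknown
ovn-nbctl lsp-set-options ln-public network_name=physnet1
```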

Networks for use with external Layer3 connectivity should have mappings on chassis located in the vicinity of the datacenter border gateways. Having two or more chassis with mappings for a Layer3 network will have OVN automatically configure highly available routers with liveness detection provided by the Bidirectional Forwarding Detection (BFD) protocol.

Chassis without a direct mapping to an external Layer3 network will forward traffic through a tunnel to one of the chassis acting as a gateway for that network.

Note: It is neither necessary nor recommended to add mappings for external Layer3 networks to all chassis. Doing so creates a scaling problem at the physical network layer that must be resolved with either a globally shared Layer2 (does not scale) or tunneling at the top-of-rack switch layer (adds complexity).

Networks for use with external Layer2 connectivity should have mappings present on all chassis with potential to host the consuming payload.

Deferred service events

Operational or maintenance procedures applied to a cloud often lead to the restarting of various OpenStack services and/or the calling of certain charm hooks. Although normal, such events can be undesirable due to the service interruptions they can cause.

The deferred service events feature provides the operator the choice of preventing these service restarts and hook calls from occurring, which can then be resolved at a more opportune time.

See the Deferred service events page in the OpenStack Charms Deployment Guide for an in-depth treatment of this feature.
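As a sketch, assuming this charm exposes the deferred-events config option and actions common to the OpenStack charms (enable-auto-restarts, show-deferred-events, restart-services):

```shell
# Prevent automatic service restarts; restarts are deferred instead
juju config ovn-dedicated-chassis enable-auto-restarts=false

# Later, at a convenient time, inspect and resolve deferred events on a unit
juju run-action --wait ovn-dedicated-chassis/0 show-deferred-events
juju run-action --wait ovn-dedicated-chassis/0 restart-services deferred-only=true
```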

Bugs

Please report bugs on Launchpad.

For general questions please refer to the OpenStack Charm Guide.


hi, I’m trying to use ovn-dedicated-chassis on openstack 2023.2 in order to set up a dedicated software gateway.

The problem is that traffic from/to any internal network doesn’t pass through the external gateway, and from any virtual machine inside any internal network I cannot manage to ping the internal gateway’s IP. From the provider network I manage to ping only the external gateway’s IP.

I have only a compute node where I setup a dedicated chassis with these relations:

Integration provider Requirer Interface Type Message

ovn-central:ovsdb ovn-chassis-gw:ovsdb ovsdb regular
vault:certificates ovn-chassis-gw:certificates tls-certificates regular

and I’ve seen some errors related to the bridge br-int as soon as I finished the setup with Juju:

# ovs-vsctl show

02e22c0c-e807-4bea-b9b8-216de8f3fa83

Manager "ptcp:6640:127.0.0.1"
    is_connected: true
Bridge br-provider
    fail_mode: standalone
    datapath_type: system
    Port br-provider
        Interface br-provider
            type: internal
    Port patch-provnet-c6da9795-6859-450e-bd51-9b6e1ed84533-to-br-int
        Interface patch-provnet-c6da9795-6859-450e-bd51-9b6e1ed84533-to-br-int
            type: patch
            options: {peer=patch-br-int-to-provnet-c6da9795-6859-450e-bd51-9b6e1ed84533}
    Port bond0
        Interface bond0
            type: system
Bridge br-int
    fail_mode: secure
    datapath_type: system
    Port tapfa291087-30
        Interface tapfa291087-30
            error: "could not open network device tapfa291087-30 (No such device)"
    Port br-int
        Interface br-int
            type: internal
    Port tap47621598-0b
        Interface tap47621598-0b
            error: "could not open network device tap47621598-0b (No such device)"
    Port patch-br-int-to-provnet-c6da9795-6859-450e-bd51-9b6e1ed84533
        Interface patch-br-int-to-provnet-c6da9795-6859-450e-bd51-9b6e1ed84533
            type: patch
            options: {peer=patch-provnet-c6da9795-6859-450e-bd51-9b6e1ed84533-to-br-int}
ovs_version: "3.2.1"

Where did I fail?

Hi @macchese, The errors you are seeing don’t ring any bells with me. We might need a bit more detail.

  1. Do you see any errors in the ovn-dedicated-chassis logs? (juju debug-log)
  2. Could you share the juju bundle that you are deploying, to shed a bit more light on the overall architecture of your deployment?
  3. If the output of juju status isn’t green across the board, it might be useful to see that as well.

Just out of curiosity, is there a particular reason to use OVN Dedicated Chassis instead of subordinate OVN Chassis? You mentioned that you want the chassis to be deployed on the hypervisor node and for that, the subordinate charm seems like a better solution. I don’t expect it would solve your current problem, but I thought I’d mention it.

Hi Martin, the bundle is openstack-2023.2 and yes, I’ve switched to ovn-chassis and it works. So just out of curiosity: can ovn-dedicated-chassis coexist with a nova-compute without its own ovn-chassis? My plan was to have two nova-compute groups, where only one should also act as an OVN gateway using ovn-dedicated-chassis, but I’m not sure if nova-compute works the same without ovn-chassis

Sorry for late reply, I must’ve missed the notification.

It’s a bit strange that just switching to ovn-chassis solved your issue, as those two charms are almost identical. But to your questions:

can ovn-dedicated-chassis coexist with a nova-compute without its own ovn-chassis?

I never tried this setup myself, so I can’t speak from experience. I don’t think you’d be able to properly relate a standalone chassis to nova-compute, as they are meant to be related via the neutron-plugin interface, and that’s a subordinate relation, not a relation between two principal charms.

The more I think about this, if the ovn-dedicated-chassis was not properly related with nova-compute, it could explain the issues you were originally seeing, and also why switching to ovn-chassis solved them.

My plan was to have two nova-compute groups, where only one should also act as an OVN gateway using ovn-dedicated-chassis

You can achieve this simply via the bridge-interface-mappings config option, by using a MAC address instead of an interface name to define the bridge for the gateway (external connectivity). That way, only the machine that has an interface with a matching MAC address will be set up as a gateway.
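A sketch of that approach; the MAC address and bridge name below are placeholders for values from your own gateway node:

```shell
# Only the unit whose interface carries this MAC address will attach it to
# br-provider and be configured as a gateway chassis; other units in the
# same application are left alone.
juju config ovn-dedicated-chassis \
    bridge-interface-mappings="br-provider:00:16:3e:4f:12:ab"
```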
