Neutron-gateway config

Hi, I would like to get deeper knowledge about the neutron-gateway charm. I am installing OpenStack using MAAS and Juju, although I had to make some modifications to the bundle.yaml file from https://jaas.ai/openstack-base. Instead of deploying openstack-dashboard to a container, I deployed it on an additional bare metal machine with a public IP (for the dashboard) and a private IP (for booting via MAAS DHCP). The problem is that I want to access my instances using a floating IP accessible from outside the cluster. My goal is to assign a floating IP from the 158.129.212.192/26 subnet to an instance. For now all my services except openstack-dashboard are on a private subnet, and all bare metal nodes have 2 NICs. Look here:
Basic Network Diagram (1).pdf (91.8 KB)

I suppose that if the neutron-gateway data-port option points to a network interface which has a public IP from 158.129.212.192/26, I will be able to add a public floating IP address?

This is exactly what I do in my cloud.

I set the value of data-port (string) on the neutron-gateway charm to the physical interface in the box which will carry the VM traffic. In my case the physical interface name is enp2s0f1, so the value is set to br-ex:enp2s0f1.
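
If it helps, setting it is just a one-line config change, roughly like this (enp2s0f1 is the NIC name on my boxes, yours will differ):

```
juju config neutron-gateway data-port="br-ex:enp2s0f1"
```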

I then create a public network inside of OpenStack with a network type of vlan, and use the VLAN ID associated with my block of public IPs.
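
On the OpenStack side the create looks roughly like this; 1051 is the VLAN ID from my diagram, and physnet1 is just the provider label I'm assuming here, it has to match the charm's bridge mapping:

```
openstack network create public-net \
  --external \
  --provider-network-type vlan \
  --provider-physical-network physnet1 \
  --provider-segment 1051
```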

This is how I’ve cabled my OpenStack. I’ve tried to match your IPs to give you my context.

openstack.devzero.co.za.pdf (107.0 KB)

Thank you for your response :slight_smile: Did you configure the enp2s0f1 interface to use VLAN 1051, or assign a static IP address to the enp2s0f1 interface via the MAAS web UI? I assume “host” in your diagram is a physical node which has the neutron-gateway charm deployed. Am I right? So, as I understand it, you didn’t show the other hosts (MAAS nodes) in your diagram.

Yes, the host is a physical machine with neutron-gateway. (All 6 of my machines are cabled exactly the same, that’s why I only put one.)

In MAAS, enp2s0f1 has no configuration.

The VLAN tag gets put on from an Open vSwitch perspective once you log into OpenStack and create a public network.

If your switch isn’t configured for VLAN tagging, you can set the network type to flat when you create the public network, but I would need to know more about your actual network topology.
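
In the flat case the only thing that changes is the provider type, something along these lines (physnet1 again just an assumed provider label):

```
openstack network create public-net \
  --external \
  --provider-network-type flat \
  --provider-physical-network physnet1
```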

Both of my VLANs, with the 158.129.212.192/26 and 10.1.1.0/24 subnets, are untagged, as I see on the MAAS subnets page.

What kind of network topology information do I need to give?

And one more question: once you add a new node in MAAS, do you let the MAAS controller know that your enp2s0f1 network interface’s subnet is the public VLAN with 158.129.212.192/26?

In my example above I personally didn’t manually configure the eth1 network interface of a MAAS node as being on the 158.129.212.192/26 subnet.

I’m going to make an assumption that every machine is cabled to one unmanaged switch?

Is the router with the public block plugged into the same switch or is it a completely separate fabric?

Subnet 158.129.212.192/26 is on fabric 0 and subnet 10.1.1.0/24 is on fabric 1. For now every machine is connected to the same unmanaged 10.1.1.0/24 switch, except the Juju node and the MAAS controller. They both have one NIC connected to the public internet on 158.129.212.192/26 and another NIC on the 10.1.1.0/24 internal network switch, so they do have public internet access. I was planning to connect one more node to MAAS, which will host the neutron-gateway application, in the following manner: one NIC (eth0) connected to the 10.1.1.0/24 switch (internal network access) and another NIC (eth1) connected to the 158.129.212.192/26 switch (public network access). Moreover, eth1 will carry the neutron-gateway configuration data-port=br-ex:eth1. By doing so I hope to be able to assign public floating IPs from the 158.129.212.192/26 subnet to my instances. Does it make sense?
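
In bundle terms I imagine it as an overlay along these lines (just a sketch; machine placement is omitted, and eth1 is the name I expect MAAS to give that NIC):

```yaml
# overlay.yaml - applied with: juju deploy ./bundle.yaml --overlay ./overlay.yaml
applications:
  neutron-gateway:
    options:
      data-port: br-ex:eth1   # eth1 = NIC cabled to the 158.129.212.192/26 switch
```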

In my own opinion, I have some doubts whether it will actually work.
Doesn’t neutron-openvswitch need to talk to neutron-gateway across the same fabric?

If you put neutron-gateway on your router fabric and then have the nova-compute Open vSwitches on a different fabric, the network guy in me says something doesn’t add up. :thinking:

I will try to deploy everything as I said and then write back about how it ended up :slight_smile:


Okay. So that didn’t work out :smiley: What would you suggest to do about my network architecture if I have two separate switches: one for booting MAAS nodes (on the 10.1.1.0/24 subnet) and another one (on the 158.129.212.192/26 public subnet, with the router)? I am unable to connect all my nodes (each with 2 NICs) to the 158.129.212.192/26 subnet switch, because I do not have enough physical ports.

1st prize would be to get your hands on a managed switch. Run VLANs.

2nd prize, get another unmanaged switch and plug the provider router and the servers’ second interfaces into that one. You can bring up a flat network on fabric 2.

Last place… you could try plugging the router into your current switch and have a flat external network intermingled with your PXE network. But it’s going to be hacky and ugly. Weird crap might start happening with MAAS issuing DHCP on the 10.x.x.x subnet…
You need more/better networking hardware, captain :black_flag: :skull_and_crossbones: :parrot: :laughing:
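
If you go the 2nd prize route, the rest of the external side (after the flat network create shown earlier in the thread) would look roughly like this; the gateway address and allocation pool are only guesses for that /26, so check them against your provider router:

```
openstack subnet create public-subnet \
  --network public-net \
  --no-dhcp \
  --subnet-range 158.129.212.192/26 \
  --gateway 158.129.212.193 \
  --allocation-pool start=158.129.212.200,end=158.129.212.250

openstack router create tenant-router
openstack router set tenant-router --external-gateway public-net
```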


Ok. I will keep that in mind, but what do you think if I commission a node and connect all nodes to the switches like that: eth0 goes to the 10.1.1.0/24 subnet switch and eth1 goes to the 158.129.212.192/26 switch. In MAAS, eth0 will be configured automatically as auto-assign, and eth1 of each node will be left unconfigured, but still connected to the 158.129.212.192/26 switch. Maybe then it will be possible to access instances from outside? Of course, by doing that my neutron-gateway charm will have the config data-port=br-ex:eth1, where eth1 is the interface I connect to the public internet. What about that?

Yeah sounds like 2nd prize case


Thank you very, very much. Finally I made it work by using your 2nd method :slight_smile:
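
For anyone landing here later: once the external network and subnet exist, the floating IP part is just the following (the network name, instance name and address follow the earlier sketches and are placeholders, adjust to yours):

```
# allocate a floating IP from the external network
openstack floating ip create public-net
# attach it to an instance
openstack server add floating ip my-instance 158.129.212.201
```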


Great to hear that you got it working :metal:

Just something to keep in mind:
While it may be working, it’s not production ready, because you have single points of failure. If any of the switches fails, you’ll either have a platform outage (meaning the OpenStack itself) or a tenant outage (meaning access to the VMs is lost).

But you’re at a great place to start playing and learning, and to start formulating ideas on how you would want your production setup to work (if that is what you’re eventually intending to do).
