I have worked a bit with MicroStack (multi-node deployment). It’s a very nice experience for getting started with OpenStack. Now I want to make the VM instances directly accessible from the local network (currently I can only access the default 10.20.20.0 network from the OpenStack hosts themselves). I assume there are two ways of doing this:
enable a flat provider network (so that the instances connect directly to the physical network instead of living in the virtual one)
allow routing from the local network into the internal virtual one
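For option 1, a minimal sketch of what the CLI side could look like once the provider network is wired up. The network names, the `physnet1` label, and the 192.168.1.0/24 range are assumptions for illustration, not MicroStack defaults:

```shell
# Hypothetical flat provider network, assuming a physical network
# mapping named "physnet1" exists in the Neutron/OVN config and that
# 192.168.1.0/24 is the local LAN (adjust to your environment):
microstack.openstack network create external-net \
    --external \
    --provider-network-type flat \
    --provider-physical-network physnet1

microstack.openstack subnet create external-subnet \
    --network external-net \
    --subnet-range 192.168.1.0/24 \
    --gateway 192.168.1.1 \
    --allocation-pool start=192.168.1.200,end=192.168.1.250 \
    --no-dhcp
```

`--no-dhcp` keeps Neutron from answering DHCP requests on a LAN where a physical router already does, and the allocation pool carves out a range the physical router should not hand out.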
Some years ago I played with Neutron, and I remember that quite a bit of configuration is necessary on the controller and compute hosts to allow flat networks. So to enable option 1) I should be able to adjust the Neutron configuration (ml2, Linux bridge agent, etc.). Yet I’m struggling a bit with this Snap package. I understand that the default snap configuration is read-only and I can only work in the user-data file system, but it seems my modifications in the user-data folder are not overriding the overlay filesystem in “/var/snap/…”. Of course I assume that I misunderstand something about Snap, but I’m seriously not able to find a clear explanation of how to modify /etc files in snaps and check which files are actually active.
Concerning option 2) (allow routing into the 10.20.20.0 network): since I struggled with modifying the MicroStack configuration, I thought I might simply be able to route into the network itself by setting some routes and allowing ip_forwarding. Yet it seems that the VM instances actually receive IPs from the 192.168.0.0 network by default, which is not reachable from the controller. Is there some routing configuration required to enable access to the VM instances?
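For option 2, the plumbing itself is just IP forwarding plus a static route, sketched here under the assumption that the OpenStack host’s LAN address is 192.168.1.50 (an illustrative value). Note this only gets packets as far as the host: traffic into the tenant network still has to pass the virtual router and security groups, so floating IPs on 10.20.20.0/24 are the realistic target, not the instances’ 192.168.0.0 fixed IPs.

```shell
# On the OpenStack host: allow it to forward packets between the
# physical NIC and the virtual networks.
sudo sysctl -w net.ipv4.ip_forward=1

# On another machine in the LAN (or on the LAN router): send traffic
# for the floating-IP network via the OpenStack host. 192.168.1.50 is
# an assumed address standing in for that host.
sudo ip route add 10.20.20.0/24 via 192.168.1.50
```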
Did you find a solution? We are trying to do the same: configure an external network with our own floating IPs. But we cannot find good documentation for doing this with MicroStack.
Unfortunately I’ve burned through multiple days trying to figure this out myself and also found the MicroStack docs really bad/incomplete. They don’t really explain how Neutron is now hidden inside of microstack.openstack as network. I was able to successfully create my OpenStack cluster using MicroStack (4 machines sharing a local network) but have the same problem when I try to deploy a Juju controller.
OpenStack successfully provisions the controller on one of my machines (a complete installation of Juju) but then fails with this type of error:
23:52:44 DEBUG juju.provider.common bootstrap.go:615 connection attempt for 192.168.2.224 failed: ssh: connect to host 192.168.2.224 port 22: No route to host
23:52:47 DEBUG juju.provider.common bootstrap.go:615 connection attempt for 10.0.4.7 failed: ssh: connect to host 10.0.4.7 port 22:
My physical router does DHCP, so instead of using a virtual one I was hoping to create my “public” network on the router itself. Then I’ll create a private (local) network with a virtual router between the two. All the documentation for linux_bridge is from Neutron (not MicroStack), so interpreting the instructions is very complicated.
The most noticeable problem is that any time I create a network, OpenStack creates a network:dhcp port in the configuration (even when I don’t enable DHCP), and its status is always DOWN.
Any thoughts / help? How can you create a “public” network leveraging your existing router and attach it to your OpenStack network? I tried the instructions above, but the only networks I see are the OVN ones.
We plan to make the networking config more flexible. Please take a look at the following bug and the spec that is linked to it and let us know if this would solve the issues you’ve run into.
I was able to successfully get two nodes up and migrate a VM from the control node to the compute node. However, when I attempted to bootstrap Juju, SSH hung and all of the networking broke down – the existing test VMs and their floating IPs became inaccessible.
I am using microstack (245), and I was following the link mtbales responded with. However, that caused me quite a few issues.
I fixed the “cinder” stuff, i.e. I couldn’t create volumes due to the back end being broken. For the LAN access, I created my “public” network, with the addition of setting my gateway and a route to the physical router. Then I ran ovs-vsctl add-port br-ex ens160 and it all worked. I ended up redoing that portion because I wanted my dashboard exposed to my local LAN as well.
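For anyone following along, here is a rough reconstruction of that sequence. The names, the 192.168.1.0/24 range, and the gateway are guesses standing in for the poster’s actual values; ens160 is the NIC they mention:

```shell
# External network/subnet pointing at the physical router as the
# gateway (names and addresses are illustrative):
microstack.openstack network create public-net \
    --external --provider-network-type flat \
    --provider-physical-network physnet1
microstack.openstack subnet create public-subnet \
    --network public-net \
    --subnet-range 192.168.1.0/24 \
    --gateway 192.168.1.1 \
    --no-dhcp

# Bridge the physical NIC onto the external OVS bridge. Depending on
# snap aliases, this may need to be invoked as microstack.ovs-vsctl.
sudo ovs-vsctl add-port br-ex ens160
```

Be aware that once ens160 is enslaved to br-ex, the host’s own IP on that NIC stops working unless it is moved onto the bridge — a common way to lose the host’s network connectivity.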
Hi Corey, I tried to install Microstack on a remote server and I’m able to access Horizon with the default IPs (10.20.20.0/24). However, when I try to configure the IPs before init as below:
I get “504 Gateway Time-out”, nginx/1.19.0, on login…
With “journalctl -xe”, I also see the following errors:
ovsdb-server[114020]: ovs|00035|socket_util|ERR|Dropped 23 log messages in last 57 seconds (most recently, 2 seconds ago>
ovsdb-server[114020]: ovs|00036|socket_util|ERR|6642:0.0.0.0: bind: Address already in use
ovsdb-server[114020]: ovs|00037|ovsdb_jsonrpc_server|ERR|Dropped 23 log messages in last 57 seconds (most recently, 2 se>
ovsdb-server[114020]: ovs|00038|ovsdb_jsonrpc_server|ERR|ptcp:6642:0.0.0.0: listen failed: Address already in use
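“Address already in use” on 6642 means something is already bound to the OVN southbound DB port — often a leftover ovsdb-server from an earlier init attempt. A quick check (assuming iproute2’s ss is available):

```shell
# Show which process is already listening on the OVN southbound
# port 6642:
sudo ss -tlnp | grep 6642
```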
ovs-vsctl is installed inside the microstack snap, but you will not find it with which ovs-vsctl due to how snaps bundle apps that are a part of the same snap package. The command to access ovs-vsctl is microstack.ovs-vsctl.
microstack.ovs-vsctl is the command because snap structures commands as namespace.app. If an app in the snap package shares the same name as the top-level namespace, it can be invoked as app (i.e. $ microstack). If an app does not share the same name as the top-level namespace, it must be invoked as namespace.app (i.e. $ microstack.ovs-vsctl).
To make the command just ovs-vsctl and not microstack.ovs-vsctl, just use the following command:
$ sudo snap alias microstack.ovs-vsctl ovs-vsctl
If you look through the snapcraft.yaml file for MicroStack, you will see TODO notes stating plans to request that these aliases be automatically set up at installation.
Thank you! That’s the explanation I was looking for re ovs-vsctl. I finally managed to get my VMs connected to the external network; the errors in the log don’t seem to affect it. The only problem is that I’ve now lost the internet connection on the host machine. Anyway, one step further — Linux networking is next.