Hi,
I really struggled with what to name this post. Hopefully the title captures the interest of those who might be able to help.
I’m trying to deploy our first production OpenStack with MaaS/Juju. I’ve successfully built a one-host (LXD localhost cloud) deployment, after spending a couple of weeks figuring out how to get MaaS and Juju to play nicely with our corporate proxy, and I’ve also successfully built a 5-host MaaS cloud with full HA (pretty much three of everything, using VIPs).
Now I’m trying to build up what will become a 21-host cloud, with potential to grow further, as a showcase for MaaS and Juju (and Ubuntu, as it happens), so I’m keen to get this working, and working well.
I’m working with a relatively limited number of routable IPs. I want to use a /24 for my “provider” network, which is fine (our application only needs a couple of routable IPs per instance), and a /26 for what I have dubbed our “service” network; this is where I want to provide access to MaaS and Juju (for us admins) and to the OpenStack APIs and dashboard (for both us and our users).
The problem is, with 21 hosts, 35 containers, and 9 VIPs, I already need 65 IPs, plus a few others (e.g. network gateway, MaaS rack controller, Juju controller), while a /26 only offers 62 usable addresses.
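To spell out the arithmetic (nothing here beyond the numbers above and basic subnetting):

```
echo $(( 21 + 35 + 9 ))         # 65 addresses needed before the gateway and controllers
echo $(( 2 ** (32 - 26) - 2 ))  # 62 usable addresses in a /26 (64 minus network/broadcast)
```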
My “solution”, so I thought, was to have a third network, let’s say “internal”, which carries all the hosts and containers, leaving only the VIPs on the routable “service” network.
A little extra background: I have configured MaaS to give each machine a bond across all four of its NICs (LACP, 802.3ad), and on top of that run three VLANs: one for “internal” (untagged/native), one for “service” (tagged), and one for “provider” (tagged). If I deploy a machine with MaaS, I can log in to it and see bond0, bond0.X, and bond0.Y, where X and Y are the “service” and “provider” VLAN IDs. I am using a non-routable subnet for the “internal” VLAN, and MaaS has given bond0 an appropriate IP.
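In case it helps, this is roughly how I’ve been checking that on a freshly deployed host (with X and Y standing in for the real VLAN IDs):

```
ip -br addr show bond0        # carries the IP MaaS assigned on the untagged "internal" subnet
ip -br addr show bond0.X      # "service" VLAN, tagged, no address
ip -br addr show bond0.Y      # "provider" VLAN, tagged, no address
cat /proc/net/bonding/bond0   # confirms 802.3ad/LACP with all four NICs as slaves
```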
However, if I deploy an application - say, mysql (percona-cluster) with mysql-ha (hacluster) - to an LXD container (necessary in order to safely deploy several applications to the same host), I run into difficulties: the container only has an interface bridged to “internal”, but I need an interface on “service” on which to place the VIP for the application.
I have configured spaces in MaaS with the relevant VLANs and subnets, and ensured Juju has the spaces too. I have experimented with the `--constraints "spaces=..."` and `--bind` options, and this results in Juju failing to deploy the container with `unable to setup network: host machine "X" has no available device in space(s) "service"` errors. It seems that since the host doesn’t have an IP address on the “service” interface (bond0.X), Juju can’t relate the interface to the space (is this the relevant code?). The connection between space and VLAN exists in MaaS, however, should Juju need it.
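For reference, the failing attempts look roughly like this (application name, machine number, and space names are placeholders matching the setup described above; I’ve tried various combinations of binding and constraints):

```
juju spaces                                  # confirm Juju has imported the MaaS spaces
juju deploy percona-cluster mysql \
    --to lxd:0 \
    --bind "internal" \
    --constraints spaces=internal,service    # the container needs a device in "service" too
```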
One experiment, adding a (static) IP to the bond0.X interface via MaaS, got a little further, but since MaaS isn’t providing DHCP on that subnet, LXD seemed to assign a random IP from that subnet to the container, which clashed with an existing address, and all hell broke loose! Regardless, this wastes IPs on the service subnet, defeating the purpose.
A second experiment, deploying directly to the machine rather than to a container, worked perfectly (the VIP was added to the “service” interface), so the charm itself appears to support this.
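Concretely, something along these lines works when targeting the machine itself (application names, the machine number, and the VIP below are placeholders; the VIP is an address on the “service” /26):

```
juju deploy percona-cluster mysql --to 3    # straight onto the machine, no container
juju deploy hacluster mysql-ha
juju config mysql vip=10.0.1.10             # placeholder address on the "service" subnet
juju add-relation mysql mysql-ha            # hacluster then brings the VIP up on bond0.X
```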
I don’t think what I’m attempting is a particularly wacky or unusual idea; separating service endpoints (on routable IPs) from internal infrastructure (on non-routable IPs) is a common security practice, and makes more efficient use of IP space.
I’ve read the Juju and IP Addresses development post, which seems to be in a related area, so if my use case isn’t currently possible, perhaps it could be considered as part of that work.
If anyone has any suggestions on how I can achieve my goal, I’d welcome them. For now, I’m having to proceed with a larger subnet (probably a /24) as a combined “service” and “internal” network, which should allow sufficient room for this deployment and some further expansion.
Thanks,
D.