Just passing through to say that a significant new document on deploying highly available OpenStack with Juju has been published on openstack.org. Feedback can be given on OFTC #openstack-charms and bugs can be raised here.
Great work @pmatulis and co! Very helpful to have it "on paper".
@pmatulis I think that you're seriously under-selling this material. It's excellent!
It's a fantastic description/guide of enabling multiple HA strategies for differing applications. It's really eye-opening to see Juju being this flexible.
One question from me. Which config setting does this sentence refer to?
> At a minimum, the configuration option `dns-ha` must be set to 'true' and at least one of `os-admin-hostname`, `os-internal-hostname`, or `os-public-hostname` must be set.
Would love to hear any comment from the @openstack-charmers team about the process of relating to `hacluster` (the "Virtual IP" strategy). Any tips? How hard is it to add the `vip` config option to a charm?
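For context, the pattern I mean looks roughly like this (just a sketch; the application names, unit count, and the 10.0.0.100 address are placeholders, not a tested recipe):

```
# Hypothetical "Virtual IP" strategy for keystone
juju deploy -n 3 keystone
juju deploy hacluster keystone-hacluster
juju config keystone vip=10.0.0.100
juju add-relation keystone keystone-hacluster
```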
This is referring to the DNS-HA options, specifically `dns-ha` and the `os-*-hostname` options. With these options configured, and the hacluster charm configured with MAAS credentials, an A record will be created in MAAS and kept pointing at the current resource holder, rather than a VIP being used.
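A rough sketch of what that configuration can look like, assuming keystone as the principal application and a subordinate named keystone-hacluster; the hostname, MAAS endpoint, and API key below are placeholders:

```
# DNS HA instead of a VIP: the hacluster charm manages an A record in MAAS
juju config keystone dns-ha=true os-public-hostname=keystone.example.com
juju config keystone-hacluster maas_url=http://10.0.0.2:5240/MAAS maas_credentials=<api-key>
juju add-relation keystone keystone-hacluster
```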
DNS HA is very useful when your applications span L3 domains and can't share a VIP because of that.
Or when you want to save some precious public IPv4 space! It would be great if the hacluster charm could support VIPs on different subnets, so as to save precious IPs: instead of 4 per cluster, use just 1.
There are 2 enhancements that could be done:
- using 1 VIP (and 1 hostname) for all OpenStack services;
  - the downside here is that all of your services now depend on the lifetime of a single VIP. Operationally, any issue with that VIP will have a larger impact on your deployment, which is something to keep in mind. Some services depend on each other (the obvious example is the use of Keystone by all other services), so in that case having separate VIPs won't help much.
- supporting VIPs on a multi-subnet deployment.
  - this is much more complicated since it involves using BGP and, possibly, BFD, which requires tight integration with your network gear, while currently we don't require any. Some of the problems around that, such as avoiding resets of all connections on failover, are described in this paper.
Hi Dmitrii, thank you for the reply. I do not understand how this downside of using 1 VIP does not also exist when you use 4 IPs (1 for each host and 1 VIP) instead of potentially just the VIP. Any issue that affects your VIP will most probably affect the other IPs on the hosts as well.
From the L2 or L3 perspective, yes. If your L2 or L3 is misconfigured you will have a problem for all VIPs, and there is no win compared to just using 1 VIP.
However, if you take one Pacemaker+Corosync cluster (the one that manages 1 out of all the VIPs) down, the other VIPs will not be affected. If that VIP is the VIP for Keystone, you still have a problem for the whole deployment; however, if the VIP for Gnocchi is taken down, you just lose metrics and the rest of the control plane keeps working.
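To make the failure-domain point concrete, a sketch of the current per-service layout (names and addresses are illustrative only): each principal application gets its own subordinate hacluster application and its own VIP, so one cluster going down does not touch the others.

```
# Separate Pacemaker+Corosync clusters, separate VIPs, separate failure domains
juju deploy hacluster keystone-hacluster
juju deploy hacluster gnocchi-hacluster
juju config keystone vip=10.0.0.100
juju config gnocchi vip=10.0.0.101
juju add-relation keystone keystone-hacluster
juju add-relation gnocchi gnocchi-hacluster
```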
I'm looking for some information regarding `customize-failure-domain` in the context of Ussuri networking (ovn-central / ovn-chassis).
The docs only reference neutron-gateway. Is there an updated version of the docs somewhere?
Upstream OVN only added support for this after the latest OpenStack Charms release. I'll ask when we can hope to see it supported by the charms.
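For reference, the documented usage is against neutron-gateway; roughly (a sketch, so please check the option in your charm release before relying on it):

```
# Enable Availability Zone awareness for the gateway, as per the current docs
juju config neutron-gateway customize-failure-domain=true
```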