Juju hostnames and Vault SSL issuance: best-practice questions

I was finally able to successfully set up and deploy Vault recently, only to notice that it's issuing SSL certs to OpenStack based on the hostnames that Juju supplied at charm deployment. I'm not sure whether this is configurable in vault.

Procedurally, what is the best practice for using SSL in this kind of environment, i.e. successfully setting up charms to talk to each other via SSL? (Am I wrong here? What's the point of vault otherwise?) I noticed a configuration option to have charms use MAAS as DNS, but even if I set up the charm in MAAS as "cinder.DOMAIN.TLD" for cinder, vault still issues an SSL cert for something like "juju-d3cbed-0-lxd-0". That results in an error when OpenStack tries to talk to "cinder.DOMAIN.TLD" via SSL, as it gets a cert signed for "juju-d3cbed-0-lxd-0" rather than for whatever is configured in the charm's "os-public-hostname" or the other "os-(admin|internal)-hostname" options.

Another example is openstack-dashboard: I have a DNS entry for horizon.DOMAIN.TLD, yet vault issues an SSL cert for "juju-d3cbed-0-lxd-0", which I can see in the browser when accessing the site. I'm not sure how it's related, but the SSL cert vault issues to the dashboard is also unreadable by apache; that part is fixable with a chmod a+r on the cert file.
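
For reference, here is how I'm checking which certificate is actually being served; the hostname is my sanitized placeholder, and the cert path on the unit is a guess (check your apache SSL config for the real file):

    # Show the subject and SANs of the cert the dashboard actually serves:
    echo | openssl s_client -connect horizon.DOMAIN.TLD:443 \
        -servername horizon.DOMAIN.TLD 2>/dev/null \
      | openssl x509 -noout -text | grep -E -A1 'Subject:|Subject Alternative Name'

    # The readability workaround mentioned above:
    juju ssh openstack-dashboard/0 -- sudo chmod a+r /etc/apache2/ssl/<cert file>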

How do I handle this behavior? Do I set up systems in MAAS with hostnames before using Juju to deploy the charms, and force Juju to deploy charms to specific hosts (I'm going to need guidance on that), or is there a way to adjust this behavior in vault? Can the hostnames be changed after deployment? Should I be using CNAME records instead of A records for the hostnames, and how does that work with vault? And what is "dns-ha-access-record"?

I did find the designate charm for DNS, but I'm not sure how it works or whether it's necessary for this situation.

I feel lost all of a sudden; PKI is not a strong skill of mine.

Thank you in advance for your kind feedback.

*DOMAIN.TLD is a placeholder, as I'm sanitizing my cloud for privacy reasons.


Can anyone offer any advice?

While browsing the documentation for Vault certificate management and the get_certificate_request code that generates certificate requests from OpenStack charms over the vault:certificates interface, I've discovered some requirements that need to be met to generate certificates that include the os-*-hostname entries as subject alternative names.

The certificate is always going to have the common name of the first-related unit for the application (e.g. juju-d3cbed-0-lxd-0), as the cert request always begins with get_hostname_cn() and only the first unit of an application triggers (re)generation of certificates for that application.

The get_certificate_request code within charmhelpers.contrib.openstack.cert_utils has the following requirements before it will append an IP or a Juju-configured os-public-hostname, os-internal-hostname, or os-admin-hostname (a quick verification sketch follows the list):

  1. The interface needs to have a space binding that refers to a network space that includes the IP set for os-$net_type-hostname.
  2. DNS PTR and A records need to exist for the os-*-hostname entries in your upstream or MAAS DNS. This is typically handled by reserving a VIP in each net_type space (public/admin/internal) and registering it within MAAS or upstream DNS.
  3. The os-*-hostname entries must resolve to an address on an interface of the unit requesting the certificate.
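
To sanity-check requirements 2 and 3 from the CLI before relating to vault (the FQDN, VIP, and unit names below are placeholders):

    # Requirement 2: A and PTR records for the os-*-hostname FQDN:
    dig +short A horizon.mydomain.tld
    dig +short -x <VIP returned above>

    # Requirement 3: the resolved address must live on the unit requesting the cert:
    juju ssh openstack-dashboard/0 -- ip -br addr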

Make sure you have the following defined (for example, within openstack-dashboard, in your bundle):

    bindings:
      public: public-space
    options:
      os-public-hostname: horizon.mydomain.tld
      vip:

And make sure you've related hacluster to openstack-dashboard to configure and manage the VIP across the units. If you are not using HA resources, you'll need to pre-define your hosts in your cloud provider as having the hostnames of the services they are hosting. If you are using LXDs, this is not really possible, so hacluster and a VIP are recommended for full support of VIP- and custom-FQDN-based SSL certificates.
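
As a rough sketch of the hacluster piece (the subordinate's application name and the VIP are examples; the VIP must already be reserved in MAAS):

    # Deploy an hacluster subordinate for the dashboard, set the VIP, and relate:
    juju deploy hacluster dashboard-hacluster
    juju config openstack-dashboard vip=<reserved VIP>
    juju add-relation openstack-dashboard dashboard-hacluster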

I think there is a potential race in the openstack-dashboard hooks that could cause the application's certificate request to come from a unit that is not currently hosting the os-*-hostname VIP on the interface(s) you're interested in. That would fail the check for an IP matching os-$net_type-hostname and prevent the VIP's FQDN from being added to the certificate.

You should be able to view both the vault and the openstack-dashboard juju unit logs to look for certificate generation/request logs based on these code paths to see what requests were made of vault and which unit did the requesting.
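
For example (unit names depend on your model):

    # Replay both logs and filter for certificate request/generation messages:
    juju debug-log --replay --include unit-vault-0 \
        --include unit-openstack-dashboard-0 | grep -i cert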

I have submitted the potential race condition found here: Bug #1878064, "Race in openstack ssl certificates generation can ..." (Charm Helpers on Launchpad).

Please add any notes that you can from your environment if you find yourself experiencing this issue.

Thank you,
-Drew

The bug I thought existed ended up being invalid. If you configure a VIP for your service and add the proper clustering subordinates to the charms that require them, the VIP will be included in the SANs of the certificates provided by vault.

@nathan-flowers were you able to make progress?

Thank you all for the feedback.
I'm working through the suggestions and recommendations.

@nathan-flowers So, a few things I've noticed with the Vault certificate setup on OpenStack.

Nothing ever works post-deployment. I don't know if this is a bug or just something not added in the code. If you deploy an HA stack and then run 'juju config openstack-dashboard os-public-hostname="dashboard.domain.tld"', it has no effect on the certificate: even after a reissue, it still has DNS:juju-d3cbed-0-lxd-0.maas, which is very frustrating, even after meeting the requirements that @afreiberger stated.
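
Concretely, this is the sequence that has no effect for me (assuming reissue-certificates is the right vault action; that's my reading of the charm docs):

    juju config openstack-dashboard os-public-hostname="dashboard.domain.tld"
    # Ask vault to reissue certificates over the existing relations:
    juju run-action vault/0 reissue-certificates --wait
    # The reissued cert still only carries DNS:juju-d3cbed-0-lxd-0.maas.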

I have found I have to create an overlay with the hostnames defined. I still have not tested whether I can change os-public-hostname post-deployment, or whether it will just ignore the update and generate the same certificate.
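
The overlay I'm using looks roughly like this (domain names are sanitized placeholders):

    cat > hostnames-overlay.yaml <<'EOF'
    applications:
      openstack-dashboard:
        options:
          os-public-hostname: dashboard.domain.tld
      cinder:
        options:
          os-public-hostname: cinder.domain.tld
    EOF
    juju deploy ./bundle.yaml --overlay hostnames-overlay.yaml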

I've not had time to validate this, but I've pinged our OpenStack engineering team regarding it.

You may have to remove-relation and then add-relation between openstack-dashboard and vault to have it re-handshake the hostname, but I've not tested whether that's a viable workaround. This does sound like a day-2 operations bug and should be filed against https://pad.lv/b/charm-openstack-dashboard.
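
That is, something along these lines (untested; the endpoint names assume the standard certificates interface on both charms):

    juju remove-relation openstack-dashboard:certificates vault:certificates
    # Wait for the model to settle (watch juju status), then re-add:
    juju add-relation openstack-dashboard:certificates vault:certificates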

We're seeing this problem as well in our test deployments: all services are terminally broken for lack of the ability to communicate with each other. We keep our undercloud functions (DB, MQ, vault) in the management network, but need the certificates for interfaces in the public network. Reverse lookups of the VIP addresses point to the public-network hostnames of the services to which they're bound, but Vault doesn't honor this, either because the request comes from the management address of the unit, which is link-local with vault in the management subnet (so the source address doesn't match the public VIP), or because it can't figure out that the requesting unit is part of an hacluster holding the VIP that requires the hostname SAN record.
How does one fix this? What is Canonical's reference DNS and network-spaces setup when deploying with MAAS + Juju?
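
For context, this is the kind of bindings overlay we've been experimenting with to try to satisfy the space-binding requirement mentioned above (the space names are ours, and it hasn't solved it so far):

    cat > bindings-overlay.yaml <<'EOF'
    applications:
      openstack-dashboard:
        bindings:
          "": management-space        # default binding for unlisted endpoints
          public: public-space
          certificates: public-space  # steer the cert request toward the public space
    EOF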


I have the same problem!