Setting up vault with mysql-innodb-cluster on Charmed Kubernetes v1.21

Hi,

I have been trying to set up vault along with mysql-innodb-cluster to be used as the Kubernetes CA for Charmed Kubernetes version 1.21, following the approach described in Using Vault as a CA | Ubuntu.

Here’s what the vault portion of my bundle looks like:

  hacluster-vault:
    charm: cs:hacluster-78
    options:
      cluster_count: 3
  vault-mysql-router:
    charm: cs:mysql-router-11
    bindings:
      "":db-space
      shared-db: db-space
      db-router: db-space
  vault:
    charm: cs:vault-50
    num_units: 3
    options:
      auto-generate-root-ca-cert: true
      vip: 192.168.x.x
    bindings:
      "": db-space
      shared-db: db-space
    to:
    - 1
    - 2
    - 3

  vault-innodb-cluster:
    charm: cs:mysql-innodb-cluster-5
    num_units: 3
    bindings:
      "": db-space
      cluster: db-space
      db-router: db-space
    to:
    - 4
    - 5
    - 6

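For completeness, the spaces and the resolved endpoint bindings can be double-checked with standard juju commands (on Juju 2.9, juju show-application should include the endpoint bindings in its output):

$ juju spaces
$ juju show-application vault-innodb-cluster
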
And here’s what the relations for vault look like:

- - kubernetes-master:certificates
  - vault:certificates
- - etcd:certificates
  - vault:certificates
- - kubernetes-worker:certificates
  - vault:certificates
- - kubeapi-load-balancer:certificates
  - vault:certificates
- - vault-mysql-router:db-router
  - vault-innodb-cluster:db-router
- - vault-mysql-router:shared-db
  - vault:shared-db
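
To confirm these relations were actually established, they can be listed alongside the status output:

$ juju status --relations vault vault-mysql-router vault-innodb-cluster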

While trying to deploy this, I’m stuck with mysql-innodb-cluster giving me the following message:

$ juju status vault-innodb-cluster
Model    Controller            Cloud/Region  Version  SLA          Timestamp
mymodel  juju-controller.maas  maas/default  2.9.15   unsupported  16:11:43+08:00

App                   Version  Status   Scale  Charm                 Store       Channel  Rev  OS      Message
vault-innodb-cluster  ussuri   waiting      3  mysql-innodb-cluster  charmstore  stable     5  ubuntu  'cluster' incomplete, MySQL not installed

Unit                     Workload  Agent  Machine  Public address  Ports  Message
vault-innodb-cluster/11  waiting   idle   1        192.168.x.1           'cluster' incomplete, MySQL not installed
vault-innodb-cluster/12  waiting   idle   2        192.168.x.2           'cluster' incomplete, MySQL not installed
vault-innodb-cluster/13  waiting   idle   3        192.168.x.3           'cluster' incomplete, MySQL not installed

As a result, vault is also waiting for the db:

$ juju status vault
Model    Controller            Cloud/Region  Version  SLA          Timestamp
mymodel  juju-controller.maas  maas/default  2.9.15   unsupported  16:13:34+08:00

App                 Version  Status   Scale  Charm         Store       Channel  Rev  OS      Message
hacluster-vault              active       3  hacluster     charmstore  stable    78  ubuntu  Unit is ready and clustered
vault                        waiting      3  vault         charmstore  stable    50  ubuntu  'shared-db' incomplete
vault-mysql-router  8.0.26   waiting      3  mysql-router  charmstore  stable    11  ubuntu  'db-router' incomplete, MySQL Router not yet bootstrapped

Unit                      Workload  Agent  Machine  Public address  Ports  Message
vault/6                   waiting   idle   4        192.168.x.4           'shared-db' incomplete
  hacluster-vault/1       active    idle            192.168.x.4           Unit is ready and clustered
  vault-mysql-router/35*  waiting   idle            192.168.x.4           'db-router' incomplete, MySQL Router not yet bootstrapped
vault/7                   waiting   idle   5        192.168.x.5           'shared-db' incomplete
  hacluster-vault/2       active    idle            192.168.x.5           Unit is ready and clustered
  vault-mysql-router/36   waiting   idle            192.168.x.5           'db-router' incomplete, MySQL Router not yet bootstrapped
vault/8                   waiting   idle   6        192.168.x.6           'shared-db' incomplete
  hacluster-vault/0*      active    idle            192.168.x.6           Unit is ready and clustered
  vault-mysql-router/37   waiting   idle            192.168.x.6           'db-router' incomplete, MySQL Router not yet bootstrapped

Is there something wrong with my bundle config?

Would be grateful for any hints/suggestions.

Edit: I noticed the following errors in the juju logs:

2021-10-06 10:05:35 INFO unit.vault-innodb-cluster/11.juju-log server.go:325 Coordinator: Initializing coordinator layer
2021-10-06 10:05:35 INFO unit.vault-innodb-cluster/11.juju-log server.go:325 Initializing Leadership Layer (is follower)
2021-10-06 10:05:36 INFO unit.vault-innodb-cluster/11.juju-log server.go:325 Initializing Snap Layer
2021-10-06 10:05:37 INFO unit.vault-innodb-cluster/11.juju-log server.go:325 Invoking reactive handler: reactive/layer_openstack.py:59:default_update_status
2021-10-06 10:05:37 INFO unit.vault-innodb-cluster/11.juju-log server.go:325 Invoking reactive handler: reactive/mysql_innodb_cluster_handlers.py:339:configure_certificates
2021-10-06 10:05:37 INFO unit.vault-innodb-cluster/11.juju-log server.go:325 Invoking reactive handler: reactive/layer_openstack.py:77:check_really_is_update_status
2021-10-06 10:05:37 INFO unit.vault-innodb-cluster/11.juju-log server.go:325 Invoking reactive handler: reactive/layer_openstack.py:88:run_default_update_status
2021-10-06 10:05:37 INFO unit.vault-innodb-cluster/11.juju-log server.go:325 Invoking reactive handler: hooks/relations/tls-certificates/requires.py:109:broken:certificates
2021-10-06 10:05:37 INFO unit.vault-innodb-cluster/11.juju-log server.go:325 Invoking reactive handler: hooks/relations/mysql-innodb-cluster/peers.py:69:joined:cluster
2021-10-06 10:05:37 INFO unit.vault-innodb-cluster/11.juju-log server.go:325 Invoking reactive handler: hooks/relations/mysql-innodb-cluster/peers.py:75:changed:cluster
2021-10-06 10:05:37 INFO unit.vault-innodb-cluster/11.juju-log server.go:325 Invoking reactive handler: hooks/relations/mysql-router/provides.py:47:joined:db-router
2021-10-06 10:05:37 INFO unit.vault-innodb-cluster/11.juju-log server.go:325 Invoking reactive handler: hooks/relations/mysql-router/provides.py:53:changed:db-router
2021-10-06 10:05:38 INFO unit.vault-innodb-cluster/11.juju-log server.go:325 DEPRECATION WARNING: Function get_os_codename_package is being removed on/around 2021-01 : moved to charmhelpers.contrib.openstack.utils.get_installed_os_version()
2021-10-06 10:05:38 WARNING unit.vault-innodb-cluster/11.juju-log server.go:325 Package openstack-release has no installation candidate.
2021-10-06 10:05:38 INFO unit.vault-innodb-cluster/11.juju-log server.go:325 Installing ['openstack-release'] with options: ['--option=Dpkg::Options::=--force-confold']
2021-10-06 10:05:38 WARNING unit.vault-innodb-cluster/11.update-status logger.go:60 E: Unable to locate package openstack-release
2021-10-06 10:05:39 INFO juju.worker.uniter.operation runhook.go:152 ran "update-status" hook (via explicit, bespoke hook script)

Thanks & Regards, Sana

The log messages related to openstack-release are just warnings and shouldn’t be the cause of your issue. Can you share more logs?
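
For example, the full logs for one of the stuck units can be replayed with:

$ juju debug-log --replay --include unit-vault-innodb-cluster-11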

@pmatulis Thanks much for helping with that.

I see the following error message for mysql-innodb-cluster:

2021-10-13 02:58:26 INFO unit.vault-innodb-cluster/23.juju-log server.go:325 Initializing Leadership Layer (is leader)
2021-10-13 02:58:26 INFO unit.vault-innodb-cluster/23.juju-log server.go:325 Coordinator: Initializing coordinator layer
2021-10-13 02:58:26 INFO unit.vault-innodb-cluster/23.juju-log server.go:325 Initializing Snap Layer
2021-10-13 02:58:28 WARNING unit.vault-innodb-cluster/23.update-status logger.go:60 All snaps up to date.
2021-10-13 02:58:28 INFO unit.vault-innodb-cluster/23.juju-log server.go:325 Invoking reactive handler: reactive/layer_openstack.py:59:default_update_status
2021-10-13 02:58:28 INFO unit.vault-innodb-cluster/23.juju-log server.go:325 Invoking reactive handler: reactive/layer_openstack.py:77:check_really_is_update_status
2021-10-13 02:58:28 INFO unit.vault-innodb-cluster/23.juju-log server.go:325 Invoking reactive handler: reactive/layer_openstack.py:88:run_default_update_status
2021-10-13 02:58:28 INFO unit.vault-innodb-cluster/23.juju-log server.go:325 Invoking reactive handler: reactive/mysql_innodb_cluster_handlers.py:91:create_remote_cluster_user
2021-10-13 02:58:54 ERROR unit.vault-innodb-cluster/23.juju-log server.go:325 Cluster is unavailable: Cannot set LC_ALL to locale en_US.UTF-8: No such file or directory
Traceback (most recent call last):
  File "<string>", line 2, in <module>
mysqlsh.Error: Shell Error (51314): Dba.get_cluster: This function is not available through a session to a standalone instance (metadata exists, instance belongs to that metadata, but GR is not active)

2021-10-13 02:58:54 WARNING unit.vault-innodb-cluster/23.juju-log server.go:325 Cannot determine the cluster primary RW node for writes.
2021-10-13 02:58:54 WARNING unit.vault-innodb-cluster/23.juju-log server.go:325 Attempted to write to the RO node: Localhost via UNIX socket in configure_db_for_hosts. Skipping.
2021-10-13 02:58:54 WARNING unit.vault-innodb-cluster/23.juju-log server.go:325 Not all remote users created.
2021-10-13 02:58:54 INFO unit.vault-innodb-cluster/23.juju-log server.go:325 Invoking reactive handler: hooks/relations/tls-certificates/requires.py:109:broken:certificates
2021-10-13 02:58:54 INFO unit.vault-innodb-cluster/23.juju-log server.go:325 Invoking reactive handler: hooks/relations/mysql-router/provides.py:47:joined:db-router
2021-10-13 02:58:54 INFO unit.vault-innodb-cluster/23.juju-log server.go:325 Invoking reactive handler: hooks/relations/mysql-router/provides.py:53:changed:db-router
2021-10-13 02:58:54 INFO unit.vault-innodb-cluster/23.juju-log server.go:325 Invoking reactive handler: hooks/relations/mysql-innodb-cluster/peers.py:69:joined:cluster
2021-10-13 02:58:54 INFO unit.vault-innodb-cluster/23.juju-log server.go:325 Invoking reactive handler: hooks/relations/mysql-innodb-cluster/peers.py:75:changed:cluster
2021-10-13 02:58:55 INFO juju.worker.uniter.operation runhook.go:152 ran "update-status" hook (via explicit, bespoke hook script)
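
The Dba.get_cluster traceback (“GR is not active”) suggests Group Replication isn’t running on that instance, which seems more significant than the locale warning. Assuming this charm revision ships the cluster-status and reboot-cluster-from-complete-outage actions that the mysql-innodb-cluster charm documents (I haven’t verified rev 5 has them), the cluster state could be inspected with, and in a complete-outage situation recovered by:

$ juju run-action --wait vault-innodb-cluster/leader cluster-status
$ juju run-action --wait vault-innodb-cluster/leader reboot-cluster-from-complete-outage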

As for the locale warning, I tried installing the locale and setting it up, but that didn’t help either; I still see the same error message. Here’s what I ran:

echo "LC_ALL=en_US.UTF-8" | sudo tee -a /etc/environment
echo "en_US.UTF-8 UTF-8" | sudo tee -a /etc/locale.gen
echo "LANG=en_US.UTF-8" | sudo tee -a /etc/locale.conf
sudo locale-gen en_US.UTF-8
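
As far as I know, /etc/locale.conf isn’t read on Ubuntu; the persistent setting lives in /etc/default/locale, which is normally written with update-locale rather than edited directly:

$ sudo update-locale LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8

(It’s also possible the charm’s hook environment never sees /etc/environment, since that file is read at login via PAM and the unit agent runs as a systemd service.)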

The router charm gives the following error messages:

2021-10-13 03:05:33 INFO unit.vault-mysql-router/45.juju-log server.go:325 Reactive main running for hook update-status
2021-10-13 03:05:33 ERROR unit.vault-mysql-router/45.juju-log server.go:325 Unable to find implementation for relation: requires of juju-info
2021-10-13 03:05:33 INFO unit.vault-mysql-router/45.juju-log server.go:325 Invoking reactive handler: reactive/layer_openstack.py:59:default_update_status
2021-10-13 03:05:33 INFO unit.vault-mysql-router/45.juju-log server.go:325 Invoking reactive handler: reactive/mysql_router_handlers.py:18:db_router_request
2021-10-13 03:05:33 INFO unit.vault-mysql-router/45.juju-log server.go:325 Invoking reactive handler: reactive/layer_openstack.py:77:check_really_is_update_status
2021-10-13 03:05:33 INFO unit.vault-mysql-router/45.juju-log server.go:325 Invoking reactive handler: reactive/layer_openstack.py:88:run_default_update_status
2021-10-13 03:05:33 INFO unit.vault-mysql-router/45.juju-log server.go:325 Invoking reactive handler: hooks/relations/tls-certificates/requires.py:109:broken:certificates
2021-10-13 03:05:33 INFO unit.vault-mysql-router/45.juju-log server.go:325 Invoking reactive handler: hooks/relations/mysql-shared/provides.py:42:joined:shared-db
2021-10-13 03:05:33 INFO unit.vault-mysql-router/45.juju-log server.go:325 Invoking reactive handler: hooks/relations/mysql-shared/provides.py:48:changed:shared-db
2021-10-13 03:05:34 INFO juju.worker.uniter.operation runhook.go:152 ran "update-status" hook (via explicit, bespoke hook script)
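
To see which keys the router is still waiting for on the db-router relation, the relation data can be dumped with:

$ juju show-unit vault-mysql-router/45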

Any hints/suggestions would be extremely helpful.

Here’s the current status:

$ juju status vault-innodb-cluster
Model    Controller            Cloud/Region  Version  SLA          Timestamp
mymodel  juju-controller.maas  maas/default  2.9.15   unsupported  11:11:28+08:00

App                   Version  Status   Scale  Charm                 Store       Channel  Rev  OS      Message
vault-innodb-cluster  8.0.26   waiting      3  mysql-innodb-cluster  charmstore  stable     5  ubuntu  Not all instances configured for clustering

Unit                      Workload  Agent  Machine  Public address  Ports  Message
vault-innodb-cluster/23*  waiting   idle   1        192.168.x.1            Not all instances configured for clustering
vault-innodb-cluster/24   waiting   idle   2        192.168.x.2            Instance not yet configured for clustering
vault-innodb-cluster/25   waiting   idle   3        192.168.x.3            Instance not yet configured for clustering

$ juju status vault
Model    Controller            Cloud/Region  Version  SLA          Timestamp
mymodel  juju-controller.maas  maas/default  2.9.15   unsupported  11:15:53+08:00

App                 Version  Status   Scale  Charm         Store       Channel  Rev  OS      Message
vault                        waiting      3  vault         charmstore  stable    50  ubuntu  'shared-db' incomplete
vault-mysql-router  8.0.26   waiting      3  mysql-router  charmstore  stable     6  ubuntu  'db-router' incomplete, MySQL Router not yet bootstrapped

Unit                      Workload  Agent  Machine  Public address  Ports  Message
vault/9*                  waiting   idle   5        192.168.x.5          'shared-db' incomplete
  vault-mysql-router/45   waiting   idle            192.168.x.5          'db-router' incomplete, MySQL Router not yet bootstrapped
vault/10                  waiting   idle   6        192.168.x.6          'shared-db' incomplete
  vault-mysql-router/46   waiting   idle            192.168.x.6          'db-router' incomplete, MySQL Router not yet bootstrapped
vault/11                  waiting   idle   7        192.168.x.7          'shared-db' incomplete
  vault-mysql-router/44*  waiting   idle            192.168.x.7          'db-router' incomplete, MySQL Router not yet bootstrapped

Regards, Sana