Vault: hook failed: "start"

After a server shutdown, Vault is not starting again, with the error:
vault 1.5.4 error 1 vault charmstore 44 ubuntu hook failed: "start"

Here is part of the log file:
2021-06-10 13:12:03 WARNING start Traceback (most recent call last):
2021-06-10 13:12:03 WARNING start File "/var/lib/juju/agents/unit-vault-0/.venv/lib/python3.8/site-packages/urllib3/connection.py", line 169, in _new_conn
2021-06-10 13:12:03 WARNING start conn = connection.create_connection(
2021-06-10 13:12:03 WARNING start File "/var/lib/juju/agents/unit-vault-0/.venv/lib/python3.8/site-packages/urllib3/util/connection.py", line 96, in create_connection
2021-06-10 13:12:03 WARNING start raise err
2021-06-10 13:12:03 WARNING start File "/var/lib/juju/agents/unit-vault-0/.venv/lib/python3.8/site-packages/urllib3/util/connection.py", line 86, in create_connection
2021-06-10 13:12:03 WARNING start sock.connect(sa)
2021-06-10 13:12:03 WARNING start ConnectionRefusedError: [Errno 111] Connection refused
2021-06-10 13:12:03 WARNING start
2021-06-10 13:12:03 WARNING start During handling of the above exception, another exception occurred:
2021-06-10 13:12:03 WARNING start
2021-06-10 13:12:03 WARNING start Traceback (most recent call last):
2021-06-10 13:12:03 WARNING start File "/var/lib/juju/agents/unit-vault-0/.venv/lib/python3.8/site-packages/urllib3/connectionpool.py", line 699, in urlopen
2021-06-10 13:12:03 WARNING start httplib_response = self._make_request(
2021-06-10 13:12:03 WARNING start File "/var/lib/juju/agents/unit-vault-0/.venv/lib/python3.8/site-packages/urllib3/connectionpool.py", line 394, in _make_request
2021-06-10 13:12:03 WARNING start conn.request(method, url, **httplib_request_kw)
2021-06-10 13:12:03 WARNING start File "/var/lib/juju/agents/unit-vault-0/.venv/lib/python3.8/site-packages/urllib3/connection.py", line 234, in request
2021-06-10 13:12:03 WARNING start super(HTTPConnection, self).request(method, url, body=body, headers=headers)
2021-06-10 13:12:03 WARNING start File "/usr/lib/python3.8/http/client.py", line 1255, in request
2021-06-10 13:12:03 WARNING start self._send_request(method, url, body, headers, encode_chunked)
2021-06-10 13:12:03 WARNING start File "/usr/lib/python3.8/http/client.py", line 1301, in _send_request
2021-06-10 13:12:03 WARNING start self.endheaders(body, encode_chunked=encode_chunked)
2021-06-10 13:12:03 WARNING start File "/usr/lib/python3.8/http/client.py", line 1250, in endheaders
2021-06-10 13:12:03 WARNING start self._send_output(message_body, encode_chunked=encode_chunked)
2021-06-10 13:12:03 WARNING start File "/usr/lib/python3.8/http/client.py", line 1010, in _send_output
2021-06-10 13:12:03 WARNING start self.send(msg)
2021-06-10 13:12:03 WARNING start File "/usr/lib/python3.8/http/client.py", line 950, in send
2021-06-10 13:12:03 WARNING start self.connect()
2021-06-10 13:12:03 WARNING start File "/var/lib/juju/agents/unit-vault-0/.venv/lib/python3.8/site-packages/urllib3/connection.py", line 200, in connect
2021-06-10 13:12:03 WARNING start conn = self._new_conn()
2021-06-10 13:12:03 WARNING start File "/var/lib/juju/agents/unit-vault-0/.venv/lib/python3.8/site-packages/urllib3/connection.py", line 181, in _new_conn
2021-06-10 13:12:03 WARNING start raise NewConnectionError(
2021-06-10 13:12:03 WARNING start urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f3a1804a160>: Failed to establish a new connection: [Errno 111] Connection refused
2021-06-10 13:12:03 WARNING start
2021-06-10 13:12:03 WARNING start During handling of the above exception, another exception occurred:
2021-06-10 13:12:03 WARNING start
2021-06-10 13:12:03 WARNING start Traceback (most recent call last):
2021-06-10 13:12:03 WARNING start File "/var/lib/juju/agents/unit-vault-0/.venv/lib/python3.8/site-packages/requests/adapters.py", line 439, in send
2021-06-10 13:12:03 WARNING start resp = conn.urlopen(
2021-06-10 13:12:03 WARNING start File "/var/lib/juju/agents/unit-vault-0/.venv/lib/python3.8/site-packages/urllib3/connectionpool.py", line 755, in urlopen
2021-06-10 13:12:03 WARNING start retries = retries.increment(
2021-06-10 13:12:03 WARNING start File "/var/lib/juju/agents/unit-vault-0/.venv/lib/python3.8/site-packages/urllib3/util/retry.py", line 573, in increment
2021-06-10 13:12:03 WARNING start raise MaxRetryError(_pool, url, error or ResponseError(cause))
2021-06-10 13:12:03 WARNING start urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='127.0.0.1', port=8220): Max retries exceeded with url: /v1/auth/approle/login (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f3a1804a160>: Failed to establish a new connection: [Errno 111] Connection refused'))
2021-06-10 13:12:03 WARNING start
2021-06-10 13:12:03 WARNING start During handling of the above exception, another exception occurred:
2021-06-10 13:12:03 WARNING start
2021-06-10 13:12:03 WARNING start Traceback (most recent call last):
2021-06-10 13:12:03 WARNING start File "/var/lib/juju/agents/unit-vault-0/charm/hooks/start", line 22, in <module>
2021-06-10 13:12:03 WARNING start main()
2021-06-10 13:12:03 WARNING start File "/var/lib/juju/agents/unit-vault-0/.venv/lib/python3.8/site-packages/charms/reactive/__init__.py", line 74, in main
2021-06-10 13:12:03 WARNING start bus.dispatch(restricted=restricted_mode)
2021-06-10 13:12:03 WARNING start File "/var/lib/juju/agents/unit-vault-0/.venv/lib/python3.8/site-packages/charms/reactive/bus.py", line 390, in dispatch
2021-06-10 13:12:03 WARNING start _invoke(other_handlers)
2021-06-10 13:12:03 WARNING start File "/var/lib/juju/agents/unit-vault-0/.venv/lib/python3.8/site-packages/charms/reactive/bus.py", line 359, in _invoke
2021-06-10 13:12:03 WARNING start handler.invoke()
2021-06-10 13:12:03 WARNING start File "/var/lib/juju/agents/unit-vault-0/.venv/lib/python3.8/site-packages/charms/reactive/bus.py", line 181, in invoke
2021-06-10 13:12:03 WARNING start self._action(*args)
2021-06-10 13:12:03 WARNING start File "/var/lib/juju/agents/unit-vault-0/charm/reactive/vault_handlers.py", line 846, in publish_ca_info
2021-06-10 13:12:03 WARNING start if not client_approle_authorized():
2021-06-10 13:12:03 WARNING start File "/var/lib/juju/agents/unit-vault-0/charm/reactive/vault_handlers.py", line 789, in client_approle_authorized
2021-06-10 13:12:03 WARNING start vault.get_local_client()
2021-06-10 13:12:03 WARNING start File "/var/lib/juju/agents/unit-vault-0/.venv/lib/python3.8/site-packages/tenacity/__init__.py", line 333, in wrapped_f
2021-06-10 13:12:03 WARNING start return self(f, *args, **kw)
2021-06-10 13:12:03 WARNING start File "/var/lib/juju/agents/unit-vault-0/.venv/lib/python3.8/site-packages/tenacity/__init__.py", line 423, in __call__
2021-06-10 13:12:03 WARNING start do = self.iter(retry_state=retry_state)
2021-06-10 13:12:03 WARNING start File "/var/lib/juju/agents/unit-vault-0/.venv/lib/python3.8/site-packages/tenacity/__init__.py", line 360, in iter
2021-06-10 13:12:03 WARNING start return fut.result()
2021-06-10 13:12:03 WARNING start File "/usr/lib/python3.8/concurrent/futures/_base.py", line 432, in result
2021-06-10 13:12:03 WARNING start return self.__get_result()
2021-06-10 13:12:03 WARNING start File "/usr/lib/python3.8/concurrent/futures/_base.py", line 388, in __get_result
2021-06-10 13:12:03 WARNING start raise self._exception
2021-06-10 13:12:03 WARNING start File "/var/lib/juju/agents/unit-vault-0/.venv/lib/python3.8/site-packages/tenacity/__init__.py", line 426, in __call__
2021-06-10 13:12:03 WARNING start result = fn(*args, **kwargs)
2021-06-10 13:12:03 WARNING start File "/var/lib/juju/agents/unit-vault-0/charm/lib/charm/vault.py", line 254, in get_local_client
2021-06-10 13:12:03 WARNING start client.auth_approle(app_role_id)
2021-06-10 13:12:03 WARNING start File "/var/lib/juju/agents/unit-vault-0/.venv/lib/python3.8/site-packages/hvac/v1/__init__.py", line 2072, in auth_approle
2021-06-10 13:12:03 WARNING start return self.auth('/v1/auth/{0}/login'.format(mount_point), json=params, use_token=use_token)
2021-06-10 13:12:03 WARNING start File "/var/lib/juju/agents/unit-vault-0/.venv/lib/python3.8/site-packages/hvac/v1/__init__.py", line 1726, in auth
2021-06-10 13:12:03 WARNING start return self._adapter.auth(
2021-06-10 13:12:03 WARNING start File "/var/lib/juju/agents/unit-vault-0/.venv/lib/python3.8/site-packages/hvac/adapters.py", line 159, in auth
2021-06-10 13:12:03 WARNING start response = self.post(url, **kwargs).json()
2021-06-10 13:12:03 WARNING start File "/var/lib/juju/agents/unit-vault-0/.venv/lib/python3.8/site-packages/hvac/adapters.py", line 103, in post
2021-06-10 13:12:03 WARNING start return self.request('post', url, **kwargs)
2021-06-10 13:12:03 WARNING start File "/var/lib/juju/agents/unit-vault-0/.venv/lib/python3.8/site-packages/hvac/adapters.py", line 218, in request
2021-06-10 13:12:03 WARNING start response = self.session.request(method, url, headers=headers,
2021-06-10 13:12:03 WARNING start File "/var/lib/juju/agents/unit-vault-0/.venv/lib/python3.8/site-packages/requests/sessions.py", line 542, in request
2021-06-10 13:12:03 WARNING start resp = self.send(prep, **send_kwargs)
2021-06-10 13:12:03 WARNING start File "/var/lib/juju/agents/unit-vault-0/.venv/lib/python3.8/site-packages/requests/sessions.py", line 655, in send
2021-06-10 13:12:03 WARNING start r = adapter.send(request, **kwargs)
2021-06-10 13:12:03 WARNING start File "/var/lib/juju/agents/unit-vault-0/.venv/lib/python3.8/site-packages/requests/adapters.py", line 516, in send
2021-06-10 13:12:03 WARNING start raise ConnectionError(e, request=request)
2021-06-10 13:12:03 WARNING start requests.exceptions.ConnectionError: HTTPConnectionPool(host='127.0.0.1', port=8220): Max retries exceeded with url: /v1/auth/approle/login (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f3a1804a160>: Failed to establish a new connection: [Errno 111] Connection refused'))
2021-06-10 13:12:03 ERROR juju.worker.uniter.operation runhook.go:136 hook "start" (via explicit, bespoke hook script) failed: exit status 1
2021-06-10 13:12:03 INFO juju.worker.uniter resolver.go:143 awaiting error resolution for "start" hook

Vault will be sealed upon a service restart. What is the output of juju status vault?

Model      Controller       Cloud/Region              Version  SLA          Timestamp
openstack  juju-controller  maas.pcty.geoint/default  2.8.8    unsupported  09:55:06-05:00

App                     Version  Status   Scale  Charm                   Store       Channel  Rev  OS      Message
ceph-mon                15.2.7   active       3  ceph-mon                charmstore            53  ubuntu  Unit is ready and clustered
ceph-osd                15.2.11  active       4  ceph-osd                charmstore           308  ubuntu  Unit is ready (1 OSD)
ceph-radosgw            15.2.7   active       1  ceph-radosgw            charmstore           294  ubuntu  Unit is ready
cinder                  17.0.1   active       1  cinder                  charmstore           308  ubuntu  Unit is ready
cinder-ceph             17.0.1   active       1  cinder-ceph             charmstore           260  ubuntu  Unit is ready
cinder-mysql-router     8.0.25   blocked      1  mysql-router            charmstore             6  ubuntu  Failed to connect to MySQL
dashboard-mysql-router  8.0.25   blocked      1  mysql-router            charmstore             6  ubuntu  Failed to connect to MySQL
glance                  21.0.0   active       1  glance                  charmstore           303  ubuntu  Unit is ready
glance-mysql-router     8.0.25   blocked      1  mysql-router            charmstore             6  ubuntu  Failed to connect to MySQL
keystone                18.0.0   active       1  keystone                charmstore           321  ubuntu  Application Ready
keystone-mysql-router   8.0.25   blocked      1  mysql-router            charmstore             6  ubuntu  Failed to connect to MySQL
mysql-innodb-cluster    8.0.25   blocked      3  mysql-innodb-cluster    charmstore             5  ubuntu  MySQL InnoDB Cluster not healthy: None
neutron-api             17.0.0   active       1  neutron-api             charmstore           292  ubuntu  Unit is ready
neutron-api-plugin-ovn  17.0.0   active       1  neutron-api-plugin-ovn  charmstore             4  ubuntu  Unit is ready
neutron-mysql-router    8.0.25   blocked      1  mysql-router            charmstore             6  ubuntu  Failed to connect to MySQL
nova-cloud-controller   22.0.1   active       1  nova-cloud-controller   charmstore           352  ubuntu  PO: Unit is ready
nova-compute            22.0.1   active       3  nova-compute            charmstore           325  ubuntu  Unit is ready
nova-mysql-router       8.0.25   blocked      1  mysql-router            charmstore             6  ubuntu  Failed to connect to MySQL
ntp                     3.5      active       3  ntp                     charmstore            44  ubuntu  chrony: Ready
openstack-dashboard     18.6.1   active       1  openstack-dashboard     charmstore           311  ubuntu  Unit is ready
ovn-central             20.03.1  active       3  ovn-central             charmstore             5  ubuntu  Unit is ready (leader: ovnsb_db)
ovn-chassis             20.03.1  active       3  ovn-chassis             charmstore            10  ubuntu  Unit is ready
placement               4.0.0    active       1  placement               charmstore            17  ubuntu  Unit is ready
placement-mysql-router  8.0.25   blocked      1  mysql-router            charmstore             6  ubuntu  Failed to connect to MySQL
rabbitmq-server         3.8.2    active       1  rabbitmq-server         charmstore           108  ubuntu  Unit is ready
vault                   1.5.4    error        1  vault                   charmstore            44  ubuntu  hook failed: "start"
vault-mysql-router      8.0.25   blocked      1  mysql-router            charmstore             6  ubuntu  Failed to connect to MySQL

Unit                         Workload  Agent      Machine  Public address  Ports              Message
ceph-mon/0                   active    idle       0/lxd/0  192.168.221.78                     Unit is ready and clustered
ceph-mon/1                   active    idle       1/lxd/0  192.168.221.22                     Unit is ready and clustered
ceph-mon/2*                  active    idle       2/lxd/0  192.168.221.12                     Unit is ready and clustered
ceph-osd/1                   active    idle       1        192.168.221.91                     Unit is ready (1 OSD)
ceph-osd/2                   active    idle       2        192.168.221.77                     Unit is ready (1 OSD)
ceph-osd/3                   active    idle       3        192.168.221.7                      Unit is ready (2 OSD)
ceph-osd/5*                  active    idle       6        192.168.221.24                     Unit is ready (2 OSD)
ceph-radosgw/0*              active    idle       0/lxd/1  192.168.221.14  80/tcp             Unit is ready
cinder/0*                    active    idle       1/lxd/1  192.168.221.20  8776/tcp           Unit is ready
  cinder-ceph/0*             active    idle                192.168.221.20                     Unit is ready
  cinder-mysql-router/0*     blocked   idle                192.168.221.20                     Failed to connect to MySQL
glance/0*                    active    idle       2/lxd/1  192.168.221.25  9292/tcp           Unit is ready
  glance-mysql-router/0*     blocked   idle                192.168.221.25                     Failed to connect to MySQL
keystone/0*                  active    idle       0/lxd/2  192.168.221.15  5000/tcp           Unit is ready
  keystone-mysql-router/0*   blocked   idle                192.168.221.15                     Failed to connect to MySQL
mysql-innodb-cluster/0*      blocked   executing  0/lxd/3  192.168.221.99                     MySQL InnoDB Cluster not healthy: None
mysql-innodb-cluster/1       blocked   idle       1/lxd/2  192.168.221.5                      MySQL InnoDB Cluster not healthy: None
mysql-innodb-cluster/2       blocked   idle       2/lxd/2  192.168.221.8                      MySQL InnoDB Cluster not healthy: None
neutron-api/0*               active    idle       1/lxd/3  192.168.221.23  9696/tcp           Unit is ready
  neutron-api-plugin-ovn/0*  active    idle                192.168.221.23                     Unit is ready
  neutron-mysql-router/0*    blocked   idle                192.168.221.23                     Failed to connect to MySQL
nova-cloud-controller/0*     active    idle       0/lxd/4  192.168.221.19  8774/tcp,8775/tcp  PO: Unit is ready
  nova-mysql-router/0*       blocked   idle                192.168.221.19                     Failed to connect to MySQL
nova-compute/0*              active    idle       0        192.168.221.96                     Unit is ready
  ntp/0*                     active    idle                192.168.221.96  123/udp            chrony: Ready
  ovn-chassis/0*             active    idle                192.168.221.96                     Unit is ready
nova-compute/1               active    idle       1        192.168.221.91                     Unit is ready
  ntp/1                      active    idle                192.168.221.91  123/udp            chrony: Ready
  ovn-chassis/1              active    idle                192.168.221.91                     Unit is ready
nova-compute/2               active    idle       2        192.168.221.77                     Unit is ready
  ntp/2                      active    idle                192.168.221.77  123/udp            chrony: Ready
  ovn-chassis/2              active    idle                192.168.221.77                     Unit is ready
openstack-dashboard/0*       active    idle       1/lxd/4  192.168.221.21  80/tcp,443/tcp     Unit is ready
  dashboard-mysql-router/0*  blocked   idle                192.168.221.21                     Failed to connect to MySQL
ovn-central/0*               active    idle       0/lxd/5  192.168.221.17  6641/tcp,6642/tcp  Unit is ready (leader: ovnsb_db)
ovn-central/1                active    idle       1/lxd/5  192.168.221.13  6641/tcp,6642/tcp  Unit is ready (northd: active)
ovn-central/2                active    idle       2/lxd/3  192.168.221.29  6641/tcp,6642/tcp  Unit is ready (leader: ovnnb_db)
placement/0*                 active    idle       2/lxd/4  192.168.221.26  8778/tcp           Unit is ready
  placement-mysql-router/0*  blocked   idle                192.168.221.26                     Failed to connect to MySQL
rabbitmq-server/0*           active    idle       2/lxd/5  192.168.221.4   5672/tcp           Unit is ready
vault/0*                     error     idle       0/lxd/6  192.168.221.11  8200/tcp           hook failed: "start"
  vault-mysql-router/0*      blocked   idle                192.168.221.11                     Failed to connect to MySQL

Machine  State    DNS             Inst id              Series  AZ       Message
0        started  192.168.221.96  PowerEdge-9R0DH13    focal   default  Deployed
0/lxd/0  started  192.168.221.78  juju-191a8d-0-lxd-0  focal   default  Container started
0/lxd/1  started  192.168.221.14  juju-191a8d-0-lxd-1  focal   default  Container started
0/lxd/2  started  192.168.221.15  juju-191a8d-0-lxd-2  focal   default  Container started
0/lxd/3  started  192.168.221.99  juju-191a8d-0-lxd-3  focal   default  Container started
0/lxd/4  started  192.168.221.19  juju-191a8d-0-lxd-4  focal   default  Container started
0/lxd/5  started  192.168.221.17  juju-191a8d-0-lxd-5  focal   default  Container started
0/lxd/6  started  192.168.221.11  juju-191a8d-0-lxd-6  focal   default  Container started
1        started  192.168.221.91  PowerEdge-9R0FH13    focal   default  Deployed
1/lxd/0  started  192.168.221.22  juju-191a8d-1-lxd-0  focal   default  Container started
1/lxd/1  started  192.168.221.20  juju-191a8d-1-lxd-1  focal   default  Container started
1/lxd/2  started  192.168.221.5   juju-191a8d-1-lxd-2  focal   default  Container started
1/lxd/3  started  192.168.221.23  juju-191a8d-1-lxd-3  focal   default  Container started
1/lxd/4  started  192.168.221.21  juju-191a8d-1-lxd-4  focal   default  Container started
1/lxd/5  started  192.168.221.13  juju-191a8d-1-lxd-5  focal   default  Container started
2        started  192.168.221.77  PowerEdge-9R0CH13    focal   default  Deployed
2/lxd/0  started  192.168.221.12  juju-191a8d-2-lxd-0  focal   default  Container started
2/lxd/1  started  192.168.221.25  juju-191a8d-2-lxd-1  focal   default  Container started
2/lxd/2  started  192.168.221.8   juju-191a8d-2-lxd-2  focal   default  Container started
2/lxd/3  started  192.168.221.29  juju-191a8d-2-lxd-3  focal   default  Container started
2/lxd/4  started  192.168.221.26  juju-191a8d-2-lxd-4  focal   default  Container started
2/lxd/5  started  192.168.221.4   juju-191a8d-2-lxd-5  focal   default  Container started
3        started  192.168.221.7   NX3240-4XF2613       focal   default  Deployed
6        started  192.168.221.24  N4610-MJ05EA87       focal   default  Deployed

The root cause is most likely the cloud database (the mysql-innodb-cluster application) being unhealthy.

And how can I fix it?

The database’s charm has an action that you can try:

juju run-action --wait mysql-innodb-cluster/leader reboot-cluster-from-complete-outage

If this is not the correct unit to run the action against, the action output will direct you to the correct unit.

Once the action has been run on the correct unit, give the model some time to settle, then report back with another juju status output.
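For example, one simple way to keep an eye on the affected applications while the model settles (the interval is arbitrary):

watch -n 30 juju status mysql-innodb-cluster vault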


Thanks, now only Vault is still in error:

geoint@maas:~$ juju status
Model Controller Cloud/Region Version SLA Timestamp
openstack juju-controller maas.pcty.geoint/default 2.8.8 unsupported 09:25:27-05:00

App Version Status Scale Charm Store Channel Rev OS Message
ceph-mon 15.2.7 active 3 ceph-mon charmstore 53 ubuntu Unit is ready and clustered
ceph-osd 15.2.11 active 4 ceph-osd charmstore 308 ubuntu Unit is ready (1 OSD)
ceph-radosgw 15.2.7 active 1 ceph-radosgw charmstore 294 ubuntu Unit is ready
cinder 17.0.1 active 1 cinder charmstore 308 ubuntu Unit is ready
cinder-ceph 17.0.1 active 1 cinder-ceph charmstore 260 ubuntu Unit is ready
cinder-mysql-router 8.0.25 active 1 mysql-router charmstore 6 ubuntu Unit is ready
dashboard-mysql-router 8.0.25 active 1 mysql-router charmstore 6 ubuntu Unit is ready
glance 21.0.0 active 1 glance charmstore 303 ubuntu Unit is ready
glance-mysql-router 8.0.25 active 1 mysql-router charmstore 6 ubuntu Unit is ready
keystone 18.0.0 active 1 keystone charmstore 321 ubuntu Application Ready
keystone-mysql-router 8.0.25 active 1 mysql-router charmstore 6 ubuntu Unit is ready
mysql-innodb-cluster 8.0.25 active 3 mysql-innodb-cluster charmstore 5 ubuntu Unit is ready: Mode: R/W
neutron-api 17.0.0 active 1 neutron-api charmstore 292 ubuntu Unit is ready
neutron-api-plugin-ovn 17.0.0 active 1 neutron-api-plugin-ovn charmstore 4 ubuntu Unit is ready
neutron-mysql-router 8.0.25 active 1 mysql-router charmstore 6 ubuntu Unit is ready
nova-cloud-controller 22.0.1 active 1 nova-cloud-controller charmstore 352 ubuntu PO: Unit is ready
nova-compute 22.0.1 active 3 nova-compute charmstore 325 ubuntu Unit is ready
nova-mysql-router 8.0.25 active 1 mysql-router charmstore 6 ubuntu Unit is ready
ntp 3.5 active 3 ntp charmstore 44 ubuntu chrony: Ready
openstack-dashboard 18.6.1 active 1 openstack-dashboard charmstore 311 ubuntu Unit is ready
ovn-central 20.03.1 active 3 ovn-central charmstore 5 ubuntu Unit is ready (leader: ovnsb_db)
ovn-chassis 20.03.1 active 3 ovn-chassis charmstore 10 ubuntu Unit is ready
placement 4.0.0 active 1 placement charmstore 17 ubuntu Unit is ready
placement-mysql-router 8.0.25 active 1 mysql-router charmstore 6 ubuntu Unit is ready
rabbitmq-server 3.8.2 active 1 rabbitmq-server charmstore 108 ubuntu Unit is ready
vault 1.5.4 error 1 vault charmstore 44 ubuntu hook failed: "start"
vault-mysql-router 8.0.25 active 1 mysql-router charmstore 6 ubuntu Unit is ready

Unit Workload Agent Machine Public address Ports Message
ceph-mon/0 active idle 0/lxd/0 192.168.221.78 Unit is ready and clustered
ceph-mon/1 active idle 1/lxd/0 192.168.221.22 Unit is ready and clustered
ceph-mon/2* active idle 2/lxd/0 192.168.221.12 Unit is ready and clustered
ceph-osd/1 active idle 1 192.168.221.91 Unit is ready (1 OSD)
ceph-osd/2 active idle 2 192.168.221.77 Unit is ready (1 OSD)
ceph-osd/3 active idle 3 192.168.221.7 Unit is ready (2 OSD)
ceph-osd/5* active idle 6 192.168.221.24 Unit is ready (2 OSD)
ceph-radosgw/0* active idle 0/lxd/1 192.168.221.14 80/tcp Unit is ready
cinder/0* active idle 1/lxd/1 192.168.221.20 8776/tcp Unit is ready
cinder-ceph/0* active idle 192.168.221.20 Unit is ready
cinder-mysql-router/0* active idle 192.168.221.20 Unit is ready
glance/0* active idle 2/lxd/1 192.168.221.25 9292/tcp Unit is ready
glance-mysql-router/0* active idle 192.168.221.25 Unit is ready
keystone/0* active idle 0/lxd/2 192.168.221.15 5000/tcp Unit is ready
keystone-mysql-router/0* active idle 192.168.221.15 Unit is ready
mysql-innodb-cluster/0* active idle 0/lxd/3 192.168.221.99 Unit is ready: Mode: R/W
mysql-innodb-cluster/1 active idle 1/lxd/2 192.168.221.5 Unit is ready: Mode: R/O
mysql-innodb-cluster/2 active idle 2/lxd/2 192.168.221.8 Unit is ready: Mode: R/O
neutron-api/0* active idle 1/lxd/3 192.168.221.23 9696/tcp Unit is ready
neutron-api-plugin-ovn/0* active idle 192.168.221.23 Unit is ready
neutron-mysql-router/0* active idle 192.168.221.23 Unit is ready
nova-cloud-controller/0* active idle 0/lxd/4 192.168.221.19 8774/tcp,8775/tcp PO: Unit is ready
nova-mysql-router/0* active idle 192.168.221.19 Unit is ready
nova-compute/0* active idle 0 192.168.221.96 Unit is ready
ntp/0* active idle 192.168.221.96 123/udp chrony: Ready
ovn-chassis/0* active idle 192.168.221.96 Unit is ready
nova-compute/1 active idle 1 192.168.221.91 Unit is ready
ntp/1 active idle 192.168.221.91 123/udp chrony: Ready
ovn-chassis/1 active idle 192.168.221.91 Unit is ready
nova-compute/2 active idle 2 192.168.221.77 Unit is ready
ntp/2 active idle 192.168.221.77 123/udp chrony: Ready
ovn-chassis/2 active idle 192.168.221.77 Unit is ready
openstack-dashboard/0* active idle 1/lxd/4 192.168.221.21 80/tcp,443/tcp Unit is ready
dashboard-mysql-router/0* active idle 192.168.221.21 Unit is ready
ovn-central/0* active idle 0/lxd/5 192.168.221.17 6641/tcp,6642/tcp Unit is ready (leader: ovnsb_db)
ovn-central/1 active idle 1/lxd/5 192.168.221.13 6641/tcp,6642/tcp Unit is ready (northd: active)
ovn-central/2 active idle 2/lxd/3 192.168.221.29 6641/tcp,6642/tcp Unit is ready (leader: ovnnb_db)
placement/0* active idle 2/lxd/4 192.168.221.26 8778/tcp Unit is ready
placement-mysql-router/0* active idle 192.168.221.26 Unit is ready
rabbitmq-server/0* active idle 2/lxd/5 192.168.221.4 5672/tcp Unit is ready
vault/0* error idle 0/lxd/6 192.168.221.11 8200/tcp hook failed: "start"
vault-mysql-router/0* active idle 192.168.221.11 Unit is ready

Machine State DNS Inst id Series AZ Message
0 started 192.168.221.96 PowerEdge-9R0DH13 focal default Deployed
0/lxd/0 started 192.168.221.78 juju-191a8d-0-lxd-0 focal default Container started
0/lxd/1 started 192.168.221.14 juju-191a8d-0-lxd-1 focal default Container started
0/lxd/2 started 192.168.221.15 juju-191a8d-0-lxd-2 focal default Container started
0/lxd/3 started 192.168.221.99 juju-191a8d-0-lxd-3 focal default Container started
0/lxd/4 started 192.168.221.19 juju-191a8d-0-lxd-4 focal default Container started
0/lxd/5 started 192.168.221.17 juju-191a8d-0-lxd-5 focal default Container started
0/lxd/6 started 192.168.221.11 juju-191a8d-0-lxd-6 focal default Container started
1 started 192.168.221.91 PowerEdge-9R0FH13 focal default Deployed
1/lxd/0 started 192.168.221.22 juju-191a8d-1-lxd-0 focal default Container started
1/lxd/1 started 192.168.221.20 juju-191a8d-1-lxd-1 focal default Container started
1/lxd/2 started 192.168.221.5 juju-191a8d-1-lxd-2 focal default Container started
1/lxd/3 started 192.168.221.23 juju-191a8d-1-lxd-3 focal default Container started
1/lxd/4 started 192.168.221.21 juju-191a8d-1-lxd-4 focal default Container started
1/lxd/5 started 192.168.221.13 juju-191a8d-1-lxd-5 focal default Container started
2 started 192.168.221.77 PowerEdge-9R0CH13 focal default Deployed
2/lxd/0 started 192.168.221.12 juju-191a8d-2-lxd-0 focal default Container started
2/lxd/1 started 192.168.221.25 juju-191a8d-2-lxd-1 focal default Container started
2/lxd/2 started 192.168.221.8 juju-191a8d-2-lxd-2 focal default Container started
2/lxd/3 started 192.168.221.29 juju-191a8d-2-lxd-3 focal default Container started
2/lxd/4 started 192.168.221.26 juju-191a8d-2-lxd-4 focal default Container started
2/lxd/5 started 192.168.221.4 juju-191a8d-2-lxd-5 focal default Container started
3 started 192.168.221.7 NX3240-4XF2613 focal default Deployed
6 started 192.168.221.24 N4610-MJ05EA87 focal default Deployed


Now the only unit in error is vault:
vault/0* error idle 0/lxd/6 192.168.221.11 8200/tcp hook failed: "start"

Try restarting the vault/0 unit:

juju run-action --wait vault/0 restart

If this works, Vault will come up sealed; it will then need to be unsealed.

It did not work:

juju run-action --wait vault/0 restart
{}
ERROR action "restart" not defined on unit "vault/0"
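Side note: you can list the actions a charm actually provides, to check what is available before running one:

juju actions vault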

I accessed the vault unit with juju ssh and rebooted it. Vault is now running and sealed, but I am unable to unseal it.
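Roughly, the reboot amounted to:

juju ssh vault/0
sudo reboot

(Restarting just the vault service on the unit would presumably have the same effect, assuming the charm manages a systemd unit named vault: sudo systemctl restart vault.)

The unseal attempt looked like this: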

geoint@maas:~$ export VAULT_ADDR="https://192.168.221.11:8200"
geoint@maas:~$ vault operator unseal

The correct protocol was http, not https. I unsealed Vault and now have:

Key             Value
---             -----
Seal Type       shamir
Initialized     true
Sealed          false
Total Shares    5
Threshold       3
Version         1.5.4
Cluster Name    vault-cluster-f946276b
Cluster ID      50f8fb87-bb0f-afcf-e263-490fc4c6cbfc
HA Enabled      false
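For reference, what ended up working was roughly the following (the actual unseal key values are omitted):

export VAULT_ADDR="http://192.168.221.11:8200"
vault operator unseal    # repeated three times, once per key share, since the threshold is 3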

BUT juju status still shows:
vault/0* blocked idle 0/lxd/6 192.168.221.11 8200/tcp Unit is sealed

Do I also have to run the following?
export VAULT_TOKEN=
vault token create -ttl=100m
juju run-action --wait vault/leader authorize-charm token=
juju run-action --wait vault/leader generate-root-ca

There will be a delay of a few minutes before the status is updated. Are you OK now?
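If you do not want to wait for the next update-status run, one trick that is often used (assuming the charm ships an executable hooks/update-status, as reactive charms normally do, and that juju run executes relative to the charm directory) is to trigger it by hand:

juju run --unit vault/0 hooks/update-status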

See the cloud operation Unseal Vault for detailed information.
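You should not normally need the authorize-charm step again just because Vault was resealed. If the charm does ask for it, the sequence you listed is roughly right; with placeholder values it looks like this:

export VAULT_ADDR="http://192.168.221.11:8200"
export VAULT_TOKEN=<root token>
vault token create -ttl=10m
juju run-action --wait vault/leader authorize-charm token=<token returned by the previous command>

The generate-root-ca action is a separate, one-time step for having Vault issue certificates; it is not part of unsealing.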

Yes, thanks, it is up now :slight_smile:


I had the same problem, but now after running the command I got this response:

geoint@maas:~$ juju run-action --wait mysql-innodb-cluster/leader reboot-cluster-from-complete-outage
unit-mysql-innodb-cluster-0:
  UnitId: mysql-innodb-cluster/0
  id: "78"
  message: Reboot cluster from complete outage failed.
  results:
    output: |+
      Cannot set LC_ALL to locale en_US.UTF-8: No such file or directory
      Restoring the default cluster from complete outage...

      Traceback (most recent call last):
        File "<string>", line 2, in <module>
      RuntimeError: Dba.reboot_cluster_from_complete_outage: The MySQL instance '192.168.221.99:3306' belongs to an InnoDB Cluster and is reachable.

    traceback: |
      Traceback (most recent call last):
        File "/var/lib/juju/agents/unit-mysql-innodb-cluster-0/charm/actions/reboot-cluster-from-complete-outage", line 164, in reboot_cluster_from_complete_outage
          output = instance.reboot_cluster_from_complete_outage()
        File "/var/lib/juju/agents/unit-mysql-innodb-cluster-0/charm/lib/charm/openstack/mysql_innodb_cluster.py", line 798, in reboot_cluster_from_complete_outage
          raise e
        File "/var/lib/juju/agents/unit-mysql-innodb-cluster-0/charm/lib/charm/openstack/mysql_innodb_cluster.py", line 786, in reboot_cluster_from_complete_outage
          output = self.run_mysqlsh_script(_script).decode("UTF-8")
        File "/var/lib/juju/agents/unit-mysql-innodb-cluster-0/charm/lib/charm/openstack/mysql_innodb_cluster.py", line 1541, in run_mysqlsh_script
          return subprocess.check_output(cmd, stderr=subprocess.PIPE)
        File "/usr/lib/python3.8/subprocess.py", line 415, in check_output
          return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
        File "/usr/lib/python3.8/subprocess.py", line 516, in run
          raise CalledProcessError(retcode, process.args,
      subprocess.CalledProcessError: Command '['/snap/bin/mysqlsh', '--no-wizard', '--python', '-f', '/root/snap/mysql-shell/common/tmpdt83el4j.py']' returned non-zero exit status 1.
  status: failed
  timing:
    completed: 2021-07-07 20:47:15 +0000 UTC
    enqueued: 2021-07-07 20:47:13 +0000 UTC
    started: 2021-07-07 20:47:13 +0000 UTC

I had the same problem, but now after running the command I got this response:

Hi Mario, could you please elaborate a bit on what happened? (It seems everything was fine for 19 days and then you hit another problem.) What is "the same problem"?
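The error itself suggests the action may not apply in your case: it complains that the instance at 192.168.221.99:3306 is still reachable and part of a cluster, i.e. the cluster is not actually in a complete outage. Posting the current cluster state would also help; if I recall correctly the charm exposes a cluster-status action:

juju run-action --wait mysql-innodb-cluster/leader cluster-status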