mysql-innodb-cluster: recover broken cluster

Hey,

our mysql-innodb-cluster deployment seems totally broken. We only have one active unit left, which is okay in the sense that we can still use our OpenStack deployment, even though it has no failover. We tried the reboot-cluster-from-complete-outage action first of all, and then rejoining and add-instance, but we can't get all three units working together again. In our testing environment we had no problems at all, and we are really frustrated now… It would be extremely awesome if we could get some help.

We also tried adding a new unit and joining it to the pre-existing cluster, but that did not work either.
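
For reference, this is roughly how we invoked the actions (a sketch from memory; the addresses are placeholders and the exact parameter names can be checked with juju actions mysql-innodb-cluster):

$ juju run-action --wait mysql-innodb-cluster/0 reboot-cluster-from-complete-outage
$ juju run-action --wait mysql-innodb-cluster/leader add-instance address=<IP-of-broken-unit>
$ juju run-action --wait mysql-innodb-cluster/leader rejoin-instance address=<IP-of-broken-unit>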

Now we have one unit in an error state with

hook failed: "cluster-relation-departed"

and one in the blocked state with

 Cluster is inaccessible from this instance. Please check logs for details.

Unit in error state, /var/log/mysql/error.log:

2022-03-18T15:10:34.915404Z 0 [ERROR] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] Error on opening a connection to peer node 172.16.58.101:33061 when joining a group. My local port is: 33061.'
2022-03-18T15:10:34.915451Z 0 [ERROR] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] Error connecting to all peers. Member join failed. Local port: 33061'
2022-03-18T15:10:34.970383Z 0 [ERROR] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] The member was unable to join the group. Local port: 33061'
2022-03-18T15:10:38.532322Z 2 [ERROR] [MY-011640] [Repl] Plugin group_replication reported: 'Timeout on wait for view after joining group'
2022-03-18T15:10:38.532436Z 2 [ERROR] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] The member is already leaving or joining a group.'
2022-03-18T15:16:59.774179Z 4670 [System] [MY-013587] [Repl] Plugin group_replication reported: 'Plugin 'group_replication' is starting.'
2022-03-18T15:16:59.775731Z 4670 [Warning] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] Automatically adding IPv4 localhost address to the allowlist. It is mandatory that it is added.'
2022-03-18T15:16:59.775753Z 4670 [Warning] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] Automatically adding IPv6 localhost address to the allowlist. It is mandatory that it is added.'
2022-03-18T15:16:59.804045Z 4675 [System] [MY-010597] [Repl] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_applier' executed'. Previous state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''.
2022-03-18T15:16:59.820634Z 4677 [ERROR] [MY-010596] [Repl] Error reading relay log event for channel 'group_replication_applier': corrupted data in log event
2022-03-18T15:16:59.820680Z 4677 [ERROR] [MY-013121] [Repl] Slave SQL for channel 'group_replication_applier': Relay log read failure: Could not parse relay log event entry. The possible reasons are: the master's binary log is corrupted (you can check this by running 'mysqlbinlog' on the binary log), the slave's relay log is corrupted (you can check this by running 'mysqlbinlog' on the relay log), a network problem, the server was unable to fetch a keyring key required to open an encrypted relay log file, or a bug in the master's or slave's MySQL code. If you want to check the master's binary log or slave's relay log, you will be able to know their names by issuing 'SHOW SLAVE STATUS' on this slave. Error_code: MY-013121
2022-03-18T15:16:59.820703Z 4677 [ERROR] [MY-011451] [Repl] Plugin group_replication reported: 'The applier thread execution was aborted. Unable to process more transactions, this member will now leave the group.'
2022-03-18T15:16:59.820741Z 4675 [ERROR] [MY-011452] [Repl] Plugin group_replication reported: 'Fatal error during execution on the Applier process of Group Replication. The server will now leave the group.'
2022-03-18T15:16:59.820797Z 4675 [ERROR] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] The member is leaving a group without being on one.'
2022-03-18T15:16:59.820820Z 4675 [ERROR] [MY-011644] [Repl] Plugin group_replication reported: 'Unable to confirm whether the server has left the group or not. Check performance_schema.replication_group_members to check group membership information.'
2022-03-18T15:16:59.820834Z 4675 [ERROR] [MY-011712] [Repl] Plugin group_replication reported: 'The server was automatically set into read only mode after an error was detected.'
2022-03-18T15:16:59.820955Z 4677 [ERROR] [MY-010586] [Repl] Error running query, slave SQL thread aborted. Fix the problem, and restart the slave SQL thread with "SLAVE START". We stopped at log 'FIRST' position 0
2022-03-18T15:17:01.048055Z 0 [ERROR] [MY-011502] [Repl] Plugin group_replication reported: 'There was a previous plugin error while the member joined the group. The member will now exit the group.'
2022-03-18T15:17:01.048122Z 0 [System] [MY-011503] [Repl] Plugin group_replication reported: 'Group membership changed to 172.16.58.79:3306 on view 16476166099420298:2.'
2022-03-18T15:17:01.048160Z 0 [ERROR] [MY-011486] [Repl] Plugin group_replication reported: 'Message received while the plugin is not ready, message discarded.'
2022-03-18T15:17:04.126291Z 0 [System] [MY-011504] [Repl] Plugin group_replication reported: 'Group membership changed: This member has left the group.'
2022-03-18T15:33:29.182121Z 0 [System] [MY-013172] [Server] Received SHUTDOWN from user <via user signal>. Shutting down mysqld (Version: 8.0.28-0ubuntu0.20.04.3).
2022-03-18T15:33:31.712269Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.28-0ubuntu0.20.04.3)  (Ubuntu).
2022-03-18T15:38:08.335817Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.28-0ubuntu0.20.04.3) starting as process 456
2022-03-18T15:38:08.352745Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2022-03-18T15:38:08.684579Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
2022-03-18T15:38:08.883793Z 0 [System] [MY-013587] [Repl] Plugin group_replication reported: 'Plugin 'group_replication' is starting.'
2022-03-18T15:38:08.956712Z 0 [Warning] [MY-010068] [Server] CA certificate /etc/ssl/certs/ca-certificates.crt is self signed.
2022-03-18T15:38:08.956749Z 0 [System] [MY-013602] [Server] Channel mysql_main configured to support TLS. Encrypted connections are now supported for this channel.
2022-03-18T15:38:08.984758Z 0 [Warning] [MY-010604] [Repl] Neither --relay-log nor --relay-log-index were used; so replication may break when this MySQL server acts as a slave and has his hostname changed!! Please use '--relay-log=juju-3eaf1f-1-lxd-2-relay-bin' to avoid this problem.
2022-03-18T15:38:09.154363Z 0 [Warning] [MY-010818] [Server] Error reading GTIDs from relaylog: -1
2022-03-18T15:38:09.200784Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Bind-address: '::' port: 33060, socket: /var/run/mysqld/mysqlx.sock
2022-03-18T15:38:09.200883Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.28-0ubuntu0.20.04.3'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  (Ubuntu).
2022-03-18T15:38:09.205096Z 2 [Warning] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] Automatically adding IPv4 localhost address to the allowlist. It is mandatory that it is added.'
2022-03-18T15:38:09.205130Z 2 [Warning] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] Automatically adding IPv6 localhost address to the allowlist. It is mandatory that it is added.'
2022-03-18T15:38:09.235330Z 14 [System] [MY-010597] [Repl] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_applier' executed'. Previous state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''.
2022-03-18T15:38:10.266373Z 2 [System] [MY-011511] [Repl] Plugin group_replication reported: 'This server is working as secondary member with primary member address 172.16.58.45:3306.'
2022-03-18T15:38:11.267447Z 0 [System] [MY-013471] [Repl] Plugin group_replication reported: 'Distributed recovery will transfer data using: Incremental recovery from a group donor'
2022-03-18T15:38:11.267884Z 0 [System] [MY-011503] [Repl] Plugin group_replication reported: 'Group membership changed to 172.16.58.45:3306, 172.16.58.79:3306 on view 16476166099420298:4.'
2022-03-18T15:38:17.370366Z 16 [ERROR] [MY-010596] [Repl] Error reading relay log event for channel 'group_replication_applier': corrupted data in log event
2022-03-18T15:38:17.370430Z 16 [ERROR] [MY-013121] [Repl] Slave SQL for channel 'group_replication_applier': Relay log read failure: Could not parse relay log event entry. The possible reasons are: the master's binary log is corrupted (you can check this by running 'mysqlbinlog' on the binary log), the slave's relay log is corrupted (you can check this by running 'mysqlbinlog' on the relay log), a network problem, the server was unable to fetch a keyring key required to open an encrypted relay log file, or a bug in the master's or slave's MySQL code. If you want to check the master's binary log or slave's relay log, you will be able to know their names by issuing 'SHOW SLAVE STATUS' on this slave. Error_code: MY-013121
2022-03-18T15:38:17.370480Z 16 [ERROR] [MY-011451] [Repl] Plugin group_replication reported: 'The applier thread execution was aborted. Unable to process more transactions, this member will now leave the group.'
2022-03-18T15:38:17.370538Z 14 [ERROR] [MY-011452] [Repl] Plugin group_replication reported: 'Fatal error during execution on the Applier process of Group Replication. The server will now leave the group.'
2022-03-18T15:38:17.370645Z 14 [ERROR] [MY-011712] [Repl] Plugin group_replication reported: 'The server was automatically set into read only mode after an error was detected.'
2022-03-18T15:38:17.370688Z 16 [ERROR] [MY-010586] [Repl] Error running query, slave SQL thread aborted. Fix the problem, and restart the slave SQL thread with "SLAVE START". We stopped at log 'FIRST' position 0
2022-03-18T15:38:17.370837Z 56 [ERROR] [MY-011622] [Repl] Plugin group_replication reported: 'Unable to evaluate the group replication applier execution status. Group replication recovery will shutdown to avoid data corruption.'
2022-03-18T15:38:17.370908Z 56 [ERROR] [MY-011620] [Repl] Plugin group_replication reported: 'Fatal error during the incremental recovery process of Group Replication. The server will leave the group.'
2022-03-18T15:38:17.370950Z 56 [Warning] [MY-011645] [Repl] Plugin group_replication reported: 'Skipping leave operation: concurrent attempt to leave the group is on-going.'
2022-03-18T15:38:17.370965Z 56 [ERROR] [MY-011712] [Repl] Plugin group_replication reported: 'The server was automatically set into read only mode after an error was detected.'
2022-03-18T15:38:20.447266Z 0 [System] [MY-011504] [Repl] Plugin group_replication reported: 'Group membership changed: This member has left the group.'

/var/log/juju/unit-mysql-innodb-cluster-1.log:

2022-03-18 15:46:38 ERROR unit.mysql-innodb-cluster/1.juju-log server.go:327 cluster:6: Cluster is unavailable: Cannot set LC_ALL to locale en_US.UTF-8: No such file or directory
Traceback (most recent call last):
  File "<string>", line 2, in <module>
RuntimeError: Dba.get_cluster: Group replication does not seem to be active in instance '172.16.58.79:3306'


2022-03-18 15:46:38 INFO unit.mysql-innodb-cluster/1.juju-log server.go:327 cluster:6: Remove instance: 172.16.58.63.
2022-03-18 15:46:39 ERROR unit.mysql-innodb-cluster/1.juju-log server.go:327 cluster:6: Failed removing instance 172.16.58.63: Cannot set LC_ALL to locale en_US.UTF-8: No such file or directory
Traceback (most recent call last):
  File "<string>", line 2, in <module>
RuntimeError: Dba.get_cluster: Group replication does not seem to be active in instance '172.16.58.79:3306'


2022-03-18 15:46:39 ERROR unit.mysql-innodb-cluster/1.juju-log server.go:327 cluster:6: Hook error:
Traceback (most recent call last):
  File "/var/lib/juju/agents/unit-mysql-innodb-cluster-1/.venv/lib/python3.8/site-packages/charms/reactive/__init__.py", line 74, in main
    bus.dispatch(restricted=restricted_mode)
  File "/var/lib/juju/agents/unit-mysql-innodb-cluster-1/.venv/lib/python3.8/site-packages/charms/reactive/bus.py", line 390, in dispatch
    _invoke(other_handlers)
  File "/var/lib/juju/agents/unit-mysql-innodb-cluster-1/.venv/lib/python3.8/site-packages/charms/reactive/bus.py", line 359, in _invoke
    handler.invoke()
  File "/var/lib/juju/agents/unit-mysql-innodb-cluster-1/.venv/lib/python3.8/site-packages/charms/reactive/bus.py", line 181, in invoke
    self._action(*args)
  File "/var/lib/juju/agents/unit-mysql-innodb-cluster-1/charm/reactive/mysql_innodb_cluster_handlers.py", line 473, in scale_in
    instance.remove_instance(
  File "/var/lib/juju/agents/unit-mysql-innodb-cluster-1/charm/lib/charm/openstack/mysql_innodb_cluster.py", line 986, in remove_instance
    raise e
  File "/var/lib/juju/agents/unit-mysql-innodb-cluster-1/charm/lib/charm/openstack/mysql_innodb_cluster.py", line 962, in remove_instance
    output = self.run_mysqlsh_script(_script).decode("UTF-8")
  File "/var/lib/juju/agents/unit-mysql-innodb-cluster-1/charm/lib/charm/openstack/mysql_innodb_cluster.py", line 1659, in run_mysqlsh_script
    return subprocess.check_output(cmd, stderr=subprocess.PIPE)
  File "/usr/lib/python3.8/subprocess.py", line 415, in check_output
    return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
  File "/usr/lib/python3.8/subprocess.py", line 516, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['/snap/bin/mysqlsh', '--no-wizard', '--python', '-f', '/root/snap/mysql-shell/common/tmpe2wihrin.py']' returned non-zero exit status 1.

2022-03-18 15:46:39 WARNING unit.mysql-innodb-cluster/1.cluster-relation-departed logger.go:60 Traceback (most recent call last):
2022-03-18 15:46:39 WARNING unit.mysql-innodb-cluster/1.cluster-relation-departed logger.go:60   File "/var/lib/juju/agents/unit-mysql-innodb-cluster-1/charm/hooks/cluster-relation-departed", line 22, in <module>
2022-03-18 15:46:39 WARNING unit.mysql-innodb-cluster/1.cluster-relation-departed logger.go:60     main()
2022-03-18 15:46:39 WARNING unit.mysql-innodb-cluster/1.cluster-relation-departed logger.go:60   File "/var/lib/juju/agents/unit-mysql-innodb-cluster-1/.venv/lib/python3.8/site-packages/charms/reactive/__init__.py", line 74, in main
2022-03-18 15:46:39 WARNING unit.mysql-innodb-cluster/1.cluster-relation-departed logger.go:60     bus.dispatch(restricted=restricted_mode)
2022-03-18 15:46:39 WARNING unit.mysql-innodb-cluster/1.cluster-relation-departed logger.go:60   File "/var/lib/juju/agents/unit-mysql-innodb-cluster-1/.venv/lib/python3.8/site-packages/charms/reactive/bus.py", line 390, in dispatch
2022-03-18 15:46:39 WARNING unit.mysql-innodb-cluster/1.cluster-relation-departed logger.go:60     _invoke(other_handlers)
2022-03-18 15:46:39 WARNING unit.mysql-innodb-cluster/1.cluster-relation-departed logger.go:60   File "/var/lib/juju/agents/unit-mysql-innodb-cluster-1/.venv/lib/python3.8/site-packages/charms/reactive/bus.py", line 359, in _invoke
2022-03-18 15:46:39 WARNING unit.mysql-innodb-cluster/1.cluster-relation-departed logger.go:60     handler.invoke()
2022-03-18 15:46:39 WARNING unit.mysql-innodb-cluster/1.cluster-relation-departed logger.go:60   File "/var/lib/juju/agents/unit-mysql-innodb-cluster-1/.venv/lib/python3.8/site-packages/charms/reactive/bus.py", line 181, in invoke
2022-03-18 15:46:39 WARNING unit.mysql-innodb-cluster/1.cluster-relation-departed logger.go:60     self._action(*args)
2022-03-18 15:46:39 WARNING unit.mysql-innodb-cluster/1.cluster-relation-departed logger.go:60   File "/var/lib/juju/agents/unit-mysql-innodb-cluster-1/charm/reactive/mysql_innodb_cluster_handlers.py", line 473, in scale_in
2022-03-18 15:46:39 WARNING unit.mysql-innodb-cluster/1.cluster-relation-departed logger.go:60     instance.remove_instance(
2022-03-18 15:46:39 WARNING unit.mysql-innodb-cluster/1.cluster-relation-departed logger.go:60   File "/var/lib/juju/agents/unit-mysql-innodb-cluster-1/charm/lib/charm/openstack/mysql_innodb_cluster.py", line 986, in remove_instance
2022-03-18 15:46:39 WARNING unit.mysql-innodb-cluster/1.cluster-relation-departed logger.go:60     raise e
2022-03-18 15:46:39 WARNING unit.mysql-innodb-cluster/1.cluster-relation-departed logger.go:60   File "/var/lib/juju/agents/unit-mysql-innodb-cluster-1/charm/lib/charm/openstack/mysql_innodb_cluster.py", line 962, in remove_instance
2022-03-18 15:46:39 WARNING unit.mysql-innodb-cluster/1.cluster-relation-departed logger.go:60     output = self.run_mysqlsh_script(_script).decode("UTF-8")
2022-03-18 15:46:39 WARNING unit.mysql-innodb-cluster/1.cluster-relation-departed logger.go:60   File "/var/lib/juju/agents/unit-mysql-innodb-cluster-1/charm/lib/charm/openstack/mysql_innodb_cluster.py", line 1659, in run_mysqlsh_script
2022-03-18 15:46:39 WARNING unit.mysql-innodb-cluster/1.cluster-relation-departed logger.go:60     return subprocess.check_output(cmd, stderr=subprocess.PIPE)
2022-03-18 15:46:39 WARNING unit.mysql-innodb-cluster/1.cluster-relation-departed logger.go:60   File "/usr/lib/python3.8/subprocess.py", line 415, in check_output
2022-03-18 15:46:39 WARNING unit.mysql-innodb-cluster/1.cluster-relation-departed logger.go:60     return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
2022-03-18 15:46:39 WARNING unit.mysql-innodb-cluster/1.cluster-relation-departed logger.go:60   File "/usr/lib/python3.8/subprocess.py", line 516, in run
2022-03-18 15:46:39 WARNING unit.mysql-innodb-cluster/1.cluster-relation-departed logger.go:60     raise CalledProcessError(retcode, process.args,
2022-03-18 15:46:39 WARNING unit.mysql-innodb-cluster/1.cluster-relation-departed logger.go:60 subprocess.CalledProcessError: Command '['/snap/bin/mysqlsh', '--no-wizard', '--python', '-f', '/root/snap/mysql-shell/common/tmpe2wihrin.py']' returned non-zero exit status 1.
2022-03-18 15:46:39 ERROR juju.worker.uniter.operation runhook.go:146 hook "cluster-relation-departed" (via explicit, bespoke hook script) failed: exit status 1
2022-03-18 15:46:39 INFO juju.worker.uniter resolver.go:150 awaiting error resolution for "relation-departed" hook
2022-03-18 15:50:00 INFO juju.worker.uniter resolver.go:150 awaiting error resolution for "relation-departed" hook

Unit in blocked state, /var/log/mysql/error.log:

2022-03-18T15:13:02.071901Z 0 [ERROR] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] Error on opening a connection to peer node 172.16.58.79:33061 when joining a group. My local port is: 33061.'
2022-03-18T15:13:04.139256Z 0 [Warning] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] Reading a request from a remote XCom failed.'
2022-03-18T15:13:04.139458Z 0 [ERROR] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] Error connecting to all peers. Member join failed. Local port: 33061'
2022-03-18T15:13:04.200765Z 0 [ERROR] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] The member was unable to join the group. Local port: 33061'
2022-03-18T15:33:34.754358Z 0 [System] [MY-013172] [Server] Received SHUTDOWN from user <via user signal>. Shutting down mysqld (Version: 8.0.28-0ubuntu0.20.04.3).
2022-03-18T15:33:35.441600Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.28-0ubuntu0.20.04.3)  (Ubuntu).
2022-03-18T15:38:19.023783Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.28-0ubuntu0.20.04.3) starting as process 462
2022-03-18T15:38:19.040459Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2022-03-18T15:38:19.412465Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
2022-03-18T15:38:19.646417Z 0 [System] [MY-013587] [Repl] Plugin group_replication reported: 'Plugin 'group_replication' is starting.'
2022-03-18T15:38:19.750688Z 0 [Warning] [MY-010068] [Server] CA certificate /etc/ssl/certs/ca-certificates.crt is self signed.
2022-03-18T15:38:19.750744Z 0 [System] [MY-013602] [Server] Channel mysql_main configured to support TLS. Encrypted connections are now supported for this channel.
2022-03-18T15:38:19.780367Z 0 [Warning] [MY-010604] [Repl] Neither --relay-log nor --relay-log-index were used; so replication may break when this MySQL server acts as a slave and has his hostname changed!! Please use '--relay-log=juju-3eaf1f-2-lxd-1-relay-bin' to avoid this problem.
2022-03-18T15:38:20.100977Z 0 [Warning] [MY-010818] [Server] Error reading GTIDs from relaylog: -1
2022-03-18T15:38:20.143736Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.28-0ubuntu0.20.04.3'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  (Ubuntu).
2022-03-18T15:38:20.144025Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Bind-address: '::' port: 33060, socket: /var/run/mysqld/mysqlx.sock
2022-03-18T15:38:20.146809Z 2 [Warning] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] Automatically adding IPv4 localhost address to the allowlist. It is mandatory that it is added.'
2022-03-18T15:38:20.146843Z 2 [Warning] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] Automatically adding IPv6 localhost address to the allowlist. It is mandatory that it is added.'
2022-03-18T15:38:20.174038Z 11 [System] [MY-010597] [Repl] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_applier' executed'. Previous state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''.
2022-03-18T15:38:27.718171Z 13 [ERROR] [MY-010596] [Repl] Error reading relay log event for channel 'group_replication_applier': corrupted data in log event
2022-03-18T15:38:27.718252Z 13 [ERROR] [MY-013121] [Repl] Slave SQL for channel 'group_replication_applier': Relay log read failure: Could not parse relay log event entry. The possible reasons are: the master's binary log is corrupted (you can check this by running 'mysqlbinlog' on the binary log), the slave's relay log is corrupted (you can check this by running 'mysqlbinlog' on the relay log), a network problem, the server was unable to fetch a keyring key required to open an encrypted relay log file, or a bug in the master's or slave's MySQL code. If you want to check the master's binary log or slave's relay log, you will be able to know their names by issuing 'SHOW SLAVE STATUS' on this slave. Error_code: MY-013121
2022-03-18T15:38:27.718269Z 13 [ERROR] [MY-011451] [Repl] Plugin group_replication reported: 'The applier thread execution was aborted. Unable to process more transactions, this member will now leave the group.'
2022-03-18T15:38:27.718317Z 11 [ERROR] [MY-011452] [Repl] Plugin group_replication reported: 'Fatal error during execution on the Applier process of Group Replication. The server will now leave the group.'
2022-03-18T15:38:27.718402Z 11 [ERROR] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] The member is already leaving or joining a group.'
2022-03-18T15:38:27.718404Z 13 [ERROR] [MY-010586] [Repl] Error running query, slave SQL thread aborted. Fix the problem, and restart the slave SQL thread with "SLAVE START". We stopped at log 'FIRST' position 0
2022-03-18T15:38:27.718426Z 11 [ERROR] [MY-011644] [Repl] Plugin group_replication reported: 'Unable to confirm whether the server has left the group or not. Check performance_schema.replication_group_members to check group membership information.'
2022-03-18T15:38:27.718438Z 11 [ERROR] [MY-011712] [Repl] Plugin group_replication reported: 'The server was automatically set into read only mode after an error was detected.'
2022-03-18T15:38:50.293914Z 0 [ERROR] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] Timeout while waiting for the group communication engine to be ready!'
2022-03-18T15:38:50.293996Z 0 [ERROR] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] The group communication engine is not ready for the member to join. Local port: 33061'
2022-03-18T15:38:50.354051Z 0 [ERROR] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] The member was unable to join the group. Local port: 33061'
2022-03-18T15:38:55.405878Z 0 [ERROR] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] Error on opening a connection to peer node 172.16.58.79:33061 when joining a group. My local port is: 33061.'
2022-03-18T15:38:55.626226Z 0 [ERROR] [MY-011502] [Repl] Plugin group_replication reported: 'There was a previous plugin error while the member joined the group. The member will now exit the group.'
2022-03-18T15:38:55.626279Z 0 [System] [MY-011503] [Repl] Plugin group_replication reported: 'Group membership changed to 172.16.58.101:3306 on view 16476166099420298:6.'
2022-03-18T15:38:58.700144Z 0 [System] [MY-011504] [Repl] Plugin group_replication reported: 'Group membership changed: This member has left the group.'

Blocked unit, some error logs from /var/log/juju when trying to add the instance:

Traceback (most recent call last):
  File "<string>", line 2, in <module>
mysqlsh.Error: Shell Error (51314): Dba.get_cluster: This function is not available through a session to a standalone instance (metadata exists, instance belongs to that metadata, but GR is not active)

2022-03-18 15:44:07 INFO unit.mysql-innodb-cluster/2.juju-log server.go:327 coordinator.DelayedActionCoordinator Publishing state
2022-03-18 15:44:07 INFO juju.worker.uniter.operation runhook.go:152 ran "update-status" hook (via explicit, bespoke hook script)
2022-03-18 15:49:00 INFO unit.mysql-innodb-cluster/2.juju-log server.go:327 Reactive main running for hook update-status
2022-03-18 15:49:02 INFO unit.mysql-innodb-cluster/2.juju-log server.go:327 coordinator.DelayedActionCoordinator Loading state
2022-03-18 15:49:02 INFO unit.mysql-innodb-cluster/2.juju-log server.go:327 Coordinator: Initializing coordinator layer
2022-03-18 15:49:02 INFO unit.mysql-innodb-cluster/2.juju-log server.go:327 Initializing Snap Layer
2022-03-18 15:49:02 INFO unit.mysql-innodb-cluster/2.juju-log server.go:327 Initializing Leadership Layer (is follower)
2022-03-18 15:49:02 INFO unit.mysql-innodb-cluster/2.juju-log server.go:327 Invoking reactive handler: reactive/layer_openstack.py:64:default_update_status
2022-03-18 15:49:03 INFO unit.mysql-innodb-cluster/2.juju-log server.go:327 Invoking reactive handler: reactive/layer_openstack.py:82:check_really_is_update_status
2022-03-18 15:49:03 INFO unit.mysql-innodb-cluster/2.juju-log server.go:327 Invoking reactive handler: reactive/layer_openstack.py:93:run_default_update_status
2022-03-18 15:49:03 INFO unit.mysql-innodb-cluster/2.juju-log server.go:327 Invoking reactive handler: reactive/mysql_innodb_cluster_handlers.py:132:check_quorum
2022-03-18 15:49:03 INFO unit.mysql-innodb-cluster/2.juju-log server.go:327 Invoking reactive handler: reactive/mysql_innodb_cluster_handlers.py:234:signal_clustered
2022-03-18 15:49:03 INFO unit.mysql-innodb-cluster/2.juju-log server.go:327 Invoking reactive handler: reactive/mysql_innodb_cluster_handlers.py:376:request_certificates
2022-03-18 15:49:03 INFO unit.mysql-innodb-cluster/2.juju-log server.go:327 Invoking reactive handler: hooks/relations/mysql-innodb-cluster/peers.py:69:joined:cluster
2022-03-18 15:49:03 INFO unit.mysql-innodb-cluster/2.juju-log server.go:327 Invoking reactive handler: hooks/relations/mysql-innodb-cluster/peers.py:75:changed:cluster
2022-03-18 15:49:03 INFO unit.mysql-innodb-cluster/2.juju-log server.go:327 Invoking reactive handler: hooks/relations/mysql-router/provides.py:47:joined:db-router
2022-03-18 15:49:03 INFO unit.mysql-innodb-cluster/2.juju-log server.go:327 Invoking reactive handler: hooks/relations/tls-certificates/requires.py:79:joined:certificates
2022-03-18 15:49:33 ERROR unit.mysql-innodb-cluster/2.juju-log server.go:327 Cluster is unavailable: Cannot set LC_ALL to locale en_US.UTF-8: No such file or directory
Traceback (most recent call last):
  File "<string>", line 2, in <module>
mysqlsh.Error: Shell Error (51314): Dba.get_cluster: This function is not available through a session to a standalone instance (metadata exists, instance belongs to that metadata, but GR is not active)

2022-03-18 15:49:33 INFO unit.mysql-innodb-cluster/2.juju-log server.go:327 coordinator.DelayedActionCoordinator Publishing state
2022-03-18 15:49:34 INFO juju.worker.uniter.operation runhook.go:152 ran "update-status" hook (via explicit, bespoke hook script)
2022-03-18 15:54:13 INFO unit.mysql-innodb-cluster/2.juju-log server.go:327 Reactive main running for hook update-status
2022-03-18 15:54:15 INFO unit.mysql-innodb-cluster/2.juju-log server.go:327 coordinator.DelayedActionCoordinator Loading state
2022-03-18 15:54:15 INFO unit.mysql-innodb-cluster/2.juju-log server.go:327 Coordinator: Initializing coordinator layer
2022-03-18 15:54:15 INFO unit.mysql-innodb-cluster/2.juju-log server.go:327 Initializing Snap Layer
2022-03-18 15:54:15 INFO unit.mysql-innodb-cluster/2.juju-log server.go:327 Initializing Leadership Layer (is follower)
2022-03-18 15:54:15 INFO unit.mysql-innodb-cluster/2.juju-log server.go:327 Invoking reactive handler: reactive/layer_openstack.py:64:default_update_status
2022-03-18 15:54:15 INFO unit.mysql-innodb-cluster/2.juju-log server.go:327 Invoking reactive handler: reactive/layer_openstack.py:82:check_really_is_update_status
2022-03-18 15:54:15 INFO unit.mysql-innodb-cluster/2.juju-log server.go:327 Invoking reactive handler: reactive/layer_openstack.py:93:run_default_update_status
2022-03-18 15:54:15 INFO unit.mysql-innodb-cluster/2.juju-log server.go:327 Invoking reactive handler: reactive/mysql_innodb_cluster_handlers.py:132:check_quorum
2022-03-18 15:54:15 INFO unit.mysql-innodb-cluster/2.juju-log server.go:327 Invoking reactive handler: reactive/mysql_innodb_cluster_handlers.py:234:signal_clustered
2022-03-18 15:54:15 INFO unit.mysql-innodb-cluster/2.juju-log server.go:327 Invoking reactive handler: reactive/mysql_innodb_cluster_handlers.py:376:request_certificates
2022-03-18 15:54:15 INFO unit.mysql-innodb-cluster/2.juju-log server.go:327 Invoking reactive handler: hooks/relations/mysql-innodb-cluster/peers.py:69:joined:cluster
2022-03-18 15:54:15 INFO unit.mysql-innodb-cluster/2.juju-log server.go:327 Invoking reactive handler: hooks/relations/mysql-innodb-cluster/peers.py:75:changed:cluster
2022-03-18 15:54:15 INFO unit.mysql-innodb-cluster/2.juju-log server.go:327 Invoking reactive handler: hooks/relations/mysql-router/provides.py:47:joined:db-router
2022-03-18 15:54:16 INFO unit.mysql-innodb-cluster/2.juju-log server.go:327 Invoking reactive handler: hooks/relations/tls-certificates/requires.py:79:joined:certificates
2022-03-18 15:54:46 ERROR unit.mysql-innodb-cluster/2.juju-log server.go:327 Cluster is unavailable: Cannot set LC_ALL to locale en_US.UTF-8: No such file or directory
Traceback (most recent call last):
  File "<string>", line 2, in <module>
mysqlsh.Error: Shell Error (51314): Dba.get_cluster: This function is not available through a session to a standalone instance (metadata exists, instance belongs to that metadata, but GR is not active)

2022-03-18 15:54:46 INFO unit.mysql-innodb-cluster/2.juju-log server.go:327 coordinator.DelayedActionCoordinator Publishing state
2022-03-18 15:54:46 INFO juju.worker.uniter.operation runhook.go:152 ran "update-status" hook (via explicit, bespoke hook script)
2022-03-18 15:58:42 INFO unit.mysql-innodb-cluster/2.juju-log server.go:327 Reactive main running for hook update-status
2022-03-18 15:58:44 INFO unit.mysql-innodb-cluster/2.juju-log server.go:327 coordinator.DelayedActionCoordinator Loading state
2022-03-18 15:58:44 INFO unit.mysql-innodb-cluster/2.juju-log server.go:327 Coordinator: Initializing coordinator layer
2022-03-18 15:58:44 INFO unit.mysql-innodb-cluster/2.juju-log server.go:327 Initializing Snap Layer
2022-03-18 15:58:44 INFO unit.mysql-innodb-cluster/2.juju-log server.go:327 Initializing Leadership Layer (is follower)
2022-03-18 15:58:44 INFO unit.mysql-innodb-cluster/2.juju-log server.go:327 Invoking reactive handler: reactive/layer_openstack.py:64:default_update_status
2022-03-18 15:58:44 INFO unit.mysql-innodb-cluster/2.juju-log server.go:327 Invoking reactive handler: reactive/layer_openstack.py:82:check_really_is_update_status
2022-03-18 15:58:44 INFO unit.mysql-innodb-cluster/2.juju-log server.go:327 Invoking reactive handler: reactive/layer_openstack.py:93:run_default_update_status
2022-03-18 15:58:45 INFO unit.mysql-innodb-cluster/2.juju-log server.go:327 Invoking reactive handler: reactive/mysql_innodb_cluster_handlers.py:132:check_quorum
2022-03-18 15:58:45 INFO unit.mysql-innodb-cluster/2.juju-log server.go:327 Invoking reactive handler: reactive/mysql_innodb_cluster_handlers.py:234:signal_clustered
2022-03-18 15:58:45 INFO unit.mysql-innodb-cluster/2.juju-log server.go:327 Invoking reactive handler: reactive/mysql_innodb_cluster_handlers.py:376:request_certificates
2022-03-18 15:58:45 INFO unit.mysql-innodb-cluster/2.juju-log server.go:327 Invoking reactive handler: hooks/relations/mysql-innodb-cluster/peers.py:69:joined:cluster
2022-03-18 15:58:45 INFO unit.mysql-innodb-cluster/2.juju-log server.go:327 Invoking reactive handler: hooks/relations/mysql-innodb-cluster/peers.py:75:changed:cluster
2022-03-18 15:58:45 INFO unit.mysql-innodb-cluster/2.juju-log server.go:327 Invoking reactive handler: hooks/relations/mysql-router/provides.py:47:joined:db-router
2022-03-18 15:58:45 INFO unit.mysql-innodb-cluster/2.juju-log server.go:327 Invoking reactive handler: hooks/relations/tls-certificates/requires.py:79:joined:certificates
2022-03-18 15:59:15 ERROR unit.mysql-innodb-cluster/2.juju-log server.go:327 Cluster is unavailable: Cannot set LC_ALL to locale en_US.UTF-8: No such file or directory
Traceback (most recent call last):
  File "<string>", line 2, in <module>
mysqlsh.Error: Shell Error (51314): Dba.get_cluster: This function is not available through a session to a standalone instance (metadata exists, instance belongs to that metadata, but GR is not active)

2022-03-18 15:59:15 INFO unit.mysql-innodb-cluster/2.juju-log server.go:327 coordinator.DelayedActionCoordinator Publishing state
2022-03-18 15:59:16 INFO juju.worker.uniter.operation runhook.go:152 ran "update-status" hook (via explicit, bespoke hook script)

If you need any other logs, I will provide them. Also, if you know any best-practice advice for this case, please let me know.

Did you solve your problem? I have a similar one.

I am experiencing this as well. Has anyone figured out how to correct it? My issue has to do with a controller whose PSU died; now that the PSU has been replaced, the other two won't accept the server back.

I experienced this, or something similar, last month in our OpenStack test cluster, and I managed to fix it. Be warned, I'm new to MySQL InnoDB clustering, so it may well be possible to get back to a working system much faster and more cleanly than this.

Cause of the problem: I rebooted all three mysql-innodb-cluster units within a very short time frame, but not at exactly the same time. My setup:

  • mysql-innodb-cluster/3 (172.23.1.81) on lxd:0 (was leader and primary)
  • mysql-innodb-cluster/4 (172.23.2.84) on lxd:1
  • mysql-innodb-cluster/5 (172.23.3.134) on lxd:2

First observation: all units have this problem when checking quorum:

unit-mysql-innodb-cluster-4: 11:34:57 INFO unit.mysql-innodb-cluster/4.juju-log Invoking reactive handler: reactive/mysql_innodb_cluster_handlers.py:132:check_quorum
unit-mysql-innodb-cluster-4: 11:34:57 DEBUG unit.mysql-innodb-cluster/4.juju-log Found peers: 172.23.1.81,172.23.3.134
unit-mysql-innodb-cluster-4: 11:34:57 DEBUG unit.mysql-innodb-cluster/4.juju-log Expect 2 peers
unit-mysql-innodb-cluster-4: 11:35:30 ERROR unit.mysql-innodb-cluster/4.juju-log Cluster is unavailable: Cannot set LC_ALL to locale en_US.UTF-8: No such file or directory
Traceback (most recent call last):
  File "<string>", line 2, in <module>
RuntimeError: Dba.get_cluster: Group replication does not seem to be active in instance '172.23.2.84:3306'

Looking back in /var/log/mysql.log of mysql-innodb-cluster/3, it looks as if it was primary, then dropped the other two cluster members from the group, and finally left the group itself:

2022-09-06T12:14:24.819920Z 0 [System] [MY-011507] [Repl] Plugin group_replication reported: 'A new primary with address 172.23.1.81:3306 was elected. The new primary will execute all previous group transactions before allowing writes.'
2022-09-06T12:14:46.465891Z 0 [System] [MY-011503] [Repl] Plugin group_replication reported: 'Group membership changed to 172.23.2.84:3306, 172.23.1.81:3306 on view 16584085750250418:21.'
2022-09-06T12:15:35.313441Z 0 [Warning] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] Shutting down an outgoing connection. This happens because something might be wrong on a bi-directional connection to node 172.23.3.134:33061. Please check the connection status to this member'
2022-09-06T12:15:37.313714Z 0 [Warning] [MY-011493] [Repl] Plugin group_replication reported: 'Member with address 172.23.2.84:3306 has become unreachable.'
2022-09-06T12:15:37.313811Z 0 [ERROR] [MY-011495] [Repl] Plugin group_replication reported: 'This server is not able to reach a majority of members in the group. This server will now block all updates. The server will remain blocked until contact with the majority is restored. It is possible to use group_replication_force_members to force a new group membership.'
2022-09-06T12:15:47.562098Z 0 [Warning] [MY-011494] [Repl] Plugin group_replication reported: 'Member with address 172.23.2.84:3306 is reachable again.'
2022-09-06T12:15:47.562205Z 0 [Warning] [MY-011498] [Repl] Plugin group_replication reported: 'The member has resumed contact with a majority of the members in the group. Regular operation is restored and transactions are unblocked.'
2022-09-06T12:15:48.040472Z 0 [System] [MY-011503] [Repl] Plugin group_replication reported: 'Group membership changed to 172.23.2.84:3306, 172.23.3.134:3306, 172.23.1.81:3306 on view 16584085750250418:22.'
2022-09-06T12:33:49.085385Z 0 [Warning] [MY-011499] [Repl] Plugin group_replication reported: 'Members removed from the group: 172.23.2.84:3306'
2022-09-06T12:33:49.085575Z 0 [System] [MY-011503] [Repl] Plugin group_replication reported: 'Group membership changed to 172.23.3.134:3306, 172.23.1.81:3306 on view 16584085750250418:23.'
2022-09-06T12:34:50.059329Z 0 [Warning] [MY-011499] [Repl] Plugin group_replication reported: 'Members removed from the group: 172.23.3.134:3306'
2022-09-06T12:34:50.059461Z 0 [System] [MY-011503] [Repl] Plugin group_replication reported: 'Group membership changed to 172.23.1.81:3306 on view 16584085750250418:24.'

At this point mysql-innodb-cluster/3 seems to be alone in the MySQL cluster. Some hours later, another error occurred which made this unit leave the cluster as well:

2022-09-06T12:34:50.059461Z 0 [System] [MY-011503] [Repl] Plugin group_replication reported: 'Group membership changed to 172.23.1.81:3306 on view 16584085750250418:24.'
2022-09-06T15:14:46.467266Z 11 [ERROR] [MY-011708] [Repl] Plugin group_replication reported: 'There was an error when trying to access the server with user: mysql.session. Make sure the user is present in the server and that the MySQL upgrade procedure was run correctly.'
2022-09-06T15:14:46.467384Z 11 [ERROR] [MY-011564] [Repl] Plugin group_replication reported: 'Failed to establish an internal server connection to execute plugin operations'
2022-09-06T15:14:46.468891Z 11 [ERROR] [MY-011560] [Repl] Plugin group_replication reported: 'Error when contacting the server to ensure the proper logging of a group change in the binlog'
2022-09-06T15:14:46.468934Z 11 [ERROR] [MY-011445] [Repl] Plugin group_replication reported: 'Error at event handling! Got error: 1.'
2022-09-06T15:14:46.469030Z 11 [ERROR] [MY-011452] [Repl] Plugin group_replication reported: 'Fatal error during execution on the Applier process of Group Replication. The server will now leave the group.'
2022-09-06T15:14:46.470448Z 11 [ERROR] [MY-011712] [Repl] Plugin group_replication reported: 'The server was automatically set into read only mode after an error was detected.'
2022-09-06T15:14:46.486571Z 11 [System] [MY-011565] [Repl] Plugin group_replication reported: 'Setting super_read_only=ON.'
2022-09-06T15:14:49.650027Z 0 [System] [MY-011504] [Repl] Plugin group_replication reported: 'Group membership changed: This member has left the group.'

First I tried to run some charm actions to fix the situation, but this did not help. I’m posting the results here just to share the error messages and the state of things:

Trying action reboot-cluster-from-complete-outage:

juju run-action --wait mysql-innodb-cluster/3 reboot-cluster-from-complete-outage
unit-mysql-innodb-cluster-3:
  UnitId: mysql-innodb-cluster/3
  id: "2400"
  message: Reboot cluster from complete outage failed.
  results:
    output: |+
      Cannot set LC_ALL to locale en_US.UTF-8: No such file or directory
      Restoring the default cluster from complete outage...
 
      Traceback (most recent call last):
        File "<string>", line 2, in <module>
      RuntimeError: Dba.reboot_cluster_from_complete_outage: Target member is in state ERROR
 
    traceback: |
      Traceback (most recent call last):
        File "/var/lib/juju/agents/unit-mysql-innodb-cluster-3/charm/actions/reboot-cluster-from-complete-outage", line 164, in reboot_cluster_from_complete_outage
          output = instance.reboot_cluster_from_complete_outage()
        File "/var/lib/juju/agents/unit-mysql-innodb-cluster-3/charm/lib/charm/openstack/mysql_innodb_cluster.py", line 947, in reboot_cluster_from_complete_outage
          raise e
        File "/var/lib/juju/agents/unit-mysql-innodb-cluster-3/charm/lib/charm/openstack/mysql_innodb_cluster.py", line 935, in reboot_cluster_from_complete_outage
          output = self.run_mysqlsh_script(_script).decode("UTF-8")
        File "/var/lib/juju/agents/unit-mysql-innodb-cluster-3/charm/lib/charm/openstack/mysql_innodb_cluster.py", line 1703, in run_mysqlsh_script
          return subprocess.check_output(cmd, stderr=subprocess.PIPE)
        File "/usr/lib/python3.8/subprocess.py", line 415, in check_output
          return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
        File "/usr/lib/python3.8/subprocess.py", line 516, in run
          raise CalledProcessError(retcode, process.args,
      subprocess.CalledProcessError: Command '['/snap/bin/mysqlsh', '--no-wizard', '--python', '-f', '/root/snap/mysql-shell/common/tmp4co5bnv3.py']' returned non-zero exit status 1.
  status: failed
  timing:
    completed: 2022-09-07 09:47:33 +0000 UTC
    enqueued: 2022-09-07 09:47:30 +0000 UTC
    started: 2022-09-07 09:47:30 +0000 UTC

I now tested the cluster-status and cluster-rescan actions, which made it clear that no cluster exists; I only have standalone instances (three of them, in fact):


$ juju run-action mysql-innodb-cluster/3 --wait cluster-status
unit-mysql-innodb-cluster-3:
  UnitId: mysql-innodb-cluster/3
  id: "2418"
  results:
    cluster-status: "null"
  status: completed
  timing:
    completed: 2022-09-07 11:47:26 +0000 UTC
    enqueued: 2022-09-07 11:46:52 +0000 UTC
    started: 2022-09-07 11:46:53 +0000 UTC
 
 
$ juju run-action mysql-innodb-cluster/3 --wait cluster-rescan
unit-mysql-innodb-cluster-3:
  UnitId: mysql-innodb-cluster/3
  id: "2420"
  message: Cluster rescan failed.
  results:
    output: |
      Cannot set LC_ALL to locale en_US.UTF-8: No such file or directory
      Traceback (most recent call last):
        File "<string>", line 2, in <module>
      mysqlsh.Error: Shell Error (51314): Dba.get_cluster: This function is not available through a session to a standalone instance (metadata exists, instance belongs to that metadata, but GR is not active)
    traceback: |
      Traceback (most recent call last):
        File "/var/lib/juju/agents/unit-mysql-innodb-cluster-3/charm/actions/cluster-rescan", line 196, in cluster_rescan
          output = instance.cluster_rescan()
        File "/var/lib/juju/agents/unit-mysql-innodb-cluster-3/charm/lib/charm/openstack/mysql_innodb_cluster.py", line 1062, in cluster_rescan
          raise e
        File "/var/lib/juju/agents/unit-mysql-innodb-cluster-3/charm/lib/charm/openstack/mysql_innodb_cluster.py", line 1052, in cluster_rescan
          output = self.run_mysqlsh_script(_script).decode("UTF-8")
        File "/var/lib/juju/agents/unit-mysql-innodb-cluster-3/charm/lib/charm/openstack/mysql_innodb_cluster.py", line 1703, in run_mysqlsh_script
          return subprocess.check_output(cmd, stderr=subprocess.PIPE)
        File "/usr/lib/python3.8/subprocess.py", line 415, in check_output
          return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
        File "/usr/lib/python3.8/subprocess.py", line 516, in run
          raise CalledProcessError(retcode, process.args,
      subprocess.CalledProcessError: Command '['/snap/bin/mysqlsh', '--no-wizard', '--python', '-f', '/root/snap/mysql-shell/common/tmpnlz68w24.py']' returned non-zero exit status 1.
  status: failed
  timing:
    completed: 2022-09-07 11:48:17 +0000 UTC
    enqueued: 2022-09-07 11:47:40 +0000 UTC
    started: 2022-09-07 11:47:41 +0000 UTC

As said, the unit mysql-innodb-cluster/3 was the last one in the cluster. I logged in to it and used the mysql client to check the state of things:

mysql> select instance_name, mysql_server_uuid, addresses from  mysql_innodb_cluster_metadata.instances;
+-------------------+--------------------------------------+--------------------------------------------------------------------------------------------------------+
| instance_name     | mysql_server_uuid                    | addresses                                                                                              |
+-------------------+--------------------------------------+--------------------------------------------------------------------------------------------------------+
| 172.23.2.84:3306  | 96269cc3-88ca-11ec-9910-00163e4abb10 | {"mysqlX": "172.23.2.84:33060", "grLocal": "172.23.2.84:33061", "mysqlClassic": "172.23.2.84:3306"}    |
| 172.23.1.81:3306  | 9fa5b7e7-88ca-11ec-be9a-00163e1e0c51 | {"mysqlX": "172.23.1.81:33060", "grLocal": "172.23.1.81:33061", "mysqlClassic": "172.23.1.81:3306"}    |
| 172.23.3.134:3306 | 9c311a9c-88ca-11ec-803a-00163e9814e9 | {"mysqlX": "172.23.3.134:33060", "grLocal": "172.23.3.134:33061", "mysqlClassic": "172.23.3.134:3306"} |
+-------------------+--------------------------------------+--------------------------------------------------------------------------------------------------------+
3 rows in set (0.00 sec)
 
mysql> select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+----------------------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION | MEMBER_COMMUNICATION_STACK |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+----------------------------+
| group_replication_applier | 9fa5b7e7-88ca-11ec-be9a-00163e1e0c51 | 172.23.1.81 |        3306 | OFFLINE      |             |                | XCom                       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+----------------------------+

On all three units, I checked the GTIDs (Global Transaction IDentifiers) to reassure myself that mysql-innodb-cluster/3 (172.23.1.81) actually has the most recent content in the database, which I believe this output shows:

### On mysql-innodb-cluster/3: ###
mysql> SHOW VARIABLES LIKE 'gtid_executed';
+---------------+-------------------------------------------------------------------------------------------------------------------------------------+
| Variable_name | Value                                                                                                                               |
+---------------+-------------------------------------------------------------------------------------------------------------------------------------+
| gtid_executed | 96269cc3-88ca-11ec-9910-00163e4abb10:1-16,
b7cc7594-88ca-11ec-b0c1-00163e4abb10:1-152428931:153428857-153429202:154428857-154428987 |
+---------------+-------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.01 sec)
 
### On mysql-innodb-cluster/4: ###
mysql> SHOW VARIABLES LIKE 'gtid_executed';
+---------------+---------------------------------------------------------------------------------------------+
| Variable_name | Value                                                                                       |
+---------------+---------------------------------------------------------------------------------------------+
| gtid_executed | 96269cc3-88ca-11ec-9910-00163e4abb10:1-16,
b7cc7594-88ca-11ec-b0c1-00163e4abb10:1-152428931 |
+---------------+---------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)
 
### On mysql-innodb-cluster/5: ###
mysql> SHOW VARIABLES LIKE 'gtid_executed';
+---------------+-----------------------------------------------------------------------------------------------------------------+
| Variable_name | Value                                                                                                           |
+---------------+-----------------------------------------------------------------------------------------------------------------+
| gtid_executed | 96269cc3-88ca-11ec-9910-00163e4abb10:1-16,
b7cc7594-88ca-11ec-b0c1-00163e4abb10:1-152428931:153428857-153429202 |
+---------------+-----------------------------------------------------------------------------------------------------------------+
1 row in set (0.01 sec)
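
A more direct way to compare these sets (a sketch; not something I ran at the time) is GTID_SUBSET(), which returns 1 if its first argument is fully contained in the second. For example, on mysql-innodb-cluster/3, with the set reported by /4 above:

mysql> SELECT GTID_SUBSET('96269cc3-88ca-11ec-9910-00163e4abb10:1-16,b7cc7594-88ca-11ec-b0c1-00163e4abb10:1-152428931', @@GLOBAL.gtid_executed) AS unit4_contained_in_unit3;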

At this point (as said, being a MySQL InnoDB Cluster newbie) I took a mysqldump and decided to throw away the units mysql-innodb-cluster/4 and /5 and keep only the first one.
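
The dump itself was just a safety net; something along these lines should do (a generic consistent full dump, not the exact command I used, run with the root password obtained as shown further below):

$ mysqldump --all-databases --single-transaction --triggers --routines --events \
    -u root -p > /root/all-databases-backup.sql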

From here on I will use mysqlsh in JavaScript mode a lot; this needs to be downloaded from oracle.com.

To get the root password: juju run --unit mysql-innodb-cluster/leader leader-get mysql.passwd

Note below: when trying dba.rebootClusterFromCompleteOutage(), I agreed to remove the other nodes from the cluster metadata:

root@juju-2281bf-0-lxd-15:~# mysqlsh --socket
Please provide the password for 'root@localhost': ***********************************
Save password for 'root@localhost'? [Y]es/[N]o/Ne[v]er (default No): Yes
MySQL Shell 8.0.30

 MySQL  localhost  JS > var cluster = dba.getCluster();
NOTE: The installed metadata version 2.0.0 is lower than the version required by Shell which is version 2.1.0. It is recommended to upgrade the metadata. See \? dba.upgradeMetadata for additional details.
Dba.getCluster: This function is not available through a session to a standalone instance (metadata exists, instance belongs to that metadata, but GR is not active) (MYSQLSH 51314)

 MySQL  localhost:33060+ ssl  JS > var cluster = dba.rebootClusterFromCompleteOutage();
NOTE: The installed metadata version 2.0.0 is lower than the version required by Shell which is version 2.1.0. It is recommended to upgrade the metadata. See \? dba.upgradeMetadata for additional details.
Restoring the cluster 'jujuCluster' from complete outage...
 
ERROR: The administrative account credentials for 172.23.2.84:3306 do not match the cluster's administrative account. The cluster administrative account user name and password must be the same on all instances that belong to it.
ERROR: The administrative account credentials for 172.23.3.134:3306 do not match the cluster's administrative account. The cluster administrative account user name and password must be the same on all instances that belong to it.
Could not open a connection to '172.23.2.84:3306': 'MySQL Error 1045: Could not open connection to '172.23.2.84:3306': Access denied for user 'root'@'172.23.1.81' (using password: YES)'
Would you like to remove it from the cluster's metadata? [y/N]: y
 
Could not open a connection to '172.23.3.134:3306': 'MySQL Error 1045: Could not open connection to '172.23.3.134:3306': Access denied for user 'root'@'172.23.1.81' (using password: YES)'
Would you like to remove it from the cluster's metadata? [y/N]: y
 
Validating instance configuration at /var%2Frun%2Fmysqld%2Fmysqld.sock...
 
ERROR: New account(s) with proper source address specification to allow remote connection from all instances must be created to manage the cluster.
Dba.rebootClusterFromCompleteOutage: User 'root' can only connect from 'localhost'. (RuntimeError)

Note that nothing above was successful. Actually, the metadata deletion also didn’t work.

Nevertheless, I now also removed the two units:

$ juju remove-unit mysql-innodb-cluster/4
$ juju remove-unit mysql-innodb-cluster/5

On the only remaining unit, mysql-innodb-cluster/3, I now disabled read-only mode from the mysql client:

mysql> SET GLOBAL read_only = 0;
mysql> UNLOCK TABLES;
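
If writes are still refused after this, it may be worth checking super_read_only as well, since the error log above shows the plugin "Setting super_read_only=ON". A sketch (not part of my original session):

mysql> SELECT @@GLOBAL.read_only, @@GLOBAL.super_read_only;
mysql> SET GLOBAL super_read_only = 0;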

Now, when checking juju debug-log, we have tons of these messages:

unit-mysql-innodb-cluster-3: 15:55:52 WARNING unit.mysql-innodb-cluster/3.juju-log Cannot determine the cluster primary RW node for writes.
unit-mysql-innodb-cluster-3: 15:55:52 WARNING unit.mysql-innodb-cluster/3.juju-log No connection to the cluster primary RW node skipping DB creation.
unit-mysql-innodb-cluster-3: 15:55:52 DEBUG unit.mysql-innodb-cluster/3.juju-log Proactively resetting ssl_ca
unit-mysql-innodb-cluster-3: 15:55:52 DEBUG unit.mysql-innodb-cluster/3.juju-log Single hostname provided by relation: 172.23.3.96
unit-mysql-innodb-cluster-3: 15:55:52 DEBUG unit.mysql-innodb-cluster/3.juju-log Checking cluster status.
unit-mysql-innodb-cluster-3: 15:56:25 ERROR unit.mysql-innodb-cluster/3.juju-log Cluster is unavailable: Cannot set LC_ALL to locale en_US.UTF-8: No such file or directory
Traceback (most recent call last):
  File "<string>", line 2, in <module>
mysqlsh.Error: Shell Error (51314): Dba.get_cluster: This function is not available through a session to a standalone instance (metadata exists, instance belongs to that metadata, but GR is not active)

Keyword: the Group Replication problem. We'll get back to that.

Now, on the “original” unit mysql-innodb-cluster/3, it was possible to “promote” the instance back into a cluster using mysqlsh. Note that I use the root password found via leader-get (see above), the cluster name is jujuCluster, and I use the IP address 172.23.1.81 of the local machine:

 MySQL  localhost:33060+ ssl  JS > dba.configureInstance('root@localhost',{clusterAdmin: 'root@172.23.1.81',clusterAdminPassword: 'XXXXXXXXXXXXXXX'});
Configuring local MySQL instance listening at port 3306 for use in an InnoDB cluster...
 
This instance reports its own address as 172.23.1.81:3306
 
applierWorkerThreads will be set to the default value of 4.
 
The instance '172.23.1.81:3306' is valid to be used in an InnoDB cluster.
 
Cluster admin user root@172.23.1.81 created.
The instance '172.23.1.81:3306' is already ready to be used in an InnoDB cluster.
 
Successfully enabled parallel appliers.

 MySQL  localhost:33060+ ssl  JS > var cluster = dba.createCluster('jujuCluster')
NOTE: The installed metadata version 2.0.0 is lower than the version required by Shell which is version 2.1.0. It is recommended to upgrade the metadata. See \? dba.upgradeMetadata for additional details.
Dba.createCluster: dba.createCluster: Unable to create cluster. The instance '172.23.1.81:3306' has a populated Metadata schema and belongs to that Metadata. Use either dba.dropMetadataSchema() to drop the schema, or dba.rebootClusterFromCompleteOutage() to reboot the cluster from complete outage. (RuntimeError)

 MySQL  localhost:33060+ ssl  JS > var cluster = dba.rebootClusterFromCompleteOutage()
NOTE: The installed metadata version 2.0.0 is lower than the version required by Shell which is version 2.1.0. It is recommended to upgrade the metadata. See \? dba.upgradeMetadata for additional details.
Restoring the cluster 'jujuCluster' from complete outage...
 
Could not open a connection to '172.23.2.84:3306': 'MySQL Error 2003: Could not open connection to '172.23.2.84:3306': Can't connect to MySQL server on '172.23.2.84:3306' (113)'
Would you like to remove it from the cluster's metadata? [y/N]: y
 
Could not open a connection to '172.23.3.134:3306': 'MySQL Error 2003: Could not open connection to '172.23.3.134:3306': Can't connect to MySQL server on '172.23.3.134:3306' (110)'
Would you like to remove it from the cluster's metadata? [y/N]: y
 
Validating instance configuration at /var%2Frun%2Fmysqld%2Fmysqld.sock...
 
This instance reports its own address as 172.23.1.81:3306
 
Instance configuration is suitable.
* Waiting for seed instance to become ONLINE...
172.23.1.81:3306 was restored.
The cluster was successfully rebooted.

 MySQL  localhost:33060+ ssl  JS > cluster.status()
{
    "clusterName": "jujuCluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "172.23.1.81:3306",
        "ssl": "REQUIRED",
        "status": "OK_NO_TOLERANCE",
        "statusText": "Cluster is NOT tolerant to any failures.",
        "topology": {
            "172.23.1.81:3306": {
                "address": "172.23.1.81:3306",
                "memberRole": "PRIMARY",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": "applier_queue_applied",
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.30"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "172.23.1.81:3306"
}

FINALLY! We now have a (single-node) cluster again! The dba.createCluster('jujuCluster') above was probably not necessary, though.

The juju status output now looks like one of the following two (you may want to trigger the update-status hook to get fresh information here; see the note after the output):

Unit                     Workload  Agent  Machine   Public address  Ports          Message
mysql-innodb-cluster/3*  active    idle   0/lxd/15  172.23.1.81                    Unit is ready: Mode: R/W, Cluster is NOT tolerant to any failures.

Unit                     Workload  Agent  Machine   Public address  Ports          Message
mysql-innodb-cluster/3*  blocked   idle   0/lxd/15  172.23.1.81                    Units not allowed to replicate with this unit: 172.23.3.152. See update-unit-acls action.
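
(To refresh the workload message without waiting for the next update-status interval, the hook can usually be triggered by hand; this is a general Juju trick, not something specific to this recovery:)

$ juju run --unit mysql-innodb-cluster/3 hooks/update-status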

Using mysql on mysql-innodb-cluster/3, the cluster metadata now looks like this:

mysql> select * from mysql_innodb_cluster_metadata.instances;
+-------------+--------------------------------------+------------------+--------------------------------------+------------------+-----------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------+
| instance_id | cluster_id                           | address          | mysql_server_uuid                    | instance_name    | addresses                                                                                           | attributes                                                                                                                                 | description |
+-------------+--------------------------------------+------------------+--------------------------------------+------------------+-----------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------+
|           2 | b9d1ccb9-88ca-11ec-b0c1-00163e4abb10 | 172.23.1.81:3306 | 9fa5b7e7-88ca-11ec-be9a-00163e1e0c51 | 172.23.1.81:3306 | {"mysqlX": "172.23.1.81:33060", "grLocal": "172.23.1.81:33061", "mysqlClassic": "172.23.1.81:3306"} | {"joinTime": "2022-02-08 10:34:51.852", "server_id": 1003, "recoveryAccountHost": "%", "recoveryAccountUser": "mysql_innodb_cluster_1003"} | NULL        |
+-------------+--------------------------------------+------------------+--------------------------------------+------------------+-----------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------+

Now add two more units. In my case it looked like this:

$ juju add-unit mysql-innodb-cluster --to lxd:1
$ juju add-unit mysql-innodb-cluster --to lxd:2
$ juju status mysql-innodb-cluster
Unit                     Workload  Agent      Machine   Public address  Ports          Message
mysql-innodb-cluster/3*  blocked   executing  0/lxd/15  172.23.1.81                    Units not allowed to replicate with this unit: 172.23.2.132,172.23.3.155. See update-unit-acls action.
mysql-innodb-cluster/8   waiting   executing  1/lxd/27  172.23.2.132                   Instance not yet in the cluster
mysql-innodb-cluster/9   waiting   executing  2/lxd/28  172.23.3.155                   Instance not yet configured for clustering

Looking in juju debug-log, I found this (excerpt):

unit-mysql-innodb-cluster-3: 11:11:22 ERROR unit.mysql-innodb-cluster/3.juju-log cluster:192: Failed adding instance 172.23.3.155 to cluster: Cannot set LC_ALL to locale en_US.UTF-8: No such file or directory
WARNING: A GTID set check of the MySQL instance at '172.23.3.155:3306' determined that it contains transactions that do not originate from the cluster, which must be discarded before it can join the cluster.
 
172.23.3.155:3306 has the following errant GTIDs that do not exist in the cluster:
062a72cd-2f55-11ed-b258-00163eb4a2fb:1-16
 
WARNING: Discarding these extra GTID events can either be done manually or by completely overwriting the state of 172.23.3.155:3306 with a physical snapshot from an existing cluster member. To use this method by default, set the 'recoveryMethod' option to 'clone'.
 
Having extra GTID events is not expected, and it is recommended to investigate this further and ensure that the data can be removed prior to choosing the clone recovery method.
Clone based recovery selected through the recoveryMethod option
 
Validating instance configuration at 172.23.3.155:3306...
This instance reports its own address as 172.23.3.155:3306
NOTE: Some configuration options need to be fixed:
NOTE: Please use the dba.configure_instance() command to repair these issues.
ERROR: Instance must be configured and validated with dba.check_instance_configuration() and dba.configure_instance() before it can be used in an InnoDB cluster.
Traceback (most recent call last):
  File "<string>", line 3, in <module>
RuntimeError: Cluster.add_instance: Instance check failed
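
For completeness, the excerpt above points at two manual remedies that, in my case, turned out not to be necessary: the errant GTIDs on the joining instance can be inspected with GTID_SUBTRACT(), and the configuration complaints can be addressed with dba.configureInstance() against the joining unit. Roughly (untested here, placeholders need to be filled in):

mysql> SELECT GTID_SUBTRACT(@@GLOBAL.gtid_executed, '<gtid_executed of the cluster primary>');

 MySQL  localhost:33060+ ssl  JS > dba.configureInstance('root@172.23.3.155:3306')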

When the hook execution has stopped, juju status looks like this:

Unit                     Workload  Agent  Machine   Public address  Ports          Message
mysql-innodb-cluster/3*  blocked   idle   0/lxd/15  172.23.1.81                    Units not allowed to replicate with this unit: 172.23.2.132,172.23.3.155. See update-unit-acls action.
mysql-innodb-cluster/8   waiting   idle   1/lxd/27  172.23.2.132                   Instance not yet in the cluster
mysql-innodb-cluster/9   waiting   idle   2/lxd/28  172.23.3.155                   Instance not yet in the cluster

The fix is to enable Group Replication again. On the original unit mysql-innodb-cluster/3:

mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+----------------------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION | MEMBER_COMMUNICATION_STACK |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+----------------------------+
| group_replication_applier | 9fa5b7e7-88ca-11ec-be9a-00163e1e0c51 | 172.23.1.81 |        3306 | OFFLINE      |             |                | XCom                       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+----------------------------+
1 row in set (0.00 sec)
 
mysql> SET GLOBAL group_replication_bootstrap_group=ON;
Query OK, 0 rows affected (0.00 sec)
 
mysql> START GROUP_REPLICATION;
Query OK, 0 rows affected (1.20 sec)
 
mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+----------------------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION | MEMBER_COMMUNICATION_STACK |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+----------------------------+
| group_replication_applier | 9fa5b7e7-88ca-11ec-be9a-00163e1e0c51 | 172.23.1.81 |        3306 | ONLINE       | PRIMARY     | 8.0.30         | XCom                       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+----------------------------+
1 row in set (0.00 sec)
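
One caveat: group_replication_bootstrap_group is only meant to stay ON while the group is being bootstrapped. The MySQL documentation recommends turning it back OFF afterwards, so that a later restart of this member does not bootstrap a second, independent group:

mysql> SET GLOBAL group_replication_bootstrap_group=OFF;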

It is now possible to run the update-unit-acls action:

$ juju run-action mysql-innodb-cluster/3 --wait update-unit-acls
unit-mysql-innodb-cluster-3:
  UnitId: mysql-innodb-cluster/3
  id: "2467"
  results: {}
  status: completed
  timing:
    completed: 2022-09-08 09:40:36 +0000 UTC
    enqueued: 2022-09-08 09:40:34 +0000 UTC
    started: 2022-09-08 09:40:35 +0000 UTC

Trying to use the add-instance action to add the new unit with IP address 172.23.2.132 to the cluster:

$ juju run-action mysql-innodb-cluster/3 --wait add-instance address=172.23.2.132
unit-mysql-innodb-cluster-3:
  UnitId: mysql-innodb-cluster/3
  id: "2469"
  results:
    outcome: Success
    output: None
  status: completed
  timing:
    completed: 2022-09-08 09:42:56 +0000 UTC
    enqueued: 2022-09-08 09:42:10 +0000 UTC
    started: 2022-09-08 09:42:10 +0000 UTC

IT WORKED! Adding the last unit as well:

$ juju status mysql-innodb-cluster
...
...
Unit                     Workload  Agent  Machine   Public address  Ports          Message
mysql-innodb-cluster/3*  active    idle   0/lxd/15  172.23.1.81                    Unit is ready: Mode: R/W, Cluster is NOT tolerant to any failures.
mysql-innodb-cluster/8   active    idle   1/lxd/27  172.23.2.132                   Unit is ready: Mode: R/O, Cluster is NOT tolerant to any failures.
mysql-innodb-cluster/9   waiting   idle   2/lxd/28  172.23.3.155                   Instance not yet in the cluster

$ juju run-action mysql-innodb-cluster/3 --wait add-instance address=172.23.3.155
$ juju status
...
...
Unit                     Workload  Agent  Machine   Public address  Ports          Message
mysql-innodb-cluster/3*  active    idle   0/lxd/15  172.23.1.81                    Unit is ready: Mode: R/W, Cluster is ONLINE and can tolerate up to ONE failure.
mysql-innodb-cluster/8   active    idle   1/lxd/27  172.23.2.132                   Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
mysql-innodb-cluster/9   active    idle   2/lxd/28  172.23.3.155                   Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
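
By the way, the charm also provides a cluster-status action that should return essentially the same topology information as cluster.status(), which is handy if you do not want to open mysqlsh:

$ juju run-action mysql-innodb-cluster/3 --wait cluster-status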

Some resulting data, for reference:

 MySQL  localhost:33060+ ssl  JS > cluster.status()
{
    "clusterName": "jujuCluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "172.23.1.81:3306",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "172.23.1.81:3306": {
                "address": "172.23.1.81:3306",
                "memberRole": "PRIMARY",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": "applier_queue_applied",
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.30"
            },
            "172.23.2.132:3306": {
                "address": "172.23.2.132:3306",
                "memberRole": "SECONDARY",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": "applier_queue_applied",
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.30"
            },
            "172.23.3.155:3306": {
                "address": "172.23.3.155:3306",
                "memberRole": "SECONDARY",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": "applier_queue_applied",
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.30"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "172.23.1.81:3306"
}

mysql> SHOW VARIABLES LIKE 'gtid_executed';
+---------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Variable_name | Value                                                                                                                                                                         |
+---------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| gtid_executed | 96269cc3-88ca-11ec-9910-00163e4abb10:1-16,
9fa5b7e7-88ca-11ec-be9a-00163e1e0c51:1-8,
b7cc7594-88ca-11ec-b0c1-00163e4abb10:1-152499300:153428857-153429202:154428857-154428987 |
+---------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
 
mysql> SHOW VARIABLES LIKE 'gtid_executed';
+---------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Variable_name | Value                                                                                                                                                                         |
+---------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| gtid_executed | 96269cc3-88ca-11ec-9910-00163e4abb10:1-16,
9fa5b7e7-88ca-11ec-be9a-00163e1e0c51:1-8,
b7cc7594-88ca-11ec-b0c1-00163e4abb10:1-152499185:153428857-153429202:154428857-154428987 |
+---------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
 
mysql> SHOW VARIABLES LIKE 'gtid_executed';
+---------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Variable_name | Value                                                                                                                                                                         |
+---------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| gtid_executed | 96269cc3-88ca-11ec-9910-00163e4abb10:1-16,
9fa5b7e7-88ca-11ec-be9a-00163e1e0c51:1-8,
b7cc7594-88ca-11ec-b0c1-00163e4abb10:1-152499203:153428857-153429202:154428857-154428987 |
+---------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+


Hope this helps,

Johan Hallbäck, Ibeo Automotive Systems GmbH