mysql-innodb-cluster: Juju-log No connection to the cluster primary RW node skipping DB creation.

Hi,

We have deployed charmed OpenStack Ussuri on top of Ubuntu 20.04. Now we are facing an issue with mysql-innodb-cluster.

We have a 3-node MySQL InnoDB cluster. It was working fine, but suddenly the RW tag was removed from the leader node automatically.

We removed all 3 nodes from the cluster, and when we try to add them back, the RW tag is not getting assigned to the leader node.

The error is as follows:

root@juju-e45362-2-lxd-79:~# tail -f /var/log/juju/unit-mysql-innodb-cluster-91.log
2022-08-03 11:37:53 WARNING juju-log No connection to the cluster primary RW node skipping DB creation.
2022-08-03 11:37:53 WARNING juju-log This instance is not yet clustered: cannot determine the cluster status.
2022-08-03 11:37:53 WARNING juju-log Cannot determine the cluster primary RW node for writes.
2022-08-03 11:37:53 WARNING juju-log No connection to the cluster primary RW node skipping DB creation.
2022-08-03 11:37:53 WARNING juju-log This instance is not yet clustered: cannot determine the cluster status.
2022-08-03 11:37:53 WARNING juju-log Cannot determine the cluster primary RW node for writes.
2022-08-03 11:37:53 WARNING juju-log No connection to the cluster primary RW node skipping DB creation.

Please let me know if any other details are required to investigate the issue.

So, yes, the RW tag on a leader node can get removed. By default, the update-status hook of the units in an application only runs every few minutes. It is possible that the RW role was removed from the leader and placed on another unit in the meantime, but the charm had no way of knowing this or of updating the status shown in the juju status output.
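If you want to see where the primary actually is right now, rather than what the last update-status run reported, you can ask the cluster itself through the charm's cluster-status action. A minimal example, using Juju 2.9 syntax (the unit name is illustrative; use any unit that is still part of the cluster):

# unit name is illustrative; pick any surviving cluster member
juju run-action --wait mysql-innodb-cluster/0 cluster-status

The action returns the JSON from MySQL Shell's cluster.status(); the member whose "mode" is "R/W" is the current primary. If every member shows "R/O", or the action fails outright, the cluster has genuinely lost its primary rather than juju status merely being stale.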

You removed all three nodes that formed the cluster? Hmm. You might be better off just removing the database application and redeploying it. The reason I say this is that if all 3 units of a cluster are removed, and the machines/containers they were running on are removed as well, then the data is gone.
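If you do go the redeploy route, a rough sketch is below. The placement targets, series, and relation names are assumptions based on a typical charmed OpenStack Ussuri deployment on focal, so adjust them to match your bundle:

juju remove-application mysql-innodb-cluster
# wait until the units and their containers are fully gone, then redeploy
juju deploy -n 3 --to lxd:0,lxd:1,lxd:2 --series focal mysql-innodb-cluster
# re-add the db-router relation for each service's mysql-router subordinate, e.g.:
juju add-relation keystone-mysql-router:db-router mysql-innodb-cluster:db-router

Once the relations are re-established, the mysql-router subordinates will request fresh databases and credentials for each service, but anything that lived only in the old cluster cannot be recovered this way and would have to come from a backup.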

Thanks for the reply.

By 3 units I mean the containers we removed, and when we try to add them back into the cluster, they are not getting added.

Because if a non-leader node gets selected as the RW node and Juju does not pick up that status, then no VM provisioning happens. How can we fix this on the Juju side?

If we somehow fix this with the existing cluster, will it overwrite the data of the existing VMs that are already running?