Install and manage a distributed MySQL 8 InnoDB cluster on Ubuntu 20.04

About

This tutorial demonstrates how to deploy a high-availability MySQL InnoDB cluster on Ubuntu Server 20.04 Focal Fossa.

Running through the tutorial will enable you to create a cluster of any scale on public clouds, a private cloud, or bare metal servers on-premises. The deployment makes use of:

  • MySQL 8
  • MySQL shell
  • MySQL Router

Requirements

Duration: 5:00

Juju software

This tutorial assumes that you have installed Juju and registered your security credentials.

If you are new to Juju, consider our Getting Started with Juju tutorial.

Running controller

Duration: 5:00

You should have access to a live Juju controller. This can be achieved in two ways:

  1. Run juju login jaas to make use of the Canonical-hosted controller. This service is offered at no cost, but your credentials are shared with Canonical.
  2. Run juju bootstrap. This option may incur hosting costs, but the controller remains fully under your control (see the example below).
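For example, a minimal bootstrap onto AWS might look like the following (this assumes you have already registered AWS credentials with Juju; substitute your own cloud name, which juju clouds will list):

$ juju bootstrap aws mycontroller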

Add a Juju model

Duration: 1:00

The Juju “model” is a workspace for inter-related applications. It houses machines, containers, applications and other components such as firewall rules.

To add a model to the controller, run the juju add-model command:

$ juju add-model data

Model name is arbitrary: In the example, we used the name “data” for the model. You are welcome to use another name.
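To confirm the model was created and is now the active model, list the models on the controller; the current model is marked with an asterisk:

$ juju models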

Deploy

Duration: 10:00

Deploying a database cluster by hand is typically brittle and complex. Juju makes the process reliable and simple.

Provision compute infrastructure and install MySQL 8

The juju deploy command controls the deployment process. It instructs the controller to download and run a “charm”. A charm is software that manages software. The mysql-innodb-cluster charm knows how to install and manage a MySQL 8 InnoDB cluster.

The following command deploys a three-node cluster called “poc” (for proof of concept), with each node requiring 200 GB of disk and 8 GB of RAM. Juju selects the cheapest instance type that meets those minimum requirements.

juju deploy -n3 mysql-innodb-cluster \
  --constraints="mem=8G root-disk=200G" \
  --config cluster-name=poc

Cluster size is configurable: If you would like a larger cluster, such as 7 nodes, then change the value of the -n option (e.g. -n7).

Other constraints are available: This example only uses the mem and root-disk constraints. The full list is available in Juju’s constraints documentation.
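As a sketch, the same deployment with the CPU core count pinned as well might look like this (cores is another standard Juju constraint; the values are illustrative):

juju deploy -n3 mysql-innodb-cluster \
  --constraints="mem=8G root-disk=200G cores=4" \
  --config cluster-name=poc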

Install MySQL Router

The next step is to deploy the MySQL Router application and link it with the cluster nodes. It doesn’t take any additional compute resources: the router is deployed alongside the cluster nodes.

juju deploy mysql-router
juju relate mysql-router mysql-innodb-cluster
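To confirm the relation has been established, ask juju status to include the relations section; the output should list a link between mysql-router and mysql-innodb-cluster:

juju status --relations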

A “relation” is a communication link between charms. The software agent (jujud) that is running the charm code communicates with other units via the Juju controller.

Await deployment to complete

The deployment can take several minutes to complete.

You can check on its status with the juju status command.
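To follow progress continuously, you can poll the status with the standard watch utility (available on most Linux systems); the --color flags preserve Juju's colourised output:

watch -n 5 --color juju status --color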

(Optional) Tweak configuration

Duration: 3:00

To see all configuration options available, use the juju config command without specifying any changes:

juju config mysql-innodb-cluster
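To change a value, pass key=value pairs to the same command. As an illustration only (this assumes your charm revision exposes a max-connections option; check the output of the command above for the options yours actually supports):

juju config mysql-innodb-cluster max-connections=700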

(Optional) Expose the database cluster to the Internet

The mysql-innodb-cluster charm is typically deployed as part of a Charmed OpenStack deployment. That means it doesn’t get exposed to the Internet by default.

With a few commands though, it’s possible to ask Juju to manage the firewall on our behalf:

juju run --application mysql-innodb-cluster 'open-port 3306'
juju expose mysql-innodb-cluster
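Once the port is open and the application is exposed, any standard MySQL client can reach the cluster. A minimal sketch, where <unit-ip> and <user> are placeholders for a unit's public address and a database user permitted to connect remotely (the charm does not create such a user for you):

mysql -h <unit-ip> -u <user> -p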

Next steps

Run actions

Several actions are defined on the charm. An “action” in Juju terminology is a pre-defined operation that a devops team might need to perform from time to time.

Use the juju actions command to reveal actions that are defined for the charm(s) you have deployed:

$ juju actions mysql-innodb-cluster

To run an action, use the juju run-action command, then pass the returned action identifier to juju show-action-output:

$ juju run-action <unit-id> <action>
Action queued with id: "<action-id>"
$ juju show-action-output <action-id>
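For example, assuming your revision of mysql-innodb-cluster defines a cluster-status action (the juju actions output above is authoritative for your deployment), checking cluster health would look like this:

$ juju run-action mysql-innodb-cluster/0 cluster-status
Action queued with id: "<action-id>"
$ juju show-action-output <action-id>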

Learn more about charms

This tutorial makes use of two charms, mysql-innodb-cluster and mysql-router. They’re both open source (mysql-router, mysql-innodb-cluster) and can serve as the basis for your own “charmed” workloads.

Ask for assistance

If you would like to ask any questions, then please create an account on Juju’s Discourse forum.

  • for trouble following the tutorial, please use the Help with Tutorials category
  • for questions relating to deploying the charm(s) in production, please use the Using Juju category
  • for questions relating to deploying your own charm(s), please use the Charming category

I want to know how I can connect to the MySQL cluster and delete a database. Also, where can I find the MySQL root password?

Hello Ali Reza,

You can get credentials for “mysql-innodb-cluster” by running the following command:

juju run --unit mysql-innodb-cluster/leader leader-get
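If you only need the password itself, leader-get also accepts a single key; mysql.passwd is the key referenced in the hint below:

juju run --unit mysql-innodb-cluster/leader 'leader-get mysql.passwd'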

Remember that only the IP addresses of related services are allowed to access the database.

Hint: first SSH into a mysql-innodb-cluster machine, then connect to MySQL using the mysql.passwd value.
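Putting that together, a minimal session might look like this (assuming, per the hint above, that mysql.passwd holds the root password):

juju ssh mysql-innodb-cluster/leader
mysql -u root -p   # enter the mysql.passwd value at the prompt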


Thank you so much. For a month I could not deploy a working OpenStack; right now the MySQL cluster has the following error:

MySQL InnoDB Cluster not healthy: None

Hi,
I have the same problem: how can I recover?

You could fix the issue with this command:

sudo juju run-action --wait mysql-innodb-cluster/9 reboot-cluster-from-complete-outage

First look at the result: it may say that this unit is not the most up-to-date unit. In that case, select another unit, for example mysql-innodb-cluster/10, and so on.