This tutorial demonstrates how to deploy a high-availability MySQL InnoDB cluster to Ubuntu Server 20.04 LTS (Focal Fossa).
Working through the tutorial will enable you to create a cluster of any scale on public clouds, a private cloud, or bare metal servers on-premises.
- MySQL 8
- MySQL Shell
- MySQL Router
If you are new to Juju, consider our Getting Started with Juju tutorial.
You should have access to a live Juju controller. This can be achieved in two ways:
- juju login jaas to make use of the Canonical-hosted controller. This service is offered at no cost, but credentials are provided to Canonical.
- juju bootstrap to create your own controller. This option may incur hosting costs, but the controller runs fully under your control.
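As a sketch, either route looks like this on the command line. The cloud name aws and the controller name my-controller below are illustrative; any cloud listed by juju clouds will do:

```shell
# Option 1: use the free Canonical-hosted controller
juju login jaas

# Option 2: bootstrap a controller under your own account
# (substitute a cloud known to your Juju client)
juju bootstrap aws my-controller
```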
Add a Juju model
The Juju “model” is a workspace for inter-related applications. It houses machines, containers, applications and other components such as firewall rules.
To add a model to the controller, run the juju add-model command:
$ juju add-model data
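You can verify that the model was created, and select it if it is not already active (juju add-model switches to the new model automatically):

```shell
# List models on the controller; the active model is marked with *
juju models

# Switch to the new model explicitly if needed
juju switch data
```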
Model name is arbitrary: in the example, we used the name “data” for the model. You are welcome to use another name.
Deploying a database cluster by hand is typically brittle and complex. Juju makes the process simple and repeatable.
Provision compute infrastructure and install MySQL 8
The juju deploy command controls the deployment process. It instructs the controller to download and run a “charm”. A charm is software that manages other software. The mysql-innodb-cluster charm knows how to install and manage a MySQL 8 InnoDB cluster.
The following command deploys a three-node cluster called “poc” (for proof of concept), with each node requiring 200 GB of disk and 8 GB of RAM. Juju selects the cheapest instance type that meets those minimum requirements:

juju deploy -n 3 mysql-innodb-cluster \
    --constraints="mem=8G root-disk=200G" \
    --config cluster-name=poc
Cluster size is configurable: if you would like a larger cluster, such as 7 nodes, then change the value passed to the -n flag accordingly.
Other constraints are available: this example only uses the mem and root-disk constraints. The full list is available in Juju’s constraints documentation.
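As a sketch, a deployment that additionally pins CPU count and architecture could combine several constraint keys (cores and arch are standard Juju constraint keys):

```shell
juju deploy -n 3 mysql-innodb-cluster \
    --constraints="cores=4 mem=8G root-disk=200G arch=amd64" \
    --config cluster-name=poc
```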
Install MySQL Router
The next step is to deploy the MySQL Router application and link it with the cluster nodes. It doesn’t take any additional compute resources. The router is deployed alongside the cluster nodes.
juju deploy mysql-router
juju relate mysql-router mysql-innodb-cluster
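To confirm that the relation has been established, ask juju status to include the relations section:

```shell
# The relations section lists the link between mysql-router
# and mysql-innodb-cluster once it is established
juju status --relations
```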
A “relation” is a communication link between charms. The software agent (jujud) that runs the charm code communicates with other units via the Juju controller.
Await deployment to complete
The deployment can take several minutes to complete. You can check on its status with the juju status command.
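If you would rather not re-run the command manually, juju status accepts a --watch option that refreshes the output at a given interval. The deployment is finished when every unit reports an active workload status:

```shell
# Refresh the status view every five seconds; press Ctrl-C to exit
juju status --watch 5s
```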
(Optional) Tweak configuration
To see all available configuration options, use the juju config command without specifying any changes:
juju config mysql-innodb-cluster
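Individual options can also be read or set by naming them. As an example using the cluster-name option configured earlier (setting a value uses key=value form):

```shell
# Read a single option
juju config mysql-innodb-cluster cluster-name

# Set an option (general form: juju config <application> <key>=<value>)
juju config mysql-innodb-cluster cluster-name=poc
```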
(Optional) Expose the database cluster to the Internet
The mysql-innodb-cluster charm is typically deployed as part of a Charmed OpenStack deployment. That means it is not exposed to the Internet by default.
With a few commands though, it’s possible to ask Juju to manage the firewall on our behalf:
juju run --application mysql-innodb-cluster 'open-port 3306'
juju expose mysql-innodb-cluster
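Once exposed, external clients can reach the cluster on port 3306. As an illustrative sketch only: the address placeholder must be replaced with a unit’s public address from juju status, and you need valid database credentials:

```shell
# Connect with any standard MySQL client; replace the placeholder
# with a public address reported by "juju status"
mysql -h <unit-public-address> -P 3306 -u <user> -p
```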
Run actions
Several actions are defined on the charm. An “action”, in Juju terminology, is a pre-defined operation that a devops team might need to perform from time to time. Use the juju actions command to reveal the actions that are defined for the charm(s) you have deployed:
$ juju actions mysql-innodb-cluster
To run an action, use the juju run-action command. It returns an action identifier, which you then pass to juju show-action-output to retrieve the result:

$ juju run-action <unit-id> <action>
Action queued with id: "<action-id>"

$ juju show-action-output <action-id>
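As a worked sketch, assuming the charm defines a cluster-status action (confirm the exact name against the juju actions output for your deployment), the flow looks like:

```shell
# Queue the action on the first unit; note the returned id
juju run-action mysql-innodb-cluster/0 cluster-status

# Retrieve the result once the action has run
juju show-action-output <action-id>
```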
Learn more about charms
This tutorial makes use of two charms, mysql-innodb-cluster and mysql-router. They are both open source and are available to be the basis for your own “charmed” workloads.
Ask for assistance
If you would like to ask any questions, then please create an account on Juju’s Discourse forum.