Charmed MySQL K8s How-to - Performance test

Performance testing for Charmed MySQL K8s

WARNING: This is an internal article. Do NOT use it in production! Contact the Canonical Data Platform team if you are interested in this topic.

Installation and configuration

Create an isolated test VM for an easier cleanup:

multipass launch charm-dev --name charmed-mysql-test --memory 32G --cpus 12 --disk 800G --timeout 100500
multipass shell charmed-mysql-test

Deploy Charmed MySQL K8s (if testing locally). Recommended: deploy the DB on powerful computing resources (e.g. on GKE), outside the test VM. The minimal deployment commands:

juju add-model mysql-k8s
juju switch mysql-k8s
juju deploy mysql-k8s --trust --storage database=256G --channel 8.0/edge  -n 3 # --constraints mem=4G
juju deploy mysql-router-k8s --channel 8.0/edge --trust -n 3
juju deploy data-integrator --config database-name=sbtest
juju relate mysql-k8s mysql-router-k8s
juju relate mysql-router-k8s data-integrator

Install all the missing parts for testing:

sudo apt update && sudo apt install --yes sysbench mysql-client-core-8.0

Fetch the Percona sysbench TPC-C test scenarios and patch them to keep binary logging enabled (so the generated data is replicated to all cluster members):

git clone https://github.com/Percona-Lab/sysbench-tpcc.git
cd sysbench-tpcc/
sed -i 's/con:query("SET SESSION sql_log_bin = 0")/con:query("SET SESSION sql_log_bin = 1")/' ./tpcc_common.lua

Get the test credentials and configuration:

THREADS=8 # 2*CPU cores on ONE/Primary MySQL DB unit.
TABLES=30 # see the explanation below
SCALE=7   # see the explanation below

MYSQL_HOST=$(juju show-unit mysql-router-k8s/0 --endpoint mysql-router-peers  | yq '.. | select(. | has("ingress-address")).ingress-address' | head -1)
MYSQL_PORT=$(juju run-action data-integrator/leader get-credentials --wait | yq '.. | select(. | has("endpoints")).endpoints  | split(":") | .[1]')
MYSQL_USER=$(juju run-action data-integrator/leader get-credentials --wait | yq '.. | select(. | has("username")).username')
MYSQL_PASS=$(juju run-action data-integrator/leader get-credentials --wait | yq '.. | select(. | has("password")).password')
MYSQL_DB=$(juju run-action data-integrator/leader get-credentials --wait | yq '.. | select(. | has("database")).database')

cat << EOF
DB connection / test parameters:
  THREADS=${THREADS} (ensure it is 2*CPU cores of the primary DB unit!)
  TABLES=${TABLES}
  SCALE=${SCALE}
  MYSQL_HOST=${MYSQL_HOST} # you might need to change K8s service type=ClusterIP to type=LoadBalancer
  MYSQL_PORT=${MYSQL_PORT}
  MYSQL_USER=${MYSQL_USER}
  MYSQL_PASS=${MYSQL_PASS}
  MYSQL_DB=${MYSQL_DB}
EOF
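The `yq` queries above return empty strings (rather than failing) when the structure of the action output changes, so it is worth verifying that every connection parameter resolved to a non-empty value before starting the benchmark. A minimal sketch, using a hypothetical `check_params` helper:

```shell
# Hedged sketch: abort early if any connection parameter is empty.
# Variable names match the ones exported above.
check_params() {
  local missing=0
  for v in MYSQL_HOST MYSQL_PORT MYSQL_USER MYSQL_PASS MYSQL_DB; do
    # ${!v} is bash indirect expansion: the value of the variable named by $v
    if [ -z "${!v}" ]; then
      echo "ERROR: $v is empty" >&2
      missing=1
    fi
  done
  return $missing
}
```

Run `check_params || exit 1` right after the credential lookups, before the prepare step.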

From the documentation:

Sysbench accepts a number of arguments that allow the test scenario to scale.
Some of these arguments are:

--threads=<int>
The number of threads to run the operation at.
This simulates the number of users addressing the database.
Some example values are: 8, 16, 24, 32, 48, 64, 96, 128, 256, 512, and 1024.
The higher the thread count the more the system resource usage will be. Default is 1.

--tables=<int>
The number of tables to create in the database/schema.

--scale=<int>
The scale factor (warehouses) which increases the amount of data to work on overall.
Increase --tables and --scale to increase the database size and number of rows operated on.
As a rough estimation, 100 warehouses with 1 table set produces about 10GB of data in
non-compressed InnoDB tables (so 100 warehouses with 10 table sets gives about 100GB;
50 tables and 500 warehouses offer 2.5TB-3TB of database size).
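The rough estimate above works out to about 100 MB of uncompressed InnoDB data per warehouse per table set (100 warehouses with 1 table set ≈ 10 GB). Applied to the values chosen earlier, the expected data size can be approximated as:

```shell
# Rough size estimate from the documentation above:
# ~100 MB of uncompressed InnoDB data per warehouse per table set.
TABLES=30
SCALE=7
EST_MB=$((TABLES * SCALE * 100))
echo "Estimated data size: ~${EST_MB} MB (~$((EST_MB / 1024)) GB)"
# prints: Estimated data size: ~21000 MB (~20 GB)
```

This is only a back-of-the-envelope figure; the actual on-disk size also depends on InnoDB page fill and indexes.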

Test the connection to the DB:

> mysql -h${MYSQL_HOST} -P${MYSQL_PORT} -u${MYSQL_USER} -p${MYSQL_PASS} ${MYSQL_DB} -e "show databases"
+--------------------+
| Database           |
+--------------------+
| information_schema |
| performance_schema |
| sbtest             |
+--------------------+

Testing

Prepare the test (create all the necessary tables and generate initial test data):

./tpcc.lua \
--mysql-host="${MYSQL_HOST}" \
--mysql-port="${MYSQL_PORT}" \
--mysql-user="${MYSQL_USER}" \
--mysql-password="${MYSQL_PASS}" \
--mysql-db="${MYSQL_DB}" \
--threads=${THREADS} \
--tables=${TABLES} \
--scale=${SCALE} \
--time=300 \
--force_pk=1 \
--db-driver=mysql \
prepare
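A quick way to sanity-check the prepare step is to compare the number of tables in the schema against the expected count. Assuming sysbench-tpcc creates one copy of each of the 9 TPC-C tables per table set (an assumption about its schema layout, not confirmed by this article):

```shell
# The TPC-C schema has 9 tables (warehouse, district, customer, history,
# orders, new_orders, order_line, stock, item); sysbench-tpcc is assumed
# to create one copy of each per table set.
TABLES=30
EXPECTED_TABLES=$((TABLES * 9))
echo "Expecting ${EXPECTED_TABLES} tables in ${MYSQL_DB:-sbtest}"

# Compare against the live schema (uses the credentials gathered earlier):
# mysql -h${MYSQL_HOST} -P${MYSQL_PORT} -u${MYSQL_USER} -p${MYSQL_PASS} -N \
#   -e "SELECT COUNT(*) FROM information_schema.tables WHERE table_schema='${MYSQL_DB}'"
```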

Run the test:

./tpcc.lua \
--mysql-host="${MYSQL_HOST}" \
--mysql-port="${MYSQL_PORT}" \
--mysql-user="${MYSQL_USER}" \
--mysql-password="${MYSQL_PASS}" \
--mysql-db="${MYSQL_DB}"  \
--threads=${THREADS} \
--tables=${TABLES} \
--scale=${SCALE} \
--time=300 \
--force_pk=1 \
--db-driver=mysql \
--report-interval=1 \
run
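With --report-interval=1, sysbench prints one progress line per second. To get the average tps over a run, pipe the output through `tee run.log` and feed the log to a small awk filter. A sketch, assuming the usual sysbench progress-line format (the sample lines below are illustrative, not real results):

```shell
# Average the tps values from sysbench progress lines.
avg_tps() {
  awk '/tps:/ {
    # find the "tps:" field and accumulate the value that follows it
    for (i = 1; i <= NF; i++)
      if ($i == "tps:") { sum += $(i + 1); n++ }
  } END { if (n) printf "%.2f\n", sum / n }'
}

# Illustrative input; in a real run, use: avg_tps < run.log
printf '%s\n' \
  '[ 1s ] thds: 8 tps: 120.00 qps: 3400.00 (r/w/o: 1530.0/1360.0/510.0) lat (ms,95%): 80.0 err/s: 0.00 reconn/s: 0.00' \
  '[ 2s ] thds: 8 tps: 140.00 qps: 3600.00 (r/w/o: 1620.0/1440.0/540.0) lat (ms,95%): 78.0 err/s: 0.00 reconn/s: 0.00' \
  | avg_tps
# prints: 130.00
```

Matching the literal "tps:" field (rather than a regex) avoids accidentally picking up the "qps:" value on the same line.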

Monitoring

To check the current K8s cluster load, use:

> kubectl top pod
NAME                             CPU(cores)   MEMORY(bytes)   
data-integrator-0                1m           28Mi            
modeloperator-5f66ffbff4-x8wch   1m           17Mi            
mysql-k8s-0                      547m         2193Mi          
mysql-k8s-1                      443m         2078Mi          
mysql-k8s-2                      492m         2409Mi          
mysql-router-k8s-0               9m           83Mi            
mysql-router-k8s-1               43m          89Mi            
mysql-router-k8s-2               53m          87Mi      

> kubectl top node
NAME                                          CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
gke-taurus-19936-default-pool-ca1db000-2cwj   767m         19%    3373Mi          27%       
gke-taurus-19936-default-pool-ca1db000-cpln   1155m        29%    3916Mi          31%       
gke-taurus-19936-default-pool-ca1db000-wnx7   806m         20%    3137Mi          25%    

To check DB disk usage, run:

juju ssh --container mysql mysql-k8s/leader df -h /var/lib/mysql

For advanced monitoring, install the COS Lite integration.

Cleanup

multipass delete charmed-mysql-test --purge