About
The goal of this document is to provide options for load testing Charmed MongoDB (K8s charm).
Testbed
For testing purposes we will use MicroK8s running on a t3.2xlarge
AWS instance with a 500 GB SSD drive.
We will deploy a three-node MongoDB replica set on Ubuntu 22.04 using Juju 3.1.7.
Related documentation
This document contains all the required information and commands to run performance testing, as well as the results of the tests. You can refer to the latest and more detailed information using these links:
- Charmed MongoDB K8s tutorial
- Canonical Observability Stack (CoS)
- View metrics from MongoDB in CoS
- YCSB
- NoSQLBench
Testbed preparation
Please perform the following steps to prepare your testbed.
1. Deploy an AWS instance
This step is optional if you are running the tests on your own hardware.
2. Set up MicroK8s
sudo snap install microk8s --channel=1.27-strict
sudo usermod -a -G snap_microk8s $(whoami)
mkdir ~/.kube
sudo chown -R $(whoami) ~/.kube
newgrp snap_microk8s
sudo microk8s status --wait-ready
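Optionally, as a quick sanity check before enabling add-ons (not strictly required), you can confirm that the single-node cluster is up:
microk8s kubectl get nodes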
Install additional packages
sudo snap install jq
Enable required plugins
sudo microk8s enable dns storage
IPADDR=$(ip -4 -j route get 2.2.2.2 | jq -r '.[] | .prefsrc')
sudo microk8s enable metallb:$IPADDR-$IPADDR
microk8s kubectl rollout status deployments/hostpath-provisioner -n kube-system -w
microk8s kubectl rollout status deployments/coredns -n kube-system -w
microk8s kubectl rollout status daemonset.apps/speaker -n metallb-system -w
Note: we are enabling metallb instead of the ingress plugin to support the deployment of CoS.
3. Set up Juju
sudo snap install juju --channel 3.1/stable
Juju already has built-in knowledge of MicroK8s and how it works, so no additional setup or configuration is needed. However, because Juju 3.x is a strictly confined snap and is not allowed to create ~/.local/share, we need to create it manually:
mkdir -p ~/.local/share
4. Bootstrap a controller
juju bootstrap microk8s k8s --agent-version 3.1.7
5. Add a model
juju add-model benchmarking
Note: the name of the model is important. If you change it, you will need to update the pod configuration file and set a namespace that matches the model name.
6. Deploy a replica set
juju deploy mongodb-k8s -n 3 --channel=6/edge
You can check the status of the deployment using:
juju status
or
juju status --watch 1s --color
7. Obtain a connection string
To run tests against the deployed cluster, you need to obtain a connection string that will be used to connect to it.
export DB_USERNAME="operator"
export DB_NAME="admin"
export REPL_SET_NAME="mongodb-k8s"
export DB_PASSWORD=$(juju run mongodb-k8s/leader get-password | grep password| awk '{print $2}')
export HOST_IP_0="mongodb-k8s-0.mongodb-k8s-endpoints"
export HOST_IP_1="mongodb-k8s-1.mongodb-k8s-endpoints"
export HOST_IP_2="mongodb-k8s-2.mongodb-k8s-endpoints"
export MONGODB_URI=mongodb://$DB_USERNAME:$DB_PASSWORD@$HOST_IP_0,$HOST_IP_1,$HOST_IP_2:27017/$DB_NAME?replicaSet=$REPL_SET_NAME
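As a quick check that the URI was assembled correctly, you can print it. Note that the mongodb-k8s-*-endpoints hostnames resolve only from inside the cluster, for example from the benchmark pod deployed later:
echo "$MONGODB_URI"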
Integration with CoS (optional)
Add a model for CoS
juju add-model cos
Deploy CoS
curl -L https://raw.githubusercontent.com/canonical/cos-lite-bundle/main/overlays/offers-overlay.yaml -O
curl -L https://raw.githubusercontent.com/canonical/cos-lite-bundle/main/overlays/storage-small-overlay.yaml -O
juju deploy cos-lite \
--trust \
--overlay ./offers-overlay.yaml \
--overlay ./storage-small-overlay.yaml
Again, you can check the status of the deployment using:
juju status --watch 1s --color
Once CoS is deployed, you will need to:
- Obtain the password to connect to the Grafana dashboard
- Integrate it with Charmed MongoDB.
To get the password for the Grafana dashboard, execute:
juju run grafana/leader get-admin-password --model cos
The username for the dashboard is admin. The dashboard will be available at this URI:
http://<your_host_public_ip_or_dns_name>/cos-grafana
Please make sure that the required security group settings are applied to the instance/machine where you are running the tests.
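If you are not sure which address MetalLB assigned to the CoS ingress, you can list the services in the cos namespace and use the external IP of the LoadBalancer-type service (the exact service name may vary between bundle revisions):
microk8s kubectl get services --namespace=cos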
Integrate CoS with Charmed MongoDB
juju consume k8s:admin/cos.alertmanager-karma-dashboard
juju consume k8s:admin/cos.grafana-dashboards
juju consume k8s:admin/cos.loki-logging
juju consume k8s:admin/cos.prometheus-receive-remote-write
juju integrate grafana-agent prometheus-receive-remote-write
juju integrate grafana-agent loki-logging
juju integrate grafana-agent grafana-dashboards
juju relate mongodb-k8s loki-logging
juju relate mongodb-k8s grafana-dashboards
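To confirm that the integrations above were created successfully, you can inspect the relations in the status output:
juju status --relations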
Deploying benchmark pod
To perform benchmark tests, you need to deploy the benchmark pod. The benchmark pod is created from the Charmed MongoDB benchmarking ROCK (OCI) image.
To deploy the pod, first clone the repo
cd ~
git clone https://github.com/canonical/mongodb-benchmarking-oci
cd mongodb-benchmarking-oci
# list namespaces
microk8s.kubectl get namespaces
NAME STATUS AGE
kube-system Active 2d22h
kube-public Active 2d22h
kube-node-lease Active 2d22h
default Active 2d22h
metallb-system Active 2d22h
controller-k8s Active 132m
benchmarking Active 126m
cos Active 31m
You should see the benchmarking namespace because we created a Juju model named “benchmarking”.
Note: if you chose a different model name, edit the pod configuration file “mongodb-benchmarking.yaml” accordingly.
Then create the pod:
sudo microk8s.kubectl apply -f mongodb-benchmarking.yaml
# get pods
sudo microk8s.kubectl get pods --namespace=benchmarking
NAME READY STATUS RESTARTS AGE
modeloperator-5f5cdcd884-vxhb2 1/1 Running 0 135m
mongodb-k8s-2 2/2 Running 0 135m
mongodb-k8s-0 2/2 Running 0 135m
mongodb-k8s-1 2/2 Running 0 135m
mongodb-benchmarking-6b6b9d7db6-tmwlv 1/1 Running 0 73s
# check the rollout status of the deployment
sudo microk8s.kubectl rollout status deployment/mongodb-benchmarking --namespace=benchmarking
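If the benchmark pod does not reach the Running state, inspecting the deployment logs is a reasonable first troubleshooting step (the deployment name below assumes the default mongodb-benchmarking.yaml was used):
sudo microk8s.kubectl logs deployment/mongodb-benchmarking --namespace=benchmarking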
Test using YCSB
To run the test, connect to the benchmarking pod (the pod name suffix will differ in your deployment):
microk8s.kubectl exec -it -n benchmarking mongodb-benchmarking-6b6b9d7db6-tmwlv -- bash
Load data
To load data, run:
/bin/ycsb_load_data.sh
Update the script parameters as required; you can pass parameters using named arguments. Please refer to the script for the rest of the variables, as well as to the YCSB documentation. For example:
/bin/ycsb_load_data.sh --mongodb-uri "mongodb://operator:5uw65oV1fvcaqzRqKU2dOuHtzwApSe11@mongodb-k8s-0.mongodb-k8s-endpoints,mongodb-k8s-1.mongodb-k8s-endpoints,mongodb-k8s-2.mongodb-k8s-endpoints:27017/admin?replicaSet=mongodb-k8s"
When the script completes, you should see output similar to this:
[OVERALL], RunTime(ms), 292233
[OVERALL], Throughput(ops/sec), 1710.9635119921434
[TOTAL_GCS_G1_Young_Generation], Count, 33
[TOTAL_GC_TIME_G1_Young_Generation], Time(ms), 97
[TOTAL_GC_TIME_%_G1_Young_Generation], Time(%), 0.03319269213264758
[TOTAL_GCS_G1_Concurrent_GC], Count, 0
[TOTAL_GC_TIME_G1_Concurrent_GC], Time(ms), 0
[TOTAL_GC_TIME_%_G1_Concurrent_GC], Time(%), 0.0
[TOTAL_GCS_G1_Old_Generation], Count, 0
[TOTAL_GC_TIME_G1_Old_Generation], Time(ms), 0
[TOTAL_GC_TIME_%_G1_Old_Generation], Time(%), 0.0
[TOTAL_GCs], Count, 33
[TOTAL_GC_TIME], Time(ms), 97
[TOTAL_GC_TIME_%], Time(%), 0.03319269213264758
[CLEANUP], Operations, 16
[CLEANUP], AverageLatency(us), 317.0
[CLEANUP], MinLatency(us), 1
[CLEANUP], MaxLatency(us), 5043
[CLEANUP], 95thPercentileLatency(us), 6
[CLEANUP], 99thPercentileLatency(us), 5043
[INSERT], Operations, 500000
[INSERT], AverageLatency(us), 9326.298052
[INSERT], MinLatency(us), 2520
[INSERT], MaxLatency(us), 952319
[INSERT], 95thPercentileLatency(us), 12759
[INSERT], 99thPercentileLatency(us), 24175
[INSERT], Return=OK, 500000
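As a rough sanity check, the reported throughput matches the operation count divided by the run time: 500,000 inserts / 292.233 s ≈ 1,711 ops/sec.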
Run tests
Update the script parameters as required; you can pass parameters using named arguments, for example:
/bin/ycsb_run_operations.sh --mongodb-uri "mongodb://operator:5uw65oV1fvcaqzRqKU2dOuHtzwApSe11@mongodb-k8s-0.mongodb-k8s-endpoints,mongodb-k8s-1.mongodb-k8s-endpoints,mongodb-k8s-2.mongodb-k8s-endpoints:27017/admin?replicaSet=mongodb-k8s"
Please refer to the script for the rest of the variables, as well as to the YCSB documentation.
When the script completes, you should see output similar to this:
[OVERALL], RunTime(ms), 1035239
[OVERALL], Throughput(ops/sec), 965.9605173298147
[TOTAL_GCS_G1_Young_Generation], Count, 80
[TOTAL_GC_TIME_G1_Young_Generation], Time(ms), 145
[TOTAL_GC_TIME_%_G1_Young_Generation], Time(%), 0.014006427501282313
[TOTAL_GCS_G1_Concurrent_GC], Count, 0
[TOTAL_GC_TIME_G1_Concurrent_GC], Time(ms), 0
[TOTAL_GC_TIME_%_G1_Concurrent_GC], Time(%), 0.0
[TOTAL_GCS_G1_Old_Generation], Count, 0
[TOTAL_GC_TIME_G1_Old_Generation], Time(ms), 0
[TOTAL_GC_TIME_%_G1_Old_Generation], Time(%), 0.0
[TOTAL_GCs], Count, 80
[TOTAL_GC_TIME], Time(ms), 145
[TOTAL_GC_TIME_%], Time(%), 0.014006427501282313
[READ], Operations, 500286
[READ], AverageLatency(us), 401.2962045709854
[READ], MinLatency(us), 272
[READ], MaxLatency(us), 206335
[READ], 95thPercentileLatency(us), 496
[READ], 99thPercentileLatency(us), 673
[READ], Return=OK, 500286
[CLEANUP], Operations, 2
[CLEANUP], AverageLatency(us), 1427.5
[CLEANUP], MinLatency(us), 6
[CLEANUP], MaxLatency(us), 2849
[CLEANUP], 95thPercentileLatency(us), 2849
[CLEANUP], 99thPercentileLatency(us), 2849
[UPDATE], Operations, 499714
[UPDATE], AverageLatency(us), 3728.2676490952826
[UPDATE], MinLatency(us), 2058
[UPDATE], MaxLatency(us), 213375
[UPDATE], 95thPercentileLatency(us), 5559
[UPDATE], 99thPercentileLatency(us), 10071
[UPDATE], Return=OK, 499714
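Again, the overall throughput is consistent with the per-operation counts: (500,286 reads + 499,714 updates) / 1,035.239 s ≈ 966 ops/sec.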
Clean up the test database
Print the connection string on the host, then open a mongosh session inside the mongod container of the leader unit:
echo "$MONGODB_URI"
juju ssh --container=mongod mongodb-k8s/leader
mongosh "<result of the first command here>"
mongodb [primary] admin> show databases;
admin 524.00 KiB
config 224.00 KiB
local 4.59 MiB
ycsb 2.40 MiB
use ycsb
db.dropDatabase();
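To confirm that the test database was removed, you can list the databases again from the same mongosh session; ycsb should no longer appear:
show databases;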
Load testing with the NoSQLBench tool
Write data to the database
/var/load-testing/nb5 run driver=mongodb workload=mongodb-keyvalue2 tags=block:rampup cycles=50k --progress console:1s connection="$MONGODB_URI" database=perf-test
Run main activity
/var/load-testing/nb5 run driver=mongodb workload=mongodb-keyvalue2 tags='block:main.*' cycles=25k cyclerate=2500 threads=25 --progress console:1s connection="$MONGODB_URI" database=perf-test
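To verify that the workload actually wrote data, you can connect with mongosh (as in the cleanup section above) and inspect the perf-test database; the collection names created by the workload may vary:
use perf-test
show collections
db.stats();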
List of workloads and scenarios provided by NoSQLBench for MongoDB
To list the scenarios with their related workloads, execute:
/var/load-testing/nb5 --list-scenarios | grep mongodb
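If you want to inspect or customize one of the bundled workloads, NoSQLBench can also list the workload definitions and copy one into the current directory (assuming these flags are available in the bundled nb5 build):
/var/load-testing/nb5 --list-workloads
/var/load-testing/nb5 --copy mongodb-keyvalue2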