Logging in to MongoDB

The following script will SSH to a Juju 2.0 controller machine and start a mongo shell. It optionally takes the machine to connect to and the model name (defaulting to machine “0” in the model named “controller”).
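Both defaults rely on bash's `${parameter:-word}` expansion, which substitutes a fallback when the positional argument is absent. A minimal illustration (the `demo` function is just for show):

```shell
# ${1:-0} means "use $1, or 0 if $1 is unset or empty";
# ${2:-controller} works the same way for the model name.
demo() {
  machine=${1:-0}
  model=${2:-controller}
  echo "machine=$machine model=$model"
}
demo            # -> machine=0 model=controller
demo 1 mymodel  # -> machine=1 model=mymodel
```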

This is the one script to rule them all :face_vomiting:

#!/bin/bash

machine=${1:-0}
model=${2:-controller}

read -d '' -r cmds <<'EOF'
conf=/var/lib/juju/agents/machine-*/agent.conf
user=`sudo grep tag $conf | cut -d' ' -f2`
password=`sudo grep statepassword $conf | cut -d' ' -f2`
if [ -f /snap/bin/juju-db.mongo ]; then
  client=/snap/bin/juju-db.mongo
elif [ -f /usr/lib/juju/mongo*/bin/mongo ]; then
  client=/usr/lib/juju/mongo*/bin/mongo
else
  client=/usr/bin/mongo
fi
echo $user $password
$client 127.0.0.1:37017/juju --authenticationDatabase admin --ssl --sslAllowInvalidCertificates --username "$user" --password "$password"
EOF

out=$(juju ssh -m $model $machine "$cmds" || echo "DEATH")
if [ "${out}" == "DEATH" ]; then
  kubectl_bin=microk8s.kubectl
  k8s_ns=`juju whoami | grep Controller | awk '{print "controller-"$2}'`
  k8s_controller_pod=`${kubectl_bin} -n ${k8s_ns} get pods | grep -E "^controller-([0-9]+)" | awk '{print $1}'`
  echo "${k8s_controller_pod}"

  echo "Connecting to mongo-db instance at: ${k8s_ns}:${k8s_controller_pod}"
  mongo_user=`${kubectl_bin} exec -n ${k8s_ns} ${k8s_controller_pod} -c api-server -it -- bash -c "grep tag /var/lib/juju/agents/controller-*/agent.conf | cut -d' ' -f2 | tr -d '\n'"`
  mongo_pass=`${kubectl_bin} exec -n ${k8s_ns} ${k8s_controller_pod} -c api-server -it -- bash -c "grep statepassword /var/lib/juju/agents/controller-*/agent.conf | cut -d' ' -f2 | tr -d '\n'"`
  ${kubectl_bin} exec -n ${k8s_ns} ${k8s_controller_pod} -c mongodb -it -- bash -c "/bin/mongo 127.0.0.1:37017/juju --authenticationDatabase admin --ssl --sslAllowInvalidCertificates --username '${mongo_user}' --password '${mongo_pass}'"
fi
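The credential extraction in the script assumes `agent.conf` holds colon-separated lines like `tag: machine-0`, so `cut -d' ' -f2` grabs the value after the space. A sketch with an invented file (the values are made up; real files live under `/var/lib/juju/agents/` and contain many more keys):

```shell
# Invented agent.conf fragment for illustration only.
conf=$(mktemp)
cat > "$conf" <<'CONF'
tag: machine-0
statepassword: s3cret
CONF
# Same grep/cut pipeline as the script above.
user=$(grep tag "$conf" | cut -d' ' -f2)
password=$(grep statepassword "$conf" | cut -d' ' -f2)
echo "$user $password"   # -> machine-0 s3cret
```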
This is very similar to mine. Mine uses a PATH expansion to deal with the different mongo versions.

Also, I have it saved as juju-db in ~/bin which is in my PATH, which means I can do the following:

juju db
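This works because, as far as I know, juju treats any executable named `juju-<name>` found on PATH as a plugin and runs it for `juju <name>`. A sketch of the lookup with a throwaway script (paths invented for illustration):

```shell
# Simulate the plugin lookup: a script saved as juju-db on PATH.
bindir=$(mktemp -d)
printf '#!/bin/bash\necho "would open a mongo shell"\n' > "$bindir/juju-db"
chmod +x "$bindir/juju-db"
PATH="$bindir:$PATH"
juju-db   # "juju db" would exec this same file -> would open a mongo shell
```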

Script contents:

#!/bin/bash

machine=${1:-0}
model=${2:-controller}

echo machine $machine

read -d '' -r cmds <<'EOF'
conf=/var/lib/juju/agents/machine-*/agent.conf
user=`sudo grep tag $conf | cut -d' ' -f2`
password=`sudo grep statepassword $conf | cut -d' ' -f2`
PATH="$PATH:$(echo /usr/lib/juju/mongo*/bin)"
mongo 127.0.0.1:37017/juju --authenticationDatabase admin --ssl --sslAllowInvalidCertificates --username "$user" --password "$password"
EOF

juju ssh -m $model $machine "$cmds"

I think perhaps the echo left in there was from me debugging at some stage.


That does work, though on Bionic it adds a literal “/usr/lib/juju/mongo*/bin” to PATH if the directory doesn’t exist. It isn’t a huge deal, as a PATH entry that doesn’t exist doesn’t match anything. :slight_smile:
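That literal entry is just default bash glob behaviour: without `nullglob`, an unmatched pattern survives as its literal text. A quick demonstration with throwaway paths:

```shell
# Unmatched globs are left as literal strings unless nullglob is set.
tmp=$(mktemp -d)
mkdir -p "$tmp/mongo-3.6/bin"
echo "$tmp"/mongo-*/bin     # matches, so it expands to the real directory
echo "$tmp"/nomatch-*/bin   # no match: the literal pattern is printed
```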

Here’s a slightly cleaner/more secure version, based on the snippets found here and elsewhere while searching for a solution:

#!/bin/bash

machine="${1:-0}"
model="${2:-controller}"
juju=$(command -v juju)

read -d '' -r cmds <<'EOF'
conf=/var/lib/juju/agents/machine-*/agent.conf
user=$(sudo awk '/tag/ {print $2}' $conf)
password=$(sudo awk '/statepassword/ {print $2}' $conf)
client=$(command -v mongo)
"$client" 127.0.0.1:37017/juju --authenticationDatabase admin --ssl --sslAllowInvalidCertificates --username "$user" --password "$password"
EOF

"$juju" ssh -m "$model" "$machine" "$cmds"
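The `command -v` calls used here resolve a name to the executable (or builtin) that would actually run, exiting non-zero when nothing is found, which makes them a handy "is this installed?" check:

```shell
# command -v prints the resolved path and fails cleanly when absent.
command -v sh                                   # e.g. /bin/sh
command -v no-such-cmd-xyz || echo "not found"  # -> not found
```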

Thank you for sharing +1

I have a lxd cluster.

I enabled high availability with enable-ha.

The agent.conf is not on every host…

But inside the agent.conf I found apiaddresses that no longer exist. (I restarted the LXD server with the Juju controller.)

And I can’t find a statepassword in the agent.conf.
There is only an apipassword, an oldpassword and a cacert.

juju ssh didn’t work in my case, because I also deleted an LXD server from the LXD cluster.
The deleted LXD server still seems to be referenced in the MongoDB configuration.

Now I will enter the Juju controller with lxc exec ... bash and see whether any of the passwords lets me connect to mongo.

Is the MongoDB automatically replicated, or do I need to change every instance?

… I will report soon, what i could find out …

okay, thank you.

rick_h_ from freenode helped me find the statepassword.

It was inside the controller container, not on the host where I had searched for it.

:slight_smile:

EDIT: the script at the top of this page has been updated to work with mongo db instances on k8s and should be used instead of the one suggested below.

If you are trying to access a mongodb instance on k8s, the above scripts won’t do the trick for you.
However, this will work (replace kubectl_bin accordingly if not using microk8s via snap):

#!/bin/bash

kubectl_bin=microk8s.kubectl
k8s_ns=`juju whoami | grep Controller | awk '{print "controller-"$2}'`
k8s_controller_pod=`${kubectl_bin} -n ${k8s_ns} get pods | awk 'NR==2 {print $1}'`

echo "Connecting to mongo-db instance at: ${k8s_ns}:${k8s_controller_pod}"
mongo_user=`${kubectl_bin} exec -n ${k8s_ns} ${k8s_controller_pod} -c api-server -it -- bash -c "grep tag /var/lib/juju/agents/controller-*/agent.conf | cut -d' ' -f2 | tr -d '\n'"`
mongo_pass=`${kubectl_bin} exec -n ${k8s_ns} ${k8s_controller_pod} -c api-server -it -- bash -c "grep statepassword /var/lib/juju/agents/controller-*/agent.conf | cut -d' ' -f2 | tr -d '\n'"`
${kubectl_bin} exec -n ${k8s_ns} ${k8s_controller_pod} -c mongodb -it -- bash -c "/usr/bin/mongo 127.0.0.1:37017/juju --authenticationDatabase admin --ssl --sslAllowInvalidCertificates --username '${mongo_user}' --password '${mongo_pass}'"
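The pod lookup here assumes the controller pod is the first entry after the kubectl header row: `awk 'NR==2 {print $1}'` simply prints field 1 of the second line of output. Simulated `get pods` output (pod names invented):

```shell
# NR==2 selects the second line, i.e. the first pod after the header.
printf 'NAME            READY\ncontroller-0    1/1\nmodeloperator-0 1/1\n' \
  | awk 'NR==2 {print $1}'   # -> controller-0
```

The updated script at the top of the thread matches `^controller-` with grep instead, which does not depend on row order.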