How to delete a Juju cluster

Hi all, I am totally new to Juju and I need to delete an existing Juju cluster (running in AWS; it was created by someone else a long time ago). I looked at the docs but I am still confused - which command lets me delete the cluster, including the security groups and everything else that was created for it?

Normally you destroy the model for the cluster first, then the controller afterwards. For the model: juju destroy-model --force -y. For the controller: juju kill-controller -y.

Thanks @Moula.

$ juju models
Controller: k8s-eu-west-1-18.04

Model       Cloud/Region   Type  Status     Machines  Cores  Units  Access  Last connection
controller  aws/eu-west-1  ec2   available         1      2  -      admin   just now
default*    aws/eu-west-1  ec2   available        12     19  24     admin   1 hour ago

Should I delete both controller and default?

From what I can see, there are no clusters (models) like Kubernetes, OpenStack, or MicroK8s anywhere - just the controller model.

Models :

  • Display the list of your models: $ juju models
  • To destroy a model, first switch to it: $ juju switch name-model
  • Then destroy it: $ juju destroy-model name-model --force -y

Juju-Controllers :

  • Display the Juju controllers: $ juju controllers
  • Switch to the controller you want to kill: $ juju switch name-controller
  • Kill the active controller: $ juju kill-controller name-controller -y
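
Putting those steps together, a minimal removal sequence might look like the sketch below. The model and controller names are the ones from this thread and are assumptions; adjust them to your environment. It is written as a dry run (each command is printed, not executed) - remove the echo prefixes to actually run it against a reachable controller.

```shell
# Dry-run sketch of the removal sequence described above. The names
# below are taken from this thread and are assumptions.
MODEL="default"
CONTROLLER="k8s-eu-west-1-18.04"

# 1. Destroy the workload model (releases its EC2 instances and SGs):
echo "juju switch $MODEL"
echo "juju destroy-model --force -y $MODEL"

# 2. Kill the controller (removes its machine and any remaining models):
echo "juju kill-controller -y $CONTROLLER"
```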

Hope that helps.

If you are completely done with Juju on AWS, then you should remove the controller (it will take all its models with it). If you want to continue with Juju but just remove the workload machines, then remove the default model (I see that there are 12 machines running in it). See the documentation on How to remove things.

To be clear, if you do not remove the controller there will be a machine in AWS that will continue to run (as the Juju controller).

Thanks @pmatulis and @Moula for your explanation.

So the relationship between the controller and the model (cluster?) is that one controller allows me to have many clusters (models?), and removing this particular controller will remove the controller's VM plus all the k8s clusters derived from it?

@pmatulis yes, we are completely done with juju on AWS

This is what we have:

$ juju models
Controller: k8s-eu-west-1-18.04

Model       Cloud/Region   Type  Status     Machines  Cores  Units  Access  Last connection
controller  aws/eu-west-1  ec2   available         1      2  -      admin   just now
default*    aws/eu-west-1  ec2   available        12     19  24     admin   17 hours ago

$ juju controllers
Use --refresh option with this command to see the latest information.

Controller             Model    User   Access     Cloud/Region   Models  Nodes    HA  Version
k8s-eu-west-1          default  admin  superuser  aws/eu-west-1       1      1  none  2.6.8  
k8s-eu-west-1-18.04*   default  admin  superuser  aws/eu-west-1       2     13     -  2.7.5  
k8s-eu-west-1-18.04.4  default  admin  superuser  aws/eu-west-1       1      1     -  2.7.1  
k8s-eu-west-1-18.04.5  default  admin  superuser  aws/eu-west-1       1      1     -  2.7.5  

(I have already used the --refresh option)

But it looks like we have just two EC2 instances tagged as Juju controllers in the AWS Console. Can I get more verbose output and see which EC2 instance ID belongs to each controller?

How do I clean it up ?

pshemekb@ip-192-168-201-100:~$ juju controllers --refresh
error connecting to api for "k8s-eu-west-1-18.04.5": opening API connection: dial tcp 252.201.194.1:17070: i/o timeout
error connecting to api for "k8s-eu-west-1-18.04.4": opening API connection: dial tcp 192.168.201.237:17070: i/o timeout
Controller             Model    User   Access     Cloud/Region   Models  Nodes  HA  Version
k8s-eu-west-1-18.04.4  default  admin  superuser  aws/eu-west-1       1      1   -  2.7.1  
k8s-eu-west-1-18.04.5  default  admin  superuser  aws/eu-west-1       1      1   -  2.7.5  

pshemekb@ip-192-168-201-100:~$ juju switch k8s-eu-west-1-18.04.4
 -> k8s-eu-west-1-18.04.4:admin/default
pshemekb@ip-192-168-201-100:~$ juju kill-controller -y k8s-eu-west-1-18.04.4
Unable to open API: open connection timed out
ERROR getting controller environ: unable to get bootstrap information from client store or API

According to this: https://juju.is/docs/olm/removing-things#heading--command-kill-controller it should always work.

It might help to go over these:

A model can contain a single machine if you want. So there is no inherent connection between a model and a “cluster”.

Okay.

All of the above output represents 16 EC2 instances (nodes: 1 + 13 + 1 + 1). Unless you removed machines manually in your AWS console, all of these machines should be running in AWS.

Yes. You can look at the output of the juju machines command. The AWS instance names may not be identical to what you see in the command’s output but they should be very similar. For example, to view the machines for the k8s-eu-west-1-18.04 controller, inspect the machines in both the controller model and the default model:

juju machines -m k8s-eu-west-1-18.04:controller
juju machines -m k8s-eu-west-1-18.04:default

The DNS and Inst id columns should help you find the instances in the console. They give IP addresses and EC2 instance IDs, respectively.
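
To pull just those two columns out for cross-referencing against the console, you can filter the tabular output. The sample below is hypothetical output following the Juju 2.x column layout (Machine, State, DNS, Inst id, Series, AZ, Message), not real data from this deployment:

```shell
# Hypothetical sample of `juju machines` tabular output (not from this
# deployment); real output has the same column order.
cat > machines.txt <<'EOF'
Machine  State    DNS           Inst id              Series  AZ          Message
0        started  54.170.10.11  i-0abc1234def56789a  bionic  eu-west-1a  running
1        started  54.170.10.12  i-0abc1234def56789b  bionic  eu-west-1b  running
EOF

# Print "instance-id -> IP" pairs, skipping the header row; the
# instance IDs can be pasted into the AWS console search box.
awk 'NR > 1 {print $4, "->", $3}' machines.txt
```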

Are those machines actually still running? Are you on the same machine that was originally used to create the controller? The connection timeouts would indicate that the machines are already gone. If they are (you should check the AWS dashboard for that), then you can just run juju unregister k8s-eu-west-1-18.04.4 and we will stop tracking that controller. We don't do this by default because we don't want a short-term disconnect to leave instances costing you money without you verifying that the machines really are gone.
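
For the two unreachable controllers in the output above, that clean-up might look like the sketch below; it is shown as a dry run (remove the echo to execute), on the assumption that unregister only edits the local client store and never contacts AWS.

```shell
# Drop stale client-side entries for controllers whose machines were
# already deleted in the AWS console. Dry run: remove `echo` to execute.
for c in k8s-eu-west-1-18.04.4 k8s-eu-west-1-18.04.5; do
    echo "juju unregister $c"
done
```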

As John said, this output suggests that the two referenced controller machines were manually removed via the AWS console, so Juju cannot find them. I also see that the other two controllers (k8s-eu-west-1 and k8s-eu-west-1-18.04) are now gone (did you remove them successfully?).

Use the information from my other recent comment to find the instances and remove them manually from the console. If the juju machines command does not work, then use the instance search facility in the AWS console to locate them (look for the string juju).
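
The same search can be done from a terminal with the AWS CLI. The sketch below is a dry run (remove the echo to execute); it assumes the CLI is configured for the right account and region, and that the instances carry Juju's usual Name tags containing the string juju:

```shell
# Dry run of an AWS CLI search for leftover Juju instances; remove
# the `echo` to execute. Assumes Name tags containing "juju" and the
# eu-west-1 region from this thread.
CMD='aws ec2 describe-instances --region eu-west-1 --filters "Name=tag:Name,Values=*juju*" --query "Reservations[].Instances[].[InstanceId,State.Name]" --output text'
echo "$CMD"
```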

@pnew In case you missed my tag on the workshop page: You might want to check out this workshop, which beginners tend to find very helpful: [workshop] Introduction to 2022, introduction to Juju .

Thank you @tmihoc and @pmatulis - I managed to delete and/or unregister them. I have confirmation that the timed-out controllers were manually removed from AWS. Thanks a lot again!

And thank you @jameinel and @Moula (as I cannot mention more than two people at once and had to post two messages).
