What are the best practices to shut down Juju without losing data?

I am following this tutorial, https://www.youtube.com/watch?v=sLADei_c9Qg, to deploy Kubernetes with MAAS and Juju.

I am running Juju with 3 MAAS VMs and several LXD containers on my local home server, and I was wondering if there is a “proper” way to switch everything off when I want to shut down my server, without losing data.

I found this post, https://askubuntu.com/q/412112/1578907, but it is 8 years old; is there perhaps a more “modern” way of doing it these days?
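For context, this is the naive sequence I would try myself, loosely adapted from that old answer: set Ceph's noout flag so the cluster doesn't start rebalancing while the OSDs are offline, power the workload machines off over SSH, and shut the controller down last. The machine numbers come from my juju status output further down, and I'm assuming the controller is machine 0 in the controller model; I'm not at all sure this ordering (or the manual shutdown itself) is actually safe:

# Tell Ceph this is planned maintenance so it doesn't rebalance while OSDs are down
juju ssh ceph-mon/0 'sudo ceph osd set noout'

# Power off the workload machines (LXD containers go down with their host)
juju ssh 1 'sudo shutdown -h now'   # metal-1: kubernetes-worker/0, ceph-osd/1, ceph-mon/1
juju ssh 2 'sudo shutdown -h now'   # metal-2: ceph-osd/2, ceph-mon/2
juju ssh 0 'sudo shutdown -h now'   # metal-3: kubernetes-master/0, etcd/0, ceph-osd/0, ceph-mon/0

# Finally, power off the controller (assuming it is machine 0 in the controller model)
juju ssh -m controller 0 'sudo shutdown -h now'

# After powering the machines back on through MAAS, clear the flag again
juju ssh ceph-mon/0 'sudo ceph osd unset noout'

Is something along these lines reasonable, or is there a built-in / recommended way (for example through MAAS power management) that I am missing? Here is the current state of the deployment: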

❯ juju status
Model    Controller          Cloud/Region        Version  SLA          Timestamp
default  maas-cloud-default  maas-cloud/default  2.9.26   unsupported  17:06:36Z

App                Version   Status  Scale  Charm              Store       Channel   Rev  OS      Message
ceph-mon           15.2.14   active      3  ceph-mon           charmhub    stable     73  ubuntu  Unit is ready and clustered
ceph-osd           15.2.14   active      3  ceph-osd           charmstore  stable    316  ubuntu  Unit is ready (1 OSD)
containerd         go1.13.8  active      2  containerd         charmstore  stable    200  ubuntu  Container runtime available
easyrsa            3.0.1     active      1  easyrsa            charmstore  stable    441  ubuntu  Certificate Authority connected.
etcd               3.4.5     active      1  etcd               charmstore  stable    655  ubuntu  Healthy with 1 known peer
flannel            0.11.0    active      2  flannel            charmstore  stable    619  ubuntu  Flannel subnet 10.1.85.1/24
kubernetes-master  1.23.4    active      1  kubernetes-master  charmstore  stable   1106  ubuntu  Kubernetes master running.
kubernetes-worker  1.23.4    active      1  kubernetes-worker  charmstore  stable    838  ubuntu  Kubernetes worker running.

Unit                  Workload  Agent  Machine  Public address  Ports           Message
ceph-mon/0            active    idle   0/lxd/0  10.10.10.26                     Unit is ready and clustered
ceph-mon/1            active    idle   1/lxd/0  10.10.10.24                     Unit is ready and clustered
ceph-mon/2*           active    idle   2/lxd/0  10.10.10.25                     Unit is ready and clustered
ceph-osd/0*           active    idle   0        10.10.10.23                     Unit is ready (1 OSD)
ceph-osd/1            active    idle   1        10.10.10.21                     Unit is ready (1 OSD)
ceph-osd/2            active    idle   2        10.10.10.22                     Unit is ready (1 OSD)
easyrsa/0*            active    idle   0/lxd/1  10.10.10.27                     Certificate Authority connected.
etcd/0*               active    idle   0        10.10.10.23     2379/tcp        Healthy with 1 known peer
kubernetes-master/0*  active    idle   0        10.10.10.23     6443/tcp        Kubernetes master running.
  containerd/1        active    idle            10.10.10.23                     Container runtime available
  flannel/1           active    idle            10.10.10.23                     Flannel subnet 10.1.31.1/24
kubernetes-worker/0*  active    idle   1        10.10.10.21     80/tcp,443/tcp  Kubernetes worker running.
  containerd/0*       active    idle            10.10.10.21                     Container runtime available
  flannel/0*          active    idle            10.10.10.21                     Flannel subnet 10.1.85.1/24

Machine  State    DNS          Inst id              Series  AZ       Message
0        started  10.10.10.23  metal-3              focal   default  Deployed
0/lxd/0  started  10.10.10.26  juju-b853ba-0-lxd-0  focal   default  Container started
0/lxd/1  started  10.10.10.27  juju-b853ba-0-lxd-1  focal   default  Container started
1        started  10.10.10.21  metal-1              focal   default  Deployed
1/lxd/0  started  10.10.10.24  juju-b853ba-1-lxd-0  focal   default  Container started
2        started  10.10.10.22  metal-2              focal   default  Deployed
2/lxd/0  started  10.10.10.25  juju-b853ba-2-lxd-0  focal   default  Container started
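And once everything is powered back on, would checks like the ones below be enough to confirm nothing was lost, or should I be verifying more? (This is just my guess at a sanity check; kubectl would be run wherever it is configured for this cluster.)

# Rough post-boot sanity checks, not sure they are sufficient
juju status
juju ssh ceph-mon/0 'sudo ceph status'   # expecting HEALTH_OK once the OSDs rejoin
kubectl get nodes                        # worker and master should report Ready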