How to reduce the number of master servers in k8s?

We created a Kubernetes cluster with Juju, and by default it deployed two masters. The whole cluster runs in LXD containers on a single physical server, so there is no point in having multiple masters.
How do I reduce the number of masters to just one with Juju?

To answer my own question, I used:

juju remove-unit kubernetes-master/1
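
For reference, remove-unit takes one or more unit names; if you are not sure which unit indices exist, you can filter the status output by application name first:

juju status kubernetes-master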

After running that command, however, I now have the following:

$ juju status 
Model    Controller      Cloud/Region         Version  SLA          Timestamp
default  bob-controller  localhost/localhost  2.9.32   unsupported  15:33:21Z

App                    Version   Status   Scale  Charm                  Channel   Rev  Exposed  Message
containerd             go1.13.8  active       2  containerd             stable    178  no       Container runtime available
easyrsa                3.0.1     active       1  easyrsa                stable    420  no       Certificate Authority connected.
etcd                   3.4.5     active       3  etcd                   stable    634  no       Healthy with 3 known peers
flannel                0.11.0    active       2  flannel                stable    597  no       Flannel subnet 10.1.38.1/24
kubeapi-load-balancer  1.18.0    active       1  kubeapi-load-balancer  stable    844  yes      Loadbalancer ready.
kubernetes-master      1.22.15   error        2  kubernetes-master      stable   1078  no       hook failed: "loadbalancer-internal-relation-broken"
kubernetes-worker      1.22.15   waiting      1  kubernetes-worker      stable    816  yes      Waiting for cluster credentials.

Unit                      Workload  Agent  Machine  Public address  Ports             Message
easyrsa/0*                active    idle   0        10.50.30.221                      Certificate Authority connected.
etcd/0                    active    idle   1        10.50.30.201    2379/tcp          Healthy with 3 known peers
etcd/1                    active    idle   2        10.50.30.248    2379/tcp          Healthy with 3 known peers
etcd/2*                   active    idle   3        10.50.30.219    2379/tcp          Healthy with 3 known peers
kubeapi-load-balancer/0*  active    idle   4        10.50.30.237    443/tcp,6443/tcp  Loadbalancer ready.
kubernetes-master/0       active    idle   5        10.50.30.243    6443/tcp          Kubernetes master running.
  containerd/4*           active    idle            10.50.30.243                      Container runtime available
  flannel/4               active    idle            10.50.30.243                      Flannel subnet 10.1.34.1/24
kubernetes-master/1*      error     idle   6        10.50.30.250    6443/tcp          hook failed: "loadbalancer-internal-relation-broken"
kubernetes-worker/0*      waiting   idle   7        10.50.30.204    80/tcp,443/tcp    Waiting for cluster credentials.
  containerd/1            active    idle            10.50.30.204                      Container runtime available
  flannel/1*              active    idle            10.50.30.204                      Flannel subnet 10.1.38.1/24

Machine  State    Address       Inst id        Series  AZ  Message
0        started  10.50.30.221  juju-e1704d-0  focal       Running
1        started  10.50.30.201  juju-e1704d-1  focal       Running
2        started  10.50.30.248  juju-e1704d-2  focal       Running
3        started  10.50.30.219  juju-e1704d-3  focal       Running
4        started  10.50.30.237  juju-e1704d-4  focal       Running
5        started  10.50.30.243  juju-e1704d-5  focal       Running
6        started  10.50.30.250  juju-e1704d-6  focal       Running
7        started  10.50.30.204  juju-e1704d-7  focal       Running

The kubernetes-master/1 unit I tried to remove is now stuck in an error state. How do I fix the hook failed: "loadbalancer-internal-relation-broken" error so that the removal can complete?
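
From what I understand so far, a unit that is stuck in a hook error cannot finish being removed until the error is cleared. The generic Juju recovery steps I am considering (just a sketch of the standard commands, nothing specific to the kubernetes-master charm) are:

juju debug-log --include unit-kubernetes-master-1 --replay   # inspect why the hook failed
juju resolved kubernetes-master/1                            # retry the failed hook; add --no-retry to mark it resolved without re-running it
juju remove-unit --force kubernetes-master/1                 # last resort if the unit still refuses to go away

Is that the right approach here, or is there a cleaner way to get rid of the second master?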