So today I created a new environment, so everything is clean. I created the model with

juju add-model --config default-series=focal --config default-space=maas openstack

The output of juju model-config looks like this:
Attribute From Value
agent-metadata-url default ""
agent-stream default released
agent-version model 2.9.0
apt-ftp-proxy default ""
apt-http-proxy default ""
apt-https-proxy default ""
apt-mirror default ""
apt-no-proxy default ""
automatically-retry-hooks default true
backup-dir default ""
charmhub-url default https://api.charmhub.io
cloudinit-userdata default ""
container-image-metadata-url default ""
container-image-stream default released
container-inherit-properties default ""
container-networking-method model provider
default-series default focal
default-space controller maas
development default false
disable-network-management default false
egress-subnets default ""
enable-os-refresh-update default true
enable-os-upgrade default true
fan-config default ""
firewall-mode default instance
ftp-proxy default ""
http-proxy default ""
https-proxy default ""
ignore-machine-addresses default false
image-metadata-url default ""
image-stream default released
juju-ftp-proxy default ""
juju-http-proxy default ""
juju-https-proxy default ""
juju-no-proxy default 127.0.0.1,localhost,::1
logforward-enabled default false
logging-config model <root>=INFO
lxd-snap-channel default latest/stable
max-action-results-age default 336h
max-action-results-size default 5G
max-status-history-age default 336h
max-status-history-size default 5G
net-bond-reconfigure-delay default 17
no-proxy default 127.0.0.1,localhost,::1
provisioner-harvest-mode default destroyed
proxy-ssh default false
resource-tags model {}
snap-http-proxy default ""
snap-https-proxy default ""
snap-store-assertions default ""
snap-store-proxy default ""
snap-store-proxy-url default ""
ssl-hostname-verification default true
storage-default-block-source model maas
test-mode default false
transmit-vendor-metrics default true
update-status-hook-interval default 5m
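One thing worth confirming before going further: the default-space row above reads maas, and that attribute can be queried or set on its own (standard juju model-config usage; maas is the space name from the output above):

```shell
# Query just the default-space attribute of the current model.
juju model-config default-space

# If it were ever unset, it could be pointed at the intended space explicitly.
juju model-config default-space=maas
```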
The output of juju spaces looks like this:
Name       Space ID  Subnets
alpha      0
maas       1         192.168.122.0/24
                     2001:1a80:30b4:2::/64
abc        3         192.168.0.0/20
openstack  2         192.168.100.0/24
Then I added a machine with

juju add-machine --constraints spaces=maas,openstack,abc
The output of juju status now looks like this:
Model      Controller       Cloud/Region    Version  SLA          Timestamp
openstack  maas-controller  mymaas/default  2.9.0    unsupported  07:45:21Z

Machine  State    DNS            Inst id  Series  AZ       Message
0        started  192.168.100.1  node0    focal   default  Deployed
Here I still have a question: why is the machine shown with an address from the 192.168.100.0/24 subnet and not from the 192.168.122.0/24 subnet? I would have guessed the latter, because the default space's name is maas, and even if both spaces are present on the machine, it would still be convenient if the address from the default space were the one shown.
The output of juju show-machine 0 looks like this:
machines:
  "0":
    juju-status:
      current: started
      since: 20 Aug 2021 07:43:50Z
      version: 2.9.0
    hostname: node0
    dns-name: 192.168.100.1
    ip-addresses:
    - 192.168.100.1
    - 192.168.122.159
    - 192.168.7.150
    - 2001:1a80:30b4:2:5054:ff:fe6d:43e3
    instance-id: f8cbwb
    display-name: node0
    machine-status:
      current: running
      message: Deployed
      since: 20 Aug 2021 07:41:59Z
    modification-status:
      current: idle
      since: 20 Aug 2021 07:35:04Z
    series: focal
    network-interfaces:
      bond0:
        ip-addresses:
        - 192.168.100.1
        mac-address: 52:54:00:f4:b8:3d
        space: openstack
        is-up: true
      ens3:
        ip-addresses:
        - 192.168.122.159
        mac-address: 52:54:00:a6:1e:5b
        space: maas
        is-up: true
      ens14:
        ip-addresses:
        - 192.168.7.150
        - 2001:1a80:30b4:2:5054:ff:fe6d:43e3
        mac-address: 52:54:00:6d:43:e3
        gateway: gateway ip
        space: abc
        is-up: true
    constraints: spaces=maas,openstack,abc
    hardware: arch=amd64 cores=2 mem=8192M tags=virtual availability-zone=default
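To see at a glance which address belongs to which space, the show-machine YAML can be filtered with a small awk script. This is just a sketch: juju_show_machine is a stand-in that replays a trimmed copy of the output above so the filter can be tried without a controller; in practice you would pipe juju show-machine 0 --format yaml into the same awk.

```shell
# Stand-in for `juju show-machine 0 --format yaml`: replays a trimmed
# copy of the output above so the filter below works offline.
juju_show_machine() {
cat <<'EOF'
machines:
  "0":
    network-interfaces:
      bond0:
        ip-addresses:
        - 192.168.100.1
        space: openstack
      ens3:
        ip-addresses:
        - 192.168.122.159
        space: maas
EOF
}

# Pair each interface's space with its first address: remember the first
# "- <addr>" item after an "ip-addresses:" key, emit it at the "space:" key.
juju_show_machine | awk '
  /ip-addresses:/       { addr = "" }
  /^[[:space:]]*- /     { if (addr == "") addr = $2 }
  /^[[:space:]]*space:/ { print $2, addr }
'
# prints:
#   openstack 192.168.100.1
#   maas 192.168.122.159
```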
After that I deployed four ceph-mon applications with four different binding configurations:
juju deploy ceph-mon --to lxd:0
juju deploy ceph-mon ceph-mon1 --to lxd:0 --bind "client=openstack admin=maas"
juju deploy ceph-mon ceph-mon2 --to lxd:0 --bind "maas client=openstack admin=maas"
juju deploy ceph-mon ceph-mon3 --to lxd:0 --bind "maas client=openstack"
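For reference on the --bind forms used above: the argument is a quoted list in which an optional bare space name sets the application-wide default and endpoint=space pairs override it per endpoint; any endpoint not listed falls back to the model's default-space (maas in this model). A minimal sketch, with demo-mon as a hypothetical application name:

```shell
# Bare leading space name = default binding for all endpoints of this app;
# "client" is overridden to openstack, every other endpoint lands in maas.
# demo-mon is a placeholder name for illustration.
juju deploy ceph-mon demo-mon --to lxd:0 --bind "maas client=openstack"

# The effective endpoint bindings can be inspected afterwards.
juju show-application demo-mon
```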
The output of juju status now looks like this:
Model      Controller       Cloud/Region    Version  SLA          Timestamp
openstack  maas-controller  mymaas/default  2.9.0    unsupported  10:59:38Z

App        Version  Status   Scale  Charm     Store     Channel  Rev  OS      Message
ceph-mon   10.2.11  blocked  1      ceph-mon  charmhub  stable   68   ubuntu  Insufficient peer units to bootstrap cluster (require 3)
ceph-mon1  10.2.11  blocked  1      ceph-mon  charmhub  stable   68   ubuntu  Insufficient peer units to bootstrap cluster (require 3)
ceph-mon2  10.2.11  blocked  1      ceph-mon  charmhub  stable   68   ubuntu  Insufficient peer units to bootstrap cluster (require 3)
ceph-mon3  10.2.11  blocked  1      ceph-mon  charmhub  stable   68   ubuntu  Insufficient peer units to bootstrap cluster (require 3)

Unit          Workload  Agent  Machine  Public address  Ports  Message
ceph-mon1/0*  blocked   idle   0/lxd/1  192.168.100.2          Insufficient peer units to bootstrap cluster (require 3)
ceph-mon2/0*  blocked   idle   0/lxd/2  192.168.100.3          Insufficient peer units to bootstrap cluster (require 3)
ceph-mon3/0*  blocked   idle   0/lxd/3  192.168.100.4          Insufficient peer units to bootstrap cluster (require 3)
ceph-mon/0*   blocked   idle   0/lxd/0  192.168.122.67         Insufficient peer units to bootstrap cluster (require 3)

Machine  State    DNS             Inst id              Series  AZ       Message
0        started  192.168.100.1   node0                focal   default  Deployed
0/lxd/0  started  192.168.122.67  juju-17844d-0-lxd-0  xenial  default  Container started
0/lxd/1  started  192.168.100.2   juju-17844d-0-lxd-1  xenial  default  Container started
0/lxd/2  started  192.168.100.3   juju-17844d-0-lxd-2  xenial  default  Container started
0/lxd/3  started  192.168.100.4   juju-17844d-0-lxd-3  xenial  default  Container started
As you can see, the first one was deployed in the right network space (the maas space), but the second and third were not deployed in the right space, or their DNS entries are simply taken from the wrong subnet. Maybe a way to look up the other network addresses of the containers would be nice.
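Until something like that exists in juju status itself, every address a container holds is already visible via juju show-machine (standard command; the container IDs are taken from the status output above):

```shell
# Print all IP addresses of each container, not only the single one that
# `juju status` chooses as the DNS entry.
for m in 0/lxd/0 0/lxd/1 0/lxd/2 0/lxd/3; do
  echo "== $m =="
  juju show-machine "$m" --format yaml | grep -A 4 'ip-addresses:'
done
```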
Thanks for your time!