I’ve been consistently struggling with the timeouts with sunbeam bootstrap as well.
I’m uncovering a lot of findings along the way, though:
- There are substantial differences in `sunbeam cluster bootstrap` between 2023.2/edge and 2023.1/edge.
Example 2023.2/edge:

```
ubuntu@opnstk-server-vm:~$ sunbeam cluster bootstrap --help
Usage: sunbeam cluster bootstrap [OPTIONS]

  Bootstrap the local node.

  Initialize the sunbeam cluster.

Options:
  -a, --accept-defaults           Accept all defaults.
  -m, --manifest FILE             Manifest file.
  --role [control|compute|storage]
                                  Specify additional roles, compute or
                                  storage, for the bootstrap node. Defaults to
                                  the compute role.
  --topology [auto|single|multi|large]
                                  Allows definition of the intended cluster
                                  configuration: 'auto' for automatic
                                  determination, 'single' for a single-node
                                  cluster, 'multi' for a multi-node cluster,
                                  'large' for a large scale cluster
  --database [auto|single|multi]  Allows definition of the intended cluster
                                  configuration: 'auto' for automatic
                                  determination, 'single' for a single
                                  database, 'multi' for a database per
                                  service,
  -h, --help                      Show this message and exit.
```
Example 2023.1/edge:

```
ubuntu@opnstk-server-vm:~$ sunbeam cluster bootstrap --help
Usage: sunbeam cluster bootstrap [OPTIONS]

  Bootstrap the local node.

  Initialize the sunbeam cluster.

Options:
  -a, --accept-defaults           Accept all defaults.
  -p, --preseed FILE              Preseed file.
  --role [control|compute|storage]
                                  Specify additional roles, compute or
                                  storage, for the bootstrap node. Defaults to
                                  the compute role.
  --topology [auto|single|multi|large]
                                  Allows definition of the intended cluster
                                  configuration: 'auto' for automatic
                                  determination, 'single' for a single-node
                                  cluster, 'multi' for a multi-node cluster,
                                  'large' for a large scale cluster
  --database [auto|single|multi]  Allows definition of the intended cluster
                                  configuration: 'auto' for automatic
                                  determination, 'single' for a single
                                  database, 'multi' for a database per
                                  service,
  -h, --help                      Show this message and exit.
```
I’ve been trying to set up a preseed (or manifest) file; seemingly the name of that flag is what changed between the versions. The file takes the shape of:
```yaml
bootstrap:
  management_cidr: A.B.C.D/21
addons:
  metallb: 10.20.21.10-10.20.21.20
user:
  run_demo_setup: True
  username: demo
  password: testtesttest
  cidr: 192.168.122.0/24
  nameservers: 8.8.8.8
  security_group_rules: True
  remote_access_location: local
external_network:
  cidr: A.B.C.D/21
  gateway: A.B.C.D
  start: A.B.C.D
  end: A.B.C.D
  network_type: flat
  segmentation_id: 0
  nic: enp2s0
  physical_network: physnet1
microceph_config:
  opnstk-server-vm:
    osd_devices: /dev/vdc /dev/vdd /dev/vde
```
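Since the flag name differs between channels, in my own wrapper scripts I’ve started sniffing the installed snap’s help text instead of hard-coding `--preseed` or `--manifest`. A minimal sketch (my own helper, not anything sunbeam ships; it assumes the rename is the only difference that matters):

```shell
#!/bin/sh
# Pick the bootstrap-file flag for whichever channel is installed:
# 2023.1/edge calls it --preseed, 2023.2/edge renamed it to --manifest.
pick_flag() {
  # $1: the output of `sunbeam cluster bootstrap --help`
  if printf '%s' "$1" | grep -q -- '--manifest'; then
    echo '--manifest'
  else
    echo '--preseed'
  fi
}

# Usage on a sunbeam host (falls back to --preseed if the help text
# can't be read at all):
help_text="$(/snap/bin/sunbeam cluster bootstrap --help 2>/dev/null)"
echo "flag: $(pick_flag "$help_text")"
```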
- With a preseed / manifest file, the invocation also takes a different shape when bootstrapping. Especially if these tasks to stand up MicroStack Sunbeam are to be automated, I found that it needs to look something like the following:
A. On 2023.2/edge:

```yaml
- name: bootstrap openstack
  become_user: ubuntu
  ansible.builtin.shell: |
    echo "n" | /snap/bin/sunbeam cluster bootstrap --manifest /home/ubuntu/generated-preseed.yaml
  # ansible.builtin.shell: |
  #   /snap/bin/sunbeam cluster bootstrap --accept-defaults

- name: configure openstack
  become_user: ubuntu
  ansible.builtin.shell: |
    /snap/bin/sunbeam configure --manifest /home/ubuntu/generated-preseed.yaml --openrc demo-openrc
```
(Note the `echo "n" |` there: otherwise, with 2023.2/edge we’re left in the dark, with seemingly no way to configure it *not* to prompt us about the proxy setting.) Without the `echo "n" |` we’ll see:

```
ubuntu@opnstk-server-vm:~$ /snap/bin/sunbeam cluster bootstrap --manifest /home/ubuntu/generated-preseed.yaml
Configure proxy for access to external network resources? [y/n] (n): ^C
Aborted!
```

(I hit Ctrl+C there to bail out early.)
B. With 2023.1/edge it’s just:

```yaml
- name: bootstrap openstack
  become_user: ubuntu
  ansible.builtin.shell: |
    /snap/bin/sunbeam cluster bootstrap --preseed /home/ubuntu/generated-preseed.yaml

- name: configure openstack
  become_user: ubuntu
  ansible.builtin.shell: |
    /snap/bin/sunbeam configure --preseed /home/ubuntu/generated-preseed.yaml --openrc demo-openrc
```
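Since the only difference between A and B is the flag name, I’ve been collapsing the two into one parameterised task. A sketch, where `sunbeam_channel` is a variable I set myself in the playbook (not anything sunbeam provides), and the `echo "n"` should be harmless on 2023.1/edge since that channel never shows the proxy prompt:

```yaml
# Hypothetical unified task: derive the flag name from the channel the
# playbook installed, since only the flag name changed between channels.
- name: bootstrap openstack
  become_user: ubuntu
  vars:
    bootstrap_flag: "{{ '--manifest' if sunbeam_channel == '2023.2/edge' else '--preseed' }}"
  ansible.builtin.shell: |
    echo "n" | /snap/bin/sunbeam cluster bootstrap {{ bootstrap_flag }} /home/ubuntu/generated-preseed.yaml
```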
But I’ve never actually been able to get the deploy to finish successfully. I’m wondering if there is anything else missing from the sunbeam prepare-node-script, i.e. whether either 2023.1/edge or 2023.2/edge is even supposed to work with “current” Jammy builds…
Historically it only ever hangs for me at:

```
Deploying OpenStack Control Plane to Kubernetes (this may take a while) ... waiting for services to come online (22/30)
```

And then, after an hour and some change, it just falls flat with error messages similar to the ones you’re seeing.
I’ve not been able to get past it no matter which networking variants I supply.
Though it does seem to “provision more”, like (24/30), on 2023.1/edge vs. 2023.2/edge.
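For anyone else stuck at that “(22/30)” stage: what I’ve resorted to is listing and counting the pods that aren’t `Running` yet underneath sunbeam. This is only a sketch, and it assumes the microk8s backend and an `openstack` namespace (both assumptions on my part; adjust to your install):

```shell
#!/bin/sh
# Count pods that aren't in the Running state yet, reading
# `kubectl get pods --no-headers` output on stdin. In that output the
# STATUS column is the third field.
count_not_ready() {
  awk '$3 != "Running" { n++ } END { print n + 0 }'
}

# Live usage on a sunbeam host (assumed backend/namespace, so commented out):
# sudo microk8s.kubectl get pods -n openstack --no-headers | count_not_ready
# sudo microk8s.kubectl get pods -n openstack | grep -v Running
```

Seeing *which* services never come online (rather than just the 22/30 counter) is what I’d want before filing a bug.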