I recently deployed the charmed OpenStack bundle with the Octavia overlay via MAAS on a set of 6 machines (plus 1 Juju controller).
I then bootstrapped a new Juju controller for Kubernetes on OpenStack and deployed the charmed-kubernetes bundle with the openstack-lb overlay.
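For reference, the deploy steps were roughly the following (the bundle and overlay file names are from memory, so exact paths may differ):

juju deploy ./bundle.yaml --overlay ./octavia-overlay.yaml
juju bootstrap <openstack-cloud> k8s-controller
juju deploy charmed-kubernetes --overlay ./openstack-lb-overlay.yaml --trust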
Everything seemed to be working at first: all units came up fine except openstack-integrator, which was complaining about a load balancer error.
I sourced the openrc for the admin account and tried to create a load balancer myself via the openstack client, which also failed.
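The manual attempt looked something like this (test-lb and the subnet name are placeholders for illustration, not my exact values):

source admin-openrc.sh
openstack loadbalancer create --name test-lb --vip-subnet-id <private-subnet>
openstack loadbalancer show test-lb

The provisioning_status on this one ends up in ERROR too, just like the integrator-created load balancer shown below.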
After checking the logs, it seems the Octavia controller cannot connect to the amphora (the load balancer instance); a log snippet is below.
I tried pinging the amphora's address from the octavia LXD container and got no replies; pinging the router IP for the lb-management network worked fine.
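The connectivity check was roughly the following, run from the octavia unit (the amphora address is the one from the worker log below; the router address is a placeholder):

juju ssh octavia/0
ping6 -c 3 fc00:38c0:e726:4ae4:f816:3eff:fe52:ee20
ping6 -c 3 <lb-mgmt-router-ip>

The first ping gets no replies; the second one works fine.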
I’m not sure what else to try at this point; the documentation hasn’t been updated for focal/wallaby and doesn’t really help with this specific problem.
openstack loadbalancer list
+--------------------------------------+-----------------------------------------------------+----------------------------------+--------------+---------------------+------------------+----------+
| id | name | project_id | vip_address | provisioning_status | operating_status | provider |
+--------------------------------------+-----------------------------------------------------+----------------------------------+--------------+---------------------+------------------+----------+
| 618f6aca-415c-4b9c-85c2-1ac079861b6f | openstack-integrator-dc77a3aa4ce5-kubernetes-master | dbc6d3a569864840b4e53cdab2cc402a | 192.168.0.21 | ERROR | ONLINE | amphora |
+--------------------------------------+-----------------------------------------------------+----------------------------------+--------------+---------------------+------------------+----------+
tail /var/log/octavia/octavia-worker.log
2021-07-09 17:13:55.471 1301 ERROR oslo_messaging.rpc.server
2021-07-09 17:14:09.732 1301 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to instance. Retrying.: requests.exceptions.ConnectTimeout: HTTPSConnectionPool(host='fc00:38c0:e726:4ae4:f816:3eff:fe52:ee20', port=9443): Max retries exceeded with url: // (Caused by ConnectTimeoutError(<urllib3.connection.VerifiedHTTPSConnection object at 0x7fa9c7d22910>, 'Connection to fc00:38c0:e726:4ae4:f816:3eff:fe52:ee20 timed out. (connect timeout=10.0)'))