Unable to create load balancers with the Octavia overlay on Charmed OpenStack

I’ve recently deployed the Charmed OpenStack bundle with the Octavia overlay via MAAS on a set of 6 machines (plus 1 Juju controller).

I then created a new Juju controller on the OpenStack cloud and deployed the charmed-kubernetes bundle with the openstack-lb overlay.
Everything seemed to be working fine; all units were healthy except the openstack-integrator, which was reporting a load balancer error.
I sourced the openrc for the admin profile and tried to build my own load balancer via the OpenStack client, which also failed.
Checking the logs, it appears the Octavia controller cannot connect to the LB (amphora) instance. (Log snippet below.)

I tried pinging the amphora address from the Octavia LXD container and got no replies; pinging the router IP on the lb-management network worked fine.
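For reference, the checks I ran from the Octavia unit looked roughly like this (unit name and the amphora address are from my deployment; the router address is a placeholder):

```shell
# SSH into the octavia unit
juju ssh octavia/0

# From inside the unit: ping the amphora's lb-mgmt address (no replies)
ping6 -c 3 fc00:38c0:e726:4ae4:f816:3eff:fe52:ee20

# Ping the lb-management network's router (this worked)
ping6 -c 3 <lb-mgmt-router-address>

# The amphora agent listens on TCP 9443, so a TCP-level probe is more telling
nc -zv -w 5 fc00:38c0:e726:4ae4:f816:3eff:fe52:ee20 9443
```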

I’m not sure what else to try at this point; the documentation hasn’t been updated for focal/wallaby and doesn’t really help with this specific problem.

openstack loadbalancer list

+--------------------------------------+-----------------------------------------------------+----------------------------------+--------------+---------------------+------------------+----------+
| id                                   | name                                                | project_id                       | vip_address  | provisioning_status | operating_status | provider |
+--------------------------------------+-----------------------------------------------------+----------------------------------+--------------+---------------------+------------------+----------+
| 618f6aca-415c-4b9c-85c2-1ac079861b6f | openstack-integrator-dc77a3aa4ce5-kubernetes-master | dbc6d3a569864840b4e53cdab2cc402a | 192.168.0.21 | ERROR               | ONLINE           | amphora  |
+--------------------------------------+-----------------------------------------------------+----------------------------------+--------------+---------------------+------------------+----------+

tail /var/log/octavia/octavia-worker.log

2021-07-09 17:13:55.471 1301 ERROR oslo_messaging.rpc.server
2021-07-09 17:14:09.732 1301 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to instance. Retrying.: requests.exceptions.ConnectTimeout: HTTPSConnectionPool(host='fc00:38c0:e726:4ae4:f816:3eff:fe52:ee20', port=9443): Max retries exceeded with url: // (Caused by ConnectTimeoutError(<urllib3.connection.VerifiedHTTPSConnection object at 0x7fa9c7d22910>, 'Connection to fc00:38c0:e726:4ae4:f816:3eff:fe52:ee20 timed out. (connect timeout=10.0)'))

Did you try creating an LB with the OpenStack CLI or through Horizon to make sure Octavia itself is working?
I had many issues with Octavia that were fixed recently, but it was a long road getting there.
The last issue I encountered was that enable-ml2-port-security was not enabled, which made Octavia fail when creating a new LB.
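If it helps, that option lives on the neutron-api charm (assuming the standard application name from the bundle):

```shell
# Check the current value of the option
juju config neutron-api enable-ml2-port-security

# Enable it and watch the model settle
juju config neutron-api enable-ml2-port-security=true
juju status --watch 5s
```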

So first things first: check Octavia alone.
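A minimal standalone test is to create a bare load balancer on a tenant subnet and watch its status (the subnet name here is an example; substitute one from your cloud):

```shell
# Create a load balancer with no listeners or pools
openstack loadbalancer create --name test-lb --vip-subnet-id private_subnet

# It should move PENDING_CREATE -> ACTIVE; ERROR means Octavia itself is broken
openstack loadbalancer show test-lb -c provisioning_status -c operating_status
```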

I did find the enable-ml2-port-security option and enabled it, which got me to the point where the load balancer was actually being created.

Whenever I create one via the OpenStack CLI I get the same error: the controller cannot connect to the instance.
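One more thing worth ruling out is the management security group: the controller has to be able to reach the amphora on TCP 9443. I checked that roughly like this (the names lb-mgmt-sec-grp and lb-mgmt are what my deployment uses; yours may differ):

```shell
# The amphora management security group should allow 9443/tcp
openstack security group rule list lb-mgmt-sec-grp --long

# Confirm which ports (and security groups) exist on the management network
openstack port list --network lb-mgmt --long
```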

I have faced the same problem. I have checked that the requests to port 9443 reach the vSwitch of the compute node where the amphora is running, but they do not reach the amphora’s interface.
The requests that the amphora itself makes are only seen on its own interface.
A new instance deployed manually on lb-mgmt-net reaches the amphora instance fine.
I have also confirmed that packets are being dropped on the virtual switch of the compute node where the amphora is deployed.
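For anyone wanting to reproduce the check: the datapath flows below were obtained on the compute node with Open vSwitch tooling, along these lines:

```shell
# On the compute node hosting the amphora: dump kernel datapath flows
# and look for entries whose action is "drop"
sudo ovs-dpctl dump-flows | grep drop

# Equivalent via ovs-vswitchd's control interface
sudo ovs-appctl dpctl/dump-flows | grep drop
```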

recirc_id(0),dp_hash(0),skb_priority(0),tunnel(tun_id=0x1,src=192.168.3.2,dst=192.168.3.3,ttl=64,tp_src=32390,tp_dst=6081,geneve({class=0x102,type=0x80,len=4,0x30005}),flags(+csum+key)),in_port(4),skb_mark(0),ct_state(-new-est-rel-rpl-inv-trk-snat-dnat),ct_zone(0),ct_mark(0),ct_label(0),eth(src=fa:16:3e:05:cc:e0,dst=fa:16:3e:a8:ae:29),eth_type(0x86dd),ipv6(src=fc00:67ab:f8d5:9204:f816:3eff:fe05:cce0,dst=fc00:67ab:f8d5:9204:f816:3eff:fea8:ae29,label=0x53b2b,proto=58,tclass=0,hlimit=64,frag=no),icmpv6(type=128,code=0), packets:0, bytes:0, used:never, actions:drop

recirc_id(0),in_port(12),ct_state(-new-est-rel-rpl-inv-trk),ct_label(0/0x1),eth(src=fa:16:3e:a8:ae:29,dst=33:33:ff:05:cc:e0),eth_type(0x86dd),ipv6(src=fc00:67ab:f8d5:9204:f816:3eff:fea8:ae29,dst=ff02::/ffff::,proto=58,hlimit=255,frag=no),icmpv6(type=135,code=0), packets:4, bytes:344, used:0.984s, actions:drop