Question about Kubernetes OpenStack Operator

Been up and running for a little while with K8s on OpenStack. Recently hit an issue after a password change where the openstack-integrator no longer seems to create load balancers. Perhaps the method used to update the password was incorrect, so an extra set of eyes would be much appreciated.
Steps taken:
- Modified credentials.yaml to update the OpenStack password
- Ran juju update-credentials openstack
- Selected option 2 ("Controller Only")
- Ran juju run-action openstack-integrator/0 refresh-credentials --wait

Verified that juju status is all green.

After the password update there are errors when attempting to create load balancers:

Name:                     hello
Namespace:                hello-world
Labels:                   app=hello-world
Annotations:              <none>
Selector:                 app=hello-world
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32270/TCP
Session Affinity:         None
External Traffic Policy:  Cluster
  Type     Reason                    Age   From                Message
  ----     ------                    ----  ----                -------
  Normal   EnsuringLoadBalancer    44m (x8 over 54m)  service-controller  Ensuring load balancer
  Warning  SyncLoadBalancerFailed  44m (x8 over 54m)  service-controller  Error syncing load balancer: failed to ensure load balancer: error getting loadbalancer for Service hello-world/hello: Unable to re-authenticate: Expected HTTP response code [200 204 300] when accessing [GET ht, but got 401 instead
{"error": {"code": 401, "title": "Unauthorized", "message": "The request you have made requires authentication."}}

Is there a step that was omitted somewhere?
Alternatively, what is the best direction to look in terms of troubleshooting?
Running OpenStack Focal Ussuri and K8s v1.22.9.
openstack-integrator ussuri active 1 openstack-integrator charmstore stable 204 ubuntu Ready

kubectl -n kube-system get secret cloud-config -o jsonpath='{.data}' | jq -r '."cloud.conf"' | base64 -d
shows the expected new credentials.
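For anyone repeating this check, here is a minimal mock of the same decode pipeline with invented credential values. It shows how the secret's .data map (which kubectl's jsonpath returns as JSON) is unpacked; the real command above reads the live cloud-config secret instead:

```shell
# Mock of the cloud-config decode pipeline: jq extracts the "cloud.conf"
# key from the secret's .data JSON, and base64 -d recovers the INI text.
# The username/password values below are made up for illustration.
mock_conf=$(printf '[Global]\nusername=admin\npassword=newpass' | base64)
echo "{\"cloud.conf\":\"$mock_conf\"}" | jq -r '."cloud.conf"' | base64 -d
```

If the decoded password here (or in the real secret) does not match what Keystone now expects, the 401s above are exactly what the cloud controller manager reports.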

Not sure whether this was an appropriate way to resolve things. It seems a bit like shooting a mosquito with a bazooka, but it appears to have fixed the issue:

kubectl get pods -n kube-system --no-headers=true | awk '/openstack-cloud-controller-manager/{print $1}' | xargs kubectl delete -n kube-system pod
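For reference, the awk step just filters kubectl's plain-text output down to the matching pod names (column 1), which xargs then hands to kubectl delete. A mock run with invented pod names:

```shell
# Simulate `kubectl get pods --no-headers` output and show that awk keeps
# only the openstack-cloud-controller-manager pod name. The pod names and
# status columns here are invented for illustration.
printf '%s\n' \
  'openstack-cloud-controller-manager-lf2g8   1/1   Running   0   3d' \
  'coredns-64897985d-xyz12                    1/1   Running   0   3d' \
  | awk '/openstack-cloud-controller-manager/{print $1}'
```

Deleting those pods forces them to be recreated, at which point they re-read the cloud-config secret and authenticate with the new password.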