That looks like whatever is providing your load balancers in k8s might be malfunctioning. I see you're using microk8s — can you run `kubectl get pods -A` and look at the metallb pods? I have a feeling something is wrong there.
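For reference, this is roughly what I'd check (the `metallb-system` namespace is the usual one for the microk8s metallb addon, but verify on your cluster):

```shell
# List all pods and look for anything not Running/Ready
microk8s kubectl get pods -A

# Focus on the metallb pods and check their events for image pull errors
microk8s kubectl -n metallb-system get pods
microk8s kubectl -n metallb-system describe pods | grep -A3 -i "failed\|backoff"
```

If you see `ImagePullBackOff` or `ErrImagePull` on the metallb controller/speaker pods, that points at the image registry problem described below.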
Did you deploy this in the past day or two? microk8s historically pulled metallb images from Docker Hub, but metallb's Docker Hub repository was shut down just this week (see this for some context if interested). If your metallb pods are having image pull problems and their images point to Docker Hub, I think this is your problem. To work around it, microk8s 1.24/edge points to the up-to-date metallb repo, and that fix will land in the other microk8s channel risks soon. Or, if you want to recover your current deployment, you should also be able to edit the metallb deployments and update the images like here
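A rough sketch of both workarounds (the exact image tags and workload names are assumptions — check what your cluster is actually running first):

```shell
# Option 1: refresh microk8s to the channel that already points at the new registry
sudo snap refresh microk8s --channel=1.24/edge

# Option 2: patch the existing metallb workloads to pull from quay.io instead of Docker Hub.
# First confirm the workload names and current image tags in your cluster:
microk8s kubectl -n metallb-system get deploy,daemonset -o wide

# Then update the images, keeping whatever tag you were already on
# (v0.9.3 here is an example, not necessarily your version):
microk8s kubectl -n metallb-system set image deployment/controller \
    controller=quay.io/metallb/controller:v0.9.3
microk8s kubectl -n metallb-system set image daemonset/speaker \
    speaker=quay.io/metallb/speaker:v0.9.3
```

After either step, the metallb pods should be re-created and pull successfully; watch them with `microk8s kubectl -n metallb-system get pods -w`.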
I deployed my microk8s HA cluster the same way as with Kubeflow 1.6, which worked after a few modifications.
I redid the deployment twice today and hit the same problem each time.
Yeah, I was frustrated with this yesterday too; I feel your pain.
microk8s 1.24/edge works for me. Does that help? Manually changing those metallb images should also get it working; I just haven't tried that route myself yet.
From the messages in the status of the units, I think you need to `juju trust` these charms — that should get things going. Did you deploy this from one of the premade bundles, or did you add these in separately? If a premade bundle, please let me know which one, as we might have missed adding trust somewhere in our bundles.
Hi @Moula, thanks for reporting this. You can resolve this issue by running `juju trust knative-eventing --scope=cluster`, and the same for `knative-serving`. We have already merged the fix for this, so you should not run into it in the latest Kubeflow (1.7/beta, revision 333). Please let us know if you find any other issue.
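Put together, the fix looks like this (the final `juju status` check is just my suggested way to confirm the units unblock):

```shell
# Grant both charms the cluster-scoped permissions they need
juju trust knative-eventing --scope=cluster
juju trust knative-serving --scope=cluster

# Watch the units leave the blocked state and settle to active/idle
juju status knative-eventing knative-serving
```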