I am hopeful someone can help me figure out how to debug a problem I am having with lease operations and leadership election.
I am trying to follow the LivePatch On-Prem deployment tutorial to deploy a LivePatch server on my local k8s cluster.
I’m feeling my way through this as I’m new to Juju and charms, but I got to the point where I have Juju set up and the LivePatch charm bundle deployed, and I can see the pods running in my cluster.
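For reference, this is roughly what I ran while following the tutorial. The app aliases match my `juju status` output further down; the exact bundle/charm invocations and channels here are from memory, so treat them as a sketch rather than an exact transcript:

```shell
# Bootstrap a controller on my local k8s cluster and create a model
juju bootstrap kubernetes
juju add-model livepatch

# Deploy the charms, aliased to the app names seen in my status output
juju deploy canonical-livepatch-server-k8s livepatch
juju deploy postgresql-k8s postgresql --channel 14/candidate --trust
juju deploy nginx-ingress-integrator ingress --trust

# Relate livepatch to its database and to the ingress
# (endpoint names left implicit; Juju resolved them for me)
juju integrate livepatch postgresql
juju integrate livepatch ingress
```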
However, the pods never reach readiness, and each unit agent keeps logging the same error:

```text
ERROR juju.worker.dependency engine.go:695 "leadership-tracker" manifold worker returned unexpected error: leadership failure: lease operation timed out
```
I’m thrashing on this issue because I cannot figure out what is causing the timeout. So far I haven’t found a way to get any more information out of Juju, so I’m looking for suggestions on what steps to take to debug this further.
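For context, the error above is from the agent logs; these are the standard commands I’ve been using to pull them so far (the namespace, pod, and container names are what they happen to be in my cluster):

```shell
# Tail the model's agent logs, replaying past messages
juju debug-log --replay --level ERROR

# Check the controller model too, since leases are managed there
juju debug-log -m controller --replay --level ERROR

# Raw container logs straight from Kubernetes
# (model "livepatch" maps to the "livepatch" namespace in my setup)
kubectl -n livepatch get pods
kubectl -n livepatch logs livepatch-0 -c charm
```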
I’m also a little confused about why I’m hitting a leadership problem at all, since I’m not running an HA setup (at least, that’s not my intention at the moment). If an application only has a single unit, why does it still go through leadership election?
Here’s the `juju status` output for my setup, to help illustrate:
```text
Model      Controller  Cloud/Region  Version  SLA          Timestamp
livepatch  kubernetes  kubernetes    3.1.6    unsupported  08:20:51-05:00

App         Version  Status   Scale  Charm                           Channel        Rev  Address         Exposed  Message
ingress     25.3.0   active   1      nginx-ingress-integrator        latest/stable  81   10.111.165.16   no
livepatch            waiting  1      canonical-livepatch-server-k8s  latest/stable  21   10.111.85.31    no       waiting for units to settle down
postgresql  14.9     waiting  1      postgresql-k8s                  14/candidate   158  10.109.182.135  no       waiting for units to settle down

Unit          Workload  Agent  Address     Ports  Message
ingress/0     active    idle   10.36.0.17
livepatch/0   blocked   idle   10.47.0.5          waiting for pg relation.
postgresql/0  waiting   idle   10.36.0.15         awaiting for cluster to start
```