Leader (`ops.model.Unit.is_leader`)

Source: ops.Unit.is_leader

See also: Ops (ops), Juju | Leader

Every application deployed by Juju is guaranteed to have exactly one leader at any time. This is true independent of the charm author’s actions; whether or not you implement the hooks or use the tools, the Juju controller will elect a leader when a charm is deployed. Units that hold leader status should not assume they will retain it: a new leader can be elected at any time.

Leadership provides a mechanism whereby multiple units of an application can make use of a single, shared, application data store when forming relations. This is particularly useful when several units of an application are deployed but a decision should be restricted to a single (authoritative) unit. When peer relations are established, the application leader can both read and write the application data on the peer relation; followers can only read it.
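For example, the leader can publish a value into the peer relation’s application data bag, and any unit can then read it back. The following is only a sketch, not part of ops or any real charm: the peer relation name "replicas", the key "shared-key", and the helper method are all hypothetical.

def _publish_and_read_shared_value(self) -> str:
    # "replicas" is a hypothetical peer relation name; use the one from your charm's metadata.
    peer_relation = self.model.get_relation("replicas")
    if peer_relation is None:
        return ""
    app_data = peer_relation.data[self.app]
    if self.unit.is_leader():
        # Only the leader may write to the application data bag.
        app_data["shared-key"] = "some value"
    # Any unit, leader or follower, may read the application data bag.
    return app_data.get("shared-key", "")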


Identify the current leader

Charm authors can check whether the unit executing the charm code is the current leader by using the unit abstraction, for example:

def _on_config_changed(self, event):
    if not self.unit.is_leader():
        return
    # do something

If the unit executing the code is not the leader, there is no facility to identify which other unit is the leader. This is by design: such functionality could encourage development practices that are prone to race conditions, or to acting on stale data should the leader change during, or shortly after, such a call.

Leadership events

When a new application leader is elected, the Juju controller emits a leader-elected event at least once to the newly elected leader only. Callback methods bound to this event should take any relevant action for the application to act effectively as the leader (perhaps updating some shared configuration):

import logging

import ops

# ...
class DemoCharm(ops.CharmBase):
    # ...
    def __init__(self, *args):
        super().__init__(*args)
        # Handle the case where Juju elects a new application leader
        self.framework.observe(self.on.leader_elected, self._on_leader_elected)
        # ...

    def _on_leader_elected(self, event: ops.LeaderElectedEvent) -> None:
        """Handle the leader-elected event"""
        logging.debug("A new leader was elected: %s", self.unit.name)
# ...

Immediately after the leader-elected event is emitted, all units will receive a config-changed event. There is a more detailed example of responding to this event in the Relations section.

Hi, a couple of questions.

  • How and when is the leader elected? Are there any guarantees that the leader will not change while a hook on the leader is executing?

  • We mention “Immediately after the leader-elected event is emitted, all units will receive a config-changed event.” Do we know if the leader-elected event will have completed on the leader before a config-changed has been issued on all the nodes?

I’m not sure of the precise mechanism by which a leader is elected. AFAIK, the only guarantee provided is that there will be exactly one leader at any given time. I don’t think there are any guarantees that the leader won’t change during hook execution.

I don’t think you can make any guarantees about this either, but worth checking (@jameinel may know…)

The guarantees that Juju actually makes are:

  • Only one unit can be a leader at a time
  • When we extend a lease (driven by the Unit agent), Juju is guaranteed not to select another leader for the next 30s. (Internally, we actually extend the lease for 1 min, and then extend it again every 30s, but you are guaranteed 30s.)

If everything is running correctly, then the Unit agent should be extending leadership regularly, and leadership should only ever change if there is an actual outage or the current leader is removed. You can imagine that we start a hook and then lose network connectivity in the middle of it, while some other process that isn’t doing network work churns along, and then we regain connectivity. We can’t really tell the difference between that and the instance going away completely.

As for config-changed after leader-elected. Before the start hook fires, we do guarantee the ordering of hooks. (We will inform the leader that it is the leader before we fire the first config-changed event.) However, once the charm is up and running, leader-elected and config-changed are independent events. The first fires because the old leader stopped checking in, and the other fires because config has changed.

Other units that are not the leader actually get a leader-settings-changed event (both when settings are changed and, I believe, at any point the leader changes but they aren’t the leader) before they get config-changed. However, I don’t believe we serialize the units, so if leader-elected took a long time, the other units could easily get config-changed before the leader exited.
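For completeness, ops exposes this as the leader_settings_changed event, so a charm that wants non-leader units to react to it can observe it. A minimal sketch, assuming your charm actually needs this event rather than peer relation data:

import logging

import ops


class DemoCharm(ops.CharmBase):
    def __init__(self, *args):
        super().__init__(*args)
        self.framework.observe(
            self.on.leader_settings_changed, self._on_leader_settings_changed
        )

    def _on_leader_settings_changed(self, event: ops.LeaderSettingsChangedEvent) -> None:
        # Fires on non-leader units; per the reply above, typically around a
        # leadership change and before the follow-up config-changed.
        logging.debug("Leader settings changed; is this unit the leader? %s", self.unit.is_leader())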


Regarding the guarantee that another leader will not be selected in the next 30s, and similar caveats when relying on leadership, could we add a warning in the documentation about that?

I’m saying this because, while implementing a charm with a Main-Workers architecture, I thought about using is_leader to guarantee that only one unit would be the main unit, but after reading more about it and discussing it with more experienced people, this idea was rejected.

So, for maintaining a single source of truth it’s perfect, but using it as an “orchestration” mechanism could cause unexpected behavior.