How to get logs from k8s charms

See also: How to debug a charm

juju debug-log

Juju automatically collects logs from every agent in a model. To view them, use juju debug-log.

Logs from charm code that uses the standard Python logging facility,

import logging
logger = logging.getLogger(__name__)

are picked up automatically by Juju.

When debugging, we typically first set the unit log level to DEBUG:

juju model-config logging-config="<root>=WARNING; unit=DEBUG"

Note that juju debug-log shows logs from the charm code (running in the charm container), but not from the workload container.

kubectl logs

With kubectl logs we can see logs from both the charm and the workload containers. (The Kubernetes namespace is named after the Juju model; add -f to follow the stream, or --previous to get logs from a crashed container's previous run.)

For example:

# Get logs from the charm container
kubectl -n <model-name> logs pods/prometheus-0 -c charm

# Get logs from the workload container
kubectl -n <model-name> logs pods/prometheus-0 -c prometheus

Logs printed to stdout/stderr are automatically picked up by kubectl (and, in machine charms, by journalctl). Logging to stdout/stderr is an established convention and is generally preferable to logging only to a file.
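As a sketch of that convention, a workload or helper script written in Python could direct its logs to stdout like this (the logger name, format, and message are illustrative):

```python
import logging
import sys

# Log to stdout so the container runtime (or journald on machines)
# collects the records; no log file is required.
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))

log = logging.getLogger("workload")
log.setLevel(logging.INFO)
log.addHandler(handler)

log.info("listening on 0.0.0.0:9090")
```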

pebble logs

With pebble logs (the Pebble binary is available at /charm/bin/pebble inside the workload container) we can get the logs of a particular Pebble service:

juju ssh --container prometheus prometheus/0 \
  /charm/bin/pebble logs prometheus

pebble changes and tasks

To see logs related to a service that failed to start, first obtain the ID of the failed change from the list produced by pebble changes,

$ juju ssh --container prometheus prometheus/0 /charm/bin/pebble changes
ID   Status  Spawn                   Ready                   Summary
30   Error   yesterday at 21:31 UTC  yesterday at 21:31 UTC  Replan service "prometheus"
31   Done    yesterday at 21:38 UTC  yesterday at 21:38 UTC  Execute command "/usr/bin/promtool"
32   Done    yesterday at 21:38 UTC  yesterday at 21:38 UTC  Replan service "prometheus"

and then query for the logs with pebble tasks,

$ juju ssh --container prometheus prometheus/0 /charm/bin/pebble tasks 30
Status  Spawn                   Ready                   Summary
Error   yesterday at 21:31 UTC  yesterday at 21:31 UTC  Start service "prometheus"

......................................................................
Start service "prometheus"

2023-03-07T21:31:39Z INFO Most recent service output:
    (...)
    ts=2023-03-07T21:31:39.309Z caller=web.go:561 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090
    ts=2023-03-07T21:31:39.309Z caller=main.go:807 level=error msg="Unable to start web listener" err="listen tcp 0.0.0.0:9090: bind: address already in use"
2023-03-07T21:31:39Z ERROR cannot start service: exited quickly with code 1

juju ssh and cat

You could cat log files directly with juju ssh:

juju ssh --container prometheus prometheus/0 \
  cat /var/log/bootstrap.log

If your workload does not write its logs to disk but you would like a file copy as well, you could modify the Pebble service command as follows:

"services": {
  "service-name": {
    # trick to drop the logs to a file but also keep them available in the pod logs
    # https://github.com/canonical/traefik-k8s-operator/blob/a287fa6a41077c8c3b7d3eb244f055c2a354bd2a/src/charm.py#L880
    "command": '/bin/sh -c "{} | tee -a {}"'.format(BIN_PATH, LOG_PATH),
  },
},
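The fragment above can be expanded into a complete, self-contained layer. This is only a sketch: the binary path, log path, and service name are hypothetical, so adapt them to your workload.

```python
# Hypothetical paths and service name; adapt to your workload.
BIN_PATH = "/usr/bin/prometheus"
LOG_PATH = "/var/log/prometheus.log"

layer = {
    "summary": "workload layer",
    "description": "pebble layer for the workload",
    "services": {
        "prometheus": {
            "override": "replace",
            "summary": "prometheus server",
            # Pipe stdout through `tee -a` so the logs land in a file
            # *and* remain visible to `kubectl logs` / `pebble logs`.
            # `-a` appends rather than truncating on restart.
            "command": '/bin/sh -c "{} | tee -a {}"'.format(BIN_PATH, LOG_PATH),
            "startup": "enabled",
        }
    },
}
```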

Integrated solution

For an integrated solution, consider the loki_push_api interface.

h/t @benhoyt

Oh nice, thanks @sed-i. @tmihoc this feels like a thing we should include somewhere in our “proper” docs!


Already chatting with @sed-i about it!


For “proper” docs, we should also suggest that authors do the normal “log to stdout/stderr”, or at least do that and tee into a file, lest they find that their charm has actually been logging to /var/log/workload.log and the kubectl/pebble log output doesn’t contain 90% of it.

“Don’t worry about a file, dumping to stdout will do the right thing with journalctl/containers” can never be repeated enough times

Thanks @rbarry, incorporated. Take another look?

tee -a ... is probably safe just in case we forget about this once Pebble runs on machines, so we don’t accidentally clobber the logs instead of appending. But this is great in general!