A stateless workload is easy to charm because it involves minimal lifecycle concerns, and therefore makes a good introductory exercise.
For this exercise we’ll charm up flog, a fake log generator, and have it forward logs to Loki over the loki_push_api relation interface. This can be useful for testing!
You can browse the charm’s code for each step here:
- 7a47956 - Initial commit
- 5f71af8 - Cleanup auto generated code
- a370c0b - Add pebble layer
- f815b8f - Add relation to Loki
- 7581fbd - Switch to custom image
- 0861e0d - Add config options
Step 0: Familiarize yourself with the workload directly
Before charming, let’s take a look at the ready-made container:
docker run mingrammer/flog --format rfc5424 --loop --delay 1s
This generates random-looking logs to stdout, one line per second.
Using dive, we can see the container is made up of a single executable:
Permission UID:GID Size Filetree
drwxr-xr-x 0:0 3.1 MB └── bin
-rwxr-xr-x 0:0 3.1 MB └── flog
as can be confirmed from the Dockerfile:
FROM scratch
COPY --from=0 /bin/flog /bin/flog
ENTRYPOINT ["flog"]
When charmed, the entrypoint will be automatically changed by pebble.
Step 1: Generate charm from template
You can use operator-template to quickly generate a basic workdir, but you will need to clean it up further.
The key points to note so far:
- metadata.yaml specifies a single container, named workload, which will run the mingrammer/flog image.
- No config.yaml or actions.yaml in the workdir yet.
- The only method in the charm is the pebble-ready hook, which sets the status to “Active”.
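For reference, after the cleanup the charm boils down to something like this (a minimal sketch; the exact class and handler names produced by the template may differ):
# src/charm.py (sketch)
from ops.charm import CharmBase
from ops.main import main
from ops.model import ActiveStatus


class FlogCharm(CharmBase):
    def __init__(self, *args):
        super().__init__(*args)
        # "workload" is the container name from metadata.yaml,
        # so the corresponding event is workload_pebble_ready
        self.framework.observe(
            self.on.workload_pebble_ready, self._on_workload_pebble_ready
        )

    def _on_workload_pebble_ready(self, event):
        self.unit.status = ActiveStatus()


if __name__ == "__main__":
    main(FlogCharm)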
Step 2: Deploy
charmcraft pack
juju deploy ./flog-k8s_ubuntu-20.04-amd64.charm flog \
--resource workload-image=mingrammer/flog
Let’s check whether flog is running:
$ pgrep -la flog
and… nothing. That is because pebble took over the entrypoint and we didn’t provide any service for pebble to start.
Step 3: Add pebble service
Now it’s time to consider some design choices around running multiple instances of flog (the workload). The reasons to do that include:
- integration tests: have the same charm exercise both the stdout and log methods of forwarding logs to Loki;
- load tests: easily simulate multiple sources of logs.
So,
- we could have multiple pebble services in the same container, so that multiple instances of flog would be started (but configuring them via config.yaml and juju config is not scalable or convenient);
- we could specify multiple workload containers in metadata.yaml for running flog with different config options in each (but specifying them in metadata.yaml is not scalable or convenient, and on top of that there’s the configuration challenge);
- we could specify a peer relation in metadata.yaml so that the flog charm could scale its number of deployed units up/down (but units are intended for HA);
- or, we could deploy the same single-unit app multiple times.
The last two options align well with the juju model (and with each other).
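With the last option, deploying multiple instances is just a matter of picking different application names for the same packed charm; for example (the names flog-a and flog-b are illustrative):
$ juju deploy ./flog-k8s_ubuntu-20.04-amd64.charm flog-a --resource workload-image=mingrammer/flog
$ juju deploy ./flog-k8s_ubuntu-20.04-amd64.charm flog-b --resource workload-image=mingrammer/flog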
In that case, a single pebble service (layer) would be needed:
# at the top of src/charm.py: from ops.pebble import Layer
def _flog_layer(self) -> Layer:
"""Returns Pebble configuration layer for flog."""
def command():
return "/bin/flog --format rfc5424 --loop --delay 1s --type log --output /bin/fake.log"
return Layer({
"summary": "flog layer",
"description": "pebble config layer for flog",
"services": {
"flog": {
"override": "replace",
"summary": "flog service",
"command": command(),
"startup": "enabled",
}
},
})
As well as some helper code for applying the service layer:
def _update_layer(self):
# container name from metadata.yaml
container = self.unit.get_container("workload")
plan = container.get_plan()
overlay = self._flog_layer()
if overlay.services != plan.services:
container.add_layer("flog layer", overlay, combine=True)
container.replan()
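A natural place to call this helper is the pebble-ready hook; for example (a sketch, replacing the status-only handler generated by the template):
def _on_workload_pebble_ready(self, event):
    # from ops.model import ActiveStatus
    self._update_layer()
    self.unit.status = ActiveStatus()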
Now, pack and deploy, et voilà:
$ pgrep -la flog
2464008 /bin/flog --format rfc5424 --loop --delay 1s --type log --output /bin/fake.log
Step 4: Relate to Loki
To relate our new charm to Loki over one of its relations (loki_push_api) we need to:
A. specify the relation in metadata.yaml
# metadata.yaml
requires:
log-proxy:
interface: loki_push_api
B. obtain the loki charm library for the relation
$ charmcraft fetch-lib charms.loki_k8s.v0.loki_push_api
Library charms.loki_k8s.v0.loki_push_api version 0.9 downloaded.
C. use the charm library in our charm
In the charm’s constructor:
# at the top of src/charm.py: from charms.loki_k8s.v0.loki_push_api import LogProxyConsumer
self._log_proxy = LogProxyConsumer(
    charm=self, log_files=["/bin/fake.log"], container_name="workload"
)
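Behind the scenes, the library injects a promtail binary into the workload container and configures it to forward the listed log files to Loki; we will see this reflected in the pebble plan shortly.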
D. Deploy Loki and relate
$ juju deploy --channel=edge loki-k8s loki
Located charm "loki-k8s" in charm-hub, revision 15
Deploying "loki" from charm-hub charm "loki-k8s", revision 15 in channel edge
$ juju relate loki flog
We can convince ourselves that the relation works by querying the pebble plan via the CLI:
$ juju exec --unit flog/0 -- \
PEBBLE_SOCKET=/charm/containers/workload/pebble.socket \
pebble plan
services:
flog:
summary: flog service
startup: enabled
override: replace
command: /bin/flog --format rfc5424 --loop --delay 1s --type log --output /bin/fake.log
promtail:
summary: promtail
startup: disabled
override: replace
command: /opt/promtail/promtail-linux-amd64 -config.file=/etc/promtail/promtail_config.yaml
and by looking at the running processes:
$ pgrep -la promtail
457233 /opt/promtail/promtail-linux-amd64 -config.file=/etc/promtail/promtail_config.yaml
Note: If you were following these steps with the mingrammer/flog image, you will not see promtail running. This is because the image is FROM scratch, which lacks the tooling necessary for injecting the promtail binary into the flog container. I forked that repo and changed the original Dockerfile to FROM ubuntu for this to work. You should instead use ghcr.io/sed-i/flog:latest for the image:
juju deploy ./flog-k8s_ubuntu-20.04-amd64.charm flog --resource workload-image=ghcr.io/sed-i/flog:latest
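For reference, the fork’s change amounts to swapping the base of the Dockerfile’s final stage shown earlier (a sketch; the fork may differ in details):
FROM ubuntu
COPY --from=0 /bin/flog /bin/flog
ENTRYPOINT ["flog"]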
To see these logs from the Loki side:
$ juju show-unit loki/0 --format=json | jq -r '."loki/0".address'
10.1.73.178
$ curl -G -s 10.1.73.178:3100/loki/api/v1/labels
{"status":"success","data":["__name__","filename","job","juju_application","juju_charm","juju_model","juju_model_uuid","juju_unit"]}
$ curl -G -s 10.1.73.178:3100/loki/api/v1/label/juju_application/values
{"status":"success","data":["flog"]}
$ curl -G -s 10.1.73.178:3100/loki/api/v1/query --data-urlencode 'query=sum(rate({juju_application="flog"}[1m])) by (level)' | jq '.data.stats.summary'
{
"bytesProcessedPerSecond": 4773972,
"linesProcessedPerSecond": 9324,
"totalBytesProcessed": 30720,
"totalLinesProcessed": 60,
"execTime": 0.006434892
}
Step 5: Expose some CLI args via config.yaml
flog has many CLI args, two of which are of particular interest:
- rate: number of logs per second;
- rotate: rotate the log after x logs (only in log mode).
The matching config.yaml would look like this:
options:
rate:
default: 1
description: Number of logs per second to generate.
type: int
rotate:
description: Rotate log after x logs.
type: int
Note how rotate doesn’t have a default: this is intentional, so that flog’s own default can be used.
For the charm to be able to respond to juju config changes, we need to register a hook in the charm’s constructor:
self.framework.observe(
self.on.config_changed, self._on_config_changed
)
and implement the hook:
def _on_config_changed(self, event):
self._update_layer()
and augment the command() function:
def command():
cmd = (
"/bin/flog --format rfc5424 --loop --type log --output /bin/fake.log --overwrite "
f"--rate {self.model.config['rate']} "
)
if rotate := self.model.config.get("rotate"):
cmd += f"--rotate {rotate} "
return cmd
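Once the charm is re-packed and re-deployed, the options can be changed at runtime; for example (the values are illustrative):
$ juju config flog rate=5 rotate=100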
And we’re good to go!
Intentional omissions from this walkthrough
- Charms that relate to Loki would usually include alert rules, which were not covered here.
- In the current state of this code, unit status is set in a non-centralized fashion, which means there could be unwanted conflicts (e.g. config-changed setting the status to Blocked, but then pebble-ready blindly reverting it to Active).
- Could be useful to have a charm action to delete the fake.log file.
- Publishing to charmhub.