log_slots is only for snaps, but it isn’t the only way to get logs. We automatically scrape all of /var/log and the journal as well. So if you write your logs to /var/log/..., they will get scraped.
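As a sketch of the idea (the logger name and file path are illustrative; on a real machine you would write under /var/log, which usually requires the right permissions), a service can simply attach a plain file handler so its output lands in a scrapeable file:

```python
import logging
import os
import tempfile


def setup_file_logger(log_path: str) -> logging.Logger:
    """Attach a file handler so log lines end up in a scrapeable file.

    On a real machine you would pass something like
    "/var/log/myapp/myapp.log" (illustrative path) so that
    /var/log scraping picks it up.
    """
    os.makedirs(os.path.dirname(log_path), exist_ok=True)
    logger = logging.getLogger("myapp")  # illustrative name
    logger.setLevel(logging.INFO)
    handler = logging.FileHandler(log_path)
    handler.setFormatter(
        logging.Formatter("%(asctime)s %(levelname)s %(message)s")
    )
    logger.addHandler(handler)
    return logger


# Demo with a temporary directory so the sketch runs without root:
log_file = os.path.join(tempfile.mkdtemp(), "myapp.log")
log = setup_file_logger(log_file)
log.info("hello from myapp")
```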
Is this still true?
I’m working on a machine charm for testing out all of this: https://github.com/erik78se/juju-operators-examples/tree/main/observed
… and I have used this guide: Charmhub | Using the Grafana Agent Machine Charm
… which unfortunately is full of outdated information and will not work. It’s a great guide once it’s updated. @tmihoc
I can help there once I’ve figured out what goes on here with my cross-model integrations: Cross Model Integration COS light lxd-plus-microk8 makes juju error - juju - Charmhub
It is, but if you use a modern charmcraft which supports PYDEPS directives, it should be handled for you.
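For context, PYDEPS is a module-level list in a charm library that tells charmcraft which Python packages the library needs; a minimal sketch (the package names and pins here are illustrative, not the actual cos_agent list):

```python
# At the top of a charm library module (e.g. lib/charms/<charm>/v0/<lib>.py),
# charmcraft reads this module-level attribute and installs the listed
# packages when building a charm that uses the library.
PYDEPS = ["cosl", "pydantic<2"]  # illustrative pins, not the real list


# The rest of the library follows as normal Python code.
def library_entry_point() -> str:
    return "library loaded"
```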
Hi @erik-lonroth!
Could you mention which information you found outdated so we can fix it?
Thanks!
Jose
Sure, I can try, but it’s likely not complete…
The loki-logging endpoint seems to have changed name to just “logging”?

loki-logging is the name of the offer provided by this overlay, and logging is the endpoint. You can change this name by using a custom overlay.
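For reference, a custom overlay renaming the offer might look roughly like this (a sketch only; the application name, offer name, and structure assume the format used by the cos-lite bundle’s offers-overlay.yaml):

```yaml
# Hypothetical custom overlay: rename the "loki-logging" offer to "my-logs",
# still exposing the "logging" endpoint of the loki application.
applications:
  loki:
    offers:
      my-logs:
        endpoints:
          - logging
```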
It is not a HOW-TO; it is more a REFERENCE doc.

Thanks a lot for the updates. I’ve managed to get it to work, and the changes you made are also good. Thanks for the effort put into it. This is extremely valuable.
We are producing some internal documents that are made to cover the setup of a local development environment. We could share this with you once we have tested it out on some more members of our team.
We have some thoughts about how to set a COS Lite stack up, which I’ll be sharing in a separate post.
Is there a reason this overlay also offers the prometheus scraping endpoint https://github.com/canonical/cos-lite-bundle/blob/b014892672258f1d4c9d88e4bfd413a17ca71c5d/overlays/offers-overlay.yaml#L17? That’s not in the docs…
Yes, because Prometheus supports two ways of getting metrics:

- PULL (AKA scrape)
- PUSH (AKA remote write)

So, with these options you can have Prometheus in COS-Lite scraping metrics or receiving metrics.
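As a sketch of the difference (job names and addresses are made up): a scrape job makes Prometheus pull from a target, while a remote_write section on the sending side pushes metrics out:

```yaml
# PULL: Prometheus scrapes the target itself.
scrape_configs:
  - job_name: my-app                     # illustrative
    static_configs:
      - targets: ["10.0.0.5:9100"]       # illustrative address

# PUSH: the sending side (e.g. an agent) ships its metrics
# to a remote-write endpoint.
remote_write:
  - url: "http://prometheus.example:9090/api/v1/write"  # illustrative
```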
Ah, I see! This is also very good to mention. We only offer remote-write as of now but will for sure expand this to allow for the PULL (scrape) method as well.
Actually, we have recently discussed discontinuing support for scraping cross model. This is mostly because of issues with routing. Most likely the scrape endpoint will be removed from the overlay. You will, of course, still be able to create an offer but it will not be the recommended method.
I don’t understand why routing would be a concern for Juju since this anyway is a networking related issue from the start? Am I missing out on something here? What are those issues you refer to?
The reason is actually not as much technical as it is about user experience. Getting network topology right is hard, especially so when you have tens, hundreds, or even thousands of remote models to scrape and observe.
By saying “cross-model metrics will always be pushed by an agent rather than scraped by Prometheus”, we invert the data flow and move from N firewall configs that need to be properly setup to one: the one that goes into COS.
@0x12b - I get that totally, but why discontinue a feature which would make a lot of sense in the cases where PULL/PUSH have different implications on how metrics are collected?
I mean, why not support both methods rather than confining the solution to a single method?
So, this guide is getting really good. But there is a missing piece: how to add and test the “ALERTING” part of the COS.
I’m starting to explore how this would work with the library, and also with Prometheus and Loki, which isn’t covered by the guide but is definitely needed.
This would be fairly complete if that were covered here @tmihoc @jose
I’ll be happy to assist in peer-reviewing the whole thing.
Hi…
It seems pydantic 2 is out now… and cos_agent does not build with pydantic 2, so we have to pin the pydantic version to something like 1.10.10…
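One way to pin it (assuming your charm lists its Python dependencies in a requirements.txt; the exact file depends on your charm’s build setup):

```
pydantic >= 1.10, < 2.0
```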
Yes!! Let’s follow this issue here: https://github.com/canonical/grafana-agent-k8s-operator/issues/214
In the “Step 2”, we are supposed to get the cos_agent from the grafana_agent with the CLI:
charmcraft fetch-lib charms.grafana_agent.v0.cos_agent
This command only pulls the file “lib/charms/grafana_agent/v0/cos_agent.py”
There are no such files as metadata.yaml and src/charm.py that we are supposed to modify.
I cannot proceed with this tutorial.
Would anyone know if I missed something or if this tutorial needs to be updated?