Hi @erik-lonroth!
Could you mention which information you found outdated so we can fix it?
Thanks!
Jose
Sure, I can try, but it's likely not complete…

The loki-logging endpoint seems to have changed name to just "logging"? Not sure if that is in stable yet.

loki-logging is the name of the offer provided by this overlay, and logging is the endpoint. You can change this name by using a custom overlay. That would be material for a HOW-TO; this is more of a REFERENCE doc.

Thanks a lot for the updates. I've managed to get it to work, and the changes you made are also good. Thanks for the effort put into it. This is extremely valuable.
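For what it's worth, the custom overlay mentioned above could look roughly like this. This is a sketch following the standard Juju bundle overlay format; the my-logging name is hypothetical, and the application/endpoint names should be checked against the bundle's offers-overlay.yaml:

```
# Hypothetical overlay renaming the "loki-logging" offer to "my-logging"
applications:
  loki:
    offers:
      my-logging:        # custom offer name (hypothetical)
        endpoints:
          - logging      # the endpoint exposed by the offer
```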
We are producing some internal documents that are made to cover the setup of a local development environment. We could share this with you once we have tested it out on some more members of our team.
We have some thoughts about how to set up a COS Lite stack, which I'll be sharing in a separate post.
Is there a reason this overlay also offers the prometheus scraping endpoint https://github.com/canonical/cos-lite-bundle/blob/b014892672258f1d4c9d88e4bfd413a17ca71c5d/overlays/offers-overlay.yaml#L17? That’s not in the docs…
Yes, because Prometheus supports two ways of getting metrics:

- PULL (AKA scrape)
- PUSH (AKA remote write)

So, with these options you can have Prometheus in COS Lite scraping metrics or receiving metrics.
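To make the two modes concrete outside of Juju, here is a sketch in plain Prometheus/Grafana Agent configuration terms. The job name, target address, and URL are hypothetical, and the two sections live in different config files (Prometheus vs. the agent); they are shown together only for contrast:

```
# PULL (scrape): an entry in Prometheus' own config; Prometheus reaches
# out to the target's /metrics endpoint on a schedule.
scrape_configs:
  - job_name: my-app                        # hypothetical job name
    static_configs:
      - targets: ["10.0.0.5:9100"]          # hypothetical target address

# PUSH (remote write): an entry in the agent's config (e.g. Grafana Agent);
# the agent pushes samples to Prometheus' remote-write receiver.
remote_write:
  - url: http://prometheus.example/api/v1/write   # hypothetical receiver URL
```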
Ah, I see! This is also very good to mention. We only offer remote-write as of now, but we will for sure expand this to allow for the PULL (scrape) method as well.
Actually, we have recently discussed discontinuing support for scraping cross model. This is mostly because of issues with routing. Most likely the scrape endpoint will be removed from the overlay. You will, of course, still be able to create an offer but it will not be the recommended method.
I don't understand why routing would be a concern for Juju, since this is a networking-related issue from the start anyway? Am I missing something here? What are those issues you refer to?
The reason is actually not as much technical as it is about user experience. Getting network topology right is hard, especially so when you have tens, hundreds, or even thousands of remote models to scrape and observe.
By saying “cross-model metrics will always be pushed by an agent rather than scraped by Prometheus”, we invert the data flow and move from N firewall configs that need to be properly setup to one: the one that goes into COS.
@0x12b - I totally get that, but why discontinue a feature which would make a lot of sense in the cases where PULL/PUSH have different implications for how metrics are collected?
I mean, why not support both methods rather than confining the solution to a single method?
So, this guide is getting really good. But there is a missing piece: how to add and test the "ALERTING" part of COS.
I'm starting to explore how this would work with the library, and also with Prometheus and Loki; that part isn't covered by the guide, but it is definitely needed.
This would be fairly complete if this is covered here @tmihoc @jose
I'll be happy to assist in peer-reviewing the whole thing.
Hi…
It seems pydantic 2 is out now… And cos_agent does not build with 2, so we have to pin the pydantic version to something like 1.10.10…
Yes!! Let’s follow this issue here: https://github.com/canonical/grafana-agent-k8s-operator/issues/214
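Until that issue is resolved, a common workaround is to pin pydantic below 2.0 in the charm's requirements.txt; something like:

```
# requirements.txt: keep pydantic on the 1.x series until cos_agent supports pydantic 2
pydantic>=1.10.10,<2.0
```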
In "Step 2", we are supposed to fetch the cos_agent lib from grafana_agent with the CLI:
charmcraft fetch-lib charms.grafana_agent.v0.cos_agent
This command only pulls the file lib/charms/grafana_agent/v0/cos_agent.py. There are no such files as metadata.yaml and src/charm.py that we are supposed to modify.
I cannot proceed with this tutorial.
Would anyone know if I missed something or if this tutorial needs to be updated?
Hi @gbeuzeboc
Yes, charmcraft fetch-lib charms.grafana_agent.v0.cos_agent only fetches the lib you need to add to your charm.

In Step 2 you can see that you need to add this to metadata.yaml:
provides:
  cos-agent:
    interface: cos_agent
and this in charm.py:

# at the top of src/charm.py, importing from the fetched lib:
from charms.grafana_agent.v0.cos_agent import COSAgentProvider

# in your charm's __init__:
self._grafana_agent = COSAgentProvider(
    self,
    metrics_endpoints=[
        {"path": "/metrics", "port": NODE_EXPORTER_PORT},
        {"path": "/metrics", "port": JMX_PORT},
        {"path": "/metrics", "port": METRICS_PROVIDER_PORT},
    ],
    metrics_rules_dir="./src/alert_rules/prometheus",
    logs_rules_dir="./src/alert_rules/loki",
    dashboard_dirs=["./src/grafana_dashboards"],
    log_slots=["charmed-zookeeper:logs"],
)
Is there anything missing?
@jose, I am not sure where the metadata.yaml and charm.py come from. Are they the ones from the Zookeeper charm?

I was expecting a complete bundle to be deployed on the client machine, with zookeeper and the agent.

I guess that is it: we have to modify the zookeeper charm to make it able to relate to the cos-agent, and then deploy the new zookeeper charm and the grafana-agent.

Sorry, this wasn't clear to me.
Perhaps I'm missing something, but the metadata.yaml and charm.py belong to the charm you want to integrate with the grafana-agent charm. Zookeeper is just the charm we use as a guinea pig to explain how to integrate Grafana Agent into any machine charm.