Ah, I see! This is also very good to mention. We only offer remote-write as of now, but we will for sure expand this to allow for the PULL (scrape) method as well.
Actually, we have recently discussed discontinuing support for cross-model scraping. This is mostly because of issues with routing. Most likely the scrape endpoint will be removed from the overlay. You will, of course, still be able to create an offer, but it will not be the recommended method.
I don’t understand why routing would be a concern for Juju, since this is a networking-related issue to begin with anyway? Am I missing something here? What are the issues you’re referring to?
The reason is actually not so much technical as it is about user experience. Getting network topology right is hard, especially when you have tens, hundreds, or even thousands of remote models to scrape and observe.
By saying “cross-model metrics will always be pushed by an agent rather than scraped by Prometheus”, we invert the data flow and move from N firewall configs that need to be properly set up to one: the one that goes into COS.
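To make the inversion concrete, the agent side ends up with a config roughly along the lines of the sketch below. The URL and names here are placeholders only, and in practice the grafana-agent charm renders the real config for you:

```yaml
# Rough sketch only: scraping stays local to the workload,
# and everything is pushed to a single remote-write target in COS.
metrics:
  configs:
    - name: agent
      scrape_configs:
        - job_name: local-workload
          static_configs:
            - targets: ["localhost:9100"]   # placeholder local exporter
      remote_write:
        - url: http://<cos-prometheus>/api/v1/write   # the one endpoint to open up
```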
@0x12b - I get that totally, but why discontinue a feature that would make a lot of sense in the cases where PULL/PUSH have different implications on how metrics are collected?
I mean, why not support both methods rather than confining the solution to a single method?
So, this guide is getting really good. But there is a missing piece on how to add and test the “ALERTING” part of COS.
I’m starting to explore how this would work with the library, and also with Prometheus and Loki - which isn’t covered by the guide but is definitely needed.
- How to set up some initial alert rules for Loki and Prometheus, and how to use the juju-topology with them.
- How to test the alerts.
- Possibly some hints on how to integrate with - let’s say - PagerDuty, webhooks or whatever.
- How can I monitor, let’s say, an individual UNIT as opposed to a whole APPLICATION in the alert rules? I’m struggling to understand how much I need to add to the rules myself versus what magic juju(-topology) adds to them. (I’ve put a rough sketch of where I’ve got to at the end of this post.)
This would be fairly complete if this were covered here @tmihoc @jose
I’ll be happy to assist in peer-reviewing the whole thing.
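For what it’s worth, here is the kind of rule file I’ve been experimenting with while trying to answer that last bullet myself. This is purely a sketch based on my (possibly wrong) understanding that the charm libraries inject the juju_model/juju_application topology matchers into the expression, while each scraped series still carries its own juju_unit label - so a plain per-series rule is effectively per-unit, and you aggregate to get an application-level alert:

```yaml
# e.g. src/alert_rules/prometheus/zookeeper.rule (file name is just an example)
groups:
  - name: zookeeper-availability
    rules:
      - alert: ZooKeeperUnitDown
        # Evaluated per series, so this effectively fires once per unit.
        expr: up == 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Unit {{ $labels.juju_unit }} has been down for 5 minutes."
      - alert: ZooKeeperApplicationDown
        # Aggregating juju_unit away turns it into an application-level alert.
        expr: max by (juju_application) (up) == 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "All units of {{ $labels.juju_application }} are down."
```

Corrections very welcome - this is exactly the part I’d love the guide to pin down.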
Hi…
It seems pydantic 2 is out now… And cos_agent does not build with it, so we have to pin the pydantic version to something like 1.10.10…
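(In case it helps others: pinning it in the charm’s requirements.txt, e.g. `pydantic >= 1.10, < 2.0`, seems like a reasonable stopgap until the lib supports v2 - assuming your charm manages its dependencies there.)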
Yes!! Let’s follow this issue here: https://github.com/canonical/grafana-agent-k8s-operator/issues/214
In “Step 2”, we are supposed to get the cos_agent lib from grafana_agent with the CLI:

```
charmcraft fetch-lib charms.grafana_agent.v0.cos_agent
```

This command only pulls the file `lib/charms/grafana_agent/v0/cos_agent.py`. There are no such files as `metadata.yaml` and `src/charm.py` that we are supposed to modify.
I cannot proceed with this tutorial.
Would anyone know if I missed something or if this tutorial needs to be updated?
Hi @gbeuzeboc
Yes, `charmcraft fetch-lib charms.grafana_agent.v0.cos_agent` only fetches the lib you need to add to your charm.

In Step 2 you can see that you need to add this to `metadata.yaml`:

```yaml
cos-agent:
  interface: cos_agent
```

and this to `charm.py`:

```python
self._grafana_agent = COSAgentProvider(
    self,
    metrics_endpoints=[
        {"path": "/metrics", "port": NODE_EXPORTER_PORT},
        {"path": "/metrics", "port": JMX_PORT},
        {"path": "/metrics", "port": METRICS_PROVIDER_PORT},
    ],
    metrics_rules_dir="./src/alert_rules/prometheus",
    logs_rules_dir="./src/alert_rules/loki",
    dashboard_dirs=["./src/grafana_dashboards"],
    log_slots=["charmed-zookeeper:logs"],
)
```

Is there anything missing?
@jose, I am not sure where the `metadata.yaml` and `charm.py` come from. Are they the ones from the Zookeeper charm?
I was expecting a complete bundle to be deployed on the client machine, with zookeeper and the agent.
I guess that is it: we have to modify the zookeeper charm to make it connectable to the cos-agent, and then deploy the new zookeeper charm and the grafana-agent.
Sorry, this wasn’t clear to me.
Perhaps I’m missing something, but the `metadata.yaml` and `charm.py` belong to the charm you want to integrate with the grafana-agent charm.
Zookeeper is just the charm we use as a guinea pig to explain how to integrate Grafana Agent into any machine charm.
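So the rough flow is: pack and deploy your modified charm, deploy the machine grafana-agent charm next to it, and integrate the two over the cos-agent endpoint. Something along these lines (the local charm path and application names are just examples):

```bash
# Example only - adapt names/paths to your own charm.
juju deploy ./zookeeper.charm zookeeper          # your modified charm, packed with `charmcraft pack`
juju deploy grafana-agent                        # subordinate machine charm
juju relate zookeeper:cos-agent grafana-agent    # `juju integrate` on Juju 3.x
```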
Thank you for your reply. For now I am just learning about COS, so in my case I will use the zookeeper charm for the example. I wasn’t expecting that the charm we want to monitor needed to be adapted to the grafana-agent. Now I understand, thank you for your replies.
@gbeuzeboc - we are also using the machine version of the grafana-agent and also have a COS Lite stack up, which @marcus has been spearheading from our end (Dwellir). Let us know if we can help or collaborate. We have run into several challenges, but we’ll get there.
I’ve been using this guide to get a basic integration of Landscape Server with COS Lite (mainly just logs through Loki). We have some logs in `/var/log/landscape-server` that we’d like to scrape. In the `CosAgentProvider` object parameters, I saw ways to configure custom metrics endpoints, but not custom log directories. Does anyone know if it’s possible to configure custom log scraping directories from this object? I’ve had success configuring the `grafana-agent.yaml` file on the application server directly, but I would need to perform this configuration somehow in the `CosAgentProvider` so that the Landscape Server charm is fully configured with COS Lite out-of-the-box.
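For reference, this is roughly the hand-edited section of `grafana-agent.yaml` that works for us today - the Loki URL is a placeholder and the job/labels are just what we chose - and it’s exactly the part I’d like the `CosAgentProvider` to generate out-of-the-box:

```yaml
# Hand-added logs section (sketch; the charm normally manages this file).
logs:
  configs:
    - name: landscape
      clients:
        - url: http://<cos-loki>/loki/api/v1/push   # placeholder Loki push endpoint
      scrape_configs:
        - job_name: landscape-server
          static_configs:
            - targets: [localhost]
              labels:
                job: landscape-server
                # Promtail-style glob for the directory we want scraped.
                __path__: /var/log/landscape-server/*.log
```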
We will need to change the behaviour so that it also traverses subpaths. However, given that we want the snap to be strictly confined, getting logs from arbitrary locations won’t be an option. Either `/var/log` or through logging plugs/slots.
About @spencerrunde’s question, there is an ongoing conversation in our public chat:
https://chat.charmhub.io/charmhub/pl/u3h1futjs7gw3n9uab1xujwbdc