The grafana-cloud-integrator charm can be configured to forward telemetry not only to Grafana Cloud, but also to any Loki- (or Prometheus-) compatible endpoint. This is useful when running the stack on-premise isn’t desired, or when additional off-premise storage is wanted for redundancy.
Vector is a lightweight, ultra-fast observability tool written in Rust by the Datadog team. It excels at building declarative observability pipelines on the fly. In this example, we’ll use it to send our Loki logs to Elasticsearch.
Using the grafana-cloud-integrator charm, we can have grafana-agent send logs to Vector, which transforms them into Elasticsearch-compatible, non-protobuf log lines, all with a simple setup!
Generate a FileDescriptorSet for Loki
When setting up Vector, you should configure it to expose an HTTP server using the protobuf definitions of Loki’s Push API. It’s vital that this protobuf file matches the one used by the version of Grafana Agent you’re running.
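If you need a copy of those definitions, one option is to fetch push.proto straight from the Loki repository at a ref matching your agent’s vendored Loki version. This is a hedged sketch: both the tag and the pkg/push/push.proto path below are assumptions that vary by version (older releases keep the Push API in pkg/logproto/logproto.proto):
$ LOKI_REF="v2.9.4"  # hypothetical tag; match it to your Grafana Agent's Loki version
$ curl -fsSL -o push.proto \
    "https://raw.githubusercontent.com/grafana/loki/${LOKI_REF}/pkg/push/push.proto"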
Once you have the definitions, you’ll need to generate a FileDescriptorSet from the .proto file. This can be accomplished using the following command:
$ protoc \
    --include_imports \
    --descriptor_set_out /path/to/write/proto.desc \
    --proto_path /path/to/proto/definition/ \
    /path/to/proto/definition/push.proto
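As an optional sanity check, you can decode the descriptor back into text form and confirm it contains the PushRequest message. This assumes protoc’s bundled well-known types (google/protobuf/descriptor.proto) are available on your include path:
$ protoc --decode=google.protobuf.FileDescriptorSet \
    google/protobuf/descriptor.proto < /path/to/write/proto.desc \
    | grep -B1 -A1 'PushRequest'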
You will then use the resulting FileDescriptorSet, or .desc file, to inform Vector how to decode the payload.
Vector configuration for Elasticsearch
You’ll need to have the grafana-agent machine charm up and running, with a subordinate relation to the charm you’re observing. If you don’t know how, you can find more information in the grafana-agent charm’s documentation.
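For reference, that prerequisite looks roughly like the following, where the ubuntu charm is just a stand-in for whichever principal charm you’re actually observing:
juju deploy ubuntu
juju deploy grafana-agent
juju relate grafana-agent ubuntu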
Use the FileDescriptorSet created earlier (proto.desc in the example above) to configure a Vector “source”:
[api]
enabled = true # completely optional, but allows you to use `vector top` and such.
[sources.in]
type = "http_server"
address = "0.0.0.0:8080" # use whatever port you prefer, this is your endpoint.
decoding.codec = "protobuf"
decoding.protobuf.desc_file = "/path/to/write/proto.desc"
decoding.protobuf.message_type = "logproto.PushRequest"
[sinks.my_elasticsearch]
inputs = ["in"]
type = "elasticsearch"
endpoints = [
"https://user:password@example.com"
]
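Before going further, it’s worth checking that the configuration parses, then starting Vector with it (the path below is an assumption; point it at wherever you keep your config):
$ vector validate /etc/vector/vector.toml
$ vector --config /etc/vector/vector.toml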
There are multiple additional configuration options for the elasticsearch sink available in the official Vector docs.
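For instance, you could pin the target index; a hedged sketch, where the option names follow the Vector docs but the values are assumptions:
# appended to the [sinks.my_elasticsearch] table above
mode = "bulk"                      # the default; "data_stream" is the alternative
bulk.index = "loki-logs-%Y.%m.%d"  # hypothetical index name template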
Once that is set up, you’ll simply need to deploy grafana-cloud-integrator, point the charm’s Loki URL at the Vector endpoint from above, and relate it to grafana-agent:
juju deploy grafana-cloud-integrator cloud
juju config cloud loki-url=http://some.domain.name:8080
juju relate grafana-agent cloud
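If you want to confirm the relation settled, juju status can show it (the exact output depends on your model):
juju status --relations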
Logs will then seamlessly start appearing in your Elasticsearch!
Vector configuration for Splunk
Same as for Elasticsearch, except we need a “sink” section for Splunk; note that the splunk_hec_logs sink also expects an HEC token and an encoding codec, for example:
[sinks.my_splunk]
type = "splunk_hec_logs"
inputs = [ "in" ]
endpoint = "https://example.com"
default_token = "${SPLUNK_HEC_TOKEN}" # your Splunk HEC token
encoding.codec = "json"
There are multiple additional configuration options for the splunk_hec_logs sink available in the official Vector docs.
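For example, to route events to a specific Splunk index and sourcetype (a hedged sketch; both values below are assumptions):
# appended to the [sinks.my_splunk] table above
index = "main"            # hypothetical Splunk index
sourcetype = "loki:logs"  # hypothetical sourcetype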