Set up remote monitoring of an Elasticsearch cluster

Introduction

Oftentimes it is useful to manage and monitor Elasticsearch by exporting your cluster's monitoring data to another cluster, where it can be visualized, analyzed, and alerted on.

Architecture

To do this, you need two separate Elasticsearch clusters: a primary datastore, and a second cluster to act as your monitoring/management datastore.

Kibana will connect to the second of the two clusters, the one containing the monitoring data exported by the primary. This lets you monitor and manage your primary cluster from Kibana.
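In practice this means Kibana's backend points at the secondary cluster rather than the primary. The kibana charm handles this wiring (presumably via its relation to es-secondary), but the resulting configuration is roughly equivalent to the following kibana.yml snippet (a sketch, assuming Kibana 6.6+, where elasticsearch.hosts replaced elasticsearch.url):

# kibana.yml (illustrative; rendered by the charm)
elasticsearch.hosts: ["http://<es-secondary-ip>:9200"]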

Deploy applications

We’ll use Juju to deploy our base search applications:

$ juju deploy -n 3 ~omnivector/elasticsearch es-primary
$ juju deploy ~omnivector/elasticsearch es-secondary
$ juju deploy ~omnivector/kibana kibana

Expose

$ juju expose es-primary
$ juju expose kibana

Intended model status

Running the juju status command should provide output similar to the following:

Model               Controller                       Cloud/Region   Version  SLA          Timestamp
es-offsite-demo-00  pdl-aws-prod.peopledatalabs.com  aws/us-west-2  2.7.0    unsupported  20:55:58Z

App           Version  Status  Scale  Charm          Store       Rev  OS      Notes
es-primary    6.8.8    active      3  es-no-storage  jujucharms    1  ubuntu  exposed
es-secondary  6.8.8    active      1  es-no-storage  jujucharms    1  ubuntu
kibana        6.8.8    active      1  kibana         jujucharms    7  ubuntu  exposed

Unit             Workload  Agent  Machine  Public address  Ports              Message
es-primary/0*    active    idle   0        172.31.104.121  9200/tcp,9300/tcp  Elasticsearch Running - 3 x all nodes
es-primary/1     active    idle   1        172.31.102.65   9200/tcp,9300/tcp  Elasticsearch Running - 3 x all nodes
es-primary/2     active    idle   2        172.31.103.208  9200/tcp,9300/tcp  Elasticsearch Running - 3 x all nodes
es-secondary/0*  active    idle   3        172.31.103.6    9200/tcp,9300/tcp  Elasticsearch Running - 1 x all nodes
kibana/0*        active    idle   4        172.31.105.4    80/tcp             Kibana available

Machine  State    DNS             Inst id              Series  AZ          Message
0        started  172.31.104.121  i-06593c260a1d873ea  bionic  us-west-2c  running
1        started  172.31.102.65   i-0dba8479521611179  bionic  us-west-2a  running
2        started  172.31.103.208  i-01a1edc606ec53c79  bionic  us-west-2b  running
3        started  172.31.103.6    i-082f88ee5b20007aa  bionic  us-west-2b  running
4        started  172.31.105.4    i-003b552c10e581d97  bionic  us-west-2d  running

Given the above environment, the following set of operations will take us from this initial deployment to one where the components are configured as described above.

Stop Elasticsearch and Kibana

Duration: 1:00

Stopping services will enable aggressive configuration changes outside of the charms’ hook execution cycle:

juju run --application es-primary "service elasticsearch stop"

juju run --application es-secondary "service elasticsearch stop"

juju run --application kibana "service kibana stop"
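Before making changes, you can confirm the services are actually down (a quick check; service status returns non-zero for a stopped service, hence the || true):

juju run --application es-primary "service elasticsearch status || true"
juju run --application es-secondary "service elasticsearch status || true"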

Enable monitoring on the primary cluster

Define a configuration file, es-primary-custom-config.yaml, with the following data:

# es-primary-custom-config.yaml
xpack.monitoring.enabled: true
xpack.monitoring.collection.enabled: true
xpack.monitoring.exporters:
  es-secondary:
    type: http
    host: ["http://<es-secondary-ip>:9200"]

Apply configuration changes

juju config es-primary custom-config="$(cat es-primary-custom-config.yaml)"
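If you want to double-check the rendered result, you can inspect the node's configuration file. This assumes the charm writes custom-config into the stock /etc/elasticsearch/elasticsearch.yml path, which may differ:

juju run --unit es-primary/0 "grep -A 4 'xpack.monitoring' /etc/elasticsearch/elasticsearch.yml"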

Disable self-monitoring on the secondary node

The secondary cluster only stores the monitoring data shipped to it by the primary, so its own monitoring is switched off:

juju config es-secondary custom-config="xpack.monitoring.enabled: false"
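You can read the value back to confirm it took:

juju config es-secondary custom-config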

Restart Elasticsearch

Duration: 1:00

Restarting the search servers will apply their new configuration:

juju run --application es-primary "service elasticsearch start"
juju run --application es-secondary "service elasticsearch start"
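Once both clusters are back up, the exporter on the primary should begin shipping monitoring documents to the secondary within its collection interval (10 seconds by default). A quick way to confirm this from any machine that can reach the clusters, where <es-primary-ip> stands for any es-primary unit address:

curl -s 'http://<es-primary-ip>:9200/_cluster/health?pretty'
curl -s 'http://<es-secondary-ip>:9200/_cat/indices/.monitoring-*?v'

The second command should list freshly created .monitoring-es-* indices.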

Re-configure Kibana

Duration: 5:00

Delete old indices from the secondary node

With Kibana stopped, clear out any stale monitoring and Kibana indices on the secondary so they are recreated cleanly:

curl -XDELETE http://<es-secondary-ip>:9200/.monitoring-*
curl -XDELETE http://<es-secondary-ip>:9200/.kibana*
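Listing the remaining indices confirms the stale ones are gone (new .monitoring-* indices will reappear as the primary keeps exporting, which is expected):

curl -s 'http://<es-secondary-ip>:9200/_cat/indices?v'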

Restart Kibana

juju run --application kibana "service kibana start"
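Kibana can take a minute to come up. Its status API is a convenient check; this assumes the charm serves Kibana on port 80, as shown in the juju status output above, with <kibana-ip> being the kibana/0 address:

curl -s 'http://<kibana-ip>/api/status'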

Verify

Duration: 3:00

At this point you should be able to log into the Kibana web UI and verify that the es-primary nodes appear in the Monitoring app.
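If the nodes don't show up, you can verify from the command line that monitoring documents are accumulating on the secondary (the count should grow over successive calls):

curl -s 'http://<es-secondary-ip>:9200/.monitoring-es-*/_count?pretty'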

Finish

Duration: 1:00

The upcoming release of the Omnivector Elasticsearch charms will include support for 7.x as well as support for this functionality as a built in automation.

Thanks!


Thanks @jamesbeedy. The threads in this topic support a “magic” Markdown syntax that enables them to be rendered as nice HTML on the Juju website.

I’ll add a task to my list to tweak this one. Hopefully you won’t mind the edits.


That’s awesome @timClicks - thank you!

This bundle contains support for 7.x, xpack, security, remote monitoring (what is mentioned above), AWS direct attached storage (i3en), and more. We are going to be putting more cycles into the elastic.co stuff here shortly, including the documentation. These are just the early bits…

It would be really cool to get some feedback on the charm code here, as I’m the only one who has read it thus far. I’ve been pretty low on bandwidth, but managed to get a few cycles in and polish this charm up a bit. I used the primitives available to the best of my ability, but it would be great for them to get a second glance. Any takers?