How to set up remote Elasticsearch monitoring of an Elasticsearch cluster
| Key | Value |
|---|---|
| Summary | Manage and monitor Elasticsearch by exporting your cluster data to another cluster to be visualized, analyzed, and monitored. |
| Categories | deploy-applications |
| Difficulty | 4 |
| Author | James Beedy, Omnivector Solutions |
Introduction
It is often useful to manage and monitor Elasticsearch by exporting your cluster data to another cluster, where it can be visualized, analyzed, and monitored.
Architecture
To do this, you need two separate Elasticsearch clusters: a primary cluster that acts as your datastore, and a secondary cluster that holds your monitoring/management data.
Kibana will connect to the secondary cluster, the one containing the monitoring data of the primary. This will allow you to monitor and manage your primary cluster using Kibana.
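At a high level, the data flow looks like this (a conceptual sketch, not output from any tool):
es-primary (3 data nodes)
    |
    |  x-pack monitoring exporter (HTTP)
    v
es-secondary (monitoring datastore)  <---  Kibana (monitoring UI)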
Deploy applications
We’ll use Juju to deploy our base search appliance:
$ juju add-model t1
$ juju deploy -n 3 cs:~omnivector/elasticsearch es-primary --series bionic
$ juju deploy cs:~omnivector/elasticsearch es-secondary --series bionic
$ juju deploy cs:~omnivector/kibana kibana --series bionic
# Workaround for a charm bug; a fix will land in the charm itself later.
$ juju run --application es-primary 'mkdir -p /etc/elasticsearch/discovery-file'
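While the units come up, it can be handy to watch the model converge (watch is a standard Linux utility; adjust the interval to taste):
$ watch -n 5 juju status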
Relate
$ juju relate kibana:elasticsearch es-primary:client
$ juju relate kibana:elasticsearch es-secondary:client
Expose
$ juju expose es-primary
$ juju expose kibana
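With es-primary exposed, you can sanity-check it from your workstation. The <es-primary-ip> placeholder below stands for any es-primary unit's public address from juju status; this assumes the charm's default, unauthenticated HTTP endpoint:
$ curl -s "http://<es-primary-ip>:9200/_cluster/health?pretty"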
Intended model status
Running the juju status command should provide output similar to the following:
Model  Controller  Cloud/Region        Version  SLA          Timestamp
t1     k1          aws/ap-southeast-2  2.9.7.1  unsupported  15:44:13+10:00

App           Version  Status  Scale  Charm          Store       Channel  Rev  OS      Message
es-primary    7.13.2   active      3  elasticsearch  charmstore  stable    40  ubuntu  Elasticsearch Running - 3 x all nodes
es-secondary  7.13.2   active      1  elasticsearch  charmstore  stable    40  ubuntu  Elasticsearch Running - 1 x all nodes
kibana        7.13.2   active      1  kibana         charmstore  stable    12  ubuntu  Kibana available

Unit             Workload  Agent  Machine  Public address  Ports              Message
es-primary/0*    active    idle   0        13.211.162.37   9200/tcp,9300/tcp  Elasticsearch Running - 3 x all nodes
es-primary/1     active    idle   1        13.210.135.157  9200/tcp,9300/tcp  Elasticsearch Running - 3 x all nodes
es-primary/2     active    idle   2        3.25.181.245    9200/tcp,9300/tcp  Elasticsearch Running - 3 x all nodes
es-secondary/0*  active    idle   3        52.65.195.228   9200/tcp,9300/tcp  Elasticsearch Running - 1 x all nodes
kibana/0*        active    idle   4        3.25.191.251    80/tcp             Kibana available

Machine  State    DNS             Inst id              Series  AZ               Message
0        started  13.211.162.37   i-0dbcfdc188b3efed8  bionic  ap-southeast-2a  running
1        started  13.210.135.157  i-0a71104c710fe220e  bionic  ap-southeast-2b  running
2        started  3.25.181.245    i-09ff45f182bfae9c6  bionic  ap-southeast-2c  running
3        started  52.65.195.228   i-08b0afa35d7931c8f  bionic  ap-southeast-2a  running
4        started  3.25.191.251    i-0b940e4aed0f9788a  bionic  ap-southeast-2b  running

Relation provider    Requirer              Interface      Type     Message
es-primary:client    kibana:elasticsearch  elasticsearch  regular
es-primary:member    es-primary:member     elasticsearch  peer
es-secondary:client  kibana:elasticsearch  elasticsearch  regular
es-secondary:member  es-secondary:member   elasticsearch  peer

Storage Unit    Storage id  Type        Pool    Mountpoint  Size    Status    Message
es-primary/0    data/0      filesystem  rootfs              7.7GiB  attached
es-primary/1    data/1      filesystem  rootfs              7.7GiB  attached
es-primary/2    data/2      filesystem  rootfs              7.7GiB  attached
es-secondary/0  data/3      filesystem  rootfs              7.7GiB  attached
Given the above environment, the following steps will take us from this initial deployment to one where the components are configured as described in the Architecture section.
Stop Elasticsearch and Kibana
Duration: 1:00
Stopping the services lets us make aggressive configuration changes outside of the charms’ hook execution cycle:
juju run --application es-primary "service elasticsearch stop"
juju run --application es-secondary "service elasticsearch stop"
juju run --application kibana "service kibana stop"
# Workaround for a charm bug; a fix will land in the charm itself later.
juju run --application kibana "sudo chown -R kibana:kibana /etc/kibana"
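Before changing any configuration, it is worth confirming that the services really are stopped; systemctl is-active prints "inactive" for a stopped service, and the || true keeps juju run from treating the non-zero exit code as a failure:
juju run --application es-primary "systemctl is-active elasticsearch || true"
juju run --application es-secondary "systemctl is-active elasticsearch || true"
juju run --application kibana "systemctl is-active kibana || true"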
Enable monitoring on the primary cluster
Define a configuration file, es-primary-custom-config.yaml, with the following data:
# es-primary-custom-config.yaml
xpack.monitoring.enabled: true
xpack.monitoring.collection.enabled: true
xpack.monitoring.exporters:
  es-secondary:
    type: http
    host: ["http://<es-secondary-ip>:9200"]
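Fill in the <es-secondary-ip> placeholder with the address of the es-secondary/0 unit. One way to look it up is via the unit's standard Juju hook tool (use the public address from juju status instead if the two clusters are not on the same network):
juju run --unit es-secondary/0 'unit-get private-address'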
Apply configuration changes
juju config es-primary custom-config="$(cat es-primary-custom-config.yaml)"
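You can read the option back to confirm the new value was applied:
juju config es-primary custom-config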
Disable self-monitoring on the secondary cluster
The secondary cluster only stores the monitoring data shipped to it by the primary, so monitoring collection is switched off on the secondary itself:
juju config es-secondary custom-config="xpack.monitoring.enabled: false"
Restart Elasticsearch
Duration: 1:00
Restarting the search servers will apply their new configuration:
juju run --application es-primary "service elasticsearch start"
juju run --application es-secondary "service elasticsearch start"
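Once Elasticsearch is back up, a quick health check against a primary unit should report the cluster status (again assuming the charm's default, unauthenticated endpoint on localhost):
juju run --unit es-primary/0 "curl -s 'http://localhost:9200/_cluster/health?pretty'"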
Re-configure Kibana
Duration: 5:00
Delete old indices from secondary node
juju run --unit es-secondary/0 'curl -XDELETE http://localhost:9200/.monitoring-*'
juju run --unit es-secondary/0 'curl -XDELETE http://localhost:9200/.kibana*'
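To confirm the deletion, and later to check that fresh monitoring indices are arriving from the primary's exporter, list the matching indices; immediately after deletion this returns nothing:
juju run --unit es-secondary/0 "curl -s 'http://localhost:9200/_cat/indices/.monitoring-*?v'"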
Restart Kibana
juju run --application kibana "service kibana start"
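Kibana exposes a status API you can probe to confirm it is serving again; <kibana-ip> stands for the kibana unit's public address from juju status:
curl -s http://<kibana-ip>/api/status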
Verify
Duration: 3:00
At this point you should be able to log into the Kibana web UI and verify that the es-primary nodes appear there.
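If the monitoring view looks empty, you can check whether monitoring documents are actually accumulating on the secondary cluster by counting them directly:
juju run --unit es-secondary/0 "curl -s 'http://localhost:9200/.monitoring-es-*/_count?pretty'"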
Finish
Duration: 1:00
The upcoming release of the Omnivector Elasticsearch charms will include support for 7.x, as well as built-in automation of this functionality.
Thanks!