We’ll use Juju to deploy our base search appliance:
$ juju deploy -n 3 ~omnivector/elasticsearch es-primary
$ juju deploy ~omnivector/elasticsearch es-secondary
$ juju deploy ~omnivector/kibana kibana
Next, expose the applications so they are reachable from outside the model:
$ juju expose es-primary
$ juju expose kibana
Intended model status
Running the juju status command should produce output similar to the following:
Model Controller Cloud/Region Version SLA Timestamp
es-offsite-demo-00 pdl-aws-prod.peopledatalabs.com aws/us-west-2 2.7.0 unsupported 20:55:58Z
App Version Status Scale Charm Store Rev OS Notes
es-primary 6.8.8 active 3 es-no-storage jujucharms 1 ubuntu exposed
es-secondary 6.8.8 active 1 es-no-storage jujucharms 1 ubuntu
kibana 6.8.8 active 1 kibana jujucharms 7 ubuntu exposed
Unit Workload Agent Machine Public address Ports Message
es-primary/0* active idle 0 172.31.104.121 9200/tcp,9300/tcp Elasticsearch Running - 3 x all nodes
es-primary/1 active idle 1 172.31.102.65 9200/tcp,9300/tcp Elasticsearch Running - 3 x all nodes
es-primary/2 active idle 2 172.31.103.208 9200/tcp,9300/tcp Elasticsearch Running - 3 x all nodes
es-secondary/0* active idle 3 172.31.103.6 9200/tcp,9300/tcp Elasticsearch Running - 1 x all nodes
kibana/0* active idle 4 172.31.105.4 80/tcp Kibana available
Machine State DNS Inst id Series AZ Message
0 started 172.31.104.121 i-06593c260a1d873ea bionic us-west-2c running
1 started 172.31.102.65 i-0dba8479521611179 bionic us-west-2a running
2 started 172.31.103.208 i-01a1edc606ec53c79 bionic us-west-2b running
3 started 172.31.103.6 i-082f88ee5b20007aa bionic us-west-2b running
4 started 172.31.105.4 i-003b552c10e581d97 bionic us-west-2d running
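Once the units report active, you can sanity-check that the cluster actually formed by querying Elasticsearch's cluster health API directly. This is a sketch, not part of the deploy itself: it assumes you have network access to the private addresses shown in the status output above (e.g. from a bastion or a machine inside the VPC).

```shell
# Query cluster health on one of the es-primary units (port 9200, per the
# status output above). The address is taken from the example output and
# will differ in your environment.
curl -s http://172.31.104.121:9200/_cluster/health?pretty
```

A healthy three-node cluster should report `"number_of_nodes" : 3` and a `"status"` of `green` or `yellow`.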
Given the above environment, a short series of operations will take us from this initial deployment to one where the components are configured as described above.
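As one illustrative step, Kibana needs a relation to an Elasticsearch cluster before it can serve dashboards. The commands below are a hedged sketch: they assume these charms expose compatible default relation endpoints (the unqualified relation name is an assumption, and `add-relation` is the form used by Juju 2.7, the version shown in the status output).

```shell
# Relate Kibana to the primary Elasticsearch cluster so it has a backend
# to query (assumes compatible default endpoints on both charms).
juju add-relation kibana es-primary

# Re-run status to watch the relation settle.
juju status
```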