Get started with Juju

Welcome to Juju! This tutorial is your entrypoint into the Juju universe.

In this tutorial you will learn all the things that you need to know to start deploying, integrating, and managing applications with Juju.


What you’ll need:

  • A workstation, e.g., a laptop, with sufficient resources to launch a virtual machine with 4 CPUs, 8 GB RAM (4 GB seems to work as well), and 30 GB disk space.

What you’ll do:

  1. Get acquainted with Juju
  2. Get comfortable with Juju

Setup: Create your test environment

When you’re trying things out, it’s good to be in an isolated environment, so you don’t have to worry too much about cleanup. It’s also nice if you don’t need to bother too much with setup. In the Juju world you can get both by spinning up an Ubuntu virtual machine (VM) with Multipass, specifically, using their Juju-ready charm-dev blueprint.

Install Multipass: Linux | macOS | Windows. On Linux (assumes you have snapd):

sudo snap install multipass

Use Multipass to launch an Ubuntu VM with the charm-dev blueprint (--memory 4G seems to work as well, if you’d rather do that):

This step may take a few minutes to complete (e.g., 10 mins).

This is because the command downloads, installs, updates, and configures a number of packages, and the speed will be affected by network bandwidth (not just your own, but also that of the package sources).

However, once it’s done, you’ll have everything you’ll need – all in a nice isolated environment that you can clean up easily.

multipass launch --cpus 4 --memory 8G --disk 30G --name tutorial-vm charm-dev

Use multipass shell tutorial-vm to open a shell into the VM. Sample session:

$ multipass shell tutorial-vm
Welcome to Ubuntu 22.04.3 LTS (GNU/Linux 5.15.0-87-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage


Expanded Security Maintenance for Applications is not enabled.

0 updates can be applied immediately.

Enable ESM Apps to receive additional future security updates.
See https://ubuntu.com/esm or run: sudo pro status


Last login: Mon Oct 30 13:59:02 2023 from 10.238.98.1
ubuntu@tutorial-vm:~$ 

From here onwards, type all commands into this VM shell.

Congratulations, your isolated, Juju-ready test environment is ready!

At any point:
  • To exit the shell, press Ctrl+D or type exit.
  • To stop the VM, run multipass stop tutorial-vm.
  • To restart the VM and re-open a shell into it, run multipass shell tutorial-vm again.

See more: Multipass (or run snap info multipass for a quick preview)

Part 1: Get acquainted with Juju

In this part of the tutorial you will get acquainted with Juju by running a number of Juju read commands to inspect the setup that your Multipass charm-dev-blueprinted VM has achieved for you – from preparing a cloud and connecting it to Juju all the way to having an application deployed on that cloud.

Prepare your cloud

Your VM comes with MicroK8s preinstalled. Run microk8s version to verify. Sample session:

ubuntu@tutorial-vm:~$ microk8s version
MicroK8s v1.27.6 revision 5959

You’re all set: MicroK8s is a small, fast, secure, certified Kubernetes distribution and Juju will be able to use it as a cloud.

Look around:
  • Run snap info microk8s.
  • Run microk8s status.
  • Inspect the .kube/config file: cat ~/.kube/config.
  • Try microk8s kubectl. You won’t need it once you have Juju, but it’s there anyway.

See more: MicroK8s (or run snap info microk8s for a quick preview)

Install the Juju CLI client

Your VM comes with the Juju CLI client preinstalled. Run juju version to verify. Sample session:

ubuntu@tutorial-vm:~$ juju version
3.1.6-genericlinux-amd64

Look around:

To get a quick preview of Juju’s capabilities, run juju help commands. Use juju help commands | grep <keyword> to get a quick sense of the commands related to a particular keyword (e.g., “secret”). Run juju help <command> to find out more about a specific command.
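For example, a quick sample session along these lines (no output shown here; the exact command list varies by Juju version):

ubuntu@tutorial-vm:~$ juju help commands | grep secret
# Should list any commands whose summary mentions secrets.
ubuntu@tutorial-vm:~$ juju help deploy
# Shows usage, flags, and examples for the deploy command.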

See more: How to install the Juju CLI client

Add your cloud definition to Juju

Juju has automatically used your MicroK8s’s .kube/config file to define a cloud called microk8s.

  • Run juju clouds to verify. Sample session:
ubuntu@tutorial-vm:~$ juju clouds
Only clouds with registered credentials are shown.
There are more clouds, use --all to see them.

Clouds available on the controller:
Cloud     Regions  Default    Type
microk8s  1        localhost  k8s  

Clouds available on the client:
Cloud      Regions  Default    Type  Credentials  Source    Description
localhost  1        localhost  lxd   1            built-in  LXD Container Hypervisor
microk8s   1        localhost  k8s   1            built-in  A Kubernetes Cluster
  • Run juju show-cloud microk8s to find out more.

Look around:
  1. Revisit the output for juju clouds. Note that there are in fact two clouds whose default region is given as localhost – our microk8s, a k8s-type cloud, but also localhost, an lxd-type cloud. LXD is a system container and virtual machine manager, and Juju can use it as a cloud as well. Because both are built-in clouds, Juju has retrieved their cloud definitions automatically.
  2. Run juju clouds --all. Note that the list of “Clouds available on the client” is now much longer. Because they are public clouds with predictable details, Juju has retrieved their cloud definition automatically as well. Try juju show-cloud aws --client to find out what your Juju client knows about the Amazon EC2 cloud. Repeat this for other clouds. Note that these are not all the clouds that Juju can handle – for a more representative list see List of supported clouds. Juju reduces all this diversity to just one distinction – Kubernetes clouds (e.g., MicroK8s) vs. machine clouds (e.g., LXD) – and even then the user experience is almost exactly the same.
  3. One of the differences between a machine cloud and a Kubernetes cloud in Juju is in how you add the cloud definition to Juju – via juju add-cloud vs. juju add-k8s. Run juju help <command> to preview each; a minimal sketch follows below.
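If you later want to try this yourself, here is a minimal sketch (the cloud name my-k8s and the kubeconfig path are illustrative assumptions, not part of this tutorial's setup):

# Define an additional Kubernetes cloud from a kubeconfig file:
KUBECONFIG=~/other-cluster/config juju add-k8s my-k8s --client

# For a machine cloud, the interactive variant walks you through the definition:
juju add-cloud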

See more: How to manage clouds > Add a cloud > Kubernetes

Add your cloud credentials to Juju

Juju has automatically used your MicroK8s’s .kube/config file to define a credential for your microk8s cloud.

  • Run juju credentials. You should see a credential called microk8s. Sample session:
ubuntu@tutorial-vm:~$ juju credentials

Controller Credentials:
Cloud     Credentials
microk8s  microk8s

Client Credentials:
Cloud      Credentials
localhost  localhost*
microk8s   microk8s*
  • Run juju credentials --format yaml --show-secrets to view more.
  • Run cat ~/.local/share/juju/credentials.yaml to view the same from the source file.
  • Run juju show-credential microk8s microk8s --show-secrets to zoom in on the microk8s credential. (In the command, the first microk8s is your cloud and the second – your credential.)

Look around:
  1. Revisit the outputs for the commands above. Note the strong connection between a credential and the cloud. In Juju, a credential is always about access to a cloud.
  2. Revisit the outputs for the commands above. Note that Juju knows the credential for both the MicroK8s and the LXD cloud. This is because they are built-in clouds and Juju can retrieve your credentials for them automatically. This is not true for your other cloud credentials, of course – those you will have to provide yourself, as sketched below.
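A minimal sketch of providing a credential yourself, using Amazon EC2 as an example (the YAML file path is an illustrative assumption; without -f, the command prompts you interactively):

# Add a credential for the aws cloud interactively:
juju add-credential aws

# Or load it from a YAML file you have prepared:
juju add-credential aws -f ~/my-aws-creds.yaml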

See more: How to manage credentials > Add a credential > Kubernetes

Bootstrap a Juju controller into your cloud

Your VM is set up such that it has already used your microk8s cloud definition and credentials to bootstrap a Juju controller into your MicroK8s cloud for you.

  • Run juju controllers to verify. You should see a controller called microk8s. Sample session:
ubuntu@tutorial-vm:~$ juju controllers
Use --refresh option with this command to see the latest information.

Controller  Model        User   Access     Cloud/Region         Models  Nodes    HA  Version
lxd         welcome-lxd  admin  superuser  localhost/localhost       2      1  none  3.1.6  
microk8s*   controller   admin  superuser  microk8s/localhost        2      -     -  3.1.6  
  • Run juju show-controller microk8s to view more.

Look around:
  1. Revisit what you’ve seen of controllers so far. What do you think a controller is?

Answer:

Briefly put, a controller is your control plane for the cloud.


  2. Revisit the output for juju clouds or juju credentials. Notice the classification into client vs. controller. All this classification does is keep track of who is aware of a given cloud definition / credential – the client, the controller, or both. However, this simple distinction has important implications – can you guess which?

Answer:

It means you can have multiple clouds and credentials on the same controller, and this can help you lower costs: while you have to choose a specific cloud and credential to bootstrap a controller, being able to use different credentials for different groups of workloads (as we’ll see in a bit, ‘models’) on the same cloud means you can potentially split the bill, and not having to maintain a separate controller on each cloud means you can lower your total bill.


  3. Revisit the output for juju controllers. Note the User and Access columns. In Juju, a user is any person able to at least log in to a Juju controller. Run juju whoami, then juju show-user admin – as you can see, your user is called admin and has superuser access to the controller. Run juju help commands | grep user; trick question: which command allows you to share your controller with another user? (Hint: If you’re not sure, run juju help <command> to check.)
  4. Revisit the output for juju controllers. Note that there is a controller for the LXD cloud too. Run juju switch lxd to switch to the LXD cloud controller. (If you check juju controllers again, the asterisk should move to the lxd controller.) Run juju switch microk8s to switch back to the MicroK8s controller.
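For reference, on a machine that has not been pre-configured like this VM, you would bootstrap a controller yourself with a single command (the controller name my-controller is an illustrative assumption):

# Bootstrap a controller named my-controller into the microk8s cloud:
juju bootstrap microk8s my-controller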

See more: How to manage controllers > Bootstrap a controller

Add a model to your controller

Your VM is set up such that it has already used your cloud credential to add a Juju model to your microk8s controller for you.

  • Run juju models to verify. You should see a model called welcome-k8s. Sample session:
ubuntu@tutorial-vm:~$ juju models
Controller: microk8s

Model         Cloud/Region        Type        Status     Units  Access  Last connection
controller    microk8s/localhost  kubernetes  available  1       admin  just now
welcome-k8s*  microk8s/localhost  kubernetes  available  1       admin  15 hours ago
  • Run juju show-model welcome-k8s to find out more.

  • Run juju status. (Tip: Use the -m <[controller:]model> flag to quickly view the status of different models without having to switch first; for example, juju status -m lxd:welcome-lxd should show you that your LXD cloud’s analogous workload model is also empty.) This is the quickest way to view the state of your model – and everything else that you may deploy on it – at a glance.

Look around:
  1. Revisit what you’ve seen of models so far. What do you think a model is?

Answer:

A model is a logical abstraction. It denotes a workspace, a canvas where you deploy, integrate, and manage applications.


  2. Revisit the juju models output above. Note that our MicroK8s controller actually has two models – the welcome-k8s model you just explored and another model called controller. The welcome-k8s model is a typical ‘workload’ workspace added for you by the VM, and you can have as many of them as you want. In contrast, the controller model is a special workspace that you always get by default when you bootstrap a controller; it takes care of your controller, and you should almost never touch it. Still, you can switch to it with juju switch controller (run juju help switch to verify that this command really works for both controllers and models), peek into it with juju show-model controller, etc., as you would with any model.
  3. On a Kubernetes cloud, a Juju model corresponds to a Kubernetes namespace. Run microk8s kubectl get namespaces to verify – the output should show a namespace called welcome-k8s, for your welcome-k8s model, and also a namespace called controller-microk8s, for your controller model.
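Adding a further workload model of your own is a one-liner, and switching between models works just like switching between controllers (the model name my-model is an illustrative assumption):

# Add a model named my-model on the current controller:
juju add-model my-model

# Switch back to the tutorial's model when you're done looking around:
juju switch microk8s:welcome-k8s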

See more: How to manage models > Add a model

Deploy a charm

Whenever you bootstrap a controller into a cloud, Juju not only automatically adds a model for you – the controller model – but also automatically deploys a charm for you – the juju-controller charm, whose deployed instance is the controller application. This is true here as well. Run juju switch microk8s:controller to ensure you are on the controller model, then:

  • Run juju status to verify. This will show you the current status of the controller model, including the application it currently has on it – an application called controller. Sample session:
ubuntu@tutorial-vm:~$ juju status -m microk8s:controller
Model       Controller  Cloud/Region        Version  SLA          Timestamp
controller  microk8s    microk8s/localhost  3.1.6    unsupported  13:29:51+01:00

App         Version  Status  Scale  Charm            Channel     Rev  Address  Exposed  Message
controller           active      1  juju-controller  3.1/stable   14           no       

Unit           Workload  Agent  Address       Ports      Message
controller/0*  active    idle   10.1.170.137  37017/TCP  
  • Run juju show-application controller to view more, for example, to verify that the name of the charm that has delivered this application is indeed juju-controller.

Look around:
  1. Take a look at the official home of charms (including the juju-controller charm) – Charmhub. What do you think a charm is?

Answer:

A charm is software that delivers a workload along with its operations code so that it can be easily deployed, integrated, and otherwise managed with Juju.


  2. Revisit the outputs of the commands above. The juju-controller charm plays a unique role in a Juju deployment. However, it has the same basic features as most other charms:

    • It is published on Charmhub (see Charmhub | juju-controller) and, if you download and unzip it, it looks just like any other charm. (Try sudo apt install unzip. Then mkdir juju-controller; cd juju-controller; juju download juju-controller; unzip juju-controller_r14.charm; ls; cd .. – then do the same for any other charm you want to inspect. They all typically have a metadata.yaml file, a few other YAML files defining things like configurations, a src/charm.py file with Python operations code, etc.)
    • Its deployed instance is an application (see juju status, the controller application), and a running instance of the application is a unit (see juju status, the controller/0 unit), both of which have a deployment status (see juju status; active and idle are healthy statuses).
    • It does not have any special operations scripts, or ‘actions’ (see Charmhub or run juju actions controller) but it does have configuration options (see Charmhub or run juju config controller) and integration endpoints that support integrations with other applications (see Charmhub or run juju show-application controller), etc.

    That is because a charm (or a charm collection, also known as a ‘bundle’) can really be anything whose operations code it makes sense to reuse – a cluster (OpenStack, Kubernetes), a data platform (PostgreSQL, MongoDB, etc.), an observability stack (Canonical Observability Stack), an MLOps solution (Kubeflow), and so much more.

  3. On a Kubernetes cloud, a Juju unit corresponds to a pod. Run microk8s kubectl -n controller-microk8s get pods to verify – it should show a modeloperator... pod, which is always there for a model on a Kubernetes cloud, but also a controller-0 pod, which is the Kubernetes pod corresponding to the controller/0 unit in your juju status.


Part 2: Get comfortable with Juju

In this part of the tutorial you will get comfortable with Juju by running a number of Juju write commands to achieve a deployment consisting of a Mattermost chat service backed by a PostgreSQL database with TLS-encrypted traffic.

Before you begin:

In two additional terminal windows, open a shell into your Multipass VM (multipass shell tutorial-vm) and run:

  1. juju status --watch 1s to watch your deployment status evolve.

Things are all right if your App’s Status and your Unit’s Workload reach active and your Unit’s Agent reaches idle. See more: Status.

  2. juju debug-log to watch all the details behind your deployment status.

Especially useful when things don’t evolve as expected. In that case, feel free to write to us on Mattermost or Discourse.

Deploy the PostgreSQL charm

In your Multipass shell, run juju switch microk8s:welcome-k8s to ensure you are on your MicroK8s workload model.

Deploy the PostgreSQL charm for Kubernetes as below (--trust grants the charm the Kubernetes permissions it needs to operate):

juju deploy postgresql-k8s --channel 14/stable --trust

Sample successful output:

ubuntu@tutorial-vm:~$ juju status
Model        Controller  Cloud/Region        Version  SLA          Timestamp
welcome-k8s  microk8s    microk8s/localhost  3.1.6    unsupported  15:21:10+01:00

App                        Version  Status  Scale  Charm                      Channel    Rev  Address         Exposed  Message
postgresql-k8s             14.9     active      1  postgresql-k8s             14/stable  158  10.152.183.108  no       

Unit                          Workload  Agent  Address       Ports  Message
postgresql-k8s/0*             active    idle   10.1.170.140         

Congratulations, your PostgreSQL charm has deployed, and your PostgreSQL application is up and running with one unit!

Look around:
  1. Run juju show-unit postgresql-k8s/0 to examine the unit.
  2. Run juju storage to get further information about the storage associated with the unit.

See more: Charmhub | postgresql-k8s

Access your PostgreSQL application

First, get:

  • the host IP address of the PostgreSQL unit: retrieve it from juju status or juju show-unit (in the sample outputs above, 10.1.170.140);

  • a PostgreSQL username and password: we can use the internal, default user called operator and set a password for it using the set-password action. Sample session:

juju run postgresql-k8s/leader set-password username=operator password=mysecretpass
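To double-check that the new password is in place, the charm’s Charmhub page also documents a get-password action; if your revision includes it, a quick sketch:

juju run postgresql-k8s/leader get-password username=operator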


Now, use this information to access the PostgreSQL application:

First, ssh into the PostgreSQL unit (= Kubernetes container). Sample session:

ubuntu@tutorial-vm:~$ juju ssh --container postgresql postgresql-k8s/leader bash
root@postgresql-k8s-0:/#

Verify that psql is already installed. Sample session:

root@postgresql-k8s-0:/# psql --version
psql (PostgreSQL) 14.9 (Ubuntu 14.9-0ubuntu0.22.04.1)

Use psql to view a list of the existing databases. Sample session (make sure to use your own host and password):

root@postgresql-k8s-0:/# psql --host=10.1.170.140 --username=operator --password --list
Password: 
                               List of databases
   Name    |  Owner   | Encoding | Collate |  Ctype  |    Access privileges     
-----------+----------+----------+---------+---------+--------------------------
 postgres  | operator | UTF8     | C       | C.UTF-8 | operator=CTc/operator   +
           |          |          |         |         | backup=CTc/operator     +
           |          |          |         |         | replication=CTc/operator+
           |          |          |         |         | rewind=CTc/operator     +
           |          |          |         |         | monitoring=CTc/operator +
           |          |          |         |         | admin=c/operator
 template0 | operator | UTF8     | C       | C.UTF-8 | =c/operator             +
           |          |          |         |         | operator=CTc/operator
 template1 | operator | UTF8     | C       | C.UTF-8 | =c/operator             +
           |          |          |         |         | operator=CTc/operator
(3 rows)

Finally, use psql to access the postgres database and submit a query. Sample session:

root@postgresql-k8s-0:/# psql --host=10.1.170.140 --username=operator --password postgres
Password: 
psql (14.9 (Ubuntu 14.9-0ubuntu0.22.04.1))
Type "help" for help.

postgres=# SELECT version();
                                                              version
------------------------------------------------------------------------------------------------------------------------------------
 PostgreSQL 14.9 (Ubuntu 14.9-0ubuntu0.22.04.1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0, 64-bit
(1 row)

Type exit to get back to your unit shell, then exit again to return to your Multipass VM shell.

Configure your PostgreSQL application

The PostgreSQL charm defines many application configuration options. Run juju config postgresql-k8s to find out what they are. Change the profile key to testing. Sample session:

ubuntu@tutorial-vm:~$ juju config postgresql-k8s
# Should show a long list, including the 'profile' option, whose default value is 'production'.
ubuntu@tutorial-vm:~$ juju config postgresql-k8s profile=testing
ubuntu@tutorial-vm:~$ juju config postgresql-k8s
# Should show that the 'profile' option is now set to 'testing'.
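If you change your mind, juju config can also reset an option back to its default:

# Reset the 'profile' option to its default value:
juju config postgresql-k8s --reset profile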


Scale your PostgreSQL application

A database failure can be very costly. Let’s scale our application. Sample session:

ubuntu@tutorial-vm:~$ juju scale-application postgresql-k8s 3
postgresql-k8s scaled to 3 units
# wait a minute for things to settle
ubuntu@tutorial-vm:~$ juju status
Model        Controller  Cloud/Region        Version  SLA          Timestamp
welcome-k8s  microk8s    microk8s/localhost  3.1.6    unsupported  17:31:20+01:00

App                        Version  Status  Scale  Charm                      Channel    Rev  Address         Exposed  Message
postgresql-k8s             14.9     active      3  postgresql-k8s             14/stable  158  10.152.183.108  no       

Unit                          Workload  Agent  Address       Ports  Message
postgresql-k8s/0*             active    idle   10.1.170.140         
postgresql-k8s/1              active    idle   10.1.170.144         
postgresql-k8s/2              active    idle   10.1.170.143         

As you might have guessed, the result of scaling an application is that you have multiple running instances of your application – that is, multiple units.
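Scaling down works through the same command; for example, to go back to a single unit (not something you would do to a production database, of course):

juju scale-application postgresql-k8s 1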

In a production scenario:

You’ll want to make sure that they are also properly distributed over multiple nodes. Our localhost MicroK8s doesn’t allow us to do this (because we only have 1 node), but if you clusterise MicroK8s, you can use it to explore this too!

See more: MicroK8s | Create a multi-node cluster

See more: How to manage applications > Scale

Enable TLS security in your PostgreSQL application

Communication with a database needs to be secure.

Our PostgreSQL charm does not have encrypted traffic built in. However, it has integration endpoints that allow it to integrate with another charm, TLS Certificates Operator (tls-certificates-operator), to acquire TLS-encrypted traffic.

Use the deploy, config, and integrate commands to deploy the TLS Certificates Operator charm, configure it to generate a TLS certificate, and integrate it with your existing PostgreSQL, then run juju status --relations to inspect the result. Sample session:

ubuntu@tutorial-vm:~$ juju deploy tls-certificates-operator
Located charm "tls-certificates-operator" in charm-hub, revision 22
Deploying "tls-certificates-operator" from charm-hub charm "tls-certificates-operator", revision 22 in channel stable on ubuntu@22.04/stable
ubuntu@tutorial-vm:~$ juju config tls-certificates-operator generate-self-signed-certificates="true" ca-common-name="Test CA"
ubuntu@tutorial-vm:~$ juju integrate postgresql-k8s tls-certificates-operator
# wait a minute for things to settle
ubuntu@tutorial-vm:~$ juju status --relations
Model        Controller  Cloud/Region        Version  SLA          Timestamp
welcome-k8s  microk8s    microk8s/localhost  3.1.6    unsupported  17:31:20+01:00

App                        Version  Status  Scale  Charm                      Channel    Rev  Address         Exposed  Message
postgresql-k8s             14.9     active      3  postgresql-k8s             14/stable  158  10.152.183.108  no       
tls-certificates-operator           active      1  tls-certificates-operator  stable      22  10.152.183.53   no       

Unit                          Workload  Agent  Address       Ports  Message
postgresql-k8s/0*             active    idle   10.1.170.140         
postgresql-k8s/1              active    idle   10.1.170.144         
postgresql-k8s/2              active    idle   10.1.170.143         
tls-certificates-operator/0*  active    idle   10.1.170.145         

Integration provider                    Requirer                            Interface                 Type     Message
postgresql-k8s:database-peers           postgresql-k8s:database-peers       postgresql_peers          peer     
postgresql-k8s:restart                  postgresql-k8s:restart              rolling_op                peer     
postgresql-k8s:upgrade                  postgresql-k8s:upgrade              upgrade                   peer     
tls-certificates-operator:certificates  postgresql-k8s:certificates         tls-certificates          regular  
tls-certificates-operator:replicas      tls-certificates-operator:replicas  tls-certificates-replica  peer     

Look around:
  1. Run juju info postgresql-k8s, then juju info tls-certificates-operator. Compare the relations blocks: The postgresql-k8s charm has a requires-type endpoint that can connect to a provides-type endpoint in another charm over the tls-certificates interface, and the tls-certificates-operator charm has a provides-type endpoint that can connect to a requires-type endpoint in another charm over the tls-certificates interface. It is this compatibility between the two charms that makes juju integrate work. Go to the GitHub projects of the two charms (https://github.com/canonical/postgresql-k8s-operator , https://github.com/canonical/manual-tls-certificates-operator) and inspect the metadata.yaml and the src/charm.py files to find out more.


Integrate your PostgreSQL application with Mattermost

Time to give your PostgreSQL application something to serve. Run juju info postgresql-k8s or visit Charmhub | postgresql-k8s – as you can see, it provides an endpoint over the pgsql interface, which allows you to connect it, for example, to a chat service like Mattermost. Run the deploy and integrate commands again to deploy Mattermost and integrate it with PostgreSQL. Sample session:

ubuntu@tutorial-vm:~$ juju deploy mattermost-k8s
Located charm "mattermost-k8s" in charm-hub, revision 26
Deploying "mattermost-k8s" from charm-hub charm "mattermost-k8s", revision 26 in channel stable on ubuntu@20.04/stable
# wait a minute for things to settle
ubuntu@tutorial-vm:~$ juju integrate mattermost-k8s postgresql-k8s:db
# wait a minute for things to settle
ubuntu@tutorial-vm:~$ juju status
Model        Controller  Cloud/Region        Version  SLA          Timestamp
welcome-k8s  microk8s    microk8s/localhost  3.1.6    unsupported  17:44:05+01:00

App                        Version                         Status  Scale  Charm                      Channel    Rev  Address         Exposed  Message
mattermost-k8s             .../mattermost:v7.1.4-20.04...  active      1  mattermost-k8s             stable      26  10.152.183.251  no       
postgresql-k8s             14.9                            active      3  postgresql-k8s             14/stable  158  10.152.183.108  no       
tls-certificates-operator                                  active      1  tls-certificates-operator  stable      22  10.152.183.53   no       

Unit                          Workload  Agent  Address       Ports     Message
mattermost-k8s/0*             active    idle   10.1.170.147  8065/TCP  
postgresql-k8s/0*             active    idle   10.1.170.140            
postgresql-k8s/1              active    idle   10.1.170.144            
postgresql-k8s/2              active    idle   10.1.170.143            
tls-certificates-operator/0*  active    idle   10.1.170.145            

Look around:
  1. Revisit the line we used to integrate Mattermost with PostgreSQL. Do you notice anything special? That’s right, the name of the PostgreSQL charm is followed by :db, a notation used to specify the db endpoint. That is needed because otherwise the command would be ambiguous, as PostgreSQL has 2 endpoints that could connect to Mattermost. Run a quick search through Charmhub | postgresql-k8s to verify; a fully qualified variant of the command is sketched below.
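If you prefer, you can make both sides explicit; a sketch of the fully qualified form, assuming Mattermost’s requirer endpoint is also named db (as its Charmhub page lists):

juju integrate mattermost-k8s:db postgresql-k8s:db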

Now, use the IP address and the port of mattermost-k8s to check that the application is running, using the template below:

curl <IP address>:<port>/api/v4/system/ping

Sample session:

curl 10.1.170.147:8065/api/v4/system/ping

You should see the following:

{"AndroidLatestVersion":"","AndroidMinVersion":"","IosLatestVersion":"","IosMinVersion":"","status":"OK"}

Congratulations, you now have a PostgreSQL that is highly available, TLS-encrypted, and providing useful service to Mattermost!

See more: Charmhub | mattermost-k8s

Cleanup: Destroy your test environment

To clean up and remove every trace of this tutorial, in a shell on your host machine, run multipass delete --purge tutorial-vm, then uninstall Multipass: Linux | macOS | Windows. (On Linux: sudo snap remove multipass.)

Next steps

This tutorial has introduced you to the basic things you can do with Juju. But there is a lot more to explore:

If you are wondering…            visit…
“How do I…?”                     Juju How-to docs
“What is…?”                      Juju Reference docs
“Why…?”, “So what?”              Juju Explanation docs
“How do I build a charm?”        SDK docs
“How do I contribute to Juju?”   Dev docs
