Get started with Juju

Imagine your business needs a chat service such as Mattermost backed by a database such as PostgreSQL. In a traditional setup, this can be quite a challenge, but with Juju you’ll find yourself deploying, configuring, scaling, and integrating applications in no time. Let’s get started!

The tutorial will take about one hour to complete.

At any point, to give feedback or ask for help: Get in touch on Matrix or Discourse!


What you’ll need:

  • A workstation, e.g., a laptop, that has sufficient resources to launch a virtual machine with 4 CPUs, 8 GB RAM (4 GB seems to work as well), and 30 GB disk space.

What you’ll do:

Set things up

On your workstation, spin up an isolated test environment, create a suitable local Kubernetes cloud, install the juju CLI client, add your cloud definition and your cloud credentials to Juju, bootstrap a Juju controller (control plane) into the cloud, and add a Juju model (workspace) on the controller. Whether you are on Linux, macOS, or Windows, you can achieve all of these in two easy steps by installing Multipass and then using it to launch a charm-dev blueprinted (i.e., Juju-ready) Ubuntu VM.

See more: Set up your test environment automatically

  • At the VM launch step, call your VM tutorial-vm.
  • At the verify step, choose the MicroK8s path and switch to the welcome-k8s model.
  • For best results: We strongly recommend that you follow the automatic path as instructed above. However:

    • If you decide to take the manual path:
      We strongly recommend you include the Multipass step.

      • If you decide to set things up directly on your machine:
        Make sure to stay close to the manual path and set things up correctly (i.e., enable the correct addons for MicroK8s and provide for the fact that starting with version 3, Juju is a strictly confined snap).
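For reference, the automatic path boils down to roughly the following commands (a sketch; on older Multipass releases the --memory flag is spelled --mem):

sudo snap install multipass   # on macOS/Windows, use the official installer instead
multipass launch --cpus 4 --memory 8G --disk 30G --name tutorial-vm charm-dev
multipass shell tutorial-vm   # open a shell inside the VM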

Look around

1. Learn more about your MicroK8s cloud.
1a. Find out more about its snap: snap info microk8s.
1b. Find out the installed version: microk8s version.
1c. Check its enabled addons: microk8s status.
1d. Inspect its .kube/config file: cat ~/.kube/config.
1e. Try microk8s kubectl; you won’t need it once you have Juju, but it’s there anyway.

2. Learn more about juju.
2a. Find out more about its snap: snap info juju.
2b. Find out the installed version: juju version.
2c. Quickly preview all the commands: juju help commands.
2d. Filter by keyword: Use juju help commands | grep <keyword> to get a quick sense of the commands related to a particular keyword (e.g., “secret”). Try juju help commands | grep -v Alias to exclude any aliases.
2e. Find out more about a specific command: juju help <command>.
2f. Inspect the files on your workstation associated with the client: ls ~/.local/share/juju.
2g. Learn about other Juju clients: Client.

3. Learn more about your cloud definition and credentials in Juju.
3a. Find out more about the Juju notion of a cloud: Cloud.
3b. Find out all the clouds whose definitions your client has already: juju clouds, juju clouds --all.
3c. Find out about other clouds supported by Juju: List of supported clouds.
3d. Take a look at how Juju has defined your MicroK8s cloud: juju show-cloud microk8s, juju credentials, juju show-credential microk8s microk8s --show-secrets. Warning: In Juju, the term ‘credential’ is always about access to a cloud.
3e. Revisit the output for juju clouds or juju credentials. Notice the classification into client vs. controller. All this classification does is keep track of who is aware of a given cloud definition / credential – the client, the controller, or both. However, this simple distinction has important implications – can you guess which? You can use the same controller to run workloads on multiple clouds, and you can choose which cloud account it should use.
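For example, you can scope these listings explicitly (flags as in Juju 3.x):

juju clouds --client               # only the definitions your client holds
juju clouds --controller microk8s  # only what the 'microk8s' controller knows
juju credentials --client          # the same idea, for credentials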

4. Learn more about Juju controllers.
4a. Find out all the controllers that your client is aware of already: juju controllers. Switch to the LXD cloud controller, then back: juju switch lxd, juju switch microk8s. Get more detail on each controller: juju show-controller <controller name>. Take a sneak peek at their current configuration: cat ~/.local/share/juju/bootstrap-config.yaml.
4b. Revisit the output for juju controllers. Note the User and Access columns. In Juju, a user is any person able to at least log in to a Juju controller. Run juju whoami, then juju show-user admin – as you can see, your user is called admin and has superuser access to the controller.

5. Learn more about Juju models, applications, units.
5a. Find out all the models on your microk8s controller: juju models.
5b. Find out more about your welcome-k8s model: juju show-model, juju status -m microk8s:welcome-k8s. What do you think a model is? A model is a logical abstraction. It denotes a workspace, a canvas where you deploy, integrate, and manage applications. On a Kubernetes cloud, a Juju model corresponds to a Kubernetes namespace. Run microk8s kubectl get namespaces to verify – the output should show a namespace called welcome-k8s, for your welcome-k8s model, and also a namespace called controller-microk8s, for your controller model.
5c. Try to guess: What is the controller model about? Switch to it and check: juju switch microk8s:controller, then juju status. When you bootstrap a controller into a cloud, this by default creates the controller model and deploys to it the juju-controller charm, whose units (=running instances of a charm) form the controller application. Find out more about the controller charm: juju info juju-controller or Charmhub | juju-controller. Find out more about the controller application: juju show-application controller. SSH into a controller application unit: juju ssh controller/0, then poke around using ls, cd, and cat (type exit to exit the unit). On a Kubernetes cloud, a Juju unit corresponds to a pod: microk8s kubectl -n controller-microk8s get pods should show a controller-0 pod, which is the Kubernetes pod corresponding to the controller/0 unit.
5d. Switch back to the welcome-k8s model. Tip: When you’re on the same controller, you can skip the controller prefix when you specify the model to switch to.
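For instance, both of these land you on the same model while the microk8s controller is current:

juju switch microk8s:welcome-k8s   # fully qualified <controller>:<model>
juju switch welcome-k8s            # short form, resolved against the current controller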



Your computer with your Multipass VM, your MicroK8s cloud, and a live Juju controller on it (the ‘charm’ bit is the juju-controller charm).

See more: Multipass (or run snap info multipass for a quick preview), MicroK8s, How to install and manage the client, How to manage clouds, How to manage credentials, How to manage controllers, How to manage models, How to manage charms, How to manage applications, How to manage units

Watch Juju transform your operations game

Let’s deploy a Mattermost chat service backed by a PostgreSQL database with TLS-encrypted traffic!

Before you begin:

In two additional terminal windows, access your Multipass VM shell (multipass shell tutorial-vm) and, respectively, run the following (see the sketch after this list):

  1. juju status --watch 1s to watch your deployment status evolve. (Things are all right if your App Status and your Unit - Workload reach active and your Unit - Agent reaches idle. See more: Status.)

  2. juju debug-log to watch all the details behind your deployment status. (Especially useful when things don’t evolve as expected. In that case, please file a bug on Launchpad.)
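Concretely, the two extra windows look like this:

# Terminal 2: live status
multipass shell tutorial-vm
juju status --watch 1s

# Terminal 3: live logs
multipass shell tutorial-vm
juju debug-log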


Deploy the PostgreSQL charm

In your Multipass shell, run juju switch microk8s:welcome-k8s to ensure you are on your MicroK8s workload model.

Deploy the PostgreSQL charm for Kubernetes as below (--channel selects the charm release track to deploy from; --trust grants the charm the cluster access it needs to manage Kubernetes resources):

juju deploy postgresql-k8s --channel 14/stable --trust

In your two additional terminal windows with juju status --watch 1s and juju debug-log, watch your deployment come to life. When it’s done, in your main shell, run juju status to view the result statically. Sample successful output:

ubuntu@tutorial-vm:~$ juju status
Model        Controller  Cloud/Region        Version  SLA          Timestamp
welcome-k8s  microk8s    microk8s/localhost  3.1.6    unsupported  15:21:10+01:00

App                        Version  Status  Scale  Charm                      Channel    Rev  Address         Exposed  Message
postgresql-k8s             14.9     active      1  postgresql-k8s             14/stable  158  10.152.183.108  no       

Unit                          Workload  Agent  Address       Ports  Message
postgresql-k8s/0*             active    idle   10.1.170.140         

Congratulations, your PostgreSQL charm has deployed, and your PostgreSQL application is up and running with one unit!


Look around
  1. Run juju show-unit postgresql-k8s/0 to examine the unit.
  2. Run juju storage to get further information about the storage associated with the unit.

See more: Charmhub | postgresql-k8s

Access your PostgreSQL application

First, get:

  • the host IP address of the PostgreSQL unit: retrieve it from juju status or juju show-unit (in the sample outputs above, 10.1.170.140);

  • a PostgreSQL username and password: we can use the internal, default user called operator and set a password for it using the set-password action. Sample session:

juju run postgresql-k8s/leader set-password username=operator password=mysecretpass
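Alternatively, you can ask the charm for the password it generated at deploy time; a sketch, assuming the get-password action documented for this charm:

juju run postgresql-k8s/leader get-password username=operator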


Now, use this information to access the PostgreSQL application:

First, ssh into the PostgreSQL unit (= Kubernetes container). Sample session:

ubuntu@tutorial-vm:~$ juju ssh --container postgresql postgresql-k8s/leader bash
root@postgresql-k8s-0:/#

Verify that psql is already installed. Sample session:

root@postgresql-k8s-0:/# psql --version
psql (PostgreSQL) 14.9 (Ubuntu 14.9-0ubuntu0.22.04.1)

Use psql to view a list of the existing databases. Sample session (make sure to use your own host and password):

root@postgresql-k8s-0:/# psql --host=10.1.170.140 --username=operator --password --list
Password: 
                               List of databases
   Name    |  Owner   | Encoding | Collate |  Ctype  |    Access privileges     
-----------+----------+----------+---------+---------+--------------------------
 postgres  | operator | UTF8     | C       | C.UTF-8 | operator=CTc/operator   +
           |          |          |         |         | backup=CTc/operator     +
           |          |          |         |         | replication=CTc/operator+
           |          |          |         |         | rewind=CTc/operator     +
           |          |          |         |         | monitoring=CTc/operator +
           |          |          |         |         | admin=c/operator
 template0 | operator | UTF8     | C       | C.UTF-8 | =c/operator             +
           |          |          |         |         | operator=CTc/operator
 template1 | operator | UTF8     | C       | C.UTF-8 | =c/operator             +
           |          |          |         |         | operator=CTc/operator
(3 rows)

Finally, use psql to access the postgres database and submit a query. Sample session:

root@postgresql-k8s-0:/# psql --host=10.1.170.140 --username=operator --password postgres
Password: 
psql (14.9 (Ubuntu 14.9-0ubuntu0.22.04.1))
Type "help" for help.

postgres=# SELECT version();
                                       version
------------------------------------------------------------------------------------
 PostgreSQL 14.9 (Ubuntu 14.9-0ubuntu0.22.04.1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0, 64-bit
(1 row)
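While you’re connected, you can treat it like any stock PostgreSQL instance; for example, a throwaway (hypothetical) table, just to prove you can write to it:

postgres=# CREATE TABLE greetings (id SERIAL PRIMARY KEY, message TEXT);
CREATE TABLE
postgres=# INSERT INTO greetings (message) VALUES ('hello, juju');
INSERT 0 1
postgres=# SELECT message FROM greetings;
   message
-------------
 hello, juju
(1 row)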

Type exit to get back to your unit shell and then again to return to your Multipass VM shell.

Configure your PostgreSQL application

The PostgreSQL charm defines many application configuration options. Run juju config postgresql-k8s to find out what they are. Change the profile key to testing. Sample session:

ubuntu@tutorial-vm:~$ juju config postgresql-k8s
# Should show a long list, including the 'profile' option, whose default value is 'production'.
ubuntu@tutorial-vm:~$ juju config postgresql-k8s profile=testing
ubuntu@tutorial-vm:~$ juju config postgresql-k8s
# Should show that the 'profile' option is now set to 'testing'.
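Two related forms are worth knowing: you can read a single key, and you can reset one to its default:

juju config postgresql-k8s profile          # print just the 'profile' value
juju config postgresql-k8s --reset profile  # restore the default ('production')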


Scale your PostgreSQL application

A database failure can be very costly. Let’s scale our application. Sample session:

ubuntu@tutorial-vm:~$ juju scale-application postgresql-k8s 3
postgresql-k8s scaled to 3 units
# wait a minute for things to settle down, then check the result:
ubuntu@tutorial-vm:~$ juju status
Model        Controller  Cloud/Region        Version  SLA          Timestamp
welcome-k8s  microk8s    microk8s/localhost  3.1.6    unsupported  17:31:20+01:00

App                        Version  Status  Scale  Charm                      Channel    Rev  Address         Exposed  Message
postgresql-k8s             14.9     active      3  postgresql-k8s             14/stable  158  10.152.183.108  no       

Unit                          Workload  Agent  Address       Ports  Message
postgresql-k8s/0*             active    idle   10.1.170.140         
postgresql-k8s/1              active    idle   10.1.170.144         
postgresql-k8s/2              active    idle   10.1.170.143         

As you might have guessed, the result of scaling an application is that you have multiple running instances of your application – that is, multiple units.

In a production scenario:

You’ll want to make sure that they are also properly distributed over multiple nodes. Our localhost MicroK8s doesn’t allow us to do this (we only have one node), but if you clusterise MicroK8s, you can use it to explore this too!
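In the meantime, you can see the placement for yourself: on Kubernetes, each unit is a pod in the model’s namespace, so the NODE column below will show all three pods on the same (only) node:

microk8s kubectl -n welcome-k8s get pods -o wide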

See more: MicroK8s | Create a multi-node cluster

See more: How to manage applications > Scale

Enable TLS security in your PostgreSQL application

Communication with a database needs to be secure.

Our PostgreSQL charm does not have encrypted traffic built in. However, it has integration endpoints that allow it to integrate with another charm, TLS Certificates Operator (tls-certificates-operator), to acquire TLS-encrypted traffic.

Use the deploy, config, and integrate commands to deploy the TLS Certificates Operator charm, configure it to generate a TLS certificate, and integrate it with your existing PostgreSQL, then run juju status --relations to inspect the result. Sample session:

ubuntu@tutorial-vm:~$ juju deploy tls-certificates-operator
Located charm "tls-certificates-operator" in charm-hub, revision 22
Deploying "tls-certificates-operator" from charm-hub charm "tls-certificates-operator", revision 22 in channel stable on ubuntu@22.04/stable
ubuntu@tutorial-vm:~$ juju config tls-certificates-operator generate-self-signed-certificates="true" ca-common-name="Test CA"
ubuntu@tutorial-vm:~$ juju integrate postgresql-k8s tls-certificates-operator
# wait a minute for things to settle
ubuntu@tutorial-vm:~$ juju status --relations
Model        Controller  Cloud/Region        Version  SLA          Timestamp
welcome-k8s  microk8s    microk8s/localhost  3.1.6    unsupported  17:31:20+01:00

App                        Version  Status  Scale  Charm                      Channel    Rev  Address         Exposed  Message
postgresql-k8s             14.9     active      3  postgresql-k8s             14/stable  158  10.152.183.108  no       
tls-certificates-operator           active      1  tls-certificates-operator  stable      22  10.152.183.53   no       

Unit                          Workload  Agent  Address       Ports  Message
postgresql-k8s/0*             active    idle   10.1.170.140         
postgresql-k8s/1              active    idle   10.1.170.144         
postgresql-k8s/2              active    idle   10.1.170.143         
tls-certificates-operator/0*  active    idle   10.1.170.145         

Integration provider                    Requirer                            Interface                 Type     Message
postgresql-k8s:database-peers           postgresql-k8s:database-peers       postgresql_peers          peer     
postgresql-k8s:restart                  postgresql-k8s:restart              rolling_op                peer     
postgresql-k8s:upgrade                  postgresql-k8s:upgrade              upgrade                   peer     
tls-certificates-operator:certificates  postgresql-k8s:certificates         tls-certificates          regular  
tls-certificates-operator:replicas      tls-certificates-operator:replicas  tls-certificates-replica  peer     

Look around
  1. Run juju info postgresql-k8s, then juju info tls-certificates-operator. Compare the relations block: postgresql-k8s has a requires-type endpoint that can connect to a provides-type endpoint in another charm over the tls-certificates interface, and tls-certificates-operator has a provides-type endpoint that can connect to a requires-type endpoint in another charm over the same interface. It is this compatibility between the two charms that makes juju integrate work. Go to the GitHub projects of the two charms (https://github.com/canonical/postgresql-k8s-operator , https://github.com/canonical/manual-tls-certificates-operator) and inspect the metadata.yaml and the src/charm.py files to find out more.
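Schematically, the matching endpoint declarations look something like this in the two charms’ metadata (an illustrative excerpt, not verbatim; the endpoint names and interface match what juju status --relations showed above):

# postgresql-k8s (requirer side)
requires:
  certificates:
    interface: tls-certificates

# tls-certificates-operator (provider side)
provides:
  certificates:
    interface: tls-certificates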


Integrate your PostgreSQL application with Mattermost

Time to give your PostgreSQL application something to serve. Run juju info postgresql-k8s or visit Charmhub | postgresql-k8s – as you can see, it provides an endpoint for the pgsql interface, and that interface allows you to connect, for example, to a chat service like Mattermost. Run the deploy and integrate commands again to deploy Mattermost and integrate it with PostgreSQL. Sample session:

ubuntu@tutorial-vm:~$ juju deploy mattermost-k8s
Located charm "mattermost-k8s" in charm-hub, revision 26
Deploying "mattermost-k8s" from charm-hub charm "mattermost-k8s", revision 26 in channel stable on ubuntu@20.04/stable
# wait a minute for things to settle
ubuntu@tutorial-vm:~$ juju integrate mattermost-k8s postgresql-k8s:db
# wait a minute for things to settle
ubuntu@tutorial-vm:~$ juju status
Model        Controller  Cloud/Region        Version  SLA          Timestamp
welcome-k8s  microk8s    microk8s/localhost  3.1.6    unsupported  17:44:05+01:00

App                        Version                         Status  Scale  Charm                      Channel    Rev  Address         Exposed  Message
mattermost-k8s             .../mattermost:v7.1.4-20.04...  active      1  mattermost-k8s             stable      26  10.152.183.251  no       
postgresql-k8s             14.9                            active      3  postgresql-k8s             14/stable  158  10.152.183.108  no       
tls-certificates-operator                                  active      1  tls-certificates-operator  stable      22  10.152.183.53   no       

Unit                          Workload  Agent  Address       Ports     Message
mattermost-k8s/0*             active    idle   10.1.170.147  8065/TCP  
postgresql-k8s/0*             active    idle   10.1.170.140            
postgresql-k8s/1              active    idle   10.1.170.144            
postgresql-k8s/2              active    idle   10.1.170.143            
tls-certificates-operator/0*  active    idle   10.1.170.145            


Look around
  1. Revisit the line we used to integrate Mattermost with PostgreSQL. Do you notice anything special? That’s right, the name of the PostgreSQL charm is followed by :db, a notation used to specify the db endpoint. That is needed because otherwise the command would be ambiguous, as PostgreSQL has two endpoints that could connect to Mattermost. Run a quick search through Charmhub | postgresql-k8s to verify.
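In other words:

juju integrate mattermost-k8s postgresql-k8s:db   # explicit endpoint: unambiguous
juju integrate mattermost-k8s postgresql-k8s      # would be rejected as ambiguous here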

Now, use the IP address and the port of mattermost-k8s to check that the application is running, following the template below:

curl <IP address>:<port>/api/v4/system/ping

Sample session:

curl 10.1.170.147:8065/api/v4/system/ping

You should see the following:

{"AndroidLatestVersion":"","AndroidMinVersion":"","IosLatestVersion":"","IosMinVersion":"","status":"OK"}

Congratulations, you now have a PostgreSQL that is highly available, TLS-encrypted, and providing useful service to Mattermost!

See more: Charmhub | mattermost-k8s


Your computer with your Multipass VM, your MicroK8s cloud, and a live Juju controller (the ‘charm’ in the Controller Unit is the juju-controller charm), plus a sample deployed application (the ‘charm’ in the Regular Unit stands for any charm that you might deploy). In the Regular Model box: replace ‘charm’ in the Regular Unit box with postgresql-k8s and you get the result of juju deploy postgresql-k8s; add 2 more identical units and you get the result of juju scale-application postgresql-k8s 3; add 2 more units with ‘charm’ replaced by tls-certificates-operator and mattermost-k8s, respectively, and you get the result of juju deploy tls-certificates-operator and juju deploy mattermost-k8s; finally, imagine the path from each Regular Unit’s Unit Agent to the Controller Agent and to the charm being used to create a connection between the applications whose units they are, and you get the result of juju integrate postgresql-k8s tls-certificates-operator and of juju integrate mattermost-k8s postgresql-k8s:db. (After integration, the workloads may also know how to contact each other directly; still, all communication between their respective charms goes through the Juju controller, and the result of that communication is stored in its database in the form of maps known as ‘relation data bags’.)

Tear things down

To tear things down, remove your entire Multipass Ubuntu VM, then uninstall Multipass:
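A sketch, for a snap-installed Multipass on Linux:

multipass delete --purge tutorial-vm   # remove the VM and reclaim its disk space
sudo snap remove multipass             # on macOS/Windows, use the official uninstaller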

See more: How to tear down your test environment automatically

Next steps

This tutorial has introduced you to the basic things you can do with Juju. But there is a lot more to explore:

If you are wondering…            visit…
“How do I…?”                     Juju How-to docs
“What is…?”                      Juju Reference docs
“Why…?”, “So what?”              Juju Explanation docs
“How do I build a charm?”        SDK docs
“How do I contribute to Juju?”   Dev docs

Contributors (starting with November 2023): @houz42, @hpidcock, @kayrag2, @manadart, @mrbarco, @nsakkos, @ppasotti, @selcem, @thp, @tmihoc
