Storage provider


See also: How to manage storage

In Juju, a storage provider refers to the technology used to make storage available to a charm.


Generic storage providers

There are several cloud-independent storage providers, which are available to all types of models:


  • loop: Block-type; creates a file on the unit’s root filesystem and associates a loop device with it. The loop device is provided to the charm. (See also: Wikipedia | Loop device.)

  • rootfs: Filesystem-type; creates a sub-directory on the unit’s root filesystem for the unit/charmed operator to use. Works with Kubernetes models. (See also: The Linux Kernel Archives | ramfs, rootfs and initramfs.)

  • tmpfs: Filesystem-type; creates a temporary file storage facility that appears as a mounted file system but is stored in volatile memory. Works with Kubernetes models. (See also: Wikipedia | Tmpfs.)

Loop devices require extra configuration to be used within LXD. For that, please refer to Loop devices and LXD (below).
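
For instance, a charm that declares a storage requirement can be deployed against any of these generic providers at deploy time. The charm and storage names below are illustrative:

juju deploy postgresql --storage pgdata=tmpfs,1G

Here the unit’s ‘pgdata’ storage is carved out of volatile memory via the ‘tmpfs’ pool.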

Cloud-specific storage providers


Azure

Azure-based models have access to the ‘azure’ storage provider.

The ‘azure’ storage provider has an ‘account-type’ configuration option that accepts one of two values: ‘Standard_LRS’ and ‘Premium_LRS’. These are, respectively, associated with defined Juju pools ‘azure’ and ‘azure-premium’.

Newly-created models configured in this way use “Azure Managed Disks”. See Azure Managed Disks Overview for information on what this entails (in particular, what the difference is between standard and premium disk types).
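
If neither predefined pool fits, a custom pool can be created from the ‘azure’ storage provider. The pool name below is illustrative:

juju create-storage-pool my-premium azure account-type=Premium_LRS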


OpenStack

OpenStack-based models have access to the ‘cinder’ storage provider.

The ‘cinder’ storage provider has a ‘volume-type’ configuration option whose value is the name of any volume type registered with Cinder.
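
For example, assuming a volume type named ‘fast’ has been registered with Cinder, a matching Juju pool could be created with:

juju create-storage-pool cinder-fast cinder volume-type=fast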


AWS

AWS-based models have access to the ‘ebs’ storage provider, which supports the following pool attributes:


  • volume-type: Specifies the EBS volume type to create. You can use either the EBS volume type names, or synonyms defined by Juju (shown in parentheses):

    • standard (magnetic)
    • gp2 (ssd)
    • gp3
    • io1 (provisioned-iops)
    • io2
    • st1 (optimized-hdd)
    • sc1 (cold-storage)

    Juju’s default pool (also called ‘ebs’) uses gp2/ssd as its own default.


  • iops: The number of IOPS for the io1, io2 and gp3 volume types. There are restrictions on minimum and maximum IOPS, as a ratio of the size of volumes. See Provisioned IOPS (SSD) Volumes for more information.


  • encrypted: Boolean (true|false); indicates whether created volumes are encrypted.


  • kms-key-id: The KMS key ARN used to encrypt the disk. Requires encrypted: true to function.


  • throughput: The throughput a gp3 volume is provisioned for, in megabytes per second. Values are passed in the form 1000M or 1G, etc.

For detailed information regarding EBS volume types, see the AWS EBS documentation.
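
As a sketch of how these attributes combine, the following creates a custom EBS pool; the pool name and values are illustrative:

juju create-storage-pool fast-ebs ebs volume-type=io2 iops=3000 encrypted=true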


Google

Google-based models have access to the ‘gce’ storage provider. The GCE provider does not currently have any specific configuration options.


Kubernetes

See also: Persistent storage and Kubernetes

Kubernetes-based models have access to the ‘kubernetes’ storage provider, which supports the following pool attributes:


  • storage-class: The storage class for the Kubernetes cluster to use. It can be any storage class that you have defined, for example:

    • juju-unit-storage
    • juju-charm-storage
    • microk8s-hostpath


  • storage-provisioner: The Kubernetes storage provisioner. For example:

    • kubernetes.io/aws-ebs
    • kubernetes.io/gce-pd

  • parameters: Extra parameters to pass to the storage class. For example:

    • type=gp2
    • type=pd-standard
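
Putting these attributes together, a custom Kubernetes pool might be created as follows; the pool name, storage class and provisioner are illustrative:

juju create-storage-pool k8s-pool kubernetes storage-class=juju-ebs storage-provisioner=kubernetes.io/aws-ebs parameters.type=gp2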


LXD

The regular package archives for Ubuntu 14.04 LTS (Trusty) and Ubuntu 16.04 LTS (Xenial) do not include a version of LXD that has the ‘lxd’ storage provider feature. You will need at least version 2.16. See the Using LXD with Juju page for installation help.

LXD-based models have access to the ‘lxd’ storage provider. The LXD provider has two configuration options:


  • driver: The LXD storage driver (e.g. zfs, btrfs, lvm, ceph).


  • lxd-pool: The name to give to the corresponding storage pool in LXD.

Any other parameters will be passed to LXD (e.g. zfs.pool_name). See upstream LXD storage configuration for LXD storage parameters.
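
For example, a custom Juju pool backed by ZFS could be created like this; the pool names are illustrative, and zfs.pool_name is passed straight through to LXD:

juju create-storage-pool my-zfs lxd driver=zfs lxd-pool=my-zfs-pool zfs.pool_name=tank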

Every LXD-based model comes with a minimum of one LXD-specific Juju storage pool called ‘lxd’. If ZFS and/or BTRFS are present when the controller is created, then pools ‘lxd-zfs’ and/or ‘lxd-btrfs’ will also be available. The following output from the juju storage-pools command shows all three Juju LXD-specific pools:

Name       Provider  Attributes
loop       loop
lxd        lxd
lxd-btrfs  lxd       driver=btrfs lxd-pool=juju-btrfs
lxd-zfs    lxd       driver=zfs lxd-pool=juju-zfs zfs.pool_name=juju-lxd
rootfs     rootfs
tmpfs      tmpfs

As can be inferred from the above output, for each Juju storage pool based on the ‘lxd’ storage provider, an LXD storage pool gets created. It is these LXD pools that house the actual volumes.

The LXD pool corresponding to the Juju ‘lxd’ pool doesn’t get created until the latter is used for the first time (typically via the juju deploy command). It is called simply ‘juju’.

The command lxc storage list is used to list LXD storage pools. A full “contingent” of LXD non-custom storage pools would look like this:

|    NAME    | DESCRIPTION | DRIVER |               SOURCE               | USED BY |
| default    |             | dir    | /var/lib/lxd/storage-pools/default | 1       |
| juju       |             | dir    | /var/lib/lxd/storage-pools/juju    | 0       |
| juju-btrfs |             | btrfs  | /var/lib/lxd/disks/juju-btrfs.img  | 0       |
| juju-zfs   |             | zfs    | /var/lib/lxd/disks/juju-zfs.img    | 0       |

The three Juju-related pools above are for storing volumes that Juju applications can use. The fourth ‘default’ pool is the standard LXD storage pool where the actual containers (operating systems) live.

To deploy an application, refer to the pool as usual. Here we deploy PostgreSQL using the ‘lxd’ Juju storage pool, which, in turn, uses the ‘juju’ LXD storage pool:

juju deploy postgresql --storage pgdata=lxd,8G

See Using LXD with Juju for how to use LXD in conjunction with Juju, including the use of ZFS as an alternative filesystem.

Loop devices and LXD

LXD (localhost) does not officially support attaching loopback devices for storage out of the box. However, with some configuration you can make this work.

Each container uses the ‘default’ LXD profile, but also uses a model-specific profile with the name juju-<model-name>. Editing a profile will affect all of the containers using it, so you can add loop devices to all LXD containers by editing the ‘default’ profile, or you can scope it to a model.

To add loop devices to your container, add entries to the ‘default’, or model-specific, profile, with lxc profile edit <profile>:

    devices:
      loop-control:
        major: "10"
        minor: "237"
        path: /dev/loop-control
        type: unix-char
      loop0:
        major: "7"
        minor: "0"
        path: /dev/loop0
        type: unix-block
      loop1:
        major: "7"
        minor: "1"
        path: /dev/loop1
        type: unix-block
      # ...continue similarly for loop2 through loop8...
      loop9:
        major: "7"
        minor: "9"
        path: /dev/loop9
        type: unix-block

Doing so will expose the loop devices so the container can acquire them via the losetup command. However, this alone is not sufficient for the container to mount filesystems onto those devices. One way to achieve that is to make the container “privileged” by adding:

  security.privileged: "true"
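
The same setting can also be applied from the host without opening an editor; the model name ‘mymodel’ below is illustrative:

lxc profile set juju-mymodel security.privileged true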


MAAS

MAAS has support for discovering information about machine disks, and an API for acquiring nodes with specified disk parameters. Juju’s MAAS provider has an integrated ‘maas’ storage provider. This storage provider is static-only; it is only possible to deploy charmed operators using ‘maas’ storage to a new machine in MAAS, and not to an existing machine, as described in the section on dynamic storage.

The MAAS provider currently has a single configuration attribute:


  • tags: A comma-separated list of tags to match on the disks in MAAS. For example, you might tag some disks as ‘fast’; you can then create a storage pool in Juju that will draw from the disks with those tags.
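
Continuing that example, a pool that draws only from disks tagged ‘fast’ could be created with (the pool name is illustrative):

juju create-storage-pool maas-fast maas tags=fast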


Oracle

Oracle-based models have access to the ‘oracle’ storage provider. The Oracle provider currently supports a single pool configuration attribute:


  • volume-type: Volume type, a value of ‘default’ or ‘latency’. Use ‘latency’ for low-latency, high-IOPS requirements, and ‘default’ otherwise.

    For convenience, the Oracle provider registers two predefined pools:

    • ‘oracle’ (volume type is ‘default’)
    • ‘oracle-latency’ (volume type is ‘latency’).
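
For example, to deploy using the predefined low-latency pool (the charm and storage names mirror the PostgreSQL example used earlier on this page):

juju deploy postgresql --storage pgdata=oracle-latency,10G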

Contributors (starting with Jan 2024): @hpidcock
