How to write interface tests

Suppose you have an interface specification in charm-relation-interfaces, or you are working on one, and you want to add interface tests. These are the steps you need to take.

We will continue with the running example from the previous how-to guide, How to register an interface. Your starting setup should look like this:

$ tree ./interfaces/my_fancy_database     
./interfaces/my_fancy_database            
└── v0                                    
    ├── interface.yaml                       
    ├── interface_tests                   
    ├── README.md                         
    └── schema.py                         
                                          
2 directories, 3 files                    

Write the tests

Create the test module

Add a file to the interface_tests directory called test_provider.py.

touch ./interfaces/my_fancy_database/interface_tests/test_provider.py

Write a test for the ‘negative’ path

Write the following code to test_provider.py:

from interface_tester import Tester
from scenario import State, Relation


def test_nothing_happens_if_remote_empty():
    # GIVEN that the remote end has not published any tables
    t = Tester(
        State(
            leader=True,
            relations=[
                Relation(
                    endpoint="my-fancy-database",  # the name doesn't matter
                    interface="my_fancy_database",
                )
            ],
        )
    )
    # WHEN the database charm receives a relation-joined event
    state_out = t.run("my-fancy-database-relation-joined")
    # THEN no data is published to the (local) databags
    t.assert_relation_data_empty()

This test verifies part of the ‘negative’ path: if the remote end has not (yet) complied with its part of the contract, then our side publishes nothing either.

Write a test for the ‘positive’ path

Append the following code to test_provider.py:

import json

from interface_tester import Tester
from scenario import State, Relation


def test_contract_happy_path():
    # GIVEN that the remote end has requested tables in the right format
    tables_json = json.dumps(["users", "passwords"])
    t = Tester(
        State(
            leader=True,
            relations=[
                Relation(
                    endpoint="my-fancy-database",  # the name doesn't matter
                    interface="my_fancy_database",
                    remote_app_data={"tables": tables_json},
                )
            ],
        )
    )
    # WHEN the database charm receives a relation-changed event
    state_out = t.run("my-fancy-database-relation-changed")
    # THEN the schema is satisfied (the database charm published all required fields)
    t.assert_schema_valid()

This test verifies that the databags of the ‘my-fancy-database’ relation are valid according to the pydantic schema you have specified in schema.py.

To check that things work as they should, run the following from the charm-relation-interfaces root:

interface_tester discover --include my_fancy_database

Note that interface_tester was installed in the previous how-to guide, How to register an interface. If you haven’t installed it yet, run: pip install pytest-interface-tester

You should see:

- my_fancy_database:
  - v0:
   - provider:
       - test_contract_happy_path
       - test_nothing_happens_if_remote_empty
     - schema OK
     - charms:
       - my_fancy_database_charm (https://github.com/your-github-slug/my-fancy-database-operator) custom_test_setup=no
   - requirer:
     - <no tests>
     - schema OK
     - <no charms>

In particular, pay attention to the provider field. If it says <no tests> then there is something wrong with your setup, and the collector isn’t able to find your test or identify it as a valid test.

Similarly, you can add tests for the requirer in ./interfaces/my_fancy_database/v0/interface_tests/test_requirer.py. Don’t forget to add the name and URL of the charm to the “requirers” section of interface.yaml; see the “Edit interface.yaml” section of the previous how-to guide, How to register an interface, for details on editing interface.yaml.

Merge in charm-relation-interfaces

You are ready to merge these files into the charm-relation-interfaces repository. Open a PR and drive it to completion.

Prepare the charm

In order to be testable by charm-relation-interfaces, the charm needs to expose and configure a fixture.

This is because the fancy-database interface specification is only supported if the charm is well-configured and has leadership, since it will need to publish data to the application databag. Also, interface tests are Scenario tests and as such they are mock-based: there is no cloud substrate running, no Juju, no real charm unit in the background. So you need to patch out all calls that cannot be mocked by Scenario, as well as provide enough mocks through State so that the charm is ‘ready’ to support the interface you are testing.

Go to the Fancy Database charm repository root.

cd path/to/my-fancy-database-operator

Create a conftest.py file under tests/interface:

mkdir ./tests/interface
touch ./tests/interface/conftest.py

Write in conftest.py:

import pytest
from charm import MyFancyDatabaseCharm
from interface_tester import InterfaceTester
from scenario.state import State


@pytest.fixture
def interface_tester(interface_tester: InterfaceTester):
    interface_tester.configure(
        charm_type=MyFancyDatabaseCharm,
        state_template=State(
            leader=True,  # we need leadership
        ),
    )
    # this fixture needs to yield (NOT RETURN!) interface_tester again
    yield interface_tester

This fixture overrides a fixture of the same name that ships with pytest-interface-tester.

You can configure the fixture name, as well as its location, but that needs to happen in the charm-relation-interfaces repo.
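For example, a sketch of what the entry for your charm in interface.yaml could look like with a custom test setup (the test_setup keys below are an assumption; check the charm-relation-interfaces documentation for the exact schema):

```yaml
providers:
  - name: my-fancy-database-operator
    url: https://github.com/your-github-slug/my-fancy-database-operator
    test_setup:
      location: tests/interface/conftest.py
      identifier: interface_tester
```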

Verifying the interface_tester configuration

To verify that the fixture is good enough to pass the interface tests, run the run_matrix.py script from the charm-relation-interfaces repo:

cd path/to/charm-relation-interfaces
python run_matrix.py --include my_fancy_database

Unless you have already merged the interface tests PR into charm-relation-interfaces, this will fail with an error message saying that it cannot collect the tests for the interface: by default, pytest-interface-tester looks for tests in the main branch of the canonical/charm-relation-interfaces repo.

To run tests with a branch in your forked repo, run:

cd path/to/my-forked/charm-relation-interfaces
python run_matrix.py --include my_fancy_database --repo https://github.com/your-github-slug/charm-relation-interfaces --branch my-fancy-database

In the above command, remember to replace your-github-slug with your own slug, adjust the repo name if you have renamed the fork, and replace the my-fancy-database branch name with the branch that contains your tests.

Now the tests should be collected and executed. You should get similar output to the following:

INFO:root:Running tests for interface: my_fancy_database
INFO:root:Running tests for version: v0
INFO:root:Running tests for role: provider

...

+++ Results +++
{
  "my_fancy_database": {
    "v0": {
      "provider": {
        "my-fancy-database-operator": true
      },
      "requirer": {
        "my-fancy-database-operator": true
      }
    }
  }
}

For reference, here is an example of a bare-minimum my-fancy-database-operator charm that makes the tests pass. In the charm, application relation data and unit relation data are set according to our definition (see the beginning of the previous how-to guide, How to register an interface).

Troubleshooting and debugging the tests

Your charm is missing some configuration/mocks

The solution is to add the missing mocks/patches to the interface_tester fixture in conftest.py. Essentially, you need to make the charm runtime ‘think’ that everything is normal and ready to process and accept the interface you are testing. This may mean mocking the presence and connectivity of a container, system calls, substrate API calls, and more. If you already have Scenario or unit tests in your codebase, you most likely have all the necessary patches scattered around; it’s just a matter of collecting them.
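As a self-contained sketch of the patching pattern (all names here are hypothetical, not part of interface_tester): if a charm handler calls out to a real workload, patch that call for the duration of the test, exactly as you would around interface_tester.configure(...) and the yield in conftest.py:

```python
from unittest.mock import patch


class DatabaseClient:
    """Hypothetical workload client a charm might use."""

    def ping(self) -> bool:
        raise RuntimeError("no real database is running in interface tests")


def handler(client: DatabaseClient) -> str:
    # a charm handler would typically defer or bail out
    # if the workload is unreachable
    return "ready" if client.ping() else "waiting"


# Patch the unmockable call out, as you would inside the interface_tester
# fixture in conftest.py (wrapping configure() and yield in this block).
with patch.object(DatabaseClient, "ping", return_value=True):
    status = handler(DatabaseClient())

print(status)  # ready
```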

Contributors: @ppasotti @ironcore864
