How to write interface tests

Suppose you have an interface specification in charm-relation-interfaces, or you are working on one, and you want to add interface tests. These are the steps you need to take.

We will continue with the running example from the previous how-to, How to register an interface. Your starting setup should look like this:

$ tree ./interfaces/my_fancy_database     
./interfaces/my_fancy_database            
└── v0                                    
    ├── charms.yaml                       
    ├── interface_tests                   
    ├── README.md                         
    └── schema.py                         
                                          
2 directories, 3 files                    
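For reference, the schema.py you wrote in the previous how-to is what the interface tests below will validate databags against. A minimal sketch of what it might look like is shown here; the provider-side field names (access_point, secret_id) are illustrative assumptions, while the requirer-side tables field is the one the tests in this guide rely on.

# ./interfaces/my_fancy_database/v0/schema.py (sketch)
from typing import List

from pydantic import BaseModel, Json

from interface_tester.schema_base import DataBagSchema

class MyFancyDatabaseProviderAppData(BaseModel):
    # hypothetical fields published by the database charm
    access_point: str
    secret_id: str

class ProviderSchema(DataBagSchema):
    app: MyFancyDatabaseProviderAppData

class MyFancyDatabaseRequirerAppData(BaseModel):
    # the requirer asks for tables by publishing a JSON-encoded list of names
    tables: Json[List[str]]

class RequirerSchema(DataBagSchema):
    app: MyFancyDatabaseRequirerAppData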

Write the tests

Create the test module

Add a file to the interface_tests directory called test_provider.py.

touch ./interfaces/my_fancy_database/interface_tests/test_provider.py

Write a test for the ‘negative’ path

Write the following code to test_provider.py:

from interface_tester import Tester
from scenario import State, Relation

def test_nothing_happens_if_remote_empty():
    # GIVEN that the remote end has not published any tables
    t = Tester(
        State(leader=True,
              relations=[
                  Relation(
                      endpoint="my-fancy-database",  # the name doesn't matter
                      interface="my_fancy_database",
                  )
              ]
              )
    )
    # WHEN the database charm receives a relation-joined event
    state_out = t.run("my-fancy-database-relation-joined")
    # THEN no data is published to the (local) databags
    t.assert_relation_data_empty()

This test verifies part of a ‘negative’ path: it verifies that if the remote end has not (yet) complied with its part of the contract, then our side does not publish anything either.

Write a test for the ‘positive’ path

Append the following code to test_provider.py:

import json
from interface_tester import Tester
from scenario import State, Relation

def test_contract_happy_path():
    # GIVEN that the remote end has requested tables in the right format
    tables_json = json.dumps(["users", "passwords"])
    t = Tester(
        State(leader=True,
              relations=[
                  Relation(
                      endpoint="my-fancy-database",  # the name doesn't matter
                      interface="my_fancy_database",
                      remote_app_data={"tables": tables_json},
                  )
              ]
              )
    )
    # WHEN the database charm receives a relation-changed event
    state_out = t.run("my-fancy-database-relation-changed")
    # THEN the schema is satisfied (the database charm published all required fields)
    t.assert_schema_valid()

This test verifies that the databags of the ‘my-fancy-database’ relation are valid according to the pydantic schema you have specified in schema.py.

To check that things work as they should, you can pip install pytest-interface-tester and then run interface_tester discover --include my_fancy_database from the charm-relation-interfaces root.

You should see:

- my_fancy_database:
  - v0:
    - provider:
      - test_contract_happy_path
      - test_nothing_happens_if_remote_empty
      - schema OK
      - charms:
        - my_fancy_database_charm (https://github.com/your_org/my_fancy_database_operator) custom_test_setup=no
    - requirer:
      - <no tests>
      - schema OK
      - <no charms>

In particular, pay attention to the provider field. If it says <no tests>, then there is something wrong with your setup, and the collector is not able to find your tests or identify them as valid tests.
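The requirer side shows <no tests> because this guide only covers the provider. If you also want requirer coverage, add a test_requirer.py next to test_provider.py and follow the same pattern. Here is a minimal sketch, assuming the requirer charm publishes its tables request on relation-joined:

# ./interfaces/my_fancy_database/interface_tests/test_requirer.py (sketch)
from interface_tester import Tester
from scenario import State, Relation

def test_requirer_publishes_tables():
    # GIVEN a my_fancy_database relation (the provider has not published anything yet)
    t = Tester(
        State(leader=True,
              relations=[
                  Relation(
                      endpoint="my-fancy-database",  # the name doesn't matter
                      interface="my_fancy_database",
                  )
              ]
              )
    )
    # WHEN the requirer charm receives a relation-joined event
    state_out = t.run("my-fancy-database-relation-joined")
    # THEN it has published a valid 'tables' field, so the requirer schema is satisfied
    t.assert_schema_valid()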

Merge in charm-relation-interfaces

You are ready to merge these files into the charm-relation-interfaces repository. Open a PR and drive it to completion.

Prepare the charm

In order to be testable by charm-relation-interfaces, the charm needs to expose and configure a fixture.

This is because the fancy-database interface specification is only supported if the charm is well-configured and has leadership, since it will need to publish data to the application databag. Also, interface tests are Scenario tests and as such they are mock-based: there is no cloud substrate running, no Juju, no real charm unit in the background. So you need to patch out all calls that cannot be mocked by Scenario, as well as provide enough mocks through State so that the charm is ‘ready’ to support the interface you are testing.

Go to the Fancy Database charm repository root.

cd path/to/my_fancy_database_charm

Create a conftest.py file under tests/interface:

mkdir ./tests/interface
touch ./tests/interface/conftest.py

Write in conftest.py:

import pytest
from charm import MyFancyDatabaseCharm
from interface_tester import InterfaceTester
from scenario import State

@pytest.fixture
def interface_tester(interface_tester: InterfaceTester):
    # patch: prevent the charm from making a system call that would fail, since there is no cloud
    MyFancyDatabaseCharm._is_cloud_service_running = True

    interface_tester.configure(
        charm_type=MyFancyDatabaseCharm,
        state_template=State(
            leader=True,  # we need leadership
            config={
                "foo": "0.0.0.0"  # this is mandatory config for the charm
            },
        ),
    )
    # this fixture needs to yield (NOT RETURN!) interface_tester again
    yield interface_tester

This fixture overrides the pytest fixture of the same name that ships with pytest-interface-tester.

You can configure the fixture name, as well as its location, but that needs to happen in the charm-relation-interfaces repository (in the charm's entry in charms.yaml).

Verifying the interface_tester configuration

To verify that the fixture is good enough to pass the interface tests, create a new file:

touch ./tests/interface/test_fancy_database_interface.py

And write in it:

from interface_tester import InterfaceTester
  
def test_fancy_database_interface(interface_tester: InterfaceTester):
    interface_tester.configure(
        interface_name="my-fancy-database",
        interface_version=0,
    )
    interface_tester.run()

If you run this test before your interface tests PR has been merged into charm-relation-interfaces, it will fail with an error message telling you that it could not collect the tests for the interface.

Solve this by temporarily adding this configuration to the interface tester, pointing it at the repository and/or branch where it can find the tests:

from interface_tester import InterfaceTester
  
def test_fancy_database_interface(interface_tester: InterfaceTester):
    interface_tester.configure(
        interface_name="my-fancy-database",
        interface_version=0,
        # TODO: do not merge this in main!
        repo="https://your-org/your-charm-relation-interfaces-fork",
        branch="add-interface-tests",
    )
    interface_tester.run()

Now the tests should be collected and executed.

Troubleshooting and debugging the tests

Your charm is missing some configuration or mocks

The solution is to add the missing mocks/patches to the interface_tester fixture in conftest.py. Essentially, you need to make the charm runtime ‘think’ that everything is normal and ready to process and accept the interface you are testing. This may mean mocking the presence and connectivity of a container, system calls, substrate API calls, and more. If you have Scenario or unit tests in your codebase, you most likely already have all the necessary patches scattered around, and it's a matter of collecting them.
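For example, if your charm had a workload container, you could extend the state_template in conftest.py so that the charm sees the container as present and reachable. A minimal sketch, assuming a container named my-fancy-database (the name is illustrative):

import pytest
from charm import MyFancyDatabaseCharm
from interface_tester import InterfaceTester
from scenario import Container, State

@pytest.fixture
def interface_tester(interface_tester: InterfaceTester):
    interface_tester.configure(
        charm_type=MyFancyDatabaseCharm,
        state_template=State(
            leader=True,
            config={"foo": "0.0.0.0"},
            # hypothetical: tell Scenario the workload container exists and can be
            # connected to, so the charm does not bail out because Pebble is unreachable
            containers=[Container(name="my-fancy-database", can_connect=True)],
        ),
    )
    yield interface_tester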

Contributors: @ppasotti
