How to write an integration test for a charm

This document covers writing integration tests, smoke tests, and any other tests that might benefit from being run against a live environment.

Integration tests should not be the first thing that charm authors write. The Operator Framework has an excellent testing harness that makes it possible to write unit tests with minimal mocking, and a reasonable expectation that code that passes harness tests can be successfully executed in an actual Juju model.

But integration tests can still be tremendously useful! This document covers setting up, writing, and running a simple smoke test. The tools and information outlined here can then be used to create more expansive suites of integration tests.

Setup

By convention, integration tests are kept in the charm’s source tree, in a directory called tests/integration. The tests here use pytest-operator, a lightweight library that wraps python-libjuju with some testing-related convenience functions. With pytest-operator, it is easy to set up and tear down temporary models to hold your tests.
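A typical layout looks something like the following (the file names are just a common convention, not a requirement):

tests/
    unit/
        test_charm.py
    integration/
        test_charm.py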

The following lines in a charm’s tox.ini will set up integration tests:

[testenv:integration]
description = Run integration tests
deps =
     juju
     pytest
     pytest-operator
     ipdb
commands =
    pytest -v --tb native --ignore={[vars]tst_dir}unit --log-cli-level=INFO -s {posargs}

This document assumes that charm authors have installed tox, charmcraft, and juju, and have bootstrapped a controller.
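With that environment defined, the suite is run through tox. Anything after the -- separator is passed through to pytest via {posargs}, so, for example, a single test can be selected with -k:

tox -e integration
tox -e integration -- -k test_smoke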

An example smoke test

Here is the simplest possible implementation of an integration test – a smoke test that builds and deploys the charm, and verifies that the installation hooks finish executing without error:

import logging

import pytest
from pytest_operator.plugin import OpsTest

logger = logging.getLogger(__name__)


@pytest.mark.abort_on_fail
async def test_smoke(ops_test: OpsTest):
    charm = await ops_test.build_charm(".")
    app = await ops_test.model.deploy(charm)
    unit0 = app.units[0]

    await ops_test.model.block_until(lambda: app.status in ("active", "error"), timeout=60)
    assert app.status == "active"

Let’s break that down.

@pytest.mark.abort_on_fail

When pytest runs a test module, it executes all the routines in that module whose names start with test, in the order in which they appear. We can take advantage of this to add a single “setup” test at the top of the file, which will execute before the other tests in the module. The abort_on_fail marker aborts the remaining tests if this test fails, so a broken build or deployment doesn’t waste time failing every subsequent test.

Note:

  • If two tests should not share a model, simply put them in separate files, each with its own setup.
  • For a more object oriented approach, charm authors may use IsolatedAsyncioTestCase from Python’s unittest library. See the smoke tests in charm-rolling-ops for an example.
  • This example module contains only one test, which serves as both setup and smoke test.
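To make that pattern concrete, here is a minimal sketch of a module with a dedicated setup test followed by a functional test. The application name my-charm and the body of the second test are hypothetical placeholders:

import pytest
from pytest_operator.plugin import OpsTest

APP_NAME = "my-charm"  # hypothetical; use your charm's name


@pytest.mark.abort_on_fail
async def test_build_and_deploy(ops_test: OpsTest):
    # "setup" test: build and deploy once for the whole module
    charm = await ops_test.build_charm(".")
    await ops_test.model.deploy(charm, application_name=APP_NAME)
    await ops_test.model.wait_for_idle(apps=[APP_NAME], status="active", timeout=600)


async def test_workload(ops_test: OpsTest):
    # runs after the setup test, against the same model and deployment
    app = ops_test.model.applications[APP_NAME]
    assert app.status == "active"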
async def test_smoke(ops_test: OpsTest)

python-libjuju uses Python’s asyncio library to interact with the Juju model. Our tests run sequentially, but we still need to define our test routines as coroutines, and use await when calling functions that execute asynchronously.

charm = await ops_test.build_charm(".")

This line will build the charm with charmcraft. This document assumes that charmcraft is set up on the developer’s machine, and that the charm can be built.

app = await ops_test.model.deploy(charm)

ops_test.model is simply an instance of python-libjuju's Model class. Charm authors may reference the python-libjuju docs for a complete list of capabilities and usage.

The call to Model.deploy will result in the freshly built charm being deployed to the test model.

Model.deploy returns an Application object. In other tests, this Application object can be retrieved again with app = ops_test.model.applications["<charm-name>"].
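For instance, a later test might look the application up again and inspect its units. This sketch reuses the placeholder name above and relies on the workload_status and workload_status_message properties that python-libjuju exposes on Unit objects:

async def test_units_are_active(ops_test: OpsTest):
    app = ops_test.model.applications["<charm-name>"]
    for unit in app.units:
        # each unit exposes its workload status and status message
        assert unit.workload_status == "active", unit.workload_status_message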

await ops_test.model.block_until(lambda: app.status in ("active", "error"), timeout=60)

We now wait until the app status is either active or in an error state. All apps start out with a maintenance status, so it is sufficient, for our smoke test, to simply wait for that initial status to change. We set a timeout so that our tests don’t hang on stalls.
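If you want to be a little stricter, the same call can wait on the workload status of every unit rather than the aggregated application status. This is a sketch of one possible variant, not something the smoke test above requires:

await ops_test.model.block_until(
    lambda: all(unit.workload_status in ("active", "error") for unit in app.units),
    timeout=60,
)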

Note: there is a helper, Model.wait_for_idle, that addresses this particular scenario more completely. We don’t use it here, because this doc is intended to give charm authors ideas for expanding on the basic smoke test. But we could replace the last two lines of our code with the following:

await ops_test.model.wait_for_idle(apps=['application_name'], status='active', timeout=60, wait_for_exact_units=1)
assert app.status == "active"

We complete this simple smoke test by verifying that the status is active, meaning that the charm appears to have deployed and installed correctly.
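From here, the suite can grow beyond the smoke test. As one possible example, a follow-up test might exercise a charm action via python-libjuju’s Unit.run_action; the action name get-password is purely hypothetical, so substitute one your charm actually defines:

async def test_run_an_action(ops_test: OpsTest):
    app = ops_test.model.applications["<charm-name>"]
    unit = app.units[0]
    # dispatch a (hypothetical) action and wait for it to complete
    action = await unit.run_action("get-password")
    action = await action.wait()
    assert action.status == "completed"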

Further Reading

It may be good to mention that there is a higher-level abstraction, available through ops_test.model, for the block_until code you are using here:

You can do:

await ops_test.model.wait_for_idle(apps=['application_name'], status='active', timeout=60, wait_for_exact_units=1)

This will block until exactly 1 unit of an application called application_name is deployed and its status is active. It will raise if this doesn’t happen within 60s, or if the application goes into an error state (and, with raise_on_blocked=True, if it goes to blocked).

@ppasotti Ooh! I didn’t know about this. I’ll revise the example to use it. 🙂

Another little pro-tip that I find myself using a lot:

import contextlib

@contextlib.asynccontextmanager
async def fast_forward(ops_test, interval: str = "10s"):
    # temporarily speed up update-status firing rate
    await ops_test.model.set_config({"update-status-hook-interval": interval})
    try:
        yield
    finally:
        await ops_test.model.set_config({"update-status-hook-interval": "60m"})


async def test_deploy(ops_test: OpsTest):
    async with fast_forward(ops_test):
        await ops_test.model.wait_for_idle(...)

I made a separate post to serve as a “Cookbook” of things like this: Integration Testing Cookbook

Feel free to drop in more examples!

Thanks @pengale for the helpful post! I’d like to echo that most things are easier and better tested at the unit test level with the harness (much faster, and quite powerful!), but that integration tests are also really helpful, as there are a few things that can’t be found easily without really deploying.

Some other integration testing tips that come to mind:

  1. have your first test be a simple “build, deploy, and check if it gets to active” test like @pengale’s test_smoke above. Sometimes you need more (deploy some other charm, establish relations, etc), but resist the urge to add anything more than a wait_for_idle(status='active') in that test. Add additional tests for actual function (interacting with the workload the charm deployed)
  2. when writing tests, take advantage of pytest -k, which lets you run a subset of your tests. If your application deployment is in a separate test as mentioned in (1), you can then use pytest -k "not build_and_deploy" and quickly run through the workload tests without having to rebuild and redeploy every time, making your tests almost as fast as unit tests

Hello there! I have noticed an issue within the example smoke test:

The example is missing import pytest before from pytest_operator.plugin import OpsTest. In the test’s current form, it will fail at @pytest.mark.abort_on_fail because pytest will be an unresolved reference. This should be fixed to prevent confusion among any new charm authors.