Scenario Snapshot: semi-automatic unit test generation, juju unit state insight, and more!

When the author of jhack meets the author of Scenario, interesting things happen!

Jhack offers a tool called snapshot; you can reach its entry point by typing jhack scenario snapshot in a shell.

Snapshot used to be shipped with scenario, but it was removed in v5.6.1. We moved it over to jhack, where it has been available since v0.3.23.

Snapshot’s purpose is to gather the State data structure from a real, live charm running in some cloud your local juju client has access to. This is handy when:

  • you want to write a test about the state the charm you’re developing is currently in
  • your charm is borked or in some inconsistent state, and you want to write a test to check that the charm will handle it correctly the next time around (aka regression testing)
  • you are new to Scenario and want to quickly get started with a real-life example.

Example: gathering state from prometheus-k8s

Suppose you have a Juju model with a prometheus-k8s unit deployed as prometheus-k8s/0. If you type jhack scenario snapshot prometheus-k8s/0, you will get a printout of the State object.

[screenshot: the State object printed to the terminal]

Copy-paste that into some file, import what you need from scenario, and you have a working State that you can .trigger() events from.
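For example, the pasted state might end up looking something like this. This is only a sketch: the charm class, the values and the event name below are placeholders, and it assumes the State.trigger API referenced above (newer scenario releases expose the same thing through a Context object instead):

```python
# sketch of pasting the snapshot output into a file; MyCharm, the values and
# the event name are placeholders, not real snapshot output.
from scenario import Container, Relation, State

from charm import MyCharm  # your charm class

# paste the State printed by `jhack scenario snapshot <unit>` here:
state = State(
    leader=True,
    config={"log_level": "info"},
    relations=[Relation(endpoint="metrics-endpoint", interface="prometheus_scrape")],
    containers=[Container(name="prometheus", can_connect=True)],
)

# trigger any event from that state and inspect the output State
# (assuming the State.trigger API mentioned above):
out = state.trigger("config-changed", MyCharm, meta={"name": "my-charm"})
```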

You can also pass a --format json or --format pytest flag to obtain, respectively:

  • a jsonified State data structure (in fact, a plain dataclasses.asdict preprocessing step), for portability

[screenshot: the jsonified State]

  • a full-fledged pytest test case (with imports and all), where you only have to fill in the charm type and the event that you wish to trigger.

[screenshot: the generated pytest test case (truncated)]

Pipe that out to a file, and you have your unit-test!
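For reference, the generated test has roughly this shape. This is a hand-written approximation rather than the tool’s exact output: the charm type, the event and the values are placeholders, and it again assumes the State.trigger API mentioned above:

```python
# test_prometheus_k8s_0.py -- hand-written approximation of a `--format pytest`
# snapshot; the real output may differ. Values below are illustrative only.
from scenario import Container, State

from charm import MyCharm  # <- fill in your charm type here


def test_snapshot():
    # the State gathered from the live unit
    state = State(
        leader=True,
        config={"log_level": "info"},
        containers=[Container(name="prometheus", can_connect=True)],
    )
    # <- fill in the event you wish to trigger
    out = state.trigger("update-status", MyCharm, meta={"name": "prometheus-k8s"})
    # add charm-specific assertions on the output state here
    assert out is not None
```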

Snapshot is fairly complete; only secrets still need to be implemented. Out of the box, expect it to gather:

  • config
  • networks
  • relations (and relation data)
  • containers for k8s charms
  • deferred events
  • stored state
  • status
  • model metadata
  • leadership

With some setup, you can also fetch files from Kubernetes workload containers (think: application config files, etc…) and see them automatically assigned to the container mounts they belong to!
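In State terms, a fetched file shows up as a local mount on the container. A hedged sketch of what that could look like, assuming scenario’s Container/Mount data structures (the names and paths here are made up, not actual snapshot output):

```python
# sketch only: how a file fetched from the workload might appear in the
# snapshotted State. Names and paths are illustrative.
from scenario import Container, Mount, State

state = State(
    containers=[
        Container(
            name="prometheus",
            can_connect=True,
            mounts={
                # the fetched file is stored locally and mapped back to
                # its in-container location
                "config": Mount(
                    location="/etc/prometheus/prometheus.yml",
                    src="./snapshot_files/prometheus.yml",
                ),
            },
        )
    ]
)
```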


What do you think about generating an upgrade test out-of-the-box?

I imagine taking a snapshot would be very useful as part of one of our quality gates (probably candidate or stable):

  • Deploy cos-lite bundle from stable.
  • Snapshot an app: scenario snapshot prometheus/0 ... > test_regression_REV.py. This way we test that revision REV can be upgraded to the current version of the charm.
  • Add the newly created state to the existing suite of upgrade tests.

So perhaps we’d have a tree structure such as:

prometheus-k8s
└── tests
    └── scenario
        └── regression
            ├── test_regression_103.py
            ├── test_regression_117.py
            └── test_regression_98.py

Wdyt?


Using snapshot for regression testing has been on my mind for a while, but we need to put a bit more thought into exactly how we would use a snapshot to test that an upgrade is ‘safe’.

Triggering a scenario checks that

  • the charm processes this event (from this input state) without erroring
  • the output state is valid, whatever that means for this charm

I’d argue we can define an upgrade to be safe if and only if the NEW charm can process the same events from the same input states (or only an upgrade sequence?):

  • without erroring
  • producing output states that are valid for the NEW charm (not necessarily for the old one)

So I’m not sure if a snapshot is useful in this specific case, since we presumably already have a battery of scenario tests that encode the valid state transitions for the previous revision. Is it not enough then to run them on the new revision and adapt the output states to match any changes?

That is true. I imagine any existing scenario tests may fall short only in:

  • readiness/eagerness to update the tests when something fails
  • process: if we have a state file per stable revision then (1) we have a state file per stable revision (!), and (2) errors are specific to revision numbers.
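Concretely, I’d picture something like the sketch below: one snapshotted state per stable revision, all replayed against the NEW charm. This is entirely hypothetical — the regression_states module, the revision numbers and the charm class are made up, and it again assumes the State.trigger API from the original post:

```python
# hypothetical sketch: replay states snapshotted from older stable revisions
# against the NEW charm. The states module and revision mapping are made up.
import pytest

from charm import PrometheusCharm  # the charm under test (new revision)
from regression_states import STATES_BY_REVISION  # hypothetical: {rev: snapshotted State}


@pytest.mark.parametrize("rev,old_state", sorted(STATES_BY_REVISION.items()))
def test_upgrade_from_stable_revision(rev, old_state):
    # the new charm must process upgrade-charm from the old revision's state
    # without erroring (assuming the State.trigger API referenced earlier)
    out = old_state.trigger("upgrade-charm", PrometheusCharm, meta={"name": "prometheus-k8s"})
    # 'valid output state' is charm-specific: add those assertions here
    assert out is not None
```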