One of the problems I believe Juju struggles with is knowing in advance what will or won't work when attempting to deploy a charm. The crude analogy “throw it at the wall and see what sticks” is an apt description of how we sometimes attempt to deploy certain aspects of a charm.
Our inability to model certain aspects causes us to just try things and, if something fails, log it in status and ask the user of Juju to rinse and repeat. The feedback loop for this can be long, and the efficiency we know Juju can achieve is lost.
I’ve run into this countless times over the years while trying to implement various features:
- LXD profiles: A user wants to apply a profile that requires a specific kernel module, only to find out later that the module isn’t available on the machine. Juju attempts to apply the profile anyway, it doesn’t stick, and we show an error to the user in the status output.
- Architectures: A user wants to deploy a given charm on a specific architecture, but Juju doesn’t know upfront whether that architecture is even available.
When I look at all these problems, they come down to a basic modelling problem. Juju could know this information, and the providers in Juju could tell us, yet we don’t model Juju in this way. I believe we should start to.
The idea is quite a simple one: we take the dry-run capabilities that already exist within Juju and make them a real concept by inserting a Plan phase when deploying a charm. The Plan phase would gather a series of assertions that we believe are critical to deploying to the substrate, then ask the provider whether those assertions are likely to hold; we could either show a confidence rating or just error out. If we don’t believe the deploy will work, Juju could show the results of the planning phase to the user. This would vastly improve the user experience with Juju.
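
To make the idea concrete, here is a minimal Go sketch of what a Plan phase could look like. Everything in it (`Assertion`, `Provider`, `Plan`, `fakeProvider`) is hypothetical and invented for illustration; these are not Juju's real provider APIs, just a sketch of the shape the assertions could take.

```go
package main

import (
	"errors"
	"fmt"
)

// Assertion is a single precondition we believe must hold on the
// substrate before the deploy is attempted (hypothetical type).
type Assertion struct {
	Name  string
	Check func() error
}

// Provider is the narrow surface a provider could expose for planning
// (hypothetical interface, not Juju's real provider interface).
type Provider interface {
	SupportedArchitectures() []string
	KernelModuleAvailable(module string) bool
}

// Plan evaluates every assertion up front and collects all failures,
// so the user sees every problem in one pass instead of through a
// deploy, watch status, retry loop.
func Plan(assertions []Assertion) error {
	var failures []error
	for _, a := range assertions {
		if err := a.Check(); err != nil {
			failures = append(failures, fmt.Errorf("%s: %w", a.Name, err))
		}
	}
	return errors.Join(failures...) // nil when everything passed
}

// fakeProvider stands in for a real substrate in this sketch.
type fakeProvider struct{}

func (fakeProvider) SupportedArchitectures() []string    { return []string{"amd64"} }
func (fakeProvider) KernelModuleAvailable(m string) bool { return false }

func main() {
	var p Provider = fakeProvider{}

	assertions := []Assertion{
		{
			Name: "architecture arm64 is available",
			Check: func() error {
				for _, arch := range p.SupportedArchitectures() {
					if arch == "arm64" {
						return nil
					}
				}
				return errors.New("provider only offers amd64")
			},
		},
		{
			Name: "kernel module required by the LXD profile is loadable",
			Check: func() error {
				if !p.KernelModuleAvailable("ip_vs") {
					return errors.New("module ip_vs not available on the machine")
				}
				return nil
			},
		},
	}

	// Surface the planning result before anything touches the substrate.
	if err := Plan(assertions); err != nil {
		fmt.Println("plan: deploy is unlikely to succeed:")
		fmt.Println(err)
		return
	}
	fmt.Println("plan: all assertions hold; proceeding with deploy")
}
```

The one design choice worth calling out: `Plan` collects every failure rather than bailing on the first one, so a single planning pass reports the unavailable architecture and the missing kernel module together, which is exactly the shortened feedback loop we're after.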