Question about the cpython dependencies with 'charmcraft build'

Hello everyone!

I recently finished a brand new infrastructure charm using the new operator framework. The charm deploys Microsoft SQL Server on Linux (with clustering via pacemaker).

The charm depends on the Python package pymssql, which ships a pre-built CPython extension module for each supported Python release. When pymssql is installed via pip, it pulls the pre-compiled build matching the Python version used to install the package.

For example, the installed extension module is:

  • on Ubuntu 18.04 (with default Py 3.6): a cpython-36 build
  • on Ubuntu 20.04 (with default Py 3.8): a cpython-38 build
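To see where the version-specific name comes from, you can ask the running interpreter for its extension-module suffix via the standard `sysconfig` module (a quick illustration, not part of the charm itself):

```python
import sysconfig

# EXT_SUFFIX is the filename suffix pip-installed extension modules get,
# e.g. ".cpython-38-x86_64-linux-gnu.so" on a Focal machine with Python 3.8.
# A venv built on one release therefore embeds that release's ABI tag.
print(sysconfig.get_config_var("EXT_SUFFIX"))
```

A module compiled for one suffix (cpython-38) simply cannot be imported by an interpreter expecting another (cpython-36), which is exactly the failure described below.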

However, if I do a charmcraft build on Ubuntu 20.04 and deploy the charm on Ubuntu 18.04, the charm hooks fail to import pymssql, because the venv bundled with the charm (produced by charmcraft build) contains pymssql.cpython-38, matching the Python 3.8 of the Focal machine where the charm was built.

On the other hand, if I build the charm on Ubuntu Bionic, it deploys without any problems on Ubuntu Bionic, since the pre-built CPython libraries will match the Python release from the deployment machine.

The same problem occurs with every dependency that ships pre-built CPython libraries.

For the moment, I’ve added a workaround that re-initializes the charm venv (only when the dependency cannot be imported).

This fixes the issue, since it pulls the appropriate CPython libs on the target machine, but I’m not sure it’s the best way to solve this.
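For readers hitting the same problem, the workaround could look something like the sketch below: guard the import, and on failure re-install the requirements with the target machine’s own interpreter so pip pulls wheels matching its CPython ABI. The function name and the `requirements.txt` path are illustrative assumptions, not the original poster’s exact code:

```python
# Hedged sketch of the "re-initialize the venv on ImportError" workaround.
# Assumes the charm ships its dependencies in a requirements.txt; the
# helper name ensure_importable is hypothetical.
import importlib
import subprocess
import sys


def ensure_importable(module_name: str, requirements: str = "requirements.txt") -> bool:
    """Import module_name; on failure, reinstall deps for the local Python
    (so pip selects e.g. cpython-36 wheels on Bionic) and retry once."""
    try:
        importlib.import_module(module_name)
        return True
    except ImportError:
        # Use the *target* machine's interpreter, not the build machine's.
        subprocess.check_call(
            [sys.executable, "-m", "pip", "install",
             "--force-reinstall", "-r", requirements]
        )
        importlib.invalidate_caches()
        importlib.import_module(module_name)
        return True


if __name__ == "__main__":
    # Usage example with a stdlib module, which imports without reinstalling.
    assert ensure_importable("json") is True
```

In a charm this guard would run at the top of the entrypoint, before any hook code touches the compiled dependency.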

What do you guys think?


Hi @ionutbalutoiu. Thank you for the question!

The plan for the new CharmHub is to have hooks in charmcraft for making different .charm builds per series/architecture combination, which would address the issue you’re running into with CPython libs.

Juju will then request the proper series/architecture at deploy time.

The problem is that, while Juju 2.9 can request the correct series/architecture, some of the bits of charmcraft and CharmHub that support this aren’t quite wired up yet. That support is coming very soon. In the meantime, your workaround is probably the best approach.

~ PeteVG

Hi @petevg. Thank you for the clarification.

I’ll keep my workaround for the time being.