Putting dependencies in place prior to running charm code

This is an important line. I assume that without exec, argv would get messed up.

@jameinel If I abandon my method used above for yours, I get this:

2020-07-22 19:57:14 DEBUG juju.worker.uniter agent.go:20 [AGENT-STATUS] executing: running install hook
2020-07-22 19:57:14 DEBUG juju.worker.uniter.runner runner.go:715 starting jujuc server  {unix @/var/lib/juju/agents/unit-slurmd-34/agent.socket <nil>}
2020-07-22 19:57:14 DEBUG install /usr/bin/env: python3: No such file or directory
2020-07-22 19:57:14 ERROR juju.worker.uniter.operation runhook.go:136 hook "install" (via hook dispatching script: dispatch) failed: exit status 127

This is why I had to create my workaround above.

My wording was of course meant to be a bit witty; I totally understand that the docs would follow. This functionality is great to know about.

/usr/bin/env: python3: No such file or directory

I believe many places don’t actually have /usr/bin/env, so your #! in src/charm.py is wrong.
You can also be more explicit with
exec python3 src/charm.py

Rather than expecting the #! to be correct.
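
For illustration, a minimal dispatch along those lines might look like this (a hedged sketch; the PYTHONPATH entries are assumptions about the charm layout, not taken from the actual charm):

#!/bin/sh
# Hedged sketch: invoke the interpreter explicitly instead of relying on the
# #! line inside src/charm.py.
JUJU_DISPATCH_PATH="${JUJU_DISPATCH_PATH:-$0}" PYTHONPATH=lib:venv \
    exec python3 ./src/charm.py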

@jameinel while I agree with you that we shouldn’t expect /usr/bin/env to exist, if I run juju add-machine --series centos7 and ssh into the box when it comes up, /usr/bin/env exists:

$ type -a /usr/bin/env
/usr/bin/env is /usr/bin/env
[ubuntu@n-c132 ~]$ /usr/bin/env python3
/usr/bin/env: python3: No such file or directory
[ubuntu@n-c132 ~]$ type -a python3
-bash: type: python3: not found

I’m thinking the best solution in this case would be to symlink the hooks to charm.py and forgo using dispatch and charmcraft for this charm.

So the actual failure here is that python3 still doesn’t exist. It may be that my ‘if type’ bash-fu wasn’t sufficient to detect the lack of python3 and cause it to be installed.
But if it is telling you that /usr/bin/env does exist but python3 doesn’t then that is what you need to fix.
Is it called python3 on CentOS? I know on Windows it is just called ‘python’.

We could introspect the output of python -V, perhaps.

@jameinel Correct, python3 does not exist by default on CentOS. The drive behind this is to get python3 installed on CentOS in order to run the charm/operator code. To do this, I thought I could install python3 in the install hook and then call the operator framework after installing python3. The problem I faced is that if the dispatch file exists before the install hook runs, I get the /usr/bin/env: python3: No such file or directory error. If I write out the dispatch file in the install hook and then call it after installing python3 at the bottom of that bash file, I’m able to get things working. I’ve identified another workaround that avoids using dispatch altogether, but I would like to figure out a way to install the dependencies needed to run the operator framework from the charm’s perspective and still use dispatch.
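
Roughly reconstructing that workaround as a sketch (hedged; it assumes yum, the python3 package name, and a charmcraft-style dispatch one-liner):

#!/bin/bash
# Hedged reconstruction of the workaround: a shell hooks/install that installs
# python3 first, writes out dispatch, then hands the install event over to it.
set -e
yum install -y python3
cat > dispatch <<'EOF'
#!/bin/sh
JUJU_DISPATCH_PATH="${JUJU_DISPATCH_PATH:-$0}" PYTHONPATH=lib:venv exec python3 ./src/charm.py
EOF
chmod +x dispatch
JUJU_DISPATCH_PATH=hooks/install exec ./dispatch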

I thought the goal was to put the “make sure python3 is installed” into a quick test inside dispatch and then installing it if it doesn’t exist, which would cover all hooks that are run from there, rather than having an install that then creates a dispatch, etc.
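
For concreteness, a minimal sketch of that idea (assuming yum and that the package is called python3 on CentOS) could look like:

#!/bin/bash
# Hedged sketch: check for python3 at the top of dispatch and install it if it
# is missing, so every hook that goes through dispatch is covered.
if ! type python3 > /dev/null 2>&1 ; then
    yum install -y python3
fi
JUJU_DISPATCH_PATH="${JUJU_DISPATCH_PATH:-$0}" PYTHONPATH=lib:venv exec python3 ./src/charm.py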

@jameinel Yeah, it’s weird; that is definitely the goal. What I’m trying to get at here is that putting the “make sure python3 is installed” check into a quick test inside dispatch and installing it if it doesn’t exist, which would cover all hooks that are run from there, doesn’t work if the dispatch file pre-exists - which is why I have to do my hack.

That seems quite surprising; my first thought is that the test to see if it exists is somehow incorrect, and thus it isn’t properly causing it to install. My quick bash test seems to say it should be working, though.

$ if ! type -a blababa; then echo "nope" ; fi
bash: type: blababa: not found
nope

Could it be a case of something like env caching the paths? I know that for bash, if you install something you often need to call hash -r so it forgets the lookups it has already done. Would there be something equivalent here?
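
For reference, the bash behaviour being referred to looks roughly like this in an already-running interactive session (a general illustration, not taken from the charm):

# After installing or moving a binary, bash may still use its cached lookup:
hash -r          # forget bash's remembered command locations
type python3     # force a fresh PATH lookup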

You could also do a more direct approach:

if [ "$JUJU_DISPATCH_PATH" == "hooks/install" ] ; then
...
fi

The main problem with that one is that storage-attached hooks fire before install (so that you can know where your storage is by the time you go to install software). You could also do something like:

if [ ! -e ".installed" ] ; then
  yum install -y python3
  touch .installed
fi

This also looked like it worked:

$ if ! /usr/bin/env pppp ; then echo 'nope'; fi
/usr/bin/env: ‘pppp’: No such file or directory
nope

And it has the advantage that it is the same executable doing the lookup as the one that will be exec’d via the #! in src/charm.py.
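
Putting those pieces together, a hedged sketch of a dispatch prologue that uses /usr/bin/env for the lookup (matching the #! in src/charm.py) and a marker file so yum only runs once might be:

# Hedged sketch; the .installed marker and the python3 package name are
# assumptions, not part of the original charm.
if [ ! -e ".installed" ] ; then
    if ! /usr/bin/env python3 -V > /dev/null 2>&1 ; then
        yum install -y python3
    fi
    touch .installed
fi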

@jameinel are there other options we can look at here? For example, can we expose some type of charm pre-exec hook that could be separate from, and run before, any operator code or actual charm hooks? I know people have previously used a convention where they would have the install hook do any pre-install work, then have the install hook call install.real, and subsequently handle install.real in the Python charm code.

The OpenStack charms are a good example of where this convention is currently used.
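
For reference, a hedged sketch of that hooks/install -> hooks/install.real hand-off (names per the convention above; the yum line is an assumption for CentOS) might be:

#!/bin/bash
# Hedged sketch of the install -> install.real convention: the shell install
# hook bootstraps python3, then hands off to the real hook handled by the
# Python charm code.
set -e
if ! type python3 > /dev/null 2>&1 ; then
    yum install -y python3
fi
exec ./hooks/install.real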

Possibly the operator framework can expose something to support this functionality?

The main hurdle here is making sure python3 is on the system prior to any operator code being run. Maybe it is better to solve that problem individually rather than in the context of the operator framework.

One idea I have is to ship python3 in the venv with the operator charm. This seems like a reasonable solution, though I’ve yet to give it a try.

I ended up making some modifications that add python3 to the charm at build time. I’m not sure if this is something that works for everyone, but I thought it was worth exploring.

I think to get this working, glibc also needs to exist at a specific version on the system.

When I try to deploy a centos7 charm and use the supplied python3 from my example, I see that the python3 I supply requires a higher version of glibc than exists on the system.

$ sudo yum install glibc
Package glibc-2.17-307.el7.1.x86_64 already installed and latest version

The charm’s log shows

./venv/bin/python3: /lib64/libc.so.6: version `GLIBC_2.25' not found (required by ./venv/bin/python3)

From the Python packaging tutorial:

Linux C-runtime compatibility is determined by the version of **glibc** used for the build.

The glibc library shared by the system is forwards compatible but not backwards compatible. That is, a package built on an older system *will* work on a newer system, while a package built on a newer system will not work on an older system.

Which is probably what is going on underneath here.
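
If it helps when debugging a similar mismatch, one way to compare the two sides (assuming binutils is installed; these are general commands, not taken from the charm) is:

# Glibc symbol versions the shipped interpreter requires:
objdump -T ./venv/bin/python3 | grep -o 'GLIBC_[0-9.]*' | sort -Vu
# Glibc version the target system provides:
ldd --version | head -n 1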

I also created this issue for further discussion on the charmcraft side of things.

After some reading up, I have found the manylinux project. From what I can tell, manylinux builds Python in their Docker images with an older toolchain and an older version of glibc.

This leads me to believe that it might be possible to generate and package the venv and python3.8 binary in the charm by using the python3.8 in the manylinux docker image (which is built with the older glibc).

The version of glibc used in the manylinux image is 2.17.

$ docker run -it --rm quay.io/pypa/manylinux2014_x86_64 bash
[root@0b68add7d530 /]# /opt/python/cp38-cp38/bin/python
Python 3.8.5 (default, Jul 27 2020, 16:20:29)
[GCC 9.3.1 20200408 (Red Hat 9.3.1-2)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> os.confstr('CS_GNU_LIBC_VERSION')
'glibc 2.17'

By mounting requirements.txt and the build dir, we are able to generate the venv and python3.8 to be packaged into the charm, using the manylinux Python built against glibc 2.17.

#!/bin/bash
# Build the charm's venv with the manylinux Python (built against glibc 2.17)
# so the bundled interpreter also runs on older hosts such as CentOS 7.

set -eux

# Mount the requirements file plus the build and output directories into the
# manylinux image, then create the venv, install dependencies, copy in the
# standard library and headers, and zip up the result.
docker run -it --rm \
    -v "$(pwd)"/requirements.txt:/srv/requirements.txt \
    -v "$(pwd)"/build:/srv/build \
    -v "$(pwd)"/out:/srv/out \
    quay.io/pypa/manylinux2014_x86_64 \
    /bin/sh -c \
    '/opt/python/cp38-cp38/bin/python -m venv --copies --clear /srv/build/venv && \
     /opt/python/cp38-cp38/bin/pip install -r /srv/requirements.txt -t /srv/build/venv/lib/python3.8/site-packages && \
     cp -r /usr/local/lib/libcrypt.* /srv/build/venv/lib/ && \
     cp -r /opt/_internal/cpython-3.8.5/lib/python3.8 /srv/build/venv/lib/ && \
     cp -r /opt/_internal/cpython-3.8.5/include/* /srv/build/venv/include/ && \
     chown -R 1000:1000 /srv/build && \
     /opt/python/cp38-cp38/bin/python /srv/build/scripts/create_zip.py && \
     chown -R 1000:1000 /srv/out'

By doing this we were able to build a .charm that we can deploy on CentOS and Ubuntu.

This leaves me wondering: a) is charmcraft the right place to look at doing this sort of thing, and b) if so, can we make charmcraft support a Docker-based build component like this?


To finish out the day, I was able to create a few scripts that build the operator charm with the venv generated from the manylinux python.


This looks cool, but would this swap the dependency over to Docker then? It’s perhaps not ideal, since it then likely adds a whole new dependency, perhaps even bigger than Python itself.

Then again, I might not fully grasp the manylinux construct from my initial reading.

It uses Docker to generate python3.8 at build time, packages python3.8 with the charm, and uses that python3.8 to execute the charm code at run time. The charm is unaware of Docker in this case; it is only used at build time to get a specific Python into the charm.
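
To make the run-time side concrete, a hedged sketch of a dispatch that uses the bundled interpreter instead of anything on the host (the paths are assumptions about the charm layout) could be:

#!/bin/sh
# Hedged sketch: run the charm with the python3 shipped inside the charm's
# venv, so nothing needs to be installed on the host first.
JUJU_DISPATCH_PATH="${JUJU_DISPATCH_PATH:-$0}" PYTHONPATH=lib:venv/lib/python3.8/site-packages \
    exec ./venv/bin/python3 ./src/charm.py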

I’ve iterated on this a bit further and have come up with a working prototype of something that generates the venv once, and then seeds it into multiple charms as it builds them.
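
As a hedged sketch of that idea (build-venv.sh and the charm names are placeholders, not the actual scripts):

#!/bin/bash
# Hedged sketch: build the manylinux-based venv once, then seed a copy into
# each charm's build directory before packing it.
set -eux
./build-venv.sh                      # produces ./out/venv via the manylinux image, as above
for charm in charm-a charm-b ; do
    cp -r ./out/venv "./$charm/venv"
done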