Error attaching oci resource to juju charm using containerd

Hello everyone,

I am trying to attach an oci-type resource to a kubernetes charm, and I’m getting this error:

$ juju attach ui-k8s ui-image=lightui --debug
16:04:53 INFO  juju.cmd supercommand.go:57 running juju [2.5.4 gc go1.10.4]
16:04:53 DEBUG juju.cmd supercommand.go:58   args: []string{"/snap/juju/7206/bin/juju", "attach", "ui-k8s", "ui-image=lightui", "--debug"}
16:04:53 INFO  juju.juju api.go:67 connecting to API addresses: [10.47.6.37:17070]
16:04:53 DEBUG juju.api apiclient.go:888 successfully dialed "wss://10.47.6.37:17070/model/8354809b-ba02-49f6-823f-a91d28e82022/api"
16:04:53 INFO  juju.api apiclient.go:608 connection established to "wss://10.47.6.37:17070/model/8354809b-ba02-49f6-823f-a91d28e82022/api"
16:04:54 DEBUG httpbakery client.go:243 client do PUT https://10.47.6.37:17070/model/8354809b-ba02-49f6-823f-a91d28e82022/applications/ui-k8s/resources/ui-image {
16:05:01 DEBUG httpbakery client.go:245 } -> error <nil>
16:05:01 DEBUG juju.api monitor.go:35 RPC connection died
ERROR failed to upload resource "ui-image": invalid character 'b' looking for beginning of value
16:05:01 DEBUG cmd supercommand.go:496 error stack: 
invalid character 'b' looking for beginning of value
/build/juju/parts/juju/go/src/github.com/juju/juju/resource/api/client/client.go:121: 
/build/juju/parts/juju/go/src/github.com/juju/juju/cmd/juju/resource/upload.go:134: 
/build/juju/parts/juju/go/src/github.com/juju/juju/cmd/juju/resource/upload.go:120: failed to upload resource "ui-image"

To reproduce this error, I’ve created the script below. If it’s executed in a clean VM running Ubuntu 18.04 LTS, exactly the same error should occur.

######################
# Setup environment
sudo apt-get remove --purge -y liblxc1 lxc-common lxcfs lxd lxd-client
sudo apt install -y snapd
sudo snap install lxd
sudo snap install juju --classic --stable
sudo snap install microk8s --classic --stable
cat <<EOF | lxd init --preseed
config: {}
networks:
- config:
    ipv4.address: auto
    ipv6.address: none
  description: ""
  managed: false
  name: lxdbr0
  type: ""
storage_pools:
- config:
    size: 30GB
  description: ""
  name: default
  driver: zfs
profiles:
- config: {}
  description: ""
  devices:
    eth0:
      name: eth0
      nictype: bridged
      parent: lxdbr0
      type: nic
    root:
      path: /
      pool: default
      type: disk
  name: default
cluster: null
EOF

lxd waitready

sudo snap alias microk8s.kubectl kubectl
eval $(microk8s.inspect | grep "sudo" | sed -n -e 's/^.*: //p')
microk8s.status --wait-ready
microk8s.enable dashboard storage dns

juju bootstrap lxd juju-controller
microk8s.config | juju add-k8s k8s
juju add-model model-name k8s


###############
# Download charm
git clone https://git.launchpad.net/canonical-osm 
sudo snap install charm --classic
cd canonical-osm
# Build ui-k8s charm
cd charms/layers/
export CHARM_LAYERS_DIR=.
export CHARM_INTERFACES_DIR=.
mkdir ../builds
charm build ui-k8s -o ../

# Deploy charm using bundle
cat << EOF > bundle.yaml
bundle: kubernetes
applications:
  ui-k8s:
    charm: '../builds/ui-k8s'
    scale: 1
    series: kubernetes
    resources:
      ui-image: '../resources/ui-image.yaml'
EOF
mkdir ../resources
echo "registrypath: opensourcemano/light-ui:releasefive-daily" > ../resources/ui-image.yaml

juju deploy bundle.yaml

# ctr pull
microk8s.ctr image pull docker.io/opensourcemano/light-ui:releasefive-daily
# ctr export 
microk8s.ctr images export lightui docker.io/opensourcemano/light-ui:releasefive-daily
# juju attach
juju attach ui-k8s ui-image=lightui --debug

I was able to reproduce this. Thank you @davigar15.

@wallyworld googling the error suggests that when the client tried to PUT the image, it received a non-JSON response. Are there any logs we can take a look at to find out what went wrong?
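
In case it helps with digging further: the controller-side API server log should show what was actually returned for that PUT. A rough sketch of how to get at it, assuming the lxd-bootstrapped controller from the script above (controller model named "controller", machine 0, default log path):

juju debug-log -m controller --replay | grep -i resource
# or read the raw log on the controller machine directly:
juju ssh -m controller 0 -- sudo tail -n 200 /var/log/juju/machine-0.log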

juju attach is used to attach a storage instance to a unit.

The command you want to use is charm attach.

You can snap install charm. From charm help attach:

The attach command can also be used to attach docker resources
to Kubernetes charms. When a charm has a docker image resource,
it can be attached by naming the image in the local docker instance:

    charm attach ~user/mykubernetes-charm myresource=ubuntu

The image will be uploaded to the Docker registry associated with the
charm store. It's also possible to attach an image from an external
registry directly without uploading it to the charmstore's registry
by using the prefix "external::"; for example:

    charm attach ~user/mykubernetes-charm myresource=external::ubuntu

After trying the charm attach command we receive the following error:

$ charm attach ui-k8s ui-image=lightui --debug -c unpublished
14:56:08 DEBUG httpbakery logger.go:28 client do GET https://api.jujucharms.com/charmstore/v5/ui-k8s/meta/any?channel=unpublished&include=charm-metadata {
14:56:09 INFO  httpbakery logger.go:33 HTTP response OK (status 404 Not Found)
14:56:09 DEBUG httpbakery logger.go:28 } -> error <nil>
ERROR cannot get "/ui-k8s/meta/any?include=charm-metadata": no matching charm or bundle for cs:ui-k8s
14:56:09 DEBUG cmd supercommand.go:475 error stack: 
no matching charm or bundle for cs:ui-k8s
gopkg.in/juju/charmrepo.v4/csclient/csclient.go:1031: 
gopkg.in/juju/charmrepo.v4/csclient/csclient.go:976: cannot get "/ui-k8s/meta/any?include=charm-metadata"
github.com/juju/charmstore-client/cmd/charm/charmcmd/cmd.go:482: 
github.com/juju/charmstore-client/cmd/charm/charmcmd/attach.go:112: 

Just to make clear whether what we want to achieve is possible: we want to attach a different docker image to a locally deployed charm. That charm is currently not in the charm store. Is this the way to go?

@dominik.f yeah the juju attach command … I think you are correct. See the help on it.

$ juju help attach
Usage: juju attach-resource [options] application name=file

Summary:
Upload a file as a resource for an application.

Global Options:
--debug  (= false)
    equivalent to --show-log --logging-config=<root>=DEBUG
-h, --help  (= false)
    Show help on a command or other topic.
--logging-config (= "")
    specify log levels for modules
--quiet  (= false)
    show no informational output
--show-log  (= false)
    if set, write the log file to stderr
--verbose  (= false)
    show more verbose output

Command Options:
-B, --no-browser-login  (= false)
    Do not use web browser for authentication
-m, --model (= "")
    Model to operate in. Accepts [<controller name>:]<model name>

Details:
This command uploads a file from your local disk to the juju controller to be
used as a resource for an application.

Aliases: attach

Pretty sure @wallyworld is thinking of juju attach-storage :slight_smile:

Now that you mention it’s a locally deployed charm, that changes things. The charm attach command is used to attach an OCI image to a charm that has been pushed to the charm store, so that when the charm is deployed from the store, that image is used.

For a local charm on your filesystem, you use the --resource option at deploy time. Assume the charm defines an image resource called “workload_image” in its metadata, and you have an image tagged “myimage:latest” in your local docker repo. You deploy like so:

juju deploy /path/to/mycharm --resource workload_image=myimage:latest
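
For reference, that resource is something the charm declares in its metadata.yaml. An illustrative stanza (the resource name matches the example above; the description is made up):

resources:
  workload_image:
    type: oci-image
    description: image used for the workload container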

That’s the command I was looking for, we got it working, thanks!

Hey! Sorry for the delay.

I may have forgotten to highlight something. The objective is to attach a new resource to an already existing kubernetes charm (e.g. to apply a hotfix).

When I execute the command you just sent, I’m getting an error saying ERROR cannot add application "ui-k8s": application already exists.

juju deploy deploys a new application, and as such it checks that an application with the same name has not already been deployed. The error you are seeing is because you are trying to deploy the same application twice.

If you want to upgrade an existing application’s charm, you can use the juju upgrade-charm command. You can specify a different OCI image at that point using the --resource arg.
See juju help upgrade-charm.

You will also be able to use the juju attach-resource command to associate a new OCI image with an existing charm without upgrading the charm at the same time. This is still being finalised for k8s models, so it doesn’t quite work yet.

Given the above, you are best off (for now) using the upgrade-charm command with a charm that has been unpacked locally. This is because Juju always treats an upgrade of a local charm as a change, even if the charm source is the same. So you can do:

juju upgrade-charm myapp --path /path/to/charm --resource my_image=imagename

where my_image is the name of the OCI image resource from the charm metadata and imagename is the tag of an OCI image on Docker Hub. If you want to docker build a local image, you will first need to seed it into the image repo on the k8s workload node, e.g. if you are running microk8s, the command will be something like:

docker save imagename | microk8s.ctr -n k8s.io image import -
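
You can then confirm the image is visible to containerd with something like (imagename is the same placeholder as above):

microk8s.ctr -n k8s.io images ls | grep imagename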

Thanks for the reply! It’s very clear :slight_smile:

The juju upgrade-charm vimdb --path ./charms/builds/mariadb-k8s --resource mysql-image=registry.hub.docker.com/library/mariadb:5.5 command apparently works in juju 2.6-rc1-bionic-amd64. watch -c juju status --color shows it fetching the new image, and so on. But it only downloads the image; the pod is not actually updated. Any ideas?

Thanks again!

You don’t need the full registry URL - just “mariadb:5.5” will do, as Docker Hub is the default.
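
i.e. the same command, shortened:

juju upgrade-charm vimdb --path ./charms/builds/mariadb-k8s --resource mysql-image=mariadb:5.5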

The charm itself needs to be written in such a way that it clears its reactive state when the upgrade hook runs, so that it knows to use any new image. Earlier versions of the charm omitted this upgrade behaviour. If you grab the latest charm from the store (~juju/mariadb-k8s) and unpack it locally, or build the latest source, it should work as you want.
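
A rough sketch of the “grab it from the store” route, using the charm pull subcommand of the charm snap installed earlier (the target directory and the vimdb application name are just the ones from your example):

charm pull cs:~juju/mariadb-k8s ./mariadb-k8s
juju upgrade-charm vimdb --path ./mariadb-k8s --resource mysql-image=mariadb:5.5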

@wallyworld Is it normal that I have to execute juju upgrade-charm twice for it to take effect? The first time apparently downloads the image I want, but it doesn’t create a new unit to replace the previous one. Only when I execute it a second time does it actually replace the old unit with a new one. Any clue?

Given your description, the charm appears faulty. Juju will correctly update the deployment pod specification so long as the charm sends a different spec YAML to what was previously set for the application. I have seen cases where the charm fails to do this properly, e.g. no reactive handler for the upgrade-charm hook.
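
A quick way to check whether the spec actually changed is to look at the image recorded on the workload in the model’s namespace, e.g. on microk8s, assuming the namespace matches the model name used earlier:

microk8s.kubectl -n model-name get deployments,statefulsets -o wide   # the IMAGES column shows which tag is live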