Tips for debugging (install) hook failures with Kubernetes charms?

I’m trying to test out Juju on Kubernetes and am struggling to debug a hook failure.

I’ve created a model and deployed an “official” charm as follows:

# juju add-model example
Added 'example' model on microk8s/localhost with credential 'microk8s' for user 'admin'

# juju deploy cs:~juju/redis-k8s-1
Located charm "cs:~juju/redis-k8s-1".               
Deploying charm "cs:~juju/redis-k8s-1".

Querying the status, I see a hook failed:

# juju status                                          
Model   Controller          Cloud/Region        Version  SLA          Timestamp
example  microk8s-localhost  microk8s/localhost  2.8.1    unsupported  13:51:49Z
                                                                       
App        Version  Status  Scale  Charm      Store       Rev  OS          Address  Notes
redis-k8s           error       1  redis-k8s  jujucharms    1  kubernetes           hook failed: "install"
                                                                  
Unit          Workload  Agent  Address  Ports  Message           
redis-k8s/0*  error     idle                   hook failed: "install"

I started looking at the Debugging charm hooks documentation, but it doesn’t seem to cover Kubernetes:

# juju debug-log --lines 1 --no-tail
application-redis-k8s: 15:42:08 ERROR juju.worker.uniter.operation hook "install" (via explicit, bespoke hook script) failed: exit status 1

# juju debug-hooks                  
ERROR Juju command "debug-hooks" not supported on kubernetes models

According to microk8s.inspect my MicroK8s cluster is healthy - any debugging tips, please?
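
In case it helps, here’s how I’ve been poking at the pods directly. (I’m assuming the Juju model maps to a Kubernetes namespace of the same name, and that the operator pod is named `<app>-operator-0` - both seem to hold on my 2.8.1 setup, but labels and naming may differ between Juju versions.)

```shell
# List the pods Juju created for the model (namespace name matches the model name)
microk8s.kubectl get pods -n example

# Describe the unit pod for events such as image-pull or scheduling failures
microk8s.kubectl describe pods -n example -l juju-app=redis-k8s

# Tail the operator pod's logs, which is where hook output ends up
microk8s.kubectl logs -n example redis-k8s-operator-0
```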

I’ve just had the same issue with the next charm I tried (https://jaas.ai/u/omnivector/bitcoind-k8s/5):

# juju deploy cs:~omnivector/bitcoind-k8s-5 --storage blockchain=10G
Located charm "cs:~omnivector/bitcoind-k8s-5".                                   
Deploying charm "cs:~omnivector/bitcoind-k8s-5".

# juju status                                                 
Model  Controller          Cloud/Region        Version  SLA          Timestamp
test1  microk8s-localhost  microk8s/localhost  2.8.1    unsupported  16:21:07Z
                              
App           Version  Status  Scale  Charm         Store       Rev  OS          Address  Notes
bitcoind-k8s           error       1  bitcoind-k8s  jujucharms    5  kubernetes           hook failed: "install"
                                                                         
Unit             Workload  Agent  Address  Ports  Message     
bitcoind-k8s/4*  error     idle                   hook failed: "install"

So I guess it’s looking more and more like a local issue rather than a charm issue, but it’s not clear to me what that issue might be - this is a fairly vanilla MicroK8s setup, as per the Using Juju with MicroK8s guide.
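
For reference, the setup was roughly the standard steps from that guide (I’m reconstructing from memory, so treat the exact add-on names as assumptions):

```shell
# Install MicroK8s and enable the add-ons the guide calls for
sudo snap install microk8s --classic
microk8s.enable dns storage

# Wait for the cluster to settle, then bootstrap a Juju controller onto it
microk8s.status --wait-ready
juju bootstrap microk8s
```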

https://jaas.ai/u/juju/mariadb-k8s/3 seems quite happy though…

App           Version  Status  Scale  Charm         Store       Rev  OS          Address        Notes 
bitcoind-k8s           error       1  bitcoind-k8s  jujucharms    5  kubernetes                 hook failed: "install" 
mariadb-k8s            active      1  mariadb-k8s   jujucharms    3  kubernetes  10.152.183.67   
rabbitmq-k8s           error       1  rabbitmq-k8s  jujucharms    4  kubernetes                 hook failed: "install" 

It’s interesting that you’ve inferred official status. This is a demonstration charm, mostly written to test Juju/K8s integration.

@thumper, @wallyworld - I think this is more evidence that we should move these charms into another namespace. The ~juju account should be reserved for charms that are tightly related to Juju itself.

Good to know, thanks! Nothing on the charm’s store page was giving me demo vibes, for what it’s worth.

I think I would still expect a “demonstration” charm to work though - does that seem unreasonable?

Absolutely not! It’s completely our fault for giving people the wrong impression.

It’s actually not really the charm’s fault either.

The operator image used by 2.8.1 bundles a new version of the charmhelpers library with incompatible behaviour: the image has moved to Python 3.8, which has broken things underneath the charms.

I’m working to identify how to address this.

juju debug-log --include unit-<unit-id> is a good start. juju debug-hooks is not yet supported on Kubernetes models, I believe.
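
To make that concrete, here are a couple of invocations that tend to surface install-hook tracebacks (unit and model names here just match the example above):

```shell
# Replay the full log for only the failing unit, at DEBUG verbosity
juju debug-log --include unit-redis-k8s-0 --level DEBUG --replay --no-tail

# Once the underlying problem is fixed, ask Juju to retry the failed hook
juju resolved redis-k8s/0
```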

Ah brilliant, thanks for the context and advice!