Hi Tom
On the first point maybe I didn’t quite position it correctly.
If you’re new to charms and charmhub.io and you’re interested in getting started:
Let’s take mattermost as an example. It says it’s Kubernetes-ready, but if you do:
juju deploy cs:mattermost
you get
ERROR series "xenial" in a kubernetes model not valid
Why can’t the front page of charmhub.io, when you filter for Kubernetes charms, show some charms you can actually run? For most of them you find you need to clone the charm’s source code and then deploy it from there. It transpires there are actually some in the store you can deploy; you just have to go digging for them.
Regarding postgresql, I’ve done this:
juju upgrade-controller --agent-stream proposed
juju add-model postgresql2
juju deploy cs:~postgresql-charmers/postgresql-k8s postgresql
juju status shows me this:
Model Controller Cloud/Region Version SLA Timestamp
postgresql2 ovh-test ovh-test 2.9-rc2 unsupported 16:54:52Z
App Version Status Scale Charm Store Rev OS Address Notes
postgresql pgcharm:edge waiting 1 postgresql-k8s jujucharms 5 kubernetes 10.3.98.213
Unit Workload Agent Address Ports Message
postgresql/0* active idle 10.2.1.11 5432/TCP Pod configured
This looks like it’s working, given the “Workload active” message, but at the top the status is waiting and the kube pod is actually in a CrashLoopBackOff state:
bugg@toms-laptop:~$ kubectl -n postgresql2 get pods
NAME READY STATUS RESTARTS AGE
modeloperator-68785b8876-n5qkd 1/1 Running 0 3m7s
postgresql-0 0/1 CrashLoopBackOff 3 110s
postgresql-operator-0 1/1 Running 0 2m24s
But it’s not clear from the status that it’s hosed, and juju debug-log doesn’t show the actual cause; the log looks clean:
application-postgresql: 16:51:28 INFO unit.postgresql/0.juju-log Pod resources <<EOM
secrets:
- data: {}
name: charm-secrets
type: Opaque
services:
- name: postgresql-master
spec:
clusterIP: ''
ports:
- name: pgsql
port: 5432
protocol: TCP
selector:
app.kubernetes.io/name: postgresql
role: master
type: NodePort
- name: postgresql-standbys
spec:
clusterIP: ''
ports:
- name: pgsql
port: 5432
protocol: TCP
selector:
app.kubernetes.io/name: postgresql
role: standby
type: NodePort
EOM
application-postgresql: 16:51:28 INFO unit.postgresql/0.juju-log Configuring pod
application-postgresql: 16:51:29 INFO unit.postgresql/0.juju-log Pod configured
application-postgresql: 16:51:29 INFO juju.worker.uniter.operation ran "config-changed" hook (via hook dispatching script: dispatch)
application-postgresql: 16:51:39 INFO juju.worker.caasoperator started pod init on "postgresql/0"
application-postgresql: 16:55:37 INFO juju.worker.uniter.operation ran "update-status" hook (via hook dispatching script: dispatch)
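For anyone following along, the only way I found to get at the real failure was to go to the pod directly, with something like this (namespace and pod name as in the kubectl output above):

```shell
# Inspect the crashing pod directly to see why it restarts.
kubectl -n postgresql2 describe pod postgresql-0
# --previous shows the logs from the last crashed container run,
# which is where the traceback below turned up.
kubectl -n postgresql2 logs postgresql-0 --previous
```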
The actual error I have to find in the pod logs, and it’s related to my other post from yesterday:
2020-12-06 16:54:54,048 INFO: Updating PostgreSQL configuration in /srv/pgconf/12/main/conf.d/juju_charm.conf
Traceback (most recent call last):
File "/usr/local/bin/docker_entrypoint.py", line 23, in <module>
pgcharm.docker_entrypoint()
File "/usr/local/lib/python3.8/dist-packages/pgcharm.py", line 503, in docker_entrypoint
if is_master():
File "/usr/local/lib/python3.8/dist-packages/pgcharm.py", line 412, in is_master
return get_master() == JUJU_POD_NAME
File "/usr/local/lib/python3.8/dist-packages/pgcharm.py", line 421, in get_master
masters = [i.metadata.name for i in api.list_namespaced_pod(NAMESPACE, label_selector=master_selector).items]
File "/usr/local/lib/python3.8/dist-packages/kubernetes/client/api/core_v1_api.py", line 15302, in list_namespaced_pod
return self.list_namespaced_pod_with_http_info(namespace, **kwargs) # noqa: E501
File "/usr/local/lib/python3.8/dist-packages/kubernetes/client/api/core_v1_api.py", line 15413, in list_namespaced_pod_with_http_info
return self.api_client.call_api(
File "/usr/local/lib/python3.8/dist-packages/kubernetes/client/api_client.py", line 348, in call_api
return self.__call_api(resource_path, method,
File "/usr/local/lib/python3.8/dist-packages/kubernetes/client/api_client.py", line 180, in __call_api
response_data = self.request(
File "/usr/local/lib/python3.8/dist-packages/kubernetes/client/api_client.py", line 373, in request
return self.rest_client.GET(url,
File "/usr/local/lib/python3.8/dist-packages/kubernetes/client/rest.py", line 239, in GET
return self.request("GET", url,
File "/usr/local/lib/python3.8/dist-packages/kubernetes/client/rest.py", line 233, in request
raise ApiException(http_resp=r)
kubernetes.client.exceptions.ApiException: (403)
Reason: Forbidden
HTTP response headers: HTTPHeaderDict({'Audit-Id': '82eaba53-a093-40f8-b0a6-bd7c4446bbf5', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'X-Content-Type-Options': 'nosniff', 'Date': 'Sun, 06 Dec 2020 16:54:54 GMT', 'Content-Length': '290'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods is forbidden: User \"system:serviceaccount:postgresql2:default\" cannot list resource \"pods\" in API group \"\" in the namespace \"postgresql2\"","reason":"Forbidden","details":{"kind":"pods"},"code":403}
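The 403 says the namespace’s default service account can’t list pods, so presumably the fix is an RBAC grant for that account. A sketch of what that might look like — the role and binding names here are purely my own invention, not anything from the charm docs:

```shell
# Hypothetical RBAC workaround: allow the model namespace's default
# service account to get/list pods, which is what the traceback's
# 403 complains about. Adjust the namespace to match your model.
kubectl -n postgresql2 create role pod-lister --verb=get,list --resource=pods
kubectl -n postgresql2 create rolebinding pod-lister-binding \
    --role=pod-lister --serviceaccount=postgresql2:default
```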
Now, this is all stuff I’m not sure of, but irrespective of what charm you’ve deployed, surely Juju shouldn’t be showing some form of waiting status for a situation that’s clearly terminal?
Cheers
Tom