Kubernetes examples

Hello again, another question:

Why are half the charms listed on charmhub.io as Kubernetes charms not actually marked as Kubernetes in the store? It makes it hard to find ones that you can actually deploy.

But there are a bunch of Kubeflow examples which you can seemingly deploy from the store, so why not promote them?

Plus the Postgres charm would be epic, but you need a Juju 2.9.0 upgrade, which is something else people need to figure out, and it doesn’t start. But I’m not here to diss the charm, because it brings me on to my last point:

Why aren’t there any links through to the version-controlled source? Because there isn’t much in the way of tutorials, having some way of finding out how these are written in the operator model would be useful. I did find Stub’s Postgres source on Launchpad, so I’ll be borrowing that pattern, thanks Stub.

I’m not really here to grumble, just a bit confused as someone looking to get into K8s charms properly; it’s all just a little weird.

Tom

All of the charms on charmhub.io are pulled from the charmstore, so I suspect the reason you’re not finding charms in the charmstore that are listed on charmhub is that you’re only getting the “promulgated” results in search - have you tried showing “community” results? Do you have a specific example?

In terms of the links through to the version controlled source I filed https://github.com/canonical-web-and-design/charmhub.io/issues/709 a few days ago and I believe it’s in progress currently.

When you say postgresql on k8s doesn’t start, could you provide a bit more detail? Are there errors in the logs for the pod in question?

It works fine for me here:

mthaddon@tenaya:~$ juju add-model postgresql micro
Added 'postgresql' model on micro/localhost with credential 'micro' for user 'admin'
mthaddon@tenaya:~$ juju deploy cs:~postgresql-charmers/postgresql-k8s postgresql
Located charm "postgresql-k8s" in charm-store, revision 5
Deploying "postgresql" from charm-store charm "postgresql-k8s", revision 5 in channel stable
mthaddon@tenaya:~$ # Wait for juju status to settle to this
mthaddon@tenaya:~$ juju status
Model       Controller           Cloud/Region     Version  SLA          Timestamp
postgresql  localhost-localhost  micro/localhost  2.9-rc3  unsupported  17:11:43+01:00

App         Version       Status  Scale  Charm           Store       Rev  OS          Address  Message
postgresql  pgcharm:edge  active      1  postgresql-k8s  charmstore    5  kubernetes           

Unit           Workload  Agent  Address      Ports     Message
postgresql/0*  active    idle   10.1.234.10  5432/TCP  Pod configured

mthaddon@tenaya:~$ nc -vz 10.1.234.10 5432
Connection to 10.1.234.10 5432 port [tcp/postgresql] succeeded!

Hi Tom

On the first point maybe I didn’t quite position it correctly.

If you’re new to charms, charmhub.io or whatever and you’re interested in getting started:

Let’s take mattermost as an example. It says it’s Kubernetes-ready, but if you do:

juju deploy cs:mattermost
you get
ERROR series "xenial" in a kubernetes model not valid
Why can’t the front page of charmhub.io, when you filter for Kubernetes charms, show some charms that you can actually run? For most, you then find you need to clone the charm’s source code and deploy it from there. But it transpires there are actually some in the store you can deploy; you just have to go digging for them.

Regarding postgresql, I’ve done this:

juju upgrade-controller  --agent-stream proposed
juju add-model postgresql2
juju deploy cs:~postgresql-charmers/postgresql-k8s postgresql

juju status shows me this:

Model        Controller  Cloud/Region  Version  SLA          Timestamp
postgresql2  ovh-test    ovh-test      2.9-rc2  unsupported  16:54:52Z

App         Version       Status   Scale  Charm           Store       Rev  OS          Address      Notes
postgresql  pgcharm:edge  waiting      1  postgresql-k8s  jujucharms    5  kubernetes  10.3.98.213

Unit           Workload  Agent  Address    Ports     Message
postgresql/0*  active    idle   10.2.1.11  5432/TCP  Pod configured

This looks like it’s working, due to the “Workload active” message, but at the top the status is waiting and the kube pod is actually in a CrashLoopBackOff state:

bugg@toms-laptop:~$ kubectl -n postgresql2 get pods
NAME                             READY   STATUS             RESTARTS   AGE
modeloperator-68785b8876-n5qkd   1/1     Running            0          3m7s
postgresql-0                     0/1     CrashLoopBackOff   3          110s
postgresql-operator-0            1/1     Running            0          2m24s

But it’s not clear from the status that it’s hosed, and juju debug-log doesn’t show the actual cause; the log looks clean:


application-postgresql: 16:51:28 INFO unit.postgresql/0.juju-log Pod resources <<EOM
secrets:
- data: {}
  name: charm-secrets
  type: Opaque
services:
- name: postgresql-master
  spec:
    clusterIP: ''
    ports:
    - name: pgsql
      port: 5432
      protocol: TCP
    selector:
      app.kubernetes.io/name: postgresql
      role: master
    type: NodePort
- name: postgresql-standbys
  spec:
    clusterIP: ''
    ports:
    - name: pgsql
      port: 5432
      protocol: TCP
    selector:
      app.kubernetes.io/name: postgresql
      role: standby
    type: NodePort

EOM
application-postgresql: 16:51:28 INFO unit.postgresql/0.juju-log Configuring pod
application-postgresql: 16:51:29 INFO unit.postgresql/0.juju-log Pod configured
application-postgresql: 16:51:29 INFO juju.worker.uniter.operation ran "config-changed" hook (via hook dispatching script: dispatch)
application-postgresql: 16:51:39 INFO juju.worker.caasoperator started pod init on "postgresql/0"
application-postgresql: 16:55:37 INFO juju.worker.uniter.operation ran "update-status" hook (via hook dispatching script: dispatch)

The actual error I had to find in the pod logs, and it’s related to my other post from yesterday:

2020-12-06 16:54:54,048     INFO: Updating PostgreSQL configuration in /srv/pgconf/12/main/conf.d/juju_charm.conf
Traceback (most recent call last):
  File "/usr/local/bin/docker_entrypoint.py", line 23, in <module>
    pgcharm.docker_entrypoint()
  File "/usr/local/lib/python3.8/dist-packages/pgcharm.py", line 503, in docker_entrypoint
    if is_master():
  File "/usr/local/lib/python3.8/dist-packages/pgcharm.py", line 412, in is_master
    return get_master() == JUJU_POD_NAME
  File "/usr/local/lib/python3.8/dist-packages/pgcharm.py", line 421, in get_master
    masters = [i.metadata.name for i in api.list_namespaced_pod(NAMESPACE, label_selector=master_selector).items]
  File "/usr/local/lib/python3.8/dist-packages/kubernetes/client/api/core_v1_api.py", line 15302, in list_namespaced_pod
    return self.list_namespaced_pod_with_http_info(namespace, **kwargs)  # noqa: E501
  File "/usr/local/lib/python3.8/dist-packages/kubernetes/client/api/core_v1_api.py", line 15413, in list_namespaced_pod_with_http_info
    return self.api_client.call_api(
  File "/usr/local/lib/python3.8/dist-packages/kubernetes/client/api_client.py", line 348, in call_api
    return self.__call_api(resource_path, method,
  File "/usr/local/lib/python3.8/dist-packages/kubernetes/client/api_client.py", line 180, in __call_api
    response_data = self.request(
  File "/usr/local/lib/python3.8/dist-packages/kubernetes/client/api_client.py", line 373, in request
    return self.rest_client.GET(url,
  File "/usr/local/lib/python3.8/dist-packages/kubernetes/client/rest.py", line 239, in GET
    return self.request("GET", url,
  File "/usr/local/lib/python3.8/dist-packages/kubernetes/client/rest.py", line 233, in request
    raise ApiException(http_resp=r)
kubernetes.client.exceptions.ApiException: (403)
Reason: Forbidden
HTTP response headers: HTTPHeaderDict({'Audit-Id': '82eaba53-a093-40f8-b0a6-bd7c4446bbf5', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'X-Content-Type-Options': 'nosniff', 'Date': 'Sun, 06 Dec 2020 16:54:54 GMT', 'Content-Length': '290'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods is forbidden: User \"system:serviceaccount:postgresql2:default\" cannot list resource \"pods\" in API group \"\" in the namespace \"postgresql2\"","reason":"Forbidden","details":{"kind":"pods"},"code":403}
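
For anyone following along, the traceback above came from the crashing pod’s logs rather than juju debug-log. Something like this pulls them (a sketch; the namespace and pod name are taken from the kubectl get pods output earlier, and it obviously needs a cluster to run against):

```shell
# Fetch logs from the crashing pod; --previous shows output from the last
# terminated container, which is what you want during a CrashLoopBackOff.
kubectl -n postgresql2 logs postgresql-0 --previous
```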

Now, this is all stuff I’m not sure of, but irrespective of what charm you’ve deployed, surely Juju shouldn’t be showing a mere “waiting” status for a situation that’s clearly terminal?

Cheers

Tom

Hey m8. You are not alone in this frustration, which I personally trace to a few fundamental Juju issues. Maybe you would agree that:

  1. It is difficult to find up-to-date, accurate, consistent and easy-to-digest documentation on Juju and its charms. I have been with Juju for many years and still suffer from this almost every day. This problem is very fixable.

  2. The lack of a QA process for charms makes it very difficult to find and choose the right charm for your needs. This ties in with your post and has also been discussed for many years. I proposed a “reward/perk” system to promote charms with functional QA (CI/CD) pipelines and docs, plus a voting system to help the community weed out dead charms and promote best practices. There are other approaches too, all very much possible to implement.

  3. The lack of best practices for implementing charms, in conjunction with a very fast-paced Juju development cycle that introduces more and more features. I would like to see at least a temporary focus on establishing a base of easy-to-follow patterns for implementing new charms. This would help beginners and experts collaborate better as well, strengthening the whole Juju ecosystem, and possibly also help people like you, learning how to work with Juju, to get started.

I know this reply doesn’t help you in your specific question, but I thought to provide context to the more fundamental aspects of your problem.

So I was pondering similar stuff when I was out for a drive today.

I was thinking back to Ubuntu and why I started using it all those years ago. The reason was because I could stick the CD in the drive and the majority of the hardware would work, Wifi cards etc worked, which for Linux back in the day was a royal PITA.

So why do we want to use Juju for K8s stuff? Because building K8s platforms is a PITA too.

If you think of Juju as an operating system and charms as debs in an apt repository, how do we achieve the level of stability you’d expect from packages for an operating system? But also, how do you achieve the stability you’d expect from the kernel?

Part of it, though, is also about how it looks to first-time users after a whole bunch of PR over the last few weeks, and, well, it doesn’t look great.

But! Juju has the ability to really help define how you build applications on a stack like Kubernetes in a much saner way than sticking together Helm charts and stuff.

Anyway, I’m not really here for a grumble. It’s been a while since I tested Kubernetes charms and I have a bunch on my todo list, so I want to help; I’m just surprised by what I’ve seen, or more to the point, not seen.

== Later addition ==

One thing I will say: I knocked up a quick test of a single pod using the operator pattern, copying chunks from postgres, and this is why I care. Outside of having to learn some new stuff that’s not particularly well documented, the launch, deployment and connection to my pod is the easiest thing as an administrator. This isn’t really any different to my debs-in-Ubuntu point: debs aren’t easy to build, but they should be easy to install. Juju needs to be like this; make stuff easy to deploy and the builders will come.

Love the new operator pattern. Let’s make it easier for non-developers to deploy software.

I’ve filed https://github.com/canonical-web-and-design/charmhub.io/issues/715 for this and it’s been assigned already, so hopefully it can be addressed soon.

Thanks Tom,

It’s hard to make these points without sounding like I’m having a mega grumble. I’m not; I fully appreciate the engineering complexity behind all of this, both in the tooling and the store. I’m just trying to give honest feedback from a first look at the new stuff.

Tom

Not at all, always good to get feedback and hopefully we can use it to help improve things overall. Much better to get feedback pointing out problems than not know about the problems :slightly_smiling_face:

I certainly appreciate your response and helping us find the pain points and improve the process. There was, indeed, a lot of stuff going on concurrently (a new web API using a different internal paradigm, importing old charms into that process, and k8s interactions, all going live at the same time), and that leads to some of these pain points as we finish things up. But we appreciate concrete feedback from users of “this piece was harder than it should be”.

I think the error condition you ran into is something that we should be cognizant of. I don’t think we currently poll the k8s api for CrashLoop, etc, which is definitely something that we would want to surface in ‘juju status’.

The particular error you encountered looks like something trying to access the K8s API directly with a credential that doesn’t have the right permissions. It’s a bit odd that we wouldn’t see that on other k8s clusters, but it’s probably a case of your k8s being configured more strictly than the ones we have tested on. (Some particular permission that is allowed by default on our clusters and denied by default on yours; in this particular case, it looks like listing what pods are running.)

Thanks @jameinel. I think there’s certainly a case for surfacing more k8s feedback into juju status, but obviously that’s a design choice you lot have to make.

Regarding the Kubernetes API warnings, I’m trying to get the RBAC stuff figured out over here. Once I work out what needs to be applied where, hopefully it’s something worth documenting for people firing up Juju on different K8s versions so they can get past this issue, because, like with the crashloop, nothing surfaces in Juju, but it’s clearly pretty terminal if the postgres charm can’t set things up properly.
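
For reference, a minimal sketch of the grant the 403 earlier in the thread says is missing. The role and rolebinding names here are made up; the namespace and service account come straight from the error message, and whether this is the right fix for your cluster is an open question:

```shell
# A sketch, not a verified fix: grant the default service account in the
# postgresql2 namespace permission to list pods, which the 403 error says
# it lacks. "pod-lister" is an arbitrary name.
kubectl -n postgresql2 create role pod-lister --verb=list --resource=pods
kubectl -n postgresql2 create rolebinding pod-lister \
  --role=pod-lister --serviceaccount=postgresql2:default
```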

Can I suggest that stuff like this is even worse:

I can deploy prometheus and grafana from the store; it says there is a relation between them, but there isn’t. It says build it from source, but I have no idea where the source is, because there isn’t a link from the charmstore to the repository where the source is kept.

Item 1 of the Manifesto you folks developed:

Source required

We make operator source available so that everybody can understand exactly what happens on their systems.

I found it here: https://github.com/canonical/prometheus-operator

but come on, folks, you aren’t making life easy for non-Canonical employees.

Seems that it’s a changed relation in Grafana? But the readme doesn’t seem to tell you what to do about the local deployment:

https://github.com/canonical/grafana-operator/issues/8

It’s surprising to me that the bugs url and homepage aren’t showing up on the new CharmHub. I’ll follow up w/ the web folks to see whether that is an oversight, or whether they’ve got something in the works to address links to source in a different way. Thank you for pointing that out.

I believe the lack of a relation is an error, which the charmers working on those two charms will fix in a near-future release. For now, you can try deploying the -0 revision of each charm, which should have the relation. (I’m working on a tutorial involving both of those charms and have been using that as a workaround while I wait for the fix.)

~ PeteVG

No problem, Pete. I know I sound narky, raising lots of problems, but they’re honestly just the things I’m bumping into as I go along.

Anyway, I gave a demo to our developer meetup at JPL tonight, and it seemed to go down reasonably well; hopefully you’ll have some new users soon!

This is being tracked as https://github.com/canonical-web-and-design/charmhub.io/issues/709, fwiw.
