Supported use cases for Kubernetes on Openstack + Ceph storage

I’ve used Juju to deploy an Openstack cloud, and then I’ve used the same controller to deploy Kubernetes inside the Openstack cloud.

Since Openstack is already using Ceph for storage, I’m looking to use the same Ceph cluster as a backend for my K8s persistent volumes.

In my current setup, Openstack lives in its own model, including Ceph, while K8s lives in a separate model on the same controller.

I’ve noticed that if I offer (share) ceph-mon from the Openstack model, consume it in the K8s model, and then relate kubernetes-master to ceph-mon, I get an "Unsupported CMR relation" error on ceph-mon.
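For reference, the sequence involved looks roughly like this (the model names and the ceph-mon endpoint here are assumptions from my setup, so adjust to yours):

```shell
# Rough sketch of the cross-model steps (model/offer/endpoint names are assumed):
juju offer openstack.ceph-mon:client           # offer ceph-mon's client endpoint from the Openstack model
juju consume admin/openstack.ceph-mon -m k8s   # consume the offer in the K8s model
juju relate kubernetes-master ceph-mon -m k8s  # this is where the "Unsupported CMR relation" error appears
```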

Is this error due to a limitation with ceph-mon and Cross Model Relations, or is it because I’m trying to “multi-tenant” the Ceph cluster between different/unrelated applications?

I’m interested to see how "grown ups" are managing their models, applications and integrations between them.

Shell output of ceph-mon and kubernetes-master

Hi,

We’re targeting the same kind of setup (KaaS, aka Kube as a Service, inside an Openstack tenant) and you have two options (among others):

  • either you use Ceph from Kube directly; for that, the best approach might be to build three models — one for Openstack, one for Ceph, and one for Kube — and use ceph-proxy to “plug” your Openstack/Kube into your Ceph
  • or, for a better and easier integration inside Openstack, you should use the openstack-integrator charm and rely on Cinder (which will in turn rely on Ceph as its backend) for your volumes inside Kube: Openstack-integrator charm | Ubuntu
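For the second option, the wiring is roughly the following (the relation endpoints here are assumptions, so verify them against the charm documentation):

```shell
# Sketch of option 2 (endpoint names are assumptions, check the charm docs):
juju deploy openstack-integrator --trust -m kube  # --trust lets the charm access the cloud credentials
juju relate openstack-integrator kubernetes-master
juju relate openstack-integrator kubernetes-worker
```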

I would personally recommend the second option unless you want specific Ceph features that you couldn’t find in Cinder (I don’t see any, TBH, but you may have your reasons).

So I thought it made the most logical sense to use the Openstack-integrator route.

One of the prerequisites for openstack-integrator is a working Octavia deployment in the Openstack cloud, and to be honest trying to get Octavia to play nice with an existing deployed cloud in my sandbox has been really rough.

Hi,

I had many difficulties making Octavia work properly.
Most of them were due to bugs either in the charm itself or, more frequently, in the underlying components used by Octavia.
Most of them are fixed now, but I’m still facing the main issue with the OVN driver not working.

Let me explain: I’m using the focal-ussuri combo for my Openstack deployment, with OVN as the networking layer.
By default, in this scenario, Octavia is deployed with both backend drivers: Amphora and OVN.
The Amphora backend works as expected; I don’t find it great, but it works.
On the other hand, there is an OVN Octavia backend that sounds very promising, except that this driver doesn’t work at the moment because of a bug in the OVN package, apparently.
This is being fixed right now and, if I’m not mistaken, it is tracked here: Bug #1917475 “RBAC Permissions too strict for Port_Binding table…” : Bugs : ovn package : Ubuntu

The workaround I found is to simply disable this driver once Octavia is deployed.
To do that, in /etc/octavia/octavia.conf, simply remove the lines mentioning OVN and restart the apache2 service on the Octavia units.
You should then have a working Octavia service and be able to spawn load balancers.
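If it helps, here is a minimal sketch of that edit. It assumes every OVN-related entry in octavia.conf sits on a line that literally mentions “ovn”, so double-check the file before deleting anything:

```shell
# Assumption: all OVN entries live on lines containing "ovn" (case-insensitive).
sudo cp /etc/octavia/octavia.conf /etc/octavia/octavia.conf.bak  # keep a backup first
sudo sed -i '/ovn/Id' /etc/octavia/octavia.conf                  # GNU sed: delete matching lines
sudo systemctl restart apache2                                   # restart Octavia's API behind Apache
```

Run this on each Octavia unit (e.g. via `juju ssh octavia/N`).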

This is being fixed right now and if I’m not wrong, it is tracked here : Bug #1917475 “RBAC Permissions too strict for Port_Binding table…” : Bugs : ovn package : Ubuntu

We’ve found quite a few rough edges with the OVN provider at Ussuri; the focus this cycle is on getting those resolved, both in the latest OpenStack release and with appropriate backports of fixes to Ussuri.

The workaround I found is to simply disable this driver once Octavia is deployed.
To do that, in /etc/octavia/octavia.conf, simply remove the lines mentioning OVN and restart the apache2 service on the Octavia units.

I also think that the OVN provider is only enabled when the ‘ovsdb-cms’ relation between octavia and ovn-central is present in the model; removing that relation should disable the provider.
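If that’s right, something like this should do it (the endpoint names here are my assumption, so verify them with `juju status --relations` in your model first):

```shell
# Assumed endpoint names; check `juju status --relations` before running this.
juju remove-relation octavia:ovsdb-cms ovn-central:ovsdb-cms
```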
