Failed custom cinder backend

Hi,

I’m trying to create a custom storage backend to connect to my storage array. I’ve used the cookiecutter template intended for developing custom backends: here

This is my version of the backend: Charm

However, once deployed I’m getting the following error:

2022-09-28 00:49:02 WARNING unit.cinder-hitachig350/2.install logger.go:60   File "./src/charm.py", line 70, in <module>
2022-09-28 00:49:02 WARNING unit.cinder-hitachig350/2.install logger.go:60     main(get_charm_class_for_release())
2022-09-28 00:49:02 WARNING unit.cinder-hitachig350/2.install logger.go:60   File "/var/lib/juju/agents/unit-cinder-hitachig350-2/charm/venv/ops_openstack/core.py", line 343, in get_charm_class_for_release
2022-09-28 00:49:02 WARNING unit.cinder-hitachig350/2.install logger.go:60     return get_charm_class(release=ceph_release, all_releases=releases)
2022-09-28 00:49:02 WARNING unit.cinder-hitachig350/2.install logger.go:60   File "/var/lib/juju/agents/unit-cinder-hitachig350-2/charm/venv/ops_openstack/core.py", line 299, in get_charm_class
2022-09-28 00:49:02 WARNING unit.cinder-hitachig350/2.install logger.go:60     all_releases.index(known_releases[0])):
2022-09-28 00:49:02 WARNING unit.cinder-hitachig350/2.install logger.go:60 ValueError: 'yoga' is not in list
2022-09-28 00:49:02 ERROR juju.worker.uniter.operation runhook.go:140 hook "install" (via hook dispatching script: dispatch) failed: exit status 1
2022-09-28 00:49:02 INFO juju.worker.uniter resolver.go:145 awaiting error resolution for "install" hook
2022-09-28 00:50:24 INFO juju.worker.uniter resolver.go:145 awaiting error resolution for "install" hook

Do you have any ideas as to why I’m getting that error? I’m trying to deploy OpenStack Yoga on Ubuntu 20.04 (Focal).

Thank you in advance.

Regards,

C.

Hello,

Can you try changing ‘yoga’ to ‘quincy’ as the release name? Alternatively, you could change the call from get_charm_class_for_release to get_charm_class (without modifying the release name).

Hi,

I changed the release to quincy and got this error message:

2022-09-28 15:28:25 WARNING unit.cinder-g350h/2.install logger.go:60 Traceback (most recent call last):
2022-09-28 15:28:25 WARNING unit.cinder-g350h/2.install logger.go:60   File "./src/charm.py", line 70, in <module>
2022-09-28 15:28:25 WARNING unit.cinder-g350h/2.install logger.go:60     main(get_charm_class_for_release())
2022-09-28 15:28:25 WARNING unit.cinder-g350h/2.install logger.go:60   File "/var/lib/juju/agents/unit-cinder-g350h-2/charm/venv/ops_openstack/core.py", line 343, in get_charm_class_for_release
2022-09-28 15:28:25 WARNING unit.cinder-g350h/2.install logger.go:60     return get_charm_class(release=ceph_release, all_releases=releases)
2022-09-28 15:28:25 WARNING unit.cinder-g350h/2.install logger.go:60   File "/var/lib/juju/agents/unit-cinder-g350h-2/charm/venv/ops_openstack/core.py", line 300, in get_charm_class
2022-09-28 15:28:25 WARNING unit.cinder-g350h/2.install logger.go:60     raise RuntimeError(
2022-09-28 15:28:25 WARNING unit.cinder-g350h/2.install logger.go:60 RuntimeError: Release octopus is not supported by this charm. Earliest support is quincy release
2022-09-28 15:28:25 ERROR juju.worker.uniter.operation runhook.go:140 hook "install" (via hook dispatching script: dispatch) failed: exit status 1
2022-09-28 15:28:25 INFO juju.worker.uniter resolver.go:145 awaiting error resolution for "install" hook
2022-09-28 15:28:45 INFO juju.worker.uniter resolver.go:145 awaiting error resolution for "install" hook

Ah, right. You need to modify the metadata.yaml file to include the supported series as well (jammy, in your case).

Hi, so I managed to get it working, but now I have an issue with how the charm works.

The cinder control services all run in LXD containers, and if I add a relation between my charm and the cinder application using:

juju add-relation cinder:storage-backend cinder-g350h:storage-backend

my configuration doesn’t end up where it’s needed. I need the compute nodes to also behave like storage nodes, i.e. my cinder storage services (cinder-volume) must run on the compute nodes for them to get configured correctly. Look at this service list:

+------------------+-------------------+------+---------+-------+----------------------------+
| Binary           | Host              | Zone | Status  | State | Updated At                 |
+------------------+-------------------+------+---------+-------+----------------------------+
| cinder-volume    | compute03@VSPG350 | nova | enabled | up    | 2022-09-28T16:27:10.000000 |
| cinder-scheduler | controller        | nova | enabled | up    | 2022-09-28T16:27:10.000000 |
| cinder-volume    | compute04@VSPG350 | nova | enabled | up    | 2022-09-28T16:27:03.000000 |
| cinder-volume    | compute01@VSPG350 | nova | enabled | up    | 2022-09-28T16:27:11.000000 |
| cinder-volume    | compute05@VSPG350 | nova | enabled | up    | 2022-09-28T16:27:07.000000 |
| cinder-volume    | compute02@VSPG350 | nova | enabled | up    | 2022-09-28T16:27:06.000000 |
| cinder-volume    | compute06@VSPG350 | nova | enabled | up    | 2022-09-28T16:27:03.000000 |
+------------------+-------------------+------+---------+-------+----------------------------+

Is there any way to achieve that?

I’m not entirely sure, to be honest. When you mention that the configuration “doesn’t work”, what do you mean exactly? Is the configuration propagated correctly to the cinder units? I think what you’re trying to achieve should be doable at deploy time via juju, but I need more details about what isn’t working.

What I mean is that everything from the Canonical point of view works just fine. The real issue is that my cinder driver is deployed and configured to use FC interfaces and to attach volumes via FC to the HBA ports on the compute nodes.

This is the topology: topology

As you can see, the control nodes can run the cinder-scheduler and cinder-api services without issues, but the cinder-volume service needs to run on the compute nodes, since cinder-volume is the service that connects to the storage array and then attaches the LUN to the host. Is it possible to achieve that using juju? I can do it by hand, so I’m guessing it should be doable.