Custom Cinder backend

Hi,

I am planning on using a centralized storage array with my OpenStack deployment. I’ve done this kind of deployment in the past and configured the cinder.conf files by hand, so I know what I need to put in that file and which .deb package I need to install on every storage node. I’d like to know how I should proceed to do the same thing with Juju, now that I’m deploying a charmed OpenStack. Do I need to write a charm that configures the cinder.conf file and puts my storage backend configuration there? If so, do you know where I could start looking for information? I have deployed an OpenStack cloud with Ceph using charms written by others, but I’ve never created my own. I also went through the charm store and didn’t find one for my storage vendor.

Hi there. What storage backend are you using? See Develop Cinder storage driver charm in the OpenStack Charms documentation.

Hi, we are using a Hitachi Vantara storage array over Fibre Channel. I’ll look at the info provided.

Sweet. I am very interested in any feedback you may have on the cited documentation resource.

Hey,

I developed the charm following the document you provided, but something isn’t working. When I run juju deploy, I get the following error:

2022-05-20 20:36:34 INFO juju.worker.uniter resolver.go:150 awaiting error resolution for "install" hook
2022-05-20 20:36:34 WARNING unit.cinder-g350/1.install logger.go:60 Traceback (most recent call last):
2022-05-20 20:36:34 WARNING unit.cinder-g350/1.install logger.go:60   File "./src/charm.py", line 26, in <module>
2022-05-20 20:36:34 WARNING unit.cinder-g350/1.install logger.go:60     class Cinderg350Charm(OSBaseCharm):
2022-05-20 20:36:34 WARNING unit.cinder-g350/1.install logger.go:60 NameError: name 'OSBaseCharm' is not defined
2022-05-20 20:36:34 ERROR juju.worker.uniter.operation runhook.go:146 hook "install" (via hook dispatching script: dispatch) failed: exit status 1
2022-05-20 20:36:34 INFO juju.worker.uniter resolver.go:150 awaiting error resolution for "install" hook
2022-05-20 20:37:57 INFO juju.worker.uniter resolver.go:150 awaiting error resolution for "install" hook
2022-05-20 20:37:57 WARNING unit.cinder-g350/1.install logger.go:60 Traceback (most recent call last):
2022-05-20 20:37:57 WARNING unit.cinder-g350/1.install logger.go:60   File "./src/charm.py", line 26, in <module>
2022-05-20 20:37:57 WARNING unit.cinder-g350/1.install logger.go:60     class Cinderg350Charm(OSBaseCharm):
2022-05-20 20:37:57 WARNING unit.cinder-g350/1.install logger.go:60 NameError: name 'OSBaseCharm' is not defined
2022-05-20 20:37:57 ERROR juju.worker.uniter.operation runhook.go:146 hook "install" (via hook dispatching script: dispatch) failed: exit status 1
2022-05-20 20:37:57 INFO juju.worker.uniter resolver.go:150 awaiting error resolution for "install" hook

Any ideas as to what is going on?

Hey Carlos,

I think I have an idea of what’s going on…

The code generated from the current cookiecutter template renders something similar to the following:

...
from ops_openstack.plugins.classes import CinderStoragePluginCharm
...
class Cinderg350Charm(OSBaseCharm):
...

and it should be generating something similar to:

...
from ops_openstack.plugins.classes import CinderStoragePluginCharm
...
class Cinderg350Charm(CinderStoragePluginCharm):
...

The problem is that OSBaseCharm isn’t imported; the class should use CinderStoragePluginCharm instead. I had thought this was resolved, but on revisiting I see that there’s an outstanding PR to fix it, which I’ll go look at right away. Once that lands, you would need to regenerate the charm from the template, but hopefully that shouldn’t be too much work.

Alternatively, you can just switch the class itself over and change OSBaseCharm to CinderStoragePluginCharm. There are a few extra goodies that come with the aforementioned PR (such as mandatory config handling), but just changing the parent class and rebuilding should probably get it working for you.
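For illustration, a minimal version of the fixed charm class might look roughly like this. It is only a sketch: the class attributes (PACKAGES, MANDATORY_CONFIG, stateless, active_active) and the cinder_configuration method are modelled on other Cinder storage driver charms, so adjust them to whatever the template actually generates for your backend.

# src/charm.py: minimal sketch, not the file the template generates
from ops.main import main
from ops_openstack.plugins.classes import CinderStoragePluginCharm


class Cinderg350Charm(CinderStoragePluginCharm):
    """Subordinate charm that feeds a backend section into cinder.conf."""

    # Assumed attributes, modelled on other Cinder storage driver charms;
    # adjust to match your backend and the generated code.
    PACKAGES = []          # extra packages the Hitachi driver needs on the unit
    MANDATORY_CONFIG = []  # config options that must be set before deploying
    stateless = True
    active_active = False

    def cinder_configuration(self, config):
        # Return the (key, value) pairs to render under this backend's
        # section of /etc/cinder/cinder.conf; the Hitachi driver options
        # for your array would be added here.
        return [
            ('volume_backend_name', 'g350'),
        ]


if __name__ == '__main__':
    main(Cinderg350Charm)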

If you have the code in a shared repository (like github or something), I’m happy to take a look and help.

Hi,

I have made the change you suggested and it finally deploys successfully. However, the option I used to test out the charm didn’t get applied. This is the configuration I’m using (on GitHub):

G350 Charm

However, when I look at cinder.conf I only see this:

[cinder-g350]
volume_backend_name = g350

And I don’t see the hitachi_pool option. Do you know what I’m doing wrong?

Hi, I tried with the new cookiecutter template, but now the error seems to be different:

2022-05-23 20:14:35 ERROR juju.worker.uniter.operation runhook.go:146 hook "install" (via hook dispatching script: dispatch) failed: exit status 1
2022-05-23 20:14:35 INFO juju.worker.uniter resolver.go:150 awaiting error resolution for "install" hook
2022-05-23 20:16:02 INFO juju.worker.uniter resolver.go:150 awaiting error resolution for "install" hook
2022-05-23 20:16:03 WARNING unit.cinder-hds350/0.install logger.go:60 Traceback (most recent call last):
2022-05-23 20:16:03 WARNING unit.cinder-hds350/0.install logger.go:60   File "./src/charm.py", line 61, in <module>
2022-05-23 20:16:03 WARNING unit.cinder-hds350/0.install logger.go:60     main(get_charm_class_for_release())
2022-05-23 20:16:03 WARNING unit.cinder-hds350/0.install logger.go:60   File "/var/lib/juju/agents/unit-cinder-hds350-0/charm/venv/ops_openstack/core.py", line 342, in get_charm_class_for_release
2022-05-23 20:16:03 WARNING unit.cinder-hds350/0.install logger.go:60     return get_charm_class(release=ceph_release, all_releases=releases)
2022-05-23 20:16:03 WARNING unit.cinder-hds350/0.install logger.go:60   File "/var/lib/juju/agents/unit-cinder-hds350-0/charm/venv/ops_openstack/core.py", line 298, in get_charm_class
2022-05-23 20:16:03 WARNING unit.cinder-hds350/0.install logger.go:60     all_releases.index(known_releases[0])):

It doesn’t matter which release I use (wallaby, ussuri, victoria, etc.), it always throws that error. Do you know why, and why it’s making a reference to Ceph when I’m trying to use a different backend?

So I’ve gone and taken a look at your repository and it had a .charm file included in it (you might want to add *.charm to .gitignore - I’ll make a pull request for that).

I deployed the .charm file that was uploaded to your repository and I did not see the get_charm_class_for_release error you pasted here. The source code in your linked repository doesn’t use get_charm_class_for_release (though I note that the cookiecutter template does, so I’ll give that a go as well).

As for why the hitachi_pool option is not passed: it comes down to the truthiness of the check where you fill in the options list.

You have:

    raw_options = [
        ('volume_backend_name', volume_backend_name),
        ('hitachi_pool', cget('hitachi_pool')),
    ]

    options = [(x, y) for x, y in raw_options if y]

The hitachi_pool option is an integer whose default value is 0. Since the lookup of the current config value, cget('hitachi_pool'), evaluates to 0, it is falsey, which causes the if y part of the list comprehension to skip over the hitachi_pool option.
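To make that concrete, here’s a small standalone reproduction (the values are the ones from your config; the is not None variant is just one possible fix and assumes an unset option would come back as None rather than 0):

# Why the option disappears: the default hitachi_pool value is the integer 0,
# which is falsey, so the `if y` filter drops it.
raw_options = [
    ('volume_backend_name', 'g350'),
    ('hitachi_pool', 0),   # the default value
]

options = [(x, y) for x, y in raw_options if y]
print(options)   # [('volume_backend_name', 'g350')] -- hitachi_pool is gone

# If falsey-but-valid values (such as pool 0) should still be rendered, one
# alternative is to filter on None instead of truthiness -- assuming unset
# options are represented as None rather than 0.
options = [(x, y) for x, y in raw_options if y is not None]
print(options)   # both options kept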

Indeed, in my deployment, when I checked the relation data with the default configuration, hitachi_pool was not being passed:

$ juju show-unit cinder/0                                                                                                                                               
cinder/0:                                                                                                                                                                                                          
  workload-version: 20.0.0                                                                                                                                                                                         
  machine: "18"                                                                                                                                                                                                    
  ...                                                                                                                                                               
  - relation-id: 0                                                                                                                                                                                                 
    endpoint: storage-backend                                                                                                                                                                                      
    related-endpoint: storage-backend                                                                                                                                                                              
    application-data: {}                                                                                                                                                                                           
    related-units:                                                                                                                                                                                                 
      cinder-g350/0:                                                                                                                                                                                               
        in-scope: true                                                                                   
        data:                                                                                            
          active_active: "False"                                                                                                                                                                                   
          backend_name: cinder-g350                                                                                                                                                                                
          egress-subnets: 10.5.3.88/32                                                                                                                                                                             
          ingress-address: 10.5.3.88                                                                     
          private-address: 10.5.3.88                                                                     
          stateless: "False"                                                                             
          subordinate_configuration: '{"cinder": {"/etc/cinder/cinder.conf": {"sections":                                                                                                                          
            {"cinder-g350": [["volume_backend_name", "g350"]]}}}}'                                
  ...                                                                          

Once I set hitachi_pool to a different value, it works fine:

$ juju config cinder-g350 hitachi_pool=1
$ juju show-unit cinder/0
cinder/0:                                                                                                
  workload-version: 20.0.0    
  machine: "18"                  
  ...
  - relation-id: 0
    endpoint: storage-backend
    related-endpoint: storage-backend
    application-data: {}
    related-units:
      cinder-g350/0:
        in-scope: true
        data:
          active_active: "False"
          backend_name: cinder-g350
          egress-subnets: 10.5.3.88/32
          ingress-address: 10.5.3.88
          private-address: 10.5.3.88
          stateless: "False"
          subordinate_configuration: '{"cinder": {"/etc/cinder/cinder.conf": {"sections":
            {"cinder-g350": [["volume_backend_name", "g350"], ["hitachi_pool", 1]]}}}}'
  ...

I can also see it rendered correctly in the cinder.conf file. Hope this helps clarify things?
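For reference, with hitachi_pool set to 1 the section rendered into /etc/cinder/cinder.conf on the cinder units should look roughly like this:

[cinder-g350]
volume_backend_name = g350
hitachi_pool = 1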