I took a look at your repository and noticed that a .charm file is included in it (you might want to add *.charm to .gitignore; I’ll make a pull request for that).
I deployed the .charm file from your repository and did not see the get_charm_class_for_release error you pasted here. The source code of your linked repository doesn’t use get_charm_class_for_release (though I note that the cookiecutter does, so I’ll give that a go as well).
As for why the hitachi_pool option is not passed: it is due to the truthiness of the check you use when filling in the options list.
You have:
raw_options = [
    ('volume_backend_name', volume_backend_name),
    ('hitachi_pool', cget('hitachi_pool')),
]
options = [(x, y) for x, y in raw_options if y]
The hitachi_pool option is an integer whose default value is 0. Since the lookup of the current config value, cget('hitachi_pool'), evaluates to 0, it is falsey, so the "if y" filter in the list comprehension skips over the hitachi_pool option.
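To make the behaviour concrete, here is a minimal sketch of that filtering (the values are illustrative, not taken from your deployment). It also shows one possible fix, filtering on "is not None" instead of truthiness, so that a legitimate 0 survives while unset values are still dropped:

```python
# Illustrative values: 'g350' and 0 stand in for the real config lookups.
raw_options = [
    ('volume_backend_name', 'g350'),
    ('hitachi_pool', 0),  # integer default; falsey, so it gets filtered out
]

# The current check drops any falsey value, including the integer 0.
options = [(x, y) for x, y in raw_options if y]
print(options)  # [('volume_backend_name', 'g350')] -- hitachi_pool is gone

# A possible alternative: keep 0, drop only unset (None) values.
options = [(x, y) for x, y in raw_options if y is not None]
print(options)  # both options are kept
```

Whether that alternative is appropriate depends on whether 0 is ever a meaningful pool value for your backend, so treat it as a sketch rather than a drop-in patch.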
Indeed, in my deployment, when I checked the configuration and the relation data with the default configuration, I saw that hitachi_pool was not being passed:
$ juju show-unit cinder/0
cinder/0:
workload-version: 20.0.0
machine: "18"
...
- relation-id: 0
endpoint: storage-backend
related-endpoint: storage-backend
application-data: {}
related-units:
cinder-g350/0:
in-scope: true
data:
active_active: "False"
backend_name: cinder-g350
egress-subnets: 10.5.3.88/32
ingress-address: 10.5.3.88
private-address: 10.5.3.88
stateless: "False"
subordinate_configuration: '{"cinder": {"/etc/cinder/cinder.conf": {"sections":
{"cinder-g350": [["volume_backend_name", "g350"]]}}}}'
...
Once I set hitachi_pool to a non-zero value, it works fine:
$ juju config cinder-g350 hitachi_pool=1
$ juju show-unit cinder/0
cinder/0:
workload-version: 20.0.0
machine: "18"
...
- relation-id: 0
endpoint: storage-backend
related-endpoint: storage-backend
application-data: {}
related-units:
cinder-g350/0:
in-scope: true
data:
active_active: "False"
backend_name: cinder-g350
egress-subnets: 10.5.3.88/32
ingress-address: 10.5.3.88
private-address: 10.5.3.88
stateless: "False"
subordinate_configuration: '{"cinder": {"/etc/cinder/cinder.conf": {"sections":
{"cinder-g350": [["volume_backend_name", "g350"], ["hitachi_pool", 1]]}}}}'
...
I can also see it being rendered correctly in the cinder.conf file. Hope this helps clarify things!