OpenStack storage

Hello everyone! I am trying to install OpenStack Base in my lab on three HP G8 servers, each with four disks: 2x300 GB in RAID 1 (/dev/sda) for the OS installation, and 2x600 GB (/dev/sdb, /dev/sdc) for OpenStack storage. Before deploying, the only thing I changed was “osd-devices:” in openstack-base/bundle.yaml: osd-devices: &osd-devices /dev/sdb /dev/sdc
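
For reference, one way to confirm that both devices actually reached the ceph-osd units (a minimal check, assuming the application keeps its default bundle name, ceph-osd):

juju config ceph-osd osd-devices               # should print: /dev/sdb /dev/sdc
juju ssh ceph-osd/0 sudo ceph-volume lvm list  # lists the OSDs created on each block device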

After a reinstallation (due to a radosgw issue), everything in juju status is green and active. But in the OpenStack Horizon dashboard, only the system disks are shown as local storage on the Hypervisors page (screenshot: Hypervisors - OpenStack Dashboard).

Model      Controller       Cloud/Region  Version  SLA          Timestamp
openstack  juju-controller  maas/default  2.9.15   unsupported  00:31:50+03:00

App                     Version  Status  Scale  Charm                   Store       Channel  Rev  OS      Message
ceph-mon                16.2.4   active      3  ceph-mon                charmstore  stable    55  ubuntu  Unit is ready and clustered
ceph-osd                16.2.4   active      3  ceph-osd                charmstore  stable   310  ubuntu  Unit is ready (2 OSD)
ceph-radosgw            16.2.4   active      1  ceph-radosgw            charmstore  stable   296  ubuntu  Unit is ready
cinder                  18.0.0   active      1  cinder                  charmstore  stable   310  ubuntu  Unit is ready
cinder-ceph             18.0.0   active      1  cinder-ceph             charmstore  stable   262  ubuntu  Unit is ready
cinder-mysql-router     8.0.26   active      1  mysql-router            charmstore  stable     8  ubuntu  Unit is ready
dashboard-mysql-router  8.0.26   active      1  mysql-router            charmstore  stable     8  ubuntu  Unit is ready
glance                  22.0.0   active      1  glance                  charmstore  stable   305  ubuntu  Unit is ready
glance-mysql-router     8.0.26   active      1  mysql-router            charmstore  stable     8  ubuntu  Unit is ready
keystone                19.0.0   active      1  keystone                charmstore  stable   323  ubuntu  Application Ready
keystone-mysql-router   8.0.26   active      1  mysql-router            charmstore  stable     8  ubuntu  Unit is ready
mysql-innodb-cluster    8.0.26   active      3  mysql-innodb-cluster    charmstore  stable     7  ubuntu  Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
neutron-api             18.1.0   active      1  neutron-api             charmstore  stable   294  ubuntu  Unit is ready
neutron-api-plugin-ovn  18.1.0   active      1  neutron-api-plugin-ovn  charmstore  stable     6  ubuntu  Unit is ready
neutron-mysql-router    8.0.26   active      1  mysql-router            charmstore  stable     8  ubuntu  Unit is ready
nova-cloud-controller   23.0.2   active      1  nova-cloud-controller   charmstore  stable   355  ubuntu  Unit is ready
nova-compute            23.0.2   active      3  nova-compute            charmstore  stable   327  ubuntu  Unit is ready
nova-mysql-router       8.0.26   active      1  mysql-router            charmstore  stable     8  ubuntu  Unit is ready
ntp                     3.5      active      3  ntp                     charmstore  stable    45  ubuntu  chrony: Ready
openstack-dashboard     19.2.0   active      1  openstack-dashboard     charmstore  stable   313  ubuntu  Unit is ready
ovn-central             20.12.0  active      3  ovn-central             charmstore  stable     7  ubuntu  Unit is ready (northd: active)
ovn-chassis             20.12.0  active      3  ovn-chassis             charmstore  stable    14  ubuntu  Unit is ready
placement               5.0.1    active      1  placement               charmstore  stable    19  ubuntu  Unit is ready
placement-mysql-router  8.0.26   active      1  mysql-router            charmstore  stable     8  ubuntu  Unit is ready
rabbitmq-server         3.8.2    active      1  rabbitmq-server         charmstore  stable   110  ubuntu  Unit is ready
vault                   1.5.9    active      1  vault                   charmstore  stable    46  ubuntu  Unit is ready (active: true, mlock: disabled)
vault-mysql-router      8.0.26   active      1  mysql-router            charmstore  stable     8  ubuntu  Unit is ready

Unit                         Workload  Agent  Machine  Public address  Ports              Message
ceph-mon/0                   active    idle   0/lxd/0  10.50.103.116                      Unit is ready and clustered
ceph-mon/1*                  active    idle   1/lxd/0  10.50.103.111                      Unit is ready and clustered
ceph-mon/2                   active    idle   2/lxd/0  10.50.103.124                      Unit is ready and clustered
ceph-osd/0                   active    idle   0        10.50.103.98                       Unit is ready (2 OSD)
ceph-osd/1*                  active    idle   1        10.50.103.107                      Unit is ready (2 OSD)
ceph-osd/2                   active    idle   2        10.50.103.93                       Unit is ready (2 OSD)
ceph-radosgw/0*              active    idle   0/lxd/1  10.50.103.118   80/tcp             Unit is ready
cinder/0*                    active    idle   1/lxd/1  10.50.103.110   8776/tcp           Unit is ready
  cinder-ceph/0*             active    idle            10.50.103.110                      Unit is ready
  cinder-mysql-router/0*     active    idle            10.50.103.110                      Unit is ready
glance/0*                    active    idle   2/lxd/1  10.50.103.121   9292/tcp           Unit is ready
  glance-mysql-router/0*     active    idle            10.50.103.121                      Unit is ready
keystone/0*                  active    idle   0/lxd/2  10.50.103.120   5000/tcp           Unit is ready
  keystone-mysql-router/0*   active    idle            10.50.103.120                      Unit is ready
mysql-innodb-cluster/0       active    idle   0/lxd/3  10.50.103.119                      Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
mysql-innodb-cluster/1*      active    idle   1/lxd/2  10.50.103.108                      Unit is ready: Mode: R/W, Cluster is ONLINE and can tolerate up to ONE failure.
mysql-innodb-cluster/2       active    idle   2/lxd/2  10.50.103.125                      Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
neutron-api/0*               active    idle   1/lxd/3  10.50.103.112   9696/tcp           Unit is ready
  neutron-api-plugin-ovn/0*  active    idle            10.50.103.112                      Unit is ready
  neutron-mysql-router/0*    active    idle            10.50.103.112                      Unit is ready
nova-cloud-controller/0*     active    idle   0/lxd/4  10.50.103.117   8774/tcp,8775/tcp  Unit is ready
  nova-mysql-router/0*       active    idle            10.50.103.117                      Unit is ready
nova-compute/0               active    idle   0        10.50.103.98                       Unit is ready
  ntp/1                      active    idle            10.50.103.98    123/udp            chrony: Ready
  ovn-chassis/1              active    idle            10.50.103.98                       Unit is ready
nova-compute/1*              active    idle   1        10.50.103.107                      Unit is ready
  ntp/0*                     active    idle            10.50.103.107   123/udp            chrony: Ready
  ovn-chassis/0*             active    idle            10.50.103.107                      Unit is ready
nova-compute/2               active    idle   2        10.50.103.93                       Unit is ready
  ntp/2                      active    idle            10.50.103.93    123/udp            chrony: Ready
  ovn-chassis/2              active    idle            10.50.103.93                       Unit is ready
openstack-dashboard/0*       active    idle   1/lxd/4  10.50.103.113   80/tcp,443/tcp     Unit is ready
  dashboard-mysql-router/0*  active    idle            10.50.103.113                      Unit is ready
ovn-central/0                active    idle   0/lxd/5  10.50.103.115   6641/tcp,6642/tcp  Unit is ready (northd: active)
ovn-central/1*               active    idle   1/lxd/5  10.50.103.109   6641/tcp,6642/tcp  Unit is ready (leader: ovnnb_db, ovnsb_db)
ovn-central/2                active    idle   2/lxd/3  10.50.103.123   6641/tcp,6642/tcp  Unit is ready
placement/0*                 active    idle   2/lxd/4  10.50.103.126   8778/tcp           Unit is ready
  placement-mysql-router/0*  active    idle            10.50.103.126                      Unit is ready
rabbitmq-server/0*           active    idle   2/lxd/5  10.50.103.122   5672/tcp           Unit is ready
vault/0*                     active    idle   0/lxd/6  10.50.103.114   8200/tcp           Unit is ready (active: true, mlock: disabled)
  vault-mysql-router/0*      active    idle            10.50.103.114                      Unit is ready

Machine  State    DNS            Inst id              Series  AZ       Message
0        started  10.50.103.98   ms-osttest003        focal   default  Deployed
0/lxd/0  started  10.50.103.116  juju-9b141f-0-lxd-0  focal   default  Container started
0/lxd/1  started  10.50.103.118  juju-9b141f-0-lxd-1  focal   default  Container started
0/lxd/2  started  10.50.103.120  juju-9b141f-0-lxd-2  focal   default  Container started
0/lxd/3  started  10.50.103.119  juju-9b141f-0-lxd-3  focal   default  Container started
0/lxd/4  started  10.50.103.117  juju-9b141f-0-lxd-4  focal   default  Container started
0/lxd/5  started  10.50.103.115  juju-9b141f-0-lxd-5  focal   default  Container started
0/lxd/6  started  10.50.103.114  juju-9b141f-0-lxd-6  focal   default  Container started
1        started  10.50.103.107  ms-osttest002        focal   default  Deployed
1/lxd/0  started  10.50.103.111  juju-9b141f-1-lxd-0  focal   default  Container started
1/lxd/1  started  10.50.103.110  juju-9b141f-1-lxd-1  focal   default  Container started
1/lxd/2  started  10.50.103.108  juju-9b141f-1-lxd-2  focal   default  Container started
1/lxd/3  started  10.50.103.112  juju-9b141f-1-lxd-3  focal   default  Container started
1/lxd/4  started  10.50.103.113  juju-9b141f-1-lxd-4  focal   default  Container started
1/lxd/5  started  10.50.103.109  juju-9b141f-1-lxd-5  focal   default  Container started
2        started  10.50.103.93   ms-osttest001        focal   default  Deployed
2/lxd/0  started  10.50.103.124  juju-9b141f-2-lxd-0  focal   default  Container started
2/lxd/1  started  10.50.103.121  juju-9b141f-2-lxd-1  focal   default  Container started
2/lxd/2  started  10.50.103.125  juju-9b141f-2-lxd-2  focal   default  Container started
2/lxd/3  started  10.50.103.123  juju-9b141f-2-lxd-3  focal   default  Container started
2/lxd/4  started  10.50.103.126  juju-9b141f-2-lxd-4  focal   default  Container started
2/lxd/5  started  10.50.103.122  juju-9b141f-2-lxd-5  focal   default  Container started

Ceph status shows 3.3 TB storage.

osadmin@maas002:~$ juju ssh ceph-mon/0 sudo ceph status
  cluster:
    id:     2aa3bb5c-2593-11ec-83ff-5da0642f4c69
    health: HEALTH_WARN
            mons are allowing insecure global_id reclaim
            clock skew detected on mon.juju-9b141f-0-lxd-0, mon.juju-9b141f-2-lxd-0

  services:
    mon: 3 daemons, quorum juju-9b141f-1-lxd-0,juju-9b141f-0-lxd-0,juju-9b141f-2-lxd-0 (age 17h)
    mgr: juju-9b141f-1-lxd-0(active, since 17h), standbys: juju-9b141f-0-lxd-0, juju-9b141f-2-lxd-0
    osd: 6 osds: 6 up (since 17h), 6 in (since 17h)
    rgw: 1 daemon active (1 hosts, 1 zones)

  data:
    pools:   18 pools, 171 pgs
    objects: 222 objects, 5.1 KiB
    usage:   415 MiB used, 3.3 TiB / 3.3 TiB avail
    pgs:     171 active+clean

Please help me figure out in which direction to take the diagnosis. This is my first OpenStack deployment. :)
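
A couple of cross-checks that may help narrow this down (a sketch; the openstack CLI command assumes admin credentials are sourced):

juju ssh ceph-mon/0 sudo ceph osd df tree     # per-OSD capacity and which host/device backs each OSD
openstack volume service list                 # should list a cinder-volume service for the cinder-ceph backend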

Hello everyone! I reinstalled OpenStack from the bundle, but nothing changed. In cinder/0, /var/log/cinder/cinder-volume.log contains the following error:

2021-10-09 22:37:46.502 52558 WARNING cinder.volume.manager [req-d35e8913-c6ed-4830-9b04-f6310baa1066 d6545e0e5a6e42d69333ea3a70ab1f56 c2773ce72eff4df4904a334365ad55a0 - - -] Task 'cinder.volume.flows.manager.manage_existing.PrepareForQuotaReservationTask;volume:manage_existing' (bb6e4e36-0ffa-4351-87eb-f424629bea98) transitioned into state 'FAILURE' from state 'RUNNING'
2 predecessors (most recent first):
  Atom 'cinder.volume.flows.manager.create_volume.NotifyVolumeActionTask;volume:create, manage_existing.start' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'context': <cinder.context.RequestContext object at 0x7f37a8e15700>, 'volume': Volume(_name_id=None,admin_metadata={},attach_status='detached',availability_zone='nova',bootable=False,cluster=<?>,cluster_name=None,consistencygroup=<?>,consistencygroup_id=None,created_at=2021-10-09T22:37:46Z,deleted=False,deleted_at=None,display_description=None,display_name=None,ec2_id=None,encryption_key_id=None,glance_metadata=<?>,group=<?>,group_id=None,host='cinder@cinder-ceph#cinder-ceph',id=baf5fca9-9539-4038-bd30-2f22dc72e550,launched_at=None,metadata={},migration_status=None,multiattach=False,previous_status=None,project_id='c2773ce72eff4df4904a334365ad55a0',provider_auth=None,provider_geometry=None,provider_id=None,provider_location=None,replication_driver_data=None,replication_extended_status=None,replication_status=None,scheduled_at=None,service_uuid=None,shared_targets=True,size=0,snapshot_id=None,snapshots=<?>,source_volid=None,status='error_managing',terminated_at=None,updated_at=None,user_id='d6545e0e5a6e42d69333ea3a70ab1f56',volume_attachment=<?>,volume_type=VolumeType(d1927fab-3d2c-444a-9ab1-2e9bff27da25),volume_type_id=d1927fab-3d2c-444a-9ab1-2e9bff27da25)}, 'provides': None}
  |__Flow 'volume_manage_existing_manager': cinder.exception.ManageExistingInvalidReference: Manage existing volume failed due to invalid backend reference {'source-id': '716a11dc-294c-11ec-9988-7f34c01c931a'}: Reference must contain source-name element.
2021-10-09 22:37:46.502 52558 ERROR cinder.volume.manager Traceback (most recent call last):
2021-10-09 22:37:46.502 52558 ERROR cinder.volume.manager   File "/usr/lib/python3/dist-packages/taskflow/engines/action_engine/executor.py", line 53, in _execute_task
2021-10-09 22:37:46.502 52558 ERROR cinder.volume.manager     result = task.execute(**arguments)
2021-10-09 22:37:46.502 52558 ERROR cinder.volume.manager   File "/usr/lib/python3/dist-packages/cinder/volume/flows/manager/manage_existing.py", line 60, in execute
2021-10-09 22:37:46.502 52558 ERROR cinder.volume.manager     flow_common.error_out(volume, reason,
2021-10-09 22:37:46.502 52558 ERROR cinder.volume.manager   File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 227, in __exit__
2021-10-09 22:37:46.502 52558 ERROR cinder.volume.manager     self.force_reraise()
2021-10-09 22:37:46.502 52558 ERROR cinder.volume.manager   File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 200, in force_reraise
2021-10-09 22:37:46.502 52558 ERROR cinder.volume.manager     raise self.value
2021-10-09 22:37:46.502 52558 ERROR cinder.volume.manager   File "/usr/lib/python3/dist-packages/cinder/volume/flows/manager/manage_existing.py", line 55, in execute
2021-10-09 22:37:46.502 52558 ERROR cinder.volume.manager     size = self.driver.manage_existing_get_size(volume,
2021-10-09 22:37:46.502 52558 ERROR cinder.volume.manager   File "/usr/lib/python3/dist-packages/cinder/volume/drivers/rbd.py", line 1722, in manage_existing_get_size
2021-10-09 22:37:46.502 52558 ERROR cinder.volume.manager     raise exception.ManageExistingInvalidReference(
2021-10-09 22:37:46.502 52558 ERROR cinder.volume.manager cinder.exception.ManageExistingInvalidReference: Manage existing volume failed due to invalid backend reference {'source-id': '716a11dc-294c-11ec-9988-7f34c01c931a'}: Reference must contain source-name element.
2021-10-09 22:37:46.502 52558 ERROR cinder.volume.manager
2021-10-09 22:37:46.516 52558 ERROR cinder.volume.flows.manager.manage_existing [req-d35e8913-c6ed-4830-9b04-f6310baa1066 d6545e0e5a6e42d69333ea3a70ab1f56 c2773ce72eff4df4904a334365ad55a0 - - -] Volume baf5fca9-9539-4038-bd30-2f22dc72e550: manage failed.
2021-10-09 22:37:46.518 52558 WARNING cinder.volume.manager [req-d35e8913-c6ed-4830-9b04-f6310baa1066 d6545e0e5a6e42d69333ea3a70ab1f56 c2773ce72eff4df4904a334365ad55a0 - - -] Task 'cinder.volume.flows.manager.manage_existing.PrepareForQuotaReservationTask;volume:manage_existing' (bb6e4e36-0ffa-4351-87eb-f424629bea98) transitioned into state 'REVERTED' from state 'REVERTING'
2021-10-09 22:37:46.521 52558 WARNING cinder.volume.manager [req-d35e8913-c6ed-4830-9b04-f6310baa1066 d6545e0e5a6e42d69333ea3a70ab1f56 c2773ce72eff4df4904a334365ad55a0 - - -] Task 'cinder.volume.flows.manager.create_volume.NotifyVolumeActionTask;volume:create, manage_existing.start' (6b4d67ab-ff43-457d-bcf7-15737678c160) transitioned into state 'REVERTED' from state 'REVERTING'
2021-10-09 22:37:46.524 52558 WARNING cinder.volume.manager [req-d35e8913-c6ed-4830-9b04-f6310baa1066 d6545e0e5a6e42d69333ea3a70ab1f56 c2773ce72eff4df4904a334365ad55a0 - - -] Flow 'volume_manage_existing_manager' (0bbda3f5-adc8-4001-a40d-4cb2f376286c) transitioned into state 'REVERTED' from state 'RUNNING'
2021-10-09 22:37:46.525 52558 ERROR oslo_messaging.rpc.server [req-d35e8913-c6ed-4830-9b04-f6310baa1066 d6545e0e5a6e42d69333ea3a70ab1f56 c2773ce72eff4df4904a334365ad55a0 - - -] Exception during message handling: cinder.exception.ManageExistingInvalidReference: Manage existing volume failed due to invalid backend reference {'source-id': '716a11dc-294c-11ec-9988-7f34c01c931a'}: Reference must contain source-name element.

How is the cinder application configured?

juju config cinder
application: cinder
application-config:
  trust:
    default: false
    description: Does this application have access to trusted credentials
    source: default
    type: bool
    value: false
charm: cinder
settings:
  action-managed-upgrade:
    default: false
    description: |
      If True enables openstack upgrades for this charm via juju actions.
      You will still need to set openstack-origin to the new repository but
      instead of an upgrade running automatically across all units, it will
      wait for you to execute the openstack-upgrade action for this charm on
      each unit. If False it will revert to existing behavior of upgrading
      all units on config change.
    source: default
    type: boolean
    value: false
  api-listening-port:
    default: 8776
    description: OpenStack Volume API listening port.
    source: default
    type: int
    value: 8776
  block-device:
    default: sdb
    description: |
      The block devices on which to create LVM volume group.
      .
      May be set to None for deployments that will not need local
      storage (eg, Ceph/RBD-backed volumes).
      .
      This can also be a space-delimited list of block devices to attempt
      to use in the cinder LVM volume group - each block device detected
      will be added to the available physical volumes in the volume group.
      .
      May be set to the path and size of a local file
      (/path/to/file.img|$sizeG), which will be created and used as a
      loopback device (for testing only). $sizeG defaults to 5G
    source: user
    type: string
    value: None
  ceph-osd-replication-count:
    default: 3
    description: |
      This value dictates the number of replicas ceph must make of any
      object it stores within the cinder rbd pool. Of course, this only
      applies if using Ceph as a backend store. Note that once the cinder
      rbd pool has been created, changing this value will not have any
      effect (although the configuration of a pool can be always be changed
      within ceph itself or via the charm used to deploy ceph).
    source: default
    type: int
    value: 3
  config-flags:
    description: |
      Comma-separated list of key=value config flags. These values will be
      placed in the cinder.conf [DEFAULT] section.
    source: unset
    type: string
  database:
    default: cinder
    description: Database to request access.
    source: default
    type: string
    value: cinder
  database-user:
    default: cinder
    description: Username to request database access.
    source: default
    type: string
    value: cinder
  debug:
    default: false
    description: Enable debug logging.
    source: default
    type: boolean
    value: false
  default-volume-type:
    description: |
      The default volume type when creating a new volume without a
      volume type specified. It's useful only when multiple storage
      backends to be connected to Cinder. The corresponding volume type
      must be created after a deployment via API to take effect, e.g.,
      `openstack volume type create VOLUME_TYPE --property
      volume_backend_name=BACKEND_NAME`
    source: unset
    type: string
  dns-ha:
    default: false
    description: |
      Use DNS HA with MAAS 2.0. Note if this is set do not set vip settings
      below.
    source: default
    type: boolean
    value: false
  enabled-services:
    default: all
    description: |
      If splitting cinder services between units, define which services
      to install and configure.
    source: default
    type: string
    value: all
  ephemeral-unmount:
    description: |
      Cloud instances provide ephemeral storage which is normally mounted
      on /mnt.
      .
      Providing this option will force an unmount of the ephemeral device
      so that it can be used as a Cinder storage device. This is useful for
      testing purposes (cloud deployment is not a typical use case).
    source: unset
    type: string
  glance-api-version:
    default: 1
    description: |
      Newer storage drivers may require the v2 Glance API to perform certain
      actions e.g. the RBD driver requires requires this to support COW
      cloning of images. This option will default to v1 for backwards
      compatibility with older glance services.
    source: user
    type: int
    value: 2
  ha-bindiface:
    default: eth0
    description: |
      Default network interface on which HA cluster will bind to communication
      with the other members of the HA Cluster.
    source: default
    type: string
    value: eth0
  ha-mcastport:
    default: 5454
    description: |
      Default multicast port number that will be used to communicate between
      HA Cluster nodes.
    source: default
    type: int
    value: 5454
  haproxy-client-timeout:
    description: |
      Client timeout configuration in ms for haproxy, used in HA
      configurations. If not provided, default value of 90000ms is used.
    source: unset
    type: int
  haproxy-connect-timeout:
    description: |
      Connect timeout configuration in ms for haproxy, used in HA
      configurations. If not provided, default value of 9000ms is used.
    source: unset
    type: int
  haproxy-queue-timeout:
    description: |
      Queue timeout configuration in ms for haproxy, used in HA
      configurations. If not provided, default value of 9000ms is used.
    source: unset
    type: int
  haproxy-server-timeout:
    description: |
      Server timeout configuration in ms for haproxy, used in HA
      configurations. If not provided, default value of 90000ms is used.
    source: unset
    type: int
  harden:
    description: |
      Apply system hardening. Supports a space-delimited list of modules
      to run. Supported modules currently include os, ssh, apache and mysql.
    source: unset
    type: string
  nagios_context:
    default: juju
    description: |
      Used by the nrpe-external-master subordinate charm. A string that will
      be prepended to instance name to set the host name in nagios. So for
      instance the hostname would be something like 'juju-myservice-0'. If
      you are running multiple environments with the same services in them
      this allows you to differentiate between them.
    source: default
    type: string
    value: juju
  nagios_servicegroups:
    default: ""
    description: |
      A comma-separated list of nagios servicegroups. If left empty, the
      nagios_context will be used as the servicegroup
    source: default
    type: string
    value: ""
  notification-topics:
    description: |
      A comma-separated list of oslo notification topics. If left empty, the
      default topic 'cinder' is going to be used.
    source: unset
    type: string
  openstack-origin:
    default: distro
    description: |
      Repository from which to install. May be one of the following:
      distro (default), ppa:somecustom/ppa, a deb url sources entry,
      or a supported Ubuntu Cloud Archive e.g.
      .
      cloud:<series>-<openstack-release>
      cloud:<series>-<openstack-release>/updates
      cloud:<series>-<openstack-release>/staging
      cloud:<series>-<openstack-release>/proposed
      .
      See https://wiki.ubuntu.com/OpenStack/CloudArchive for info on which
      cloud archives are available and supported.
      .
      NOTE: updating this setting to a source that is known to provide
      a later version of OpenStack will trigger a software upgrade unless
      action-managed-upgrade is set to True.
    source: user
    type: string
    value: cloud:focal-wallaby
  os-admin-hostname:
    description: |
      The hostname or address of the admin endpoints created for cinder
      in the keystone identity provider.
      .
      This value will be used for admin endpoints. For example, an
      os-admin-hostname set to 'cinder.admin.example.com' with ssl enabled will
      create two admin endpoints for cinder:
      .
      https://cinder.admin.example.com:443/v2/$(tenant_id)s and
      https://cinder.admin.example.com:443/v3/$(tenant_id)s
    source: unset
    type: string
  os-admin-network:
    description: |
      The IP address and netmask of the OpenStack Admin network (e.g.
      192.168.0.0/24)
      .
      This network will be used for admin endpoints.
    source: unset
    type: string
  os-internal-hostname:
    description: |
      The hostname or address of the internal endpoints created for cinder
      in the keystone identity provider.
      .
      This value will be used for internal endpoints. For example, an
      os-internal-hostname set to 'cinder.internal.example.com' with ssl
      enabled will create two internal endpoints for cinder:
      .
      https://cinder.internal.example.com:443/v2/$(tenant_id)s and
      https://cinder.internal.example.com:443/v3/$(tenant_id)s
    source: unset
    type: string
  os-internal-network:
    description: |
      The IP address and netmask of the OpenStack Internal network (e.g.
      192.168.0.0/24)
      .
      This network will be used for internal endpoints.
    source: unset
    type: string
  os-public-hostname:
    description: |
      The hostname or address of the public endpoints created for cinder
      in the keystone identity provider.
      .
      This value will be used for public endpoints. For example, an
      os-public-hostname set to 'cinder.example.com' with ssl enabled will
      create two public endpoints for cinder:
      .
      https://cinder.example.com:443/v2/$(tenant_id)s and
      https://cinder.example.com:443/v3/$(tenant_id)s
    source: unset
    type: string
  os-public-network:
    description: |
      The IP address and netmask of the OpenStack Public network (e.g.
      192.168.0.0/24)
      .
      This network will be used for public endpoints.
    source: unset
    type: string
  overwrite:
    default: "false"
    description: |
      If true, charm will attempt to overwrite block devices containing
      previous filesystems or LVM, assuming it is not in use.
    source: default
    type: string
    value: "false"
  prefer-ipv6:
    default: false
    description: |
      If True enables IPv6 support. The charm will expect network interfaces
      to be configured with an IPv6 address. If set to False (default) IPv4
      is expected.
      .
      NOTE: these charms do not currently support IPv6 privacy extension. In
      order for this charm to function correctly, the privacy extension must be
      disabled and a non-temporary address must be configured/available on
      your network interface.
    source: default
    type: boolean
    value: false
  rabbit-user:
    default: cinder
    description: Username to request access on rabbitmq-server.
    source: default
    type: string
    value: cinder
  rabbit-vhost:
    default: openstack
    description: RabbitMQ virtual host to request access on rabbitmq-server.
    source: default
    type: string
    value: openstack
  region:
    default: RegionOne
    description: OpenStack Region
    source: default
    type: string
    value: RegionOne
  remove-missing:
    default: false
    description: |
      If True, charm will attempt to remove missing physical volumes from
      volume group, if logical volumes are not allocated on them.
    source: default
    type: boolean
    value: false
  remove-missing-force:
    default: false
    description: |
      If True, charm will attempt to remove missing physical volumes from
      volume group, even when logical volumes are allocated on them. This
      option overrides 'remove-missing' when set.
    source: default
    type: boolean
    value: false
  restrict-ceph-pools:
    default: false
    description: |
      Cinder can optionally restrict the key it asks Ceph for to only be able
      to access the pools it needs.
    source: default
    type: boolean
    value: false
  ssl_ca:
    description: |
      SSL CA to use with the certificate and key provided - this is only
      required if you are providing a privately signed ssl_cert and ssl_key.
    source: unset
    type: string
  ssl_cert:
    description: |
      SSL certificate to install and use for API ports. Setting this value
      and ssl_key will enable reverse proxying, point Cinder's entry in the
      Keystone catalog to use https, and override any certificate and key
      issued by Keystone (if it is configured to do so).
    source: unset
    type: string
  ssl_key:
    description: SSL key to use with certificate specified as ssl_cert.
    source: unset
    type: string
  use-internal-endpoints:
    default: false
    description: |
      Openstack mostly defaults to using public endpoints for internal
      communication between services. If set to True this option will
      configure services to use internal endpoints where possible.
    source: default
    type: boolean
    value: false
  use-policyd-override:
    default: false
    description: |
      If True then use the resource file named 'policyd-override' to install
      override YAML files in the service's policy.d directory.  The resource
      file should be a ZIP file containing at least one yaml file with a .yaml
      or .yml extension.  If False then remove the overrides.
    source: default
    type: boolean
    value: false
  use-syslog:
    default: false
    description: |
      Setting this to True will allow supporting services to log to syslog.
    source: default
    type: boolean
    value: false
  verbose:
    default: false
    description: Enable verbose logging.
    source: default
    type: boolean
    value: false
  vip:
    description: |
      Virtual IP(s) to use to front API services in HA configuration.
      .
      If multiple networks are being used, a VIP should be provided for each
      network, separated by spaces.
    source: unset
    type: string
  vip_cidr:
    default: 24
    description: |
      Default CIDR netmask to use for HA vip when it cannot be automatically
      determined.
    source: default
    type: int
    value: 24
  vip_iface:
    default: eth0
    description: |
      Default network interface to use for HA vip when it cannot be
      automatically determined.
    source: default
    type: string
    value: eth0
  volume-group:
    default: cinder-volumes
    description: Name of volume group to create and store Cinder volumes.
    source: default
    type: string
    value: cinder-volumes
  volume-usage-audit-period:
    default: month
    description: |
      Time period for which to generate volume usages. The options are hour,
      day, month, or year.
    source: default
    type: string
    value: month
  worker-multiplier:
    description: |
      The CPU core multiplier to use when configuring worker processes for
      this service. By default, the number of workers for each daemon is
      set to twice the number of CPU cores a service unit has. This default
      value will be capped to 4 workers unless this configuration option
      is set.
    source: user
    type: float
    value: 0.25

You’ve included the juju status output but without the relations. Can you provide the output of juju status --relations?

Tracing the code from the traceback, it appears that the RBD image name is missing, probably somewhere in the configuration. Could you paste the contents of /etc/cinder/cinder.conf from the machine that is running the cinder charm?
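
For example, something along these lines should fetch it (assuming the cinder/0 unit shown in the status above):

juju ssh cinder/0 sudo cat /etc/cinder/cinder.conf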

https://pastebin.ubuntu.com/p/jtct5SNs74/

cinder.conf

Sorry, the file was not copied over completely. I have fixed the link above with the complete file.

It appears that you’re trying to import an existing volume after deployment. Could you specify which commands (if any) you’ve run for that purpose?
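
For context, the RBD driver only accepts a name-based reference: the 'manage existing' request in the log was rejected because it passed a source-id instead of a source-name. If the import was done from the CLI, a sketch of a form the RBD driver accepts is shown below (the host string cinder@cinder-ceph#cinder-ceph is taken from the log above; the image name existing-image and volume name imported-vol are hypothetical):

cinder manage --id-type source-name --name imported-vol cinder@cinder-ceph#cinder-ceph existing-image

The identifier here must be the name of an RBD image that already exists in the backing Ceph pool.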