Use FC storage in an OpenStack MAAS setup with Juju

Hello everyone. I have some questions regarding the use of block storage over Fibre Channel in OpenStack. I'm new to OpenStack and currently learning a lot about it. I installed Canonical OpenStack on a 4-node cluster as a proof-of-concept setup, to learn how it works and whether it fits our needs for migrating away from VMware. I know that there are several modules in OpenStack that support block storage over FC; it can even create zone configurations automatically on Brocade and Cisco FC switches, and so on. But I can't find any useful information about setting up an FC LUN that is already mapped to the hosts. Maybe someone here can point me in the right direction?

My setup follows the current documentation: a MAAS setup with 4 nodes; in each node I have 1 SSD for 1 Ceph OSD. Now I also want to use the FC LUN that is already attached to each host → I don't know how to do that. Multipath is active and running on the hosts.

NAME                                                                                                  MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
loop0                                                                                                   7:0    0  55.7M  1 loop  /snap/core18/2812
loop1                                                                                                   7:1    0  63.9M  1 loop  /snap/core20/2105
loop2                                                                                                   7:2    0  63.9M  1 loop  /snap/core20/2182
loop3                                                                                                   7:3    0    87M  1 loop  /snap/lxd/27037
loop4                                                                                                   7:4    0    87M  1 loop  /snap/lxd/27428
loop5                                                                                                   7:5    0  93.5M  1 loop  /snap/mysql-shell/29
loop6                                                                                                   7:6    0     3M  1 loop  /snap/prometheus-ovs-exporter/9
loop7                                                                                                   7:7    0  40.4M  1 loop  /snap/snapd/20671
loop8                                                                                                   7:8    0  39.1M  1 loop  /snap/snapd/21184
sda                                                                                                     8:0    0 119.2G  0 disk  
├─sda1                                                                                                  8:1    0     1M  0 part  
└─sda2                                                                                                  8:2    0 119.2G  0 part  /
sdb                                                                                                     8:16   0 372.6G  0 disk  
└─ceph--a58d4728--99df--4faa--9416--b2ac416ed781-osd--block--a58d4728--99df--4faa--9416--b2ac416ed781 253:0    0 372.6G  0 lvm   
sdc                                                                                                     8:32   0    40T  0 disk  
└─mpatha                                                                                              253:1    0    40T  0 mpath 
sdd                                                                                                     8:48   0    40T  0 disk  
└─mpatha                                                                                              253:1    0    40T  0 mpath 
sde                                                                                                     8:64   0    40T  0 disk  
└─mpatha                                                                                              253:1    0    40T  0 mpath 
sdf                                                                                                     8:80   0    40T  0 disk  
└─mpatha                                                                                              253:1    0    40T  0 mpath 
sdg                                                                                                     8:96   0    40T  0 disk  
└─mpatha                                                                                              253:1    0    40T  0 mpath 
sdh                                                                                                     8:112  0    40T  0 disk  
└─mpatha                                                                                              253:1    0    40T  0 mpath 
sdi                                                                                                     8:128  0    40T  0 disk  
└─mpatha                                                                                              253:1    0    40T  0 mpath 
sdj                                                                                                     8:144  0    40T  0 disk  
└─mpatha                                                                                              253:1    0    40T  0 mpath 
sr0                                                                                                    11:0    1  1024M  0 rom 

sda is the boot disk, sdb the Ceph disk, and sdc to sdj are the paths of the FC LUN mapped under /dev/mapper/mpatha.

multipath -ll
mpatha (36c8a7761004f8b7290000b00000451) dm-1 VENDOR,TYPE
size=40T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 1:0:2:1 sde 8:64  active ready running
| |- 1:0:3:1 sdf 8:80  active ready running
| |- 8:0:1:1 sdh 8:112 active ready running
| `- 8:0:3:1 sdj 8:144 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 1:0:0:1 sdd 8:48  active ghost running
  |- 1:0:1:1 sdc 8:32  active ghost running
  |- 8:0:0:1 sdg 8:96  active ghost running
  `- 8:0:2:1 sdi 8:128 active ghost running

Do I only have to change the Cinder configuration from

  block-device: None
  glance-api-version: 2

to

  block-device: /dev/mapper/mpatha
  glance-api-version: 2


→ How do I integrate this block device into the Juju Cinder configuration and apply the config changes?
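(If I understand the charm configuration correctly, I would try something like the following — I'm not sure whether `block-device` is the right option for an FC LUN, so please correct me:)

```shell
# List the cinder charm's available config options first:
juju config cinder

# Point the charm at the multipath device; Juju then pushes
# the change to the cinder units:
juju config cinder block-device=/dev/mapper/mpatha

# Watch the units while they re-configure:
watch -n 5 juju status cinder
```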

→ How can I map a unique LUN ID to a unique /dev/mapper/unique-name so that it is persistent across all attached hosts?
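For reference, what I would try here is a WWID alias in /etc/multipath.conf, deployed identically to every host (the WWID below is the one from my `multipath -ll` output above; the alias name `fc_lun01` is just an example):

```
multipaths {
    multipath {
        # WWID as reported by "multipath -ll" for this LUN
        wwid  36c8a7761004f8b7290000b00000451
        # The device then appears as /dev/mapper/fc_lun01
        alias fc_lun01
    }
}
```

Followed by a `systemctl reload multipathd` on each host — is that the right approach in an OpenStack/Juju context, or does the charm manage this?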

→ Can I define storage grades or labels, so that one user can book e.g. NVMe disk storage while other users only need HDD storage, and OpenStack automatically uses the correct storage locations and hosts in the background?


Hi, here is how it works:

OpenStack Cinder uses a driver architecture. Basically, you tell Cinder which driver to use (a Python script) to communicate with your storage array and configure it. Every time you create a new volume, the driver you selected goes to the storage array and asks for a new LUN, mounts that LUN on the compute node that Nova selected, and presents that volume as a raw volume to the instance. It is a bit different from using datastores in VMware, because the instances get the volume directly; you don't tell Cinder to place a QCOW2 image on a specific disk. Here is a list of drivers enabled for Yoga: List of drivers
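As a rough sketch, a backend section in cinder.conf typically looks something like this — the driver path and backend name below are placeholders, since the exact options depend on which array driver from that list you use:

```
[DEFAULT]
enabled_backends = fc_array

[fc_array]
# Placeholder: substitute the driver class for your specific array
volume_driver = cinder.volume.drivers.<vendor>.<DriverClass>
volume_backend_name = fc_array
# Use the host's multipath setup when copying images to/from volumes
use_multipath_for_image_xfer = true
```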

You can pick a type of disk or storage array based on volume types; that should help you decide whether you want NVMe-based or SAS-based storage.
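For example, volume types can be tied to backends roughly like this (the type and backend names here are just examples, matching whatever `volume_backend_name` you set in cinder.conf):

```shell
# Create one type per storage grade and tie it to a backend:
openstack volume type create nvme-fast
openstack volume type set nvme-fast --property volume_backend_name=nvme_backend

openstack volume type create hdd-capacity
openstack volume type set hdd-capacity --property volume_backend_name=hdd_backend

# Users then request the grade when creating a volume:
openstack volume create --type nvme-fast --size 100 my-volume
```

The Cinder scheduler matches the type's `volume_backend_name` property against the configured backends, so placement happens automatically in the background.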

Thanks for clarifying that. I will check the drivers and test whether it works.

Sure, let me know how it goes and I’ll see if I can help you configure it.