Cinder/Specs/NVMEConnectorMDSupport

get_connector_properties
- Get host uuid.
- Get host initiator NQN from /etc/nvme/hostnqn; if the NQN has not been generated yet, generate it.

return: dict with keys: uuid, nqn
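The host-properties step above can be sketched as follows. The /etc/nvme/hostnqn path comes from the spec; the function and parameter names are ours, the host uuid source is a placeholder, and the generated NQN uses the same `nqn.2014-08.org.nvmexpress:uuid:` format that `nvme gen-hostnqn` emits:

```python
import os
import uuid

HOSTNQN_PATH = "/etc/nvme/hostnqn"  # path given in the spec


def get_connector_properties(hostnqn_path: str = HOSTNQN_PATH) -> dict:
    """Return the host uuid and initiator NQN, generating the NQN if absent."""
    # Placeholder: a real connector would read a stable host identifier.
    host_uuid = str(uuid.uuid4())
    if os.path.exists(hostnqn_path):
        with open(hostnqn_path) as f:
            nqn = f.read().strip()
    else:
        # Same format that `nvme gen-hostnqn` produces.
        nqn = "nqn.2014-08.org.nvmexpress:uuid:%s" % uuid.uuid4()
        os.makedirs(os.path.dirname(hostnqn_path), exist_ok=True)
        with open(hostnqn_path, "w") as f:
            f.write(nqn + "\n")
    return {"uuid": host_uuid, "nqn": nqn}
```

Once written, the file is reused on every later call, so the initiator NQN stays stable across connects.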

connect_volume
connection_properties:
- volume_replicas: list of replicas, each with target_nqn, portals, vol_uuid, alias, writable

Check whether the healing agent is running; if not, launch it by calling its init method.

- NVMe connect to the portals of the volume replicas for targets that are not yet connected:
  nvme connect -a <traddr> -s <trsvcid> -t <transport> -n <target_nqn> -Q 128 -l -1
- If the volume is not replicated, return the path to the bare NVMe device after connecting to the portals.
- If the target was already connected, an async re-scan was supposed to be initiated by the driver's create_volume call to the provisioner publish (see the driver spec above).
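Building the connect command for one portal can be sketched as below. The portal dict keys ("address", "port", "transport") are assumed names for the fields carried in connection_properties; the -Q 128 (queue size) and -l -1 (unlimited controller loss timeout) values come from the spec:

```python
def nvme_connect_cmd(portal: dict, target_nqn: str) -> list:
    """Build the `nvme connect` argv for one portal of a replica."""
    return [
        "nvme", "connect",
        "-a", portal["address"],    # transport address of the portal
        "-s", str(portal["port"]),  # transport service id (port)
        "-t", portal["transport"],  # e.g. "tcp" or "rdma"
        "-n", target_nqn,           # subsystem NQN from the replica
        "-Q", "128",                # queue size, per the spec
        "-l", "-1",                 # ctrl-loss-tmo: never give up
    ]
```

A real connector would run this for every not-yet-connected portal of every replica, skipping targets it is already connected to.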

For all of the above replicas (the host should now be connected to all of their targets), get the host device path:
- Scan through ALL host NVMe devices and match by target_nqn.
- Then match by volume uuid among the devices under the matched target controller.
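The two-step match can be expressed as a pure function over a device listing. The listing shape here (dicts with "path", "target_nqn", and "uuid" keys) is an assumption; a real implementation would collect it from sysfs or `nvme list`:

```python
def find_device_path(devices: list, target_nqn: str, vol_uuid: str):
    """Return the host path of the device that is on the given target
    and carries the given volume uuid, or None if not found."""
    for dev in devices:
        # First match the target controller, then the volume uuid on it.
        if dev["target_nqn"] == target_nqn and dev["uuid"] == vol_uuid:
            return dev["path"]
    return None
```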

Create a RAID array from the devices found above for each replica:
mdadm -C <md_name> [-o] -R [-N <alias>] --level <level> --raid-devices=<n> --bitmap=internal --homehost=any --failfast --assume-clean <device_paths>
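Assembling that command line can be sketched as follows. RAID level 1 (mirroring across replicas) is an assumption used as the default here, since the spec leaves the level blank; the function and parameter names are ours:

```python
def mdadm_create_cmd(md_name: str, device_paths: list,
                     alias: str = None, read_only: bool = False,
                     level: str = "1") -> list:
    """Build the mdadm create argv from the spec's flags."""
    cmd = ["mdadm", "-C", md_name]
    if read_only:
        cmd.append("-o")       # start the array read-only
    cmd.append("-R")           # run the array even if degraded
    if alias:
        cmd += ["-N", alias]   # array name, from the replica alias
    cmd += [
        "--level", level,
        "--raid-devices=%d" % len(device_paths),
        "--bitmap=internal",   # write-intent bitmap for fast resync
        "--homehost=any",
        "--failfast",
        "--assume-clean",      # replicas already hold identical data
    ]
    return cmd + device_paths
```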

return: type='block', path=<md device path, or the bare NVMe device path if not replicated>

disconnect_volume
connection_properties:
- device_path
- volume_replicas

device_info:
- path

- Destroy the RAID array on the device path if the volume is replicated.
- After disconnecting from the last remaining NVMe device on a target: `nvme disconnect`
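The teardown ordering above can be sketched as a command planner: stop the array first, then disconnect each target whose last in-use device was just removed. The `connected_devices` map (target_nqn to count of devices still in use) is an assumed bookkeeping structure, as are the function names:

```python
def disconnect_cmds(device_path: str, volume_replicas: list,
                    connected_devices: dict) -> list:
    """Plan the commands to tear down a volume's host-side state."""
    cmds = []
    if len(volume_replicas) > 1:
        # Replicated volume: stop the md array before disconnecting members.
        cmds.append(["mdadm", "--stop", device_path])
    for replica in volume_replicas:
        nqn = replica["target_nqn"]
        if connected_devices.get(nqn, 0) <= 1:
            # This was the last device on the target: drop the controller.
            cmds.append(["nvme", "disconnect", "-n", nqn])
    return cmds
```

Targets that still serve other volumes keep their controllers connected; only the per-target reference count decides when `nvme disconnect` runs.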

extend_volume
connection_properties:
- device_path
- volume_replicas

Grow the RAID array to the new size: mdadm --grow /dev/mdX --size <new_size>
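The grow command can be sketched as below. Defaulting to `--size max` (use all available component space) is our assumption, since the spec leaves the size argument blank; mdadm otherwise takes the size in KiB:

```python
def mdadm_grow_cmd(md_path: str, new_size_kib: int = None) -> list:
    """Build the mdadm argv that grows an array after its members grew."""
    # "max" tells mdadm to expand to the full size of the components.
    size = "max" if new_size_kib is None else str(new_size_kib)
    return ["mdadm", "--grow", md_path, "--size", size]
```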