
Cinder/Specs/NVMEConnectorMDSupport

NVMeoF+MD Connector

get_connector_properties

Get host uuid.
Get host initiator nqn from /etc/nvme/hostnqn; if the nqn has not been generated yet, generate it.

return:

  dict:
      uuid
      nqn
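
A minimal sketch of this, assuming nvme-cli's `nvme gen-hostnqn` for generation and the DMI product_uuid as the host uuid source (both are assumptions, not mandated by this spec):

  import subprocess

  HOSTNQN_PATH = '/etc/nvme/hostnqn'

  def get_connector_properties():
      # Read the host initiator NQN; generate and persist one if missing.
      try:
          with open(HOSTNQN_PATH) as f:
              nqn = f.read().strip()
      except FileNotFoundError:
          nqn = subprocess.check_output(
              ['nvme', 'gen-hostnqn'], text=True).strip()
          with open(HOSTNQN_PATH, 'w') as f:
              f.write(nqn + '\n')
      # Host uuid source is an assumption; DMI product_uuid is one option.
      with open('/sys/class/dmi/id/product_uuid') as f:
          host_uuid = f.read().strip().lower()
      return {'uuid': host_uuid, 'nqn': nqn}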


connect_volume
  connection_properties:
      volume_replicas:
          target_nqn
          portals
          vol_uuid
          alias
          writable
      <flat volume properties if not replicated>

Check if the healing agent is running; if not, launch it by calling its init method.
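
One way to guard the launch, sketched with a background thread; the agent's real interface is not defined in this section, so agent_init below is a stand-in for its init method:

  import threading

  _agent_thread = None

  def ensure_healing_agent(agent_init):
      # Start the healing agent in the background on first use;
      # subsequent calls are no-ops while the agent is alive.
      global _agent_thread
      if _agent_thread is None or not _agent_thread.is_alive():
          _agent_thread = threading.Thread(target=agent_init, daemon=True)
          _agent_thread.start()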

NVMe connect to the portals of the volume replicas for any targets that are not yet connected. If the volume is not replicated, return the path to the bare NVMe device after connecting to the portals. If a target was already connected, an async re-scan should already have been initiated by the driver's create_volume call to the provisioner publish (see the driver spec above).

  nvme connect -a <portal_address> -s <portal_port> -t <portal_transport> -n <target_nqn> -Q 128 -l -1
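
A sketch of that connect loop, assuming each portal is an (address, port, transport) tuple and that connected_nqns holds the target NQNs this host is already connected to:

  import subprocess

  def connect_replica(replica, connected_nqns):
      # Skip targets the host is already connected to; otherwise
      # connect to every portal of the replica's target.
      if replica['target_nqn'] in connected_nqns:
          return
      for address, port, transport in replica['portals']:
          subprocess.check_call(
              ['nvme', 'connect',
               '-a', address, '-s', str(port), '-t', transport,
               '-n', replica['target_nqn'],
               '-Q', '128', '-l', '-1'])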

For all of the above replicas (the host should now be connected to all of their targets), get the host device path: scan through ALL host NVMe devices and match by target_nqn, then match by volume uuid among the devices of the matched target controller, as sketched below.
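
A minimal sysfs-based lookup sketch; the subsysnqn and uuid attribute paths are assumptions about the kernel's sysfs layout:

  import glob
  import os

  def _read(path):
      with open(path) as f:
          return f.read().strip()

  def find_device(target_nqn, vol_uuid):
      # Match the controller by its subsystem NQN first ...
      for ctrl in glob.glob('/sys/class/nvme/nvme*'):
          if _read(os.path.join(ctrl, 'subsysnqn')) != target_nqn:
              continue
          # ... then match a namespace under it by the volume uuid.
          for ns in glob.glob(os.path.join(ctrl, 'nvme*n*')):
              uuid_attr = os.path.join(ns, 'uuid')
              if os.path.exists(uuid_attr) and _read(uuid_attr) == vol_uuid:
                  return '/dev/' + os.path.basename(ns)
      return None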

Create a RAID array from the devices found above (one device per replica):

  mdadm -C [-o] <device_name> -R [-N <name>] --level <raid_type> --raid-devices=<num_drives> --bitmap=internal --homehost=any --failfast --assume-clean <drive1 … driveN>
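
Assembled in Python, the create call could look like this (the /dev/md/<name> device naming is illustrative):

  import subprocess

  def create_raid(name, drives, raid_type='raid1'):
      # Build the md array from the replica devices found above.
      device = '/dev/md/' + name
      subprocess.check_call(
          ['mdadm', '-C', device, '-R', '-N', name,
           '--level', raid_type,
           '--raid-devices=%d' % len(drives),
           '--bitmap=internal', '--homehost=any',
           '--failfast', '--assume-clean'] + drives)
      return device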

return:

  type='block'
  path=<device path>


disconnect_volume
  connection_properties:
      device_path
      volume_replicas
  device_info:
      path

Destroy the RAID on the device path if the volume is replicated. After disconnecting from the last remaining NVMe device on a target, run `nvme disconnect` for that target.
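
A sketch of that teardown order; targets_in_use (target NQNs still used by other volumes on this host) is assumed bookkeeping, not part of this spec:

  import subprocess

  def disconnect_volume(device_path, replicas, targets_in_use,
                        replicated=True):
      # Stop the md array first when the volume is replicated.
      if replicated:
          subprocess.check_call(['mdadm', '--stop', device_path])
      # Disconnect each target no other volume still uses.
      for replica in replicas:
          nqn = replica['target_nqn']
          if nqn not in targets_in_use:
              subprocess.check_call(['nvme', 'disconnect', '-n', nqn])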


extend_volume
  connection_properties:
      device_path
      volume_replicas

Grow the RAID array to the new size:

  mdadm --grow /dev/mdX --size <new_size>
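
A minimal wrapper sketch; passing 'max' instead of an explicit per-device size (in KiB) grows the array to the largest size the already-extended member devices allow:

  import subprocess

  def extend_volume(device_path):
      # Let md consume all newly available space on the member devices.
      subprocess.check_call(
          ['mdadm', '--grow', device_path, '--size', 'max'])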