Cinder/Specs/KumoScaleVolumeDriver

create_volume
volume: display_name, name, size, availability_zone

Call the provisioner's create_volume with StorageClass and VolumeCreate entities:

storage_class = entities.StorageClass(self.num_replicas, None, None, zone_list, self.block_size, self.max_iops_per_gb, self.desired_iops_per_gb, self.max_bw_per_gb, self.desired_bw_per_gb, self.same_rack_allowed, self.max_replica_down_time, None, self.span_allowed)
ks_volume = entities.VolumeCreate(volume_name, volume_size, storage_class, self.provisioning_type, self.vol_reserved_space_percentage, 'NVMeoF', volume_uuid)
kumoscale.create_volume(ks_volume)

delete_volume
volume: name

Call the provisioner's delete_volume:
kumoscale.delete_volume(volume_uuid)

create_snapshot
snapshot: display_name, name, volume_id

Call the provisioner's create_snapshot with a SnapshotCreate entity:

ks_snapshot = entities.SnapshotCreate(snapshot_name, volume_uuid, self.snap_reserved_space_percentage, snapshot_uuid)
kumoscale.create_snapshot(ks_snapshot)

delete_snapshot
snapshot: name

Call the provisioner's delete_snapshot:
kumoscale.delete_snapshot(snapshot_uuid)

create_volume_from_snapshot
volume: display_name, name; snapshot: name

Call the provisioner's create_snapshot_volume with a SnapshotVolumeCreate entity:

ks_snapshot_volume = entities.SnapshotVolumeCreate(volume_name, snapshot_uuid, self.writable, reserved_space_percentage, volume_uuid, self.max_iops_per_gb, self.max_bw_per_gb, 'NVMeoF', self.snap_vol_span_allowed)
kumoscale.create_snapshot_volume(ks_snapshot_volume)

extend_volume
volume: name, new_size

Call the provisioner's extend volume API:
kumoscale. (volume_uuid, new_size)

initialize_connection
volume: display_name, name; connector: uuid, nqn

Call the provisioner's host probe to register the initiator host for the first time (alternatively, first check whether the host is already registered):
kumoscale. (connector.uuid, connector.nqn, [-1 interval])

Call the provisioner's publish:
kumoscale.publish(host_uuid, volume_uuid)

Now build the connection info dict with replica info, per the return value spec at the end of this method.

Query the provisioner for the volume dict:
kumoscale.get_volumes_by_id(volume_uuid)
Expected volume dict (use the first element of the result):
uuid
location: [
    uuid
    backend: persistentID
]
writeable

Query the provisioner for the targets of the volume uuid:
kumoscale.get_targets(None, volume_uuid)

For each target, query the provisioner for its backend and add the backend's portals to a list:
kumoscale.get_backend_by_id(persistent_id)
Expected backend dict (use the first, and only, element):
pi  # backend persistent id
portals: [
    ip
    port
    transport
]

Also for each target, loop through the volume replicas. For each replica whose backend persistentID matches the target's backend persistentID, match the replica's persistentID against the portals list above and add the matching portals to the replica dict. Replica dict: vol_uuid, target_nqn, alias, writable, portals.

return:
driver_volume_type
data:
    volume_replicas: [
        vol_uuid
        alias
        writable
        target_nqn
        portals
    ]
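The query-and-match steps above can be sketched end to end. The dict shapes and field names below (e.g. targetNqn, the nesting of backend/persistentID, and the "nvmeof" driver_volume_type string) are illustrative assumptions filled in around this spec's outline, and FakeKumoScale returns canned data in place of the real provisioner:

```python
class FakeKumoScale:
    """Canned responses standing in for the real provisioner client."""
    def get_volumes_by_id(self, volume_uuid):
        return [{"uuid": volume_uuid, "writeable": True,
                 "location": [{"uuid": "r1",
                               "backend": {"persistentID": "b1"}}]}]
    def get_targets(self, host_uuid, volume_uuid):
        return [{"alias": "t1", "targetNqn": "nqn.2021-01.example:t1",
                 "backend": {"persistentID": "b1"}}]
    def get_backend_by_id(self, persistent_id):
        return [{"pi": persistent_id,
                 "portals": [{"ip": "10.0.0.1", "port": 4420,
                              "transport": "tcp"}]}]

def build_connection_info(kumoscale, volume_uuid):
    """Assemble the initialize_connection return value per the spec."""
    ks_volume = kumoscale.get_volumes_by_id(volume_uuid)[0]
    targets = kumoscale.get_targets(None, volume_uuid)

    # Collect each target backend's portals, keyed by persistent id.
    portals_by_pid = {}
    for target in targets:
        pid = target["backend"]["persistentID"]
        backend = kumoscale.get_backend_by_id(pid)[0]
        portals_by_pid[backend["pi"]] = backend["portals"]

    # Match replicas to targets by backend persistentID and build the
    # replica dicts listed in the return value spec.
    volume_replicas = []
    for target in targets:
        pid = target["backend"]["persistentID"]
        for replica in ks_volume["location"]:
            if replica["backend"]["persistentID"] == pid:
                volume_replicas.append({
                    "vol_uuid": ks_volume["uuid"],
                    "alias": target["alias"],
                    "writable": ks_volume["writeable"],
                    "target_nqn": target["targetNqn"],
                    "portals": portals_by_pid.get(pid, []),
                })
    return {"driver_volume_type": "nvmeof",
            "data": {"volume_replicas": volume_replicas}}
```

With the canned data above, one target and one replica share backend "b1", so the result contains a single replica dict carrying that backend's portal.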

terminate_connection
volume: display_name, name; connector: uuid

Call the provisioner's unpublish:
kumoscale.unpublish(host_uuid, volume_uuid)

get_volume_stats
Populate static/constant values. Call the provisioner's get_tenants:
kumoscale.get_tenants()
Use the "default tenant" (index 0) stats for total capacity and free capacity.

return: dict:
volume_backend_name
vendor_name
driver_version
storage_protocol
consistencygroup_support
thin_provisioning_support
multiattach
total_capacity_gb
free_capacity_gb
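The stats assembly can be sketched like this. The spec only says to take the default tenant's totals, so the capacity field names in the tenant dict, and the constant values (vendor name, version, support flags), are placeholder assumptions:

```python
class FakeKumoScale:
    """Stand-in returning one 'default tenant' with capacity figures."""
    def get_tenants(self):
        # Hypothetical field names; the spec only says the default tenant
        # (index 0) carries the capacity totals.
        return [{"tenantId": "0", "total_capacity": 100,
                 "free_capacity": 60}]

def get_volume_stats(kumoscale, backend_name):
    tenant = kumoscale.get_tenants()[0]  # "default tenant"
    return {
        # Static/constant values -- placeholders, not the driver's
        # actual advertised capabilities.
        "volume_backend_name": backend_name,
        "vendor_name": "KIOXIA",
        "driver_version": "1.0.0",
        "storage_protocol": "NVMeOF",
        "consistencygroup_support": False,
        "thin_provisioning_support": True,
        "multiattach": False,
        # Capacity figures taken from the default tenant.
        "total_capacity_gb": tenant["total_capacity"],
        "free_capacity_gb": tenant["free_capacity"],
    }
```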