Cinder/VMwareVmdkDriver/blueprint-full-spec

Revision as of 12:16, 20 August 2013

Description

VMware VMDK cinder driver

The goal of this blueprint is to implement a VMDK driver for Cinder. The driver will allow management of Cinder volumes on any datastore managed by VMware vCenter Server or ESX. In this project, we essentially map the Cinder volume construct to the VMDK file(s) that form the persistent block storage for virtual machines within the VMware stack. Today, there is no Cinder driver implementation for the VMware stack, and the Nova driver only allows attaching/detaching discovered iSCSI targets as RDMs. This driver will allow life-cycle management of a Cinder volume backed by VMDK file(s) within a VMware datastore. This project also positions Cinder to take advantage of features provided by VMFS and upcoming technologies such as vSAN, vVol and others.

Because of the design of vCenter, each VMDK needs to be a "child" object of one or more VMs. In this implementation, we use a "shadow" VM to back the Cinder volume. This ensures that VMware-specific features such as snapshots, fast cloning, vMotion, etc. will continue to work without breaking any of the Cinder volume constructs or abstractions. The virtual machine backing a volume is never powered on and is only an abstraction for performing operations such as snapshots or cloning of the Cinder volume. By using a virtual machine as the representation of a Cinder volume, we can perform any operation on the volume that can be performed on the corresponding virtual machine through the public SDK.

Work Items

Driver configuration

  • The following are mandatory properties, specific to the driver and are to be specified in the /etc/cinder/cinder.conf file.
volume_driver=cinder.volume.drivers.vmware.vmdk.VMwareVcVmdkDriver or cinder.volume.drivers.vmware.vmdk.VMwareEsxVmdkDriver
vmware_host_ip=10.10.10.9
vmware_host_username=myuser
vmware_host_password=mypass
  • The following are optional properties, specific to the driver and are to be specified in the /etc/cinder/cinder.conf file.
vmware_wsdl_location=http://127.0.0.1:8080/wsdl/vim25/vimService.wsdl
vmware_api_retry_count=3
vmware_task_poll_time=5
vmware_volume_folder=cinder-volumes
  • It is assumed that cinder and nova are configured against the same VMware server (for this release).
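Putting the mandatory and optional properties together, the relevant section of /etc/cinder/cinder.conf would look like the following sketch (the IP and credentials are the illustrative values from above):

```ini
[DEFAULT]
volume_driver=cinder.volume.drivers.vmware.vmdk.VMwareVcVmdkDriver
vmware_host_ip=10.10.10.9
vmware_host_username=myuser
vmware_host_password=mypass
# Optional properties; the values shown above are the defaults.
vmware_wsdl_location=http://127.0.0.1:8080/wsdl/vim25/vimService.wsdl
vmware_api_retry_count=3
vmware_task_poll_time=5
vmware_volume_folder=cinder-volumes
```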

Driver details

  • Name: VMwareVcVmdkDriver and VMwareEsxVmdkDriver
  • Root: cinder/volume/drivers/vmware/*
class VMwareEsxVmdkDriver(driver.VolumeDriver)
class VMwareVcVmdkDriver(VMwareEsxVmdkDriver)

get_volume_stats

  • Collect stats of datastores being managed by the VMware server. Stats will be used by schedulers such as filter_scheduler for filtering (e.g. CapacityFilter) and weighing (e.g. CapacityWeigher).
  • A VMware server manages an aggregate of datastores; using the aggregated capacity/free space, or that of a randomly chosen datastore, may not be the right metric.
  • 'unknown' will be used for total and free capacity.
def get_volume_stats(self, refresh=False):
    """Obtain status of the volume service"""

    if not self._stats:
        backend_name = self.configuration.safe_get('volume_backend_name')
        if not backend_name:
            backend_name = self.__class__.__name__
        data = {'volume_backend_name': backend_name,
                'vendor_name': 'VMware',
                'driver_version': self.VERSION,
                'storage_protocol': 'LSI Logic SCSI',
                'reserved_percentage': 0,
                'total_capacity_gb': 'unknown',
                'free_capacity_gb': 'unknown'}
        self._stats = data
    return self._stats

create_volume

  • This is creation of volume from scratch.
    • We do not create a backing for the empty volume in the inventory yet. A backing will be created when the empty volume is being attached to an instance for the first time.
  • VMDK type can be specified for the backing. This type will be used at the time of creating backing for the volume.
    • Thin provisioning - extra spec vmware:vmdk_type=thin.
    • Thick provisioning - extra spec vmware:vmdk_type=thick.
    • Eager zeroed thick provisioning - extra spec vmware:vmdk_type=eagerZeroedThick.
    • Default is thin provisioning.
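The provisioning options above can be sketched as a small lookup helper. This is only an illustrative sketch: the helper name and the plain extra-specs dict are hypothetical; only the vmware:vmdk_type key and its values come from this spec.

```python
# Hypothetical helper: resolve the VMDK provisioning type from a volume
# type's extra specs. The extra-specs dict is a simplified stand-in for
# Cinder's real volume-type objects.

VALID_VMDK_TYPES = ('thin', 'thick', 'eagerZeroedThick')

def get_vmdk_type(extra_specs):
    """Return the vmdk type from extra specs, defaulting to thin."""
    vmdk_type = extra_specs.get('vmware:vmdk_type', 'thin')
    if vmdk_type not in VALID_VMDK_TYPES:
        raise ValueError('Invalid vmdk_type: %s' % vmdk_type)
    return vmdk_type

print(get_vmdk_type({'vmware:vmdk_type': 'eagerZeroedThick'}))  # eagerZeroedThick
print(get_vmdk_type({}))                                        # thin (default)
```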
def create_volume(self, volume):
    """Creates a volume

    We do not create any backing. We do it only for the first time
    it is being attached to a virtual machine.

    :param volume: Volume object
    """
    pass

delete_volume

  • This operation deletes the volume.
    • We delete the VM backing the volume from inventory.
def delete_volume(self, volume):
    """Delete volume.
    """

    volume_moref = get_volume_by_name(volume['name'])
    delete_vm(volume_moref)

nova#get_volume_connector

  • This is a contract between nova and the driver.
  • This method is called before initializing/terminating connections to the volume service.
  • Must return moref of the virtual machine if it is already created.
  • Must return datastore and vmdk_path of the volume when it is being detached.
    • This is needed to consolidate volume's VMDK chains that may have been copied across datastores by SDRS.
  • See detach_volume for how the cache details are populated.
  • See initialize_connection and terminate_connection for how the returned data is used.
def get_volume_connector(self, instance):
    """ Return volume connector information.
    """
    iqn = volume_util.get_host_iqn(self._session, self._cluster)
    vm_moref = vm_util.get_vm_ref_from_name(self._session, instance['name'])

    # Get the volume's datastore and vmdk path from the cache that is
    # populated during detach (see detach_volume); absent on a fresh attach.
    cached = self._cache.pop(instance['name'], {})
    volume_datastore_moref = cached.get('volume_datastore_moref')
    volume_vmdk_path = cached.get('volume_vmdk_path')

    data = {}
    data['ip'] = CONF.vmwareapi_host_ip
    data['initiator'] = iqn

    if vm_moref:
        data['vm_moref'] = vm_moref
    if volume_datastore_moref:
        data['volume_datastore_moref'] = volume_datastore_moref
    if volume_vmdk_path:
        data['volume_vmdk_path'] = volume_vmdk_path

    return data
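For reference, a connector built by the method above during a detach might take roughly the following shape. Every value here is a made-up example, not real vCenter data.

```python
# Illustrative shape of the connector dict; all morefs, IQNs and paths
# are fabricated examples.
connector = {
    'ip': '10.10.10.9',
    'initiator': 'iqn.1998-01.com.vmware:esx-host-1',
    'vm_moref': 'vm-42',
    # Present only on detach, from the cache populated by detach_volume:
    'volume_datastore_moref': 'datastore-7',
    'volume_vmdk_path': '[datastore1] volumes/volume-1/volume-1.vmdk',
}

# On a fresh attach the cached keys are simply absent:
fresh_attach = {k: v for k, v in connector.items()
                if k not in ('volume_datastore_moref', 'volume_vmdk_path')}
```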

initialize_connection

  • This method is called when nova decides to initialize connection to the volume service.
  • The driver decides where to create or relocate the volume. This is a contract between nova and the driver.

Case - 1

If the virtual machine instance to which the volume is being attached is already present.

  • If the volume does not have a VMDK in the inventory.
    • Relocate the volume to virtual machine instance's ESX host.
    • Create a VMDK device for the volume with the specified vmdk_type (see create_volume) on appropriate datastore.
  • If the volume has a VMDK and its datastore is not visible to the ESX managing the virtual machine instance.
    • Relocate the volume onto one of datastores that is visible to the ESX managing the virtual machine instance.
  • At this point the virtual machine instance and volume are under a common ESX.
  • Return the volume's VMDK path so that nova can attach it as a new disk device to the virtual machine instance.

Case - 2

If the virtual machine instance does not exist. This is the case of booting a virtual machine instance from a volume.

  • If the volume does not have a VMDK. (Ideally the user would not use a fresh, empty volume as a bootable device.)
    • We create a VMDK for volume in the inventory on appropriate datastore.
  • At this point we have volume with VMDK in the inventory.
  • We return the volume's VMDK path and datastore. Nova *must* create virtual machine instance on this datastore and add a disk device using the VMDK path.
def _create_volume_vmdk(self, volume, volume_moref, datastore_moref):

    # Get the vmdk_type extra spec from the volume type
    vmdk_type = get_extra_spec_from_volume_type(volume)
    if not vmdk_type:
        vmdk_type = 'thin'

    # Reconfigure by adding disk device
    disk_device = add_disk_device(vmdk_type, volume_moref, datastore_moref)
    
    return disk_device

def initialize_connection(self, volume, metadata, connector):
    """Setup volume to be attached to an instance.
    """
    data = {}
    data['driver_volume_type'] = 'vmdk'
    data['data'] = {}

    disk_device = get_disk_device(metadata['volume_moref'])

    # Get host moref of the virtual machine instance
    host_moref = get_host(connector['vm_moref'])

    if connector['vm_moref']:
        # Case where virtual machine instance is existing

        if not disk_device:
            # Case where volume does not have a VMDK

            # Relocate volume to the virtual machine instance's host
            relocate_volume(metadata['volume_moref'], host_moref)

            # Create volume_group folder and register volume to that folder
            create_volume_group_folder_register_volume(metadata['volume_moref'])

            # Pick appropriate datastore using the inputs
            datastore_moref = select_datastore_strategy(metadata, host_moref)
            disk_device = self._create_volume_vmdk(
                volume, metadata['volume_moref'], datastore_moref)

        else:
            # Case where volume has VMDK

            datastore_moref = get_datastore(disk_device)

            if not is_datastore_visible(host_moref, datastore_moref):
                # Case where volume's VMDK is not visible to the host
                # managing virtual machine instance. Relocate the volume
                # to a visible datastore.

                # Pick appropriate datastore using the inputs.
                datastore_moref = select_datastore_strategy(metadata, host_moref)

                # Relocate volume using the inputs.
                relocate_volume(metadata['volume_moref'], datastore_moref)

                # Create volume_group folder and register volume to that folder
                create_volume_group_folder_register_volume(metadata['volume_moref'])

    else:
        # Case where virtual machine instance is not present (booting from a volume)
        
        if not disk_device:
            # Case where volume does not have a VMDK

            # Pick appropriate datastore using the inputs
            datastore_moref = select_datastore_strategy(metadata, host_moref)
            disk_device = self._create_volume_vmdk(
                volume, metadata['volume_moref'], datastore_moref)

        else:
            datastore_moref = get_datastore(disk_device)

        # Set volume's datastore, so that nova can create the
        # virtual machine instance accordingly.
        data['data']['volume_datastore_moref'] = datastore_moref

    # Set volume's vmdk path
    data['data']['volume_vmdk_path'] = get_vmdk_path(disk_device)

    # Set volume's name
    data['data']['volume_name'] = volume['name']

    return data
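For the boot-from-volume case, the dict returned above would take roughly the following shape (all morefs, paths and names are illustrative):

```python
# Illustrative return value of initialize_connection in the
# boot-from-volume case; every value is made up.
connection_info = {
    'driver_volume_type': 'vmdk',
    'data': {
        # Set only when the instance does not yet exist, so that nova
        # can create the instance on the same datastore:
        'volume_datastore_moref': 'datastore-7',
        'volume_vmdk_path': '[datastore1] volumes/volume-1/volume-1.vmdk',
        'volume_name': 'volume-1',
    },
}
```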

nova#attach_volume

  • Reconfigure the existing virtual machine instance by creating a new disk device backed by the volume's VMDK.
  • At this point, the virtual machine instance's ESX can view the volume's datastore.
  • Save the volume's VMDK uuid as part of instance's extraConfig.
  • See initialize_connection for how the input connection_info is formed.
def attach_volume(self, connection_info, instance, mountpoint):
    """ Attach volume storage to VM instance
    """

    if connection_info['driver_volume_type'] == 'vmdk':
        self._attach_vmdk_volume(connection_info, instance, mountpoint)

    elif connection_info['driver_volume_type'] == 'iscsi':
        self._attach_scsi_volume(connection_info, instance, mountpoint)

    else:
        raise exception.VolumeDriverNotFound(...)

def _attach_vmdk_volume(self, vmdk_info, instance, mountpoint):
    """ Attach VMDK volume to VM instance
    """

    # Extract volume's VMDK path
    volume_vmdk_path = vmdk_info['volume_vmdk_path']

    # Get virtual machine instance's moref
    vm_moref = get_moref(instance['name'])

    # Get details required for adding disk device such as
    # adapter_type, unit_number, controller_key
    hardware_devices = self._session._call_method(vim_util, 'get_dynamic_property',
                                                  vm_moref, 'VirtualMachine',
                                                  'config.hardware.device')
    vmdk_file_path, controller_key, adapter_type, disk_type, unit_number \
        = vm_util.get_vmdk_path_and_adapter_type(hardware_devices)

    # Attach the disk to virtual machine instance
    volume_device = self.attach_disk_to_vm(vm_moref, instance['name'], adapter_type,
                            disk_type=None, vmdk_path=volume_vmdk_path,
                            controller_key=controller_key, unit_number=unit_number)

    # Extract volume's name
    volume_name = vmdk_info['volume_name']

    # Get the volume device uuid
    uuid = get_device_uuid(volume_device)

    # Save volume_name:device_uuid in extraConfig
    reconfig_vm_add_extraConfig(volume_name, uuid)

nova#detach_volume

  • Reconfigure the virtual machine instance by removing volume's disk device.
    • See attach_volume for how we set the extraConfig entry to identify the volume device.
  • The virtual machine instance may have been moved to another datastore by SDRS by copying the volume's VMDK chain along.
  • See get_volume_connector for how the cache of the volume's VMDK path and datastore is used.
def detach_volume(self, connection_info, instance, mountpoint):
    """ Detach volume from VM instance
    """

    if connection_info['driver_volume_type'] == 'vmdk':
        self._detach_vmdk_volume(connection_info, instance, mountpoint)

    elif connection_info['driver_volume_type'] == 'iscsi':
        self._detach_scsi_volume(connection_info, instance, mountpoint)

def _detach_vmdk_volume(self, vmdk_info, instance, mountpoint):
    """ Detach VMDK volume from VM instance
    """

    # Get virtual machine instance's moref
    vm_moref = get_moref(instance['name'])

    # Get volume's device
    volume_name = vmdk_info['volume_name']
    device_uuid = get_vm_extraConfig_entry(vm_moref, volume_name)
    volume_device = get_device(vm_moref, device_uuid)

    vmdk_path = get_vmdk_path(volume_device)
    datastore_moref = get_datastore(volume_device)

    # Save volume device vmdk path and datastore
    self._cache[instance['name']] = {}
    self._cache[instance['name']]['volume_datastore_moref'] = datastore_moref
    self._cache[instance['name']]['volume_vmdk_path'] = vmdk_path

    # Detach the disk from virtual machine instance and don't delete backing
    self.detach_disk_from_vm(vm_moref, instance['name'], volume_device, file_operation=None)

    # Delete extraConfig volume_name:device_uuid
    reconfigure_vm_delete_extraConfig(vm_moref, volume_name)

terminate_connection

  • This method is called when nova decides to terminate the connection with the volume service.
  • At this point, the volume's VMDK is either moved to a new datastore or not moved by SDRS.
  • If the volume's VMDK has been moved to a new datastore. (See get_volume_connector for how the connector data is collected.)
    • We relocate the existing volume to the new datastore with VirtualMachineRelocateDiskMoveOptions=moveAllDiskBackingsAndAllowSharing so that we share the read only disks that are part of the volume's VMDK chain (the chain could be due to snapshotting the volume, see create_snapshot)
    • At this point we have two writable VMDK deltas with a common parent VMDK chain. We need to delete the VMDK child that is connected to the volume's backing VM and then add the child VMDK that is attached to or has been detached from the virtual machine instance to the volume's backing VM.
def terminate_connection(self, volume, connector, force=False, **kwargs):
    """Consolidate volume state if possible.
    """

    if not connector['volume_datastore_moref']:
        # The volume is not being detached from the virtual machine instance
        return

    volume_moref = get_volume_by_name(volume['name'])
    vmdk_path = get_vmdk_path(volume_moref)
    if vmdk_path == connector['volume_vmdk_path']:
        # The volume has not moved from its original location.
        # No consolidation is required.
        return

    # The volume has been moved from its original location.
    # We first relocate the volume to the new datastore, then delete the
    # existing disk device and add a new one backed by the VMDK file at
    # connector['volume_vmdk_path'].

    # Move the volume to the new datastore.
    relocate_volume(volume_moref, connector['volume_datastore_moref'])

    # Create volume_group folder and register volume to that folder
    create_volume_group_folder_register_volume(volume_moref)

    # Delete disk device.
    delete_disk_device(volume_moref)

    # Add new disk with connector['volume_vmdk_path'] as file backing.
    add_disk_device(volume_moref, connector['volume_vmdk_path'])

[Figure: terminate_connection workflow (Terminate connection.png)]

create_snapshot

  • Here we create a snapshot of the volume's backing VM, using the snapshot ID as the name.
  • Note: OpenStack allows snapshotting only a detached (available) volume.
def create_snapshot(self, snapshot):
    """Snapshot the volume.
    """

    volume_moref = get_volume_by_name(snapshot['volume_name'])
    snapshot_vm(volume_moref, snapshot['name'])

delete_snapshot

  • Delete the snapshot uniquely identified by name.
def delete_snapshot(self, snapshot):
    """Delete snapshot of the volume.
    """

    volume_moref = get_volume_by_name(snapshot['volume_name'])
    delete_snapshot_by_name(volume_moref, snapshot['name'])

create_volume_from_snapshot

Case 1:

  • This is the default case, where we perform full clone from a snapshot point.
  • We create a new VM backing the new volume. We use VirtualMachineRelocateDiskMoveOptions='moveAllDiskBackingsAndDisallowSharing' as part of clone spec.

Case 2:

  • This is a case where we perform linked clone from snapshot point provided, volume type has extra spec clone_type=fast.
  • We create a new VM backing the new volume. We use VirtualMachineRelocateDiskMoveOptions='createNewChildDiskBacking' as part of clone spec.
def create_volume_from_snapshot(self, volume, snapshot):
    """ Creates volume from given snapshot """

    volume_moref = get_volume_by_name(snapshot['volume_name'])
    snapshot_moref = get_snapshot_by_name(volume_moref, snapshot['name'])
    volume_type = volume_types.get_volume_type(volume['volume_type_id'])
    clone_type = volume_type['extra_specs'].get('clone_type')

    if clone_type == 'fast':
        relocate_option = 'createNewChildDiskBacking'
    else:
        relocate_option = 'moveAllDiskBackingsAndDisallowSharing'

    new_volume_moref = clone_vm_from_snapshot(volume_moref, snapshot_moref, relocate_option)

    return {'metadata':
                {'volume_moref': new_volume_moref}
           }

clone_image

  • TODO

copy_volume_to_image

  • TODO

copy_image_to_volume

  • TODO

nova#spawn - booting from a volume workflow

  • TODO

devstack support

The following code will need to be added to devstack (lib/cinder)

   elif [ "$CINDER_DRIVER" == "vsphere" ]; then
       echo_summary "Using VMware vCenter driver"
       iniset $CINDER_CONF DEFAULT enabled_backends vmware
       iniset $CINDER_CONF vmware host_ip "$VMWAREAPI_IP"
       iniset $CINDER_CONF vmware host_username "$VMWAREAPI_USER"
       iniset $CINDER_CONF vmware host_password "$VMWAREAPI_PASSWORD"
       iniset $CINDER_CONF vmware cluster_name "$VMWAREAPI_CLUSTER"
       iniset $CINDER_CONF vmware volume_driver "cinder.volume.drivers.vmware.vmdk.VMwareVcVmdkDriver"
   fi
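With that hook in place, a devstack localrc along these lines would select the driver (all values illustrative, mirroring the variables consumed by the snippet above):

```shell
# Illustrative localrc fragment; IP, credentials and cluster name are
# made-up examples consumed by the lib/cinder hook above.
CINDER_DRIVER=vsphere
VMWAREAPI_IP=10.10.10.9
VMWAREAPI_USER=myuser
VMWAREAPI_PASSWORD=mypass
VMWAREAPI_CLUSTER=cluster-1
```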