
Revision as of 11:44, 17 March 2014

Support for Raw Device Mapping (RDM)

The corresponding BP is https://blueprints.launchpad.net/nova/+spec/support-for-raw-device-mapping

RDM

With RDM, a storage logical unit number (LUN) on the storage area network (SAN) can be directly connected to an instance or VM.

For most data center applications, including databases (Oracle RAC, etc.), customer relationship management (CRM) applications and enterprise resource planning (ERP) applications, RDM can be used for configurations involving clustering between virtual machines, between physical and virtual machines, or where SAN-aware applications are running inside a virtual machine.

RDM, which permits the use of existing SAN commands, is generally used to improve performance in I/O-intensive applications and block locking. Physical mode provides access to most hardware functions of the storage system that is mapped.

Use Cases

  1. Boot from an RDM volume
  2. Attach an RDM volume to an instance

Libvirt Driver Support

The domain XML for RDM is composed of a SCSI controller with the "virtio-scsi" model and a LUN device connected to that controller.

A sample XML snippet follows:

     <disk type='block' device='lun'>
       <driver name='qemu' type='raw' cache='none'/>
       <source dev='/dev/mapper/360022a110000ecba5db427db00000023'/>
       <target dev='sdb' bus='scsi'/>
       <address type='drive' controller='0' bus='0'/>
     </disk>

     <controller type='scsi' index='0' model='virtio-scsi'/>

The LUN device whose path is '/dev/mapper/360022a110000ecba5db427db00000023' on the physical server can then be accessed by the instance, for example by running the command "sg_inq /dev/sdb" in the guest.
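As an illustrative sketch of how such a config could be generated — using plain ElementTree rather than Nova's actual libvirt config classes, and with hypothetical function names:

```python
import xml.etree.ElementTree as ET

def build_rdm_disk_xml(source_dev, target_dev="sdb", controller_index="0"):
    """Build the <disk device='lun'> element for an RDM volume,
    matching the sample XML above."""
    disk = ET.Element("disk", type="block", device="lun")
    ET.SubElement(disk, "driver", name="qemu", type="raw", cache="none")
    ET.SubElement(disk, "source", dev=source_dev)
    ET.SubElement(disk, "target", dev=target_dev, bus="scsi")
    ET.SubElement(disk, "address", type="drive",
                  controller=controller_index, bus="0")
    return disk

def build_virtio_scsi_controller_xml(index="0"):
    """Build the matching virtio-scsi <controller> element."""
    return ET.Element("controller", type="scsi",
                      index=index, model="virtio-scsi")

if __name__ == "__main__":
    disk = build_rdm_disk_xml(
        "/dev/mapper/360022a110000ecba5db427db00000023")
    print(ET.tostring(disk, encoding="unicode"))
    print(ET.tostring(build_virtio_scsi_controller_xml(),
                      encoding="unicode"))
```

Note that the "controller" attribute of the disk's address must point at the "index" of the virtio-scsi controller, which is the constraint the Notices section below is about.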

Notices

When we only specify a disk device with the "scsi" bus but without a SCSI controller, libvirt will create a SCSI controller of the "lsi" model by default, which makes RDM unable to work properly.

If we attach an RDM volume to an instance that already has a SCSI controller of a model other than "virtio-scsi", we must first attach a "virtio-scsi" controller with an "index" value different from those of the existing SCSI controllers, and then attach the volume, setting the "controller" attribute in the "address" sub-element of the disk config to the "index" value of the newly attached "virtio-scsi" controller.

Related works

  1. Block-device-mapping-v2 already supports attaching, or booting from, a volume connected to the instance via the SCSI bus with a device type of "lun", but it cannot generate a virtio-scsi controller.
  2. The libvirt-virtio-scsi-driver BP ([1]), whose milestone target is icehouse-3, aims to generate a virtio-scsi controller when using an image with the virtio-scsi property, but it does not appear to take boot-from-volume and attach-volume into account.

Implementation

Disk Configuration

The disk configuration needed by RDM is already partially supported by the block-device-mapping-v2 extension, as follows:

--block-device id=xxx,source=volume,dest=volume,bus=scsi,type=lun

However, when attaching an RDM device we also need to support specifying the "controller" attribute of the "address" sub-element for the block device, as described below.

RDM volume

Add a "hw_scsi_model" metadata property to the volume, with default value "lsi"; it can be set to "virtio-scsi" to identify a volume that needs to be used as RDM.

Boot from RDM volume

In the libvirt driver, append the virtio-scsi controller (to be introduced by the libvirt-virtio-scsi-driver BP [1]) when booting from a volume with the "hw_scsi_model=virtio-scsi" metadata.
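A minimal sketch of that decision, assuming the volume's metadata is available as a plain dict (the `volume` shape and function name are assumptions for illustration, not Cinder's actual API):

```python
def needs_virtio_scsi(volume):
    """Return True when the volume is marked for RDM use via its
    "hw_scsi_model" metadata property; the default "lsi" means no
    virtio-scsi controller is required."""
    metadata = volume.get("metadata") or {}
    return metadata.get("hw_scsi_model", "lsi") == "virtio-scsi"
```

The driver would call this while building the guest config and, when it returns True, append a virtio-scsi controller element before emitting the disk.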

Attach RDM volume

  1. If the current instance does not have a virtio-scsi controller, first attach a virtio-scsi controller with a unique "index" value.
  2. Set the "controller" attribute of the "address" sub-element in the volume's corresponding block-device config to the "index" of the virtio-scsi controller.
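The two steps above can be sketched as follows; the dict-based representation of controllers and block devices is a stand-in for the real libvirt driver plumbing, and all function names are hypothetical:

```python
def pick_free_index(controllers):
    """Return an index not used by any existing SCSI controller."""
    used = {int(c["index"]) for c in controllers}
    idx = 0
    while idx in used:
        idx += 1
    return idx

def ensure_virtio_scsi_controller(controllers):
    """Step 1: reuse an existing virtio-scsi controller, or attach a
    new one with an index that does not clash with other controllers."""
    for c in controllers:
        if c["model"] == "virtio-scsi":
            return int(c["index"])
    new_index = pick_free_index(controllers)
    controllers.append({"type": "scsi", "index": str(new_index),
                        "model": "virtio-scsi"})
    return new_index

def attach_rdm_volume(controllers, block_device):
    """Step 2: point the volume's "address" sub-element at the
    virtio-scsi controller's index."""
    index = ensure_virtio_scsi_controller(controllers)
    block_device["address"] = {"type": "drive",
                               "controller": str(index), "bus": "0"}
    return block_device
```

This mirrors the Notices section: if the instance already has only an "lsi" controller at index 0, the new virtio-scsi controller gets index 1, and the disk's address is pointed at 1 rather than 0.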

References

[1] https://blueprints.launchpad.net/nova/+spec/libvirt-virtio-scsi-driver