
Support for Raw Device Mapping (RDM)

The corresponding blueprint (BP) is https://blueprints.launchpad.net/nova/+spec/support-for-raw-device-mapping

RDM

With RDM, a storage logical unit (LUN) from the storage area network (SAN) can be directly connected to an instance or VM.

For most data center applications, including databases (e.g. Oracle RAC), customer relationship management (CRM) applications, and enterprise resource planning (ERP) applications, RDM can be used for configurations involving clustering between virtual machines, between physical and virtual machines, or where SAN-aware applications are running inside a virtual machine.

RDM, which permits the use of existing SAN commands, is generally used to improve performance in I/O-intensive applications and to support block locking. Physical-mode RDM provides access to most hardware functions of the mapped storage system.

Use Cases

  1. Boot from an RDM volume
  2. Attach an RDM volume to an instance

Libvirt Driver Support

The domain XML for RDM is composed of a SCSI controller with the "virtio-scsi" model and a "lun" disk device that is connected to the controller.

A sample of the XML follows:

     <disk type='block' device='lun'>
       <driver name='qemu' type='raw' cache='none'/>
       <source dev='/dev/mapper/360022a110000ecba5db427db00000023'/>
       <target dev='sdb' bus='scsi'/>
       <address type='drive' controller='0' bus='0'/>
     </disk>

     <controller type='scsi' index='0' model='virtio-scsi'/>

The LUN whose path on the physical server is '/dev/mapper/360022a110000ecba5db427db00000023' can then be accessed directly by the instance; for example, running the command "sg_inq /dev/sdb" in the guest queries the SCSI inquiry data of the underlying LUN.

Notes

If we only specify a disk device with the "scsi" bus but do not specify a SCSI controller, libvirt will create a SCSI controller of the "lsi" model by default, which prevents RDM from working properly.

If we attach an RDM volume to an instance that already has a SCSI controller of a model other than "virtio-scsi", we must first attach a "virtio-scsi" controller whose "index" value differs from those of the existing SCSI controllers, then attach the volume, setting the "controller" attribute in the "address" sub-element of the disk config to the "index" of the newly attached "virtio-scsi" controller, as sketched below.
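
For illustration, a sketch of the resulting configuration (assuming an existing non-virtio-scsi controller already occupies index '0', and reusing the example LUN path from above):

     <controller type='scsi' index='1' model='virtio-scsi'/>

     <disk type='block' device='lun'>
       <driver name='qemu' type='raw' cache='none'/>
       <source dev='/dev/mapper/360022a110000ecba5db427db00000023'/>
       <target dev='sdb' bus='scsi'/>
       <address type='drive' controller='1' bus='0'/>
     </disk>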

Related works

  1. Block-device-mapping-v2 already supports attaching, or booting from, a volume that is connected to an instance via the "scsi" bus with a device type of "lun", but it cannot generate a virtio-scsi controller.
  2. The Libvirt-virtio-scsi-driver BP ([1]), whose milestone target is icehouse-3, aims to generate a virtio-scsi controller when an image has the virtio-scsi property set, but it does not appear to take boot-from-volume and attach-volume into account.

Implementation

Disk Configuration

The disk configuration needed by RDM is already provided by the block-device-mapping-v2 extension, as follows:

--block-device id=xxx,source=volume,dest=volume,bus=scsi,type=lun
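
For example, a complete boot-from-volume request using this option might look like the following (the flavor, volume ID, and instance name are placeholders, not values mandated by this design; "bootindex=0" is the block-device-mapping-v2 key that marks the volume as the boot device):

     nova boot --flavor m1.small \
       --block-device id=<volume-id>,source=volume,dest=volume,bus=scsi,type=lun,bootindex=0 \
       rdm-instance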

RDM volume

Add a "hw_scsi_model" metadata property to the volume, with a default value of "lsi"; it can be set to "virtio-scsi" to identify a volume that needs to be used as RDM.
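
Assuming the property is stored as ordinary volume metadata, it could be set with the cinder client, for example:

     cinder metadata <volume-id> set hw_scsi_model=virtio-scsi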

Boot from RDM volume

In the libvirt driver, append the virtio-scsi controller (to be introduced by the libvirt-virtio-scsi-driver BP [1]) when booting from a volume with the "hw_scsi_model=virtio-scsi" metadata.
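
Concretely, the generated domain XML would then contain a controller element like the one in the sample above:

     <controller type='scsi' index='0' model='virtio-scsi'/>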

Attach RDM volume

If the current instance does not have a virtio-scsi controller, we first attach a virtio-scsi controller with a unique "index" value, and then attach the volume to that controller.
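
As a rough sketch of the equivalent manual operation, one would hot-plug the controller first and then the disk via virsh (the domain name and XML file names are placeholders; the XML contents correspond to the controller and disk snippets shown earlier):

     virsh attach-device <instance> virtio-scsi-controller.xml --live
     virsh attach-device <instance> rdm-disk.xml --live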

References

[1] https://blueprints.launchpad.net/nova/+spec/libvirt-virtio-scsi-driver