Cinder/NewLVMbasedDriverForSharedStorageInCinder
Revision as of 23:49, 6 March 2014
Related blueprints
- https://blueprints.launchpad.net/cinder/+spec/lvm-driver-for-shared-storage
- https://blueprints.launchpad.net/nova/+spec/lvm-driver-for-shared-storage
Goals
- Implement a new LVM-based driver which supports a shared or clustered LVM on a SAN volume instead of using a software iSCSI target.
Overview
With the standard LVM driver, Cinder creates volumes from a volume group and exports them through a software iSCSI target; a compute node then accesses these volumes remotely through a software iSCSI initiator. In this model, the compute node does not need to be connected to the backend storage that holds the volume group, so Cinder can provide volumes to compute nodes easily. However, a user cannot access a volume directly via the SAN, because this LVM driver presupposes volume access over the network through a software iSCSI target.
In contrast, the new LVM-based shared storage driver supports volume access via the SAN, using a shared or clustered LVM on a SAN volume instead of a software iSCSI target. With a shared or clustered LVM on a SAN volume, guest VMs on compute nodes can access their attached volumes over the SAN instead of the network. This improves the I/O bandwidth and response time of guest VMs.
(a) Standard iSCSI LVM driver
Benefits of this driver
- Improves I/O bandwidth and response time of guest VMs by supporting volume access via the SAN, using a shared or clustered LVM on a SAN volume
- Reduces network traffic for data transfer between the Cinder node and compute nodes
- Basic Cinder features such as snapshot and backup remain available, as in the standard Cinder LVM driver
Basic Design
- This driver uses a shared or clustered LVM on a SAN volume between the Cinder node and compute nodes. With a shared or clustered LVM, multiple servers can refer to the same VG and LVs.
- This driver creates an LV for a guest VM from a VG on the SAN volume. The VG on the SAN volume must be prepared in advance.
- LVM keeps a management region containing metadata about the LVM configuration. If multiple servers update this metadata at the same time, the metadata will be corrupted. Therefore, operations that update metadata must be permitted only on the Cinder node.
- The operations that update metadata are the following:
  - Volume create
    - When the Cinder node creates a new LV on a VG, the LVM metadata is renewed, but the update is not propagated to the compute nodes; at this point only the Cinder node knows about it.
  - Volume delete
    - Delete an LV on a VG from the Cinder node.
  - Volume extend
    - Extend an LV on a VG from the Cinder node.
  - Snapshot create
    - Create a snapshot of an LV on a VG from the Cinder node.
  - Snapshot delete
    - Delete a snapshot of an LV on a VG from the Cinder node.
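For illustration, the metadata-updating operations above map onto ordinary LVM commands run on the Cinder node. This is only a sketch: the VG name cinder-volumes-shared and the LV names are assumptions, and the actual driver issues the equivalent operations through Cinder's LVM utility code rather than a shell. These commands require root and a pre-created VG, so no output is shown.

```shell
# Sketch of the metadata-updating LVM operations, run on the Cinder node only.
# VG name "cinder-volumes-shared" and LV names are illustrative assumptions.

# Volume create: make a 1 GiB LV on the shared VG
lvcreate --name volume-0001 --size 1G cinder-volumes-shared

# Volume extend: grow the LV to 2 GiB
lvextend --size 2G cinder-volumes-shared/volume-0001

# Snapshot create: take a snapshot of the LV
lvcreate --snapshot --name snapshot-0001 --size 1G cinder-volumes-shared/volume-0001

# Snapshot delete and volume delete
lvremove -f cinder-volumes-shared/snapshot-0001
lvremove -f cinder-volumes-shared/volume-0001
```

Because each of these commands rewrites the VG metadata on the SAN volume, running any of them from a compute node at the same time could corrupt the metadata, which is why the design restricts them to the Cinder node.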
- The operations that do not update metadata are the following. These operations are permitted on every compute node.
  - Volume attach
    - When attaching an LV to a guest VM on a compute node, the compute node has to reload the LVM metadata using "lvscan" or "lvs", because it does not know the latest metadata. After reloading, the compute node recognizes the latest status of the VG and LVs.
    - Then, in order to attach the new LV, the compute node needs to create a device file such as /dev/"VG name"/"LV name" using the "lvchange -ay" command.
    - After the LV is activated, nova-compute can attach it to the guest VM.
  - Volume detach
    - After detaching a volume from the guest VM, the compute node deactivates the LV using "lvchange -an". As a result, the unneeded device file is removed from the compute node.
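The attach/detach flow above can likewise be sketched as the commands a compute node would run. The VG and LV names are illustrative assumptions, and nova-compute performs the equivalent steps internally rather than through a shell; the commands need root and a visible shared VG, so no output is shown.

```shell
# On the compute node: rescan LVM metadata from disk, since the node
# does not automatically see LVs created by the Cinder node.
lvscan

# Volume attach: activate the LV so that the device file
# /dev/cinder-volumes-shared/volume-0001 appears, then hand
# that device path to the guest VM.
lvchange -ay cinder-volumes-shared/volume-0001

# Volume detach: deactivate the LV; its device file is removed.
lvchange -an cinder-volumes-shared/volume-0001
```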
Node | Volume create | Volume delete | Volume extend | Snapshot create | Snapshot delete | Volume attach | Volume detach
---|---|---|---|---|---|---|---
Cinder node | x | x | x | x | x | - | -
Compute node | - | - | - | - | - | x | x
Cinder node with compute | x | x | x | x | x | x | x
Prerequisites
- Use QEMU/KVM as a hypervisor (via libvirt compute driver)
- Shared block storages between Cinder node and compute nodes via iSCSI or Fibre Channel
- A volume group on the SAN volume
- Disable lvmetad on compute nodes
  - When a compute node attaches a created volume to a virtual machine, the latest LVM metadata is necessary. However, lvmetad caches LVM metadata, which prevents the compute node from obtaining the latest metadata.
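On each compute node, lvmetad can be disabled in /etc/lvm/lvm.conf. This is a minimal sketch; the exact section layout depends on the distribution's default lvm.conf.

```ini
# /etc/lvm/lvm.conf on compute nodes
global {
    # Do not cache metadata via lvmetad; always read it from disk,
    # so changes made by the Cinder node become visible.
    use_lvmetad = 0
}
```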
Configuration
In order to enable the shared LVM driver, the following values need to be defined in /etc/cinder/cinder.conf.
Example
[LVM_shared]
volume_group=cinder-volumes-shared
volume_driver=cinder.volume.drivers.lvm.LVMSharedDriver
volume_backend_name=LVM_shared
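When this backend section is used with Cinder's multi-backend support, the backend name also has to be listed in enabled_backends in the [DEFAULT] section of cinder.conf. The option name follows standard cinder.conf conventions; the backend name matches the example above.

```ini
[DEFAULT]
enabled_backends = LVM_shared
```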