New LVM-based driver for shared storage in Cinder

== Related blueprints ==

== Goals ==

* Implement a new LVM-based driver that supports shared or clustered LVM on a SAN volume instead of a software iSCSI target.

== Overview ==

With the standard LVM driver, Cinder creates volumes from a volume group and exports them through a software iSCSI target (tgtd or LIO); compute nodes access these volumes remotely with a software iSCSI initiator. In this method, a compute node does not need to be connected directly to the backend storage that contains the volume group, so Cinder can provide volumes to compute nodes easily. However, a user cannot access a volume directly via SAN, because this driver presupposes volume access over the network through the software iSCSI target.

In contrast, the new LVM-based driver for shared storage supports volume access via SAN, using a shared or clustered LVM volume group on a SAN volume instead of a software iSCSI target. With a shared or clustered VG on a SAN volume, guest VMs on the compute nodes access their attached volumes over the SAN rather than over the network, which improves I/O bandwidth and response time for guest VMs.
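
To make the contrast concrete, the connection information that Cinder hands to Nova differs between the two approaches. The dictionaries below are only an illustrative sketch (the field names follow the common Cinder/Nova connector conventions for iSCSI and local block devices; the exact contents for the new driver are not specified by this blueprint):

<pre>
# (a) Standard iSCSI LVM driver: the volume is exported by a software iSCSI
#     target on the Cinder node and reached over the network by the compute node.
iscsi_connection_info = {
    "driver_volume_type": "iscsi",
    "data": {
        "target_iqn": "iqn.2010-10.org.openstack:volume-<id>",
        "target_portal": "192.0.2.10:3260",  # example Cinder-node address
        "target_lun": 1,
    },
}

# (b) LVM-based driver for shared storage: the LV already exists on the SAN
#     volume that the compute node can see, so only a local device path is needed.
shared_lvm_connection_info = {
    "driver_volume_type": "local",
    "data": {
        "device_path": "/dev/cinder-volumes-shared/volume-<id>",
    },
}
</pre>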


=== (a) Standard iSCSI LVM driver ===
[Diagram: Standard iSCSI LVM driver]

=== (b) LVM-based driver for shared storage ===
[Diagram: LVM-based driver for shared storage]

== Benefits of this driver ==

# Improved I/O bandwidth and response time for guest VMs, because volumes are accessed via SAN through a shared or clustered LVM volume group on a SAN volume
# Less network traffic between the Cinder node and the compute nodes, since volume data is no longer transferred over iSCSI from the Cinder node
# Basic Cinder features such as snapshot and backup can be reused from the standard Cinder LVM driver

== Basic Design ==

* This driver uses a shared or clustered LVM volume group on a SAN volume between the Cinder node and the compute nodes. With shared or clustered LVM, multiple servers can refer to the same VG and LVs.
* The driver creates an LV for each guest VM volume from a VG on the SAN volume. The VG on the SAN volume must be prepared in advance.
* LVM holds a management region that includes the metadata of the LVM configuration. If multiple servers update this metadata at the same time, the metadata will be corrupted. Therefore, operations that update the metadata must be permitted only on the Cinder node.
* The operations that update metadata are the following:
** Volume create
*** When the Cinder node creates a new LV in a VG, the LVM metadata is updated, but the update is not propagated to the compute nodes. At this point, only the Cinder node knows about the new LV.
** Volume delete
*** Delete an LV in a VG from the Cinder node.
** Volume extend
*** Extend an LV in a VG from the Cinder node.
** Snapshot create
*** Create a snapshot of an LV in a VG from the Cinder node.
** Snapshot delete
*** Delete a snapshot of an LV in a VG from the Cinder node.
* The operations that do not update metadata are the following. These operations are permitted on every compute node (see the sketch after this list):
** Volume attach
*** When attaching an LV to a guest VM on a compute node, the compute node has to reload the LVM metadata using "lvscan" or "lvs", because it does not know the latest LVM metadata. After reloading the metadata, the compute node recognizes the latest status of the VG and LVs.
*** Then, in order to attach the new LV, the compute node needs to activate it with the "lvchange -ay" command so that a device file such as /dev/"VG name"/"LV name" is created.
*** After the LV is activated, nova-compute can attach it to the guest VM.
** Volume detach
*** After detaching a volume from the guest VM, the compute node deactivates the LV using "lvchange -an". As a result, the unneeded device file is removed from the compute node.
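
The attach/detach flow described above boils down to LVM activation commands on the compute node. Below is a minimal sketch of that flow, assuming the LVM tools are invoked through Python's subprocess module with root privileges; the helper names and the volume-group name cinder-volumes-shared are illustrative and not part of the actual driver code.

<pre>
import subprocess

VG_NAME = "cinder-volumes-shared"  # assumed VG prepared on the SAN volume


def _lvm(*cmd):
    """Run an LVM command (needs root) and raise on failure."""
    subprocess.check_call(cmd)


def attach_lv(lv_name):
    """Compute-node side of 'Volume attach' (does not update LVM metadata)."""
    # Reload LVM metadata so this node sees LVs created on the Cinder node.
    _lvm("lvscan")
    # Activate the LV; this creates the device file /dev/<VG>/<LV> locally.
    _lvm("lvchange", "-ay", "%s/%s" % (VG_NAME, lv_name))
    # nova-compute can now hand this path to libvirt as the guest's disk.
    return "/dev/%s/%s" % (VG_NAME, lv_name)


def detach_lv(lv_name):
    """Compute-node side of 'Volume detach' (does not update LVM metadata)."""
    # Deactivate the LV; the local device file is removed again.
    _lvm("lvchange", "-an", "%s/%s" % (VG_NAME, lv_name))
</pre>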
{| class="wikitable"
|+ Permitted operation matrix
! Operations !! Volume create !! Volume delete !! Volume extend !! Snapshot create !! Snapshot delete !! Volume attach !! Volume detach
|-
| Cinder node || x || x || x || x || x || - || -
|-
| Compute node || - || - || - || - || - || x || x
|-
| Cinder node with compute || x || x || x || x || x || x || x
|}
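
For completeness, the metadata-updating operations in the matrix correspond to ordinary LVM commands executed only on the Cinder node. The following sketch is illustrative only; the real driver would reuse Cinder's existing LVM helper code rather than call the commands directly.

<pre>
import subprocess

VG_NAME = "cinder-volumes-shared"  # assumed VG prepared in advance on the SAN volume


def _lvm(*cmd):
    subprocess.check_call(cmd)


def create_volume(name, size_gb):
    # "Volume create": updates LVM metadata, so Cinder node only.
    _lvm("lvcreate", "-n", name, "-L", "%dG" % size_gb, VG_NAME)


def delete_volume(name):
    # "Volume delete"
    _lvm("lvremove", "-f", "%s/%s" % (VG_NAME, name))


def extend_volume(name, new_size_gb):
    # "Volume extend"
    _lvm("lvextend", "-L", "%dG" % new_size_gb, "%s/%s" % (VG_NAME, name))


def create_snapshot(volume_name, snap_name, size_gb):
    # "Snapshot create": a copy-on-write snapshot needs its own space in the VG.
    _lvm("lvcreate", "-s", "-n", snap_name, "-L", "%dG" % size_gb,
         "%s/%s" % (VG_NAME, volume_name))


def delete_snapshot(snap_name):
    # "Snapshot delete"
    _lvm("lvremove", "-f", "%s/%s" % (VG_NAME, snap_name))
</pre>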

== Prerequisites ==

* Use QEMU/KVM as the hypervisor (via the libvirt compute driver)
* Shared block storage between the Cinder node and the compute nodes via iSCSI or Fibre Channel
* A volume group on the SAN volume
* Disable lvmetad on the compute nodes
** When a compute node attaches a created volume to a virtual machine, the latest LVM metadata is necessary. However, lvmetad caches the LVM metadata, which prevents the compute node from obtaining the latest metadata.
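
As a concrete example (illustrative only; the option name and location can differ between LVM2 versions and distributions), lvmetad is typically disabled on each compute node in /etc/lvm/lvm.conf, and the corresponding lvmetad service is stopped:

<pre>
# /etc/lvm/lvm.conf on each compute node (illustrative)
global {
    use_lvmetad = 0
}
</pre>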

== Configuration ==

In order to enable the shared LVM driver, the following values need to be defined in /etc/cinder/cinder.conf:

'''Example'''

<pre>
[LVM_shared]
volume_group=cinder-volumes-shared
volume_driver=cinder.volume.drivers.lvm.LVMSharedDriver
volume_backend_name=LVM_shared
</pre>
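
In addition, as with any Cinder multi-backend setup, the backend section has to be listed in enabled_backends in the [DEFAULT] section of cinder.conf, and a volume type can then be pointed at this backend through its volume_backend_name extra spec. For example:

<pre>
[DEFAULT]
enabled_backends=LVM_shared
</pre>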