New LVM-based driver for shared storage in Cinder

Related blueprints

Goals

  • The goal of this blueprint is to implement a new LVM-based driver that supports a shared block device on backend storage connected via Fibre Channel or iSCSI.
  • This will improve I/O bandwidth and guest VM response times compared to the standard LVM driver.

Prerequisites

  • Use QEMU/KVM as the hypervisor (via the libvirt compute driver).
  • Block storage shared between the Cinder node and compute nodes via iSCSI or Fibre Channel.
  • Disable lvmetad.

When a compute node attaches a created volume to a virtual machine, it needs the latest LVM metadata. However, lvmetad caches LVM metadata, and this cache prevents the node from obtaining the latest metadata.
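For example, lvmetad can be disabled as follows (a minimal sketch, assuming a systemd-based distribution; adjust the service handling for your init system):

# In /etc/lvm/lvm.conf, global section: make LVM read metadata from disk.
#     use_lvmetad = 0
# Then stop and disable the caching daemon so the setting takes effect:
systemctl stop lvm2-lvmetad.service lvm2-lvmetad.socket
systemctl disable lvm2-lvmetad.service lvm2-lvmetad.socket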

Overview

With the standard LVM driver, Cinder creates volumes from a volume group and exports them through a software iSCSI target; the Compute node accesses these volumes remotely using a software iSCSI initiator. In this method, the Compute node does not need to be connected directly to the backend storage that holds the volume group, so Cinder can provide volumes to the Compute node easily. However, a user cannot use storage that is connected directly via a SAN (Fibre Channel or iSCSI storage) with this driver, because the standard LVM driver presupposes virtual storage exported through a software iSCSI target (tgtd or LIO) and does not support direct SAN-attached storage.

In contrast, the new LVM-based shared storage driver supports backend storage that is connected directly via a SAN (Fibre Channel or iSCSI). As a result, this driver will improve I/O bandwidth and guest VM response times compared to the standard LVM driver.
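For example, because the shared storage appears as a local block device on every node, a compute node can confirm that it sees the same volume group that Cinder manages (a sketch; the volume group name cinder-volumes-shared is taken from the configuration example below):

# Run on a compute node; the shared VG is visible as local LVM objects.
pvs                              # shows the shared physical volume(s)
vgs cinder-volumes-shared        # shows the shared volume group
lvs cinder-volumes-shared        # lists the volumes Cinder has created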

Benefits of this driver

  1. Improved reliability compared to the traditional method, which uses a software iSCSI target
  2. Less data transfer traffic between the Cinder node and compute nodes
  3. High-performance, low-latency disk I/O by using LVM on a local block device
  4. Basic Cinder features such as snapshot and backup can be reused from the traditional Cinder LVM driver.

Basic Design

  • This driver uses block storage shared between the Cinder node and compute nodes via iSCSI or Fibre Channel.
  • Because the Volume Group and its Logical Volumes are created on the shared storage, multiple servers can refer to the same Volume Group and Logical Volumes.
  • The driver attaches an LV created from the shared Volume Group to a virtual machine as a volume, in the same way as the traditional LVM driver.
  • LVM holds a management region that includes the metadata of the LVM configuration. If multiple servers update this metadata at the same time, the metadata will be corrupted. Therefore, operations that update the metadata must be permitted only on the Cinder node.
  • The operations that update metadata are the following:
  1. Volume create
  2. Volume delete
  3. Volume extend
  4. Snapshot create
  5. Snapshot delete
  • On the other hand, the operations that do not update metadata are the following. These operations are permitted on every compute node (see the command-level sketch after the matrix below).
  1. Volume attach
  2. Volume detach
Permitted operation matrix

Node                      Volume create  Volume delete  Volume extend  Snapshot create  Snapshot delete  Volume attach  Volume detach
Cinder node               x              x              x              x                x                -              -
Compute node              -              -              -              -                -                x              x
Cinder node with compute  x              x              x              x                x                x              x
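The split can be illustrated with plain LVM commands (a hedged sketch; <id> is a placeholder for a volume or snapshot identifier, and the volume group name is taken from the configuration example below):

# On the Cinder node only: operations that rewrite the shared VG metadata.
lvcreate -L 1G -n volume-<id> cinder-volumes-shared                # volume create
lvextend -L +1G cinder-volumes-shared/volume-<id>                  # volume extend
lvcreate -s -L 1G -n snap-<id> cinder-volumes-shared/volume-<id>   # snapshot create
lvremove -f cinder-volumes-shared/volume-<id>                      # volume/snapshot delete

# On any compute node: attach/detach only activate or deactivate the LV
# locally and do not rewrite the shared metadata.
lvchange -ay cinder-volumes-shared/volume-<id>                     # volume attach
lvchange -an cinder-volumes-shared/volume-<id>                     # volume detach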

Configuration

In order to enable the Shared LVM driver, you need to define these values in /etc/cinder/cinder.conf:

Example

[LVM_shared]
volume_group=cinder-volumes-shared
volume_driver=cinder.volume.drivers.lvm.LVMSharedDriver
volume_backend_name=LVM_shared
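With Cinder's multi-backend support, the backend section also has to be listed in the [DEFAULT] section, and a volume type can be mapped to it (a sketch; the type name lvm-shared is an illustrative assumption, not part of this blueprint):

[DEFAULT]
enabled_backends=LVM_shared

# Create a volume type and point it at this backend:
cinder type-create lvm-shared
cinder type-key lvm-shared set volume_backend_name=LVM_shared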