Revision as of 22:54, 7 February 2014 by Tomoki-sekiyama (talk | contribs) (Configuration: remove redundant spaces)

New LVM-based driver for shared storage in Cinder

Related blueprints

Goals

  • To provide a more reliable and low latency LVM-based driver using shared block storage in Cinder.

Prerequisites

  • Use QEMU/KVM as a hypervisor (via libvirt compute driver)
  • Block storage shared between the Cinder node and compute nodes via iSCSI or Fibre Channel.
  • Disable lvmetad.

When a compute node attaches a newly created volume to a virtual machine, it needs the latest LVM metadata. However, lvmetad caches LVM metadata, which prevents the node from seeing the latest state.
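Disabling lvmetad is done in LVM's own configuration. A minimal sketch, assuming the stock /etc/lvm/lvm.conf layout (the exact file location may vary by distribution):

```
# /etc/lvm/lvm.conf -- apply on the Cinder node and every compute node
global {
    # 0 disables the lvmetad metadata cache, so each LVM command
    # rereads metadata directly from the shared storage
    use_lvmetad = 0
}
```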

Overview

With the traditional LVM driver, Cinder creates volumes from a volume group and exports them through a software iSCSI target; the compute node accesses them remotely through a software iSCSI initiator. In this method, the compute node does not need to be directly connected to the block storage that contains the volume group, so Cinder can provide volumes to compute nodes easily. On the other hand, a VM can lose access to its volume if the iSCSI target process runs into trouble, and data transfer traffic concentrates on the Cinder node.

In contrast, this driver assumes that the block storage is shared between the Cinder node and compute nodes via Fibre Channel or similar, so that each server can see the shared storage as a local block device. Each compute node can then access a volume that Cinder creates with LVM on that storage as a local block device, avoiding heavy data transfer traffic between the Cinder node and compute nodes.

Benefits of this driver

  1. Improved reliability compared to the traditional method, which relies on a software iSCSI target
  2. Less data transfer traffic between the Cinder node and compute nodes
  3. High-performance, low-latency disk I/O through LVM on a local block device
  4. Basic Cinder features such as snapshot and backup are inherited from the traditional Cinder LVM driver

Basic Design

  • This driver uses block storage shared between the Cinder node and compute nodes via iSCSI or Fibre Channel.
  • Because the Volume Group and Logical Volumes are created with LVM on the shared storage, multiple servers can refer to the same Volume Group and Logical Volumes.
  • The driver attaches an LV created from the shared Volume Group to a virtual machine as a volume, just like the traditional LVM driver.
  • LVM keeps a management region on the storage containing metadata that describes the LVM configuration. If multiple servers update this metadata at the same time, it will be corrupted. Therefore, operations that update metadata must be permitted only on the Cinder node.
  • The following operations update metadata:
  1. Volume create
  2. Volume delete
  3. Volume extend
  4. Snapshot create
  5. Snapshot delete
  • On the other hand, the following operations do not update metadata and are therefore permitted on every compute node:
  1. Volume attach
  2. Volume detach
Permitted operation matrix

Operations               | Volume create | Volume delete | Volume extend | Snapshot create | Snapshot delete | Volume attach | Volume detach
Cinder node              |       x       |       x       |       x       |        x        |        x        |       -       |       -
Compute node             |       -       |       -       |       -       |        -        |        -        |       x       |       x
Cinder node with compute |       x       |       x       |       x       |        x        |        x        |       x       |       x
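Given the shared-storage prerequisite, every node must see the same Volume Group as a local block device before the permission split above can work. A quick sanity check, assuming the volume group name from the configuration example (cinder-volumes-shared):

```shell
# Run on the Cinder node and on each compute node; every node should
# report the same VG, backed by the shared block device.
sudo vgs cinder-volumes-shared
sudo lvs cinder-volumes-shared
```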

Configuration

To enable the shared LVM driver, define the following values in /etc/cinder/cinder.conf:

Example

[LVM_shared]
volume_group=cinder-volumes-shared
volume_driver=cinder.volume.drivers.lvm.LVMSharedDriver
volume_backend_name=LVM_shared
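In a multi-backend setup, the backend section name typically also has to be listed in enabled_backends in the [DEFAULT] section of cinder.conf. This fragment is an assumption based on standard Cinder multi-backend configuration, not part of the original example:

```
[DEFAULT]
# Tell cinder-volume to load the [LVM_shared] backend section above
enabled_backends=LVM_shared
```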