
Cinder multi backends

This article explains how to create an environment with multiple backends. Cinder volume backends are spawned as children of cinder-volume, and each is keyed by a unique queue. They are named cinder-volume.HOST.BACKEND, for example cinder-volume.ubuntu.lvmdriver. The filter scheduler determines where to send the volume based on the volume type that is passed in.

Configuration

The configuration now has a flag, enabled_backends. This defines the names of the option groups for the different backends. Each backend named here must link to a config group (for example [lvmdriver]) in the configuration file. A full example of this with two backends is included in the configuration, and a minimal sketch is shown below. The options need to be defined within the group, or the defaults in the code will be used; a value placed in [DEFAULT] will not be used by the backends.

Each of the config groups must also define a volume type; this is explained in the Volume Type section below.

  • backend_volume_type=lvm
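
Tying this together, a minimal sketch of the relevant parts of cinder.conf might look like the following. The group names lvmdriver and lvmdriver2 and the volume_group values are assumptions used for illustration, not values taken from a real deployment.

  [DEFAULT]
  # One config group name per backend, comma separated
  enabled_backends=lvmdriver,lvmdriver2

  [lvmdriver]
  # volume_driver and any other backend-specific options also belong inside
  # the group, since values placed in [DEFAULT] are not used by the backends
  volume_group=cinder-volumes-1
  backend_volume_type=lvm

  [lvmdriver2]
  volume_group=cinder-volumes-2
  backend_volume_type=lvm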

The cinder filter_scheduler also needs to be used.
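
If the filter scheduler is not already the default in your release, it can be selected with the scheduler_driver option. The class path below matches Cinder around this time frame, but treat it as an assumption and verify it against your release.

  [DEFAULT]
  # Assumed class path for the filter scheduler
  scheduler_driver=cinder.scheduler.filter_scheduler.FilterScheduler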

Volume Type

The volume_type must be defined in the volume_type table in the database. When creating a volume for a particular backend (or for a group of more than one backend), the volume_type is passed in. There can be more than one backend per volume_type; in that case the capacity scheduler kicks in and keeps usage across the backends of that volume_type even.
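
One way to define the type is with the cinder CLI; this assumes admin credentials and that the type name matches the backend_volume_type value set in the backend's config group.

  • cinder type-create lvm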

Usage

When creating a volume, specify the volume_type. This determines which backend the volume is sent to. As previously stated, if there is more than one backend for a type, the capacity scheduler is used.

  • cinder create --volume_type lvm --display_name my-test 1

If the volume type is defined in the database but no backend in the configuration is configured with that type, then no valid host will be found. If both are defined, the volume will be provisioned to one of the matching volume backends.
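
To confirm which backend a volume landed on, an admin can inspect it after creation; the exact host attribute shown (for example os-vol-host-attr:host) depends on the client and API extensions in use, so treat the field name as an assumption.

  • cinder show my-test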