__NOTOC__
= Assumptions =
* That there can be more than one of the same backend, so there has to be a mapping in the config between the names of the backends (backend1, backend2) and their configuration settings.
  
= Ways to achieve =
* 1) Like nova-api, spawn off multiple cinder-volume children, each running its own backend. See https://gist.github.com/4588322
* 2) A single cinder-volume, with a special manager impl, for multiple backends, where each method has the backend defined in it. See https://gist.github.com/4588988
 
* 3) DON'T DO IT... That is, create multiple cinders, each with its own config/stack/port, and let your scheduler decide which one to talk to
 
  
= Pros of #1 =
* You don't modify the managers; more than one is simply spawned, each represented by an independent process.
 
* It's easy to specify a different queue name based on the name of the backend (backend1, backend2, etc.)
 
  
= Cons of #1 =
* There must be a way to "pass down" the fact that each child is a spawned instance of a particular backend; let's call it backend1. This would have to be propagated through the service code (server = service.Service.create...) in the bin script. The nova-api code does do that, but only because the paste configs all work off the name of the app, so single-app and multi-app bin scripts don't change that behavior.
 
* The impls would need some way of knowing whether they are running multi-backend, and then use a different config value, because of assumption 1. Not sure how this would pan out...
 
* There would need to be some queue work in the scheduler, since messages go to different queues based on the backend name. Just a different routing key, I assume...
 
  
= Pros of #2 =
* Same queue, same process, and the notion of the manager/impl/message queue doesn't have to change in service.py
  
= Cons of #2 =
* Every message sent down would have to have a new parameter added to it, backend=BLAH, where BLAH is the name of the backend.
* A new multi manager would need to be created, with every method being exactly the same as in the regular manager but taking an extra param, like the messages sent down.
 
* The impls would need some way of knowing whether they are running multi-backend, and then use a different config value, because of assumption 1. This would be easier if the [[MultiManager]] were init'ing the children, since we can put the params in the impls' __init__ methods.
 
  
= Pros of #3 =
* Very little to do. Make sure there is no queue collision if more than one cinder stack is started on the same node.
 
  
= Cons of #3 =
* The entire cinder stack for X backends will live on your nodes. If you have 10 backends, that's 10 * 3 processes (api/scheduler/volume).
 


== Cinder multi backends ==

This article is an explanation of how to create an environment with multiple backends. Cinder volume backends are spawned as children of cinder-volume, and each is keyed by a unique queue. They are named cinder-volume.HOST.BACKEND, for example cinder-volume.ubuntu.lvmdriver. The filter scheduler determines where to send the volume based on the volume type that's passed in.

=== Configuration ===

The configuration now has a flag, enabled_backends. This defines the names of the option groups for the different backends. Each backend defined here has to link to a config group (for example [lvmdriver]) in the configuration. A full example of this, with two backends, is included in the configuration. The options need to be defined in the group, or the defaults in the code will be used; putting a config value in [DEFAULT] will not work.
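
A minimal sketch of such a configuration with two backends is shown below. The group names, volume groups, and volume_type values are made up for illustration, and the driver path shown is the Grizzly-era LVM iSCSI driver; adjust these to match your deployment.

<pre>
[DEFAULT]
# Each name listed here must have a matching config group below.
enabled_backends=lvmdriver,lvmdriver2

[lvmdriver]
volume_group=cinder-volumes
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_type=lvm

[lvmdriver2]
volume_group=cinder-volumes-2
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_type=lvm_gold
</pre>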

Each of the config groups must also define a volume_type. This is explained below.

The cinder filter_scheduler also needs to be used.
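
Assuming the in-tree filter scheduler class, that means setting scheduler_driver in [DEFAULT]:

<pre>
[DEFAULT]
scheduler_driver=cinder.scheduler.filter_scheduler.FilterScheduler
</pre>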

=== Volume Type ===

The volume_type must be defined in the volume_type table in the database. When creating a volume for a particular backend (or a group of more than one backend), the volume_type is passed in. There can be more than one backend per volume_type; in that case the capacity scheduler kicks in and keeps the backends of a particular volume_type evenly used.
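
For example, volume types matching the made-up backends from the configuration sketch above could be created with the cinder client, which adds the corresponding rows to that table:

<pre>
cinder type-create lvm
cinder type-create lvm_gold
</pre>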

=== Usage ===

When creating a volume, specify the volume_type. This is used to determine which type of backend to send the volume to. As previously stated, if there is more than one backend for a type, the capacity scheduler is used.

* cinder create --volume_type lvm --display_name my-test 1

If the volume type is defined in the database but no backend in the configuration defines that volume type, then no valid host will be found. If both are defined, the volume will be provisioned to one of the backends for that type.