
This page has been superseded by the "official" Cinder documentation:
https://docs.openstack.org/cinder/latest/admin/blockstorage-multi-backend.html

This page is kept here for historical purposes only and is no longer maintained.

Cinder multi backends

This article is an explanation of how to create an environment with multiple backends. Cinder volume backends are spawned as children of cinder-volume, and each is keyed from a unique queue. They are named cinder-volume.HOST.BACKEND, for example cinder-volume.ubuntu.lvmdriver. The filter scheduler determines where to send the volume based on the volume type that is passed in.
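
As a rough illustration of how this looks at runtime, the listing below sketches the per-backend cinder-volume services you might see for a host named ubuntu serving the two LVM backends configured further down. The exact separator between the host and backend name (a dot versus an @) varies by release, so treat this as a sketch rather than literal output.

  • cinder service-list

Binary         Host                Zone  Status   State
cinder-volume  ubuntu@lvmdriver-1  nova  enabled  up
cinder-volume  ubuntu@lvmdriver-2  nova  enabled  up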

Configuration

The configuration now has an option, enabled_backends. This defines the names of the option groups for the different backends. Each backend named here has to link to a config group (for example [lvmdriver-1]) in the configuration. A full example with two backends is shown below. The options need to be defined in the group, or the defaults in the code will be used; a config value placed only in [DEFAULT] will not be used by the backend groups.

Example (cinder.conf)

# a list of backends that will be served by this machine
enabled_backends=lvmdriver-1,lvmdriver-2
[lvmdriver-1]
volume_group=stack-volumes-1
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name=LVM_iSCSI
[lvmdriver-2]
volume_group=stack-volumes-2
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name=LVM_iSCSI


The cinder-volume service does not monitor this config file; the file is read only at startup. You can restart the cinder-volume service (including the per-backend child processes) with the command `service cinder-volume restart`.

The Cinder filter scheduler also needs to be used so that a volume can be explicitly created on a certain backend by using volume types. Note that the filter scheduler is the default scheduler starting with Grizzly.

scheduler_driver=cinder.scheduler.filter_scheduler.FilterScheduler

Volume Type

The volume_type must be defined in the volume_types table in the database. When creating a volume for a particular backend (or a group of more than one backend), the volume_type is passed in. There can be more than one backend per volume_type, in which case the capacity scheduler kicks in and keeps the backends of a particular volume_type evenly used. In order to tie a volume_type to a backend, you must define extra specs for the type.

Example

  • cinder --os-username admin --os-tenant-name admin type-create lvm
  • cinder --os-username admin --os-tenant-name admin type-key lvm set volume_backend_name=LVM_iSCSI
  • cinder --os-username admin --os-tenant-name admin extra-specs-list (just to check the settings are there)
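
For reference, the extra-specs-list output for the type created above should show the volume_backend_name key attached to the lvm type, roughly as below (the ID column is abbreviated and the exact table layout depends on the client version, so this is only a sketch):

ID         Name  extra_specs
<type-id>  lvm   {u'volume_backend_name': u'LVM_iSCSI'}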

Usage

When creating a volume, specify the volume_type that you created in the above example. This is used to determine which backend the volume is sent to. As previously stated, if there is more than one backend for a type, the capacity scheduler is used.

  • cinder create --volume_type lvm --display_name my-test 1

If the volume type is defined in the database but no backend in the configuration advertises a matching volume_backend_name, then no valid host will be found. If both are in place, the volume will be provisioned to one of the matching volume backends.
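
A quick way to see the failure case is to create a type whose volume_backend_name does not match anything in cinder.conf; the type name bad-lvm and the backend name NO_SUCH_BACKEND below are made up for illustration. The scheduler will not find a valid host and the volume ends up in the error state.

  • cinder --os-username admin --os-tenant-name admin type-create bad-lvm
  • cinder --os-username admin --os-tenant-name admin type-key bad-lvm set volume_backend_name=NO_SUCH_BACKEND
  • cinder create --volume_type bad-lvm --display_name broken-test 1
  • cinder show broken-test (the status will be error, because no valid host was found)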