Cinder/MultiVolumeBackend

Revision as of 23:55, 8 August 2012

Summary

Allow managing multiple volume backends from a single volume manager. Right now there is a 1-1 mapping of manager to driver. This blueprint aims to provide support for a 1-n manager-to-drivers mapping, whereby volume drivers that do not depend on local host storage can manage multiple backends without having to run multiple volume managers.

The idea is to use the existing configuration section mechanism to distinguish the various drivers to load for a single volume manager.

Release Note

  1. Run one or more drivers with a single volume manager.
  2. Adds volume backend tracking to the database, so the volumes table can record which driver to choose for a given volume.

Rationale

Currently there is no way to manage multiple backends from one volume manager; the mapping is strictly 1-1. This change gives an operator managing several backends more flexibility: they could run just 2 or 3 volume manager instances for load balancing, instead of running a separate set of volume manager instances for each backend just to load balance it.

User stories

No user facing impact.

Design

Provide a way in the configuration to support multiple sections per volume driver configuration.

Implementation

Some code changes and database changes too.

Sample Config

Here's how the configuration would look with the new changes. NOTE: You can still run the regular 1-1 manager-driver setup with the existing configuration options.


# Multi Volume Backend support
multi_backend_support=True
multi_backend_configs=debian,cluster1

[debian]
volume_driver=nova.volume.driver.ISCSIDriver

[cluster1]
volume_driver=nova.volume.san.HpSanISCSIDriver
volume_format_timeout=600
san_ip=x.x.x.x
san_ssh_port=x
san_login=x
san_password=x
san_clustername=cluster1


Code Changes

Mostly update the volume manager to support multiple drivers, and update the drivers to load their configuration on startup rather than loading it when they need it.
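As a rough sketch of how the manager might resolve its drivers at startup, the following parses a multi-backend config like the sample above and maps each backend section name to its configured driver path. This is illustrative only: the helper name and the use of Python's stdlib configparser are assumptions, not the actual Nova flag-parsing code.

```python
import configparser


def load_backend_drivers(config_text):
    """Return a {backend_name: driver_path} map from a multi-backend config.

    Hypothetical helper: a real manager would go on to import and
    instantiate each driver class. Option names mirror the sample config
    (multi_backend_support, multi_backend_configs, volume_driver).
    """
    parser = configparser.ConfigParser()
    # The sample config keeps its top-level options outside any section,
    # so wrap them in [DEFAULT] before parsing.
    parser.read_string("[DEFAULT]\n" + config_text)
    if parser.get("DEFAULT", "multi_backend_support", fallback="False") != "True":
        return {}  # fall back to the regular 1-1 manager-driver setup
    drivers = {}
    for name in parser.get("DEFAULT", "multi_backend_configs").split(","):
        name = name.strip()
        drivers[name] = parser.get(name, "volume_driver")
    return drivers


sample = """\
multi_backend_support=True
multi_backend_configs=debian,cluster1

[debian]
volume_driver=nova.volume.driver.ISCSIDriver

[cluster1]
volume_driver=nova.volume.san.HpSanISCSIDriver
san_clustername=cluster1
"""

print(load_backend_drivers(sample))  # one driver path per backend name
```

With multi_backend_support left unset or False, the helper returns an empty map, matching the note above that the existing 1-1 configuration keeps working unchanged.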

Migration

Additions:

  1. volume_backends
    1. created_at - datetime
    2. updated_at - datetime
    3. deleted_at - datetime
    4. deleted - tinyint(1)
    5. id - int(11)
    6. name - varchar(255)
    7. description - varchar(255)

Modifications:

  1. volumes
    1. backend_id - int(11) Foreign key to volume_backends.id
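The additions and modifications above can be prototyped as plain DDL. A minimal sketch follows, using SQLite stand-ins for the MySQL column types and an invented pre-existing volumes table; the real migration would live in Nova's migration repository rather than run raw SQL like this.

```python
import sqlite3

# Schema sketch for the migration: MySQL types from the lists above
# (datetime, tinyint(1), int(11), varchar(255)) map to these SQLite
# equivalents.
DDL = """
CREATE TABLE volume_backends (
    created_at  DATETIME,
    updated_at  DATETIME,
    deleted_at  DATETIME,
    deleted     TINYINT(1),
    id          INTEGER PRIMARY KEY,
    name        VARCHAR(255),
    description VARCHAR(255)
);
ALTER TABLE volumes ADD COLUMN backend_id INTEGER REFERENCES volume_backends(id);
"""

conn = sqlite3.connect(":memory:")
# Stand-in for the existing volumes table (columns invented for the demo).
conn.execute("CREATE TABLE volumes (id INTEGER PRIMARY KEY, display_name VARCHAR(255))")
conn.executescript(DDL)

# PRAGMA table_info lists the columns, confirming backend_id was added.
cols = [row[1] for row in conn.execute("PRAGMA table_info(volumes)")]
print(cols)
```

Existing volumes get a NULL backend_id by default, so rows created before the migration remain valid until they are associated with a backend.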

Test/Demo Plan

TBD