Cinder-multi-backend

Revision as of 20:47, 21 January 2013

= Assumptions =

* There can be more than one instance of the same backend type, so there ''has'' to be a mapping in the config between the name of a backend (backend1, backend2, ...) and that backend's configuration settings.
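
A hedged illustration of that name-to-settings mapping. The option names and the driver path are made up for this sketch, not taken from either gist:

<pre>
# Hypothetical sketch of the assumption: two backends of the same type
# (both LVM here) each get their own named group of settings.
BACKEND_CONFIGS = {
    'backend1': {
        'volume_driver': 'cinder.volume.drivers.lvm.LVMISCSIDriver',  # hypothetical value
        'volume_group': 'cinder-volumes-1',
    },
    'backend2': {
        'volume_driver': 'cinder.volume.drivers.lvm.LVMISCSIDriver',
        'volume_group': 'cinder-volumes-2',
    },
}

def settings_for(backend_name):
    """Return the config group for a named backend (backend1, backend2, ...)."""
    return BACKEND_CONFIGS[backend_name]
</pre>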

= Ways to achieve =

* 1) Like nova-api, spawn off multiple cinder-volume children, each running its own backend. See https://gist.github.com/4588322 (a rough sketch of the idea follows this list).
* 2) A single cinder-volume with a special multi-backend manager impl, where every method has the backend identified in it. See https://gist.github.com/4588988
* 3) Don't do it at all: stand up multiple Cinder stacks, each with its own config/stack/port, and let your scheduler decide which one to talk to.
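
To make option 1 concrete without claiming it matches the gist, here is a minimal, hypothetical sketch of a parent binscript forking one cinder-volume style worker per configured backend; the backend names and the per-backend topic format carry over from the config sketch above:

<pre>
# Minimal, hypothetical sketch of option 1: fork one worker per backend,
# nova-api style. Not the gist; just the shape of the idea.
import multiprocessing

ENABLED_BACKENDS = ['backend1', 'backend2']   # would come from the config mapping

def run_volume_child(backend_name):
    # In the real binscript this would be roughly
    #   server = service.Service.create(binary='cinder-volume', ...)
    # parameterised by backend_name; a print stands in for the worker loop here.
    print('cinder-volume child for %s consuming topic cinder-volume.%s'
          % (backend_name, backend_name))

if __name__ == '__main__':
    children = [multiprocessing.Process(target=run_volume_child, args=(name,))
                for name in ENABLED_BACKENDS]
    for child in children:
        child.start()
    for child in children:
        child.join()
</pre>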

= Pros of #1 =

* You don't modify the managers. It's just that more than one is spawned, each represented by an independent process.
* It's easy to specify a differing queue name based on the name of the backend (backend1, backend2, etc.).

= Cons of #1 =

* There must be a way to "pass down" the fact that each child is a spawned instance of a particular backend; let's call it backend1. This would have to be propagated through the service code (server = service.Service.create...) in the binscript. The nova-api code ''does'' do that, but only because the paste configs all work off the name of the app, so single-app and multi-app binscripts don't change that behavior.
* The impls would need some notion of knowing whether they are multi-backend, and then use a different config value, because of assumption 1. Not sure how this would pan out...
* There would need to be some queue fun in the scheduler, since casts go to different queues based on the backend name. Just a different routing key, I assume (sketched below).
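
A hedged sketch of that routing-key point, assuming the per-backend topic format used in the option 1 sketch; the helper name is made up, and the real scheduler would go through the rpc layer:

<pre>
# Hedged sketch of the scheduler-side "queue fun": pick a per-backend topic /
# routing key instead of the single cinder-volume topic.
def volume_topic(backend_name=None, base='cinder-volume'):
    """Topic the scheduler casts to; per-backend when running multi-backend."""
    return '%s.%s' % (base, backend_name) if backend_name else base

# The cast would then look roughly like:
#   rpc.cast(context, volume_topic(chosen_backend),
#            {'method': 'create_volume', 'args': {...}})
assert volume_topic('backend1') == 'cinder-volume.backend1'
assert volume_topic() == 'cinder-volume'
</pre>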

= Pros of #2 =

* Same queue, same process, and the notion of the manager/impl/message queue doesn't have to change in service.py.

= Cons of #2 =

* Every message sent down would have to have a new parameter added to it, backend=BLAH, where BLAH is the name of the backend.
* A new multi manager would need to be created, with every method being ''exactly'' the same as in the regular manager but taking an extra param, matching the extra field on the messages sent down.
* The impls would need some notion of knowing whether they are multi-backend, and then use a different config value, because of assumption 1. This would be easier if the MultiManager were init'ing the children, since we can put the params in the impls' <code>__init__</code> methods (see the sketch after this list).
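
A minimal sketch of what that multi manager might look like, assuming the name-to-settings mapping from the Assumptions section; MultiVolumeManager, load_driver, and the method signature are invented for illustration, not taken from the gist:

<pre>
# Hedged sketch of option 2: one manager owns one driver per backend, built in
# __init__ from the per-backend config groups, and every method grows an extra
# backend=... parameter matching the extra field on the messages sent down.
class MultiVolumeManager(object):

    def __init__(self, backend_configs):
        # backend_configs is the name -> settings mapping from the Assumptions
        # section; init'ing the children here answers the "which config group
        # do I read" question for each impl.
        self.drivers = dict((name, load_driver(settings))
                            for name, settings in backend_configs.items())

    def create_volume(self, context, volume_id, backend=None, **kwargs):
        # Same as the regular manager's method, plus the backend param.
        return self.drivers[backend].create_volume(volume_id, **kwargs)


def load_driver(settings):
    """Stand-in for importing and instantiating the configured volume driver."""
    class StubDriver(object):
        def create_volume(self, volume_id, **kwargs):
            return 'volume %s on %s' % (volume_id, settings.get('volume_group'))
    return StubDriver()

# Usage sketch:
#   mgr = MultiVolumeManager(BACKEND_CONFIGS)
#   mgr.create_volume(ctxt, 'vol-1', backend='backend1')
</pre>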

= Pros of #3 =

* Very little to do. Just make sure there is no queue collision if more than one Cinder stack is started on the same node.

= Cons of #3 =

* The entire Cinder stack for X backends will live on your nodes. If you have 10 backends, that's 10 * 3 processes (api/scheduler/volume) in total.