
CinderBrick

What problems do we want to solve:

  • We have storage-related code in multiple projects now
  • Resource and quota utilization is managed in each project independently
  • Availability Zones are not fine-grained enough for scheduling (i.e. I want an instance with in-rack or on-node storage)
  • Want to be able to deploy fast local or in-rack storage to an instance that might need it (think a very high-performance database instance, etc.)
  • Key point: such deployments may not have the high-performance SAN-type storage that Cinder would normally provision; they want to utilize something local on the compute node, or even a RAID array that's in the same rack as the compute node (i.e. minimize network hops/latency)

Focus:

Emphasis is on managing the following in one shared library, rather than having, for example, both Nova and Cinder each duplicate this code (think LOCAL); a rough sketch of such a library follows the list:

  • lvm
  • qcow
  • vdi
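
As a rough illustration only (every name here, e.g. LocalStorageBackend, LVMBackend, QCOWBackend, is a hypothetical placeholder, not an agreed API), the shared library could expose one small local-storage abstraction that both Nova and Cinder import instead of each carrying its own copy:

 # Hypothetical sketch only: none of these names are settled API.
 # The point is a single local-storage abstraction that Nova and
 # Cinder both import, instead of each carrying its own copy.

 import os
 import subprocess


 class LocalStorageBackend(object):
     """Common interface for local storage types (lvm, qcow, vdi)."""

     def create_volume(self, name, size_gb):
         """Create a local volume and return its device/file path."""
         raise NotImplementedError()

     def delete_volume(self, name):
         """Remove the local volume."""
         raise NotImplementedError()


 class LVMBackend(LocalStorageBackend):
     """Back volumes with logical volumes in an existing volume group."""

     def __init__(self, vg_name):
         self.vg_name = vg_name

     def create_volume(self, name, size_gb):
         subprocess.check_call(['lvcreate', '-L', '%dG' % size_gb,
                                '-n', name, self.vg_name])
         return '/dev/%s/%s' % (self.vg_name, name)

     def delete_volume(self, name):
         subprocess.check_call(['lvremove', '-f',
                                '%s/%s' % (self.vg_name, name)])


 class QCOWBackend(LocalStorageBackend):
     """Back volumes with qcow2 files in a local directory."""

     def __init__(self, base_dir):
         self.base_dir = base_dir

     def create_volume(self, name, size_gb):
         path = os.path.join(self.base_dir, '%s.qcow2' % name)
         subprocess.check_call(['qemu-img', 'create', '-f', 'qcow2',
                                path, '%dG' % size_gb])
         return path

     def delete_volume(self, name):
         os.unlink(os.path.join(self.base_dir, '%s.qcow2' % name))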

UnFocus (what the goal is NOT):

  • iSCSI
We've already done most of this work in Havana, and it does eliminate a fair amount of duplication, so go with it.
BUT that covers iSCSI components like the tgt target and the initiator, i.e. storage-related code that's duplicated today.
It's a win, but it's not really the point of this effort.
  • Backend drivers
The idea is NOT to move all the backend drivers out of Cinder and into a library; nothing is gained by that.
For example, if you're making an iSCSI attach to a compute node, that volume should still be served up by Cinder.

Use Case Example:

  • Admin can create various storage resources on systems in their OpenStack cluster
  • Tenants can then explicitly or implicitly request paired or best-effort matching based on locality (instance/volume grouping/pairing)
Explicit example: "nova boot --flavor xxx --image yyy --local-storage <size-in-GB> my-special-instance"
Implicit example: "nova boot --flavor ZZZ --image yyy my-special-instance" (where the flavor carries the storage size, etc.; see the sketch below)
This may be Nova calling Cinder, which then deploys the resources on the appropriate nodes (brick is installed on compute/other nodes such that brick becomes a backend)
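
For the implicit case, one possibility (the extra-spec key local_storage_gb below is invented for illustration, not an agreed convention) is to hang the requested size off the flavor, e.g. "nova flavor-key ZZZ set local_storage_gb=100", and have Nova read it when building the request to Cinder:

 # Hypothetical sketch: 'local_storage_gb' is an invented extra-spec
 # key, not an agreed convention. Nova would read it from the flavor
 # and pass the size along when asking Cinder/brick for local storage.

 def requested_local_storage_gb(instance_type):
     """Return the requested local-storage size in GB, or None."""
     extra_specs = instance_type.get('extra_specs', {})
     size = extra_specs.get('local_storage_gb')
     return int(size) if size is not None else None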


Best Example (presented by Vish at last summit):

 : request comes in to Nova ->
 : Nova gets a list of potential compute nodes ->
 : Nova calculates the best X nodes and creates reservations ->
 : Nova asks Cinder to create a volume on the best node from the list ->
 : Cinder creates the volume and returns the selected host ->
 : (on failure, clear the reservations and retry, excluding the original nodes)
 : Nova clears the reservations and boots the instance on the selected host
 : (on failure, clean up the volume and retry the entire operation)
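
A minimal sketch of that loop, under the assumption that all the helpers passed in (get_candidate_hosts, reserve, cinder_create_volume, boot_on) are invented stand-ins and not real Nova/Cinder APIs:

 # Hypothetical sketch of the flow above. All helpers passed in
 # (get_candidate_hosts, reserve, cinder_create_volume, boot_on) are
 # invented stand-ins, not real Nova/Cinder APIs.

 MAX_ATTEMPTS = 3


 def boot_with_local_volume(request, get_candidate_hosts, reserve,
                            cinder_create_volume, boot_on):
     excluded = set()
     for _ in range(MAX_ATTEMPTS):
         # Nova calculates the best X nodes and creates reservations.
         candidates = [h for h in get_candidate_hosts(request)
                       if h not in excluded]
         if not candidates:
             break
         reservations = [reserve(host) for host in candidates]

         # Nova asks Cinder to create the volume on the best of the
         # candidates; Cinder returns the host it actually selected.
         try:
             volume, selected = cinder_create_volume(request, candidates)
         except Exception:
             # On failure: clear the reservations and retry, excluding
             # the nodes we already tried.
             for res in reservations:
                 res.cancel()
             excluded.update(candidates)
             continue

         # Nova clears the reservations and boots on the selected host.
         for res in reservations:
             res.cancel()
         try:
             return boot_on(selected, request, volume)
         except Exception:
             # On failure: clean up the volume and retry the entire
             # operation (here excluding the host that failed to boot).
             volume.delete()
             excluded.add(selected)
     raise RuntimeError('unable to boot instance with local volume')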

Link to the etherpad from the last go-around:

https://etherpad.openstack.org/havana-cinder-local-storage-library