- Launchpad Entry: NovaSpec:nova-virtual-storage-array
- Created: 2011-04-12
- Contributors: NelsonNahum
In order to emulate current IT environments, and to provide better capabilities than Amazon's EBS, we would like to add to Nova the capability to create Virtual Storage Arrays. Virtual Storage Arrays (VSAs) are block storage devices that have the same performance, reliability, and features as current enterprise SAN arrays such as EMC CLARiiON or HP 3PAR. With this feature, users of the cloud will be able to buy, on demand, virtual storage arrays and connect them to their virtual servers as they do in the physical environment. Within a VSA, storage administrators will be able to choose the type of drives (SSD, SAS, SATA), the type of interface (iSCSI, AoE, FCoE), the cache size, the number of virtual controllers, the RAID level, and policies around snapshots and remote replication. With this feature, cloud providers implementing OpenStack will be able to offer their clients enterprise-class storage systems at the low cost of simple disk drives attached to servers. Users of the cloud will be able to choose a particular QoS for the storage they use (e.g. only SAS drives, or only SATA). Besides QoS for block storage volumes, VSAs add multitenancy for storage: on the same storage nodes, multiple Virtual Storage Arrays can be defined for different customers.
The proposal is to add VSA as an addition to OpenStack without the need to change the volume APIs.
From the end user's standpoint, this feature lets them create Virtual Storage Arrays and set the desired QoS (e.g. the number of virtual controllers, the cache size in GB, the type of drives (SSD, SAS, SATA), etc.). Once a virtual storage array is created, the user creates volumes in the same way as today, but with an optional parameter naming the VSA to allocate from. For example, if a VSA has SAS drives, 4 virtual controllers, and 128 GB of cache, a volume created in it will have much better performance than a volume created in a VSA where all the drives are SATA and the cache is only 4 GB. The VSA is an optional feature and imposes no change on users who do not want to use it.
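As a rough illustration of the optional parameter described above, the sketch below models two volume-create request bodies, one plain and one asking for placement inside a specific VSA. The field names (`from_vsa`, the metadata shape) and the helper are hypothetical, not the actual extended API:

```python
# Hypothetical request bodies illustrating the optional VSA parameter on
# volume creation; field names are illustrative, not the actual nova API.

plain_volume = {"volume": {"size": 10, "display_name": "vol1"}}

# Same request, but asking for placement inside a specific VSA:
vsa_volume = {"volume": {"size": 10, "display_name": "vol2",
                         "metadata": {"from_vsa": "vsa-fast-sas"}}}

def wants_vsa(req):
    """Return the requested VSA id, or None for the default (non-VSA) path."""
    return req["volume"].get("metadata", {}).get("from_vsa")
```

The point of the optional-parameter design is visible here: a request without the extra field follows the existing volume path untouched, which is how the feature stays invisible to users who do not opt in.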
VSAs will allow enterprises to put applications in the cloud that require the performance, reliability, and features of enterprise storage arrays, but at the cost of the rest of the cloud components. They will also make it possible to distinguish the quality of service needed by different applications.
A user creates a VSA and specifies the size of the cache, the size of each virtual controller, and the type of drives. Other parameters, such as the RAID algorithm to use or the frequency of snapshots, may also be set at the VSA level. When the user creates a volume, they have the option of specifying which VSA to create the volume on; the volume will then have the performance and reliability of that VSA. A VSA is a container for many volumes that can belong to different instances, and these volumes can be placed anywhere in the cloud. Compute nodes access the volumes with block storage protocols such as iSCSI. Storage admins create and manage the virtual storage arrays, while application admins do not need to deal with these details when they create volumes.
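The container relationship described above can be sketched with a minimal data model: a VSA carries the QoS-defining attributes, and every volume created inside it inherits them. All names here (`Vsa`, `Volume`, `create_volume`, the attribute set) are hypothetical illustrations, not actual nova classes:

```python
from dataclasses import dataclass, field

# Minimal sketch of the VSA-as-container model; names are hypothetical.

@dataclass
class Vsa:
    name: str
    drive_type: str            # e.g. "SSD", "SAS", "SATA"
    cache_gb: int              # total cache size in GB
    controllers: int           # number of virtual controllers
    volumes: list = field(default_factory=list)

@dataclass
class Volume:
    name: str
    size_gb: int
    vsa: Vsa                   # the volume inherits QoS from its parent VSA

def create_volume(vsa, name, size_gb):
    """Create a volume inside a VSA; its QoS follows the VSA's attributes."""
    vol = Volume(name, size_gb, vsa)
    vsa.volumes.append(vol)
    return vol

# A high-performance VSA and a volume allocated from it:
fast = Vsa("db-tier", drive_type="SAS", cache_gb=128, controllers=4)
vol = create_volume(fast, "db-data", size_gb=100)
```

This mirrors the separation of roles in the text: the storage admin configures the `Vsa` object once, and the application admin only ever calls `create_volume` against it.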
- Introduce the concept of VSA to Nova with all methods for its handling (create/update/delete)
- Introduce the concept of Drive Types to Nova representing physical drives available for storage allocation
- Extend the OpenStack and EC2 APIs to accommodate VSA & Drive Types.
- The VSA consists of multiple compute instances running VSA S/W (Virtual Controllers or VCs)
- Storage may be added to a VSA at creation time or later. Users will need to select the number and types of drives they would like to add to the VSA.
- The VSA will use different RAID technologies to protect storage against failures and accelerate storage performance.
- The VSA will present its own volumes to other compute instances or external initiators
- New VSA Controller receiving APIs to create/delete/update VSA.
- New nova-vsa service running on CloudController host (in the future nova-vsa will run on all nodes hosting VSA instances)
- New nova-volume driver responsible for:
- communication with vendor-specific S/W residing on SN node
- gathering info about physical resources and updating scheduler
- creation of special volumes assigned to VSAs and representing particular drive types
- New drive-type & resource aware Scheduler for allocating volumes to VSAs
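The last bullet, a drive-type and resource aware scheduler, can be sketched as a simple placement decision: each storage node reports its free drives per type, and the scheduler picks a node that can satisfy the request. The node record shape and function name are assumptions for illustration, not the actual scheduler interface:

```python
# Minimal sketch of drive-type & capacity aware placement, assuming each
# storage node reports free drives per type (structure is hypothetical).

def schedule_vsa_drives(nodes, drive_type, count):
    """Pick the node with the most free drives of the requested type."""
    candidates = [n for n in nodes
                  if n["free_drives"].get(drive_type, 0) >= count]
    if not candidates:
        raise ValueError("no storage node can satisfy the request")
    best = max(candidates, key=lambda n: n["free_drives"][drive_type])
    best["free_drives"][drive_type] -= count   # reserve the drives
    return best["host"]

nodes = [
    {"host": "sn-1", "free_drives": {"SAS": 4, "SATA": 12}},
    {"host": "sn-2", "free_drives": {"SSD": 8, "SAS": 10}},
]
host = schedule_vsa_drives(nodes, "SAS", 6)   # -> "sn-2"
```

A real scheduler would also weigh load and locality, but the essential difference from the existing volume scheduler is visible: placement is filtered by physical drive type, which is what lets a VSA guarantee its QoS.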
See the following document:
The link below contains screenshots of the reference Dashboard with the addition of Virtual Storage Arrays and the modification to the volume screen. Cloud users and project owners can create their own VSAs according to the QoS needed, which is why Virtual Storage Arrays need to be part of the project menu. At the volume level, the only difference is that the volume owner chooses which virtual array to allocate the volume from; this automatically sets the QoS for the volume. In this example, it is possible to create a VSA exclusively from SSD drives, and a volume allocated from it is ensured to be placed on SSD drives. In the future, Zadara's virtual storage arrays will support features such as different RAID algorithms and policies around remote replication for disaster recovery. By concentrating all these features at the VSA level, cloud providers will allow their customers to assign the necessary storage QoS to their different applications, without the need to change much at the volume level and without the need to invest in SAN-based storage arrays.