Shares Service

  • Launchpad Entry: cinder-protocol-enhancements
  • Created: 29 Sep 2012
  • Contributors: Robert Esker, Ben Swartzlander

Introduction

This document is intended to vet a concept and establish a vision for adding a shared file system service to OpenStack. The primary consumption of file shares would be across OpenStack Compute instances, but the service is intended to be accessible as an independent capability, in line with the modular design established by other OpenStack services. More detailed blueprints (in Launchpad) and further detail in this specification will follow as necessary.

Background

Cinder characteristics

A partial list of Cinder characteristics relevant to the File Shares Service:

  • The OpenStack Cinder project separated the former nova-volume functionality into an independent block storage service, debuting in the Folsom release.
  • Cinder provides a data persistence model for application data and Nova instances, which are otherwise assumed ephemeral.
  • Ad hoc requests for storage, whether from running instances or outside of the Nova context, can be accommodated either programmatically via the Cinder API or via the python-cinderclient tool.
  • Cinder, in its initial form and as a legacy of nova-volume, provides a block-only construct.
  • Cinder volumes (aka block devices) are not (as of the Grizzly release) shareable among multiple simultaneous instances.
  • Cinder, in its original role as a control plane for block device access, provides facilities for many of the concepts that a File Shares service would depend upon. The present Cinder concepts of volume_types, extra_specs, snapshots, clones, the filter scheduler, and more are broadly applicable to both. Storage concepts such as initiator (client), target (server), and capacity are likewise common conceptually (if not entirely semantically). A minimal sketch of how extra-spec matching generalizes to shares follows this list.
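
To make the reuse concrete, here is a minimal, hypothetical sketch (not Cinder's actual scheduler code; all names are illustrative) of how extra_specs-style capability matching, as used by the filter scheduler for volume types, would apply unchanged to share back ends:

  # Hypothetical illustration: the same extra_specs matching used for
  # volume types applies directly to share types.  Names below are
  # illustrative, not actual Cinder scheduler internals.

  def backend_matches(extra_specs, backend_capabilities):
      """Return True if a back end advertises every requested spec."""
      return all(backend_capabilities.get(key) == value
                 for key, value in extra_specs.items())

  # A "share type" requesting NFS with snapshot support...
  gold_share_type = {'storage_protocol': 'NFS', 'snapshot_support': 'True'}

  # ...is matched against the capabilities each back end reports.
  backends = {
      'nas-1': {'storage_protocol': 'NFS', 'snapshot_support': 'True'},
      'san-1': {'storage_protocol': 'iSCSI', 'snapshot_support': 'True'},
  }

  eligible = [name for name, caps in backends.items()
              if backend_matches(gold_share_type, caps)]
  print(eligible)  # ['nas-1']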

Reference

Please refer to the OpenStack Glossary for a standard definition of terms. Additionally, the following may clarify, add to, or differ slightly from those definitions:

  • volume: persistent (non-ephemeral) block-based storage. Note that this definition may (and often does) conflict with vendor-specific definitions.
  • file system: a system for organizing and storing data in directories and files. A shared file system provides (and arbitrates) concurrent access from multiple clients generally over IP networks and via established protocols.
  • NFS: Network File System
  • CIFS: Common Internet File System
  • GlusterFS: a scale-out distributed file system
  • Ceph: a distributed storage system offering object, block, and file interfaces

High-level proposal

According to IDC in its “Worldwide File-Based Storage 2012-2016 Forecast” (doc #235910, July 2012), file-based storage continues to be a thriving market, with spending on file-based storage solutions projected to exceed $34.6 billion in 2016. Of the 27 exabytes (EB) of total disk capacity estimated to have shipped in 2012, IDC projected that nearly 18 EB were of file-based capacity, accounting for over 65% of all disk shipped by capacity. A diversity of applications, from server virtualization to relational or distributed databases to collaborative content creation, often depends on the performance, scalability, and simplicity of management associated with file-based systems, and on the large ecosystem of supporting software products. OpenStack is commonly contemplated as an option for deploying classic infrastructure in an "as a Service" model, but without specific accommodation for shared file systems it represents an incomplete solution.

We propose, and have prototyped, an evolution of Cinder that adds to its present role. Cinder presently functions as the canonical storage provisioning control plane in OpenStack for block storage; our prototype provides analogous support for coordinated access to shared or distributed file systems. A variety of deployer-driven requirements advocating for this expanded capability have informed our efforts. Cinder provides proven existing support common to the task of provisioning storage regardless of protocol type. Concepts such as capacity, target (the server in NAS parlance), and initiator (likewise the client when referring to shared file systems) are common. Specific Cinder capabilities (such as the filter scheduler, the notion of type, and extra specs) likewise apply to provisioning shared file systems. As the Oslo project (aka openstack-common) evolves, and the shares service in parallel, we can see the File Shares service splitting into a separately managed project with commonality extracted into Oslo. However, we believe the most expedient means of delivering this critical new capability to OpenStack deployers is presently via the Cinder project. The existing prototype is built in this manner and will be available concurrent with the Havana Summit (April 2013) in the form of a WIP submission for community review and discussion.

This proposal and its associated blueprints intend, in phases, to accommodate file-based storage as well. This blueprint should be treated as an overarching umbrella design document, with separate blueprints defined for each of the phases and to also account for "whole experience" interaction.

Blueprints

The master blueprint is here: Cinder Protocol Enhancements

1) Extension of the Cinder API

Extension of the Cinder API to accommodate:

  • Creation of file system shares (e.g. the create API needs to support a "protocol" as well as permissions "mask" and "ownership" parameters)
  • Coordination of mounting file system shares
  • Creation & deletion of snapshots & clones
  • List, show, provide, and deny access to file system shares
  • Unmounting of file system shares
  • Deletion of file system shares
  • Snapshotting, listing snapshots of, and deleting snapshots of file system shares

The API description is here: Cinder Protocol Enhancements API Proposal
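
As a purely illustrative sketch (the /shares resource, payload fields, endpoint, and port are assumptions, not the proposal's finalized API), a share-create request mirroring Cinder's volume-create semantics might look like this:

  # Hypothetical share-create request; the resource name and field
  # names are illustrative assumptions only.
  import json
  import requests

  CINDER_ENDPOINT = 'http://cinder.example.com:8776/v1/<tenant_id>'  # placeholder
  AUTH_TOKEN = '<keystone-token>'  # placeholder

  payload = {
      'share': {
          'name': 'marketing-share',
          'size': 10,                  # GiB, as with volume creation
          'share_proto': 'NFS',        # assumed "protocol" parameter
      }
  }

  resp = requests.post(
      CINDER_ENDPOINT + '/shares',
      headers={'X-Auth-Token': AUTH_TOKEN,
               'Content-Type': 'application/json'},
      data=json.dumps(payload),
  )
  print(resp.status_code, resp.json())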

2) Extended API Reference Driver

Creation of a reference Cinder driver for shared file system use under the proposed expanded API. As an example, a NetApp driver for this would be able to advertise, accept, and respond to requests for NFSv3, NFSv4, NFSv4.1 (with pNFS), and contemporary CIFS / SMB protocols (e.g. versions 2, 2.1, 3). Additional modification of python-cinderclient will be necessary to provide for the expanded array of request parameters. Additionally, Tempest test coverage must be provided. Both a vendor-independent reference back end and a NetApp-specific back end are part of the aforementioned prototype and will be part of the WIP submission.
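
For illustration only, a driver under the expanded API might expose an interface along these lines (the class and method names are assumptions modeled on Cinder's volume driver pattern, not the prototype's actual code):

  # Hypothetical driver interface for file shares, patterned after
  # Cinder's volume drivers.  All names here are illustrative.

  class ReferenceShareDriver(object):
      """Sketch of a back-end driver for the proposed shares API."""

      def get_capabilities(self):
          # Advertised to the filter scheduler for type matching.
          return {'storage_protocol': 'NFS', 'snapshot_support': True}

      def create_share(self, context, share):
          """Provision a share; return its export location."""
          raise NotImplementedError

      def delete_share(self, context, share):
          raise NotImplementedError

      def allow_access(self, context, share, access):
          """Grant a client (initiator) access, e.g. by IP or user."""
          raise NotImplementedError

      def create_snapshot(self, context, snapshot):
          raise NotImplementedError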

3) End to End Experience (Automated Mounting)

Proposal for handling injection / updates of mounts to instantiated guests operating in the Nova context. A listener / agent that could either be interacted with directly or, more likely, poll or receive updates from instance metadata changes would represent one possible solution. The possible use of either cloud-init or VirtFS (which would attach shared file systems to instances in a manner similar to Cinder block storage) is also under consideration.
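
One possible shape of such a guest agent is sketched below, under the assumption that mount information would be published through the standard instance metadata service; the 'shares' metadata key and its format are invented for illustration:

  # Hypothetical guest agent: polls instance metadata for share mount
  # instructions and mounts anything new.  The 'shares' metadata key
  # and its format are assumptions for illustration.
  import json
  import subprocess
  import time
  import urllib2  # Python 2, contemporary with this proposal

  METADATA_URL = 'http://169.254.169.254/openstack/latest/meta_data.json'
  mounted = set()

  while True:
      meta = json.load(urllib2.urlopen(METADATA_URL))
      for share in json.loads(meta.get('meta', {}).get('shares', '[]')):
          export, mountpoint = share['export'], share['mountpoint']
          if export not in mounted:
              subprocess.check_call(['mkdir', '-p', mountpoint])
              subprocess.check_call(['mount', '-t', 'nfs', export, mountpoint])
              mounted.add(export)
      time.sleep(60)  # poll interval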

4) Quantum Support

If VirtFS is not viable, whether for licensing reasons or for its limited support matrix (e.g. no Windows support presently), the File Shares Service must interact with Quantum for cross-instance network encapsulation.
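
Purely as a sketch of what that interaction could involve (this is not a defined integration; the calls shown are the standard network-create primitives, shown with the python-neutronclient naming since the Quantum-era quantumclient exposed the same v2.0 methods, and the use made of them here is an assumption), the service might provision a dedicated network segment for share traffic:

  # Hypothetical: carve out a dedicated network for share traffic.
  from neutronclient.v2_0 import client

  quantum = client.Client(username='admin',
                          password='secret',          # placeholder credentials
                          tenant_name='demo',
                          auth_url='http://keystone.example.com:5000/v2.0')

  # Network reserved for NFS/CIFS traffic between back end and guests.
  net = quantum.create_network(
      {'network': {'name': 'shares-net'}})['network']

  quantum.create_subnet(
      {'subnet': {'network_id': net['id'],
                  'ip_version': 4,
                  'cidr': '192.168.100.0/24'}})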