Shares Service

  • Launchpad Entry: file-shares-service
  • Created: 29 September 2012
  • Last Updated: 25 September 2013
  • Contributors: Robert Esker, Ben Swartzlander, Rushi Agrawal

Introduction

This page documents the concept and vision for adding a shared file system service to OpenStack. The working name for this project is Manila. File shares would primarily be consumed by OpenStack Compute instances, but the service is intended to be accessible as an independent capability, in line with the modular design established by other OpenStack services. More detailed blueprints (in Launchpad) and further detail in this specification will follow as necessary. The team's intention is to introduce the capability as an OpenStack incubated project in the Havana timeframe and submit it for consideration as a core service as early as Icehouse.

High-level proposal

According to IDC in its “Worldwide File-Based Storage 2012-2016 Forecast” (doc #235910, July 2012), file-based storage continues to be a thriving market, with spending on file-based storage solutions projected to exceed $34.6 billion in 2016. Of the 27 exabytes (EB) of total disk capacity estimated to have shipped in 2012, IDC projected that nearly 18 EB were file-based capacity, accounting for over 65% of all disk shipped by capacity. A diversity of applications, from server virtualization to relational or distributed databases to collaborative content creation, often depend on the performance, scalability, and simplicity of management associated with file-based systems, and on the large ecosystem of supporting software products. OpenStack is commonly contemplated as an option for deploying classic infrastructure in an "as a Service" model, but without specific accommodation for shared file systems it represents an incomplete solution.

We propose and have prototyped a new OpenStack service (originally based on Cinder). Cinder presently functions as the canonical storage provisioning control plane in OpenStack for block storage as well as delivering a persistence model for instance storage. The File Share Service prototype, in a similar manner, provides coordinated access to shared or distributed file systems.

The design and prototype implementation provide extensibility for multiple backends (to support vendor- or file-system-specific nuances and capabilities) but are intended to be sufficiently abstract to accommodate any of a variety of shared or distributed file system types.

[Image: Shares Service.png]

Project Considerations

Deployer-driven requirements informed the need for this capability. The Cinder project provided proven capabilities in OpenStack that are largely common to the task of provisioning storage regardless of protocol type. Concepts such as capacity, target (the server in NAS parlance), and initiator (likewise, the client when referring to shared file systems) are common. Specific Cinder capabilities (such as the filter scheduler, the notion of type, and extra specs) likewise apply to provisioning shared file systems. The initial prototype of the File Share Service is thus based on an evolution of Cinder. The intention is to move any commonality between Cinder and the File Share Service into Oslo.

This proposal and its associated blueprints intend to accommodate file-based storage in phases. This blueprint should be treated as an overarching / umbrella design document, with separate blueprints defined for each phase and to account for "whole experience" interaction.

Naming

The project is currently (as of August 2013) operating under the working name "Manila" (pending trademark search / legal review).

Project Plan

File Shares Service Project Plan

Meetings

File Shares Project Meeting

Design

Use Cases (DRAFT)

1) Share existing to Nova instances

Coordinate and provide shared access to a previously (externally) established share / export

2) Create and share to Nova instances

Create a new commonly accessible (across a defined set of instances) share / export

3) Bare-metal / non-virtualized consumption

Accommodate and provide mechanisms for last-mile consumption of shares by consumers of the service that aren't mediated by Nova.

4) Cross-tenant sharing

Coordinate shares across tenants

5) Instance creation

Boot from share support in Nova. Analogous to Boot from Volume in Cinder.

6) Import pre-existing shares

Wrap Manila around pre-existing shares / exports so that they can be provisioned.

Blueprints

The master blueprint is here: File Shares Service

1) File Shares Service

The service was initially conceived as an addition of a separate File Share Service, albeit delivered within the Cinder (OpenStack Block Storage) project given the opportunity to make use of common code. In June of 2013, however, the decision was made to establish an independent development project to accommodate the following capabilities (a usage sketch follows the list):

  • Creation of file system shares (e.g., the create API needs to support "protocol", permissions "mask", and "ownership" parameters)
  • Deletion of file system shares
  • List and show file system shares, allow and deny access, and list share access rules
  • Create, list, and delete snapshots / clones of file system shares
  • Coordination of mounting file system shares
  • Unmounting of file system shares
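
As a concrete illustration only, the following sketch walks the share lifecycle against a REST API shaped like Cinder's. The port, URL paths, field names (e.g. "share_proto"), and the "os-allow_access" action are assumptions for illustration, not the finalized Shares Service API; see the Shares Service API Proposal below for the actual description.

 # Hypothetical lifecycle sketch; endpoint, paths, and field names are assumptions.
 import requests

 TOKEN = "REPLACE_WITH_KEYSTONE_TOKEN"             # obtained from Keystone separately
 ENDPOINT = "http://controller:8786/v1/TENANT_ID"  # assumed port and path layout
 HEADERS = {"X-Auth-Token": TOKEN, "Content-Type": "application/json"}

 # 1) Create a new NFS share; "share_proto" carries the requested protocol.
 resp = requests.post(ENDPOINT + "/shares", headers=HEADERS,
                      json={"share": {"share_proto": "NFS", "size": 1,
                                      "name": "demo_share"}})
 share = resp.json()["share"]

 # 2) Allow a client network to access (and therefore mount) the share.
 requests.post(ENDPOINT + "/shares/" + share["id"] + "/action",
               headers=HEADERS,
               json={"os-allow_access": {"access_type": "ip",
                                         "access_to": "10.0.0.0/24"}})

 # 3) List shares, then delete the share once it is no longer needed.
 print(requests.get(ENDPOINT + "/shares", headers=HEADERS).json())
 requests.delete(ENDPOINT + "/shares/" + share["id"], headers=HEADERS)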


Implementation status: prototyped.

The API description is here: Shares Service API Proposal

2) Shares Service Reference Provider(s)

Creation of a reference Cinder provider (commonly referred to as a driver) for shared file system use under the proposed expanded API. As an example, a NetApp driver would be able to advertise, accept, and respond to requests for NFSv3, NFSv4, NFSv4.1 (with pNFS), and contemporary CIFS / SMB protocols (e.g. versions 2, 2.1, 3). Additional modification of python-cinderclient will be necessary to provide for the expanded array of request parameters, and Tempest coverage must also be provided. Both a vendor-independent reference backend and a NetApp-specific backend are part of the aforementioned prototype and are part of the submission.
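
To make the driver concept concrete, here is a minimal sketch of the shape a backend driver might take, assuming a Cinder-style driver interface. The class name, method names, and the "export_location" field are illustrative, not the actual contract.

 # Minimal sketch of a hypothetical backend driver; names are illustrative.
 class ReferenceShareDriver(object):
     """Hypothetical NFS/CIFS backend for the Shares Service."""

     def create_share(self, context, share):
         # Allocate backing storage, export it over the requested protocol
         # (e.g. NFSv3/v4 or CIFS/SMB), and return the mount location.
         return {"export_location": "server:/exports/%s" % share["name"],
                 "protocol": share["share_proto"]}

     def delete_share(self, context, share):
         # Tear down the export and reclaim the backing storage.
         pass

     def allow_access(self, context, share, access):
         # Grant a client (identified by IP, user, or certificate) access.
         pass

     def deny_access(self, context, share, access):
         # Revoke previously granted access.
         pass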

Implementation status: completed.

3) Intelligent scheduling of Shares using Filter Scheduler and Multi-backend support

Allowing one Cinder node to manage multiple share backends. A backend can run the share service, the volume service, or both. Support for shares in the filter scheduler allows the cloud administrator to manage large-scale share storage by filtering backends based on predefined parameters.
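
As a toy illustration of the filter-scheduler idea applied to shares: each backend reports its capabilities and the scheduler keeps only those that can satisfy a request. The capability names ("share_backend_name", "free_capacity_gb") are assumptions modeled on Cinder's reporting, not the actual scheduler code.

 # Toy illustration of capability-based filtering of share backends.
 backends = [
     {"share_backend_name": "generic_nfs", "protocols": ["NFS"],
      "free_capacity_gb": 500},
     {"share_backend_name": "vendor_cifs", "protocols": ["NFS", "CIFS"],
      "free_capacity_gb": 120},
 ]

 def filter_backends(request, backends):
     """Keep only backends able to satisfy the protocol and size of a request."""
     return [b for b in backends
             if request["share_proto"] in b["protocols"]
             and b["free_capacity_gb"] >= request["size"]]

 # A 100 GB CIFS share can only land on the CIFS-capable backend.
 print(filter_backends({"share_proto": "CIFS", "size": 100}, backends))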

Implementation status: completed.

4) End to End Experience (Automated Mounting)

The last mile problem... A proposal for handling injection / updates of mounts to instantiated guests operating in the Nova context. A listener / agent that could either be interacted with directly or, more likely, poll or receive updates from instance metadata changes would represent one possible solution. The possible use of either cloud-init or VirtFS (which would attach shared file systems to instances in a manner similar to Cinder block storage) is also under consideration. The Cinder agent proposed in the following blueprint also represents a potential model:

Additional discussion here: Manila Storage Integration Patterns
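
For illustration only, the following is one way such a guest-side agent could be sketched: it polls the instance metadata service and mounts any advertised NFS exports. The "shares" metadata key and its JSON format are assumptions, not an agreed mechanism.

 # Hypothetical guest-side agent: poll instance metadata and mount new shares.
 import json
 import subprocess
 import time
 import urllib.request

 METADATA_URL = "http://169.254.169.254/openstack/latest/meta_data.json"

 def poll_and_mount():
     with urllib.request.urlopen(METADATA_URL) as resp:
         meta = json.load(resp).get("meta", {})
     # Assumed convention: a JSON-encoded list of NFS export locations.
     for export in json.loads(meta.get("shares", "[]")):
         target = "/mnt/" + export.rstrip("/").split("/")[-1]
         subprocess.run(["mkdir", "-p", target], check=True)
         # A real agent would consult /proc/mounts to avoid duplicate mounts.
         subprocess.run(["mount", "-t", "nfs", export, target], check=False)

 if __name__ == "__main__":
     while True:
         poll_and_mount()
         time.sleep(60)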

Implementation status: Scoping.

5) The last mile problem

Accommodation for a variety of use cases / networking topologies (ranging from flat networks, to Neutron SDNs, to hypervisor-mediated options) for connecting shares to instances is discussed here: Manila Networking

Implementation status: not started.

6) Horizon Support

The Shares Service must expose both administrative and tenant Horizon interfaces.

Implementation status: not started.

Background

Cinder characteristics

A partial list of things to consider about Cinder relevant to the File Shares Service:

  • The OpenStack Cinder project separated the former nova-volume into an independent block storage service, debuting in the Folsom release.
  • Cinder provides a data persistence model for application data and Nova instances, which are otherwise assumed ephemeral.
  • Ad hoc requests for storage, whether by running instances or outside of the Nova context, can be accommodated either programmatically via the Cinder API or via the python-cinderclient tool.
  • Cinder, in its initial form and as a legacy of nova-volume, provides a block-only construct.
  • Cinder volumes (aka block devices) are not (as of the Grizzly release) shareable among multiple simultaneous instances.
  • Cinder, in its original role as a control plane for block device access, provides facility for many of the concepts that a File Shares service would depend upon. The present Cinder concepts of volume_types, extra_specs, snapshots, clones, the filter scheduler, and more are broadly applicable to both. Storage concepts such as initiator (client), target (server), and capacity are likewise common conceptually (if not entirely semantically).

Reference

Please refer to the OpenStack Glossary for a standard definition of terms. Additionally, the following may clarify, add to, or differ slightly from those definitions:

Volume:

  • Persistent (non-ephemeral) block-based storage. Note that this definition may (and often does) conflict with vendor-specific definitions.

File System:

  • A system for organizing and storing data in directories and files. A shared file system provides (and arbitrates) concurrent access from multiple clients generally over IP networks and via established protocols.

NFS:

CIFS:

GlusterFS:

Ceph: