Manila/Incubation Application

Project codename



We're not aware of any trademark conflicts with the name. The capital city of the Philippines is called Manila, making the name a proper noun. The project uses no other names that raise trademark concerns.

Summary (one sentence abstract of the project)

The Manila project provides an API for management of shared filesystems with support for multiple protocols and backend implementations.

Parent Program name and PTL

Program: Shared Filesystems

PTL: Ben Swartzlander

Mission statement

Stated simply, the goal of the Manila project is to do for shared filesystem storage what Cinder has done for block storage.

We aim to provide a vendor-neutral management interface that allows for provisioning and attaching shared filesystems such as NFS, CIFS, and more. To the extent possible we aim to mirror the architecture of Cinder, with support for a public REST API, multiple backends, and a scheduler that makes resource assignment decisions. When differences are unavoidable, we plan to design solutions that are compatible with the OpenStack ideals of modularity and scalability.

Detailed Description

The basic assumption underpinning Manila is that shared filesystems provide some valuable features that cannot be obtained from either block storage or object storage, and that OpenStack is missing management features for this third form of storage. The unique capability afforded by shared filesystems is shared, fine-grained, read/write access to persistent data by multiple instances simultaneously. The NFS and CIFS protocols were developed to provide these features and still prove popular after decades of use.

The implementation of Manila is actually a modified fork of the Cinder project. The concept for management of shared filesystems was originally proposed as an extension to Cinder (at the San Francisco design summit in April 2012), under the theory that there would be a lot of common code between the implementations, and that many of the same developers would be interested in working on both projects. Because of this, the initial implementation for what is now Manila was a large patch to the Cinder project submitted in August 2012. For a variety of reasons, we ultimately decided that a separate project would be a better way to deliver the features, and the Manila project was born.

Manila consists of all of the code from Cinder with our shared filesystem management code added in and all the block-specific code removed. The API largely mirrors the existing Cinder APIs, except that "volumes" have been renamed to "shares" and the attachment procedure is somewhat different.
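To illustrate the parallel, the sketch below contrasts a Cinder-style volume-create request body with a Manila-style share-create body. The exact field names here are assumptions for illustration, not a verified API schema; the point is that the structure mirrors Cinder with "volume" renamed to "share" plus share-specific details such as the protocol.

```python
import json

# Hypothetical request bodies, for illustration only.
# Cinder: create a 1 GB block volume.
cinder_style = {"volume": {"size": 1, "display_name": "mydata"}}

# Manila: the analogous request renames "volume" to "share" and adds
# share-specific fields such as the filesystem protocol to export.
manila_style = {"share": {"size": 1, "display_name": "mydata",
                          "share_proto": "NFS"}}

print(json.dumps(manila_style))
```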

Basic roadmap for the project

The initial implementation of Manila was a proof of concept that shared filesystem management can fit into the same architecture as Cinder. The main difference between block storage and shared filesystems, however, is how the storage system and the ultimate user of the storage communicate with one another. In particular, shared filesystems work best when instances are able to communicate directly with the storage backend over the network, and the storage backend is able to serve multiple tenants while maintaining secure separation between them. Block storage can simply be virtualized through a hypervisor, with far fewer requirements on the backend storage system. Because of these differences, additional work is needed to help automate the networking portion of attaching a shared filesystem to one or more instances in a tenant network, and to automate the setup of security domains and other features that exist in a NAS environment but not a SAN environment.
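The attach flow described above can be sketched as the following shell steps. Every identifier, address, and export path here is a made-up placeholder, and the access-grant and mount commands are shown as comments because they require a live Manila endpoint and a running instance; the sketch only illustrates that, unlike block storage, the guest mounts the export directly over the tenant network.

```shell
#!/bin/sh
# Sketch of consuming a Manila NFS share (placeholder values throughout).

SHARE_ID="share-1234"                    # hypothetical share ID
EXPORT="10.0.0.5:/shares/${SHARE_ID}"    # export location reported by Manila
INSTANCE_IP="10.0.0.10"                  # tenant-network address of the instance

# 1) Grant the instance access to the share (against a live Manila endpoint):
#    manila access-allow "${SHARE_ID}" ip "${INSTANCE_IP}"
# 2) Inside the instance, mount the export directly over the tenant network:
#    sudo mount -t nfs "${EXPORT}" /mnt/share

echo "would mount ${EXPORT} on /mnt/share"
```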

During Icehouse, our main goals are to define and implement these new APIs and to expand the backend driver interface to support true multitenancy (the kind with network segmentation) in Manila. We also aim to get as many backends implemented as possible, and to this end we will be improving the reference drivers and the documentation for developing new ones.

Location of project source code

Programming language, required technology dependencies




Python. Manila depends on: a message queue, a database server, keystone, and neutron. Optional parts of Manila depend on: nova and cinder.

There is a plan to make the neutron dependency optional in the future.

Is project currently open sourced? What license?

Yes - Licensed under the Apache License, Version 2.0

Level of maturity of software and team


Aside from the code inherited from Cinder, the new Manila code is a little more than a year old, and has been actively developed from then until now, mostly by developers from NetApp and Mirantis.


The core team now consists of developers from NetApp and Mirantis, with significant community interest since the code was open sourced in August 2013.

Project developers qualifications

Ben Swartzlander

NetApp - Software Architect, Manila - PTL

Ben Swartzlander has been the technical lead for the project since its conception 2 years ago, and plans to continue leading the project from a design and administrative standpoint. Ben has been working in the storage industry as a software engineer for more than 13 years and has extensive experience with storage systems, network protocols, virtualization, and open source projects. Ben has been a contributor to the OpenStack project for nearly 3 years.

Yulia Portnova

Mirantis - Software Developer, Manila - Core Team

Valeriy Ponomaryov

Mirantis - Software Developer, Manila - Core Team

Xing Yang

EMC - Software Developer, Manila - Core Team (pending team vote)

Alex Meade

NetApp - Software Developer

Rushil Chugh

NetApp - Software Developer

Clinton Knight

NetApp - Sr. Software Developer

Rushi Agrawal

Reliance Jio Cloud - Software Developer

Andrei Ostapenko

Mirantis - Software Developer

Vitaly Kostenko

Aleksandr Chirko

Vijay Bellur

Red Hat

Csaba Henk

Red Hat

Ramana Raja

Red Hat

Christian Berendt

Shamail Tahir


Scott D'Angelo

HP Public Cloud

Deepak C Shetty

Red Hat - OpenStack Developer

Infrastructure requirements (testing, etc)

Manila does not require any infrastructure above and beyond what's already provided by devstack, gerrit, jenkins, and tempest today.

Have all current contributors agreed to the OpenStack CLA?

Yes, all current contributors have agreed to the OpenStack CLA.