- 1 Project codename
- 2 Trademarks
- 3 Summary (one sentence abstract of the project)
- 4 Parent Program name and PTL
- 5 Mission statement
- 6 Detailed Description
- 7 Basic roadmap for the project
- 8 Location of project source code
- 9 Programming language, required technology dependencies
- 10 Is project currently open sourced? What license?
- 11 Level of maturity of software and team
- 12 Project developers qualifications
- 12.1 Ben Swartzlander
- 12.2 Yulia Portnova
- 12.3 Valeriy Ponomaryov
- 12.4 Xing Yang
- 12.5 Thomas Bechtold
- 12.6 Alex Meade
- 12.7 Rushil Chugh
- 12.8 Clinton Knight
- 12.9 Dustin Schoenbrun
- 12.10 Rushi Agrawal
- 12.11 Andrei Ostapenko
- 12.12 Vitaly Kostenko
- 12.13 Aleksandr Chirko
- 12.14 Vijay Bellur
- 12.15 Csaba Henk
- 12.16 Ramana Raja
- 12.17 Christian Berendt
- 12.18 Shamail Tahir
- 12.19 Scott D'Angelo
- 12.20 Deepak C Shetty
- 12.21 Julia Varlamova
- 12.22 Nilesh Bhosale
- 12.23 ZhongJun
- 13 Infrastructure requirements (testing, etc)
- 14 Have all current contributors agreed to the OpenStack CLA?
- 15 Appendix
Trademarks
We're not aware of any trademark conflicts with the name. The capital city of the Philippines is called Manila, so the name is a proper noun. There are no other names used by the project with trademark concerns.
Summary (one sentence abstract of the project)
The Manila project provides an API for management of shared filesystems with support for multiple protocols and backend implementations.
Parent Program name and PTL
Program: Shared Filesystems
PTL: Ben Swartzlander
Mission statement
Stated simply, the goal of the Manila project is to do for shared filesystem storage what Cinder has done for block storage.
We aim to provide a vendor-neutral management interface that allows provisioning and attaching shared filesystems over protocols such as NFS and CIFS. To the extent possible we aim to mirror the architecture of Cinder, with support for a public REST API, multiple backends, and a scheduler that makes resource assignment decisions. Where differences are unavoidable, we plan to design solutions that are compatible with the OpenStack ideals of modularity and scalability.
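To illustrate the Cinder-style scheduler mentioned above, here is a minimal filter-and-weigh sketch. The backend list and capability fields are hypothetical examples for illustration, not Manila's actual scheduler code or data structures.

```python
# Minimal sketch of filter-and-weigh scheduling in the style Cinder uses.
# The backend dictionaries below are hypothetical, not Manila's real schema.

def schedule_share(backends, size_gb, proto):
    """Pick the backend with the most free space that can host the share."""
    # Filter: keep only backends that support the protocol and have room.
    eligible = [
        b for b in backends
        if proto in b["protocols"] and b["free_gb"] >= size_gb
    ]
    if not eligible:
        raise RuntimeError("No valid backend found")
    # Weigh: prefer the backend with the most free capacity.
    return max(eligible, key=lambda b: b["free_gb"])

backends = [
    {"name": "alpha", "protocols": ["NFS"], "free_gb": 50},
    {"name": "beta", "protocols": ["NFS", "CIFS"], "free_gb": 200},
]
print(schedule_share(backends, size_gb=10, proto="CIFS")["name"])  # beta
```

The real scheduler is pluggable, but the filter-then-weigh pattern shown here is the shape of the decision it makes.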
Detailed Description
The basic assumption underpinning Manila is that shared filesystems provide some valuable features that cannot be obtained from either block storage or object storage, and that OpenStack is missing management features for this third form of storage. The unique feature afforded by shared filesystems is shared, fine-grained, read/write access to persistent data by multiple instances simultaneously. The NFS and CIFS protocols were developed to provide these features and still prove popular after decades of use.
The implementation of Manila is actually a modified fork of the Cinder project. The concept for management of shared filesystems was originally proposed as an extension to Cinder (at the SF design summit in April 2012), under the theory that there would be a lot of common code between the implementations, and many of the same developers would be interested in working on both projects. Because of this, the initial implementation of what is now Manila was a large patch to the Cinder project submitted in August 2012. For a variety of reasons, we ultimately decided that a separate project would be a better way to deliver the features, and the Manila project was born.
Manila consists of all of the code from Cinder, with our shared filesystem management code added and all of the block-specific code removed. The API largely mirrors the existing Cinder APIs, except that "volumes" have been renamed to "shares" and the attachment procedure is somewhat different.
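The volume-to-share renaming can be sketched as a pair of create-request bodies. The field names below follow the pattern described in the text and are illustrative, not copied from the official API reference:

```python
import json

# Hedged sketch: a Cinder-style volume create body next to the analogous
# Manila share create body. Field names here are illustrative.

cinder_request = {"volume": {"size": 1, "display_name": "demo"}}

manila_request = {
    "share": {
        "share_proto": "NFS",  # a share additionally carries a filesystem protocol
        "size": 1,             # size in GiB, as in Cinder
        "name": "demo",
    }
}

print(json.dumps(manila_request, indent=2))
```

The key structural difference is the protocol field: a share is only meaningful together with the filesystem protocol it is exported over.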
Basic roadmap for the project
The initial implementation of Manila was a proof of concept that shared filesystem management can fit into the same architecture as Cinder. The main difference between block storage and shared filesystems, however, is how the storage system and the ultimate user of the storage communicate with one another. In particular, shared filesystems work best when instances are able to communicate directly with the storage backend over the network, and the storage backend is able to serve multiple tenants while maintaining secure separation between them. Block storage, by contrast, can simply be virtualized through a hypervisor, with far fewer requirements on the backend storage system. Because of these differences, additional work is needed to automate the networking portion of attaching a shared filesystem to one or more instances in a tenant network, and to automate the setup of security domains and other features that exist in a NAS environment but not in a SAN environment.
During Icehouse, the Manila team implemented full multitenancy, so that drivers for storage controllers with support for secure multitenancy are able to automatically create virtual instances of storage servers in a multitenant configuration. A "generic" driver was added which implements this form of secure multitenancy by layering on top of Nova and Cinder. Drivers for hardware that can't support secure multitenancy natively are still allowed to operate in single-tenant mode.
During Juno, the focus of the team has been to broaden driver support and to create a mechanism for gateway-based secure multitenancy, which extends the secure multitenancy provided by the generic driver to storage backends that only support single-tenant mode.
Location of project source code
- Project code: https://github.com/stackforge/manila
- Python client: https://github.com/stackforge/python-manilaclient
Programming language, required technology dependencies
- Required: message queue, database server, keystone, neutron
- Optional parts of Manila depend on: nova, cinder
There is a plan to make the neutron dependency optional in the future.
Is project currently open sourced? What license?
Yes - Licensed under the Apache License, Version 2.0
Level of maturity of software and team
Aside from the code inherited from Cinder, the Manila code is a little more than a year old and has been under active development for that entire time, mostly by developers from NetApp and Mirantis.
The core team now consists of developers from NetApp, Mirantis, EMC and SUSE, with significant community interest since the code was open sourced in August 2013.
Project developers qualifications
NetApp - Software Architect, Manila - PTL
Ben Swartzlander has been the technical lead for the project since its conception 2 years ago, and plans to continue leading the project from a design and administrative standpoint. Ben has been working in the storage industry as a software engineer for more than 13 years and has extensive experience with storage systems, network protocols, virtualization, and open source projects. Ben has been a contributor to the OpenStack project for nearly 3 years.
Mirantis - Software Developer, Manila - Core Team
Yulia has been working at Mirantis on OpenStack since the Essex release and has contributed to Nova, Glance, and Cinder.
Mirantis - QA Automation Engineer, Manila - Core Team
Valeriy has been working with OpenStack since the Folsom release. Besides Manila, he has contributed to Tempest, the CI config, and Oslo-Incubator, and he is one of the developers of the Horizon modifications and the DevStack plugin for the Manila project.
EMC - Consultant Technologist, Manila - Core Team
Xing is a technologist from the Office of the CTO at EMC. She has many years of experience working with storage technologies, data protection and disaster recovery, and cloud and virtualization related projects. Xing has been an OpenStack contributor since the Grizzly release and has made contributions in Cinder and Nova.
SUSE - Software Developer, Manila - Core Team
Thomas has been working with OpenStack since the Grizzly release and has made contributions to many different OpenStack projects.
NetApp - Software Developer
Alex has been working on OpenStack for over 3 years and is a Core contributor for the Glance project. He previously worked on the Rackspace public cloud deployment and is interested in OpenStack at scale.
NetApp - Software Developer
Rushil holds a Master's degree in Computer Networking and has been working with OpenStack since the Grizzly release. He has a keen interest in enterprise network deployments, data storage, and virtualization.
NetApp - Sr. Software Developer
Clinton holds a doctorate in electrical & computer engineering, has worked in the storage industry for 15 years, and has extensive experience with storage management and protocols as well as virtualization.
NetApp - Software QA Engineer
Dustin holds a Bachelor's degree in Computer Science from the University of New Hampshire and has worked in the storage industry for 8 years, 4 of them at NetApp. He has worked as a software engineer and QA engineer on several of NetApp's integrations with other vendors' virtualization products and now focuses on OpenStack integrations.
Reliance Jio Cloud - Software Developer
Mirantis - Software Developer
Andrei has been working with OpenStack since the Folsom release and is mostly involved in new StackForge components.
Mirantis - Software Developer
Mirantis - Software Developer
HP Public Cloud
Deepak C Shetty
Red Hat - OpenStack Developer
Mirantis - Intern Software Developer
Julia has been working on OpenStack for 1.5 years and has contributed to Glance, Cinder, and Oslo.DB.
IBM - Software Developer
Nilesh has been working on OpenStack for the past year. He has contributed to Cinder.
Huawei - Software Developer
Infrastructure requirements (testing, etc)
Manila does not require any infrastructure above and beyond what's already provided by devstack, gerrit, jenkins, and tempest today.
Have all current contributors agreed to the OpenStack CLA?
Yes, all current contributors have agreed to the OpenStack CLA.
Appendix
Manila incubation was considered at a previous TC meeting:
The main objections raised during that meeting (edited for readability):
<ttx> bswartz: I think I can summarize by saying the strongest objection to incubation in last week's discussion was about the maturity of the project
<ttx> Both in terms of number of commits and number of developers involved
<ttx> (We rejected Designate on similar grounds over the last cycle)
Manila is now 8 months "older" and the size of the team has increased significantly, with many new people contributing to the project.
<ttx> Also seemed like Manila could benefit from another round of discussion regarding its relationship with Cinder
We took some time during the Cinder track at the Atlanta summit to discuss the relationship between Manila and Cinder. I don't think anyone is confused about this anymore. For anyone not familiar, it's possible to layer Manila on top of Cinder, and it's possible to layer Cinder on top of Manila, depending on what you're trying to do. Cinder consumes various types of backend storage (including shared filesystems) and provides block storage. Manila consumes various types of backend storage (including block devices) and provides shared filesystems. The challenges the two projects face are quite different even though there is overlap between the technologies, companies, and people involved in the two projects.
<jeblair> bswartz: while it is a recent change, devstack, devstack-gate, and tempest are all now modular enough that any stackforge project can do it; wsme and sqlalchemy-migrate are working on that now.
<ttx> bswartz: so it's extremely likely that we'd require you to have gate jobs before being accepted into incubation
We've got the required gate jobs set up. Thanks for the help on this.
<lifeless> bswartz: what are the technical hurdles you face?
<bswartz> regarding "technical issues" it's clear that the code we wrote initially only works for single tenant environments -- properly supporting multitenant environments requires a different design
<bswartz> we've been working for the last 3 months on designing that and it's about ready to go in
<lifeless> So incubation is the process of taking a functional project and getting it fully integrated; I am feeling more and more that this is premature.
<markmc> sounds like we're not at the point of architectural stability, though
<zaneb> "The maturity of the project. Has it been used in production and deployed at scale?"
<zaneb> nothing about a stable API specifically
<markmc> bswartz, so e.g. "Technical stability and architecture maturity"
* markmc has to drop off, sorry
<zaneb> but "need to rewrite in order to handle multi-tenant" is hard to reconcile with "mature"
Multitenancy was implemented during the Icehouse cycle, and the dust has had time to settle. No significant changes have been needed. All of the recent work in this area has been refinement, not rearchitecture.