Manila/Manila Storage Integration Patterns

Overview
The mechanism by which NAS storage integrates with a Nova instance depends on the capabilities of the underlying storage technology. Integration can be roughly divided into the following styles of access:
 * Direct IP access, for small/flat networks. The storage backend exports the share directly to the IP of the VM or client, and the network plumbing is assumed to already be in place. This is the simplest style and has already been implemented. The major shortcoming is the lack of network segmentation in a multi-tenant environment.
 * Hypervisor-mediated access. The storage backend exports the share to the compute node where the hypervisor is running, and the hypervisor is responsible for exposing it to the VM through some kind of tunnel in a secure way. Potential approaches:
 * Pass-thru filesystem: NAS storage is mounted by the Nova host, then a hypervisor-specific pass-through filesystem such as VirtFS is used to tunnel a tenant sub-tree of the NAS volume into the guest. In this case the pass-through filesystem is responsible for mapping between tenant and provider namespaces and for providing secure access. Depending on the pass-through filesystem, a special driver may be required in the guest to mount it.
 * Hypervisor-resident agent: A hypervisor-resident agent exposes a per-tenant NAS server, responsible for presenting only those Manila volumes attached to the Nova guest. This can be accomplished by placing a portion of the top of the NAS storage stack within the Nova host (appropriate for software-defined storage technologies such as Glusterfs), or by placing a proxy agent connected to the back-end storage pool. Storage can be exposed via NFS, SMB, or the storage technology's native protocol. Connectivity between the hypervisor agent and the Nova guest can be accomplished over a virtual network or VSock.
 * Neutron-mediated access. The storage backend is required to support logical networks, such as VLANs or SDNs. The storage backend exports the share directly to the IP of the VM or client, but only on the logical network the client is part of. Neutron must be able to talk to storage backends and connect them into its networking model. This would probably involve some new kind of Neutron plugin or new Neutron APIs that the Shared Filesystem Service could consume to coordinate the plumbing operation. For storage backends that don't support logical networks, a gateway could sit in front of the storage and provide the feature.
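
In practice, the direct IP style above reduces to an ordinary NAS export and mount. A minimal sketch assuming a Linux NFS backend; all addresses and paths (192.0.2.10, 203.0.113.25, /shares/share-01) are hypothetical placeholders:

```shell
# On the storage backend: export the share directly to the guest VM's IP
# (example /etc/exports entry; addresses and paths are placeholders)
echo '/shares/share-01 203.0.113.25(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra

# Inside the guest VM: mount the share over the flat network
mount -t nfs 192.0.2.10:/shares/share-01 /mnt/share
```

Note that nothing here restricts which tenants can reach the export beyond the per-IP access list, which is the segmentation shortcoming noted above.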
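
With libvirt/KVM, for example, the pass-thru filesystem approach maps onto a VirtFS (9p) passthrough device. A hedged sketch: the guest name, the host directory /export/tenant-A, and the mount tag manila_share are all example values, not anything Manila defines:

```shell
# Host side (hypothetical paths): attach a VirtFS passthrough device that
# tunnels one tenant sub-tree of the NAS mount into the guest.
cat > fs.xml <<'EOF'
<filesystem type='mount' accessmode='passthrough'>
  <source dir='/export/tenant-A'/>
  <target dir='manila_share'/>
</filesystem>
EOF
virsh attach-device guest-vm fs.xml --persistent

# Guest side: a 9p driver is the "special driver" needed to mount it.
mount -t 9p -o trans=virtio manila_share /mnt/share
```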
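
The hypervisor-resident agent's core responsibility, presenting only the shares attached to a given Nova guest, can be sketched as follows. All class and method names here are illustrative, not real Manila or Nova APIs:

```python
# Hypothetical sketch of the export policy a hypervisor-resident proxy
# agent would enforce: each guest sees only the Manila shares attached
# to it, regardless of what the back-end storage pool contains.

class ShareAttachmentMap:
    """Tracks which Manila shares are attached to which Nova guests."""

    def __init__(self):
        self._attachments = {}  # guest_id -> set of share ids

    def attach(self, guest_id, share_id):
        self._attachments.setdefault(guest_id, set()).add(share_id)

    def detach(self, guest_id, share_id):
        self._attachments.get(guest_id, set()).discard(share_id)

    def exports_for(self, guest_id):
        """Shares the per-tenant NAS server should expose to this guest."""
        return sorted(self._attachments.get(guest_id, set()))


if __name__ == "__main__":
    amap = ShareAttachmentMap()
    amap.attach("guest-1", "share-a")
    amap.attach("guest-1", "share-b")
    amap.attach("guest-2", "share-c")
    amap.detach("guest-1", "share-b")
    print(amap.exports_for("guest-1"))  # guest-1 no longer sees share-b
    print(amap.exports_for("guest-2"))
```

Whether the agent is a slice of the storage stack (as with Glusterfs) or a proxy, this attachment-driven filtering is what keeps tenants from seeing each other's shares.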
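
The coordination the Shared Filesystem Service would perform in the Neutron-mediated style might look like the sketch below. Everything here is a stub under assumed interfaces; none of these classes or calls are real Neutron or Manila APIs:

```python
# Hypothetical coordination flow for Neutron-mediated access: the service
# asks Neutron to plumb the backend onto the tenant's logical network, then
# tells the backend to export the share only on that network segment.
# All names are illustrative stubs, not real APIs.

class NeutronStub:
    def plumb_backend(self, backend_name, tenant_network):
        # A real plugin would create a port / VLAN trunk for the backend
        # on the tenant's logical network and return the segment details.
        return {"network": tenant_network, "segment_id": 100}


class BackendStub:
    def __init__(self, name):
        self.name = name
        self.exports = []

    def export_share(self, share_id, segment_id, client_ip):
        # Export is restricted to the tenant's logical network segment.
        self.exports.append((share_id, segment_id, client_ip))
        return f"{self.name}:/{share_id} via segment {segment_id}"


def grant_access(neutron, backend, share_id, tenant_network, client_ip):
    """Plumb the network first, then export on the tenant segment only."""
    plumbing = neutron.plumb_backend(backend.name, tenant_network)
    return backend.export_share(share_id, plumbing["segment_id"], client_ip)


if __name__ == "__main__":
    backend = BackendStub("nas1")
    location = grant_access(NeutronStub(), backend, "share-42",
                            "tenant-net-a", "10.0.0.5")
    print(location)
```

For a backend without logical-network support, BackendStub would instead model the gateway mentioned above, with the same grant_access flow.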

Note: Above text based on email exchanges between Ben Swartzlander and Doug Williams

Presentations

 * Hypervisor Mediated Storage Use-case: Glusterfs