
Manila/Manila Storage Integration Patterns

Latest revision as of 19:31, 22 August 2013

Overview

The mechanism by which NAS storage integrates with a Nova instance depends on the capabilities of the storage technology. Integration can be roughly divided into three styles of access:

  • Direct IP access, for small/flat networks. The storage backend exports the share directly to the IP of the VM or client, and network connectivity between the backend and the client is assumed to already exist. This is the simplest to implement and has already been implemented. The major shortcoming is the lack of network segmentation in a multi-tenant environment.
  • Hypervisor-mediated access. The storage backend exports the share to the compute node where the hypervisor is running, and the hypervisor is responsible for exposing it to the VM through some kind of tunnel in a secure way. Potential approaches:
    • Pass-thru filesystem: NAS storage is mounted by the Nova host, then a hypervisor-specific pass-through filesystem such as VirtFS is used to tunnel a tenant sub-tree of the NAS volume into the guest. In this case the pass-thru filesystem is responsible for mapping from tenant to provider name-spaces and providing secure access. Depending on the pass-thru filesystem, a special driver may be required in the guest to mount it.
    • Hypervisor-resident agent: A hypervisor-resident agent exposes a per-tenant NAS server, responsible for presenting only those Manila volumes attached to the Nova guest. This can be accomplished by placing a portion of the top of the NAS storage stack within the Nova host (appropriate for software-defined storage technologies such as Glusterfs), or by placing a proxy agent connected to the back-end storage pool. Storage can be exposed via NFS, SMB, or the storage technology’s native protocol. Connectivity between the hypervisor agent and the Nova guest can be accomplished over a virtual network or VSock.
  • Neutron-mediated access. The storage backend is required to support logical networks, such as VLANs or SDNs. The storage backend exports the share directly to the IP of the VM or client, but only on the logical network the client is part of. Neutron must be able to talk to storage backends and connect them into its networking model. This would probably involve some new kind of Neutron plugin or new Neutron APIs that the Shared Filesystem Service could consume to coordinate the plumbing operation. For storage backends that don’t support logical networks, a gateway could sit in front of the storage and provide the feature.

Note: Above text based on email exchanges between Ben Swartzlander and Doug Williams

Direct IP Access

Integration Details
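As a concrete illustration of the flow described in the Overview, the sequence below sketches direct IP access from the operator’s and guest’s side. The share name, export path, and addresses are illustrative only; `manila access-allow` is the client command for granting IP-based access to a share, and the guest then mounts the export like any ordinary NFS share.

```shell
# Sketch only: share name, export path, and addresses are hypothetical.

# 1. Grant the guest's IP access to the share via the Manila client:
manila access-allow demo-share ip 10.0.0.5

# 2. Inside the guest, mount the exported path directly over the flat network:
mount -t nfs 192.168.1.100:/shares/demo-share /mnt/demo
```

Because access is keyed only to an IP address, nothing prevents another tenant on the same flat network from reaching the export, which is the segmentation shortcoming noted above.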

Storage Requirements

Manila Requirements

Hypervisor Mediated Access

Integration Details
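A rough sketch of the pass-thru filesystem approach from the Overview, assuming QEMU/KVM with VirtFS (9p over virtio). The NAS mount path and mount tag are illustrative; the guest kernel must include 9p/virtio support, which is the guest-OS limitation noted above.

```shell
# Sketch only: paths, tags, and the elided QEMU arguments are hypothetical.

# On the compute node: expose a tenant sub-tree of the host-mounted NAS
# volume to the guest as a VirtFS (9p) device.
qemu-system-x86_64 ... \
    -virtfs local,path=/mnt/nas/tenant-a,mount_tag=manila0,security_model=mapped-xattr

# Inside the guest: mount the tagged 9p filesystem over virtio.
mount -t 9p -o trans=virtio,version=9p2000.L manila0 /mnt/share
```

The `security_model` option is where the tenant-to-provider name-space mapping described above happens; the guest never sees the rest of the NAS volume.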

Storage Requirements

Manila Requirements

Neutron Mediated Access

Integration Details
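No Neutron API for this plumbing existed at the time of writing, so the sketch below is purely illustrative of the kind of coordination described in the Overview: the backend (or a gateway in front of it) gets a port on the tenant’s logical network, and the share is exported only to addresses on that network.

```shell
# Hypothetical sketch: the port wiring a storage backend into a tenant
# network is exactly the new capability this pattern would require.

# 1. Create a port on the tenant's logical network for the storage backend:
neutron port-create tenant-net --name share-backend-port

# 2. The backend (or gateway) attaches at that port and exports the share
#    only to the tenant's subnet, e.g.:
manila access-allow demo-share ip 10.0.0.0/24
```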

Storage Requirements

Manila Requirements

Presentations

  • Hypervisor Mediated Storage Use-case: Glusterfs (https://wiki.openstack.org/w/images/d/d8/VirtFileShare_Glusterfs.pdf)