Manila/Kilo Network Changes

Introduction

This document is intended to outline the design changes around Manila networking in the Kilo release.

Use Cases

Simple lab config

A Manila developer wants to write a driver for a new hardware platform. The hardware exists in a lab, plugged into a network switch with no VLAN tagging. The developer has no management access on the switch or router, and while he does have management access on the storage controller, he doesn't have any physical access to modify cabling. The storage controller is configured with static IP addresses on its management and data interfaces. On the plus side, the developer has gotten the lab administrators to set aside a pool of IP addresses on the same subnet which he can use to create SVMs.

  • Network: flat, single subnet, multiple SVMs
  • Security: none
  • Other considerations: none

POC installation at a customer site

A customer that already uses OpenStack with some commercial storage wants to experiment with Manila to learn about it and evaluate how useful it might be. The customer already owns several storage controllers, but they are all in use either in production or in other proofs of concept or test projects in the eval lab. The customer has administrative and physical access to all of the relevant equipment but doesn't wish to disturb the existing environment just to do a quick POC. A storage controller will need to be shared, and the shared controller is cabled to a network switch with no VLAN tagging.

  • Network: flat, single subnet, single SVM
  • Security: none
  • Other considerations: none

Enterprise private cloud

An IT department has built a moderately large private cloud to serve various business units in one company. The business units all know about each other and rely on simple access control to protect data from prying eyes and accidental damage. The main motivation for building a cloud is the "aaS" model, which allows IT to be more efficient and responsive. The company is a large user of NAS storage (on commercial storage controllers) and IT wants to roll out Manila in production to begin to offer NASaaS so its user base can eventually stop using traditional NAS. The IT department has complete control of its network design, both physically and logically. Because all of the "tenants" of the private cloud exist within one company, a significant amount of infrastructure is shared, including Active Directory, LDAP, and Kerberos, and tenants are not firewalled off from each other in any complete sense.

  • Network: flat, multiple subnets, multiple SVMs
  • Security: tenants should not be able to access shares of other tenants
  • Other considerations: none

Public Cloud Service

A company that offers cloud services to the public, based on OpenStack, would like to deploy Manila in production and begin to offer NASaaS to its customers alongside block and object storage. Tenants expect complete privacy for their data, both in motion and at rest. Some customers have VPN connections between their corporate offices and the cloud which they expect only their own VMs to be able to access.

  • Network: segmented, multiple subnets, multiple SVMs
  • Security: tenants should not be able to detect the existence of other tenants
  • Other considerations: network segmentation may be VLAN, VXLAN, or GRE

Public cloud (Tenant Facilitated)

Tenants want to stand up and operate Manila wholly contained within their own tenant context atop a public (or potentially private) cloud.

  • Network: flat, VPC, multiple SVMs
  • Security: defined by tenant owner and contained within the context
  • Other considerations: tenant network may reside atop any number of SDN technologies.

Standalone installation for management of existing NAS

An IT department wishes to offer NASaaS to users but currently uses some form of proprietary cloud software. They decide to use Manila without the rest of OpenStack. Commercial storage controllers are dedicated specifically for this purpose.

  • Network: flat, multiple subnets, multiple SVMs
  • Security: tenants should not be able to access shares of other tenants
  • Other considerations: none

Automated test system

A Manila developer would like to be able to quickly set up Manila inside a virtual environment for testing. Manila and all of the rest of OpenStack will run inside a single VM, with network connectivity to the outside world provided by a virtual router. The storage controller to be tested is not virtualized and is shared with multiple test nodes. The developer has access to configure the storage controller, but the VM where the tests are being run is in a different location, connected to the storage controller only by IP.

  • Network: flat, single subnet, single SVM
  • Security: none
  • Other considerations: none

Field demo

A developer wishes to set up a complete OpenStack installation, including Manila, on his laptop to be able to demo it in public while disconnected from the internet. Everything is virtualized.

  • Network: flat, single subnet, multiple SVMs
  • Security: none
  • Other considerations: none

Triple-O "Under Cloud"

A deployer of OpenStack employing TripleO uses Manila at the undercloud layer to spin storage server instances up and down, facilitating expansion and contraction of the Manila service. Undercloud provisioning primitives (e.g. NFS exports) for Cinder backends or Glance repositories are employed.

  • Network: flat, single subnet, multiple share servers
  • Security: TBD
  • Other considerations: TBD

Design Changes (Second Iteration)

To accommodate the above use cases, the following changes to Manila will be needed:

Driver Modes

I propose that each driver run in one of a few pre-defined "modes" (I don't like the term "mode", but it explains the concept):

  • Single SVM
  • Flat network multi SVM
  • Segmented network multi SVM (possibly with variations)


Each driver may support one or several of the modes, but the administrator chooses which mode is used by specifying it in the conf file. It would be possible to have separate drivers for different modes on the same hardware if that made sense. Depending on which mode is chosen, the administrator needs to provide additional details in the conf file as well.
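
As a rough illustration, the administrator's mode choice could be exposed as an oslo.config option along these lines. The option name, mode names, and default below are assumptions made up for this sketch, not settled names.

  from oslo_config import cfg

  # Hypothetical option sketch only -- the real option and mode names would
  # be settled during implementation.
  share_driver_mode = cfg.StrOpt(
      'share_driver_mode',
      choices=['single_svm', 'flat_multi_svm', 'segmented_multi_svm'],
      default='single_svm',
      help='Which of the modes supported by this driver the backend runs in.')

  cfg.CONF.register_opt(share_driver_mode)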

Single SVM

In this mode, drivers have basically no network requirements whatsoever. It's assumed that the storage controller(s) being managed by the driver already have all of the network interfaces they are going to need. Manila will expect the driver to provision shares directly without creating any "share server" beforehand (Manila may want to create a dummy share server though). Manila will assume that the network interfaces through which any shares are exported are already reachable by all tenants. This mode corresponds to what some existing drivers are already doing, but it makes the choice explicit for the administrator. In this mode, share networks are not really needed at share creation time, and will probably even be ignored. More on that later.

Flat network multi SVM

This mode is new, and is specifically designed to cover a middle ground that's not well addressed by the existing design. Some storage controllers can create SVMs, but due to various limitations of the physical/logical network (outlined in the use cases above) all of the SVMs have to be on a flat network. In this mode, the driver needs something to provision IP addresses for the SVMs, but the IPs all come out of the same subnet, and that subnet itself is assumed to be reachable by all tenants.

Specifically, drivers in this mode can expect Manila to provide:

  • A subnet definition -- network address, mask, broadcast, and gateway addresses
  • Specific IP address(es) for each share server during creation


In this mode, the security service part of the share networks is important because it allows tenants to specify security requirements such as AD/LDAP domains or a Kerberos realm. The subnet part of the share network isn't really that useful. Manila would have to assume that any hosts referred to in the security service were reachable from the subnet where the SVM is created, which limits the situations where this mode makes sense.
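
A minimal sketch of the network information a flat-mode driver might be handed; the field names and example values are assumptions for illustration, not the actual Manila structures.

  # Illustrative only -- field names are assumptions, not Manila's API.
  flat_network_allocation = {
      'network': '192.0.2.0',          # subnet network address
      'netmask': '255.255.255.0',
      'broadcast': '192.0.2.255',
      'gateway': '192.0.2.1',
      'ip_addresses': ['192.0.2.10'],  # address(es) for the new share server
  }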

Segmented network multi SVM

This mode corresponds to the primary use case of Manila today; it adds some formalization and clarity around that use case as well as new features. In this mode, the driver is assumed to be able to create SVMs and join them to an existing segmented network. Currently, we use the share network directly in the share driver, which leads to some bad things, such as having Neutron-specific code inside the generic driver. I want to define more clearly what the driver requires from Manila and vice versa, and turn that into an interface with multiple implementations (where the implementation is chosen by the administrator).

At a minimum, drivers can expect Manila to provide for every new SVM:

  • A subnet definition (network address, mask, broadcast, and gateway) and list of IP addresses
  • A segmentation type: VLAN, VXLAN, GRE, STT, etc.
  • A segmentation ID and any other info relevant to the segmentation type
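
Sketched in the same illustrative style as the flat-mode example above (field names are assumptions, not a proposed API), a segmented allocation would carry the subnet fields plus the segmentation details, e.g.:

  # Illustrative only -- additional fields on top of the flat-mode sketch.
  segmented_extras = {
      'segmentation_type': 'vlan',  # or 'vxlan', 'gre', 'stt', ...
      'segmentation_id': 1234,      # VLAN tag, VNI, etc., per the type
  }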


A few important things I want to point out:

  • Drivers don't have to support every type of segmentation. In many cases only one will be supported. This fact should be explicit, and Manila should provide helpful error messages when there's an impedance mismatch.
  • Currently each Manila driver decides whether to create the SVM directly on the subnet specified in the share network supplied by the tenant or whether to create a new "service network" specifically for the SVM and to configure routing between the tenant subnet and the service network. The logic should be moved out of the drivers to a common place so the drivers can focus on just creating the SVM and not worry about network plumbing.
  • It's very important for us to remove explicit dependencies on Neutron because we want to enable use cases where Neutron is not present BUT it's also important that we make Neutron the preferred mechanism for managing networks in Manila, and we should try very hard not to re-implement functionality that's already in Neutron.

Network Helper

I believe we need a more explicitly defined object within the manila-share service to manage network operations. More than the "network plugin" that has been discussed before, I'm proposing a network helper which would have its own config options in manila.conf. The network helper should support the following use cases (a rough interface sketch follows the list):

  1. Management of a pool of addresses in a single subnet. In this mode the helper would just define a subnet and provision IP addresses from it. The actual implementation could be based on a DHCP server, on a Neutron or Nova-Net network/subnet, or on pure Python code with state stored in the Manila DB. Ideally all four cases would be implemented as plugins to the network helper.
  2. Management of service subnets. For the segmented multi SVM case, administrators may choose to place all of the SVMs on their own subnets and to route traffic between the tenant network and the service network (sometimes referred to as L3 connectivity). This approach has many advantages, including reducing the broadcast domains in the datacenter and allowing VLAN-based subnets to be mixed with VXLAN or other subnets. As above, the actual implementation may simply layer on top of Neutron or Nova-Net, or we could provide a standalone plugin for this service. The helper would need to be able to set up routes as part of this operation.
  3. Interacting with share networks. If the administrator chooses, and the driver is capable, we should support provisioning SVMs directly on the tenant's subnet as provided in the share network (sometimes referred to as L2 connectivity). The main advantage of this use case is the ability to connect storage to VMs with high bandwidth and minimal latency by taking routers out of the path. Also, this case is already supported and we should not drop support for it.
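
A very rough sketch of what such a helper's interface might look like; the class and method names below are invented for illustration and are not a proposed final API.

  import abc


  class NetworkHelper(abc.ABC):
      """Illustrative sketch only; names and signatures are assumptions."""

      @abc.abstractmethod
      def allocate_addresses(self, count, subnet=None):
          """Return 'count' IP addresses from a managed pool or from the
          given subnet (use case 1 above)."""

      @abc.abstractmethod
      def create_service_subnet(self, share_network):
          """Create a dedicated service subnet for SVMs and set up routing
          to the tenant network -- the L3 connectivity case (use case 2)."""

      @abc.abstractmethod
      def attach_to_share_network(self, share_network, count):
          """Provision addresses directly on the tenant's subnet -- the L2
          connectivity case (use case 3)."""

      @abc.abstractmethod
      def deallocate(self, allocations):
          """Release any addresses, subnets, or routes created above when a
          share server is deleted."""

The concrete implementation behind each method could then be a plugin (Neutron, Nova-Net, DHCP, or a standalone pure-Python one), chosen by the administrator in manila.conf.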

Share Network Changes

The addition of driver modes and support for things other than Neutron makes us reconsider the purpose and value of share networks. Clearly they are still critical for some use cases, but in other cases they're not needed at all. In the spirit of being explicit, I think the administrator needs ways to set defaults and to allow or prevent tenants from modifying them, and a way to communicate to tenants which values are required. This is probably the trickiest part of the proposal (which is why I saved it for last) because it changes the tenant-facing API and because the tenant's experience can change dramatically depending on what choices the administrator makes.

TBD

Design Changes (First Iteration)

Cross-tenant share servers

Administrators should have a way to make a single share server usable by multiple tenants and to disable the creation of new share servers. This enables cases where share servers cannot be created by Manila, either because of lack of access or lack of networking resources.

Open questions:

  • How will the share server get imported into Manila? Conf file options? A new admin API?
  • Is there any point in allowing a mix of Manila-created share servers with administrator-created share servers?


Proposals:

1) Easy
1.1) Add support for a single preconfigured share server per driver. Such drivers would work as single-tenant drivers when no share network is provided.
pro: no need to create a share network
con: only one share server will be used
2) Hard
2.1) Add new APIs for share-server creation and share-server-details update, and then link all of it to a share network and back end.
pro: dynamic addition of custom share servers
con: hard to support driver-specific things
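
For the "easy" proposal, the preconfigured share server could plausibly be described through per-backend conf options along these lines; the option names are invented for illustration only.

  from oslo_config import cfg

  # Hypothetical per-backend options -- real names would be decided if/when
  # the proposal is implemented.
  single_server_opts = [
      cfg.StrOpt('preconfigured_share_server_ip',
                 help='IP address of the administrator-created share server.'),
      cfg.StrOpt('preconfigured_share_server_name',
                 help='Identifier of the share server on the backend.'),
  ]

  cfg.CONF.register_opts(single_server_opts)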

Cross-tenant share networks

Administrators should have a way to make a single share network usable by multiple tenants. This network is used to create new share servers when the tenant declines to specify a share network (acting as a "default share network").

Proposals:

1) Allow creation of "public" and "private" share networks
2) Control whether each type can be created via policies
3) Allow one of the "public" share networks to be marked as the "default", to be used when a tenant does not provide a share network explicitly.