Manila/Kilo Network Spec
Based on these requirements.
The implementation can be broken into the following features:
The addition of driver modes allows the administrator to tell Manila how each driver should operate. Some drivers can support more than one mode of operation, and for drivers that support only one mode, it's better if Manila knows explicitly what the mode is rather than guessing based on the driver's behaviour, which is how it works today. Two specific differences in behaviour yield a total of 3 possible modes (because one of the four combinations is impossible).
mode=single_svm In this mode Manila will not create share servers and the driver doesn't need to interact with any network management system. It is assumed that any required network configuration has been done in advance. This mode is very similar to how drivers in Cinder operate.
mode=flat_multi_svm In this mode, Manila will create share servers for each tenant and share network, but the share servers will be assumed to exist on a flat network, and connectivity from the tenant networks to that flat network is assumed to have been set up in advance. Drivers running in this mode will require a network helper (see below) to specify a subnet and to allocate individual IPs to share servers. This mode is new and covers an important middle ground which I believe many users and deployers will find valuable.
mode=managed_multi_svm In this mode, Manila will create share servers for each tenant and share network, and Manila will assume responsibility for ensuring connectivity between the share servers and the tenant's share network. Drivers running in this mode will require a network helper to manage all of the network connectivity, including allocating IPs on networks, creating new networks, establishing routes between networks, and encapsulating network packets on the SVM to segment the physical network and create virtual overlay networks. This mode is most similar to what we have today, but should give the administrator significantly more control over the above operations through the use of the network helper (see below).
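The relationship between the three modes and the network helper requirement can be sketched as follows. This is illustrative only; the constant names and the validation function are assumptions, not actual Manila code.

```python
# Hypothetical sketch of the three driver modes described above.
# Names are illustrative; the spec does not fix them.

SINGLE_SVM = 'single_svm'                # no share servers, no network interaction
FLAT_MULTI_SVM = 'flat_multi_svm'        # share servers on a pre-existing flat network
MANAGED_MULTI_SVM = 'managed_multi_svm'  # Manila manages connectivity end to end

VALID_MODES = {SINGLE_SVM, FLAT_MULTI_SVM, MANAGED_MULTI_SVM}


def requires_network_helper(mode):
    """Only the two multi_svm modes need a network helper (see below)."""
    if mode not in VALID_MODES:
        raise ValueError('unknown driver mode: %s' % mode)
    return mode != SINGLE_SVM
```

A check like this would let the share manager reject a backend configured in a multi_svm mode without a network_helper option, rather than failing later during SVM creation.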
Network helpers create a new plugin architecture that allows users to extend Manila for specific use cases, and allows the Manila dev team to support a wider variety of use cases out of the box. Hopefully network helpers allow us to move all of the network-specific code out of the existing drivers as well.
Network helpers are only needed by drivers running in one of the two multi_svm modes mentioned above, as single_svm drivers are presumed to have no network requirements. A network helper is a Python object that's instantiated by the share manager and supports a defined set of APIs. Drivers call the network helper during the SVM creation process to get the details needed to create the logical/virtual network interfaces.
Drivers running in the flat mode will only expect the network helper to return:
- network address
- broadcast address
- 1 or more IP addresses from the subnet for use by the SVM
Drivers running in the managed mode will expect the network helper to return all of the above plus:
- Segmentation method (VLAN/VXLAN/etc)
- Segmentation ID and any other info relevant to the segmentation method
Furthermore, drivers running in the flat mode can assume that the subnet details (network, mask, broadcast, gateway) are consistent and only the IP addresses change. In the managed mode, the network helper is free to use multiple networks and create new networks as needed.
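The two sets of return values above could be sketched as a pair of helper classes. This is a minimal illustration of the data a driver might expect back in each mode; the class names, method name, and field names are assumptions, since this spec does not fix the plugin API.

```python
# Illustrative sketch only: the helper API is pluggable and not fixed here.

class FlatNetworkHelper(object):
    """Flat mode: subnet details stay constant; only the allocated IPs change."""

    def __init__(self, network, netmask, broadcast, gateway, ip_pool):
        self.subnet = {'network': network, 'netmask': netmask,
                       'broadcast': broadcast, 'gateway': gateway}
        self._pool = list(ip_pool)  # IPs available for SVMs

    def allocate_network(self, ip_count=1):
        # Hand out the requested number of IPs from the configured range.
        ips = [self._pool.pop(0) for _ in range(ip_count)]
        info = dict(self.subnet)
        info['ip_addresses'] = ips
        return info


class ManagedNetworkHelper(FlatNetworkHelper):
    """Managed mode: everything above plus segmentation details."""

    def allocate_network(self, ip_count=1):
        info = super(ManagedNetworkHelper, self).allocate_network(ip_count)
        # A real managed helper could also create networks and routes, and
        # would choose the segmentation method and ID itself; the values
        # below are placeholders.
        info['segmentation_type'] = 'vlan'
        info['segmentation_id'] = 100
        return info
```

In flat mode a driver could call allocate_network() repeatedly and rely on everything except ip_addresses being identical each time; a managed helper is under no such constraint.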
The network helper object creates an interface for the share manager and driver to interact with, but the actual implementation is pluggable. Obviously the main network helper plugin will be Neutron-based, but other implementations will be possible. Specifically, I'm envisioning a plugin that interacts with Nova-network, for deployers who choose to use that instead of Neutron, and also at least one plugin that has no external dependencies, which should dramatically simplify creating simple test environments and will also enable various use cases when Manila is deployed without the rest of OpenStack.
Plugins can have their own config options, so that administrators can tune the operation of the network helper. This is essential because the administrator may want multiple instances of a network helper to correspond to multiple instances of backends, if each backend is plugged into a different flat network, for example.
Consider this use case:
enabled_backends=netapp1,netapp2

[netapp1]
driver=manila.share.drivers.netapp.cluster_mode.NetAppClusteredShareDriver
mode=flat_multi_svm
network_helper=helper1
...

[netapp2]
driver=manila.share.drivers.netapp.cluster_mode.NetAppClusteredShareDriver
mode=flat_multi_svm
network_helper=helper2
...

[helper1]
plugin=manila.network.simple.FlatNetworkHelper
network=192.168.10.0
netmask=255.255.255.0
gateway=192.168.10.1
ip_range=192.168.10.50-192.168.10.99

[helper2]
plugin=manila.network.simple.FlatNetworkHelper
network=10.2.0.0
netmask=255.255.0.0
gateway=10.2.0.1
ip_range=10.2.100.0-10.2.100.255
In this example, we have 2 physical cluster mode systems plugged into 2 existing flat networks. This config file allows the administrator to tell Manila about the existing network layout without requiring any interaction with an external tool and without putting any special network code in the driver. More importantly, it allows Manila to manage the cluster and create SVMs without requiring any network administrator access on the switches or routers, which is a common limitation in real world environments.
An alternative to the above would be to add the 2 physical networks to Neutron as public networks and to use a Neutron network helper with the relevant network names/IDs entered in the config file.
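That Neutron-based alternative might look something like this; the plugin path and option names are assumptions, since this spec doesn't define the Neutron helper's config interface:

[helper1]
plugin=manila.network.neutron.NeutronNetworkHelper
neutron_net_id=<UUID of the public network for backend 1>
neutron_subnet_id=<UUID of its subnet>

Here Neutron, rather than the helper's own config, is the source of truth for the subnet details and IP allocations.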
When single_svm mode drivers are used, the share network will effectively be ignored. In the case of flat_multi_svm backends, the subnet/network ID portion of the share network will be ignored. This is a change from existing behaviour, and in order to minimize the impact on existing users I propose:
- The subnet/network ID portions of a share network should become optional -- omitting them will imply a desire to use a backend in single_svm mode
- The administrator should be able to create "public" share networks which are visible and usable by all tenants (read only)
- The administrator should be able to designate one of the public share networks as the default share network, such that tenants aren't required to always specify a share network
- The administrator can change the default with a config file option
- The administrator can override the default share network on a per share-type basis with an extra_spec
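The last two points might be expressed in configuration as follows; both option and key names are placeholders, as the spec only says these knobs should exist:

[DEFAULT]
# hypothetical option naming one of the admin-created public share networks
default_share_network=public-net-1

and, per share type, an extra_spec such as share_network_default=public-net-2 set on the type to override that default.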