

Old Design Page

This page was used to help design a feature for a previous release of OpenStack. It may or may not have been implemented. As a result, this page is unlikely to be updated and could contain outdated information. It was last updated on 2014-05-15.

Provider Networks support for NVP Plugin


The aim of this blueprint is to implement support for the provider network API extension in the NVP Plugin.

Use Cases

The use cases for this blueprint are exactly the same as those of the original provider network blueprint, implemented for the Folsom release. There are no use cases specific to the NVP Plugin.

Implementation Overview

The implementation will follow the same approach used to add provider network support to other plugins, such as the OVS and Linux Bridge ones. The plugin will provide the data model classes for dealing with bindings between logical networks and physical networks; it will also perform authorization checks for operations on provider network extended attributes; finally, it will extend responses to include provider attributes where appropriate.

As this code is mostly recycled from the OVS plugin, this probably calls for two follow-up actions: 1) using the policy engine for authorization in this case (currently we are limited because we cannot use it to strip off attributes that should not be visible to standard users, such as the provider attributes); 2) providing hooks in the API framework that allow extensions to extend responses without replicating boilerplate code in each plugin.

Moreover, it is arguable whether we should keep the current approach in which plugins are defining the data model for provider network support, or adopt the mixin approach (see security groups and l3).

In any case, all these items are beyond the scope of this blueprint, which is specific to the NVP plugin. Separate bug reports/blueprints will be filed to this aim.

The Nicira plugin will perform the mapping between logical switches and physical networks using the NVP API. As this might result in multiple logical switches being created for a single Quantum network, the NVP plugin will need appropriate logic for managing a Quantum network mapped onto multiple logical switches.
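The core of that logic is picking a logical switch with spare port capacity when a port is created, and falling back to creating a new switch when all are full. The helper below is a minimal sketch of that selection step; the function name and the data layout are assumptions.

```python
# Illustrative sketch of the "one Quantum network -> many NVP logical
# switches" bookkeeping. Names and the data layout are assumptions.

def select_logical_switch(switches, max_ports):
    """Return the first logical switch with spare port capacity, or None.

    `switches` is an iterable of (switch_uuid, current_port_count) pairs;
    `max_ports` is the configured per-switch limit (see the configuration
    variables below). A None result means the caller should create a new
    logical switch for the network via the NVP API.
    """
    for switch_uuid, port_count in switches:
        if port_count < max_ports:
            return switch_uuid
    return None
```

For instance, with a 64-port limit, `select_logical_switch([('ls-1', 64), ('ls-2', 10)], 64)` would pick `'ls-2'`.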

Data Model Changes

The proposed data model changes are in line with the way this extension has been implemented by other plugins:

   ||<-4> NetworkBinding ||
   || network_id || String || ForeignKey('networks.id', ondelete="CASCADE") || primary_key=True ||
   || binding_type || Enum('flat', 'vlan', 'stt', 'gre') || || nullable=False ||
   || tz_uuid || String || || ||
   || vlan_id || Integer || || ||
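Rendered as a SQLAlchemy declarative model, the table above would look roughly as follows. The table name, column lengths and base class are assumptions; the actual plugin module may differ.

```python
# Minimal SQLAlchemy sketch of the NetworkBinding table above.
# Table name and string lengths are assumed, not taken from the plugin.
import sqlalchemy as sa
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class NetworkBinding(Base):
    """Binds a Quantum network to a transport zone / physical network."""
    __tablename__ = 'nvp_network_bindings'

    network_id = sa.Column(sa.String(36),
                           sa.ForeignKey('networks.id', ondelete='CASCADE'),
                           primary_key=True)
    binding_type = sa.Column(sa.Enum('flat', 'vlan', 'stt', 'gre'),
                             nullable=False)
    tz_uuid = sa.Column(sa.String(36))
    vlan_id = sa.Column(sa.Integer)
```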

Configuration variables

The following variables will be added to /etc/plugins/nicira/nvp.ini:

max_lp_per_bridged_ls (default: 64): maximum number of ports for each bridged logical switch

max_lp_per_overlay_ls (default: 256): maximum number of ports for each overlay (stt, gre) logical switch
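In the ini file these would appear roughly as below; the section name is an assumption, as the blueprint does not specify it.

```ini
# Sketch of the new nvp.ini entries (section name assumed)
[nvp]
# Maximum number of ports for each bridged logical switch
max_lp_per_bridged_ls = 64
# Maximum number of ports for each overlay (stt, gre) logical switch
max_lp_per_overlay_ls = 256
```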


Although this blueprint won't add or change anything in the Quantum API, it will require altering the validator for the network_type provider attribute. This is because NVP-specific network types need to be supported, and there is currently no way for a plugin to change a validator once the attributes have been loaded into the attribute map.
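One way the extension definition could allow this is a registration hook that lets a plugin add its own network types to the allowed set. The sketch below is illustrative only; the function and constant names are assumptions, not the actual Quantum extension code.

```python
# Hypothetical sketch: letting a plugin register extra network types
# (e.g. NVP's 'stt') with the network_type validator. All names assumed.

ALLOWED_NETWORK_TYPES = ['flat', 'vlan', 'gre']  # base types


def register_network_types(extra_types):
    """Allow a plugin to add its own network types before API startup."""
    for network_type in extra_types:
        if network_type not in ALLOWED_NETWORK_TYPES:
            ALLOWED_NETWORK_TYPES.append(network_type)


def validate_network_type(value):
    """Return an error string for invalid values, None when valid."""
    if value not in ALLOWED_NETWORK_TYPES:
        return "invalid network type: %s" % value
    return None
```

With such a hook, the NVP plugin would call `register_network_types(['stt'])` at load time, before the attribute map is frozen.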

This will be a minimal change to the extension definition which won't affect the way plugins currently work.

Test Cases

NVP-specific unit tests validating the plugin code will be added to the test_nicira_plugin module.