
Neutron/L2-GW


Overview

L2-GW (Layer-2 Gateway) is a service plugin that extends the OpenStack Networking (Neutron) services. L2-GW is being developed outside the main Neutron tree as a separate stackforge project. The L2-GW service plugin operates in conjunction with the ML2 plugin and its drivers, and leverages the l2pop (L2 population) mechanism driver.
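For orientation, a minimal configuration sketch follows. The service plugin path and driver names below are taken from the networking-l2gw project's documented setup; treat them as indicative and check the project documentation for your release:

   # /etc/neutron/neutron.conf - register the L2-GW service plugin
   # (plugin path per networking-l2gw docs; verify for your release)
   [DEFAULT]
   service_plugins = networking_l2gw.services.l2gateway.plugin.L2GatewayPlugin

   # /etc/neutron/plugins/ml2/ml2_conf.ini - the plugin leverages l2population
   [ml2]
   mechanism_drivers = openvswitch,l2population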

For the past few cycles, several stakeholders have used 'L2 Gateway' to mean different things, with each definition depending largely on their use case. In a general sense, an L2 gateway is an entity or resource that bridges two L2 domains (or networks) to form one seamless L2 broadcast domain. In the various proposals it is typical that at least one of the L2 domains is Neutron-orchestrated and the other exists outside the cloud in which the service resides (e.g. in a datacentre, a WAN, or another cloud). A debatable inclusion is bridging an L2 domain within Neutron (i.e. a network) to one that is an overlay on another domain (e.g. a VLAN on top of a Neutron network). The characteristics of an L2 domain (e.g. VLANs, an overlay tunnel, another OpenStack domain) can therefore lead to different definitions or implementations of an L2 gateway.

In order to make forward progress and define an initial version of the L2-GW API, the team decided to take one use case and move forward. There is an explicit understanding that this API may not cover all use cases, though the intent is to cover as many as possible. A Use Cases section is included in this wiki to list the known use cases that pertain to the L2 Gateway.

L2 Gateway Demos

Watch the following YouTube demo on Arista hardware:

Watch the following YouTube demo on HP hardware:

Meetings

The L2-GW team meets bi-weekly. Meeting details can be found at IRC meetings

Blueprints and Specs

Kilo Release Specs

The team's charter is to implement an initial version of the L2-GW API in the Kilo cycle. The spec is available here:

L2 Gateway Deployment Documentation

Code Reviews

Active patches under review can be seen here:

Merged patches can be seen here:

Project Repo

The L2-Gateway project is implemented as a stackforge project. The project repo is here:

L2 Gateway Python package available for download

The L2-Gateway package can be installed by issuing:

   sudo pip install networking-l2gw

Alternatively, the L2 Gateway package can be downloaded from:

  https://pypi.python.org/pypi/networking-l2gw/2015.1.1
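To install the specific release linked above, the version can be pinned:

   sudo pip install networking-l2gw==2015.1.1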


Design categories

There are three categories of design that have been proposed relating to L2 Gateways.

1. A new Neutron 'block' attached to networks that bridges one network to an overlay on another network. For instance, a concentrator that takes multiple Neutron networks and encapsulates them as VLANs on another network, or as VLANs on a port that feeds into a VM. This is not the primary focus of this group, which is more interested in the internal-external cases, but the design certainly bears similarities to the internal-external case, particularly in the need to attach to a network in the manner of a bridge (network attachments via ports are intended in current Neutron to receive traffic explicitly addressed to that port, with a variety of workarounds as the port-address mapping breaks down).

2. A service that bridges to a Neutron network - via either a specialised port or some form of implicit attachment that does not require a port - and exposes traffic inside the cloud to the outside world. This category is characterised by an API that details the external attachment type and the location to which the network is bridged. Proposed APIs have been either abstract or concrete - 'abstract' meaning that the means of getting the packet to this port is not explicitly detailed, 'concrete' meaning that the API includes programming details of the device that is performing the connection, such as a cloud edge router or a switch (a sketch of such a concrete API appears after this list). Different APIs must exist for different external domains (e.g. MPLS tunnels of various types, VLANs out of switch ports, and so on).

3. An extension to the Neutron API that simply exposes the Neutron network for use by external controllers but does not include APIs to describe the method of bridging (the 'edge network' approach: https://blueprints.launchpad.net/neutron/+spec/cloud-edge-networking). This assumes that a second, entirely independent API exists outside of Neutron (but perhaps inside OpenStack, in that it can be registered with Keystone as an endpoint) to link the two bridge domains and describe the external bridge domain. This requires very limited changes to Neutron and does not require specification of the API in Neutron itself, but instead means that a separate non-Neutron service must be defined and implemented to take that description and relate it to Neutron.
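As an illustration of the 'concrete' end of category 2, a request against the gateway API might describe the device and interface explicitly. This sketch is modelled on the resource shape of the networking-l2gw extension; the exact schema shown here is illustrative, not authoritative:

   POST /v2.0/l2-gateways
   {
       "l2_gateway": {
           "name": "gw1",
           "devices": [
               {"device_name": "switch1", "interfaces": [{"name": "port1"}]}
           ]
       }
   }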

These designs each address some, but not all, of the use cases for L2 bridging, and hence there has been no clear choice with obvious advantages.

It is important to note that the L2 element of the cloud edge is an administrator-only domain - that is, tenants of the cloud should not have direct insight into the way in which the cloud is integrated into the hardware environment within the DC and the wider network. (L3 use cases are different - where the tenant wishes to encapsulate in an L3 protocol, they may be able to acquire and use a public L3 address - and this is already largely addressed by VPNaaS within Neutron and does not need to be privileged.) Therefore the expectation is that an administrator would set up the bridge between a network and the world, and an individual cloud tenant would get use rights to the network. The tenant cannot change the bridge beyond, perhaps, destroying it, and if they have a self-service API it would likely sit at a higher level than the L2 Gateway API itself, so that it can run privileged L2GW operations to implement the more restricted set of unprivileged services offered to the tenant.

Use Cases

The L2 Gateway eventually proposed may address all or only a subset of these use cases. The list is intended to be comprehensive, to ensure that we make informed decisions on which subset is best to implement, and so that we can propose how other needs might be met without using the L2GW API and why they are out of scope.

Services

Creating new aaS-resold services within Neutron using hardware: This assumes that a piece of physical hardware, such as a load balancer, exists outside of Neutron, and the code implementing the service will use Neutron and L2 Gateway calls to connect that device - typically using some sort of trunking port - to a network within Neutron to which the Neutron-side virtual service is meant to be attached. The most common use case is to use virtual load balancers in Neutron backed by a physical load balancer appliance within the same DC as the cloud itself. The APIs are, as such, intended to be used by a 'service tenant' that manages the service, and the code in this tenant is the privileged-unprivileged code mentioned above.

Creating new aaS-resold services within Neutron using VMs: This assumes that a number of VMs are controlled by a service tenant, and the service tenant wishes to use those VMs to implement services offered to normal tenants. A single VM may serve multiple tenants (for cost reasons), and multiple VMs may be used to provide one tenant's service (for redundancy or scalability reasons).

Linking Ironic machines to Neutron: This assumes that a number of physical machines are controlled by Ironic, and the easiest way of linking them to the cloud is to reduce the operation 'connect this machine to this network' that Neutron must provide to a more basic 'connect this switch port to this network' operation that falls within the remit of L2GW operations.
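As a sketch of how this primitive might look with the networking-l2gw CLI extensions (switch1, port1, ironic-gw and ironic-net are placeholder names; the exact option syntax should be checked against the client documentation):

   # Define a gateway backed by the physical switch port the Ironic machine is cabled to
   neutron l2-gateway-create --device name=switch1,interface_names=port1 ironic-gw
   # Bridge that switch port onto the target Neutron network
   neutron l2-gateway-connection-create ironic-gw ironic-net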

Bridging within the DC

Linking testbeds to the cloud: Here, physical hardware forms part of a test setup that is partly virtual, and a topology is created using Neutron to link this into a testbed setup that is then used to run tests. The hardware exists beyond the cloud edge, attached e.g. to a software-programmable switch, and a service is required to program that switch to link a specified port to a specific tenant network in Neutron.

Provider networks: Included for completeness, there is already a method of doing L2 bridging from cloud to neighbouring network using the current provider network functionality in Neutron. This offers a method of linking the cloud to the DC outside the cloud, but is not programmable: a specific physical network is described in static Neutron configuration that is not changeable by API, and the provider network extension allows linking of that network, or an encap over that network (only VLANs are supported by the API), to a tenant network in Neutron.
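For comparison, such a provider network is created by an administrator at network-creation time using the provider extension attributes; a minimal sketch (physnet1 and VLAN 100 stand in for values from the static configuration):

   neutron net-create dc-net --provider:network_type vlan \
                             --provider:physical_network physnet1 \
                             --provider:segmentation_id 100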

Bridging to the wider world

Virtual datacentre: Typically but not exclusively involving MPLS, this involves providing an API to link the cloud to a customer network using overlay technologies within a WAN to which the network is attached. Many forms exist, both L2 and L3VPN. The common thread is that an encapsulation must be made, bridging traffic from a nominated Neutron network to the external network, and this bridging is usually done by means of encap/decap. That process may or may not involve a nominated external piece of hardware to do the work, and it may be done by means of simple encapsulation ('use these connection details') or by linking to a logical entity within the network ('use this MPLS network'). It may also involve interacting with network information distribution services ('advertise these routes into this MPLS overlay').

VPNaaS: Again included for completeness, this covers two L3VPN technologies that are already provided, or expected to be provided, by an existing Neutron service: allowing the tenant itself to program a connection from one endpoint in one cloud to another endpoint in another site, linked by e.g. IPsec; and allowing 'road warrior' access from a mobile endpoint to an endpoint offered by the cloud, giving access to administrative networks within a tenant's application.

L2 gateway setup using devstack

1. Download DevStack

2. Add this repo as an external repository, i.e. add the lines below at the end of local.conf:

   [[local|localrc]]
   enable_plugin networking-l2gw https://github.com/stackforge/networking-l2gw
   enable_service l2gw-plugin l2gw-agent
   OVSDB_HOSTS=<ovsdb_identifier>:<ovsdb_server_ip>:<ovsdb_server_port>

3. Run stack.sh
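Once stack.sh completes, the gateway workflow can be exercised through the CLI extensions installed by the plugin. A sketch with placeholder names (switch1 and port1 must match entries known to the OVSDB server configured above; private is an existing Neutron network):

   # Create a logical gateway pointing at a switch managed via the OVSDB connection above
   neutron l2-gateway-create --device name=switch1,interface_names=port1 gw1
   # Bind the gateway to a Neutron network, tagging traffic with VLAN 100 on the switch side
   neutron l2-gateway-connection-create --default-segmentation-id 100 gw1 private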

Roadmap