DynamicCloudNetworkService

Summary

Draft proposal for the Dynamic Cloud Network Service (DCNS). This work follows the "network as a service" (NaaS) model by providing seamless layer-2 intra- and inter-cloud networking over an arbitrary topology of software and hardware switches.

Rationale

NaaS "provides flexibility in creating networks + associating devices to support interesting network topologies between VMs from the same Tenant." However, The current design and reference implementation only address host-level virtual networking and assume static physical networking over hardware switches. No plug-in provides network service that dynamically sets up and tears down arbitrary layer-2 network with necessary QoS features. None addresses networking in wide area, mutli-domain environment either.

The DCNS proposal complements the existing Network-Service with extensions that address the above issues and provide the following:

  • a unified networking solution for dynamically setting up intra- and inter-cloud layer-2 project networks
  • dynamic networking for better capacity efficiency, scalable network plumbing, automated operation and network resilience
  • support for multi-vendor, heterogeneous networking via technologies such as the Inter-Domain Controller (IDC), DRAGON and OpenFlow

By leveraging "off-the-shelf" open source dynamic networking technologies, DCNS will be a natural extension to NaaS, bringing control-plane-based seamless virtual and physical intra- and inter-cloud networking into OpenStack.

Note that we use "Network-Service" as a general term covering both the pre-existing Nova network-related workflow and Quantum L2 services.

User Stories

1. Dynamic physical VLAN per project

  • Create a VLAN for a project over physical switches, bridging host ports into an isolated, bandwidth-guaranteed layer-2 network
  • Incrementally add and remove instances from the project VLAN

2. Dynamic cloud network over multi-domain WAN

  • Compute nodes of a cloud are distributed over a multi-domain wide area network
  • The network-service plugin contacts an IDC to create the project VLAN over the WAN

3. Seamless virtual+physical cloud networking (future)

  • A single network-service plugin that uses OpenFlow technology to control Open vSwitch and external hardware switches, creating a VLAN that connects all instance vifs
  • A unified network-service that creates virtual networks through the same API calls

4. Inter-cloud dynamic network (future)

  • The layer-2 networking requirement is similar to the multi-domain WAN case
  • Implications for cloud-user-facing services, the scheduler and inter-cloud compute resource allocation are TBD

Design

Concept

This blueprint addresses only the Option 2 design as described here.

In this design, DCNS is implemented as an agent service that can either be embedded into a network-service plugin or invoked as an external service by one. This design

  • requires no change to the scheduler,
  • completely follows the NaaS model,
  • is an optional service for network plugins,
  • requires little global configuration or DB schema change, and no change to layer-3 functions such as security groups,
  • keeps the rest of OpenStack agnostic to the physical topology, and
  • can more easily evolve toward seamless virtual+physical cloud networking.

--TODO: Add drawing here --

DCNS API

The DCNS API provides the following service calls:


    def setup_physical_network(self, project, host_ports, bw, vlan)
  • Arguments:
    • //project//: project identifier.
    • //host_ports//: array of host+port name strings, each in the format hostname:portname. Example: [bespin107:eth1, bespin109:eth1]
    • //bw//: bandwidth as a string. Example: 100Mbps
    • //vlan//: integer VLAN tag. Example: 1001
  • Return:
    • 1: success
    • error code: failure
  • Note:
    • All links in the project network have uniform directional bandwidth.
    • A single VLAN tag is assigned to all ports; no VLAN translation is performed.


    def modify_physical_network(self, project, add_host_ports, rem_host_ports, bw, vlan)
  • Arguments:
    • //add_host_ports//: array of host+port names to add to the project network, each in the format hostname:portname.
    • //rem_host_ports//: array of host+port names to remove from the project network, each in the format hostname:portname.
  • Return:
    • 1: success
    • error code: failure
  • Note:
    • //bw// and //vlan// are optional arguments. When present, they request modification of the project network to the new bandwidth and/or VLAN tag values.


    def teardown_physical_network(self, project)
  • Return:
    • 1: success
    • error code: failure


    def get_network_info(self, project)
  • Return:
    • hash of project network information, including host_ports, URN mappings, bandwidth, VLAN tag, etc.
    • None: project network does not exist
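
For illustration only, the sketch below shows how a network-service plugin might drive these calls end to end. The //DcnsApi// class name, the error-handling style and the third host name are assumptions for this sketch, not part of the specification:

    # Illustrative end-to-end use of the DCNS API calls above. The DcnsApi
    # class name and error-handling style are assumptions for this sketch.
    from nova.network.dcns import api as dcns_api

    dcns = dcns_api.DcnsApi()          # hypothetical API entry point
    project = 'project-demo'

    # Bridge two host ports into an isolated, bandwidth-guaranteed VLAN.
    ret = dcns.setup_physical_network(project,
                                      host_ports=['bespin107:eth1',
                                                  'bespin109:eth1'],
                                      bw='100Mbps', vlan=1001)
    if ret != 1:
        raise RuntimeError('setup failed with error code %s' % ret)

    # Incrementally grow the project network as instances are added.
    dcns.modify_physical_network(project,
                                 add_host_ports=['bespin110:eth1'],
                                 rem_host_ports=[])

    # Inspect the provisioned network, then tear it down.
    info = dcns.get_network_info(project)  # host_ports, URN mappings, bw, VLAN
    dcns.teardown_physical_network(project)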

DCNS Agent

  • **Physical topology**
    • The physical topology is the topology of hardware switch connectivity, together with traffic engineering information including bandwidth, VLAN ranges, switching capabilities, etc.
    • The DCNS agent is responsible for retrieving physical topology information via network protocols or from existing configuration, such as a topology description file.
    • The DCNS agent maintains the physical topology in its local database and tracks resource allocations.
  • **Host_port name mapping**
    • DCNS represents the "touch points" (TPs) of the physical topology to host ports in URN format. Example: //urn:ogf:network:domain=testdomain-1:node=node-1-1:port=ge-1/1/0:link=*//.
    • The DCNS agent maintains a TP<-->host_port mapping so that, upon receiving a request with a list of host_ports, it can translate it into a request with TPs as the end points of the dynamic network topology (see the sketch following this list).
    • The mapping is semi-static, created from manual input.
  • **DCN drivers**
    • The DCNS agent provisions the physical network by utilizing open source dynamic circuit network (DCN) technologies. Here we consider both point-to-point and multipoint-to-multipoint layer-2 VLANs with QoS as dynamic circuits.
    • We have multiple options for setting up the dynamic layer-2 VLAN, each implemented as a DCN driver that is called by the DCNS agent.
      • DRAGON driver
      • IDC/OSCARS driver
      • OpenFlow driver
    • The top choice for this design is the IDC/OSCARS driver: only an IDC can create both intra- and inter-domain dynamic circuits, and IDC/OSCARS can sit on top of DRAGON, which eliminates the need for a direct DRAGON driver.
    • IDC/OSCARS currently does not support OpenFlow networking, and it supports neither multipoint networks nor incremental addition and removal of ports. We believe such support may soon be added to IDC/OSCARS.
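
The sketch below illustrates the TP<-->host_port translation step described above. The dictionary layout and helper name are assumptions, and the second URN entry is invented for illustration:

    # Illustrative TP<-->host_port translation. The dict layout and helper
    # name are assumptions; the semi-static map would come from manual input.
    TP_MAP = {
        'bespin107:eth1':
            'urn:ogf:network:domain=testdomain-1:node=node-1-1:port=ge-1/1/0:link=*',
        'bespin109:eth1':   # second entry invented for illustration
            'urn:ogf:network:domain=testdomain-1:node=node-1-2:port=ge-2/1/0:link=*',
    }

    def host_ports_to_tps(host_ports):
        """Translate hostname:portname strings into the URN-formatted TPs
        that serve as end points of the dynamic network topology."""
        try:
            return [TP_MAP[hp] for hp in host_ports]
        except KeyError as e:
            raise ValueError('no touch point mapped for host port %s' % e)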

Network-Service Plugin + DCNS Workflow

  • The Nova network-service workflow takes network requests from tenants and the compute-service and passes them to each compute node.
  • Both the network-service on the cloud controller node and every compute node (if not collocated on the controller node) interact with the DCNS agent. The workflow is as follows, with a schematic sketch after the list:
    1. Upon receiving a create-network request, the network-service first determines whether DCNS is available and desirable (via flag configuration in nova.conf).
    2. The network-service or compute node then determines whether the request creates a new project network or adds/removes instances from an existing one.
    3. Accordingly, it issues either a //setup_physical_network// or a //modify_physical_network// call to the DCNS agent.
    4. The DCNS agent sets up or modifies the physical network (VLAN circuit) once it has collected the requests that combine into a requirement for L2 connectivity on the physical switches.
    5. //Optionally//, the network-service verifies that both the virtual and physical networks were properly created or modified, using the //get_network_info// API call for the physical part.
    6. Upon terminating the project network, the network-service or compute node issues the //teardown_physical_network// API call.
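
A schematic of this workflow from the network-service side is sketched below. The //use_dcns// flag name and helper structure are assumptions; the actual integration lives in the LaunchPad branch:

    # Schematic of workflow steps 1-5 above. The 'use_dcns' flag and helper
    # names are assumptions, not the committed integration.
    from nova import flags
    from nova.network.dcns import api as dcns_api

    FLAGS = flags.FLAGS
    dcns = dcns_api.DcnsApi()          # hypothetical API entry point

    def handle_create_network(project, host_ports, bw, vlan):
        # Step 1: is DCNS available and desirable (nova.conf flag)?
        if not getattr(FLAGS, 'use_dcns', False):
            return
        # Step 2: new project network, or a change to an existing one?
        if dcns.get_network_info(project) is None:
            # Step 3: create the project VLAN circuit.
            ret = dcns.setup_physical_network(project, host_ports, bw, vlan)
        else:
            # Step 3: add the new ports to the existing circuit.
            ret = dcns.modify_physical_network(project, host_ports, [],
                                               bw, vlan)
        # Step 5 (optional): verify the physical network exists.
        if ret == 1 and dcns.get_network_info(project) is None:
            raise RuntimeError('physical network verification failed')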

Implementation

The following code will be added:

Addition of the DCNS agent, with the code structure below:

dcns/
    __init__.py
    api.py               # DCNS API
    manager.py           # DCNS agent service
    topology.py          # physical topology retrieval and maintenance, with host/nic to topology touch-point mappings
    pce.py               # P2P and MP2MP path computation
    driver_base.py       # base class for pluggable drivers
    drivers/
            openflow/
            idc_oscars/
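
As a rough sketch of what //driver_base.py// could look like, below is a pluggable contract implemented by the openflow/ and idc_oscars/ drivers. The method names mirror the DCNS API but are assumptions about the eventual driver interface:

    # Illustrative driver contract for driver_base.py. Method names mirror
    # the DCNS API but are assumptions, not the committed interface.
    class DCNDriverBase(object):
        """Base class for pluggable dynamic circuit network (DCN) drivers."""

        def setup_circuit(self, tps, bw, vlan):
            """Provision a P2P or MP2MP layer-2 VLAN circuit across the
            given topology touch points (TPs)."""
            raise NotImplementedError()

        def modify_circuit(self, add_tps, rem_tps, bw=None, vlan=None):
            """Incrementally add/remove TPs or change bandwidth/VLAN."""
            raise NotImplementedError()

        def teardown_circuit(self, tps):
            """Release the circuit and its network resources."""
            raise NotImplementedError()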


The code is uploaded to LaunchPad.

The LaunchPad URL: https://code.launchpad.net/~usc-isi/nova/dynamic-cloud-network-service

The code is branched from Network Refactoring: https://code.launchpad.net/~midokura/nova/network-refactoring-l2

For release 2011.2-0ubuntu0ppa1~lucid1, there is also a manual patching package located inside the dynamic-cloud-network-service branch: http://bazaar.launchpad.net/~usc-isi/nova/dynamic-cloud-network-service/files/head:/nova/network/dcns/other/nova-lucid-patch/

Test/Demo Plan

TBD

Unresolved Issues

1. What will we do for User Stories 3 and 4, which are longer-term goals for DCNS? Do we need to spin them off into separate blueprints?

2. At this point, the Quantum L2 service has not been integrated into the Nova network-service workflows. We will keep watching that work in progress and make the necessary changes for Quantum L2 service integration.

3. We start with a DCNS driver that supports OpenFlow in the initial implementation, leaving IDC/OSCARS+DRAGON to a future implementation, particularly for when WAN and multi-domain connectivity is required.