

This blueprint is now superseded.

Please refer to this wiki page for the latest updates on OpenStack's network service.


NaaS: Network as a Service

Openstack-NaaS: The customer-facing service proposed by this blueprint. This distinguishes it from the existing nova-network.

Higher Layer services: L4/L7 network services which might be enabled for networks created by NaaS.

The OpenStack NaaS API: The customer-facing API exposed by openstack-NaaS.

VIF: Virtual InterFace. A VM's network interface. Also known as a vNIC.


The goal of this blueprint is to add a first-class, customer-facing service for the management of network infrastructure within an OpenStack cloud. This will allow service providers to offer "Networking as a Service" (NaaS) to their customers.

This blueprint discusses goals, use cases, requirements and design ideas for the features and capabilities to enable in openstack-NaaS, in order to create and manage networks intended as collections of virtual ports with shared connectivity, which provide VM instances with Layer-2 and possibly Layer-3 connectivity.

Higher-layer services, such as Firewall, NAT, VPN, and Load Balancing, will instead be provided by distinct services communicating with NaaS through its exposed APIs. L4/L7 services are discussed at this wiki page.


The main aim of NaaS is to provide OpenStack users with a service for providing Layer-2 networking to their VM instances; a network created with NaaS can indeed be regarded as a virtual network switch, together with the network devices attached to it, which potentially spans all the compute nodes in the cloud. Apart from providing Layer-2 services, NaaS also aims at providing Layer-3 networking, intended as IP configuration management and IP routing configuration.

NaaS APIs should be decoupled from the implementation of the network service, which should be provided through plugins. This implies that NaaS does not mandate any specific model for created networks, either at Layer-2 (e.g.: VLANs, IP tunnels) or at Layer-3 (e.g.: file-system-based injection, DHCP).
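
To make this decoupling concrete, a plugin contract might look like the minimal sketch below. This is purely illustrative: no plugin interface has been agreed yet, and the class and method names are assumptions.

    import abc


    class NetworkPlugin(abc.ABC):
        """Hypothetical interface a Layer-2 plugin might implement.

        NaaS would call these methods; how each is realised (VLANs, GRE
        tunnels, a vendor controller, ...) is entirely up to the plugin.
        """

        @abc.abstractmethod
        def create_network(self, tenant_id, name):
            """Create an L2 network and return its unique identifier."""

        @abc.abstractmethod
        def delete_network(self, network_id):
            """Delete the network and release any backing resources."""

        @abc.abstractmethod
        def plug_vif(self, network_id, port_id, vif_id):
            """Attach a VM's virtual interface to a port on the network."""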


Goal 1: Allow customers and CSPs to create, delete, and bridge networks. Networks can either be private, i.e.: available only to a specific customer, or shared. Networks shared only among a specific group of customers can also be considered.

Goal 2: Allow customers and CSPs to manage virtual ports for their networks, and attach instances or other network appliances (physical or virtual) available in the cloud to them.

Goal 3: Allow customers and CSPs to extend their networks from the cloud to a remote site, by attaching a bridging device within the cloud to their networks; the bridging device would then bridge to the appropriate remote site.

Goal 4: Allow customers and CSPs to manage IP configuration for their networks. Both IPv4 and IPv6 should be supported. Zero, one, or more IP subnets can be associated with a network.

Goal 5: Allow customers and CSPs to define IP routes among subnets. Routes can be defined either across subnets in the same virtual network, or across IP subnets in distinct virtual networks owned by the same tenant.

Goal 6: Allow customers and CSPs to monitor networks by making available statistical information, such as the total number of bytes transmitted and received per network, port, or VIF. IP-based statistics should also be available.

Goal 7: Allow customers and CSPs to securely configure network policies for networks, ports, and the devices attached to them. These policies can include, for instance, port security policies, access control lists, high availability, or QoS policies (which are typically available on physical network switches). Only a basic set of configuration options will be supported; the remaining network policies should be treated as plugin-specific and always be configured through API extension mechanisms.

Goal 8: Allow CSPs to register and configure the plugins providing the actual implementation of the network service. CSPs should be able to select and plug in third-party technologies as appropriate. This may be for extended features, improved performance, or reduced complexity or cost.

Use cases

Use case 1: Create private network and attach instance to it

Related topology: Each tenant has an isolated network. Tenants are free to do whatever they want on their network, because it is completely isolated from those of other tenants. This is the NASA Nebula model as of today, implemented using one VLAN per tenant. Note that although this same model could equally be implemented with different technologies (e.g. GRE tunnels instead of VLANs for isolation), this would not change the nature of the model itself.

  1. Customer uses the NaaS API to create a Network;
  2. On success, the NaaS API returns a unique identifier for the newly created network;
  3. Customer uses NaaS API for configuring a logical port on the network;
  4. Customer invokes Cloud Controller API to run an instance, specifying network and virtual port for it;
  5. Cloud Controller API dispatches request to compute service;
  6. The compute service creates the VM and its VIFs. For each VIF, it asks NaaS to plug it into the port and network specified by the Customer in (4).
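
The flow above might translate into calls along the following lines. This is a toy, in-memory sketch: the NaaS API has not been specified yet, so every class and method name below is an assumption. The same client-side pattern also covers use cases 2 and 3, with a pre-existing public network or a bridge device taking the place of steps 1-2.

    import uuid


    class FakeNaaS:
        """Toy stand-in for the NaaS service, for illustration only."""

        def __init__(self):
            self.networks, self.ports = {}, {}

        def create_network(self, tenant, name):        # steps 1-2
            net_id = str(uuid.uuid4())
            self.networks[net_id] = {"tenant": tenant, "name": name}
            return net_id

        def create_port(self, net_id):                 # step 3
            port_id = str(uuid.uuid4())
            self.ports[port_id] = {"network": net_id, "vif": None}
            return port_id

        def plug(self, port_id, vif_id):               # step 6
            self.ports[port_id]["vif"] = vif_id


    naas = FakeNaaS()
    net = naas.create_network("acme", "private-net")
    port = naas.create_port(net)
    naas.plug(port, "vif-0")   # in reality done by the compute service, not the customer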

Use case 2: Attach instance to default public network

Related topology: Similar to the 'Flat' mode currently supported by nova-network. Instances from different customers are all deployed on the same virtual network. In this case, the Core NaaS service can provide a port isolation policy in order to ensure VM security.

  1. Customer uses NaaS API to retrieve public networks;
  2. On success, the NaaS API returns a list of unique network identifiers; Customer selects a network from this list;
  3. Customer uses NaaS API for configuring a logical port on the network;
  4. Customer invokes Cloud Controller API to run an instance, specifying network and virtual port for it;
  5. Cloud Controller API dispatches request to compute service;
  6. The Nova Compute service creates the VM and its VIFs. For each VIF, it asks NaaS to plug it into the port and network specified by the Customer in (4).

The main difference between this use case and the previous one is that the Customer uses a pre-configured network instead of creating it. Another point that needs to be discussed is whether customers should be allowed to manage ports for public networks. Alternatively, the compute service can implicitly create a port when it contacts the NaaS API.

Use case 3: Register bridge for connecting cloud network to other site

Related topology: Customer on-premise data centre extending into the cloud, or interconnecting networks in distinct clouds. Although the actual implementation can be provided in several ways, we are interested in the abstract model: a single connectivity domain spanning two or more networks in distinct administrative domains.

  1. Customer uses NaaS API to register a bridge for its network;
  2. On success the NaaS API returns the bridge identifier;
  3. Customer uses NaaS API to provide bridge configuration (e.g.: remote endpoint, port, credentials);
  4. Customer uses NaaS API to create a virtual port on the network for the bridge device;
  5. Customer uses NaaS API to plug the bridge device into the network.

Use case 4: Retrieve statistics for a network

  1. Customer uses NaaS API to retrieve a specific network;
  2. On success the NaaS API returns the network identifier;
  3. Customer uses NaaS API to retrieve statistics for the network. Filters can be used to specify which data should be retrieved (e.g.: total bytes RX/TX), and for which components of the network they should be retrieved (whole network, specific port(s), specific VIF(s))
  4. NaaS invokes the implementation plugin to retrieve required information;
  5. NaaS API returns the statistics and the customer processes them.
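
As an illustration of the filtering in step 3, consider the sketch below; the sample layout and the rx_bytes/tx_bytes field names are assumptions made for this example.

    # Hypothetical statistics filter for use case 4.
    def get_stats(samples, network_id, port_id=None, fields=("rx_bytes", "tx_bytes")):
        """Return the requested counters, optionally narrowed to one port."""
        selected = [s for s in samples
                    if s["network_id"] == network_id
                    and (port_id is None or s["port_id"] == port_id)]
        return [{f: s[f] for f in fields} for s in selected]


    samples = [
        {"network_id": "net-1", "port_id": "p-1", "rx_bytes": 1024, "tx_bytes": 2048},
        {"network_id": "net-1", "port_id": "p-2", "rx_bytes": 512, "tx_bytes": 256},
    ]
    print(get_stats(samples, "net-1", port_id="p-1"))   # [{'rx_bytes': 1024, 'tx_bytes': 2048}]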

NOTE: the actor for this use case can be either a customer or a higher-layer monitoring or billing service, e.g. a service which charges customers according to their network usage.

Use case 5: Configure network policies

  1. Customer uses NaaS API to retrieve a specific network;
  2. On success the NaaS API returns the network identifier;
  3. Customer uses NaaS API to enforce a policy on the network; for instance a policy can specify a maximum bit rate on a specific port or block a specific protocol over the whole network;
  4. NaaS invokes the implementation plugin to enforce the policy;
  5. NaaS API informs the user about the result of the operation.
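
A basic policy from step 3 could be modelled as a plain object that NaaS hands to the plugin for enforcement. The vocabulary below (per-port rate limit, blocked protocols) is an assumption chosen for illustration.

    from dataclasses import dataclass, field
    from typing import Optional, Set


    @dataclass
    class NetworkPolicy:
        """Hypothetical basic policy; anything richer would be an API extension."""
        network_id: str
        max_kbps: Optional[int] = None                  # per-port rate limit
        blocked_protocols: Set[str] = field(default_factory=set)


    policy = NetworkPolicy("net-1", max_kbps=10000, blocked_protocols={"smtp"})
    # NaaS passes the policy to the plugin, which enforces it through
    # whatever proprietary mechanism the underlying switches support.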

Use case 6: Configure an IP subnet and attach instances to it

  1. Customer uses NaaS API to retrieve a specific network;
  2. On success the NaaS API returns the network identifier;
  3. Customer uses NaaS API to create an IP subnet, specifying the CIDR (or, alternatively, network address and netmask) and the gateway;
  4. On success, the NaaS API returns a unique identifier for the newly created subnet;
  5. Customer invokes NaaS API in order to attach a VIF, already plugged into one of their L2 networks, to the newly created subnet;
  6. NaaS verifies sanity of input data (e.g.: VIF attached to the appropriate network);
  7. NaaS invokes the IP configuration management plugin to provide the supplied VIF with the appropriate configuration.
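
The sanity check in step 6 could be as simple as the sketch below, which uses Python's standard ipaddress module; the dictionary layout is an assumption.

    import ipaddress


    def attach_vif_to_subnet(vif, subnet):
        """Hypothetical check: the VIF must sit on the subnet's L2 network."""
        if vif["network_id"] != subnet["network_id"]:
            raise ValueError("VIF is not attached to the subnet's network")
        net = ipaddress.ip_network(subnet["cidr"])
        # At this point NaaS would ask the L3 plugin to push an address from
        # `net` to the VIF (via DHCP, an agent, filesystem injection, ...).
        return net


    subnet = {"network_id": "net-1", "cidr": "10.0.0.0/24", "gateway": "10.0.0.1"}
    vif = {"id": "vif-1", "network_id": "net-1"}
    print(attach_vif_to_subnet(vif, subnet))   # 10.0.0.0/24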

Use case 7: Configure a route between two IP subnets

For this use case, we assume that the customer already has the identifiers (URIs, UUIDs) of the two subnets that should be routed.

  1. Customer invokes NaaS API to create a route between the two subnets;
  2. NaaS API creates appropriate routes using the CIDRs and gateway addresses of the two subnets;
  3. NaaS returns a unique identifier for the newly created route.

The way in which the IP route is created (e.g.: manipulating route tables on instances, manipulating routing tables in hypervisors, configuring a router virtual appliance, etc.) is plugin-specific. Routing attributes, such as distance, cost, and weight, should also be part of extension APIs, as they are not required to provide the basic functionality.
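
For instance, with the "manipulating route tables on instances" option, step 2 of the use case could derive the two route entries as in this hypothetical sketch; the field names are illustrative only.

    import ipaddress


    def make_route(subnet_a, subnet_b):
        """Return the route entries for a route between two subnets."""
        dest_a = ipaddress.ip_network(subnet_a["cidr"])
        dest_b = ipaddress.ip_network(subnet_b["cidr"])
        # Instances in A reach B via A's gateway, and vice versa; how the
        # entries are actually installed is plugin-specific, as noted above.
        return [
            {"apply_to": "subnet_a", "destination": str(dest_b), "nexthop": subnet_a["gateway"]},
            {"apply_to": "subnet_b", "destination": str(dest_a), "nexthop": subnet_b["gateway"]},
        ]


    print(make_route({"cidr": "10.0.0.0/24", "gateway": "10.0.0.1"},
                     {"cidr": "10.0.1.0/24", "gateway": "10.0.1.1"}))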

Plug-in use cases

In this section, we give examples of the technologies that a service provider may wish to plug into NaaS to provide L2/L3 networking. Some of these technologies are commercial in nature. Where this is the case, this blueprint will include bindings for an open-source alternative, so that the NaaS API is completely supported by open-source software. It is a goal of this blueprint that a service provider may select different implementation technologies.

  • Distributed virtual switches
  • Inter-datacenter tunnels
  • IP address management


R1. Add a first-class, customer-facing service for management and configuration of network infrastructure via a RESTful API. This service shall be known as openstack-NaaS, and the API that it exposes shall be known as the OpenStack NaaS API.

R2. Modify nova-compute to obtain network details by calling openstack-NaaS through its public API, rather than calling nova-network as it does today.
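
A minimal sketch of what R2 could mean on the compute side follows; the endpoint layout is an assumption, since the NaaS API has not been agreed yet.

    import json
    import urllib.request


    def get_network_details(naas_endpoint, network_id):
        """Fetch network details from NaaS over HTTP instead of calling nova-network."""
        url = "%s/networks/%s" % (naas_endpoint, network_id)
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)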

... lots more coming here don't worry!

Design Ideas

At this stage, any attempt to provide a complete design for NaaS would be quite unrealistic. This section should therefore be regarded as a first attempt to define general design guidelines.

At a very high-level view the Network Service and its interactions with other entities (compute service, database, plugins) can be summarized as follows:


  • The Nova Compute service should not have any knowledge of the hypervisor's networking stack and uses the NaaS API for plugging VIFs into networks (this is slightly different from the current nova design, where part of the network manager's code - namely setup_compute_network - is used by the compute service as well);
  • Distinct plugins for Layer-2 and Layer-3 networking should be allowed; also, multiple plugins should be expected, at least for Layer-3 networking; for instance, one plugin could provide IP configuration through DHCP, whereas another could use agent-based configuration;
  • Layer-3 networking, although part of NaaS, is not mandatory. In its simplest form, NaaS can provide Layer-2 connectivity only;
  • Both Layer-2 and Layer-3 plugins can be attached to NaaS; however, if no Layer-3 plugin is provided, NaaS should raise a NotImplemented error for every L3 API request.
  • NaaS API serves requests from both customers and the compute service. In particular, the responsibilities of the compute service w.r.t. NaaS are the following:
    • Plugging/Unplugging virtual interfaces in virtual networks managed by NaaS;
    • Attaching/detaching virtual interfaces to/from IP subnets configured in NaaS.
  • The "Plugin dispatcher" component is in charge of dispatching API requests to the appropriate plugin (which is not part of NaaS); a minimal sketch of this dispatching logic follows this list;
  • For Layer-2 networking, the plugin enforces network/port configuration on hypervisors using proprietary mechanisms;
  • Similarly, Layer-3 networking plugins enforce IP configuration and routing using proprietary mechanisms;
  • Although the diagram assumes a plugin has a 'manager' component on the NaaS node and an 'agent' component, this is not a requirement, as NaaS should be completely agnostic w.r.t. plugin implementations; the 'plugin network agent' is not mandatory. The design of the plugins providing the implementation of the Core NaaS service is outside the scope of this blueprint;
  • NaaS stores information about networks and IP subnets in its own DB. The NaaS data model is discussed in more detail in the rest of this section.
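
The sketch below illustrates the dispatching logic just described; the operation names and the clean two-plugin split are assumptions. Note how, as required above, Layer-3 API requests fail with NotImplemented when no Layer-3 plugin is configured.

    class PluginDispatcher:
        """Hypothetical dispatcher routing API requests to L2/L3 plugins."""

        L3_OPERATIONS = {"create_subnet", "attach_vif_to_subnet", "create_route"}

        def __init__(self, l2_plugin, l3_plugin=None):
            self.l2_plugin = l2_plugin
            self.l3_plugin = l3_plugin      # optional: NaaS may be L2-only

        def dispatch(self, operation, **kwargs):
            if operation in self.L3_OPERATIONS:
                if self.l3_plugin is None:
                    raise NotImplementedError("no Layer-3 plugin configured")
                return getattr(self.l3_plugin, operation)(**kwargs)
            return getattr(self.l2_plugin, operation)(**kwargs)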

Data Model for NaaS

Currently, nova-network uses the nova database, storing information in the networks table. It also uses a few other tables, such as fixed_ips and floating_ips; each project is associated with a network. In order to achieve complete separation between NaaS and the other nova services, NaaS should have its own database. In the nova DB, instances will still be associated with networks; however, the network identifier will be the unique identifier of a network created by NaaS rather than the primary key of a row in the networks table. Other elements, such as the IP addresses associated with instances, could still be cached in Nova, but Nova would no longer be the system in which network information is recorded.

Moreover, it might be worth thinking about a 1:N association between networks and projects, which is already required by the NovaSpec:nova-multi-nic blueprint, or even an N:M association if we want to support networks shared among different projects.

The following diagram reports a high-level view of the NaaS data model. Entities related to Layer-2 networking are reported in green, whereas Layer-3 entities are reported in blue.


The Network entity defines Layer-2 networks; its most important attributes are a unique identifier (which could be a URI or a UUID), the owner, and the name of the network. Details (such as the VLAN ID associated with the network) pertain to the specific implementation provided by the plugin and should therefore not be in the NaaS DB. Plugins (or the systems they connect to) will have their own database. A Network can be associated with several logical ports. Apart from the port number, the Port entity can define the port's administrative status and cache statistical information about traffic flowing through the port itself. Ports are associated in a 1:1 relationship with VIFs. The VIF table tracks attachments of virtual network interfaces to ports. Although VIFs are not created by NaaS, it is important to know which VIFs are attached where.

As regards L3 entities, IP_Subnet is the most important one; its attributes could be the subnet CIDR (or network address/netmask) and the default gateway, which should be optional. The IP configuration schemes used for the subnet should also be part of this entity, assuming that NaaS can use several schemes at the same time. Each entry in the IP_Subnet entity can be associated with one or more IP addresses. These are the addresses currently assigned on a given network (either via a DHCP lease or a static mechanism), and each is associated with a VIF. While an IP address can obviously be associated with only a single VIF, a VIF can be associated with multiple IP addresses. Finally, the IP_Routes entity links subnets for which an IP route is configured. Attributes of the IP route, such as cost and weight, are plugin-specific and should therefore not be part of this entity, which could probably be reduced to a self-association on the IP_Subnet table.
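
As a rough transliteration, the entities described above could map onto structures like the following; the field names are assumptions rather than a final schema.

    from dataclasses import dataclass, field
    from typing import List, Optional


    @dataclass
    class Port:                             # Layer-2
        number: int
        admin_status: str = "up"
        vif_id: Optional[str] = None        # 1:1 attachment to a VIF

    @dataclass
    class Network:                          # Layer-2
        uuid: str                           # could equally be a URI
        owner: str
        name: str
        ports: List[Port] = field(default_factory=list)

    @dataclass
    class IPSubnet:                         # Layer-3
        uuid: str
        network_uuid: str
        cidr: str
        gateway: Optional[str] = None       # the gateway is optional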


Draft document: attachment:NaaS_API_spec-draft-v0.1.docx

Possible IP Configuration Strategies

Each VIF will need an IP address (IPv4 or IPv6); these are given to the VM by one of the schemes described here, which can be implemented by different plugins:

Scheme 1: Agent: Use an agent inside the VM to set the IP address details. The agent will receive the configuration through a virtualization-specific scheme (XenStore, VMCI, etc) or through a more generic scheme such as a virtual CD-ROM. This scheme of course requires installation of the agent in the VM image.

Scheme 2: DHCP: Configure the VM to use DHCP (usually the default anyway) and configure a DHCP server before booting the VM. This may be a private DHCP server, visible only to that VM, or it could be a server that is shared more widely.

Scheme 3: Filesystem modification: Before booting the VM, set the IP configuration by directly modifying its filesystem.
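
The three schemes could plausibly share a single plugin interface, as in this hypothetical sketch; the class and method names are assumptions.

    import abc


    class IPConfigScheme(abc.ABC):
        """Hypothetical contract shared by all IP configuration schemes."""

        @abc.abstractmethod
        def configure(self, vif_id, address, netmask, gateway):
            """Deliver the IP configuration to the VM behind `vif_id`."""


    class DHCPScheme(IPConfigScheme):       # Scheme 2
        def configure(self, vif_id, address, netmask, gateway):
            # A real plugin would create a static lease on a DHCP server
            # reachable from the VIF before the VM boots.
            print("reserving %s for %s via DHCP" % (address, vif_id))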


Multiple VIFs per VM. Not in OpenStack as of Cactus, but expected to be added to Nova through NovaSpec:multi-nic and NovaSpec:multinic-libvirt for Diablo. This is required for all supported virtualization technologies (currently KVM/libvirt, XenAPI, Hyper-V, ESX).

Development Resources

No commitments have been made yet, but development resources have been offered by Citrix, Grid Dynamics, NTT, Midokura, and Rackspace.

We will sort out how to share the development burden when this specification is nearer completion.

Work in Progress

The following blueprints concerning Network Services for Openstack have been registered:

  • Network Service POC, registered by Hisaharu Ishii from NTT-PF Lab. There is also some POC code being worked on at lp:~ntt-pf-lab/nova/network-service
  • NovaSpec:netcontainers, registered by Ram Durairaj from Cisco
  • NovaSpec:naas-project, registered by Rick Clark, which can be regarded as an attempt to merge all these blueprints in a single specification for NaaS.


  • Erik Carlin is working on a draft spec for the OpenStack Networking API.
  • As already mentioned, work on supporting multiple virtual network cards per instance is already in progress. NovaSpec:nova-multi-nic
  • Ilya Alekseyev has registered the NovaSpec:distros-net-injection blueprint in order to support file-system-based IP configuration injection for a number of Linux distros (nova currently supports Debian-based distros only). Christian Berendt has also registered a similar blueprint, NovaSpec:injection.
  • Dan Wendlandt has registered NovaSpec:openvswitch-network-plugin for a NaaS plugin based on Open vSwitch.


Erik Carlin has created an Etherpad for general NaaS discussion. Please add your comments here

Other discussion resources:

Etherpad from discussion session at Bexar design summit: http://etherpad.openstack.org/i5aSxrDeUU

Etherpad from alternative discussion session at Bexar design summit: http://etherpad.openstack.org/6tvrm3aEBt

Slide deck from discussion session at Bexar design summit: http://www.slideshare.net/danwent/bexar-network-blueprint