NetworkService
Revision as of 13:44, 12 April 2011


This blueprint is being redrafted. Ewan Mellor will be happy to hear from you if you wish to contribute. Nothing here is committed.

There is a Discussion section at the end of this blueprint. Please feel free to put comments there.


<<TableOfContents()>>

Glossary

NaaS: Network as a Service

Openstack-NaaS: The customer-facing service proposed by this blueprint. This distinguishes it from the existing nova-network.

Higher Layer services: L4/L7 network services which might be enabled for networks created by NaaS.

The OpenStack NaaS API: The customer-facing API exposed by openstack-NaaS.

VIF: Virtual InterFace. A VM's network interface. Also known as a vNIC.

Summary

The goal of this blueprint is to add a first-class, customer-facing service for the management of network infrastructure within an OpenStack cloud. This will allow service providers to offer "Networking as a Service" (NaaS) to their customers.

This blueprint discusses goals, use cases, requirements and design ideas for the features and capabilities to enable in openstack-NaaS so that it can create and manage networks, intended as collections of virtual ports with shared connectivity, which provide VM instances with Layer-2 and possibly Layer-3 connectivity.

Higher-layer services, such as Firewall, NAT, VPN, and Load Balancing, will instead be provided by distinct services communicating with NaaS through exposed APIs. L4/L7 services are discussed at this wiki page.

Rationale

The main aim of NaaS is to provide OpenStack users with a service for providing Layer-2 networking to their VM instances; a network created with NaaS can indeed be regarded as a virtual network switch which potentially spans all the compute nodes in the cloud. Apart from providing Layer-2 services, NaaS also aims at providing Layer-3 networking, intended as IP configuration management and IP routing configuration.

The NaaS APIs should be decoupled from the implementation of the network service, which should be provided through plugins. This implies that NaaS does not mandate any specific model for created networks, either at Layer-2 (e.g. VLANs, IP tunnels) or Layer-3 (e.g. file-system based injection, DHCP).

Goals

Goal 1: Allow customers and CSPs to create, delete, and bridge networks. Networks can either be private, i.e.: available only to a specific customer, or shared. Networks shared only among a specific group of customers can also be considered.

Goal 2: Allow customers to manage virtual ports for their networks, and attach instances or other network appliances (physical or virtual) available in the cloud to them.

Goal 3: Allow customers to extend their networks from the cloud to a remote site, by attaching a bridging device within the cloud to their networks; the bridging device would then bridge to the appropriate remote site.

Goal 4: Allow customers to manage IP configuration for their networks. Both IPv4 and IPv6 should be supported. Zero, one, or more IP subnets can be associated with a network.

Goal 5: Allow customers to define IP routes among subnets. Routes can be defined either across subnets in the same virtual network, or across IP subnets in distinct virtual networks owned by the same tenant.

Goal 6: Allow customers and CSPs to monitor networks by making available statistical information such as the total number of bytes transmitted and received per network, port or VIF. IP-based statistics should also be available.

Goal 7: Allow customers to configure network policies for networks, ports, and devices attached to them. These policies can include, for instance, port security policies, access control lists, or QoS policies (which are typically available on physical network switches). Since a 'minimum set' of policies supported by each possible plugin can hardly be identified, network policies should be assumed to be plugin-specific and always be configured through API extension mechanisms.

Goal 8: Allow CSPs to register and configure the plugins providing the actual implementation of the network service. CSPs should be able to select and plug in third-party technologies as appropriate. This may be for extended features, improved performance, or reduced complexity or cost.

Use cases

Use case 1: Create private network and attach instance to it

Related topology: Each tenant has an isolated network. Tenants are free to do whatever they want on their network, because it is completely isolated from those of other tenants. This is the NASA Nebula model as of today, implemented using one VLAN per tenant. Note that although this same model could equally be implemented with different technologies (e.g. GRE tunnels instead of VLANs for isolation), this would not change the nature of the model itself.

  1. Customer uses the NaaS API to create a Network;
  2. On success, the NaaS API returns a unique identifier for the newly created network;
  3. Customer uses the NaaS API to configure a logical port on the network;
  4. Customer invokes the Cloud Controller API to run an instance, specifying the network and virtual port for it;
  5. The Cloud Controller API dispatches the request to the compute service;
  6. The compute service creates the VM and its VIFs. For each VIF, it asks NaaS to plug it into the port and network specified by the Customer in (4).
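The flow above can be sketched with a minimal in-memory stand-in for the NaaS API. Everything here is hypothetical (the blueprint has not yet defined method names or payloads), but it shows the division of labour between customer, cloud controller and compute service:

```python
import uuid

class NaasStub:
    """Hypothetical in-memory stand-in for the NaaS API (illustration only)."""

    def __init__(self):
        self.networks = {}  # network_id -> set of port ids
        self.plugged = {}   # (network_id, port_id) -> vif id

    def create_network(self):
        # Steps 1-2: create a network and return its unique identifier.
        network_id = str(uuid.uuid4())
        self.networks[network_id] = set()
        return network_id

    def create_port(self, network_id):
        # Step 3: configure a logical port on the network.
        port_id = str(uuid.uuid4())
        self.networks[network_id].add(port_id)
        return port_id

    def plug(self, network_id, port_id, vif_id):
        # Step 6: the compute service asks NaaS to plug a VIF into a port.
        assert port_id in self.networks[network_id]
        self.plugged[(network_id, port_id)] = vif_id

# Steps 1-3: the customer provisions a network and a port on it.
naas = NaasStub()
net = naas.create_network()
port = naas.create_port(net)

# Steps 4-6: the cloud controller dispatches to compute, which creates
# the VM's VIF and asks NaaS to plug it in.
naas.plug(net, port, vif_id="vif-0")
```

Note that only the plug call comes from the compute service; everything else is driven by the customer, which is the point of making NaaS a first-class, customer-facing service.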

Use case 2: Attach instance to default public network

Related topology: Similar to the 'Flat' mode currently supported by nova network. Instances from different customers are all deployed on the same virtual network. In this case, the NaaS service can provide a port isolation policy in order to ensure VM security.

  1. Customer uses the NaaS API to retrieve public networks;
  2. On success, the NaaS API returns a list of unique network identifiers; the Customer selects a network from this list;
  3. Customer uses the NaaS API to configure a logical port on the network;
  4. Customer invokes the Cloud Controller API to run an instance, specifying the network and virtual port for it;
  5. The Cloud Controller API dispatches the request to the compute service;
  6. The compute service creates the VM and its VIFs. For each VIF, it asks NaaS to plug it into the port and network specified by the Customer in (4).

The main difference between this use case and the previous one is that the Customer uses a pre-configured network instead of creating it. Another point that needs to be discussed is whether customers should be allowed to manage ports for public networks. Alternatively, the compute service can implicitly create a port when it contacts the NaaS API.
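The implicit-port alternative discussed above could look like the following sketch. The function and table names are invented for illustration; nothing here is part of a defined API:

```python
import uuid

# Hypothetical in-memory port table; a real NaaS service would hold this
# state server-side, behind its API.
ports = {}  # network_id -> list of port ids

def create_port(network_id):
    """Stand-in for the NaaS create-port call."""
    port_id = str(uuid.uuid4())
    ports.setdefault(network_id, []).append(port_id)
    return port_id

def run_instance(network_id, port_id=None):
    """Compute-side helper: if the customer did not pre-create a port on
    the public network, create one implicitly via the NaaS API before
    plugging the VIF (VIF plugging itself is elided here)."""
    if port_id is None:
        port_id = create_port(network_id)  # implicit port creation
    return port_id
```

Under this design the customer never touches ports on public networks; the trade-off is that the compute service acquires a small amount of networking knowledge.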

Use case 3: Register bridge for connecting cloud network to other site

Related topology: Customer on-premise data centre extending into the cloud, interconnecting networks in distinct clouds. Although the actual implementation can be provided in several ways, we are interested in the abstract model: a single connectivity domain spanning two or more networks in distinct administrative domains.

  1. Customer uses the NaaS API to register a bridge for its network;
  2. On success, the NaaS API returns the bridge identifier;
  3. Customer uses the NaaS API to provide the bridge configuration (e.g. remote endpoint, port, credentials);
  4. Customer uses the NaaS API to create a virtual port on the network for the bridge device;
  5. Customer uses the NaaS API to plug the bridge device into the network.
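As an illustration only, the bridge registration steps above might map onto calls like these. All names, configuration fields and example values are assumptions, not part of any agreed NaaS API:

```python
import uuid

class BridgeRegistry:
    """Hypothetical sketch of the bridge-registration flow."""

    def __init__(self):
        self.bridges = {}  # bridge_id -> {"network", "config", "port"}

    def register_bridge(self, network_id):
        # Steps 1-2: register a bridge for the network, return its identifier.
        bridge_id = str(uuid.uuid4())
        self.bridges[bridge_id] = {"network": network_id,
                                   "config": None, "port": None}
        return bridge_id

    def configure_bridge(self, bridge_id, endpoint, port, credentials):
        # Step 3: supply remote endpoint, port and credentials.
        self.bridges[bridge_id]["config"] = {"endpoint": endpoint,
                                             "port": port,
                                             "credentials": credentials}

    def plug_bridge(self, bridge_id):
        # Steps 4-5: create a virtual port for the bridge device and plug it in.
        port_id = str(uuid.uuid4())
        self.bridges[bridge_id]["port"] = port_id
        return port_id

# Example run of steps 1-5 with made-up values.
registry = BridgeRegistry()
bridge = registry.register_bridge("net-1")
registry.configure_bridge(bridge, "gw.customer.example", 4789, "shared-secret")
bridge_port = registry.plug_bridge(bridge)
```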

Use case 4: Retrieve statistics for a network

  1. Customer uses the NaaS API to retrieve a specific network;
  2. On success, the NaaS API returns the network identifier;
  3. Customer uses the NaaS API to retrieve statistics for the network. Filters can be used to specify which data should be retrieved (e.g. total bytes RX/TX), and for which components of the network they should be retrieved (whole network, specific port(s), specific VIF(s));
  4. NaaS invokes the implementation plugin to retrieve the required information;
  5. The NaaS API returns the statistics and the customer processes them.

NOTE: the actor for this use case can be either a customer or a higher-layer monitoring or billing service, e.g. a service which charges customers according to their network usage.
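The filtering described in step 3 can be sketched as follows. The counter names ("rx_bytes", "tx_bytes") and the filter parameters are illustrative assumptions, since the blueprint does not define a statistics schema:

```python
# Hypothetical per-port counters, as a plugin might report them.
counters = {
    ("net-1", "port-a"): {"rx_bytes": 1200, "tx_bytes": 800},
    ("net-1", "port-b"): {"rx_bytes": 300,  "tx_bytes": 50},
}

def get_stats(network_id, fields=None, ports=None):
    """Aggregate counters for a whole network, or only for specific
    ports/fields when filters are given (None means 'no filter')."""
    totals = {}
    for (net, port), stats in counters.items():
        if net != network_id:
            continue
        if ports is not None and port not in ports:
            continue
        for field, value in stats.items():
            if fields is None or field in fields:
                totals[field] = totals.get(field, 0) + value
    return totals

whole_net = get_stats("net-1")                   # whole network
one_port = get_stats("net-1", ports={"port-a"})  # a specific port only
```

The same aggregation shape would serve both actors: a customer inspecting its own network, or a billing service summing usage across tenants.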

Use case 5: Configure an IP subnet and attach instances to it

Use case XX: Configure network policies

  1. Customer uses the NaaS API to retrieve a specific network;
  2. On success, the NaaS API returns the network identifier;
  3. Customer uses the NaaS API to enforce a policy on the network; for instance, a policy can specify a maximum bit rate on a specific port, or block a specific protocol over the whole network;
  4. NaaS invokes the implementation plugin to enforce the policy;
  5. The NaaS API informs the user about the result of the operation.
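A minimal sketch of the policy flow, assuming an illustrative policy vocabulary (a maximum bit rate per port, a blocked protocol per network); as noted in Goal 7, real policies would be plugin-specific:

```python
# Hypothetical policy store; a real plugin would push each policy to the
# hypervisors and only then report success.
policies = {}  # network_id -> list of policy dicts

def enforce_policy(network_id, policy):
    """Steps 3-5: accept a recognised policy and report the result."""
    if "max_kbps" not in policy and "block_protocol" not in policy:
        # Step 5 also covers failure: the user is told the policy was rejected.
        return {"result": "error", "reason": "unsupported policy"}
    policies.setdefault(network_id, []).append(policy)
    return {"result": "ok"}

r1 = enforce_policy("net-1", {"port": "port-a", "max_kbps": 1024})
r2 = enforce_policy("net-1", {"block_protocol": "udp"})
```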

Plug-in use cases

In this section, we give examples of the technologies that a service provider may wish to plug into NaaS to provide L2/L3 networking. Some of these technologies are commercial in nature. Where this is the case, this blueprint will include bindings for an open-source alternative, so that the NaaS API is completely supported by open-source software. It is a goal of this blueprint that a service provider may select different implementation technologies.

Categories:

  • Distributed virtual switches
  • Inter-datacenter tunnels
  • IP address management

Requirements

R1. Add a first-class, customer-facing service for management and configuration of network infrastructure via a RESTful API. This service shall be known as openstack-NaaS, and the API that it exposes shall be known as the OpenStack NaaS API.

R2. Modify nova-compute to obtain network details by calling openstack-NaaS through its public API, rather than calling nova-network as it does today.

... lots more coming here don't worry!

Design Ideas

At this stage any attempt to provide a design for NaaS would be quite unrealistic. This section should therefore be regarded as a first attempt to define general design guidelines for NaaS.

At a very high-level view the Network Service and its interactions with other entities (compute service, database, plugins) can be summarized as follows:

File:NetworkService$Naas-Core-Overview.png

  • The NaaS API serves requests from customers concerning networks and virtual ports. The NaaS API layer then dispatches each request to the plugin (which is not part of NaaS itself). The plugin then enforces network/port configuration on hypervisors using proprietary mechanisms;
  • The compute service should not have any knowledge of the hypervisor's networking stack, and uses the NaaS API for plugging VIFs into networks (this is slightly different from the current nova design, where part of the network manager's code - namely setup_compute_network - is used by the compute service as well);
  • Although the diagram supposes a plugin has a 'manager' component on the NaaS node and an 'agent' component, this might not always be true, as NaaS should be completely agnostic w.r.t. plugin implementations; the 'core plugin network agent' is not mandatory. The design of the plugin providing the implementation of the NaaS service is outside the scope of this blueprint;
  • Just like today's nova-network service, NaaS uses the nova DB. However, the database table for describing networks will be much simpler, as it will not contain information about higher-level services, such as VPN. Customer-network association can still be performed on a per-project basis, but it might be worth thinking about a 1:N association between networks and projects. IP information for networks is a slightly different matter, and will be discussed later in this blueprint.
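The dispatch from the NaaS API layer to a registered plugin, described in the first bullet above, could be sketched as a small interface plus a toy VLAN-based implementation. Both the interface and the VLAN scheme are entirely hypothetical; the point is only that the API layer holds a plugin reference and delegates, without knowing the underlying technology:

```python
from abc import ABC, abstractmethod

class NaasPlugin(ABC):
    """Hypothetical plugin interface: the API layer stays technology-agnostic
    and delegates the actual work to whatever plugin the CSP registered."""

    @abstractmethod
    def create_network(self, tenant_id): ...

    @abstractmethod
    def plug_vif(self, network_id, port_id, vif_id): ...

class VlanPlugin(NaasPlugin):
    """Toy implementation: one VLAN tag per created network (the NASA
    Nebula-style model; a GRE-based plugin would expose the same interface)."""

    def __init__(self):
        self.next_vlan = 100
        self.networks = {}  # vlan tag -> tenant id

    def create_network(self, tenant_id):
        vlan = self.next_vlan
        self.next_vlan += 1
        self.networks[vlan] = tenant_id
        return vlan

    def plug_vif(self, network_id, port_id, vif_id):
        # A real plugin would program the hypervisor's vswitch here.
        return f"vif {vif_id} tagged with VLAN {network_id}"

# The API layer simply dispatches to the configured plugin.
plugin: NaasPlugin = VlanPlugin()
net = plugin.create_network("tenant-a")
msg = plugin.plug_vif(net, "port-0", "vif-0")
```

Swapping VlanPlugin for a tunnel-based implementation would change nothing above the interface, which is the decoupling the Rationale section asks for.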

If IP configuration is deemed part of the core API, the following goal should then be added to the above list:

Pre-requisites

Multiple VIFs per VM. Not in OpenStack in Cactus, but expected to be added to Nova through NovaSpec:multi-nic and NovaSpec:multinic-libvirt for Diablo. This is required for all supported virtualization technologies (currently KVM/libvirt, XenAPI, Hyper-V, ESX).

Development Resources

No commitments have been made yet, but development resources have been offered by Citrix, Grid Dynamics, NTT, Midokura, and Rackspace.

We will sort out how to share the development burden when this specification is nearer completion.

Work in Progress

The following blueprints concerning Network Services for Openstack have been registered:

Also:

  • Erik Carlin is working on a draft spec for the OpenStack Networking API.
  • As already mentioned, work on supporting multiple virtual network cards per instance is already in progress. NovaSpec:nova-multi-nic
  • Ilya Alekseyev has registered the NovaSpec:distros-net-injection blueprint in order to support file-system-based IP configuration injection for a number of Linux distros (nova currently supports Debian-based distros only). Christian Berendt has also registered a similar blueprint, NovaSpec:injection
  • Dan Wendlandt has registered NovaSpec:openvswitch-network-plugin for a NaaS plugin based on Open vSwitch

Discussion

Etherpad from discussion session at Bexar design summit: http://etherpad.openstack.org/i5aSxrDeUU

Etherpad from alternative discussion session at Bexar design summit: http://etherpad.openstack.org/6tvrm3aEBt

Slide deck from discussion session at Bexar design summit: http://www.slideshare.net/danwent/bexar-network-blueprint