NetworkService

'''This blueprint is now superseded.''' Please refer to [http://wiki.openstack.org/Network this wiki page] for the latest updates on OpenStack's network service.

* '''Launchpad Nova blueprint''': NovaSpec:network-service
* '''Created''': 31 January 2011
* '''Last updated''': 12 April 2011
* '''Drafter''': [https://launchpad.net/~ewanmellor Ewan Mellor]
* '''Contributors''': [https://launchpad.net/~ilyaalekseyev Ilya Alekseyev], [https://launchpad.net/~patricka Patrick Ancillotti], [https://launchpad.net/~abrindeyev Andrey Brindeyev], [https://launchpad.net/~erik-carlin Erik Carlin], [https://launchpad.net/~dendrobates Rick Clark], [https://launchpad.net/~dmd17 Dan Mihai Dumitriu], [https://launchpad.net/~dramesh Ram Durairaj], [https://launchpad.net/~uri-elzur Uri Elzur], [https://launchpad.net/~soren Søren Hansen], [https://launchpad.net/~iida-koji Koji Iida], [https://launchpad.net/~ishii-hisaharu Hisaharu Ishii], [https://launchpad.net/~itoumsn Masanori Itoh], [https://launchpad.net/~adjohn Adam Johnson], [https://launchpad.net/~youcef-laribi Youcef Laribi], [https://launchpad.net/~romain-lenglet Romain Lenglet], [https://launchpad.net/~bmcconne Brad McConnell], [https://launchpad.net/~reldan Eldar Nugaev], [https://launchpad.net/~salvatore-orlando Salvatore Orlando], [https://launchpad.net/~john-purrier John Purrier], [https://launchpad.net/~termie Andy Smith], [https://launchpad.net/~troy-toman Troy Toman], [https://launchpad.net/~dan-nicira Dan Wendlandt], [https://launchpad.net/~zhixue-wu Zhixue Wu]

Glossary

NaaS: Network as a Service

openstack-NaaS: The customer-facing service proposed by this blueprint. The name distinguishes it from the existing nova-network.

Higher Layer services: L4/L7 network services which might be enabled for networks created by NaaS.

The OpenStack NaaS API: The customer-facing API exposed by openstack-NaaS.

VIF: Virtual InterFace. A VM's network interface. Also known as a vNIC.

Summary

The goal of this blueprint is to add a first-class, customer-facing service for the management of network infrastructure within an OpenStack cloud. This will allow service providers to offer "Networking as a Service" (NaaS) to their customers.

This blueprint discusses goals, use cases, requirements, and design ideas for openstack-NaaS, so that it can create and manage networks, understood as collections of virtual ports with shared connectivity, which provide VM instances with Layer-2 and possibly Layer-3 connectivity.

Higher-layer services, such as Firewall, NAT, VPN, and Load Balancing, will instead be provided by distinct services communicating with NaaS through exposed APIs. L4/L7 services are discussed at http://wiki.openstack.org/HigherLayerNetworkServices.

Rationale

The main aim of NaaS is to provide OpenStack users with a service that delivers Layer-2 networking to their VM instances; a network created with NaaS can be regarded as a virtual network switch, together with the related network devices attached to it, which potentially spans all the compute nodes in the cloud. Apart from Layer-2 services, NaaS also aims to provide Layer-3 networking, in the sense of IP configuration management and IP routing configuration.

NaaS APIs should be decoupled from the implementation of the network service, which should be provided through plugins. This implies that NaaS does not mandate any specific model for created networks, either at Layer-2 (e.g.: VLANs, IP tunnels) or at Layer-3 (e.g.: file-system based injection, DHCP).

Goals

Goal 1: Allow customers and CSPs to create, delete, and bridge networks. Networks can either be private, i.e.: available only to a specific customer, or shared. Networks shared only among a specific group of customers can also be considered.

Goal 2: Allow customers and CSPs to manage virtual ports for their networks, and attach instances or other network appliances (physical or virtual) available in the cloud to them.

Goal 3: Allow customers and CSPs to extend their networks from the cloud to a remote site, by attaching a bridging device within the cloud to their networks; the bridging device would then bridge to the appropriate remote site.

Goal 4: Allow customers and CSPs to manage IP configuration for their networks. Both IPv4 and IPv6 should be supported. Zero, one, or more IP subnets can be associated with a network.

Goal 5: Allow customers and CSPs to define IP routes among subnets. Routes can be defined either across subnets in the same virtual network, or across IP subnets in distinct virtual networks owned by the same tenant.

Goal 6: Allow customers and CSPs to monitor networks by making available statistics such as the total number of bytes transmitted and received per network, port, or VIF. IP-based statistics should also be available.

Goal 7: Allow customers and CSPs to securely configure network policies for networks, ports, and devices attached to them. These policies can include, for instance, port security policies, access control lists, high availability, or QoS policies (which are typically available on physical network switches). Only a basic set of configuration options will be supported; the remaining network policies should be assumed to be plugin-specific and configured through API extension mechanisms.

Goal 8: Allow CSPs to register and configure the plugins providing the actual implementation of the network service. CSPs should be able to select and plug in third-party technologies as appropriate. This may be for extended features, improved performance, or reduced complexity or cost.

Use cases

Use case 1: Create private network and attach instance to it

Related topology: Each tenant has an isolated network. Tenants are free to do whatever they want on their network, because it is completely isolated from those of other tenants. This is the NASA Nebula model as of today, implemented using one VLAN per tenant. Note that although this same model could equally be implemented with different technologies (e.g. GRE tunnels instead of VLANs for isolation), this would not change the nature of the model itself.

  1. Customer uses the NaaS API to create a network;
  2. On success, the NaaS API returns a unique identifier for the newly created network;
  3. Customer uses the NaaS API to configure a logical port on the network;
  4. Customer invokes the Cloud Controller API to run an instance, specifying the network and virtual port for it;
  5. The Cloud Controller API dispatches the request to the compute service;
  6. The compute service creates the VM and its VIFs. For each VIF, it asks NaaS to plug it into the port and network specified by the customer in (4). A client-side sketch of this flow follows.
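
As a concrete illustration, here is a minimal client-side sketch of steps (1)-(4), written against a hypothetical RESTful rendering of the NaaS API. Every URL, resource name, and field below is an assumption for illustration only; the actual API is still only a draft (see the "NaaS APIs" section).

    import requests

    NAAS = "http://naas.example.com/v0.1"  # hypothetical NaaS endpoint
    NOVA = "http://nova.example.com/v1.1"  # hypothetical Cloud Controller endpoint
    HEADERS = {"X-Auth-Token": "<token>"}  # authentication is out of scope here

    # (1)-(2) Create a private network; NaaS returns its unique identifier.
    net = requests.post(f"{NAAS}/networks",
                        json={"network": {"name": "private-net"}},
                        headers=HEADERS).json()["network"]

    # (3) Configure a logical port on the newly created network.
    port = requests.post(f"{NAAS}/networks/{net['id']}/ports",
                         json={"port": {"state": "ACTIVE"}},
                         headers=HEADERS).json()["port"]

    # (4) Run an instance attached to that network/port pair; steps (5)-(6)
    # then happen server-side: nova-compute creates the VIF and asks NaaS
    # to plug it into the specified port and network.
    requests.post(f"{NOVA}/servers",
                  json={"server": {"name": "vm-1", "imageRef": "image-1",
                                   "flavorRef": "1",
                                   "networks": [{"id": net["id"],
                                                 "port": port["id"]}]}},
                  headers=HEADERS)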

Use case 2: Attach instance to default public network

Related topology: Similar to the 'Flat' mode currently supported by nova-network. Instances from different customers are all deployed on the same virtual network. In this case, the Core NaaS service can provide a port isolation policy in order to ensure VM security.

  1. Customer uses the NaaS API to retrieve public networks;
  2. On success, the NaaS API returns a list of unique network identifiers; the customer selects a network from this list;
  3. Customer uses the NaaS API to configure a logical port on the network;
  4. Customer invokes the Cloud Controller API to run an instance, specifying the network and virtual port for it;
  5. The Cloud Controller API dispatches the request to the compute service;
  6. The Nova Compute service creates the VM and its VIFs. For each VIF, it asks NaaS to plug it into the port and network specified by the customer in (4).

The main difference between this use case and the previous one is that the Customer uses a pre-configured network instead of creating it. Another point that needs to be discussed is whether customers should be allowed to manage ports for public networks. Alternatively, the compute service can implicitly create a port when it contacts the NaaS API.

Use case 3: Register bridge for connecting cloud network to other site

Related topology: A customer's on-premise data centre extending into the cloud, or networks in distinct clouds interconnected with one another. Although the actual implementation can be provided in several ways, we are interested in the abstract model: a single connectivity domain spanning two or more networks in distinct administrative domains.

  1. Customer uses the NaaS API to register a bridge for its network;
  2. On success, the NaaS API returns the bridge identifier;
  3. Customer uses the NaaS API to provide the bridge configuration (e.g.: remote endpoint, port, credentials);
  4. Customer uses the NaaS API to create a virtual port on the network for the bridge device;
  5. Customer uses the NaaS API to plug the bridge device into the network. A sketch of this flow follows.
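
Continuing the hypothetical client from use case 1 (same NAAS, HEADERS, and net), the bridge registration flow might look as follows; again, all endpoints and field names are illustrative assumptions, not part of any committed API.

    # (1)-(2) Register a bridge for the customer's network; NaaS returns its id.
    bridge = requests.post(f"{NAAS}/networks/{net['id']}/bridges",
                           json={"bridge": {"name": "site-a-bridge"}},
                           headers=HEADERS).json()["bridge"]

    # (3) Provide the bridge configuration: remote endpoint, port, credentials.
    requests.put(f"{NAAS}/bridges/{bridge['id']}",
                 json={"bridge": {"remote_endpoint": "203.0.113.10",
                                  "remote_port": 4789,
                                  "credentials": "<shared-secret>"}},
                 headers=HEADERS)

    # (4)-(5) Create a virtual port for the bridge device and plug it in.
    bridge_port = requests.post(f"{NAAS}/networks/{net['id']}/ports",
                                json={"port": {}},
                                headers=HEADERS).json()["port"]
    requests.put(f"{NAAS}/networks/{net['id']}/ports/{bridge_port['id']}/attachment",
                 json={"attachment": {"id": bridge["id"]}}, headers=HEADERS)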

Use case 4: Retrieve statistics for a network

  1. Customer uses the NaaS API to retrieve a specific network;
  2. On success, the NaaS API returns the network identifier;
  3. Customer uses the NaaS API to retrieve statistics for the network. Filters can be used to specify which data should be retrieved (e.g.: total bytes RX/TX), and for which components of the network they should be retrieved (whole network, specific port(s), specific VIF(s));
  4. NaaS invokes the implementation plugin to retrieve the required information;
  5. The NaaS API returns the statistics and the customer processes them.

NOTE: the actor for this use case can be either a customer or a higher-layer monitoring or billing service, e.g.: a service which charges customers according to their network usage.
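
A sketch of such a statistics query, continuing the hypothetical client used above; the field and filter names are invented for illustration.

    # (1)-(3) Fetch per-port RX/TX byte counters for one network.
    stats = requests.get(f"{NAAS}/networks/{net['id']}/statistics",
                         params={"fields": "bytes_rx,bytes_tx",
                                 "port": port["id"]},
                         headers=HEADERS).json()

    # (4)-(5) happen server-side; the customer then processes the result,
    # e.g. {"bytes_rx": 123456, "bytes_tx": 654321}.
    print(stats)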

Use case 5: Configure network policies

  1. Customer uses the NaaS API to retrieve a specific network;
  2. On success, the NaaS API returns the network identifier;
  3. Customer uses the NaaS API to enforce a policy on the network; for instance, a policy can specify a maximum bit rate on a specific port or block a specific protocol over the whole network;
  4. NaaS invokes the implementation plugin to enforce the policy;
  5. The NaaS API informs the user about the result of the operation, as sketched below.
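
A sketch of a policy request in the same hypothetical client. The policy schema here is invented; as noted under Goal 7, most policies are expected to be plugin-specific and delivered through API extensions.

    # (3) Enforce two example policies: cap one port at 10 Mbit/s and
    # block a protocol across the whole network.
    result = requests.post(f"{NAAS}/networks/{net['id']}/policies",
                           json={"policies": [
                               {"type": "max-bit-rate", "port": port["id"],
                                "bits_per_second": 10_000_000},  # 10 Mbit/s
                               {"type": "block-protocol", "protocol": "udp"}]},
                           headers=HEADERS)

    # (4)-(5) The dispatcher hands the policy to the plugin, and the API
    # reports the outcome; here we surface failures as an exception.
    result.raise_for_status()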

Use case 6: Configure an IP subnet and attach instances to it

  1. Customer uses the NaaS API to retrieve a specific network;
  2. On success, the NaaS API returns the network identifier;
  3. Customer uses the NaaS API to create an IP subnet, specifying a CIDR (or, alternatively, a network address and netmask) and the gateway;
  4. On success, the NaaS API returns a unique identifier for the newly created subnet;
  5. Customer invokes the NaaS API to attach a VIF, already attached to one of his L2 networks, to the newly created subnet;
  6. NaaS verifies the sanity of the input data (e.g.: VIF attached to the appropriate network);
  7. NaaS invokes the IP configuration management plugin to provide the supplied VIF with the appropriate configuration. A sketch follows.
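
A sketch of the subnet flow in the same hypothetical client, with invented resource and field names; vif_id stands for a VIF already plugged into the network.

    vif_id = "vif-1"  # a VIF already attached to one of the customer's L2 networks

    # (3)-(4) Create an IPv4 subnet on the network by CIDR, with a gateway.
    subnet = requests.post(f"{NAAS}/networks/{net['id']}/subnets",
                           json={"subnet": {"cidr": "10.0.0.0/24",
                                            "gateway": "10.0.0.1"}},
                           headers=HEADERS).json()["subnet"]

    # (5)-(7) Attach the VIF to the new subnet; NaaS validates the request
    # and invokes the IP configuration plugin (DHCP, agent, file injection, ...).
    requests.put(f"{NAAS}/subnets/{subnet['id']}/vifs/{vif_id}",
                 headers=HEADERS)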

Use case 7: Configure a route between two IP subnets

For this use case, we assume that the customer already has the identifiers (URIs, UUIDs) of the two subnets that should be routed.

  1. Customer invokes the NaaS API to create a route between the two subnets;
  2. The NaaS API creates the appropriate routes using the CIDRs and gateway addresses of the two subnets;
  3. NaaS returns a unique identifier for the newly created route.

The way in which the IP route is created (e.g.: manipulating route tables on instances, manipulating routing tables in hypervisors, configuring a router virtual appliance, etc.) is plugin-specific. Routing attributes, such as distance, cost, and weight, should also be part of extension APIs, as they are not required for the basic functionality.
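
A sketch of the route call in the same hypothetical client, assuming the two subnet identifiers are already known:

    subnet_a_id, subnet_b_id = "subnet-a", "subnet-b"  # already known to the customer

    # (1)-(3) Create a route between the two subnets; NaaS derives the actual
    # forwarding setup from their CIDRs and gateways in a plugin-specific way.
    route = requests.post(f"{NAAS}/routes",
                          json={"route": {"source": subnet_a_id,
                                          "destination": subnet_b_id}},
                          headers=HEADERS).json()["route"]
    print(route["id"])  # unique identifier for the newly created route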

Plug-in use cases

In this section, we give examples of the technologies that a service provider may wish to plug into NaaS to provide L2/L3 networking. Some of these technologies are commercial in nature. Where this is the case, this blueprint will include bindings for an open-source alternative, so that the NaaS API is completely supported by open-source software. It is a goal of this blueprint that a service provider may select different implementation technologies.

Category:

  • Distributed virtual switches
  • Inter-datacenter tunnels
  • IP address management

Requirements

R1. Add a first-class, customer-facing service for management and configuration of network infrastructure via a RESTful API. This service shall be known as openstack-NaaS, and the API that it exposes shall be known as the OpenStack NaaS API.

R2. Modify nova-compute to obtain network details by calling openstack-NaaS through its public API, rather than calling nova-network as it does today.

... lots more coming here don't worry!

Design Ideas

At this stage, any attempt to provide a detailed design for NaaS would be quite unrealistic. This section should therefore be regarded as a first attempt to define general design guidelines.

At a very high level, the Network Service and its interactions with other entities (compute service, database, plugins) can be summarized as follows:

[Image: Naas-Core-Overview.png]

  • The Nova Compute service should not have any knowledge of the hypervisor's networking stack, and uses the NaaS API for plugging VIFs into networks (this is slightly different from the current nova design, where part of the network manager's code - namely setup_compute_network - is used by the compute service as well);
  • Distinct plugins for Layer-2 networking and Layer-3 networking should be allowed; multiple plugins should also be expected, at least for Layer-3 networking; for instance, one plugin could provide IP configuration through DHCP, whereas another plugin could use agent-based configuration;
  • Layer-3 networking, although part of NaaS, is not mandatory. In its simplest form, NaaS can provide Layer-2 connectivity only;
  • Both Layer-2 and Layer-3 plugins can be attached to NaaS; however, if no Layer-3 plugin is provided, NaaS should raise a NotImplemented error for every L3 API request;
  • The NaaS API serves requests from both customers and the compute service. In particular, the responsibilities of the compute service w.r.t. NaaS are the following:
    • Plugging/unplugging virtual interfaces into/from virtual networks managed by NaaS;
    • Attaching/detaching virtual interfaces to/from IP subnets configured in NaaS.
  • The "plugin dispatcher" component is in charge of dispatching API requests to the appropriate plugin (which is not part of NaaS); a sketch of this dispatch logic follows the list;
  • For Layer-2 networking, the plugin enforces network/port configuration on hypervisors using proprietary mechanisms;
  • Similarly, Layer-3 networking plugins enforce IP configuration and routing using proprietary mechanisms;
  • Although the diagram supposes a plugin has a 'manager' component on the NaaS node and an 'agent' component, this is not a requirement, as NaaS should be completely agnostic w.r.t. plugin implementations; the 'plugin network agent' is not mandatory. The design of the plugin providing the implementation of the Core NaaS service is outside the scope of this blueprint;
  • NaaS stores information about networks and IP subnets in its own DB. The NaaS data model is discussed in more detail in the rest of this section.
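
Below is a minimal sketch of that dispatch logic, under our own naming assumptions; none of these classes or method names are committed anywhere. It shows the L2/L3 plugin split and the NotImplemented behaviour when no Layer-3 plugin is configured.

    class L2Plugin:
        """Interface a Layer-2 plugin would implement (illustrative only)."""
        def create_network(self, tenant_id, name):
            raise NotImplementedError
        def plug_vif(self, network_id, port_id, vif_id):
            raise NotImplementedError

    class L3Plugin:
        """Interface a Layer-3 plugin would implement (illustrative only)."""
        def create_subnet(self, network_id, cidr, gateway=None):
            raise NotImplementedError

    class PluginDispatcher:
        """Routes NaaS API requests to the configured plugins."""
        def __init__(self, l2_plugin, l3_plugin=None):
            self._l2 = l2_plugin
            self._l3 = l3_plugin  # optional: NaaS may be L2-only

        def dispatch_l2(self, method, *args, **kwargs):
            return getattr(self._l2, method)(*args, **kwargs)

        def dispatch_l3(self, method, *args, **kwargs):
            if self._l3 is None:
                # Per the notes above: without a Layer-3 plugin, every
                # L3 API request must fail with a NotImplemented error.
                raise NotImplementedError("no Layer-3 plugin configured")
            return getattr(self._l3, method)(*args, **kwargs)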

Data Model for NaaS

Currently nova-network uses the nova database, storing information in the networks table. It also uses a few other tables, such as fixed_ips and floating_ips; each project is associated with a network. In order to achieve complete separation between NaaS and the other nova services, NaaS should have its own database. Tables in the nova DB will still reference networks; however, the network identifier will be the unique identifier of a network created by NaaS rather than the primary key of a row in the networks table. There could be other elements cached in Nova, such as the IPs associated with instances, but Nova would no longer be the system of record for network information.

Moreover, it might be worth thinking about a 1:N association between networks and projects, which is already required by the NovaSpec:nova-multi-nic blueprint, or even an N:M association if we want to support networks shared among different projects.

The following diagram shows a high-level view of the NaaS data model. Entities related to Layer-2 networking are shown in green, whereas Layer-3 entities are shown in blue.

[Image: NaaS-Data-Model.png]

The Network entity defines Layer-2 networks; its most important attributes are a unique identifier (which could be a URI or a UUID), the owner, and the name of the network. Details such as the VLAN ID associated with the network pertain to the specific implementation provided by the plugin and should therefore not be in the NaaS DB. Plugins (or the systems they connect to) will have their own database. A Network can be associated with several logical ports. Apart from the port number, the Port entity can define the port's administrative status and cache statistics about traffic flowing through the port itself. Ports are associated in a 1:1 relationship with VIFs. The VIF table tracks attachments of virtual network interfaces to ports. Although VIFs are not created by NaaS, it is important to know which VIFs are attached where.

As regards L3 entities, IP_Subnet is the most important one; its attributes could be the subnet CIDR (or network address/netmask) and the default gateway, which should be optional. The IP configuration schemes used for the subnet should also be part of this entity, assuming that NaaS can use several schemes at the same time. Each entry in the IP_Subnet entity can be associated with one or more IP addresses. These are the addresses currently assigned on a given network (either via a DHCP lease or a static mechanism), and each is associated with a VIF. While an IP address can obviously be associated with only a single VIF, a VIF can be associated with multiple IP addresses. Finally, the IP_Routes entity links subnets for which an IP route is configured. Attributes of the IP route, such as cost and weight, are plugin-specific and should therefore not be part of this entity, which could probably be reduced to a self-association on the IP_Subnet table.
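
A minimal sketch of this data model using SQLAlchemy declarative classes; column names and types are assumptions wherever the diagram leaves them open.

    from sqlalchemy import Column, ForeignKey, Integer, String
    from sqlalchemy.orm import declarative_base

    Base = declarative_base()

    class Network(Base):  # Layer-2: a virtual switch spanning compute nodes
        __tablename__ = "networks"
        id = Column(String(36), primary_key=True)  # URI or UUID
        owner = Column(String(255))
        name = Column(String(255))

    class VIF(Base):  # created elsewhere; NaaS only tracks attachments
        __tablename__ = "vifs"
        id = Column(String(36), primary_key=True)

    class Port(Base):  # Layer-2: logical port on a network, 1:1 with a VIF
        __tablename__ = "ports"
        id = Column(Integer, primary_key=True)
        network_id = Column(String(36), ForeignKey("networks.id"))
        admin_status = Column(String(16))
        vif_id = Column(String(36), ForeignKey("vifs.id"), unique=True)

    class IPSubnet(Base):  # Layer-3: zero or more subnets per network
        __tablename__ = "ip_subnets"
        id = Column(String(36), primary_key=True)
        network_id = Column(String(36), ForeignKey("networks.id"))
        cidr = Column(String(43))
        gateway = Column(String(39), nullable=True)  # optional default gateway

    class IPAddress(Base):  # currently assigned addresses; many per VIF
        __tablename__ = "ip_addresses"
        address = Column(String(39), primary_key=True)
        subnet_id = Column(String(36), ForeignKey("ip_subnets.id"))
        vif_id = Column(String(36), ForeignKey("vifs.id"))

    class IPRoute(Base):  # effectively a self-association on IP_Subnet
        __tablename__ = "ip_routes"
        id = Column(Integer, primary_key=True)
        source_subnet_id = Column(String(36), ForeignKey("ip_subnets.id"))
        dest_subnet_id = Column(String(36), ForeignKey("ip_subnets.id"))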

NaaS APIs

Draft document: attachment:NaaS_API_spec-draft-v0.1.docx

Possible IP Configuration Strategies

Each VIF will need an IP address (IPv4 or IPv6); these are given to the VM by one of the schemes described here, which can be implemented by different plugins:

Scheme 1: Agent: Use an agent inside the VM to set the IP address details. The agent will receive the configuration through a virtualization-specific scheme (XenStore, VMCI, etc) or through a more generic scheme such as a virtual CD-ROM. This scheme of course requires installation of the agent in the VM image.

Scheme 2: DHCP: Configure the VM to use DHCP (usually the default anyway) and configure a DHCP server before booting the VM. This may be a private DHCP server, visible only to that VM, or it could be a server that is shared more widely.

Scheme 3: Filesystem modification: Before booting the VM, set the IP configuration by directly modifying its filesystem.
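
As an illustration of Scheme 3, a plugin might write a Debian-style /etc/network/interfaces into the guest's filesystem before boot. This simplified sketch assumes the guest image is already mounted at a known path; a real plugin also has to locate, mount, and unmount the image.

    TEMPLATE = """\
    auto eth0
    iface eth0 inet static
        address {address}
        netmask {netmask}
        gateway {gateway}
    """

    def inject_ip_config(mounted_root, address, netmask, gateway):
        """Scheme 3: set a static IP by editing the guest's filesystem."""
        path = f"{mounted_root}/etc/network/interfaces"
        with open(path, "w") as interfaces:
            interfaces.write(TEMPLATE.format(address=address,
                                             netmask=netmask,
                                             gateway=gateway))

    # e.g. inject_ip_config("/mnt/guest", "10.0.0.5", "255.255.255.0", "10.0.0.1")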

Pre-requisites

Multiple VIFs per VM. Not in OpenStack in Cactus, but expected to be added to Nova through NovaSpec:multi-nic and NovaSpec:multinic-libvirt for Diablo. This is required for all supported virtualization technologies (currently KVM/libvirt, XenAPI, Hyper-V, ESX).

Development Resources

No commitments have been made yet, but development resources have been offered by Citrix, Grid Dynamics, NTT, Midokura, and Rackspace.

We will sort out how to share the development burden when this specification is nearer completion.

Work in Progress

The following blueprints concerning Network Services for Openstack have been registered:

  • Network Service POC, registered by Hisaharu Ishii from NTT-PF Lab. There is also some POC code being worked on at lp:~ntt-pf-lab/nova/network-service
  • NovaSpec:netcontainers, registered by Ram Durairaj from Cisco
  • NovaSpec:naas-project, registered by Rick Clark, which can be regarded as an attempt to merge all these blueprints in a single specification for NaaS.

Also:

  • Erik Carlin is working on a draft spec for the OpenStack Networking API.
  • As already mentioned, work on supporting multiple virtual network cards per instance is already in progress. NovaSpec:nova-multi-nic
  • Ilya Alekseyev has registered the NovaSpec:distros-net-injection blueprint in order to support file-system-based IP configuration injection for a number of Linux distros (nova currently supports Debian-based distros only). Christian Berendt also registered a similar blueprint, NovaSpec:injection
  • Dan Wendlandt has registered NovaSpec:openvswitch-network-plugin for a NaaS plugin based on Open vSwitch

Discussion

Erik Carlin has created an Etherpad for general NaaS discussion. Please add your comments at http://etherpad.openstack.org/6LJFVsQAL7

Other discussion resources:

Etherpad from discussion session at Bexar design summit: http://etherpad.openstack.org/i5aSxrDeUU

Etherpad from alternative discussion session at Bexar design summit: http://etherpad.openstack.org/6tvrm3aEBt

Slide deck from discussion session at Bexar design summit: http://www.slideshare.net/danwent/bexar-network-blueprint