
NaaS-Core


This blueprint is being redrafted. Ewan Mellor and Salvatore Orlando will be happy to hear from you if you wish to contribute. Nothing here is committed.


Summary

The core functionality of OpenStack Network-as-a-Service (NaaS) is to provide a customer-facing service for creating and managing networks, each intended as a "collection of virtual ports with shared connectivity".

Rationale

A network created with the core NaaS API can be regarded as a virtual network switch which potentially spans all the compute nodes in the cloud. The NaaS API should be decoupled from the actual implementation of the core service, which should be provided by a plugin implementing the core NaaS API. This implies that NaaS does not mandate any specific model for created networks (e.g.: VLANs, IP tunnels).

The core NaaS service can also be regarded as a container for higher-level services, for instance DHCP and NAT. Higher-level services will come with their own API and implementation; they are discussed in detail in the Naas-Higher-Layer blueprint.

Goals

Goal 1: Allow customers and CSPs to create networks. Networks can either be private, i.e.: available only to a specific customer, or shared. Networks shared only among a specific group of customers can also be considered.

Goal 2: Allow customers to manage virtual ports for their networks, and attach instances or other network appliances (physical or virtual) available in the cloud to them.

Goal 3: Allow customers to extend their networks from the cloud to a remote site, by attaching a bridging device within the cloud to their networks; the bridging device would then bridge to the appropriate remote site.

Goal 4: Allow customers to configure network policies for networks, ports, and devices attached to them. These policies can include, for instance, port security policies, access control lists, or QoS policies (which are typically available on physical network switches). Since a 'minimum set' of policies supported by each possible plugin can hardly be identified, network policies should be assumed to be plugin-specific and always be configured through API extension mechanisms.

Goal 5: Allow CSPs to register and configure the plugins providing the actual implementation of the core service, as well as register and configure plugins for higher-level services such as DHCP, Firewall or Load Balancing. CSPs should be able to select and plug in third-party technologies as appropriate. This may be for extended features, improved performance, or reduced complexity or cost.

Use cases

Use case 1: Create private network and attach instance to it

Related topology: Each tenant has an isolated network. Tenants are free to do whatever they want on their network, because it is completely isolated from those of other tenants. This is the NASA Nebula model as of today, implemented using one VLAN per tenant. Note that although this same model could equally be implemented with different technologies (e.g. GRE tunnels instead of VLANs for isolation), this would not change the nature of the model itself.

  1. Customer uses the Core NaaS API to create a Network;
  2. On success, Core NaaS API returns a unique identifier for the newly created network;
  3. Customer uses Core NaaS API for configuring a logical port on the network;
  4. Customer invokes Cloud Controller API to run an instance, specifying network and virtual port for it;
  5. Cloud Controller API dispatches request to compute service;
  6. Compute service creates VM and VIFs. For each VIF, it asks NaaS Core to plug it into the port and network specified by the Customer in step 4.
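
The numbered flow above could be exercised roughly as follows. This is a minimal sketch only: the endpoint URLs, resource paths, and JSON field names are assumptions made for illustration, not part of any committed API.

  import requests

  NAAS_API = "http://naas.example.com/v1.0"   # hypothetical NaaS endpoint
  NOVA_API = "http://nova.example.com/v1.1"   # hypothetical Cloud Controller endpoint
  HEADERS = {"X-Auth-Token": "<token>"}

  # Steps 1-2: create a private network; the API returns its unique identifier.
  net = requests.post(f"{NAAS_API}/networks",
                      json={"network": {"name": "tenant-net"}},
                      headers=HEADERS).json()
  net_id = net["network"]["id"]

  # Step 3: configure a logical port on the new network.
  port = requests.post(f"{NAAS_API}/networks/{net_id}/ports",
                       json={"port": {"state": "ACTIVE"}},
                       headers=HEADERS).json()
  port_id = port["port"]["id"]

  # Steps 4-6: ask the Cloud Controller to run an instance on that network and
  # port; the compute service will later ask NaaS Core to plug each VIF in.
  requests.post(f"{NOVA_API}/servers",
                json={"server": {"name": "vm-1",
                                 "imageRef": "<image-id>",
                                 "flavorRef": "<flavor-id>",
                                 "networks": [{"network_id": net_id,
                                               "port_id": port_id}]}},
                headers=HEADERS)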

Use case 2: Attach instance to default public network

Related topology: Similar to the 'Flat' mode currently supported by nova-network. Instances from different customers are all deployed on the same virtual network. In this case, the Core NaaS service can provide a port isolation policy in order to ensure VM security.

  1. Customer uses Core NaaS API to retrieve public networks;
  2. On success, Core NaaS API returns a list of unique network identifiers; Customer selects a network from this list;
  3. Customer uses Core NaaS API for configuring a logical port on the network;
  4. Customer invokes Cloud Controller API to run an instance, specifying network and virtual port for it;
  5. Cloud Controller API dispatches request to compute service;
  6. Compute service creates VM and VIFs. For each VIF, it asks NaaS Core to plug it into the port and network specified by the Customer in step 4.

The main difference between this use case and the previous one is that the Customer uses a pre-configured network instead of creating it. Another point that needs to be discussed is whether customers should be allowed to manage ports for public networks. Alternatively, the compute service can implicitly create a port when it contacts the Core NaaS API.
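
One possible shape for that alternative is sketched below, with the compute service creating the port implicitly when it plugs a VIF. The endpoint URL, resource paths, and payload fields are illustrative assumptions only.

  import requests

  NAAS_API = "http://naas.example.com/v1.0"   # hypothetical NaaS endpoint
  HEADERS = {"X-Auth-Token": "<token>"}

  def plug_vif(network_id, vif_id):
      """Sketch: create a port on the shared public network and attach the
      VIF to it in a single call sequence, so the customer never manages
      ports on public networks directly."""
      port = requests.post(f"{NAAS_API}/networks/{network_id}/ports",
                           json={"port": {"state": "ACTIVE"}},
                           headers=HEADERS).json()
      port_id = port["port"]["id"]
      requests.put(
          f"{NAAS_API}/networks/{network_id}/ports/{port_id}/attachment",
          json={"attachment": {"id": vif_id}},
          headers=HEADERS)
      return port_id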

Use case 3: Register bridge for connecting cloud network to other site

Related topology: Customer on-premise data centre extending into the cloud, interconnecting networks in distinct clouds. Although the actual implementation can be provided in several ways, we are interested in the abstract model: a single connectivity domain spanning two or more networks in distinct administrative domains.

  1. Customer uses Core NaaS API to register a bridge for its network;
  2. On success, the Core NaaS API returns the bridge identifier;
  3. Customer uses Core NaaS API to provide bridge configuration (e.g.: remote endpoint, port, credentials);
  4. Customer uses Core NaaS API to create a virtual port on the network for the bridge device;
  5. Customer uses Core NaaS API to plug the bridge device into the network.
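
The five steps above could map onto the API along these lines; again, the resource paths and attribute names are assumptions made purely for illustration.

  import requests

  NAAS_API = "http://naas.example.com/v1.0"   # hypothetical NaaS endpoint
  HEADERS = {"X-Auth-Token": "<token>"}
  net_id = "<network-id>"                     # identifier of the customer's network

  # Steps 1-2: register a bridge for the network and obtain its identifier.
  bridge = requests.post(f"{NAAS_API}/networks/{net_id}/bridges",
                         json={"bridge": {"name": "site-a-bridge"}},
                         headers=HEADERS).json()
  bridge_id = bridge["bridge"]["id"]

  # Step 3: provide the bridge configuration (remote endpoint, port, credentials).
  requests.put(f"{NAAS_API}/networks/{net_id}/bridges/{bridge_id}",
               json={"bridge": {"remote_endpoint": "203.0.113.10",
                                "remote_port": 4789,
                                "credentials": "<shared-secret>"}},
               headers=HEADERS)

  # Step 4: create a virtual port on the network for the bridge device.
  port = requests.post(f"{NAAS_API}/networks/{net_id}/ports",
                       json={"port": {"state": "ACTIVE"}},
                       headers=HEADERS).json()
  port_id = port["port"]["id"]

  # Step 5: plug the bridge device into the network through that port.
  requests.put(f"{NAAS_API}/networks/{net_id}/ports/{port_id}/attachment",
               json={"attachment": {"id": bridge_id}},
               headers=HEADERS)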

Plug-in use cases

In this section, we give examples of the technologies that a service provider may wish to plug in at various points in the infrastructure. Some of these technologies are commercial in nature. Where this is the case, this blueprint will include bindings for an open-source alternative, so that the NaaS API is completely supported by open-source software. It is a goal of this blueprint that a service provider may select different implementation technologies.

Categories:

  • Distributed virtual switches
  • Inter-datacenter tunnels

Requirements

R1. Add a first-class, customer-facing service for management and configuration of network infrastructure via a RESTful API. This service shall be known as openstack-NaaS, and the API that it exposes shall be known as the OpenStack NaaS API.

R2. Modify nova-compute to obtain network details by calling openstack-NaaS through its public API, rather than calling nova-network as it does today.
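
As a rough illustration of R2, nova-compute could obtain network details over the public API rather than importing nova-network code directly. The endpoint, path, and payload shape below are assumptions, not a defined interface.

  import requests

  NAAS_API = "http://naas.example.com/v1.0"   # hypothetical openstack-NaaS endpoint
  HEADERS = {"X-Auth-Token": "<service-token>"}

  def get_network_details(network_id):
      """Replacement for a direct nova-network call: ask openstack-NaaS for
      the details of a network before plugging VIFs into it."""
      resp = requests.get(f"{NAAS_API}/networks/{network_id}", headers=HEADERS)
      resp.raise_for_status()
      return resp.json()["network"]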

... lots more coming here don't worry!

Design Ideas

At this stage any attempt to provide a complete design for Core NaaS would be premature. This section should therefore be regarded as a first attempt to define general design guidelines for NaaS.

At a very high level, Core NaaS and its interactions with other entities (compute service, nova database, plugins) can be summarized as follows:

[Figure: NaaS-Core$Naas-Core-Overview.png — high-level overview of Core NaaS and its interactions with the compute service, nova database, and plugins]

  • NaaS Core API serves requests from customers concerning networks and virtual ports. The NaaS API layer then dispatches the request to the plugin (which is not part of Core NaaS). The plugin then enforces network/port configuration on hypervisors using proprietary mechanisms;
  • The Compute service should not have any knowledge of the hypervisor's networking stack and uses the NaaS Core API for plugging VIFs into networks (this is slightly different from the current nova design, where part of the network manager's code - namely setup_compute_network - is used by the compute service as well);
  • Although the diagram assumes a plugin has a 'manager' component on the NaaS node and an 'agent' component, this might not always be true, as NaaS should be completely agnostic w.r.t. plugin implementations;
  • Just like today's nova-network service, Core NaaS uses the nova DB. However, the database table describing networks will be much simpler, as it will not contain information about higher-level services such as VPN. Customer-network association can still be performed on a per-project basis, but it might be worth considering a 1:N association between networks and projects. IP information for networks is a slightly different matter, and is discussed later in this blueprint.
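
A minimal sketch of the dispatch boundary between the NaaS API layer and a plugin is given below. The blueprint does not define a plugin interface yet, so the class and method names are illustrative assumptions only.

  import abc

  class NaaSCorePlugin(abc.ABC):
      """Illustrative plugin contract: the NaaS API layer validates each
      request and delegates to these methods; how the plugin enforces the
      configuration on hypervisors (agents, controllers, ...) is up to it."""

      @abc.abstractmethod
      def create_network(self, tenant_id, name):
          """Create a virtual network and return its unique identifier."""

      @abc.abstractmethod
      def create_port(self, network_id):
          """Create a logical port on a network and return its identifier."""

      @abc.abstractmethod
      def plug_interface(self, network_id, port_id, interface_id):
          """Attach a VIF or bridge device to a port on a network."""

      @abc.abstractmethod
      def delete_network(self, network_id):
          """Tear down a network and release any backend resources."""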

As far as IP configuration is concerned, this aspect can be regarded as borderline between the core service and a higher-layer service. On the one hand, given that the aim of the core service is to provide basic connectivity only, just as a switch provides network connectivity to servers in a rack, IP configuration should be regarded as a higher-layer service; a DHCP service, for instance, is an example of a higher-level service providing instances with IP configuration. On the other hand, it might rightly be argued that IP addressing is such a fundamental service that it should be part of Core NaaS; the NaaS Core API would then also provide functionality for configuring IP subnets in terms of CIDR, netmask, and gateway. However, the implementation of the IP API could be provided by a different plugin, and several plugins providing different IP configuration strategies could coexist on the NaaS node (e.g.: DHCP and IP injection). Each VIF on each VM will need an IP address (IPv4 or IPv6); this is then given to the VM by one of the schemes described here, which can be implemented by different plugins:

Scheme 1: Agent: Use an agent inside the VM to set the IP address details. The agent will receive the configuration through a virtualization-specific scheme (XenStore, VMCI, etc) or through a more generic scheme such as a virtual CD-ROM. This scheme of course requires installation of the agent in the VM image.

Scheme 2: DHCP: Configure the VM to use DHCP (usually the default anyway) and configure a DHCP server before booting the VM. This may be a private DHCP server, visible only to that VM, or it could be a server that is shared more widely.

Scheme 3: Filesystem modification: Before booting the VM, set the IP configuration by directly modifying its filesystem.
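
As a concrete illustration of Scheme 2, an IP configuration plugin could maintain a dnsmasq hosts file and reserve an address for each VIF before the VM boots. The file paths below are purely illustrative assumptions.

  import os
  import signal

  DHCP_HOSTSFILE = "/var/lib/naas/dhcp/hosts"    # file passed to dnsmasq via --dhcp-hostsfile
  DNSMASQ_PIDFILE = "/var/run/naas-dnsmasq.pid"  # pid file of the running dnsmasq instance

  def add_dhcp_reservation(mac, hostname, ip):
      """Reserve an IP for a VIF, then ask dnsmasq to re-read its hosts file
      (dnsmasq re-reads files given by --dhcp-hostsfile on SIGHUP)."""
      with open(DHCP_HOSTSFILE, "a") as f:
          f.write(f"{mac},{hostname},{ip}\n")    # dhcp-hostsfile entry: MAC,hostname,IP
      with open(DNSMASQ_PIDFILE) as f:
          pid = int(f.read().strip())
      os.kill(pid, signal.SIGHUP)

  # Example: add_dhcp_reservation("00:16:3e:aa:bb:cc", "vm-1", "10.0.1.5")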

If IP configuration is deemed part of the core API, the following goal should then be added to the above list:

Goal 6: Customers should be able to configure IP subnets for their networks. Zero, one, or more IP subnets can be associated with a network.
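
If Goal 6 is adopted, associating a subnet with a network could look roughly like the following; the resource path and attribute names are assumptions for illustration only.

  import ipaddress

  import requests

  NAAS_API = "http://naas.example.com/v1.0"   # hypothetical NaaS endpoint
  HEADERS = {"X-Auth-Token": "<token>"}
  net_id = "<network-id>"

  # Validate the CIDR locally, then associate the subnet with the network.
  cidr = ipaddress.ip_network("10.0.1.0/24")
  requests.post(f"{NAAS_API}/networks/{net_id}/subnets",
                json={"subnet": {"cidr": str(cidr),
                                 "netmask": str(cidr.netmask),
                                 "gateway": "10.0.1.1"}},
                headers=HEADERS)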