Revision as of 02:39, 6 February 2011


This blueprint is being redrafted. Ewan Mellor will be happy to hear from you if you wish to contribute. Nothing here is committed.

There is a Discussion section at the end of this blueprint. Please feel free to put comments there.


Goals

Goal 1: Add a first-class, customer-facing service for the management of network infrastructure within an OpenStack cloud. This will allow service providers to offer "Networking as a Service" (NaaS) to their customers.

Goal 2: Allow the customer to start and stop network-related services provided by the service provider. These might be load-balancers, firewalls or tunnels, for example. The service provider may charge for these services, so starting one may be a chargeable event.

Goal 3: Allow the customer to configure rich network topologies within the cloud. These topologies will include private connections between VMs, and connections between VMs and network services such as those mentioned in Goal 2. Of course, this reconfiguration must happen without affecting other tenants within the cloud.

Goal 4: Allow the customer to extend their networks from the cloud to a remote site. This is a simple extension of Goal 3 where the customer would configure a connection from their VMs to a bridging device within the cloud, which would then bridge to the appropriate remote site.

Goal 5: Allow the service provider to select and plug in third-party technologies as appropriate. This may be for extended features, improved performance, or reduced complexity or cost. For example, one service provider may choose to offer their firewall service based on hardened Linux VMs, but another one may choose to use commercial firewall software instead.

Goal 6: The extent to which customers can manipulate their own network infrastructure will depend upon the service provider and the underlying technologies that they have deployed. Goal 6 is to manage this disparity between deployments gracefully: the service provider must be free to limit what customers can do, and the API must handle those limits cleanly.

Goal 7: Support the network topologies already in use today (Topologies 1 and 2 below).

Glossary

openstack-networking: The customer-facing service proposed by this blueprint. The name distinguishes it from the existing nova-network.

The OpenStack Networking API: The customer-facing API exposed by openstack-networking.

VIF: Virtual InterFace. A VM's network interface. Also known as a vNIC.

Use Cases

Below are a number of example network topologies. For each topology you can consider two use-cases -- the customer who wishes to deploy their resources within such a topology, and the service provider who wants to allow the customer to do that.

Regardless of the topology selected, each VIF on each VM will need an IP address (IPv4 or IPv6). This may be given to the VM in one of several ways (described below). IP address injection is orthogonal to the topology choice -- any topology may be used in combination with any address injection scheme.

Topology 1: Isolated per-tenant networks

Each tenant has an isolated network, which the tenant accesses via a VPN gateway. Tenants are free to do whatever they want on their network, because it is completely isolated from those of other tenants.

This is the NASA Nebula model today.

This is currently implemented using one VLAN per tenant, and with one instance of nova-network per tenant to act as the VPN gateway. Note that this same model could equally be implemented with different technologies (e.g. a commercial VPN gateway, or using GRE tunnels instead of VLANs for isolation). One aim of this blueprint is to consider the model independently from the underlying implementation technology.
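As an illustration only, the sketch below shows one way a compute host could realize VLAN-per-tenant isolation: a VLAN sub-interface on the physical trunk is attached to a per-tenant bridge, and the tenant's VIFs are plugged into that bridge. The interface name (eth0), VLAN ID (100), and bridge name (br100) are placeholder values, and this is not a description of the current nova-network code.

# Minimal sketch: VLAN-per-tenant isolation on a compute host.
# eth0, VLAN 100 and br100 are placeholders, not values prescribed
# by this blueprint.
import subprocess

def run(cmd):
    """Run a command, raising if it fails."""
    subprocess.check_call(cmd)

def create_tenant_bridge(phys_if="eth0", vlan_id=100, bridge="br100"):
    vlan_if = "%s.%d" % (phys_if, vlan_id)
    # Create the VLAN sub-interface on the physical trunk.
    run(["ip", "link", "add", "link", phys_if, "name", vlan_if,
         "type", "vlan", "id", str(vlan_id)])
    # Create a per-tenant bridge and attach the VLAN interface to it.
    run(["ip", "link", "add", "name", bridge, "type", "bridge"])
    run(["ip", "link", "set", vlan_if, "master", bridge])
    # Bring everything up; the tenant's VIFs are then plugged into br100.
    run(["ip", "link", "set", vlan_if, "up"])
    run(["ip", "link", "set", bridge, "up"])

if __name__ == "__main__":
    create_tenant_bridge()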

Topology 2: Direct internet connections

Each VM has a single public IP address and is connected directly to the Internet. Tenants may not choose their IP address or manage anything about the network topology.

This is the Rackspace Cloud model today.

Topology 3: Firewall service

Like Topology 1, but the VPN gateway is replaced by a firewall service that can be managed by the customer through the OpenStack Networking API. The public side of the firewall would usually be connected directly to the Internet. The firewall itself is provided by the service provider.

Topology 4: Customer-owned gateway

Like Topologies 1 and 3, but instead of the gateway being a service provided by the cloud and managed through the Networking API, the customer instead provides their own gateway, as a virtual appliance.

The customer would have most of their VMs attached to their isolated network, but would be able to configure one (or more) of their VMs to have two interfaces, one connected to the isolated network, and one connected to the Internet. The customer would be responsible for the software within the publicly-connected VMs, and would presumably install a gateway or firewall therein.
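To illustrate only the shape of such a two-interface request, here is a hypothetical payload for booting the customer's gateway VM with one VIF on the isolated tenant network and one on a public network. All field names and network identifiers are invented for this sketch; the actual OpenStack Networking API has not yet been specified.

# Hypothetical request body for booting a customer gateway VM with two
# VIFs: one on the tenant's isolated network and one on a public network.
# All field names and identifiers are invented for illustration only.
import json

gateway_vm_request = {
    "server": {
        "name": "tenant-gateway",
        "imageRef": "customer-gateway-appliance",   # placeholder image
        "flavorRef": "small",                        # placeholder flavor
        "vifs": [
            {"network": "private-net-tenant-42"},    # isolated tenant network
            {"network": "public-internet"},          # publicly connected side
        ],
    }
}

print(json.dumps(gateway_vm_request, indent=2))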

Topology 5: Multi-tier applications

Like Topology 4, but rather than running a gateway or a firewall, the customer is expected to run web servers. These would serve content on the public interfaces, and would contact the backend tiers via the private network.

In this topology, it's very likely that there would be more than one web server with a public Internet connection.

IP address injection

Each VIF on each VM will need an IP address (IPv4 or IPv6). Since OpenStack is managing network provisioning, it must also manage the allocation of IP addresses for VMs. These are then given to the VM by one of the schemes described here. IP address injection is orthogonal to the topology choice -- any topology may be used in combination with any address injection scheme.

Scheme 1: Agent: Use an agent inside the VM to set the IP address details. The agent will receive the configuration through a virtualization-specific scheme (XenStore, VMCI, etc.) or through a more generic scheme such as a virtual CD-ROM. This scheme, of course, requires the agent to be installed in the VM image.

Scheme 2: DHCP: Configure the VM to use DHCP (usually the default anyway) and configure a DHCP server before booting the VM. This may be a private DHCP server, visible only to that VM, or it could be a server that is shared more widely.

Scheme 3: Filesystem modification: Before booting the VM, set the IP configuration by directly modifying its filesystem.
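As a concrete illustration of Scheme 3, the sketch below writes a static Debian-style /etc/network/interfaces into a guest filesystem. It assumes the guest's root filesystem has already been mounted on the host at a known path; the mount point and address values are placeholders.

# Minimal sketch of Scheme 3: inject a static IPv4 configuration by
# writing a Debian-style /etc/network/interfaces into the guest's
# filesystem, assumed here to be already mounted at mount_point.
import os

INTERFACES_TEMPLATE = """\
auto eth0
iface eth0 inet static
    address %(address)s
    netmask %(netmask)s
    gateway %(gateway)s
"""

def inject_ipv4_config(mount_point, address, netmask, gateway):
    path = os.path.join(mount_point, "etc", "network", "interfaces")
    with open(path, "w") as f:
        f.write(INTERFACES_TEMPLATE % {
            "address": address,
            "netmask": netmask,
            "gateway": gateway,
        })

# Example with placeholder values, assuming the guest image is mounted at /mnt/guest:
# inject_ipv4_config("/mnt/guest", "10.0.0.5", "255.255.255.0", "10.0.0.1")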

Plug-in use cases

In this section, we give examples of the technologies that a service provider may wish to plug in at various points in the infrastructure.

Some of these technologies are commercial in nature. Where this is the case, this blueprint will include bindings for an open-source alternative, so that the Networking API is completely supported by open-source software.

It is a goal of this blueprint that a service provider may select different implementation technologies in each category, whether that's to use a commercial technology or simply to replace one open-source technology with another.

Categories:
Distributed virtual switches
VPN gateways
Load balancers
Firewalls
Inter-datacenter tunnels
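
To make the plug-in idea more concrete, here is a sketch of what a provider-selectable driver interface could look like for one of these categories (firewalls). The class and method names are hypothetical; the real driver interface would be defined as part of the Design section.

# Hypothetical sketch of a pluggable firewall driver interface.  The
# class and method names are invented for illustration; the real
# interface would be defined in the Design section of this blueprint.

class FirewallDriver(object):
    """Base class a service provider implements for their chosen firewall."""

    def create_firewall(self, tenant_id):
        """Provision a firewall instance for the given tenant."""
        raise NotImplementedError()

    def add_rule(self, firewall_id, protocol, port, source_cidr):
        """Allow traffic matching (protocol, port, source_cidr)."""
        raise NotImplementedError()


class LinuxVMFirewallDriver(FirewallDriver):
    """Open-source implementation backed by a hardened Linux VM."""

    def create_firewall(self, tenant_id):
        # Boot a hardened Linux VM and attach it to the tenant's network.
        return "linux-fw-%s" % tenant_id

    def add_rule(self, firewall_id, protocol, port, source_cidr):
        # Translate the rule into an iptables change on the firewall VM.
        pass

# A provider could instead supply a driver wrapping a commercial firewall
# product; openstack-networking would load whichever driver the
# deployment's configuration names.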

Pre-requisites

Multiple VIFs per VM. This is not in OpenStack as of Bexar, but is expected to be added to Nova through NovaSpec:multi-nic and NovaSpec:multinic-libvirt for Cactus or Diablo. This is required for all supported virtualization technologies (KVM/libvirt, XenAPI, Hyper-V, ESX).

Requirements

R1. Add a first-class, customer-facing service for management and configuration of network infrastructure via a RESTful API. This service shall be known as openstack-networking, and the API that it exposes shall be known as the OpenStack Networking API.

R2. Modify nova-compute to obtain network details by calling openstack-networking through its public API, rather than calling nova-network as it does today.
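To illustrate R2, the sketch below shows how nova-compute might fetch network details for an instance over HTTP from openstack-networking. The endpoint, URL path, and response shape are assumptions made purely for this example, since the OpenStack Networking API is still being drafted.

# Illustrative sketch only: nova-compute fetching network details from
# openstack-networking over HTTP.  The endpoint, path, and JSON shape
# are assumptions; the real OpenStack Networking API is not yet defined.
import json
from urllib.request import urlopen

# Placeholder endpoint; the real service location would come from configuration.
NETWORKING_ENDPOINT = "http://networking.example.com:9696"

def get_network_details(tenant_id, instance_id):
    """Return the VIF configurations for an instance (hypothetical API shape)."""
    url = "%s/v1/%s/instances/%s/vifs" % (
        NETWORKING_ENDPOINT, tenant_id, instance_id)
    with urlopen(url) as response:
        return json.loads(response.read())

# nova-compute would call something like this instead of asking nova-network
# directly, e.g.:
# vifs = get_network_details("tenant-42", "instance-0001")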

... lots more coming here don't worry!

Non-requirements

Assumptions

Rationale

Design

QA

Future

Development Resources

No commitments have been made yet, but development resources have been offered by Citrix, Grid Dynamics, NTT, Midokura, and Rackspace.

We will sort out how to share the development burden when this specification is nearer completion.

Release Note

Work in Progress

Erik Carlin is working on a draft spec for the OpenStack Networking API.

Discussion

Etherpad from discussion session at Bexar design summit: http://etherpad.openstack.org/i5aSxrDeUU

Etherpad from alternative discussion session at Bexar design summit: http://etherpad.openstack.org/6tvrm3aEBt

Slide deck from discussion session at Bexar design summit: http://www.slideshare.net/danwent/bexar-network-blueprint