NetworkService

This blueprint is being redrafted. Ewan Mellor will be happy to hear from you if you wish to contribute. Nothing here is committed.


Goals

Add a first-class, customer-facing service for the management of network infrastructure within an OpenStack cloud.

Allow the customer to configure rich network topologies within the cloud. These topologies will include private connections between VMs, and connections between VMs and network services such as load balancers, firewalls, and tunnels. This configuration must happen without affecting other tenants within the cloud.

Allow the service provider to select and plug in third-party technologies as appropriate. This may be for extended features, improved performance, or reduced complexity or cost.

The extent to which customers can manipulate their own network infrastructure will depend upon the service provider and the underlying technologies that they have deployed. It is a further goal of this blueprint to gracefully manage the disparity between various deployments.

Additionally, some of the topologies proposed below imply that the compute layer must support multiple network interfaces per VM. This is not supported today, and must be added as part of this work.

Glossary

The customer-facing service proposed by this blueprint shall be named "openstack-networking" for the purpose of this document, to distinguish it from the existing nova-network.

The customer-facing API exposed by openstack-networking will be called the OpenStack Networking API for the purpose of this document.

Neither name is important, and we do not have to stick with them.

Use Cases

Below are a number of example network topologies. For each of these, consider two use cases: the customer who wishes to deploy their resources within such a topology, and the service provider who wants to allow the customer to do that.

Topology 1: Isolated per-tenant networks

Each tenant has an isolated network, which the tenant accesses via a VPN gateway. Tenants are free to do whatever they want on their network, because it is completely isolated from those of other tenants.

This is the NASA Nebula model today.

This is currently implemented using one VLAN per tenant, and with one instance of nova-network per tenant to act as the VPN gateway. Note that this same model could equally be implemented with different technologies (e.g. a commercial VPN gateway, or using GRE tunnels instead of VLANs for isolation). One aim of this blueprint is to consider the model independently from the underlying implementation technology.
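
To make the model concrete, here is one way a request for such an isolated network might look against the proposed API. This is purely an illustrative sketch: the endpoint, payload fields, token, and response shape are all invented here, since nothing about the API is committed yet.

    import requests

    # Hypothetical call: every name below is invented for illustration.
    ENDPOINT = "https://cloud.example.com/v1.0/tenants/acme/networks"

    payload = {
        "network": {
            "label": "acme-private",  # customer-chosen name
            "isolation": "default",   # provider decides: VLAN, GRE tunnel, etc.
        }
    }

    resp = requests.post(ENDPOINT, json=payload,
                         headers={"X-Auth-Token": "example-token"})
    resp.raise_for_status()
    network_id = resp.json()["network"]["id"]  # hypothetical response shape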

Topology 2: Direct internet connections

Each VM has a single public IP address and is connected directly to the Internet. Tenants may not choose their IP address or manage anything about the network topology.

This is the Rackspace Cloud model today.

Topology 3: Firewall service

Like Topology 1, but the VPN gateway is replaced by a firewall service that can be managed by the customer through the OpenStack Networking API. The public side of the firewall would usually be connected directly to the Internet. The firewall itself is provided by the service provider.
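
As a sketch of what customer management of the firewall might look like (again, every resource and field name here is invented; the API is undefined), a customer opening inbound HTTPS could issue something like:

    import requests

    # Hypothetical call against a provider-supplied firewall "fw-1".
    RULES = "https://cloud.example.com/v1.0/tenants/acme/firewalls/fw-1/rules"

    rule = {"rule": {"direction": "inbound", "protocol": "tcp",
                     "port": 443, "action": "allow"}}

    requests.post(RULES, json=rule,
                  headers={"X-Auth-Token": "example-token"}).raise_for_status()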

Topology 4: Customer-owned gateway

Like Topologies 1 and 3, but instead of the gateway being a service provided by the cloud and managed through the Networking API, the customer provides their own gateway as a virtual appliance.

The customer would have most of their VMs attached to their isolated network, but would be able to configure one (or more) of their VMs to have two interfaces, one connected to the isolated network, and one connected to the Internet. The customer would be responsible for the software within the publicly-connected VMs, and would presumably install a gateway or firewall therein.
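
A hypothetical server-create payload for such a dual-homed VM might look like the following. The "networks" list is the new piece -- today's compute API has no way to request more than one interface -- and these field names are invented for illustration:

    # Sketch of a two-NIC boot request; all names are placeholders.
    server_request = {
        "server": {
            "name": "acme-gateway",
            "imageRef": "gateway-appliance",  # customer-supplied appliance image
            "flavorRef": "m1.small",
            "networks": [
                {"network_id": "net-acme-private"},  # isolated tenant network
                {"network_id": "net-public"},        # direct Internet connection
            ],
        }
    }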

Topology 5: Multi-tier applications

Like Topology 4, but rather than running a gateway or a firewall, the customer is expected to run web servers. These would serve content on their public interfaces, and would contact the backend tiers via the private network.

In this topology, it's very likely that there would be more than one web server with a public Internet connection.

Plug-in use cases

In this section, we give examples of the technologies that a service provider may wish to plug in at various points in the infrastructure.

Some of these technologies are commercial in nature. Where this is the case, this blueprint will include bindings for an open-source alternative, so that the Networking API is completely supported by open-source software.

It is a goal of this blueprint that a service provider may select different implementation technologies in each category, whether that's to use a commercial technology or simply to replace one open-source technology with another.

Category                        Examples
Distributed virtual switches
VPN gateways                    Citrix Access Gateway, OpenVPN
Load balancers                  Citrix NetScaler, HAProxy
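
One way this per-category pluggability might be expressed in code is a small driver interface per category, with one implementation per technology. The following is a sketch, not a committed design, and all class and method names are invented:

    import abc

    class LoadBalancerDriver(abc.ABC):
        """Hypothetical plug-in interface for the load-balancer category.
        Each category in the table above would get a similar interface."""

        @abc.abstractmethod
        def create_pool(self, tenant_id, name):
            """Create a balancing pool and return its ID."""

        @abc.abstractmethod
        def add_member(self, pool_id, address, port):
            """Add a backend VM to the pool."""

    class HAProxyDriver(LoadBalancerDriver):
        """Open-source binding. A NetScaler driver would implement the same
        interface, so the provider can swap one for the other."""

        def create_pool(self, tenant_id, name):
            pool_id = f"{tenant_id}-{name}"
            # ... write an haproxy backend stanza here (elided in this sketch)
            return pool_id

        def add_member(self, pool_id, address, port):
            pass  # ... append a 'server' line to the haproxy config (elided)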

Requirements

R1. Add a first-class, customer-facing service for management and configuration of network infrastructure via a RESTful API.

Rn. Add support to nova-compute for multiple network interfaces per VM. Add this to all supported virtualization technologies (KVM/libvirt, XenAPI, Hyper-V, ESX).
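
On KVM/libvirt, for instance, a second interface is just another <interface> element in the domain XML, and can even be hot-plugged. Below is a minimal sketch using the existing libvirt Python bindings; the network and domain names are placeholders, and each of the other back ends would need an equivalent:

    import libvirt

    # XML for an extra virtio NIC on the tenant's private network.
    SECOND_NIC = """
    <interface type='network'>
      <source network='tenant-private'/>
      <model type='virtio'/>
    </interface>
    """

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-00000001")  # placeholder domain name
    dom.attachDevice(SECOND_NIC)                  # hot-plug the extra interface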

... lots more coming here, don't worry!

Non-requirements

Assumptions

Rationale

Design

QA

Future

Development Resources

Release Note

Discussion

Etherpad from discussion session at Bexar design summit: http://etherpad.openstack.org/i5aSxrDeUU

Slide deck from discussion session at Bexar design summit: http://www.slideshare.net/danwent/bexar-network-blueprint