NetworkServicePOC

  • Launchpad Entry: https://code.launchpad.net/~ntt-pf-lab/nova/network-service
  • Created: Hisaharu Ishii (https://launchpad.net/~ishii-hisaharu)

Summary

In the current Nova implementation, there are three types of network services: VLAN, flat, and flat DHCP. For each deployment of Nova, only one of these services can be configured for use at a time. While the current network architecture in Nova works fine with this design, we would like to propose a more flexible design in which these network services are implemented more modularly, allowing them to be 'plugged in' to Nova. This design would allow third-party network services to be easily integrated into Nova.

In this POC implementation, we will try to solve the following issues in the current networking implementation:

  1. Strong coupling of compute and network services: In the current implementation, a fairly large amount of network-related code exists inside the compute code. This coupling makes it difficult to swap the existing services with custom ones.
  2. Association of networking services at the deployment/application level: Networking services should be associated at a smaller level of granularity, so that within one deployment of Nova it would be possible to have multiple types of networks.
  3. Lack of L2 support: Because the currently available networking services are all L3-based, there is no option to create an L2-based network. Making network services pluggable would make it easier to implement and use an L2 (or any other type of) network.

Finally, it should be noted that despite these changes, all the currently available features should work the same way as they do now.

Rationale

Network services should be implemented in a pluggable manner so that various types of network service can be used with Nova.

Assumptions

  • Multiple virtual NICs per instance are supported.
  • EC2 API does not support multiple NICs.

Design

Network Service POC Overview

[Image: openstack-network-service-apis-generic-vs-specific.png]

A pluggable network service is typically broken up into the following three parts:

  1. net-api: REST APIs for network management. Each plugin is responsible for providing a management tool (just like nova-manage) to manage its networks.
  2. net-agent: Pluggable module that defines generic interfaces called by the compute service. These APIs are meant to be run on the compute node.
  3. net-service: A plugin can optionally implement a network service, equivalent to the nova-network service in the current Nova implementation, that handles various network tasks such as DHCP, NAT, and IP allocation.

Management of the networks, such as creating networks and allocating IPs and virtual NICs, is accomplished entirely through net-api. All the network-related code that currently exists in the compute code is moved to net-agent, thereby decoupling compute from the network services. Since net-service is optional, Nova does not have any knowledge of its existence. Only net-api and net-agent may access net-service. For example, for nova-compute to request a service from net-service, it must go through net-agent.
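
To make this calling discipline concrete, here is a minimal Python sketch of the separation; all class and method names below are illustrative stand-ins, not actual Nova code.

class NetService(object):
    """Optional per-plugin service on the network node (DHCP, NAT, IP allocation)."""
    def allocate_ip(self, vnic_id):
        # plugin-specific allocation logic would live here
        return '10.0.0.2'

class NetAgent(object):
    """Runs on the compute node; the only component allowed to call NetService."""
    def __init__(self, service):
        self._service = service

    def bind_vnic_to_port(self, vnic_id):
        # nova-compute asks the agent; the agent, not compute, contacts net-service
        return self._service.allocate_ip(vnic_id)

agent = NetAgent(NetService())            # nova-compute holds only the agent
ip = agent.bind_vnic_to_port(vnic_id=42)  # never a direct net-service call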

Nova Network Service Plugins

Nova currently comes with three types of network managers: VlanManager, FlatManager and FlatDHCPManager. These are converted into separate plugins in the new model. These plugins are defined in the 'nova/network' directory as distinct modules. The actual paths of these plugin modules are nova.network.vlan, nova.network.flat and nova.network.flat_dhcp (see the configuration example below).

OpenStack Networking Concept

The networking in OpenStack consists of the following main concepts:

  • network: Represents a logical network but does not specify whether it's L3 or L2.
  • virtual NIC (VNIC): Represents a virtual network interface card that is attached to a VM instance. One common example of a VNIC is an Ethernet card.
  • port: Represents a virtual socket to plug the VNIC into for network connectivity. A port belongs to a particular network.

It is assumed that a VM instance is allowed to have multiple VNICs associated with it, with each one connecting to a port that belongs to a different network.
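
A minimal sketch of these relationships, using plain illustrative Python classes (these mirror the concepts above, not any actual Nova model):

class Network(object):
    """A logical network; the concept does not say whether it is L2 or L3."""
    def __init__(self, label):
        self.label = label
        self.ports = []

class Port(object):
    """A virtual socket that belongs to exactly one network."""
    def __init__(self, network):
        self.network = network
        network.ports.append(self)

class VNIC(object):
    """A virtual NIC attached to a VM instance, e.g. an Ethernet card."""
    def __init__(self, mac_address):
        self.mac_address = mac_address
        self.port = None              # set when the VNIC is plugged into a port

net_a, net_b = Network('net-a'), Network('net-b')
vnic1, vnic2 = VNIC('02:16:3e:00:00:01'), VNIC('02:16:3e:00:00:02')
vnic1.port, vnic2.port = Port(net_a), Port(net_b)   # one instance, two networks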

Project-Network Service Association

A project is associated with a network service. A project cannot be associated with more than one network service. By default, every project is associated with the network service defined by the flag --default_network_service, whose value is the path to the plugin module. If this flag is not set, it defaults to VLAN (nova.network.vlan). New nova-manage commands are provided to help manage the association.
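
The lookup this implies is small; here is a sketch with the association table reduced to a dictionary for illustration (the default module path is from this spec, the function name is hypothetical):

DEFAULT_NETWORK_SERVICE = 'nova.network.vlan'   # --default_network_service fallback

def network_service_for_project(project_id, associations):
    # associations: project_id -> plugin module path (at most one per project)
    return associations.get(project_id, DEFAULT_NETWORK_SERVICE)

associations = {'proj-a': 'nova.network.flat_dhcp'}
print(network_service_for_project('proj-a', associations))  # nova.network.flat_dhcp
print(network_service_for_project('proj-b', associations))  # nova.network.vlan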

Network Service Configuration File

The available network services are listed in a network service configuration file. The location of this file is defined by the flag --network_service_conf. If this flag is not defined, the default is /etc/nova/nova-network.conf. The format of this file is simply a list of Python module paths of the network services, one per line. Any blank line, or any line that starts with '#', is ignored.

Example:


# Example network services
nova.network.flat
nova.network.flat_dhcp
nova.network.vlan
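
A parser for this format could be as small as the following sketch (importlib is standard library; the default path matches the flag default above):

import importlib

def load_network_services(conf_path='/etc/nova/nova-network.conf'):
    # skip blank lines and lines starting with '#'; import everything else
    modules = []
    with open(conf_path) as conf:
        for line in conf:
            line = line.strip()
            if not line or line.startswith('#'):
                continue
            modules.append(importlib.import_module(line))
    return modules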

Network Management REST API

All the network management is done through the REST API implemented by each plugin. In this proposal, only the OpenStack API is applicable. The EC2 API still works the same way as before, but it does not support multiple NICs for an instance.

Each plugin has its own REST API URL that contains its name in the path. The format of the URL is: http://server:port/version/plugin_name/... For example, the VLAN plugin's APIs would live under http://server:port/version/vlan/...

Flat, FlatDHCP and VLAN define the same set of APIs.

The request and response formats are up to the plugins. They should, however, support both JSON and XML.

The routes for these APIs are set by the plugins themselves. The way this is accomplished is explained in the implementation section below.

Note: There is work being done on the new OpenStack Compute API (version 1.1). It is important to keep the network APIs in sync with the new design. (At the time of this writing, they are NOT in sync.)

Data Store

Nova's database keeps the following data:

  • Association of projects and network services
  • Association of instances and virtual NICs

The actual data for virtual NICs, networks and ports are all plugin-specific, and therefore belong in the plugin's own data store. If the plugin uses SQL as its data store, then it is the plugin's responsibility to define the migration scripts, data models and DB APIs.

For VLAN, Flat and FlatDHCP, a virtual NIC is an ethernet card, a port is a fixed IP, and a network is the same as that of the current Nova implementation.

  • Nova is aware of the existence of Networks and VPorts, where VPorts belong to a Network. VPorts represent the ports into which you plug the VNICs for network connectivity. A Network is a group of VPorts that can represent either an L2 or an L3 network.
  • Compute's API takes in a list of lists of VNIC IDs to associate with the instances that it is creating.
  • The OpenStack API to create a new instance should have a newly defined API that takes in VNICs as a mandatory parameter. An exception should be thrown if VNICs are not supplied or are invalid.
  • The legacy OpenStack API to create a new instance (which does not accept VNICs as a parameter) creates a new VNIC and calls the new create-VM API with this VNIC as a parameter (sketched after this list).
  • EC2 API creates a new VNIC for each instance to be created, and calls the Compute's API to create these instances.
  • During the VM instantiation, the compute code does not make any RPC call to the network node, and does not perform any task related to networking. Any network-related code that gets executed on the compute node is defined in a separate module called Network Agent.
  • Network Service (the service that runs on the network node) is implemented by each plugin. Only Network Agent, which is also implemented by the plugin, can access its own Network Service.
  • A generic API to get the IP address for a given VNIC is defined by the Network Agent for file injection.
  • A generic API to generate libvirt XML network interface definitions is defined by the Network Agent for generating the libvirt domain XML file.
  • Firewall logic is moved from Compute to Network Agent.
  • A generic API is defined by Network Agent to bind a VNIC to a VPort (in most cases, this is just assigning an IP address).
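
A hedged sketch of the two create paths above, with hypothetical function names; get_default_vnic_id() and bind_vnics_to_ports() are the net agent APIs specified later in this document:

def create_instance(context, instance_params, vnic_ids):
    # newly defined API: VNICs are a mandatory parameter
    if not vnic_ids:
        raise ValueError('VNICs are a mandatory parameter')
    # ... create the instance records; during instantiation the compute node
    # calls net_agent.bind_vnics_to_ports(vnic_ids) rather than making any
    # RPC call to the network node

def create_instance_legacy(context, instance_params, net_agent):
    # legacy path: no VNICs supplied, so create a default one dynamically
    vnic_id = net_agent.get_default_vnic_id()
    if vnic_id is None:
        raise RuntimeError('could not create a default VNIC')
    return create_instance(context, instance_params, [vnic_id])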

Implementation

Nova-manage script changes

The nova-manage script has new project commands for network service management:

  • nova-manage project network_set project_id network_service
    • Sets the network service module path network_service for the project with ID project_id. If the project already has a network service set, it is overwritten.
  • nova-manage project network_get project_id
    • Gets the network service that is associated with the project with ID project_id. Returns None if no network service is associated.
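
For example, associating a project with the FlatDHCP plugin and reading the association back (the project ID here is hypothetical):

nova-manage project network_set proj-123 nova.network.flat_dhcp
nova-manage project network_get proj-123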

The network commands in nova-manage are obsolete, as all network management commands have moved to the plugin-specific management tools.

Network Service Classes

  • NetworkServiceManager
    • Defined in nova.network.service, the NetworkServiceManager class handles loading the available network service plugins and provides the other Nova services with access to these modules. The configuration file listing the network services is read only once, and the loaded modules are cached in memory.
  • NetworkServiceRouteMap
    • The NetworkServiceRouteMap class, also defined in nova.network.service, is a wrapper around the Python Routes mapper object that is responsible for adding the appropriate prefix to all the REST API routes. The routes themselves are set by the plugins when the OpenStack API module is initialized, but the route prefix, which includes the plugin's name, is set by this class.
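
A sketch of how such a wrapper might look, using the Routes library; the constructor signature and the sample route are illustrative, not the actual class:

from routes import Mapper

class NetworkServiceRouteMap(object):
    def __init__(self, mapper, plugin_name):
        self._mapper = mapper
        self._prefix = '/' + plugin_name

    def connect(self, path, **kwargs):
        # a plugin registers '/networks'; the wrapper prefixes it so the
        # final route becomes '/vlan/networks', '/flat/networks', etc.
        self._mapper.connect(self._prefix + path, **kwargs)

mapper = Mapper()
vlan_routes = NetworkServiceRouteMap(mapper, 'vlan')
vlan_routes.connect('/networks', controller='networks', action='index')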

Database Changes

In the Nova DB, the following new tables are added:

ProjectNetworkServiceAssociation


+ id (PK)
+ project_id
+ network_service
+ deleted, created_at, deleted_at, ...

InstanceVirtualNicAssociation


+ id (PK)
+ instance_id(FK - instances.id)
+ virtual_nic_id
+ deleted, created_at, deleted_at, ...
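
For illustration, the two tables above could be expressed as SQLAlchemy models along these lines (table names and column types are assumptions; only the column lists come from this spec):

from sqlalchemy import Boolean, Column, DateTime, ForeignKey, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class ProjectNetworkServiceAssociation(Base):
    __tablename__ = 'project_network_service_association'   # name is illustrative
    id = Column(Integer, primary_key=True)
    project_id = Column(String(255))
    network_service = Column(String(255))   # plugin module path
    deleted = Column(Boolean, default=False)
    created_at = Column(DateTime)
    deleted_at = Column(DateTime)

class InstanceVirtualNicAssociation(Base):
    __tablename__ = 'instance_virtual_nic_association'      # name is illustrative
    id = Column(Integer, primary_key=True)
    instance_id = Column(Integer, ForeignKey('instances.id'))  # existing Nova table
    virtual_nic_id = Column(Integer)
    deleted = Column(Boolean, default=False)
    created_at = Column(DateTime)
    deleted_at = Column(DateTime)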

The VLAN, FlatDHCP and Flat plugins have an EthernetCards table, which represents the VNIC:

EthernetCards


+ id (PK)
+ mac_address
+ deleted, created_at, deleted_at, ...

Also, for all the built-in plugins, the networks table is ported over from the Nova DB. For VLAN and FlatDHCP, the fixed_ips and floating_ips tables are copied over as well. For Flat, the fixed_ips table is ported over and renamed to ip_addresses.

Plug-in Generic API

As mentioned earlier, each plugin consists of several components, and some of these components are required by Nova. The actual implementation of these components is up to the plugins, but the plugins must define generic APIs that return these modules/classes to Nova.

These generic APIs are:

  • get_os_service()
    • Returns the module/class that implements generic APIs needed by Nova's OpenStack/EC2 APIs.
  • get_os_api_service()
    • Returns the module/class that implements generic APIs needed by Nova's OpenStack API.
  • get_net_agent()
    • Returns the module/class that implements generic APIs needed by Nova's compute.
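
Put together, the top level of a plugin module might look like the following sketch; the three function names are the ones required above, while the classes behind them are stand-in stubs:

class _OsService(object):
    """Generic APIs needed by Nova's OpenStack/EC2 APIs."""

class _OsApiService(object):
    """Generic APIs needed by Nova's OpenStack API (e.g. route setup)."""

class _NetAgent(object):
    """Generic APIs needed by nova-compute (see the next section)."""

def get_os_service():
    return _OsService()

def get_os_api_service():
    return _OsApiService()

def get_net_agent():
    return _NetAgent()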

Net Agent Generic API

All the net agent APIs are generic.

  • get_default_vnic_id() : vnic_id
    • Gets the default VNIC for the plugin. This API must be implemented by all plugins because the current OpenStack/EC2 API to launch an instance does not take a parameter for VNICs, which means a VNIC must be created dynamically at the time of VM instantiation. This API is called by the Compute API class, and is defined by the plugin's API module/class returned from get_os_api_service(). The Flat, FlatDHCP and VLAN plugins simply create a new Ethernet card. It returns the new VNIC ID if successful, and None otherwise.
  • get_project_vpn_address_and_port(project_id) : (vpn_ip, vpn_port)
    • Gets a tuple of the VPN IP address and VPN port for a given project. This is only applicable if the service has VPN data (like the VLAN plugin). It is used to construct the credentials for the user.
  • bind_vnics_to_ports(vnic_ids)
    • For a list of vnic_ids, this API binds them to ports if they are not already bound. This binding activates the network connectivity. It is up to the plugin to determine which ports these VNICs should be plugged into. It does not return anything. It is implemented by the net agent.
  • setup_compute_network(vnic_ids)
    • This method must be implemented to do any networking setup on the compute node at the time of VM launch.
  • teardown_compute_network(vnic_ids)
    • This method must be implemented to perform any cleanup on the compute host. This method gets called when the VM terminates.
  • requires_file_injection(vnic_id) : True/False
    • Returns True if the network service associated with vnic_id requires injecting network information to the /etc/network/interfaces file. This is implemented by the net agent.
  • get_network_info(vnic_id) : (dictionary: cidr, cidr_v6, netmask, netmask_v6, gateway, gateway_v6, dhcp_server, broadcast, dns, ra_server, mac_address, ip_address, and address_v6)
    • Gets the network data associated with the VNIC with ID vnic_id. The return value is a dictionary of network data for the VNIC. These values can be None if they don't apply to that plugin. It is defined in the net agent, and is required for file injection, iptables setup, and libvirt XML creation.
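
A sketch of how nova-compute might drive these APIs over a VM's life cycle; the two driver functions and the inject_network_settings helper are hypothetical illustrations, while the agent methods are the ones specified above:

def inject_network_settings(info):
    # hypothetical helper: render info into /etc/network/interfaces
    pass

def setup_instance_networking(agent, vnic_ids):
    # 'agent' is what plugin.get_net_agent() returned; call order follows
    # the API descriptions above
    agent.setup_compute_network(vnic_ids)
    agent.bind_vnics_to_ports(vnic_ids)
    for vnic_id in vnic_ids:
        if agent.requires_file_injection(vnic_id):
            info = agent.get_network_info(vnic_id)
            inject_network_settings(info)   # uses info['ip_address'], etc.

def teardown_instance_networking(agent, vnic_ids):
    agent.teardown_compute_network(vnic_ids)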

Test/Demo Plan

  • Unit tests will be added as the code is developed.

Unresolved issues

  • Network service components, such as DHCP and Firewall, should be pluggable and optional.
  • Firewall logic needs to be refactored. The actual implementation should be plugin-specific, so it should not exist inside the compute/libvirt code as it does now.
  • Project-network service association commands should be more user-friendly with validations and sanity checks.
  • REST API should check that the project that is sent in the request matches the network service that the URL is asking for.
  • Make the Network Management API follow the format of the new Compute API (1.1).
  • Integrate with the multi-nic implementation currently underway.
  • Consider the possibility of joining the three current network managers into one plugin.