XenapiVlanNetworking

 
__NOTOC__
Placeholder for the full specification of the xenapi-vlan-networking blueprint
* '''Launchpad Nova blueprint''': [[NovaSpec]]:xenapi-vlan-network-manager
* '''Created''': 16 February 2011
* '''Last updated''': 16 February 2011
* '''Drafter''': [https://launchpad.net/~salvatore-orlando Salvatore Orlando]

== Goal ==

Provide support for the VLAN network manager on XenAPI back-ends.

== Implementation steps ==

Steps for network configuration on the compute node:

1) Retrieve the VLAN ID and bridge identifier from the network record in the Nova DB.

2) List the existing networks and find the network whose 'name_label' attribute equals the value of the 'bridge' column in the Nova DB.

3.a) If the network is not found:

3.b) Create a new network, using the value of the 'bridge' column in the Nova DB as the name for the new network.

NOTE: This is necessary because, by default, the bridge name in the Nova DB is brXXX, where XXX is the VLAN ID. Unfortunately, XenAPI does not allow arbitrary names to be assigned to bridges; xapiXXX is used instead.

3.c) Find the PIF for the device specified by FLAGS.vlan_interface.

NOTE: If the target is a pool there might be multiple PIFs.

3.d) Create a VLAN using the ID retrieved at step #1 and bind it to the network created in step #3.b and the PIFs retrieved in step #3.c.

 
4.a) If the network is found:

4.b) Retrieve the PIFs associated with the network.

4.c) Make sure VLAN tagging is enabled for each PIF and that the VLAN ID is the same as in step #1; otherwise throw an exception.

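The configuration steps above can be sketched as follows. This is only an illustrative sketch: the function name `ensure_vlan_network` is an assumption, and the Xen back-end is modelled as plain dicts rather than real XenAPI session calls.

```python
def ensure_vlan_network(networks, pifs, bridge, vlan_id, vlan_interface):
    """Ensure a back-end network named `bridge` exists for `vlan_id`.

    networks: dict of network ref -> {'name_label': ..., 'PIFs': [...]}
    pifs:     dict of PIF ref -> {'device': ..., 'VLAN': tag or -1}
    Returns the ref of the (possibly newly created) network.
    """
    # Step 2: look up an existing network whose name_label equals the
    # Nova 'bridge' column value.
    for ref, rec in networks.items():
        if rec['name_label'] == bridge:
            # Steps 4.b/4.c: verify VLAN tagging on every attached PIF.
            for pif_ref in rec['PIFs']:
                if pifs[pif_ref]['VLAN'] != vlan_id:
                    raise Exception('VLAN mismatch on PIF %s' % pif_ref)
            return ref
    # Step 3.b: create the network, named after the Nova bridge column.
    new_ref = 'OpaqueRef:net-%s' % bridge
    # Step 3.c: find the PIF(s) for the FLAGS.vlan_interface device
    # (a pool target may yield more than one).
    targets = [r for r, rec in pifs.items()
               if rec['device'] == vlan_interface]
    # Step 3.d: tag each PIF with the VLAN ID and attach it to the network.
    for pif_ref in targets:
        pifs[pif_ref]['VLAN'] = vlan_id
    networks[new_ref] = {'name_label': bridge, 'PIFs': targets}
    return new_ref
```

On a real deployment the dict operations would correspond to XenAPI calls such as network.get_all, network.create and VLAN.create; the sketch only mirrors the control flow of steps 1–4.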
When a new instance is created, we will look up networks on the XenServer back-end not by bridge (as in the flat model) but by name_label. Looking for a network called brXXX will return a reference to the xapiYYY network on the Xen back-end.

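The lookup described above can be sketched as below; the helper name and dict layout are illustrative, standing in for a XenAPI query on the network class.

```python
def find_network_for_bridge(networks, bridge):
    """Resolve a Nova bridge name (brXXX) to a back-end network ref.

    The match is on name_label, not on the actual bridge name
    (which XenAPI assigns as xapiYYY).
    """
    matches = [ref for ref, rec in networks.items()
               if rec['name_label'] == bridge]
    if not matches:
        raise Exception('No network found for bridge %s' % bridge)
    return matches[0]
```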
Implementation Strategy:

In the current libvirt implementation both the network and the compute service use the same network driver (linux_net).

In order to provide XenAPI support we need a new network driver (which implements only ensure_vlan_bridge). This network driver will replace brctl/vconfig calls with the appropriate [[XenApi]] calls.

For this reason:

* the network service will continue using the linux_net driver
* the compute service will use the new 'xenapi_net' driver

This also means that the two services will use different flags for the network driver.
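The per-service driver split can be sketched as follows. The class names, the registry, and the returned strings are illustrative assumptions, not Nova's actual module layout; the point is only that each service resolves its own flag value to a driver exposing ensure_vlan_bridge.

```python
class LinuxNetDriver:
    """Existing driver: shells out to brctl/vconfig (simplified stub)."""
    def ensure_vlan_bridge(self, vlan_id, bridge):
        # Placeholder for the real vconfig/brctl commands.
        return 'vconfig add eth VLAN %d; brctl addbr %s' % (vlan_id, bridge)

class XenapiNetDriver:
    """New driver: would issue XenAPI calls instead (simplified stub)."""
    def ensure_vlan_bridge(self, vlan_id, bridge):
        # Placeholder for network.create + VLAN.create via XenAPI.
        return 'XenAPI: create network %s, VLAN tag %d' % (bridge, vlan_id)

# Hypothetical registry standing in for Python module imports.
DRIVERS = {'linux_net': LinuxNetDriver, 'xenapi_net': XenapiNetDriver}

def load_network_driver(flag_value):
    """Each service reads its own flag, so the two can differ."""
    return DRIVERS[flag_value]()

network_service_driver = load_network_driver('linux_net')
compute_service_driver = load_network_driver('xenapi_net')
```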

Revision as of 15:16, 16 February 2011
