Revision as of 18:59, 21 March 2012

XenServer Networking Configuration

Keeping in mind this diagram:

{{http://wiki.openstack.org/XenServer/XenXCPAndXenServer?action=AttachFile&do=get&target=XenServer-dom0-domU.png}}

Key Points

XenServer config:

  • We are assuming the XenServer has three physical interfaces: eth0, eth1, eth2
  • This means Dom0 has the following bridges: xenbr0, xenbr1, xenbr2
  • The Dom0 also has the host local xenapi network, usually the XenServer has the address 169.254.0.1

DomU config:

  • The DomU is a PV virtual machine (has a kernel with the para-virtualization extensions)
  • It generally has four interfaces:
    eth0 -> connected to xenapi (xapi traffic)
    eth1 -> xenbr2 Tenant network traffic
    eth2 -> xenbr0 Management traffic (MySQL, RabbitMQ, Glance, etc.)
    eth3 -> xenbr1 Public traffic (floating IPs, API endpoints)
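The interface layout above can be written down as a quick cross-check. This is a plain-Python sketch using the names and roles from this page; nothing here is a real nova or XenAPI structure:

```python
# Dom0: physical NICs and the bridges XenServer creates for them
dom0_bridges = {"eth0": "xenbr0", "eth1": "xenbr1", "eth2": "xenbr2"}

# DomU: interface -> (attached network, traffic role), as described above
domu_interfaces = {
    "eth0": ("xenapi", "xapi traffic (host-local, Dom0 is 169.254.0.1)"),
    "eth1": ("xenbr2", "tenant network traffic"),
    "eth2": ("xenbr0", "management traffic (MySQL, RabbitMQ, Glance, etc.)"),
    "eth3": ("xenbr1", "public traffic (floating IPs, API endpoints)"),
}

# Every DomU interface except the xapi one must sit on a Dom0 bridge
for iface, (network, _role) in domu_interfaces.items():
    assert network == "xenapi" or network in dom0_bridges.values()
```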

Flags you probably want to know about

Each flag is listed with both its DevStack setting and its nova.conf entry.

Public Interface

The interface on the DomU that connects to the public network. Used by nova-network so that floating IP traffic is sent out on the correct network.


PUBLIC_INTERFACE=eth3 # DevStack
public_interface=eth3 # nova.conf


Guest Interfaces

The interface on the XenServer that carries Tenant (also called VM instance or Guest) traffic.

It should be the interface underneath the (trunk) bridge that your DomU's tenant network interface is attached to.


GUEST_INTERFACE=eth2


This changes the following two flags:


vlan_interface=eth2


This is the XenServer interface on which a bridge for the correct VLAN will be created; the VM is then attached to that bridge.


flat_interface=eth2


This is the XenServer interface whose bridge the instance traffic will sit on.

Flat Network Bridge

Only needed if you are using Flat or FlatDHCP.

This is the XenServer bridge to which the VM instances will have their VIFs attached. This should be the same as the bridge your DomU's Guest Interface is attached to.


FLAT_NETWORK_BRIDGE=xenbr2
flat_network_bridge=xenbr2
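Pulling the flags above together, a minimal FlatDHCP nova.conf fragment for this layout might look like the following. The [DEFAULT] section style is an assumption (older flag-file formats differ), and configparser is only used here as a sanity check on the values:

```python
import configparser
import io

# Hypothetical nova.conf fragment built from the flags this page documents
NOVA_CONF = """\
[DEFAULT]
public_interface = eth3
flat_interface = eth2
flat_network_bridge = xenbr2
"""

# Parse it back and confirm the values match the DomU/Dom0 layout above
cfg = configparser.ConfigParser()
cfg.read_file(io.StringIO(NOVA_CONF))
assert cfg["DEFAULT"]["flat_network_bridge"] == "xenbr2"
```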


Networking Modes

Flat Networking

Most details are covered in the manual:

This requires the network configuration to be injected into the VM image. This is currently quite error prone (it needs appropriate guest agent software, or it injects files into an Ubuntu file system).

FlatDHCP Networking

This uses DHCP to hand out IP addresses to the guest VMs.

Most details are covered in the manual:

It should look a bit like this, when you have network HA turned on:

File:XenServer$$NetworkingFlags$flatdhcp.png

Please note:

  • VM DHCP requests go: VM->xenbr2->nova-network->xenbr2->VM
  • VM to VM traffic goes VM->eth2->switch->eth2->VM
  • incoming floating IP traffic does something like: public->eth1->xenbr1->nova-network->xenbr2->VM

VLAN Networking

Most details are covered in the manual:

There is an extra flag for the compute network driver (so that the VLAN network bridges are correctly created on the XenServer):


network_driver=nova.virt.xenapi.vif.XenAPIBridgeDriver
or
network_driver=nova.virt.xenapi.vif.XenAPIOpenVswitchDriver


It could look a bit like this, when you have Network HA turned on (Note: the bridges with dotted lines are automatically created by nova):

File:XenServer$$NetworkingFlags$vlan.png

Please note:

  • Very similar flow to FlatDHCP, so please try to understand that one first
  • The nova-network node creates its own bridges in the DomU (not shown above). It uses eth1 on the DomU as a trunk port, and ensures the appropriate DHCP server instance is correctly configured and listening only on the appropriate VLAN network.

So when you create a VM, this is roughly what happens:

  • A PIF, identified either by (A) the vlan_interface flag or (B) the bridge_interface column in the networks DB table, will be used for creating a XenServer VLAN network
  • VIFs for VM instances within this network will be plugged into this VLAN network
  • The 'OpenStack DomU', i.e. where nova-network is running, acts as a gateway for multiple VLAN networks, so it has to be attached to a VLAN trunk. For this reason it must have an interface on the parent bridge of the VLAN bridge where VM instances are plugged in
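The PIF selection in the first step can be sketched as follows. This is a hypothetical plain-Python stand-in, not the real nova code, and the assumption that the bridge_interface column takes precedence over the vlan_interface flag is one reading of the text above, not something this page confirms:

```python
def pick_trunk_interface(network_row, vlan_interface_flag):
    """Pick the physical interface for the VLAN network's PIF.

    Assumption: a per-network bridge_interface value (from the networks
    DB table) wins; otherwise fall back to the global vlan_interface flag.
    """
    return network_row.get("bridge_interface") or vlan_interface_flag


# A network row as it might appear in the networks table (values invented)
net = {"label": "tenant-a", "vlan": 100, "bridge_interface": None}
assert pick_trunk_interface(net, "eth2") == "eth2"   # (A) flag used

net["bridge_interface"] = "eth1"
assert pick_trunk_interface(net, "eth2") == "eth1"   # (B) column overrides
```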

Some more pointers are available here:

Network HA

This works the same on XenServer as on any other hypervisor: nova-network must run on every compute host, and the following flag must be set:


multi_host=True


It is known to work well with FlatDHCP, but should work with the other modes too (TODO - get confirmation)