XenServer/NetworkingFlags

= XenServer Networking Configuration =

Before you look at configuring OpenStack networking with XenServer, you may find it useful to read up on XenServer networking:
 * http://support.citrix.com/article/CTX117915

Keep this diagram in mind (diagram not reproduced here):

Key Points
XenServer config:
 * We are assuming the XenServer has three physical interfaces: eth0, eth1, eth2
 * This means Dom0 has the following bridges: xenbr0, xenbr1, xenbr2
 * Dom0 also has the host-local xenapi network; on that network the XenServer host usually has the address 169.254.0.1

DomU config:
 * The DomU is a PV virtual machine (has a kernel with the para-virtualization extensions)
 * It generally has four interfaces
 * eth0 -> connected to xenapi (xapi traffic)
 * eth1 -> xenbr2 Tenant network traffic
 * eth2 -> xenbr0 Management traffic (MySQL, RabbitMQ, Glance, etc)
 * eth3 -> xenbr1 Public traffic (floating ip, api endpoints)

Flags you probably want to know about
Each is listed with its DevStack setting and its nova.conf entry.

Public Interface
The interface on DomU that connects to the public network. Used by nova-network so that it sends the floating IP traffic on the correct network.

PUBLIC_INTERFACE=eth3 # DevStack
public_interface=eth3 # nova.conf

VLAN Interface
When using VLAN networking you need to set the following flag:

vlan_interface=eth2

This is the XenServer interface on which a bridge for the correct VLAN will be created; the VM is then attached to that bridge. So if the flag is eth2 and your guest network is on VLAN 42, a new network bridge will be created on your XenServer host (unless one already exists) and the VM will be attached to it.

(Possible bug: this flag also appears to be used for the DomU interface on which the DHCP servers and routers are attached, so that they listen on the appropriate VLAN networks. There used to be a separate flag for that, as required for XenServer.)
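As a sketch of what nova does under the hood, the equivalent manual steps in dom0 would look roughly like the following. This uses XenServer's xe CLI; the VLAN number matches the example above, and the name-label is a made-up example:

```shell
# Sketch only: roughly what nova's XenAPI VIF driver does for
# vlan_interface=eth2 and a guest network on VLAN 42.
PIF=$(xe pif-list device=eth2 VLAN=-1 --minimal)   # the physical interface (no VLAN)
NET=$(xe network-create name-label=osvlan42)       # a new XenServer network/bridge
xe vlan-create pif-uuid="$PIF" network-uuid="$NET" vlan=42
```

Nova does this automatically when a VM on that network is scheduled to the host, so you should not normally need to run these yourself.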

Flat Interface
Only needed if you are using FlatDHCP (TODO - or Flat?).

This is the interface on DomU on which you want the tenant/VM network traffic. It is the interface to which nova will attach the DHCP and NAT services.

flat_interface=eth2

Note: this interface should not be configured with an IP address; nova will attach all the required bridges.
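Putting the pieces together, a FlatDHCP nova.conf fragment for the DomU described above might look like this (a sketch; the interface and bridge names follow the example layout on this page):

```ini
# nova.conf fragment (sketch, based on the example DomU layout above)
flat_interface=eth2            # DomU interface carrying tenant/VM traffic
flat_network_bridge=xenbr2     # XenServer bridge the VM VIFs attach to
public_interface=eth3          # DomU interface for floating IP traffic
```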

Flat Network Bridge
Only needed if you are using Flat or FlatDHCP.

This is the XenServer bridge on which the VM instances will have their VIFs attached. This should be the same as the bridge your DomU's guest interface is attached to.

FLAT_NETWORK_BRIDGE=xenbr2 # DevStack
flat_network_bridge=xenbr2 # nova.conf

The above setting can also take the network's name-label instead of the XenServer bridge name. This is useful because XenServer may give your networks different bridge names on different hosts (such as xapi1 and xapi3). So you can alternatively set it to something like:

flat_network_bridge=my_vm_network_name_label

Note: this flag only affects the network bridge written into the database when you add a network using nova-manage; it does not dynamically affect the bridge used when attaching VIFs to your VMs. You may need to remove your existing networks and create a new network for your new value of the setting to apply.
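For example, the bridge is recorded when you create the network with nova-manage. The exact flag names vary between OpenStack releases, so treat this as a sketch (roughly the Essex-era form):

```shell
# Sketch: creates a network record with xenbr2 as its bridge.
# Flag names may differ on your release - check 'nova-manage network create --help'.
nova-manage network create --label=private \
    --fixed_range_v4=10.0.0.0/24 --num_networks=1 \
    --network_size=256 --bridge=xenbr2
```

If you change flat_network_bridge afterwards, existing network records keep the old bridge value until recreated.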

Flat Networking
Most details are covered in the manual:
 * http://docs.openstack.org/trunk/openstack-compute/admin/content/configuring-flat-networking.html

This requires the network address to be injected into the VM image. This is currently quite error-prone (it needs appropriate guest agent software, or it injects files into an Ubuntu file system).

FlatDHCP Networking
This uses DHCP to hand out IP addresses to the guest VMs.

Most details are covered in the manual:
 * http://docs.openstack.org/trunk/openstack-compute/admin/content/configuring-flat-dhcp-networking.html

It should look a bit like this, when you have network HA turned on (diagram not reproduced here):

Please note:
 * VM DHCP requests go: VM->xenbr2->nova-network->xenbr2->VM
 * VM-to-VM traffic goes: VM->eth2->switch->eth2->VM
 * incoming floating IP traffic goes something like: public->eth1->xenbr1->nova-network->xenbr2->VM
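To check these paths on a running system, you can inspect the bridges and NAT rules in the nova-network DomU. These are standard Linux tools; the exact chain names nova-network creates vary by release, so verify on your own install:

```shell
# In the nova-network DomU (sketch):
brctl show                       # list the bridges nova has attached
sudo iptables -t nat -L -n -v    # floating-IP DNAT/SNAT rules live in the nat table
```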

VLAN Networking
Most details are covered in the manual:
 * http://docs.openstack.org/trunk/openstack-compute/admin/content/configuring-vlan-networking.html

There is an extra flag for the compute network driver (so that the VLAN network bridges are correctly created on the XenServer):

network_driver=nova.virt.xenapi.vif.XenAPIBridgeDriver
or
network_driver=nova.virt.xenapi.vif.XenAPIOpenVswitchDriver

It could look a bit like this, when you have network HA turned on (diagram not reproduced here; the bridges with dotted lines are automatically created by nova):


Please note:
 * Very similar flow to FlatDHCP, please try to understand that one first
 * The nova-network node creates its own bridges in the DomU (not shown above). It uses eth1 on the DomU as a trunk port, and it ensures the appropriate DHCP server instance is correctly configured and listening only on the appropriate VLAN network.

So when you create a VM, this is roughly what happens:
 * A PIF identified either by (A) the vlan_interface flag or (B) the bridge_interface column in the networks db table will be used for creating a XenServer VLAN network
 * VIF for VM instances within this network will be plugged in this VLAN network
 * The 'Openstack domU', i.e. where nova-network is running, acts as a gateway for multiple VLAN networks, so it has to be attached on a VLAN trunk. For this reason it must have an interface on the parent bridge of the VLAN bridge where VM instances are plugged

Some more pointers are available here:
 * XenServer/VLANManager

Network HA
This works the same on XenServer as any other hypervisor: nova-network must be run on every compute host, and the following flag must be set:

multi_host=True

It is known to work well with FlatDHCP, and should work with the other modes too (TODO - get confirmation).

= Example Configurations =

Single NIC with no VLANs
Run everything through eth0, and give your DomU a single interface too.

public_interface=eth0
flat_interface=eth0
flat_network_bridge=xenbr0
 * 1) Also update the xenapi connection URL to use an IP that is available on eth0

TODO - add a diagram to describe this

TODO - add the DevStack config for this

Single NIC with VLANs
DevStack keeps the public, management, and tenant networks separate by adding extra VLAN networks on XenServer.

TODO - add some more of these

= Example DevStack localrc configuration =

DevStack will help create the DomU VM that runs the OpenStack code. It will also create extra networks on your XenServer if required.

Default Configuration: Single NIC on XenServer with VLANs
First, the DevStack README suggests the following default configuration for your localrc file:

MYSQL_PASSWORD=my_super_secret
SERVICE_TOKEN=my_super_secret
ADMIN_PASSWORD=my_super_secret
SERVICE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=my_super_secret
GUEST_PASSWORD=my_super_secret
 * 1) GUEST_PASSWORD is the password for your guest (for both the stack and root users)

XENAPI_PASSWORD=my_super_secret
 * 1) IMPORTANT: XENAPI_PASSWORD must be set to your dom0 root password!

IMAGE_URLS=""
MULTI_HOST=1
ACTIVE_TIMEOUT=45
HOST_IP_IFACE=eth0
NETINSTALLIP="dhcp"
NAMESERVERS=""
NETMASK=""
GATEWAY=""
 * 1) IMAGE_URLS: do not download the usual images
 * 2) MULTI_HOST: explicitly set multi-host
 * 3) ACTIVE_TIMEOUT: give extra time for boot
 * 4) HOST_IP_IFACE: interface on which you would like to access services
 * 5) NETINSTALLIP, NAMESERVERS, NETMASK, GATEWAY: first-time Ubuntu network install params

Here are the current networking defaults (defined in the xenrc file).


 * 1) eth0 on the DomU VM is connected to the host-local xenapi management network and uses DHCP

PUB_IP=192.168.1.55
PUB_BR=xenbr0
PUB_DEV=eth0
PUB_VLAN=-1
PUB_NETMASK=255.255.255.0
 * 1) public interface running on a home router, with a static IP
 * 2) this is eth3 on the DomU VM

VM_IP=10.255.255.255 # A host-only IP that lets the interface come up, otherwise unused
VM_NETMASK=255.255.255.0
VM_BR=""
VM_VLAN=100
VM_DEV=eth0
 * 1) VM network on VLAN 100
 * 2) this is eth1 on the DomU VM

MGT_IP=172.16.100.55
MGT_NETMASK=255.255.255.0
MGT_BR=""
MGT_VLAN=101
MGT_DEV=eth0
 * 1) management traffic on VLAN 101
 * 2) this is eth2 on the DomU VM

This means DevStack will create two extra networks (if they are not already there):
 * eth0 and VLAN 100 (vmbr)
 * eth0 and VLAN 101 (mgmtbr)
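You can confirm these from dom0 using the xe CLI (the name labels come from the xenrc defaults above):

```shell
# In dom0 (sketch): list the networks DevStack created
xe network-list name-label=vmbr params=uuid,name-label,bridge
xe network-list name-label=mgmtbr params=uuid,name-label,bridge
```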

Using two NICs, with VLANs, using PXE
You can install XenServer using PXE. This will tend to leave you with your management network on eth0 of your XenServer, with DHCP. Then you can have the public network on eth1, and run the VM traffic on a VLAN on eth1.

To achieve this, you probably want something like this:

MGT_IP="dhcp" # use DHCP, as the PXE server is on this network
MGT_NETMASK=255.255.255.0
MGT_BR=xenbr0
MGT_VLAN=-1
MGT_DEV=eth0
 * 1) MGMT network params

PUB_IP=172.24.4.10 # static IP in the same subnet as your floating IP range
PUB_NETMASK=255.255.255.0
PUB_BR=xenbr1
PUB_VLAN=-1
PUB_DEV=eth1
 * 1) public network params

VM_IP=10.255.255.255 # A host-only IP that lets the interface come up, otherwise unused
VM_NETMASK=255.255.255.0
VM_BR=""
VM_VLAN=100
VM_DEV=eth1
 * 1) VM network params