Configuring the Quantum openvswitch Plugin

The Folsom release of OpenStack Quantum has made significant enhancements to the openvswitch plugin, resulting in changes to the configuration variables and defaults used by the plugin and its agent. These changes include the provider network extension, which allows administrators to explicitly manage the relationship between Quantum virtual networks and underlying physical mechanisms such as VLANs and tunnels. If you have not already done so, please read ProviderExtension to familiarize yourself with the terminology and the API attributes associated with this extension, as these are critical to understanding openvswitch configuration.

Configuring the openvswitch plugin and agent involves setting configuration variables used by the plugin on the Quantum server node and by the openvswitch agent on all the nodes on which it runs. In certain cases, it also requires configuring OVS bridges on the nodes where the openvswitch agent runs.


#!wiki caution
'''Update in progress'''

The material below is not entirely updated for Folsom RC1, and should not yet be copied to the admin guide.


Configuration Variables

The openvswitch plugin and agent are configured by editing the file typically installed as /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini. The following configuration variables are relevant:

  • OVS.integration_bridge - default: "br-int" - Specifies the name of the OVS integration bridge used by the agent for all virtual networks.
  • OVS.tunnel_bridge - default: "br-tun" - Specifies the name of the OVS tunnel bridge used by the agent for GRE tunnels.
  • OVS.local_ip - default: "10.0.0.3" - Specifies the IP address for the local endpoint on which GRE tunnel packets are received by the agent.
  • OVS.bridge_mappings - default: "default:br-eth1" - List of <physical_network>:<bridge> tuples, each specifying the OVS bridge used by the agent for a physical network to which it is connected.
  • OVS.network_vlan_ranges - default: "default:2000:3999" - List of <physical_network>:<vlan_min>:<vlan_max> or <physical_network> tuples on the server, each specifying the name of an available physical network and, optionally, a range of VIDs on that network available for allocation to tenant networks. All physical networks available for provider network creation must be listed at least once, even if no tenant networks will be allocated on that physical network. A physical network can be listed multiple times to make multiple ranges of VIDs on that physical network available for tenant network creation.
  • OVS.tunnel_id_ranges - default: "" - List of <tun_min>:<tun_max> tuples on the server, each specifying a range of tunnel IDs available for tenant network creation.
  • DATABASE.sql_connection - default: "sqlite://" - URL for database connection used by the plugin, and if AGENT.rpc is false, also by the agent.
  • DATABASE.sql_max_retries - default: -1 - Maximum number of database connection retry attempts; -1 means retry indefinitely.
  • DATABASE.reconnect_interval - default: 2 - Interval, in seconds, between database connection retry attempts.
  • AGENT.polling_interval - default: 2 - Interval, in seconds, that the agent waits between polls for local device changes.
  • AGENT.root_helper - default: "sudo" - Command prefix the agent uses to run commands that require root privileges.
  • AGENT.log_file - default: None - File to which the agent writes its log output, if set.
  • AGENT.rpc - default: True - Specifies whether the agent uses the RPC mechanism to communicate with the plugin. If False, the agent connects via the database instead.

The RPC, logging, and notification configuration variables defined in /etc/quantum/quantum.conf also apply to the plugin, and the RPC and logging variables apply to the agent.

The physical_network names and bridge names in the above variables should not contain embedded spaces.
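
The following sketch pulls these variables together into a single ovs_quantum_plugin.ini. It is only an illustration, assuming a MySQL database on a host named "controller" and the physical network and bridge names used in the examples below; every value should be adjusted for your deployment:

[DATABASE]
# Illustrative connection URL; substitute your own credentials, host, and database name.
sql_connection = mysql://quantum:password@controller/ovs_quantum

[OVS]
integration_bridge = br-int
tunnel_bridge = br-tun
# Example VLAN ranges and GRE tunnel ID range; see the pool configuration sections below.
network_vlan_ranges = physnet1:1:4094,physnet2:1000:1999
tunnel_id_ranges = 0:999
# Local GRE tunnel endpoint and physical network bridge mappings for this node's agent.
local_ip = 10.1.2.3
bridge_mappings = physnet1:br-eth1,physnet2:br-eth2

[AGENT]
polling_interval = 2
root_helper = sudo
rpc = True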

Tenant Network Pool Configuration

The openvswitch plugin supports realizing tenant networks as either VLAN networks or GRE tunnels. For each mechanism, the server can be configured with a pool of physical resources available for allocation to tenant networks. If pools for both mechanisms are configured, a VLAN network is used for a new tenant network when one is available; otherwise a GRE tunnel is used. If no pools are configured, or if the supply is exhausted, no new tenant networks can be created, but it may still be possible to create provider networks.

To configure a pool of VLANs that can be allocated as tenant networks, use the OVS.network_vlan_ranges configuration variable in the server:


[OVS]
network_vlan_ranges = physnet1:1:4094,physnet2:1000:1999,physnet2:3000:3999


The above example makes VIDs 1 through 4094 on the physical network named "physnet1" available for tenant networks, along with VIDs 1000 through 1999 and 3000 through 3999 on the physical network named "physnet2".

Since VLANs on a physical network named "default" are specified in the default value of OVS.network_vlan_ranges, override it to disable the pool of VLANs for tenant networks:


[OVS]
network_vlan_ranges =


To configure a pool of GRE tunnels that can be allocated as tenant networks, use the OVS.tunnel_id_ranges configuration variable in the server:


[OVS]
tunnel_id_ranges = 0:999,2000:2999


This example makes tunnel IDs 0 through 999 and 2000 through 2999 available for allocation. Note that, unlike VIDs, tunnel IDs are not specific to a physical network.

The allocation states of the items in each pool are maintained in the openvswitch plugin's database. Each time the quantum server starts, the plugin synchronizes the contents of the database with the current values for the configuration variables. If the configuration variable changes, items may be added to the pool, and unused items may be removed. If a VLAN or tunnel currently allocated to a network is no longer in the specified range, it will continue to be used until the network is deleted, but will not be returned to the pool on deletion.

Provider Network Configuration

When creating a provider network using the provider extension API as described above, the openvswitch plugin validates that the supplied provider:physical_network value is the name of a known physical network. The set of known physical networks is configured on the server using the OVS.network_vlan_ranges variable. Any physical networks for which tenant network VLAN ranges are specified are also available for provider networks. Physical networks can also be made available without ranges of VLANs for tenant networks.


[OVS]
network_vlan_ranges = physnet1:1:4094,physnet2,physnet3:3000:3999


In this example, the physical networks named "physnet1", "physnet2", and "physnet3" are all available for allocation of flat or VLAN provider networks.
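
As a sketch of how these physical networks might then be used, provider networks could be created with the quantum CLI as described in ProviderExtension; the network names and the VLAN ID below are arbitrary example values, and admin credentials are assumed:

# Flat provider network on physnet2.
quantum net-create flat-provider-net --provider:network_type flat --provider:physical_network physnet2

# VLAN provider network on physnet3 using VID 200.
quantum net-create vlan-provider-net --provider:network_type vlan --provider:physical_network physnet3 --provider:segmentation_id 200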

Agent Integration Bridge Configuration

A well-known OVS integration bridge connects entities such as Nova instance vNICs and the Quantum DHCP and L3 agents with virtual networks. The name of this bridge can be configured using the OVS.integration_bridge variable, but overriding the default value of "br-int" is not recommended, as all entities need to agree on the bridge name.

The integration bridge must be administratively created before the openvswitch agent is first run:


sudo ovs-vsctl add-br br-int


Note that OVS bridges are persistent, so the bridge only needs to be created once.

Agent Tunneling Configuration

If GRE tunnels are used for tenant networks, each agent must be configured with the local IP address for its tunnel endpoint:


[OVS]
local_ip = 10.1.2.3


An OVS bridge is used for GRE tunnels, and its name can be configured via the OVS.tunnel_bridge variable. The default value of "br-tun" should be fine for most deployments. This bridge is created automatically by the openvswitch agent, and should not be accessed by any other entities.
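
Once the agent is running, the tunnel bridge and the GRE ports the agent creates toward other endpoints can be inspected (read-only) with the standard OVS tools, for example:

# List the ports the agent has created on the tunnel bridge; do not modify them by hand.
sudo ovs-vsctl list-ports br-tun
sudo ovs-vsctl show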

Agent Physical Network Bridge Configuration

The physical network names defined in the openvswitch server configuration must be mapped to the names of OVS bridges on each node where the openvswitch agent runs. Each of these bridges provides connectivity to the corresponding physical network, through a single physical interface or through a set of bonded physical interfaces. The node can also be configured with its own IP addresses on flat or VLAN networks via these bridges. The OVS.bridge_mappings variable defines this mapping for the agent:


[OVS]
bridge_mappings = physnet1:br-eth1,physnet2:br-eth2,physnet3:br-eth3


This example maps the physical networks "physnet1", "physnet2", and "physnet3" to the bridges "br-eth1", "br-eth2", and "br-eth3", respectively. Note that different nodes can map the physical networks to different bridge names.

Each physical network bridge must be administratively created before the openvswitch agent is started, along with a port connecting it to the physical interface (or bonded set of interfaces) that connects that node to the named physical network:


sudo ovs-vsctl add-br br-eth1
sudo ovs-vsctl add-port br-eth1 eth1


Similar commands would be used to create the bridge for each physical network. Note that OVS bridges and ports are persistent, so these commands only need to be run once.
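
If a node connects to a physical network through a bonded set of interfaces rather than a single interface, an OVS bond port can be created on the bridge instead; the following is a sketch in which the bridge, bond, and interface names are illustrative:

sudo ovs-vsctl add-br br-eth2
sudo ovs-vsctl add-bond br-eth2 bond0 eth2 eth3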

Additional ports can be administratively added to these bridges to give non-quantum entities access to the physical network, but the default "normal" (MAC-learning) flows and the additional flows created by the openvswitch agent must not be altered.

The node can be given an IP address on the flat network on a bridge as follows:


sudo ip addr add 192.168.0.200 dev br-eth1
sudo ip link set br-eth1 up
sudo ip route add 192.168.0.0/24 dev br-eth1


To give the host an IP address on a VLAN on a bridge, first create an internal port on the bridge for the VLAN:


sudo ovs-vsctl add-port br-eth1 br-eth1-9 tag=9 -- set interface br-eth1-9 type=internal
sudo ip addr add 192.168.9.200 dev br-eth1-9
sudo ip link set br-eth1-9 up
sudo ip route add 192.168.9.0/24 dev br-eth1-9


Also keep in mind that if the node originally had an IP address directly on the physical interface, that address needs to be removed before setting up the bridge. Be careful not to lose connectivity to the node while moving IP addresses.

Giving one or more nodes IP addresses on flat or VLAN tenant or provider networks can be useful for testing and debugging quantum configurations, but is generally not necessary or recommended. If a physical network used by openvswitch provides the main connectivity for management of the node, then the system's network startup scripts should be configured to bring up the IP addresses on the ports on the bridge at boot.

Complete Examples

TBD

Using Devstack

Using devstack with the openvswitch plugin currently requires applying the devstack patch at https://review.openstack.org/#/c/11418/. Information in QuantumDevstack on single-node and multi-node configurations is still applicable, with the following additions.

As before, devstack by default configures openvswitch to use GRE tunnels for tenant networks, and creates the br-int integration bridge if it doesn't already exist. Now devstack also by default sets OVS.network_vlan_ranges so that no physical networks are available for provider networks. Therefore, as before, no manual bridge creation is necessary to run openvswitch in single-node or multi-node configuration on systems that support OVS GRE tunneling.

To use VLANs for tenant networks with openvswitch in devstack, the localrc file should set OVS_ENABLE_TUNNELING to False:


OVS_ENABLE_TUNNELING=False


This will result in devstack configuring openvswitch to allocate tenant networks as VLANs on the physical network named "default". Additionally, devstack will configure the openvswitch agent to map the physical network "default" to the value of OVS_DEFAULT_BRIDGE if it is set in localrc, and otherwise to "br-$GUEST_INTERFACE_DEFAULT". The default value of GUEST_INTERFACE_DEFAULT is eth0 on most systems, so, if neither OVS_DEFAULT_BRIDGE nor GUEST_INTERFACE_DEFAULT is set in localrc, the bridge for the physical network "default" will be br-eth0. Devstack will create this bridge if it doesn't already exist, but will not add a physical interface port to it. For single-node testing, no physical interface needs to be added. For multi-node testing, the user will need to add the port manually. If the physical interface being used already has an IP address, the IP address will need to be moved to the bridge as explained above.
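
For example, a localrc sketch for VLAN tenant networks, where br-eth1 is an illustrative bridge name, might contain:

# Disable GRE tunneling and map the "default" physical network to br-eth1.
OVS_ENABLE_TUNNELING=False
OVS_DEFAULT_BRIDGE=br-eth1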

If OVS_DEFAULT_BRIDGE is explicitly set without also setting OVS_ENABLE_TUNNELING to False, then devstack will configure openvswitch to use GRE tunnels for tenant networks, but will make the OVS_DEFAULT_BRIDGE available for provider networks as the "default" physical network.
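
A localrc sketch for that case leaves tunneling at its default and only names the bridge, which again is an illustrative value:

# GRE tunnels remain the tenant network mechanism; br-eth1 is exposed as physical network "default".
OVS_DEFAULT_BRIDGE=br-eth1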

When doing multi-node testing with flat provider networks, it is not necessary that the switch connecting the nodes enable VLAN trunking. But when using VLANs for either tenant or provider networks, make sure the VIDs being used are trunked by the switch. The tcpdump tool with ping running in a Nova instance is useful for debugging connectivity.
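
For example, while ping runs in an instance on a VLAN network, the tagged traffic can be observed on a node's physical interface (eth1 is an illustrative name):

# Print link-level headers so VLAN tags and VIDs are visible.
sudo tcpdump -n -e -i eth1 vlan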

Known Limitations and Issues

  • GRE tunneling with the openvswitch plugin requires OVS kernel modules that are not part of the Linux kernel source tree. These modules are not available in certain Linux distributions, including Fedora and RHEL. Tunneling must not be configured on systems without the needed kernel modules. The Open vSwitch web site indicates that OVS GRE tunnel support is being moved into the kernel source tree, but that patch port support is not. Once GRE support is available, it should be possible to support tunneling by using veth devices instead of patch ports.
  • Nova and Quantum currently assume all nodes where VMs are run will have access to all virtual networks. Cases where not all nodes have connections to all physical networks used with Quantum would be much better supported if the Nova scheduler could query Quantum to determine whether a particular compute node supports the needed virtual networks.