Neutron Floodlight Plugin Setup

This page describes how to configure the BigSwitch RestProxy Neutron plugin. This plugin can be used with the open source Floodlight controller or the commercial BigSwitch controller. Both controllers use the OpenFlow protocol to manage the Open vSwitch bridges that your virtual machines connect to. They can also manage any OpenFlow-capable physical switches that connect physical compute nodes.

Work in progress

This page is currently a work in progress, as the author has not yet been able to get a fully working version. Outstanding questions are:

  • How do you connect physical interfaces to the br-int bridge on the compute hosts? Do you create a separate bridge like with the openvswitch plugin, or do you connect them directly, or something else?
  • How do you connect the external interface on the network node to the br-int bridge?
  • What other configuration is required on the network node?
  • Do we need to explicitly bring the br-int bridge up, e.g. with: ip link set dev br-int up?
  • How do you configure for DevStack? (see last section for current status)

On the physical switches

Your physical switches do not need to support OpenFlow to use this plugin. However, the br-int bridges need to be connected via physical interfaces (e.g. eth3) to switch ports that are in the same layer-2 broadcast domain, so these physical switch ports need to be configured as regular access ports in the same switch VLAN. (Note: unlike the Open vSwitch plugin, the Floodlight controller does not use 802.1Q VLAN tagging for isolation anywhere.)
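
For example, on a switch with a Cisco IOS-style CLI, the port facing each node's eth3 could be configured roughly as follows; the interface name and VLAN ID here are placeholders for illustration, not values from this setup:

interface GigabitEthernet0/3
 switchport mode access
 switchport access vlan 100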

If you do have physical switches that support OpenFlow, you can configure them to be managed by the Floodlight controller. How to do this varies between vendors and models, but once OpenFlow firmware is enabled, it should be a simple matter of pointing them at the IP address of the Floodlight controller on TCP port 6633.
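
For instance, on a switch that runs Open vSwitch (or exposes an ovs-vsctl-style CLI), pointing it at the controller would look something like this, assuming a data-plane bridge named br0 and a controller at 10.10.0.1:

ovs-vsctl set-controller br0 tcp:10.10.0.1:6633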

On the controller node

Run the Floodlight controller

To use the RestProxy plugin with Floodlight, you need to run a Floodlight controller that can be reached from the compute hosts, i.e. over a TCP/IP connection on the management network (OpenFlow is an out-of-band protocol). It will manage the flows on the Open vSwitch bridge that you will be setting up on each compute host.

You only need to run a single Floodlight controller in your cloud setup, although the plugin does support specifying multiple controllers if you want to configure for high availability. However, that functionality is probably only available with BigSwitch controllers.

If your operating system doesn't have a floodlight package, you'll probably want to roll your own service script for starting and stopping it.
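
As a starting point, here is a minimal sketch of an Upstart job (e.g. /etc/init/floodlight.conf on Ubuntu) that keeps the controller running; the install path /opt/floodlight is an assumption and should match wherever you built Floodlight:

# /etc/init/floodlight.conf -- minimal sketch; paths are assumptions
description "Floodlight OpenFlow controller"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
script
    cd /opt/floodlight
    exec java -jar target/floodlight.jar -cf src/main/resources/quantum.properties
end script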

Here we describe how to download, compile, and run it in debug mode from the command line. Running in debug mode isn't suitable for production, since the controller will terminate as soon as you close your terminal, but it's fine for testing.

Install the prerequisites for building Floodlight (git, JDK, ant). On Ubuntu, do:

sudo apt-get install git default-jdk ant

Grab the Floodlight source from GitHub and switch to the latest stable build:

git clone https://github.com/floodlight/floodlight
cd floodlight
git checkout fl-last-passed-build

Build Floodlight:

ant

Run Floodlight:

java -jar target/floodlight.jar -cf src/main/resources/quantum.properties

By default, Floodlight listens on port 8080 for REST connections and port 6633 for OpenFlow connections.
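
A quick way to check that the controller is up is to query its REST API; the switches endpoint below returns a JSON list of connected OpenFlow switches (empty until a bridge connects):

curl http://localhost:8080/wm/core/controller/switches/json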

See the Floodlight OpenStack documentation and Floodlight installation guide for more info.

Configure quantum.conf

Edit /etc/quantum/quantum.conf and set the core_plugin variable to use the BigSwitch Floodlight plugin:

core_plugin = quantum.plugins.bigswitch.plugin.QuantumRestProxyV2

Configure /etc/quantum/plugins/bigswitch/restproxy.ini

You need to specify the login information for the MySQL database, as well as the location of the Floodlight controller. If you are running the Floodlight controller on the controller node, specify localhost:8080 as the location of the controller:

[DATABASE]
sql_connection = mysql://<user>:<password>@localhost/restproxy_quantum?charset=utf8
[RESTPROXY]
servers=localhost:8080
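
The restproxy_quantum database referenced above must exist before you start the quantum-server service. If it hasn't been created yet, something like the following will set it up (substitute the same <user> and <password> as in sql_connection):

mysql -u root -p <<EOF
CREATE DATABASE IF NOT EXISTS restproxy_quantum CHARACTER SET utf8;
GRANT ALL ON restproxy_quantum.* TO '<user>'@'localhost' IDENTIFIED BY '<password>';
EOF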

On the compute and network hosts

Create an Open vSwitch integration bridge called br-int, and configure it to use the Floodlight controller. If the IP address of the controller node is 10.10.0.1, you would configure the bridge by doing:

sudo ovs-vsctl --no-wait add-br br-int
sudo ovs-vsctl --no-wait br-set-external-id br-int bridge-id br-int
sudo ovs-vsctl --no-wait set-controller br-int tcp:10.10.0.1:6633
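
Once the controller is set, you can verify the connection with ovs-vsctl; after a few seconds the controller entry for br-int in the output should show is_connected: true:

sudo ovs-vsctl show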

Add the physical interface that connects this host to the VLAN used for interconnecting the br-int bridges; e.g., if that interface is eth3:

sudo ovs-vsctl -- --may-exist add-port br-int eth3

Note: There is no quantum-*-agent service that runs on the compute hosts when Quantum is configured to use the Floodlight plugin.

On the network node

Install the Neutron DHCP agent package and configure /etc/neutron/dhcp_agent.ini as follows:

[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
ovs_use_veth = true
root_helper = sudo /usr/local/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
use_namespaces = True
debug = False
verbose = True
enable_isolated_metadata = True
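
After editing the file, (re)start the DHCP agent. The exact service name depends on your distribution and on whether your packages still use the quantum naming; e.g. on Ubuntu:

sudo service quantum-dhcp-agent restart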

Note: Do not run the quantum-l3-agent service on the network node as the RestProxy is not compatible with it.

Configuring DevStack

Note: The author has not actually been able to get this working yet

Add the following lines to your DevStack localrc to configure DevStack to use Quantum with the BigSwitch Floodlight plugin:

Q_PLUGIN=bigswitch_floodlight
BS_FL_CONTROLLERS_PORT=127.0.0.1:8080
disable_service n-net
enable_service q-svc
enable_service q-dhcp
enable_service q-meta
enable_service quantum
enable_service bigswitch_floodlight
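
Then run DevStack as usual from the top of its source tree:

./stack.sh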