
Revision as of 13:00, 14 October 2013

Quantum Floodlight Plugin Setup

This page describes how to configure the BigSwitch RestProxy Neutron plugin. This plugin can be used with either the open source Floodlight controller or the commercial BigSwitch controller. Both controllers use the OpenFlow protocol to manage the Open vSwitch bridges that your virtual machines connect to, and can also manage any OpenFlow-capable physical switches that connect the compute nodes.

Work in progress

This page is currently a work in progress, as the author has not yet been able to get a fully working version. Outstanding questions are:

  • How do you connect physical interfaces to the br-int bridge on the compute hosts? Do you create a separate bridge like with the openvswitch plugin, or do you connect them directly, or something else?
  • How do you connect the external interface on the network node to the br-int bridge?
  • What other configuration is required on the network node?
  • Do we need to explicitly set the bridge status up on br-int, e.g.: ip link set dev br-int up
  • How do you configure for DevStack? (see last section for current status)

On the physical switch

Your physical switches do not need to support OpenFlow to use this plugin, but the br-int bridges need to be connected via physical interfaces (e.g. eth3) to switch ports that are in the same layer-2 broadcast domain. These physical switch ports therefore need to be configured as regular access ports in the same switch VLAN. (Note: unlike the Open vSwitch plugin, the Floodlight controller does not use 802.1Q VLAN tagging for isolation anywhere.)
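As a concrete sketch, on a Cisco IOS-style switch the ports facing eth3 on two compute hosts might be configured as plain access ports in the same VLAN. (The interface names and VLAN 100 here are made-up examples; the equivalent commands will differ on other vendors' switches.)

```text
interface GigabitEthernet0/1
 switchport mode access
 switchport access vlan 100
interface GigabitEthernet0/2
 switchport mode access
 switchport access vlan 100
```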

If you do have physical switches that support OpenFlow, you can configure them to be managed by the Floodlight controller. How to do this varies between vendors and models, but once OpenFlow firmware is enabled, it should simply be a matter of pointing them at the IP address of the Floodlight controller on TCP port 6633.
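Once a switch has connected, you can verify that the controller sees it by querying Floodlight's REST API for the list of connected datapaths. (This sketch assumes the controller runs at 10.10.0.1 with the default REST port; substitute your own controller address.)

```shell
# List the OpenFlow datapaths currently connected to Floodlight.
curl http://10.10.0.1:8080/wm/core/controller/switches/json
```

An empty JSON list means no switches (physical or virtual) have registered with the controller yet.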

On the controller node

Run the Floodlight controller

To use the Floodlight plugin, you need to run a Floodlight controller that can reach the compute hosts. It will communicate with the Open vSwitch bridge that you will be setting up on each compute host.

You only need to run a single Floodlight controller in your cloud setup, although the plugin does support specifying multiple controllers if you want to configure for high availability.
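If you do want redundant controllers, the servers setting in the plugin's restproxy.ini accepts a comma-separated list of host:port pairs. A sketch with two made-up controller addresses:

```ini
[RESTPROXY]
servers=10.10.0.1:8080,10.10.0.2:8080
```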

If your operating system doesn't have a floodlight package, you'll probably want to roll your own service script for starting and stopping it.
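A minimal start/stop wrapper might look like the following sketch. The install path (FL_HOME), pidfile, and log locations are assumptions; adjust them for your deployment, and consider a proper init/upstart script for production.

```shell
#!/bin/sh
# Minimal start/stop wrapper for Floodlight (sketch only).
# FL_HOME, PIDFILE and the log path are placeholder assumptions.
FL_HOME=/opt/floodlight
PIDFILE=/var/run/floodlight.pid

case "$1" in
  start)
    # Detach from the terminal so the controller survives logout.
    nohup java -jar "$FL_HOME/target/floodlight.jar" \
        -cf "$FL_HOME/src/main/resources/quantum.properties" \
        > /var/log/floodlight.log 2>&1 &
    echo $! > "$PIDFILE"
    ;;
  stop)
    [ -f "$PIDFILE" ] && kill "$(cat "$PIDFILE")" && rm -f "$PIDFILE"
    ;;
  *)
    echo "Usage: $0 {start|stop}" >&2
    exit 1
    ;;
esac
```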

Here we describe how to download, compile, and run it in debug mode from the command line. Running in debug mode isn't suitable for production, since the controller terminates as soon as you close your terminal, but it's fine for testing.

Install the prerequisites for building Floodlight (git, a JDK, and ant). On Ubuntu, do:

sudo apt-get install git default-jdk ant

Grab the Floodlight source from GitHub and switch to the latest stable build:

git clone https://github.com/floodlight/floodlight
cd floodlight
git checkout fl-last-passed-build

Build floodlight:

ant

Run floodlight:

java -jar target/floodlight.jar -cf src/main/resources/quantum.properties

By default, floodlight listens on port 8080 for ReST connections and 6633 for OpenFlow connections.

See the Floodlight OpenStack documentation and Floodlight installation guide for more info.

Configure quantum.conf

Edit /etc/quantum/quantum.conf and set the core_plugin variable to use the BigSwitch RestProxy plugin:

core_plugin = quantum.plugins.bigswitch.plugin.QuantumRestProxyV2

Configure /etc/quantum/plugins/bigswitch/restproxy.ini

You need to specify the login information for the MySQL database, as well as the location of the Floodlight controller. If you are running the Floodlight controller on the cloud controller, specify localhost:8080 as the location of the controller:

[DATABASE]
sql_connection = mysql://<user>:<password>@localhost/restproxy_quantum?charset=utf8
[RESTPROXY]
servers=localhost:8080
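The restproxy_quantum database must exist before the plugin starts. A sketch of creating it, assuming MySQL root access and placeholder credentials (quantum/secret) that you should replace with the user and password from your sql_connection line:

```shell
mysql -u root -p <<'EOF'
CREATE DATABASE IF NOT EXISTS restproxy_quantum CHARACTER SET utf8;
GRANT ALL PRIVILEGES ON restproxy_quantum.*
    TO 'quantum'@'localhost' IDENTIFIED BY 'secret';
FLUSH PRIVILEGES;
EOF
```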

On the compute hosts

Create an Open vSwitch integration bridge called br-int, and configure it to use the Floodlight controller. If the IP address of the controller node is 10.10.0.1, you would configure the bridge by doing:

sudo ovs-vsctl --no-wait add-br br-int
sudo ovs-vsctl --no-wait br-set-external-id br-int bridge-id br-int
sudo ovs-vsctl --no-wait set-controller br-int tcp:10.10.0.1:6633
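You can check that the bridge accepted the controller setting by querying it back with ovs-vsctl (using the same example controller address as above):

```shell
# Should print the controller target configured on the bridge,
# e.g. tcp:10.10.0.1:6633 in the example above.
sudo ovs-vsctl get-controller br-int

# The bridge entry in the full dump should show the controller and,
# once connected, is_connected: true.
sudo ovs-vsctl show
```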

Note: There is no quantum-*-agent service that runs on the compute hosts when Quantum is configured to use the Floodlight plugin.

On the network node

To be completed. (Author currently has no idea what to do here)

Note: Do not run the quantum-l3-agent service on the network node, since the Floodlight plugin provides its own L3 services through the Open vSwitch managed by the Floodlight controller.


Configuring DevStack

Note: The author has not actually been able to get this working yet

Add the following lines to localrc to configure DevStack to use Quantum with the BigSwitch/Floodlight plugin:

Q_PLUGIN=bigswitch_floodlight
BS_FL_CONTROLLERS_PORT=127.0.0.1:8080
disable_service n-net
enable_service q-svc
enable_service q-dhcp
enable_service q-meta
enable_service quantum
enable_service bigswitch_floodlight