= Neutron Floodlight Plugin Setup =

This page describes how to configure the BigSwitch RestProxy Neutron plugin with the open source Floodlight controller. This plugin can also manage the commercial BigSwitch controller, but that is not discussed here. Both controllers use the OpenFlow protocol to manage the Open vSwitch bridges that your virtual machines connect to. They can also manage any OpenFlow-enabled physical switches used for interconnecting the br-int bridges across physical compute and network nodes.

= Limitations =

 * The RestProxy plugin and Floodlight controller combination does not provide routers or external network access (i.e. no gateway SNAT or floating IPs).


 * The Floodlight controller does not appear to have any form of data persistence, at least when configured with the modules for Quantum as described here. Stopping and restarting Floodlight therefore causes it to lose all the virtual network information it previously received from proxied ReST calls. Restarting the RestProxy plugin with sync_data=True should, in theory, restore this information, but that did not work for me.
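For reference, here is how sync_data would be set, assuming it goes in the [RESTPROXY] section of the restproxy.ini described below (whether it actually restores state after a Floodlight restart is, as noted above, questionable):

[RESTPROXY]
servers = localhost:8080
sync_data = True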

= On the physical switches =

Your physical switches do not need to support OpenFlow to use this plugin, but the br-int bridges need to be connected via physical interfaces (e.g. eth3) to switch ports that are in the same layer-2 broadcast domain. So these physical switch ports need to be configured as regular access ports in the same switch VLAN. (Note: unlike the Open vSwitch plugin, the Floodlight controller does not use 802.1Q VLAN tagging for isolation anywhere.)

If you do have physical switches that support OpenFlow, you can configure them to be managed by the Floodlight controller. How to do this will vary between vendors and models, but once they have OpenFlow firmware enabled, it should be a simple matter of just pointing them at the IP address of the Floodlight controller and TCP port 6633.

= On the controller node =

== Run the Floodlight controller ==

To use the RestProxy plugin with Floodlight, you need to run a Floodlight controller that can be reached from the compute hosts, i.e. over a TCP/IP connection on the management network (OpenFlow is an out-of-band protocol). It will manage the flows on the Open vSwitch bridge that you will be setting up on each compute host.

You only need to run a single Floodlight controller in your cloud setup, although the plugin does support specifying multiple controllers if you want to configure for high availability. However, that functionality is probably intended only for BigSwitch controllers.

If your operating system doesn't have a floodlight package, you'll probably want to roll your own service script for starting and stopping it.

Here we describe how to download, compile, and run it in debug mode from the command line. Running in debug mode isn't suitable for production, since the controller will terminate as soon as you close your terminal, but it's fine for testing.

Install the prereqs for building floodlight (git, JDK, ant). On Ubuntu, do:

sudo apt-get install git default-jdk ant

Grab the floodlight source from GitHub and switch to the latest stable build:

git clone https://github.com/floodlight/floodlight
cd floodlight
git checkout fl-last-passed-build

Build floodlight:

ant

Run floodlight:

java -jar target/floodlight.jar -cf src/main/resources/quantum.properties
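If you want Floodlight to keep running after you close the terminal, one option (a minimal sketch; the log path is just an example) is to start it with nohup:

nohup java -jar target/floodlight.jar -cf src/main/resources/quantum.properties > /tmp/floodlight.log 2>&1 &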

By default, floodlight listens on port 8080 for ReST connections and 6633 for OpenFlow connections.
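To check that the ReST API is up, you can query Floodlight's list of connected switches; an empty JSON list is normal before any switches have connected:

curl http://localhost:8080/wm/core/controller/switches/json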

See the Floodlight OpenStack documentation and Floodlight installation guide for more info.

== Configure quantum.conf ==

Edit /etc/quantum/quantum.conf and set the core_plugin variable to use the BigSwitch RestProxy plugin:

core_plugin = quantum.plugins.bigswitch.plugin.QuantumRestProxyV2
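After changing core_plugin, restart the Quantum server so the new plugin is loaded; on Ubuntu the service is typically named quantum-server:

sudo service quantum-server restart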

== Configure /etc/quantum/plugins/bigswitch/restproxy.ini ==

You need to specify the login information for the MySQL database, as well as the location of the Floodlight controller. If you are running the Floodlight controller on the cloud controller node, specify localhost:8080 as its location:

[DATABASE]
sql_connection = mysql://<user>:<password>@localhost/restproxy_quantum?charset=utf8

[RESTPROXY]
servers = localhost:8080
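The restproxy_quantum database must exist before the plugin starts. A minimal sketch, assuming MySQL runs locally and you use the root account:

mysql -u root -p -e "CREATE DATABASE IF NOT EXISTS restproxy_quantum CHARACTER SET utf8;"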

= On the compute and network hosts =

Install Open vSwitch, create an integration bridge called br-int, and configure the bridge to use the Floodlight controller. If the IP address of the node running the Floodlight controller is 192.168.2.10, you would configure the bridge as follows:

sudo ovs-vsctl --no-wait add-br br-int
sudo ip link set dev eth2 up
sudo ovs-vsctl --no-wait br-set-external-id br-int bridge-id br-int
sudo ovs-vsctl --no-wait set-controller br-int tcp:192.168.2.10:6633

Add the interface that connects to the physical VLAN used for interconnecting the br-int bridges, e.g. if that interface is eth2:

sudo ovs-vsctl -- --may-exist add-port br-int eth2
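You can verify the bridge setup with:

sudo ovs-vsctl show

The output should show br-int with eth2 as a port and a controller entry for tcp:192.168.2.10:6633; once the bridge has connected to Floodlight, that entry reports is_connected: true.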

Note: There is no quantum-*-agent service that runs on the compute hosts when Quantum is configured to use the RestProxy plugin.

= On the network node =

Install the Neutron DHCP agent package and configure /etc/neutron/dhcp_agent.ini as follows:

[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
ovs_use_veth = True
root_helper = sudo /usr/local/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
use_namespaces = True
debug = False
verbose = True
enable_isolated_metadata = True

Note: with Open vSwitch under OpenFlow control, you must set either ovs_use_veth=True or use_namespaces=False.

Note: Do not run the quantum-l3-agent service on the network node as the RestProxy plugin is not compatible with it.
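Once the configuration is in place, (re)start the DHCP agent; on Ubuntu the packaged service is typically named neutron-dhcp-agent:

sudo service neutron-dhcp-agent restart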

= Configuring DevStack =

Here is an example of a two-node configuration, with a controller node running the Floodlight controller and a compute node running nova-compute. There is no network node; instead, the DHCP agent runs on the controller node. Before running stack.sh, you must install and configure Open vSwitch as described above on both nodes, and install and run Floodlight on the controller.

There is a Vagrant/Ansible script that does most of the grunt work here.

localrc for the controller node:

MULTI_HOST=true
HOST_IP=192.168.2.10
ENABLED_SERVICES=g-api,g-reg,key,n-api,n-crt,n-obj,n-sch,n-cauth,horizon,mysql,rabbit,sysstat,n-cond,n-novnc,n-xvnc
disable_service c-api c-sch c-vol cinder
enable_service n-vol
disable_service n-net
enable_service q-svc
enable_service q-dhcp
enable_service q-meta
enable_service quantum
Q_PLUGIN=bigswitch_floodlight
BS_FL_CONTROLLERS_PORT=127.0.0.1:8080
Q_OVS_USE_VETH=true
DATABASE_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_TOKEN=password
SERVICE_PASSWORD=password
ADMIN_PASSWORD=password

Note: add n-novnc if you want to run nova-compute on the controller node too.

localrc for the compute node:

HOST_IP=192.168.2.11
SERVICE_HOST=192.168.2.10
ENABLED_SERVICES=n-cpu,n-novnc,neutron
Q_PLUGIN=bigswitch_floodlight
Q_HOST=$SERVICE_HOST
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
DATABASE_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_TOKEN=password
SERVICE_PASSWORD=password
ADMIN_PASSWORD=password
VNCSERVER_LISTEN=$HOST_IP
VNCSERVER_PROXYCLIENT_ADDRESS=$HOST_IP

Note: two of the lines above are probably not necessary, as we configure Open vSwitch manually beforehand.
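With a localrc in place on each node, run DevStack as usual from the devstack checkout, starting with the controller node:

cd devstack
./stack.sh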