Neutron Floodlight Plugin Setup

This page describes how to configure the BigSwitch RestProxy Neutron plugin with the open source Floodlight controller. This plugin can also be used with the commercial BigSwitch controller, but that is not discussed here. Both controllers use the OpenFlow protocol to manage the Open vSwitch bridges that your virtual machines connect to. They can also manage any OpenFlow-enabled physical switches that are used for interconnecting the br-int bridges across physical compute and network nodes.

Limitations

  • The RestProxy plugin and Floodlight controller combination does not provide routers or external network access (i.e. no gateway SNAT or floating IPs).
  • The Floodlight controller does not seem to have any form of data persistence, at least when configured with the modules for Quantum as described here (http://www.openflowhub.org/display/floodlightcontroller/OpenStack). Stopping and restarting Floodlight therefore causes it to lose all the virtual network information it previously received from proxied ReST calls. Restarting the RestProxy plugin with sync_data=True ought to restore this information, but that did not work for me (a sketch of where that option goes follows this list).
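
For reference, sync_data is an option of the RestProxy plugin and would sit in the [RESTPROXY] section of the plugin configuration shown later on this page. The snippet below is only a sketch of where the option goes; whether it actually repopulates Floodlight after a restart is exactly what is in question above.

[RESTPROXY]
servers=localhost:8080
# ask the plugin to resync its network state to the controller on startup
sync_data=True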

On the physical switches

Your physical switches do not need to support OpenFlow to use this plugin. However, the br-int bridges need to be connected via physical interfaces (e.g. eth3) to switch ports that are in the same layer-2 broadcast domain, so these physical switch ports need to be configured as regular access ports in the same switch VLAN. (Note: unlike the Open vSwitch plugin, the Floodlight controller does not use 802.1Q VLAN tagging for isolation anywhere.)
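
If you want to sanity-check that the interconnect interfaces on two hosts really are in the same layer-2 broadcast domain, one simple (if crude) way is to put temporary addresses on them and ping across before adding them to br-int. The interface name eth3 and the 10.255.255.0/24 range below are just placeholders for whatever you actually use:

# on host A
sudo ip addr add 10.255.255.1/24 dev eth3
# on host B
sudo ip addr add 10.255.255.2/24 dev eth3
ping -c 3 10.255.255.1
# remove the temporary addresses again afterwards
sudo ip addr del 10.255.255.1/24 dev eth3    # host A
sudo ip addr del 10.255.255.2/24 dev eth3    # host B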

If you do have physical switches that support OpenFlow, you can configure them to be managed by the Floodlight controller. How to do this varies between vendors and models, but once OpenFlow firmware is enabled, it should simply be a matter of pointing them at the IP address of the Floodlight controller on TCP port 6633.

On the controller node

Run the Floodlight controller

To use the RestProxy plugin with Floodlight, you need to run a Floodlight controller that can be reached from the compute hosts, i.e. the compute hosts must be able to make a TCP/IP connection to it over the management network (OpenFlow is an out-of-band protocol). It will manage the flows on the Open vSwitch bridge that you will be setting up on each compute host.
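
A quick way to confirm that reachability from a compute host is to check that the controller's OpenFlow port accepts connections; 192.168.2.10 below is just the example controller address used later on this page:

nc -zv 192.168.2.10 6633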

You only need to run a single Floodlight controller in your cloud setup, although the plugin does support specifying multiple controllers if you want to configure for high availability. However, that functionality is probably only relevant to the BigSwitch controllers.

If your operating system doesn't have a floodlight package, you'll probably want to roll your own service script for starting and stopping it (a rough sketch of such a script is given at the end of this section).

Here we describe how to download, compile, and run it in debug mode from the command line. Running in debug mode isn't suitable for production, since the controller will terminate as soon as you close your terminal, but it's fine for testing.

Install the prereqs for building floodlight (git, JDK, ant). On Ubuntu, do:

sudo apt-get install git default-jdk ant

Grab the floodlight source from github and switch to the latest stable build:

git clone https://github.com/floodlight/floodlight
cd floodlight
git checkout fl-last-passed-build

Build floodlight:

ant

Run floodlight:

java -jar target/floodlight.jar -cf src/main/resources/quantum.properties

By default, floodlight listens on port 8080 for ReST connections and 6633 for OpenFlow connections.
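
To confirm the controller is up, you can query its REST API from the node it runs on; the list of switches will be empty until the br-int bridges described later connect to it:

curl http://localhost:8080/wm/core/controller/switches/json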

See the Floodlight OpenStack documentation (http://docs.projectfloodlight.org/display/floodlightcontroller/OpenStack) and the Floodlight installation guide (http://docs.projectfloodlight.org/display/floodlightcontroller/Installation+Guide) for more info.
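
As mentioned above, if your operating system has no floodlight package you will probably want to wrap the java command in a service script. The following is only a rough sketch; the install path /opt/floodlight, the log file and the pid file are assumptions you would adapt to your own layout:

#!/bin/sh
# Minimal start/stop wrapper for Floodlight (sketch only).
# Assumes the source tree was cloned and built under /opt/floodlight.
FL_DIR=/opt/floodlight
FL_JAR=$FL_DIR/target/floodlight.jar
FL_CONF=$FL_DIR/src/main/resources/quantum.properties
PIDFILE=/var/run/floodlight.pid

case "$1" in
  start)
    nohup java -jar $FL_JAR -cf $FL_CONF > /var/log/floodlight.log 2>&1 &
    echo $! > $PIDFILE
    ;;
  stop)
    [ -f $PIDFILE ] && kill $(cat $PIDFILE) && rm -f $PIDFILE
    ;;
  *)
    echo "Usage: $0 {start|stop}"
    exit 1
    ;;
esac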

Configure quantum.conf

Edit /etc/quantum/quantum.conf and set the core_plugin variable to use the BigSwitch Floodlight plugin:

core_plugin = quantum.plugins.bigswitch.plugin.QuantumRestProxyV2

Configure /etc/quantum/plugins/bigswitch/restproxy.ini

You need to specify the login information for the MySQL database, as well as the location of the Floodlight controller. If you are running the Floodlight controller on the cloud controller node, specify localhost:8080 as its location:

[DATABASE]
sql_connection = mysql://<user>:<password>@localhost/restproxy_quantum?charset=utf8
[RESTPROXY]
servers=localhost:8080
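
The restproxy_quantum database referenced in sql_connection typically has to exist before quantum-server starts. A minimal way to create it and a matching user (the user name and password here are just placeholders) is:

mysql -u root -p <<'EOF'
CREATE DATABASE IF NOT EXISTS restproxy_quantum CHARACTER SET utf8;
GRANT ALL PRIVILEGES ON restproxy_quantum.* TO 'quantum'@'localhost' IDENTIFIED BY 'password';
EOF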

On the compute and network hosts

Install Open vSwitch, create an integration bridge called br-int, and configure it to use the Floodlight controller. If the IP address of the node with the Floodlight controller is 192.168.2.10, you would configure the bridge by doing:

sudo ovs-vsctl --no-wait add-br br-int
sudo ovs-vsctl --no-wait br-set-external-id br-int bridge-id br-int
sudo ovs-vsctl --no-wait set-controller br-int tcp:192.168.2.10:6633

Add the interface that connects to the physical VLAN used for interconnecting the br-int bridges; e.g. if that interface is eth2, bring it up and add it to the bridge:

sudo ip link set dev eth2 up
sudo ovs-vsctl -- --may-exist add-port br-int eth2
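
Once the port is added, the bridge should establish an OpenFlow session with Floodlight. One way to check is:

# the controller entry for br-int should show is_connected: true once the session is up
sudo ovs-vsctl show
# lists the bridge's ports and OpenFlow datapath details
sudo ovs-ofctl show br-int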

Note: There is no quantum-*-agent service that runs on the compute hosts when Quantum is configured to use the RestProxy plugin.

On the network node

Install the Neutron DHCP agent package and configure /etc/neutron/dhcp_agent.ini as follows:

[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
ovs_use_veth = true
root_helper = sudo /usr/local/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
use_namespaces = True
debug = False
verbose = True
enable_isolated_metadata = True

Note: with Open vSwitch under OpenFlow control, either ovs_use_veth=true or use_namespaces=false is required.

Note: Do not run the quantum-l3-agent service on the network node as the RestProxy plugin is not compatible with it.
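
How the DHCP agent is started depends on how it was installed; if the package did not install a service, it can also be run directly with its two configuration files, for example:

sudo neutron-dhcp-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini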

Configuring DevStack

Here is an example of a two-node configuration, with a controller node running the Floodlight controller and a compute node running nova-compute. There is no network node; instead, the DHCP agent runs on the controller node. Before running stack.sh, you must install and configure Open vSwitch as described above on both nodes, and install and run Floodlight on the controller.

There is a Vagrant/Ansible script that does most of the grunt work at https://github.com/djoreilly/floodlight-devstack.

# localrc for the controller node
MULTI_HOST=true
HOST_IP=192.168.2.10

# add n-cpu if you want to run nova-compute on the controller node too
ENABLED_SERVICES=g-api,g-reg,key,n-api,n-crt,n-obj,n-sch,n-cauth,horizon,mysql,rabbit,sysstat,n-cond,n-novnc,n-xvnc
disable_service c-api c-sch c-vol cinder
enable_service n-vol

disable_service n-net
enable_service q-svc
enable_service q-dhcp
enable_service q-meta
enable_service quantum

Q_PLUGIN=bigswitch_floodlight
BS_FL_CONTROLLERS_PORT=127.0.0.1:8080
Q_OVS_USE_VETH=true

DATABASE_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_TOKEN=password
SERVICE_PASSWORD=password
ADMIN_PASSWORD=password


# localrc for the compute node
HOST_IP=192.168.2.11
SERVICE_HOST=192.168.2.10

ENABLED_SERVICES=n-cpu,n-novnc,neutron

# the following 2 lines are probably not necessary as we configure ovs manually beforehand
Q_PLUGIN=bigswitch_floodlight
Q_HOST=$SERVICE_HOST

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
DATABASE_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_TOKEN=password
SERVICE_PASSWORD=password
ADMIN_PASSWORD=password

VNCSERVER_LISTEN=$HOST_IP
VNCSERVER_PROXYCLIENT_ADDRESS=$HOST_IP
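
With both localrc files in place (and Open vSwitch and Floodlight already set up as described above), run DevStack on the controller node first and then on the compute node. The ~/devstack path below just assumes a standard checkout in the user's home directory:

# on the controller node (192.168.2.10)
cd ~/devstack && ./stack.sh
# then on the compute node (192.168.2.11)
cd ~/devstack && ./stack.sh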


[Image: Floodlight-devstack-wiki.jpg]