
DCFabric Neutron Plugin

Overview

Based on the efficient multi-threaded performance of C, DCFabric shows very high efficiency in network topology discovery, message/event handling, and OpenFlow table entry installation. Based on an analysis of the flow characteristics of cloud computing data centers, we innovatively proposed SFabric, a technology built on intelligent flow installation in advance and flow aggregation at the switch level. SFabric can efficiently solve the main roadblocks to adopting SDN in a large data center network: size limits in flow tables, flow-entry match efficiency, and the efficiency bottleneck of SDN controllers.

In general, DCFabric can support large Layer 2 networks (more than 500 switches), highly efficient Layer 2 switching (more than 10,000 hosts communicating simultaneously), fast initialization (with 500 switching nodes, the startup convergence time is under 30 seconds), a powerful HTML5-based Web UI for network topology and traffic display, and a northbound RESTful API for efficient application development.

Architecture

[Figure: DCFabric architecture]


DCFabric can be divided into five layers from top to bottom (see the figure above): the first layer is the Web application layer supported by DCFabric, the second is the northbound interface layer, the third is the system app layer that contains a novel SFabric module, the fourth is the basic service layer that supports the upper applications, and the fifth is the southbound interface layer based on protocols such as OpenFlow.

The Web applications supported by DCFabric mainly include the Web GUI, the Neutron interface, traffic engineering, firewall, load balancing, DDoS protection, and so on. Accordingly, DCFabric can provide data centers with support for network management, virtualization, security, traffic control, etc.

The RESTful API is the northbound interface provided by DCFabric for application developers. It separates network applications from network details, making all kinds of facilities, events (e.g., link interruptions), and specific operations transparent to application programs. Consequently, data centers can improve the user experience and achieve intelligence and safety through various flexible network applications.

The system app of DCFabric centrally handles the logical processing and generates specific flow entries according to the upper applications, the lower basic services (topology discovery, host tracking, traffic monitoring, message handling, etc.), and its internal execution strategies. DCFabric then installs those flow entries into switches via OpenFlow or OVSDB, so each switch only needs to execute the corresponding actions according to its flow entries when processing packets.

Moreover, considering the limited capacity of a single controller, the deployment of a controller cluster is supported in order to cope with increasing network scale and guarantee stability. If one controller goes down, the related SDN switches connect to another controller as soon as possible. Furthermore, multiple controllers can cooperate as one logical entity, which keeps the data plane transparent to application programs. The deployment of a controller cluster therefore contributes greatly to the high throughput, low delay, flexibility, and stability of data centers.

Advantages

Compared with other existing SDN controllers, DCFabric has the following two distinguishing characteristics:

[Figure: SFabric]


1. The SFabric Module: Unlike other SDN technologies that compute routes at the host level, DCFabric computes routes at the switch level with the help of the SFabric module. Since the number of switches in a network is much smaller than the number of hosts, SFabric greatly reduces the number of flow entries and the workload of DCFabric, which speeds up the controller. Because DCFabric handles gateways, routing, network address translation (NAT), multi-tenant management, etc. more efficiently, it can achieve effective control and management of cloud computing data centers whose scale keeps growing.

2. Support for Extension of System Apps: DCFabric also supports the redevelopment of its system app, which allows other developers to improve parts of DCFabric's functions for their specific needs. Therefore, DCFabric has better compatibility and applicability.

Prerequisites

OpenStack Juno is required (Juno has been tested).

Configuration

  • STEP 1 Get the tool script

The shell script "gnflush-controller.sh" and the ML2 script "mechanism_gnflush.py" under the "tools/openstack-tools" directory are used to configure the OpenStack network node and compute nodes. Copy these two scripts to the "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/" directory.
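
For example (a minimal sketch; it assumes the commands are run from the DCFabric source tree and that Neutron is installed under the Python 2.7 site-packages path shown above):

   cp tools/openstack-tools/gnflush-controller.sh /usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/
   cp tools/openstack-tools/mechanism_gnflush.py /usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/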

  • STEP 2 Stop neutron service
   systemctl stop neutron-server  
   systemctl stop neutron-linuxbridge-agent
   systemctl stop neutron-openvswitch-agent  
   systemctl disable neutron-openvswitch-agent  
   systemctl disable neutron-linuxbridge-agent
  • STEP 3 Configure ml2 plugin

Edit the file

   vi /usr/lib/python2.7/site-packages/neutron-2014.2.3-py2.7.egg-info/entry_points.txt

and add the following line:

   gnflush = neutron.plugins.ml2.drivers.mechanism_gnflush:GNFlushMechanismDriver
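
After this edit, the relevant entry-point group should look roughly as follows (a sketch that assumes the standard "neutron.ml2.mechanism_drivers" entry-point group used by Neutron's ML2 plugin; the existing driver entries are elided):

   [neutron.ml2.mechanism_drivers]
   ...
   gnflush = neutron.plugins.ml2.drivers.mechanism_gnflush:GNFlushMechanismDriver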
  • STEP 4 Configure ml2

Before the configuration, there are a few things you need to know:

  1. Tenant network type: the administrator can choose the "gre" or "vlan" network type
  2. SDN controller IP address: the IP address of the Linux server where the SDN controller is installed
  3. Controller REST service port: the port the controller's REST service listens on, set by "[rest_port]" in the SDN controller configuration file; the default is "8081"

Use the command below to edit the ML2 configuration (the "gre" network type is used as the example):

   crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers gnflush
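
If you also need to select the "gre" tenant network type mentioned above, the standard ML2 type options can be set the same way (a hedged sketch; these additional settings and the tunnel ID range are illustrative and not part of the original instructions):

   crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers gre
   crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types gre
   crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_gre tunnel_id_ranges 1:1000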

Add "ml2_gnflush" in the end in the file "/etc/neutron/plugins/ml2/ml2_conf.ini"

   [ml2_gnflush] 
   password = admin 
   username = admin 
   url = http://<Controller IP>:<Controller REST service port>/gn/neutron
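
For example, with the DCFabric controller at 192.0.2.10 (a hypothetical placeholder address) and the default REST port 8081, the section would read:

   [ml2_gnflush]
   password = admin
   username = admin
   # 192.0.2.10 is a placeholder; use the address of your DCFabric controller
   url = http://192.0.2.10:8081/gn/neutron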
  • STEP 5 Create ml2 database

Run these commands on the OpenStack controller to create the ML2 database:

   mysql -u root -p
   drop database if exists neutron_ml2;
   create database neutron_ml2 character set utf8;
   grant all on neutron_ml2.* to 'neutron'@'%';
   grant all on neutron_ml2.* to 'neutron'@'controller' IDENTIFIED BY 'neutron';
   grant all on neutron_ml2.* to 'neutron'@'localhost' IDENTIFIED BY 'neutron';   
   neutron-db-manage --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini upgrade head
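
For the neutron-db-manage command above to populate the new database, Neutron must also be pointed at it. Assuming the standard [database] connection option and the 'neutron'/'neutron' credentials granted above (this setting is not shown in the original steps), that would look like:

   crudini --set /etc/neutron/neutron.conf database connection mysql://neutron:neutron@controller/neutron_ml2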
  • STEP 6 Configure openvswitch-agent

Network node:

   sh gnflush-controller.sh --local_ip $local_ip --provider_mappings eth0 --gnflush_ip $DCFabric_ip --external_provider br-ex

Compute node:

   sh gnflush-controller.sh --local_ip $local_ip --provider_mappings eth1 --gnflush_ip $DCFabric_ip
   ovs-vsctl add-br br-int
   ovs-vsctl set-controller br-int tcp:$local_ip
   ovs-vsctl add-port br-int eth0
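
As a quick sanity check (not part of the original guide), the Open vSwitch configuration can be listed to confirm that br-int exists, points at the local controller, and carries the expected port:

   ovs-vsctl show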
  • STEP 7 Start service

Execute these commands on the OpenStack network node:

   systemctl start neutron-server
   systemctl disable neutron-linuxbridge-agent
   systemctl stop neutron-linuxbridge-agent
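
To confirm that the service started cleanly (an optional check, not part of the original steps):

   systemctl status neutron-server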