GroupBasedPolicy/InstallODLIntegrationDevstack

Installing and Running GBP
The following is a set of instructions for installing and working with the GBP/ODL integration:

VM Set up

 * Set up an Ubuntu 14.04 VM in VirtualBox or VMware Fusion. You can use one VM or two; in this example, one VM is set up for devstack (2 cores, 4 GB RAM) and a second VM for the OpenDaylight controller (2 cores, 6 GB RAM).
 * Run OVS 2.1 at minimum (we recommend 2.3); a quick version check is sketched after this list.
 * Instructions
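A quick way to confirm the installed Open vSwitch version meets the requirement (a minimal sketch; it assumes OVS is already installed on the VM):

 # Print the installed Open vSwitch version; it should report 2.1 or newer (2.3 recommended)
 sudo ovs-vsctl --version
 # The userspace daemon version can also be checked
 sudo ovs-vswitchd --version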

OpenDaylight Set up
1. sudo apt-get install git-core maven openjdk-7-jre openjdk-7-jdk

1.1 Manually upgrade Maven from 3.0.5 to 3.1.1:

 cd ~/Downloads
 wget http://apache.mirrors.timporter.net/maven/maven-3/3.1.1/binaries/apache-maven-3.1.1-bin.tar.gz
 sudo mkdir -p /usr/local/apache-maven
 sudo mv apache-maven-3.1.1-bin.tar.gz /usr/local/apache-maven
 cd /usr/local/apache-maven
 sudo tar -xzvf apache-maven-3.1.1-bin.tar.gz
 sudo rm /usr/bin/mvn
 sudo ln -s /usr/local/apache-maven/apache-maven-3.1.1/bin/mvn /usr/bin/mvn
 cd
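To confirm the upgrade took effect, you can print the Maven version (a minimal check):

 # Should report Apache Maven 3.1.1 after the symlink is updated
 mvn -version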

2. git clone https://github.com/opendaylight/groupbasedpolicy.git

3. cd groupbasedpolicy

4. mvn clean install

5. cd distribution-karaf/target/assembly/bin/

6. ./karaf

7. Inside karaf, run the following command:

 feature:install odl-restconf odl-groupbasedpolicy-base odl-groupbasedpolicy-ofoverlay
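You can verify from the karaf console that the features were installed (a quick check; the exact feature list may vary by build):

 # List only installed features and filter for groupbasedpolicy
 feature:list -i | grep groupbasedpolicy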

All-in-one Devstack Installation
1. Grab devstack from github:

 git clone https://github.com/group-policy/devstack.git -b stable/juno-gbp-odl
 cd devstack
 cp local.conf.controller local.conf

2. Modify the 'odl_host' and related addresses at the end of your local.conf file (see the example after this list):
 * ODL_MGR_IP = <IP of the OpenDaylight controller VM>
 * HOST_IP = <IP of this devstack VM>
 * odl_host = <IP of the OpenDaylight controller VM>
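For illustration, the relevant lines might look like the following (the addresses are hypothetical; substitute your own):

 # Example local.conf values (hypothetical addresses)
 ODL_MGR_IP=192.168.56.20   # OpenDaylight controller VM
 HOST_IP=192.168.56.10      # this devstack VM
 odl_host=192.168.56.20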

3. ./stack.sh

Register OFOverlay
From your POSTMAN application, send the following RESTful call: PUT http://<ODL controller IP>:8181/restconf/config/opendaylight-inventory:nodes with the JSON body below:

 {
   "opendaylight-inventory:nodes": {
     "node": [
       {
         "id": "openflow:XXXX",
         "ofoverlay:tunnel-ip": "<tunnel IP of the OVS node>"
       }
     ]
   }
 }

where XXXX is the DPID of the OVS bridge, converted from hex to long (decimal). One way to obtain and convert it is sketched below.
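The following is a sketch of doing this from the command line; it assumes the integration bridge is named br-int, that the controller uses the default admin/admin RESTCONF credentials, and that the address and body file name are hypothetical:

 # Read the bridge's datapath ID (a 16-digit hex string) from OVS
 sudo ovs-vsctl get bridge br-int datapath_id
 # Convert the hex DPID to decimal for the "openflow:XXXX" node id
 printf "%d\n" 0x0000000000000001
 # Send the registration with curl instead of POSTMAN (hypothetical address and JSON file)
 curl -u admin:admin -X PUT -H "Content-Type: application/json" \
      -d @ofoverlay-node.json http://192.168.56.20:8181/restconf/config/opendaylight-inventory:nodes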

GBP in action
Use the "gbp" CLI binary ("gbp --help" will give you the commands)

Example scenario: Modeling connectivity between Web and App Tiers using GBP:

 # Authenticate with admin user, demo project; admin is also needed for the availability-zone options used later
 source openrc admin demo

 # Create allow action that can be used in several rules
 gbp policy-action-create allow --action-type allow

 # Create ICMP rule
 gbp policy-classifier-create icmp-traffic --protocol icmp --direction bi
 gbp policy-rule-create ping-policy-rule --classifier icmp-traffic --actions allow

 # Create HTTP rule
 gbp policy-classifier-create web-traffic --protocol tcp --port-range 80 --direction in
 gbp policy-rule-create web-policy-rule --classifier web-traffic --actions allow

 # ICMP policy-rule-set
 gbp policy-rule-set-create icmp-policy-rule-set --policy-rules ping-policy-rule

 # WEB policy-rule-set
 gbp policy-rule-set-create web-policy-rule-set --policy-rules web-policy-rule

 # Policy Target Group creation and policy-rule-set association
 gbp group-create web --provided-policy-rule-sets "icmp-policy-rule-set=true,web-policy-rule-set=true"
 gbp group-create client-1 --consumed-policy-rule-sets "icmp-policy-rule-set=true,web-policy-rule-set=true"

 # Policy Target creation and launching VMs
 WEB1=$(gbp policy-target-create web-ep-1 --policy-target-group web | awk "/port_id/ {print \$4}")
 CLIENT1=$(gbp policy-target-create client-ep-1 --policy-target-group client-1 | awk "/port_id/ {print \$4}")

 nova boot --flavor m1.tiny --image cirros-0.3.2-x86_64-uec --nic port-id=$WEB1 web-vm-1
 nova boot --flavor m1.tiny --image cirros-0.3.2-x86_64-uec --nic port-id=$CLIENT1 client-vm-1

 # Check your availability zone

 # For a multi-node setup, this will launch extra VMs on the compute node
 WEB2=$(gbp policy-target-create web-ep-2 --policy-target-group web | awk "/port_id/ {print \$4}")
 CLIENT2=$(gbp policy-target-create client-ep-2 --policy-target-group client-1 | awk "/port_id/ {print \$4}")
 nova boot --flavor m1.tiny --image cirros-0.3.2-x86_64-uec --nic port-id=$WEB2 web-vm-2 --availability-zone=nova:osgbp2
 nova boot --flavor m1.tiny --image cirros-0.3.2-x86_64-uec --nic port-id=$CLIENT2 client-vm-2 --availability-zone=nova:osgbp2

#### CHECKPOINT: ICMP and HTTP work from app to web and vice versa
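One way to exercise this checkpoint from the VM consoles (a sketch; it assumes the cirros image's busybox nc and curl are available and that you substitute the web VM's actual IP for the hypothetical address):

 # On web-vm-1: answer HTTP requests on port 80 with a trivial response
 while true; do echo -e "HTTP/1.0 200 OK\r\n\r\nhello from web-vm-1" | sudo nc -l -p 80; done
 # On client-vm-1: test ICMP and HTTP toward the web VM (hypothetical address)
 ping -c 3 10.0.0.5
 curl http://10.0.0.5/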

Unstack and Restack
Stop the OpenDaylight controller and remove any persistent data:

 logout
 rm -rf ../data

Modify local.conf to uncomment "OFFLINE=True", then unstack and restack:

 ./unstack.sh --all
 rm -rf /opt/stack/horizon/openstack_dashboard/enabled/*gbp*.py
 sudo service rabbitmq-server restart
 sudo service mysql restart

Start the OpenDaylight controller, then install the features from the karaf console:

 ./karaf
 feature:install odl-groupbasedpolicy-openstackgbp odl-restconf

Start devstack:

 ./stack.sh

Multi-node Devstack Installation

 * 1) Launch the ODL controller (see OpenDaylight Set up).
 * 2) Launch the OpenStack controller node: follow All-in-one Devstack Installation.
 * 3) Create the compute node VM (see VM Set up). Note: this VM can be smaller; 2 CPU cores and 2 GB of memory are enough.
 * 4) Set up the compute node (see Setup compute node).
 * 5) On the controller node, register OFOverlay (see Register OFOverlay).
 * 6) On the controller node, run the GBP in action script (see GBP in action).

Setup compute node
1. Grab devstack from github:

 git clone https://github.com/group-policy/devstack.git -b stable/juno-gbp-odl
 cd devstack
 cp local.conf.compute local.conf

2. Modify the IP addresses in your local.conf file (see the example after this list):
 * ODL_MGR_IP = <IP of the OpenDaylight controller VM>
 * HOST_IP = <IP of this compute node VM>
 * SERVICE_HOST = <IP of the devstack controller node>
 * odl_host = <IP of the OpenDaylight controller VM>
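For illustration, continuing the hypothetical addresses used earlier:

 # Example compute-node local.conf values (hypothetical addresses)
 ODL_MGR_IP=192.168.56.20    # OpenDaylight controller VM
 HOST_IP=192.168.56.11       # this compute node VM
 SERVICE_HOST=192.168.56.10  # devstack controller node
 odl_host=192.168.56.20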

3. ./stack.sh

Enable Live VM Migration with devstack
The procedure is similar to the multi-node setup:
 * 1) Launch the ODL controller (see OpenDaylight Set up).
 * 2) Use the ODL controller as the NFS server; the NFS share will be used by the devstack controller and compute nodes as shared storage for VM instance data (see NFS Server Setup).
 * 3) Launch the OpenStack controller node: refer to All-in-one Devstack Installation. Before launching ./stack.sh, ensure the hosts are configured (Host Configuration), the NFS share is mounted (NFS Client Configuration), and libvirt is properly configured (Libvirt Configuration).
 * 4) Create the compute node VM (see VM Set up). Note: this VM can be smaller; 2 CPU cores and 2 GB of memory are enough.
 * 5) Set up the compute node (see Setup compute node). Before launching ./stack.sh, ensure the hosts are configured (Host Configuration), the NFS share is mounted (NFS Client Configuration), and libvirt is properly configured (Libvirt Configuration).
 * 6) On the controller node, register OFOverlay (see Register OFOverlay).
 * 7) On the controller node, run the GBP in action script (see GBP in action).
 * 8) After the VMs are launched, perform the VM migration:

 # Check the VM state
 nova list

 # Validate which compute node the instance is running on
 nova-manage vm list | awk '{print $1,$2,$4,$5}' | column -t

 # Run the nova live-migration command to move the VM to another compute node
 nova live-migration web-vm-1
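To confirm the migration completed, you can check which host the instance now reports (a quick check; the OS-EXT-SRV-ATTR:host field is visible with admin credentials):

 # After the migration finishes, the host field should show the new compute node
 nova show web-vm-1 | grep OS-EXT-SRV-ATTR:host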

NFS Server Setup
 * Install NFS server packages and prepare the export directory:

 sudo adduser stack
 sudo apt-get install nfs-kernel-server
 sudo mkdir -p /srv/demo-stack/instances
 sudo chmod o+x /srv/demo-stack/instances
 sudo chown stack:stack /srv/demo-stack/instances

 * Add the following entry in /etc/exports:

 /srv/demo-stack/instances 192.168.0.0/16(rw,fsid=0,insecure,no_subtree_check,async,no_root_squash)

 * Launch the NFS server daemon:

 sudo exportfs -ra
 sudo service nfs-kernel-server restart
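A quick check that the export is active on the server (a minimal sketch):

 # Should list /srv/demo-stack/instances with the options configured above
 sudo exportfs -v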

Host Configuration
 * Edit your DNS or /etc/hosts to ensure the devstack controller node and compute node can perform name resolution with each other, then verify:

 $ ping HostA
 $ ping HostB
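If you use /etc/hosts, the entries might look like the following (hypothetical addresses; the hostnames should match your controller and compute nodes, e.g. osgbp2 from the example above):

 # Hypothetical /etc/hosts entries on both nodes
 192.168.56.10  osgbp1   # devstack controller node
 192.168.56.11  osgbp2   # compute node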

NFS Client Configuration
 * Install NFS client packages:

 sudo apt-get install rpcbind nfs-common

 * Create the instance data folder and mount the NFS share (substitute the NFS server's address):

 sudo mkdir /opt/stack
 sudo chown stack:stack /opt/stack
 mkdir -p /opt/stack/data/instances
 sudo mount <NFS server IP>:/srv/demo-stack/instances /opt/stack/data/instances
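To confirm the share is mounted (a minimal check):

 # The NFS export should appear as the mounted filesystem for the instances directory
 df -h /opt/stack/data/instances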

Libvirt Configuration
 * Modify the /etc/libvirt/libvirtd.conf file to include the following:

 listen_tls = 0
 listen_tcp = 1
 auth_tcp = "none"

 * Modify the /etc/default/libvirt-bin file:

 libvirtd_opts = "-d -l"

 * Restart libvirt:

 sudo service libvirt-bin restart
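A quick sanity check that libvirtd is now listening for the TCP connections live migration needs (a sketch; 16509 is libvirt's default unencrypted TCP port):

 # libvirtd should be listening on TCP port 16509 after the restart
 sudo netstat -lntp | grep 16509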

Reference
1: OpenStack Live Migration