GroupBasedPolicy/InstallODLIntegrationDevstack
Installing and Running GBP
The following is a set of instructions for installing and working with the GBP/ODL integration:
VM Set up
Set up an Ubuntu 14.04 VM in VirtualBox or VMware Fusion; you can use one VM or two. In this example, one VM runs devstack (2 cores, 4 GB RAM) and a second VM runs the OpenDaylight controller (2 cores, 6 GB RAM).
OpenDaylight Set up
1. sudo apt-get install maven openjdk-7-jre openjdk-7-jdk
2. git clone https://github.com/opendaylight/groupbasedpolicy.git
3. cd groupbasedpolicy
4. mvn clean install
5. cd distribution-karaf/target/assembly/bin/
6. ./karaf
7. Inside karaf, run the following command:
feature:install odl-restconf odl-groupbasedpolicy-openstackgbp
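Once the install completes, you can confirm the features are active from the same karaf shell:
# Inside karaf: list installed features and filter for the ones installed above
feature:list -i | grep restconf
feature:list -i | grep groupbasedpolicy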
Devstack Installation
1. git clone https://github.com/yapengwu/devstack.git
2. cd devstack
3. git checkout -b odl-juno-gbp-3 origin/odl-juno-gbp-3
4. Modify the ODL settings at the end of your local.conf file (see the illustrative excerpt after this list) so that:
   a. "odl_host" points to the IP address of the ODL controller
   b. "odl_nodes" is formatted as "flow-id1:ip1,flow-id2:ip2,..." (note: no spaces between entries)
5. ./stack.sh
6. source openrc demo demo
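For illustration, the ODL-related lines at the end of local.conf might look like the following (the IP addresses and flow id are placeholders, not values from this guide):
# Hypothetical local.conf excerpt; substitute your own controller and node IPs
odl_host=192.168.56.102
odl_nodes=1:192.168.56.101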
Set up OVS controller
On your OpenStack VM:
1. sudo ovs-vsctl set-controller br-ex tcp:<odl controller IP>:6653
2. sudo ovs-vsctl set-controller br-int tcp:<odl controller IP>:6653
3. sudo ovs-vsctl set-controller br-tun tcp:<odl controller IP>:6653
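To verify that each bridge accepted its controller setting, and whether the connection to ODL is up, run:
# Each bridge lists its Controller target; "is_connected: true" appears once ODL accepts the connection
sudo ovs-vsctl show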
Register OFOverlay
From the Postman application, send the following RESTful call: PUT http://<controller_IP>:8181/restconf/config/opendaylight-inventory:nodes with this body:
{ "opendaylight-inventory:nodes": { "node": [ { "id": "openflow:XXXX", "ofoverlay:tunnel-ip": "<ovs IP address>" } ] } }
where XXXX is the DPID reported by "sudo ovs-ofctl show br-int -OOpenFlow13", converted from hex to decimal (a long value).
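For example, if ovs-ofctl reports dpid:0000000000000001 (a placeholder value), the hex-to-decimal conversion and an equivalent curl call look like this (ODL restconf ships with admin/admin credentials by default; all IPs are placeholders):
# Read the DPID (hex) from br-int
sudo ovs-ofctl show br-int -OOpenFlow13 | grep dpid
# Convert the hex DPID to decimal; here 0x1 -> 1, so the node id is "openflow:1"
printf '%d\n' 0x0000000000000001
# The same registration as the Postman call, from the command line
curl -u admin:admin -X PUT -H "Content-Type: application/json" \
  -d '{"opendaylight-inventory:nodes":{"node":[{"id":"openflow:1","ofoverlay:tunnel-ip":"192.168.56.101"}]}}' \
  http://192.168.56.102:8181/restconf/config/opendaylight-inventory:nodes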
GBP in action
Use the "gbp" CLI binary ("gbp --help" will give you the commands)
Example scenario: modeling connectivity between the web and client tiers using GBP:
# Create an allow action that can be used in several rules
gbp policy-action-create allow --action-type allow

# Create ICMP rule
gbp policy-classifier-create icmp-traffic --protocol icmp --direction bi
gbp policy-rule-create ping-policy-rule --classifier icmp-traffic --actions allow

# Create HTTP rule
gbp policy-classifier-create web-traffic --protocol tcp --port-range 80 --direction in
gbp policy-rule-create web-policy-rule --classifier web-traffic --actions allow

# ICMP policy-rule-set
gbp policy-rule-set-create icmp-policy-rule-set --policy-rules ping-policy-rule

# WEB policy-rule-set
gbp policy-rule-set-create web-policy-rule-set --policy-rules web-policy-rule

# Policy Target Group creation and policy-rule-set association
gbp group-create web --provided-policy-rule-sets "icmp-policy-rule-set=true,web-policy-rule-set=true"
gbp group-create client-1 --consumed-policy-rule-sets "icmp-policy-rule-set=true,web-policy-rule-set=true"
gbp group-create client-2 --consumed-policy-rule-sets "icmp-policy-rule-set=true,web-policy-rule-set=true"

# Policy Target creation and launching VMs
WEB1=$(gbp policy-target-create web-ep-1 --policy-target-group web | awk "/port_id/ {print \$4}")
CLIENT1=$(gbp policy-target-create client-ep-1 --policy-target-group client-1 | awk "/port_id/ {print \$4}")
CLIENT2=$(gbp policy-target-create client-ep-2 --policy-target-group client-2 | awk "/port_id/ {print \$4}")

nova boot --flavor m1.nano --image cirros-0.3.2-x86_64-uec --nic port-id=$WEB1 web-vm-1
nova boot --flavor m1.nano --image cirros-0.3.2-x86_64-uec --nic port-id=$CLIENT1 client-vm-1
nova boot --flavor m1.nano --image cirros-0.3.2-x86_64-uec --nic port-id=$CLIENT2 client-vm-2
CHECKPOINT: ICMP and HTTP should now work from the client VMs to the web VM and vice versa.
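A minimal way to exercise the checkpoint from the VM consoles (a sketch: the addresses are placeholders, and since cirros ships no web server, BusyBox nc stands in for one):
# On web-vm-1: answer HTTP requests on port 80 with nc
while true; do echo -e 'HTTP/1.0 200 OK\r\n\r\nhello' | sudo nc -l -p 80; done
# On client-vm-1 (10.0.0.3 stands for web-vm-1's address):
ping -c 3 10.0.0.3
wget -qO- http://10.0.0.3/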