
Installing and Running GBP

The following instructions cover installing and working with the GBP/OpenDaylight (ODL) integration:


VM Set up

  • Set up an Ubuntu 14.04 VM in VirtualBox or VMware Fusion; you can use one VM or two. In this example, one VM is set up for devstack (2 cores, 4 GB RAM) and a second VM for the OpenDaylight controller (2 cores, 6 GB RAM).
  • Run OVS 2.1 minimum! (we recommend 2.3).
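  • A quick way to confirm the installed OVS version meets the minimum:
    ovs-vsctl --version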

OpenDaylight Set up

1. sudo apt-get install git-core maven openjdk-7-jre openjdk-7-jdk

  Update on Jan 22, 2015: A recent change to the ODL code requires Maven 3.1.1 as a minimum, but the latest Maven available in Ubuntu is 3.0.5. You can manually upgrade your Maven to 3.1.1, following the instructions at http://askubuntu.com/questions/420281/how-to-update-maven-3-0-4-3-1-1. At the end of the upgrade, make sure to create a symbolic link from /usr/local/apache-maven/apache-maven-3.1.1/bin/mvn (or wherever your Maven 3.1.1 is) to /usr/bin/mvn, as sketched below.
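  A minimal sketch of the symlink step, assuming Maven 3.1.1 was unpacked under /usr/local/apache-maven as in the instructions linked above:

    # point /usr/bin/mvn at the manually installed Maven 3.1.1
    sudo ln -sf /usr/local/apache-maven/apache-maven-3.1.1/bin/mvn /usr/bin/mvn
    # should now report Apache Maven 3.1.1
    mvn -version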

2. git clone https://github.com/opendaylight/groupbasedpolicy.git

3. cd groupbasedpolicy

4. mvn clean install

5. cd distribution-karaf/target/assembly/bin/

6. ./karaf

7. Inside karaf, run the following command:

  feature:install odl-restconf odl-groupbasedpolicy-openstackgbp
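
To verify, you can list the installed features from the karaf shell (feature:list -i is a standard Karaf command; the grep filter just narrows the output):

  feature:list -i | grep groupbasedpolicy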

All-in-one Devstack Installation

1. Grab devstack from github:

    git clone https://github.com/group-policy/devstack.git -b stable/juno-gbp-odl
    cd devstack
    cp local.conf.controller local.conf


2. Modify the following settings in your local.conf file (odl_host is near the end; see the example after this list):

  • ODL_MGR_IP = <odl-controller-ip>
  • HOST_IP = <openstack controller ip>
  • odl_host = <odl-controller-ip>
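
A minimal sketch of those settings with hypothetical addresses (192.168.56.20 for the ODL controller and 192.168.56.10 for this devstack VM; substitute your own):

  ODL_MGR_IP=192.168.56.20
  HOST_IP=192.168.56.10
  odl_host=192.168.56.20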


3. ./stack.sh

Register OFOverlay

From your POSTMAN application (or any other REST client), issue the following RESTful call: PUT http://<controller_IP>:8181/restconf/config/opendaylight-inventory:nodes

 {
   "opendaylight-inventory:nodes": {
       "node": [
           {
               "id": "openflow:XXXX", 
               "ofoverlay:tunnel-ip": "<ovs IP address>"
           }
       ]
   }
 }

where XXXX is the DPID from <sudo ovs-ofctl show br-int -O OpenFlow13>, converted from hex to long (decimal); see the sketches below
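
A minimal sketch of the conversion on the OVS node, using bash arithmetic for the hex-to-decimal step (assumes br-int already exists):

 # extract the hex DPID from the OpenFlow features reply
 DPID_HEX=$(sudo ovs-ofctl show br-int -O OpenFlow13 | sed -n 's/.*dpid:\([0-9a-fA-F]*\).*/\1/p')
 # print it as a long (decimal) for the "openflow:XXXX" node id
 echo $((16#$DPID_HEX))

If you prefer the command line to POSTMAN, the same call can be made with curl (assuming the default ODL RESTCONF credentials admin/admin, and the JSON body above saved as nodes.json):

 curl -u admin:admin -X PUT -H "Content-Type: application/json" \
      -d @nodes.json \
      http://<controller_IP>:8181/restconf/config/opendaylight-inventory:nodes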

GBP in action

Use the "gbp" CLI binary ("gbp --help" will give you the commands)

Example scenario: Modeling connectivity between Web and App Tiers using GBP:

 # Authenticate as the admin user, demo project. This is for the availability zone.
 source openrc admin demo
 # Create an allow action that can be used in several rules
 gbp policy-action-create allow --action-type allow
 # Create ICMP rule
 gbp policy-classifier-create icmp-traffic --protocol icmp --direction bi
 gbp policy-rule-create ping-policy-rule --classifier icmp-traffic --actions allow
 # Create HTTP Rule
 gbp policy-classifier-create web-traffic --protocol tcp --port-range 80 --direction in
 gbp policy-rule-create web-policy-rule --classifier web-traffic --actions allow
 # ICMP policy-rule-set
 gbp policy-rule-set-create icmp-policy-rule-set --policy-rules ping-policy-rule
 # WEB policy-rule-set
 gbp policy-rule-set-create web-policy-rule-set --policy-rules web-policy-rule
 # Policy Target Group creation and policy-rule-set association
 gbp group-create  web --provided-policy-rule-sets "icmp-policy-rule-set=true,web-policy-rule-set=true"
 gbp group-create  client-1 --consumed-policy-rule-sets "icmp-policy-rule-set=true,web-policy-rule-set=true"
 # Policy Target creation and launching VMs
 WEB1=$(gbp policy-target-create web-ep-1 --policy-target-group web | awk "/port_id/ {print \$4}")
 CLIENT1=$(gbp policy-target-create client-ep-1 --policy-target-group client-1 | awk "/port_id/ {print \$4}")
 nova boot --flavor m1.nano --image cirros-0.3.2-x86_64-uec --nic port-id=$WEB1 web-vm-1
 nova boot --flavor m1.nano --image cirros-0.3.2-x86_64-uec --nic port-id=$CLIENT1 client-vm-1
 #Check your availability zone using <nova service-list>
 #For multi-node setup, this will launch extra VMs on compute node
 WEB2=$(gbp policy-target-create web-ep-2 --policy-target-group web | awk "/port_id/ {print \$4}")
 CLIENT2=$(gbp policy-target-create client-ep-2 --policy-target-group client-1 | awk "/port_id/ {print \$4}")
 nova boot --flavor m1.nano --image cirros-0.3.2-x86_64-uec --nic port-id=$WEB2 web-vm-2 --availability-zone=nova:osgbp2
 nova boot --flavor m1.nano --image cirros-0.3.2-x86_64-uec --nic port-id=$CLIENT2 client-vm-2 --availability-zone=nova:osgbp2
 ####CHECKPOINT: ICMP and HTTP work from client to web and vice versa
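
One way to exercise the checkpoint from the VM consoles (a sketch with assumptions: cirros 0.3.2 logs in as cirros/cubswin:), the web VM's address must be substituted, and cirros runs no web server by default, so start a trivial one first):

 # on web-vm-1's console: serve a one-line HTTP response on port 80
 while true; do echo -e 'HTTP/1.0 200 OK\r\n\r\nhello' | sudo nc -l -p 80; done
 # on client-vm-1's console:
 ping -c 3 <web-vm-1 IP>            # exercises the ICMP policy-rule-set
 wget -qO- http://<web-vm-1 IP>/    # exercises the web policy-rule-set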

Unstack and Restack

Stop the OpenDaylight controller and remove any persistent data:

 logout          # exit the karaf shell; the controller stops
 rm -rf ../data  # remove persisted state (run from distribution-karaf/target/assembly/bin)

Modify local.conf to uncomment "OFFLINE=True", then unstack and restack:

 ./unstack.sh --all
 rm -rf /opt/stack/horizon/openstack_dashboard/enabled/*gbp*.py
 sudo service rabbitmq-server restart
 sudo service mysql restart

Start OpenDaylight Controller

 ./karaf
 feature:install odl-groupbasedpolicy-openstackgbp odl-restconf

Start Devstack

 ./stack.sh

Multi-node Devstack Installation

  1. Launch the ODL controller (see OpenDaylight Set up above).
  2. Launch the OpenStack controller node: follow All-in-one Devstack Installation above.
  3. Create the compute node VM (see VM Set up). Note: this VM can be smaller; 2 CPU cores and 2 GB RAM are enough.
  4. Set up the compute node (see Setup compute node below).
  5. On the controller node, register OFOverlay (see Register OFOverlay).
  6. On the controller node, run the GBP in action script (see GBP in action).

Setup compute node

1. Grab devstack from github:

    git clone https://github.com/group-policy/devstack.git -b stable/juno-gbp-odl
    cd devstack
    cp local.conf.compute local.conf


2. Modify the IP addresses in your local.conf file (see the example after this list):

  • ODL_MGR_IP = <odl-controller-ip>
  • HOST_IP = <openstack compute ip>
  • SERVICE_HOST = <openstack controller ip>
  • odl_host = <odl-controller-ip>
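
As on the controller node, a minimal sketch with hypothetical addresses (192.168.56.20 for ODL, 192.168.56.10 for the OpenStack controller, 192.168.56.11 for this compute node; substitute your own):

  ODL_MGR_IP=192.168.56.20
  HOST_IP=192.168.56.11
  SERVICE_HOST=192.168.56.10
  odl_host=192.168.56.20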


3. ./stack.sh

Enable Live VM Migration with devstack

The procedure is similar to the multi-node setup:

  1. Launch the ODL controller (see OpenDaylight Set up).
  2. Use the ODL controller as the NFS server; it provides the devstack controller and compute nodes with shared storage for VM instance data (see NFS Server Setup).
  3. Launch the OpenStack controller node: refer to All-in-one Devstack Installation. Before launching ./stack.sh, ensure the hosts are configured (Host Configuration), the NFS share is mounted (NFS Client Configuration), and libvirt is properly configured (Libvirt Configuration).
  4. Create the compute node VM (see VM Set up). Note: this VM can be smaller; 2 CPU cores and 2 GB RAM are enough.
  5. Set up the compute node (see Setup compute node). Before launching ./stack.sh, again ensure the hosts are configured (Host Configuration), the NFS share is mounted (NFS Client Configuration), and libvirt is properly configured (Libvirt Configuration).
  6. On the controller node, register OFOverlay (see Register OFOverlay).
  7. On the controller node, run the GBP in action script (see GBP in action).
  8. After launching the VMs, perform a live migration:
   # check which compute node the instance is running
   nova-manage vm list   | awk '{print $1,$2,$4,$5}' | column -t
   
   # Run nova live-migration command to move the VM to another compute node:
    nova live-migration web-vm-1 <destination compute node>
   # check the VM state
   nova list
   # validate which compute node the instance is running
   nova-manage vm list   | awk '{print $1,$2,$4,$5}' | column -t
   

NFS Server Setup

  • Create the stack user and install the NFS server packages:
    sudo adduser stack
    sudo apt-get install nfs-kernel-server
    sudo mkdir -p /srv/demo-stack/instances
    sudo chmod o+x /srv/demo-stack/instances
    sudo chown stack:stack /srv/demo-stack/instances
  • Add the following entry in /etc/exports:
    /srv/demo-stack/instances 192.168.0.0/16(rw,fsid=0,insecure,no_subtree_check,async,no_root_squash)
  • Export the share and restart the NFS server daemon:
    sudo exportfs -ra
    sudo service nfs-kernel-server restart
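  • To confirm the export is visible (showmount is part of the standard NFS tooling; localhost here is the NFS server itself):
    showmount -e localhost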

Host Configuration

  • Edit your DNS or /etc/hosts to ensure the devstack controller node and compute node can perform name resolution with each other.
    $ ping HostA
    $ ping HostB
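  • For example, /etc/hosts entries like the following on both nodes (hypothetical names and addresses; substitute your own):
    192.168.56.10  HostA
    192.168.56.11  HostB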

NFS Client Configuration

  • Install NFS client packages
    sudo apt-get install rpcbind nfs-common
  • Create instance data folder and mount the NFS file share
    sudo mkdir /opt/stack
    sudo chown stack:stack /opt/stack
    mkdir -p /opt/stack/data/instances
    sudo mount <your NFS server ip address>:/srv/demo-stack/instances /opt/stack/data/instances
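  • To verify the share is mounted:
    mount | grep /opt/stack/data/instances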

Libvirt Configuration

  • Modify /etc/libvirt/libvirtd.conf file to include the following:
    listen_tls = 0
    listen_tcp = 1
    auth_tcp = "none"
  • Modify /etc/default/libvirt-bin file:
    libvirtd_opts="-d -l"
  • Restart libvirt
    sudo service libvirt-bin restart
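  • To confirm libvirtd is now listening on TCP (virsh ships with libvirt; this is a local TCP connection test):
    virsh -c qemu+tcp://localhost/system list --all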
