= How to run LBaaS on DevStack =
== Getting the Code ==

Currently, the quantum, python-quantumclient, and devstack code are all under review. Please pull from the following reviews to get the latest code:

* quantum: https://review.openstack.org/#/c/22794/
* python-quantumclient: https://review.openstack.org/#/c/22922/
* devstack: https://review.openstack.org/#/c/22937/

== System Setup ==

Install HAProxy:

<pre>sudo apt-get install haproxy</pre>

Add the following lines to your localrc:

<pre>enable_service q-lbaas
Q_SERVICE_PLUGINS=quantum.plugins.services.loadbalancer.plugin.LoadBalancerPlugin</pre>

Then re-run stack.sh.
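If you want to pull the exact patchsets from the reviews listed under Getting the Code into your local checkouts before running stack.sh, they can be fetched straight from Gerrit. A minimal sketch, assuming the standard Gerrit ref layout (refs/changes/&lt;last two digits of change&gt;/&lt;change&gt;/&lt;patchset&gt;); gerrit_ref is a hypothetical helper, and the patchset numbers are illustrative:

```shell
# Hypothetical helper (illustration only): build the Gerrit fetch ref for a
# given change number and patchset. Gerrit publishes each patchset at
# refs/changes/<last two digits of change>/<change>/<patchset>.
gerrit_ref() {
  local change=$1 patchset=$2
  printf 'refs/changes/%s/%s/%s\n' "${change: -2}" "$change" "$patchset"
}

# For example, patchset 1 of the quantum review (patchset number illustrative):
gerrit_ref 22794 1    # prints refs/changes/94/22794/1

# Then, inside your quantum checkout:
#   git fetch https://review.openstack.org/openstack/quantum "$(gerrit_ref 22794 1)"
#   git checkout FETCH_HEAD
```

Repeat with the matching repository URL for the python-quantumclient and devstack reviews.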
 
  
== Topology Setup ==

Spin up three VMs: two to be servers and one to be a client.

<pre>nova boot --image <image-uuid> --flavor 1 server1
nova boot --image <image-uuid> --flavor 1 server2
nova boot --image <image-uuid> --flavor 1 client</pre>

Get the UUID of the private subnet:

<pre>quantum subnet-list</pre>
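If you prefer to grab the subnet UUID non-interactively, the id column can be pulled out of the table with awk. A sketch, assuming the standard ASCII-table output of python-quantumclient; extract_subnet_id and the sample table below are illustrative, not real output:

```shell
# Illustrative helper: read a "quantum subnet-list" table on stdin and print
# the id column of the row whose CIDR matches the first argument.
extract_subnet_id() {
  awk -F'|' -v cidr="$1" 'index($0, cidr) { gsub(/ /, "", $2); print $2 }'
}

# Made-up sample standing in for "quantum subnet-list" output:
extract_subnet_id 10.0.0.0/24 <<'EOF'
+-----------+---------+---------------+
| id        | name    | cidr          |
+-----------+---------+---------------+
| 1111-2222 | private | 10.0.0.0/24   |
| 3333-4444 | public  | 172.24.4.0/24 |
+-----------+---------+---------------+
EOF
# prints 1111-2222
```

Against a live deployment this would be piped in directly: quantum subnet-list | extract_subnet_id 10.0.0.0/24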
  
Create a pool:

<pre>quantum lb-pool-create --lb-method ROUND_ROBIN --name mypool --protocol HTTP --subnet-id <subnet-id></pre>

Create members, using the IPs of server1 and server2:

<pre>nova list

quantum lb-member-create --address <server1-ip> --protocol-port 80 mypool
quantum lb-member-create --address <server2-ip> --protocol-port 80 mypool</pre>

Create a health monitor and associate it with the pool:

<pre>quantum lb-healthmonitor-create --delay 3 --type HTTP --max-retries 3 --timeout 3
quantum lb-healthmonitor-associate <healthmonitor-uuid> mypool</pre>

Create a VIP:

<pre>quantum lb-vip-create --name myvip --protocol-port 80 --protocol HTTP --subnet-id <subnet-id> mypool</pre>

Note the VIP address for use below.
== Validation ==

We now have two hosts with a load balancer pointed at them, but those hosts are not yet serving any HTTP content.

A simple trick is to use netcat on each host to implement a minimal webserver. For example, run:

<pre>while true; do echo -e 'HTTP/1.0 200 OK\r\n\r\n<servername>' | sudo nc -l -p 80; done</pre>

replacing <servername> with "server1" or "server2" as appropriate (if you have Python installed, you can also use this simple webserver script http://paste.openstack.org/show/32558/ and create an index.html in each server's cwd containing the text "server1" or "server2").
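The exact bytes the loop sends matter: the \r\n\r\n is the blank line that separates the status line from the body. A small sketch factoring the response out so it can be inspected before being wired up to nc; http_response is an illustrative name, not an existing command:

```shell
# Illustrative: the minimal HTTP/1.0 response each fake server returns.
# The \r\n\r\n is the required blank line between the status line and body.
http_response() { printf 'HTTP/1.0 200 OK\r\n\r\n%s\n' "$1"; }

# On server1 the serving loop then becomes:
#   while true; do http_response server1 | sudo nc -l -p 80; done
http_response server1
```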
From your client, use wget to make sure you can access server1 and server2 as expected:

<pre>wget -O - http://<server1-ip>
wget -O - http://<server2-ip></pre>

Then use wget to hit the VIP several times in succession; you should alternate between seeing server1 and server2:

<pre>wget -O - http://<vip-ip>
wget -O - http://<vip-ip>
wget -O - http://<vip-ip>
wget -O - http://<vip-ip></pre>
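To check the round robin without eyeballing the output, count the distinct bodies returned. A sketch; distinct_backends is an illustrative helper, and it assumes each backend returns its own name, as set up above:

```shell
# Illustrative: count how many different response bodies appear on stdin,
# one body per line. With two healthy members and round robin, expect 2.
distinct_backends() { sort -u | wc -l | tr -d ' '; }

# Against a live VIP this would be:
#   for i in 1 2 3 4; do wget -qO - http://<vip-ip>; done | distinct_backends
printf 'server1\nserver2\nserver1\nserver2\n' | distinct_backends    # prints 2
```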
A full list of LBaaS CLI commands is available at [[Quantum/LBaaS/CLI]].
== Troubleshooting ==

Like the L3 and DHCP agents, LBaaS is implemented using network namespaces. Use "ip netns list" to find the namespace named qlbaas-<pool_id>, then test connectivity from within that namespace.

Use "screen -x stack" to view the q-svc and q-lbaas tabs for errors.

Grep syslog for "haproxy" to see messages from HAProxy (though they are quite cryptic!).
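The namespace name is derived mechanically from the pool id, so it can be scripted. A sketch; lbaas_netns is an illustrative helper, and the pool id below is made up:

```shell
# Illustrative: the LBaaS namespace for a pool is just "qlbaas-" + pool id.
lbaas_netns() { printf 'qlbaas-%s\n' "$1"; }

lbaas_netns 1234-abcd    # prints qlbaas-1234-abcd

# Then test connectivity from inside the namespace, e.g.:
#   sudo ip netns exec "$(lbaas_netns <pool-id>)" wget -qO - http://<vip-ip>
```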

Revision as of 03:07, 27 February 2013
