__NOTOC__
{{Warning|header='''Warning - Deprecated'''|body='''As of the Queens OpenStack release cycle neutron-lbaas and neutron-lbaas-dashboard are now deprecated. Please see [[Neutron/LBaaS/Deprecation]]'''}}
== Getting the Code ==

LBaaS introduces changes in the following modules (currently all changes are in the master branch):
 
 
* neutron
* python-neutronclient
* horizon
* devstack

== Devstack Setup ==

Add the following lines to your localrc:

<pre>
enable_plugin neutron-lbaas https://github.com/openstack/neutron-lbaas.git
enable_plugin octavia https://github.com/openstack/octavia.git
ENABLED_SERVICES+=,q-lbaasv2
ENABLED_SERVICES+=,octavia,o-cw,o-hk,o-hm,o-api
</pre>
  
Then run stack.sh.

After stack.sh completes, you'll be able to manage your load balancers via the CLI tools and within Horizon.
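
As a quick sanity check, you can list load balancers from the devstack host; an empty list rather than an error indicates the LBaaS v2 plugin is loaded. This assumes devstack's standard openrc credentials file:

<pre>source openrc admin admin
neutron lbaas-loadbalancer-list</pre>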
 
  
== Ubuntu Packages Setup ==

Install octavia using your preferred method, for example with pip:

<pre>pip install octavia</pre>
  
Then edit service_plugins in the [DEFAULT] section of neutron.conf to enable the service:

<pre>sudo sed -i.bak "s/\#\ service_plugins\ \=/service_plugins = neutron.plugins.services.agent_loadbalancer.plugin.LoadBalancerPluginv2/g" /etc/neutron/neutron.conf</pre>
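
You can confirm the change took effect:

<pre>grep "^service_plugins" /etc/neutron/neutron.conf</pre>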
  
Finally, enable the Load Balancer section in Horizon by editing <code>/etc/openstack-dashboard/local_settings.py</code> and changing:
  
<pre>
OPENSTACK_NEUTRON_NETWORK = {
    'enable_lb': False
}
</pre>

to

<pre>
OPENSTACK_NEUTRON_NETWORK = {
    'enable_lb': True
}
</pre>

Once done, restart your Neutron services and Apache to apply the changes.
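
On an Ubuntu package-based install, that restart might look like the following (service names vary by release and packaging, so treat this as a sketch):

<pre>sudo service neutron-server restart
sudo service apache2 restart</pre>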
== Topology Setup ==

Spin up three VMs: two to act as servers and one to act as a client.

<pre>nova boot --image <image-uuid> --flavor 1 server1
nova boot --image <image-uuid> --flavor 1 server2
nova boot --image <image-uuid> --flavor 1 client</pre>

Get the UUID of the private subnet:

<pre>neutron subnet-list</pre>
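
If you want to script the remaining steps, one way to capture the UUID in a shell variable (assuming the subnet is named private-subnet, as in a default devstack) is:

<pre>SUBNET_ID=$(neutron subnet-list | awk '/ private-subnet / {print $2}')
echo $SUBNET_ID</pre>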

Create a load balancer:

<pre>neutron lbaas-loadbalancer-create --name lb1 private-subnet</pre>
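
Provisioning can take a little while; before continuing, you can check that the load balancer's provisioning_status has reached ACTIVE:

<pre>neutron lbaas-loadbalancer-show lb1</pre>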

Create a listener:

<pre>neutron lbaas-listener-create --loadbalancer lb1 --protocol HTTP --protocol-port 80 --name listener1</pre>

Create a pool:

<pre>neutron lbaas-pool-create --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP --name pool1</pre>

Create members (using the IPs of server1 and server2):

<pre>
neutron lbaas-member-create --subnet private-subnet --address <server1-ip> --protocol-port 80 pool1
neutron lbaas-member-create --subnet private-subnet --address <server2-ip> --protocol-port 80 pool1
</pre>

Create a health monitor and associate it with the pool:

<pre>neutron lbaas-healthmonitor-create --delay 3 --type HTTP --max-retries 3 --timeout 3 --pool pool1</pre>

Note the load balancer's VIP address for use below; it appears in the output of <code>neutron lbaas-loadbalancer-show lb1</code>.
== Validation ==

We now have two hosts with a load balancer pointed at them, but those hosts are not serving any HTTP content.

A simple trick is to use netcat on each host to implement a minimal web server. For example, run:

<pre>while true; do echo -e 'HTTP/1.0 200 OK\r\nContent-Length: 8\r\n\r\n<servername>' | sudo nc -l -p 80; done</pre>

replacing <servername> with "server1" and "server2" as appropriate (each is 7 characters, so with the trailing newline from echo the Content-Length of 8 is correct). Once the server is started, you'll see incoming HTTP GET requests; that's the load balancer health check in action!

If you have Python installed, you can instead create an index.html containing the text "server1" or "server2" and, in the same directory, run:

<pre>sudo python -m SimpleHTTPServer 80</pre>
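
Note that SimpleHTTPServer is the Python 2 module name; on a Python 3 host the equivalent is:

<pre>sudo python3 -m http.server 80</pre>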

First, from your client, use wget to verify that each server responds directly:

<pre>wget -O - http://<server1-ip>
wget -O - http://<server2-ip></pre>

Then, to test real load balancing, use wget to hit the load balancer's VIP several times in succession. The responses should alternate between server1 and server2.

<pre>wget -O - http://<vip-ip>
wget -O - http://<vip-ip>
wget -O - http://<vip-ip>
wget -O - http://<vip-ip></pre>

If you have trouble reaching the VIP from the client, try issuing the request from the router's network namespace instead:

<pre>sudo ip netns list
qdhcp-xxx
qrouter-xxx
</pre>

<pre>sudo ip netns exec qrouter-xxx curl -v <vip-ip></pre>

A full list of LBaaS CLI commands is available at [[Quantum/LBaaS/CLI]].
== Troubleshooting ==

LBaaS is implemented, like the L3 and DHCP agents, using network namespaces. You can use "ip netns list" to find the namespace named qlbaas-<pool_id>, and then test connectivity from within that namespace.
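
For example (substituting your actual pool UUID and VIP address):

<pre>sudo ip netns exec qlbaas-<pool_id> ip addr
sudo ip netns exec qlbaas-<pool_id> curl -v http://<vip-ip></pre>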

Use "screen -x stack" to check the q-svc, q-lbaas, o-cw, and o-api screen windows for errors.

Grep syslog for "Octavia" to see messages from Octavia (though they are quite cryptic!)
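
For example, on Ubuntu:

<pre>grep -i octavia /var/log/syslog</pre>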
