Neutron/LBaaS/HowToRun
Latest revision as of 00:22, 4 April 2019

Warning - Deprecated

As of the Queens OpenStack release cycle, neutron-lbaas and neutron-lbaas-dashboard are deprecated. Please see Neutron/LBaaS/Deprecation

Getting the Code

LBaaS introduces changes in the following modules (currently all changes are in the master branch):

  • neutron
  • python-neutronclient
  • horizon
  • devstack

Devstack Setup

Add the following lines to your localrc:

enable_plugin neutron-lbaas https://github.com/openstack/neutron-lbaas.git
enable_plugin octavia https://github.com/openstack/octavia.git
ENABLED_SERVICES+=,q-lbaasv2
ENABLED_SERVICES+=,octavia,o-cw,o-hk,o-hm,o-api

Then run stack.sh

After stack.sh completes, you'll be able to manage your load balancer via the CLI tools and within Horizon.

Ubuntu Packages Setup

Install Octavia (for example, via pip):

pip install octavia 

Then edit service_plugins in the [DEFAULT] section of neutron.conf to enable the service:

sudo sed -i.bak "s/\#\ service_plugins\ \=/service_plugins = neutron.plugins.services.agent_loadbalancer.plugin.LoadBalancerPluginv2/g" /etc/neutron/neutron.conf
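If the substitution succeeded, the [DEFAULT] section of /etc/neutron/neutron.conf should now contain the plugin line:

```
[DEFAULT]
service_plugins = neutron.plugins.services.agent_loadbalancer.plugin.LoadBalancerPluginv2
```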
Finally, enable the Load Balancer section in Horizon by editing /etc/openstack-dashboard/local_settings.py and changing:
OPENSTACK_NEUTRON_NETWORK = {
    'enable_lb': False
}

to

OPENSTACK_NEUTRON_NETWORK = {
    'enable_lb': True
}

Once done, restart your Neutron services and Apache to start using the dashboard.
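On an Ubuntu install, the restart might look like the following sketch (the exact service names depend on your packaging and are assumptions here):

```shell
# Restart the Neutron API server so it loads the new service plugin
sudo service neutron-server restart
# Restart Apache so Horizon picks up the local_settings.py change
sudo service apache2 restart
```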

Topology Setup

Spin up three VMs, two to be servers, and one to be a client.

nova boot --image <image-uuid> --flavor 1 server1
nova boot --image <image-uuid> --flavor 1 server2
nova boot --image <image-uuid> --flavor 1 client

Get the UUID of the private subnet.

neutron subnet-list

Create a Loadbalancer:

neutron lbaas-loadbalancer-create --name lb1 private-subnet

Create a Listener:

neutron lbaas-listener-create --loadbalancer lb1 --protocol HTTP --protocol-port 80 --name listener1

Create a Pool:

neutron lbaas-pool-create --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP --name pool1 

Create Members (using the IPs of server1 and server2):

neutron lbaas-member-create  --subnet private-subnet --address <server1-ip> --protocol-port 80 pool1
neutron lbaas-member-create  --subnet private-subnet --address <server2-ip> --protocol-port 80 pool1

Create a Healthmonitor and associate it with the pool:

neutron lbaas-healthmonitor-create --delay 3 --type HTTP --max-retries 3 --timeout 3 --pool pool1

Note the load balancer's VIP address for use below.
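A quick way to find that address, assuming the LBaaS v2 CLI extension is loaded, is to show the load balancer created above and read its vip_address field:

```shell
# The vip_address field of lb1 is the VIP to test against below
neutron lbaas-loadbalancer-show lb1
```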

Validation

We now have two hosts with a load balancer pointed at them, but those hosts are not serving up any HTTP content.

A simple trick is to use netcat on the hosts to implement a simple webserver. For example, run:

while true; do echo -e 'HTTP/1.0 200 OK\r\nContent-Length: 8\r\n\r\n<servername>' | sudo nc -l -p 80 ; done 

replacing <servername> with "server1" and "server2" as appropriate. Once the server is started, you'll see incoming HTTP GET requests; that's the load balancer health check in action!
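The Content-Length header must match the body size exactly, or some HTTP clients will hang waiting for more data; a seven-character name such as server1 plus the trailing newline is 8 bytes, which you can verify with wc:

```shell
# "server1" is 7 bytes; the trailing newline makes 8, matching Content-Length: 8
printf 'server1\n' | wc -c
```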

If you have Python installed, you can also create an index.html containing the text "server1" or "server2", then in the same directory run:

sudo python -m SimpleHTTPServer 80

Finally, to test real load balancing, from your client, use wget to make sure your requests are load-balanced across server1 and server2 as expected.

wget -O - http://<server1-ip> 
wget -O - http://<server2-ip> 

Then use wget to hit the load balancer's VIP address several times in succession. You should bounce between seeing server1 and server2.

wget -O - http://<vip-ip> 
wget -O - http://<vip-ip> 
wget -O - http://<vip-ip> 
wget -O - http://<vip-ip> 

If you have trouble reaching the VIP directly, try curling it from inside the router's network namespace:

sudo ip netns list
qdhcp-xxx
qrouter-xxx

sudo ip netns exec qrouter-xxx curl -v <vip-ip>

A full list of LBaaS CLI commands is available at Quantum/LBaaS/CLI.

Troubleshooting

LBaaS is implemented, like L3 routing and DHCP, using network namespaces. You can use "ip netns list" to find the namespace named qlbaas-<pool_id>, and then test connectivity from within that namespace.
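For example, using the placeholders from the steps above (the qlbaas- namespace only exists for the namespace-based haproxy driver):

```
# Find the load balancer's namespace
sudo ip netns list | grep qlbaas
# Curl a backend member from inside that namespace
sudo ip netns exec qlbaas-<pool_id> curl -v http://<server1-ip>/
```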

Use "screen -x stack" to view the q-svc ,q-lbaas, o-cw, o-api tabs for errors.

Grep syslog for "Octavia" to see messages from Octavia (though they are quite cryptic!)