

This page captures our initial designs for making Libra a backend to the Neutron LBaaS service. This is a work in progress.

Background info on Neutron LBaaS:

  • https://wiki.openstack.org/wiki/Neutron/LBaaS
  • https://wiki.openstack.org/wiki/Neutron/LBaaS/HowToRun
  • https://wiki.openstack.org/wiki/Neutron/LBaaS/LoadbalancerInstance/Discussion
  • https://wiki.openstack.org/wiki/Neutron/LBaaS/requirements

Background info on Libra:

  • https://github.com/stackforge/libra
  • http://libra.readthedocs.org/en/latest/index.html

Logical diagram

[Image: Neutron LBaaS Libra arch.gif — logical architecture diagram]

Use Cases/User Stories for Managed Services

  • These are use cases beyond the standard ones (more details about the standard ones to be added at some point)
    • Integration with metering, logging, Horizon, monitoring, automatic configuration backups, etc.
  • The load-balancer service monitors the health of the load balancers in use by the various tenants and performs fail-over automatically if a load balancer is failing or unresponsive.
    • Monitoring is set up automatically by the service when the load balancer is created. No user interaction is needed.
    • Fail-over is done by allocating a new load balancer from a pool of stand-by load balancers. This means that:
      • A tenant does not have to pay for additional load balancers to achieve HA in their setup.
      • Replacing the load balancer is fast, since we are not creating a new load balancer but allocating one from an existing pool.
    • The service maintains the pool of stand-by load balancers at a given size.
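The stand-by pool behaviour described above can be sketched in shell. This is illustrative only: Libra is not implemented in shell, and all names here (STANDBY, allocate_lb, replenish) are invented for the sketch.

```shell
#!/bin/bash
# Sketch of the stand-by pool logic described above (invented names;
# Libra's real implementation is not shown here).

TARGET_SIZE=2
STANDBY=(lb-1 lb-2)   # pre-built stand-by load balancers, kept ready
NEXT_ID=3

replenish() {
    # Keep the stand-by pool at its target size by building new LBs
    # (the slow step happens here, in the background, not during fail-over).
    while [ "${#STANDBY[@]}" -lt "$TARGET_SIZE" ]; do
        STANDBY+=("lb-$NEXT_ID")
        NEXT_ID=$((NEXT_ID + 1))
    done
}

allocate_lb() {
    # Fast path used during fail-over: hand out an already-built LB,
    # then top the pool back up to its target size.
    ALLOCATED="${STANDBY[0]}"
    STANDBY=("${STANDBY[@]:1}")
    replenish
}

allocate_lb
echo "replacement LB: $ALLOCATED, stand-by pool size: ${#STANDBY[@]}"
```

The point of the sketch is the ordering: the tenant's failed LB is replaced from the ready pool first, and the expensive "build a new LB" work only refills the stand-by pool afterwards.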

Open Questions:

  • Can the single eth1 interface/connection between the tenant and LBaaS support all of the incoming data as well as the outgoing data to/from the tenant's private network?
  • Can Neutron LBaaS's scheduler schedule HAProxy LBs from a stand-by pool of HAProxy LBs? This would consist of (1) allocating an HAProxy LB from the pool and (2) attaching it to port eth1 on Tenant A's network. (From IRC: enikanorov__> sballe: scheduling right now chooses the agent; enikanorov__> agent will spawn a new process anyway)

Some initial ideas

These are my first thoughts on how this can be done. There is still a lot that needs to be ironed out.

neutron subnet-list (Get the UUID of the private subnet)

Step 1: Tenant 1 creates a pool attached to a specific subnet (<pool-name>, <subnet-id>)

neutron lb-pool-create --lb-method ROUND_ROBIN --name mypool --protocol HTTP --subnet-id <subnet-id>

  • The lb-pool-create API call goes to the Neutron API server (subnet_id)
    • Neutron creates port eth1 on Tenant A's private network
    • Libra allocates an LB from the stand-by pool and attaches it to port eth1 on Tenant A's private network. The LBaaS tenant will need to have admin privileges, or maybe we can use Trusts.
      • eth0 is used for the Libra control network
      • eth1 is used for Tenant A's private network
      • Libra sets up health monitoring automatically. (We need to do a deeper dive into this and understand the trade-offs between using Libra's monitoring and Neutron LBaaS's monitoring feature.)
  • The request is "forwarded" to the Libra API server to allocate an LB from the stand-by pool and to allocate a VIP from the VIP pool (who owns the VIP pool: Neutron or Libra?)

Step 2: Tenant 1 creates a VIP assigned to the pool (<vip-name>, <subnet-id>, <pool-name>)

neutron lb-vip-create --name myvip --protocol-port 80 --protocol HTTP --subnet-id <subnet-id> mypool

  • Assign the VIP that has been allocated from Libra?

Step 3: Tenant 1 creates members (using the IPs of server1 and server2) and adds them to the pool

neutron lb-member-create --address <server1-ip> --protocol-port 80 mypool
neutron lb-member-create --address <server2-ip> --protocol-port 80 mypool
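Putting the three steps together, the whole sequence might look like the shell sketch below. It cannot run without a working Neutron LBaaS deployment, and the subnet-ID extraction with awk is an assumption about the CLI's table output (a row containing "private", with the UUID in the second whitespace-separated field) — verify against your client's actual output.

```shell
# Sketch of the full workflow (assumes a deployed Neutron LBaaS with the
# Libra backend; SERVER1_IP and SERVER2_IP are placeholders you must set).

# Get the UUID of the tenant's private subnet (field position is an
# assumption about the table output format).
SUBNET_ID=$(neutron subnet-list | awk '/private/ {print $2}')

# Step 1: create a pool on that subnet
neutron lb-pool-create --lb-method ROUND_ROBIN --name mypool \
        --protocol HTTP --subnet-id "$SUBNET_ID"

# Step 2: create a VIP on the same subnet and bind it to the pool
neutron lb-vip-create --name myvip --protocol-port 80 \
        --protocol HTTP --subnet-id "$SUBNET_ID" mypool

# Step 3: add the two backend servers as pool members
for ip in "$SERVER1_IP" "$SERVER2_IP"; do
    neutron lb-member-create --address "$ip" --protocol-port 80 mypool
done
```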

Use Case 1:

  • A web request comes in via the external network, through Tenant A's router and private network, and reaches the LB, which load-balances requests to VM1 and VM2.

Implementation Diagram

This is a work in progress.

Swim lanes for create, delete, etc. API calls

This is a work in progress.