Octavia/Meeting Minutes

The following page summarizes the meeting minutes from the weekly Octavia meetings.

2015-08-26 Weekly meeting:

This meeting was held via IRC

  • The meeting minutes can be found here: http://eavesdrop.openstack.org/meetings/octavia/2015/octavia.2015-08-26-20.00.html
  • Full transcripts of the meeting can be found here: http://eavesdrop.openstack.org/meetings/octavia/2015/octavia.2015-08-26-20.00.log.html

2015-08-19 Weekly meeting:

This meeting was held via IRC

2015-08-12 Weekly meeting:

This meeting was held via IRC

2015-08-05 Weekly meeting:

This meeting was held via IRC

2015-07-29 Weekly meeting:

This meeting was held via IRC

2015-07-22 Weekly meeting:

This meeting was held via IRC

2015-07-08 Weekly meeting:

This meeting was held via IRC

2015-07-01 Weekly meeting:

This meeting was held via IRC

2015-06-17 Weekly meeting:

This meeting was held via IRC

2015-06-10 Weekly meeting:

This meeting was held via IRC

2015-06-03 Weekly meeting:

This meeting was held via IRC

2015-05-27 Weekly meeting:

This meeting was held via IRC

2015-05-13 Weekly meeting:

This meeting was held via IRC

2015-04-29 Weekly meeting:

This meeting was held via IRC

2015-04-22 Weekly meeting:

This meeting was held via IRC

2015-04-15 Weekly meeting:

This meeting was held via IRC

2015-04-08 Weekly meeting:

This meeting was held via IRC

2015-04-01 Weekly meeting:

This meeting was held via IRC

2015-03-25 Weekly meeting:

This meeting was held via IRC

2015-03-18 Weekly meeting:

This meeting was held via IRC

2015-03-11 Weekly meeting:

This meeting was held via IRC

2015-03-04 Weekly meeting:

This meeting was held via IRC

2015-02-25 Weekly meeting:

This meeting was held via IRC

2015-02-18 Weekly meeting:

This meeting was held via IRC

2015-02-11 Weekly meeting:

This meeting was held via IRC

2015-01-28 Weekly meeting:

This meeting was held via IRC

2015-01-21 Weekly meeting:

This meeting was held via IRC

2015-01-14 Weekly meeting:

This meeting was held via IRC

2015-01-07 Weekly meeting:

This meeting was held via IRC

2014-12-10 Weekly meeting:

This meeting was held via IRC

2014-12-03 Weekly meeting:

This meeting was held via IRC

2014-11-19 Weekly meeting:

This meeting was held via IRC

2014-11-12 Weekly meeting:

This meeting was held via IRC

2014-10-29 Weekly meeting:

This meeting was held via IRC

2014-10-22 Weekly meeting:

This meeting was held via IRC

2014-10-15 Weekly meeting:

This meeting was held via IRC

2014-10-08 Weekly meeting:

This meeting was held via IRC

2014-10-01 Weekly meeting:

This meeting was held via IRC

2014-09-24 Weekly meeting:

This meeting was held via IRC

2014-09-17 Weekly meeting:

This meeting was held via IRC.

2014-09-10 Weekly meeting:

This meeting was held via IRC.

2014-09-03 Weekly meeting:

This meeting was held via IRC.

2014-08-27 Weekly meeting:

This meeting was held via IRC.

2014-08-20 Weekly meeting:

  1. Revisit some basic features of the Load Balancing as a Service object model and API.
    1. Brandon advocated for the load balancer as the only root object.
      • The original reason for having multiple root objects was to enable sharing.
    2. Will we allow sharing of pools within a listener?
      • Stephen suggests offering sharing to the customer for its benefits:
        • It provides simplicity to the user.
        • Example: L7 rules all referencing the same pool; it is simpler for the user to handle it.
        • Without sharing, there may also be a series of unnecessary extra health checks.
      • German wants pools to be placed on the load balancer.
        • This allows sharing pools between different listeners.
        • Counter-argument by Stephen: sharing pools between HTTP/HTTPS load balancers would be really rare, since people would normally use a different port, and adding another health check wouldn't be a big deal. However, the proposed L7 policies, where a complicated rule set causes pool duplication for an "or" condition, would increase the health-check requirement. (Refer to the email thread on the mailing list.)
    3. If we want many-to-many relationships, there will be more root objects than just the load balancer.
      • Moving to many-to-many after establishing one root object would be difficult.
  2. Get consensus on initial project direction and implementation details
    1. One HAProxy instance per load balancer, or one HAProxy instance per listener?
      • Per the mailing-list discussion: keeping each listener on its own HAProxy instance increases performance on a single Octavia VM.
        • Benchmarks are desired to support this (German has included this in his next sprint).
      • It was suggested to shelve this until the benchmarks are researched.
      • Future discussions on the ML for this decision.
      • A concern from Vijay: with one HAProxy instance per listener, would that affect scalability?
        • It was suggested to move this to the mailing list.
  3. When decisions (like #2) have been made, where should this be stored, wiki or in code?
    1. The bad thing about the wiki is that if OpenStack overhauls its documentation, the decision information might get lost.
    2. The bad thing about code is that it's harder to find and read.
    3. The decision was to keep it in the wiki.
  4. Whose responsibility is it to update the wiki with these decisions?
    1. For now, Stephen has been updating the wiki
    2. In the future, the people involved in a decision will designate someone to update the wiki at the time.
  5. What else is needed to change in the 0.5 design before it can be approved and implementation can begin?
    1. Action item for everyone: Review this design before next week's meeting. Keep in mind the document is supposed to be somewhat general.
  6. Start going over action items (https://etherpad.openstack.org/p/Octavia_Action_Items)
    1. Action Item for everyone: Review the migration information proposed by Brandon.
    2. Per the link above, start from item 1 and work down the list.
    3. How can we decide who is working on what?
      • Get Launchpad set up for Octavia to allow blueprint additions, and thus allow people to contribute to a specific effort.
    4. We need a list of the things that are required to be done and of what needs to be hooked up, and how (the glue between the different pieces).
    5. What kind of communication between different components?
      • XMLRPC?
      • A REST interface?
      • Something different?
    6. Brandon working on data models and SQLAlchemy models (see the sketch after this list).
    7. Stephen working on Octavia VM API interface, including what technology to use
    8. Doug working on Skeleton Structure
    9. Brandon working on launchpad and blueprints issue as well
    10. Stephen will also prioritize this list
    11. Topics that need discussion should be raised and discussed on the mailing list.
    12. Michael Johnson working on the base image scripts
      • Will we use an image we've built, or set things up after creation of a VM?
        • Start with a base image that pre-packages the Octavia scripts and such, instead of having cloud-init do all the downloading work. This saves time and resources.
        • Ideally we would have a place in the Octavia repo with a script that, when run, would create an image.
      • The images will potentially change based on flavoring options.
        • This includes custom images via customer requirements
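
As a companion to item 6 above, here is a minimal sketch of what the "load balancer as the only root object" model from item 1 could look like as SQLAlchemy models. This is not the project's actual schema; every class, table, and column name here is an illustrative assumption:

  # Hypothetical sketch (illustrative names, not Octavia's real schema) of
  # "load balancer as the only root object": listeners and pools both hang
  # off the load balancer, and several listeners may share one pool, so a
  # single health check per pool suffices even when listeners reuse it.
  from sqlalchemy import Column, ForeignKey, Integer, String
  from sqlalchemy.orm import declarative_base, relationship

  Base = declarative_base()

  class LoadBalancer(Base):
      __tablename__ = "load_balancer"
      id = Column(Integer, primary_key=True)
      name = Column(String(255))
      # The single root object: everything else hangs off the load balancer.
      listeners = relationship("Listener", back_populates="load_balancer")
      pools = relationship("Pool", back_populates="load_balancer")

  class Pool(Base):
      __tablename__ = "pool"
      id = Column(Integer, primary_key=True)
      load_balancer_id = Column(Integer, ForeignKey("load_balancer.id"))
      load_balancer = relationship("LoadBalancer", back_populates="pools")

  class Listener(Base):
      __tablename__ = "listener"
      id = Column(Integer, primary_key=True)
      protocol_port = Column(Integer)
      load_balancer_id = Column(Integer, ForeignKey("load_balancer.id"))
      # Several listeners may reference the same pool (pool sharing).
      default_pool_id = Column(Integer, ForeignKey("pool.id"))
      load_balancer = relationship("LoadBalancer", back_populates="listeners")
      default_pool = relationship("Pool")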

After meeting:

  • Q: Are we going to be incubated?
    • A: Yes, we are basically destined for incubation, period. Note: we will assuredly not be in Juno.
  • Q: Why be part of Neutron? Why not just be our own program?
    • A: We want to distance ourselves from Neutron to some extent. We will formalize this via a networking driver in Octavia. Note: we do not want to burn any bridges here, so we want to be appropriate in our spin-out process.

2014-08-13 Weekly meeting:

  1. Discuss the future of Octavia in light of the Neutron incubator project proposal.
    1. There are many problems with the Neutron incubator as currently described.
    2. The political happenings in Neutron make it unlikely that our LBaaS patches under review will land in Juno.
    3. The incubator proposal doesn't affect Octavia's development direction, given the inclination to distance ourselves from Neutron proper.
    4. With the Neutron incubator proposal in its current scope, the efforts of the people pushing the Neutron LBaaS patches forward should be refocused into Octavia.
  2. Discuss operator networking requirements (carry-over from last week)
    1. Both HP and Rackspace seem to agree that as long as Octavia uses Neutron-like floating IPs, their networks should be able to work with the proposed Octavia topologies.
    2. Blue Box also wanted to meet with Rackspace's networking team during the operator summit a few weeks from now to thoroughly discuss network concerns.
  3. Discuss v0.5 component design proposal [1]
    1. Notification of back-end node health (i.e., a node being offline) isn't required for 0.5, but is a must-have later.
    2. Notification of load balancer health (HAProxy, etc.) is definitely a requirement in 0.5 (see the heartbeat sketch after this list).
    3. Still looking for more feedback on the proposal itself
  4. Discuss the timeline for moving these meetings to IRC.
    1. Most members are in favor of keeping the WebEx meetings for the time being.
    2. One major point was that other OpenStack/StackForge projects use video meetings as their "primary" venue as well.
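
Regarding item 3.2 above, here is a minimal sketch of what a load balancer health notification could look like. The UDP transport, message fields, liveness check, and controller address are all illustrative assumptions, not a decided design:

  # Hypothetical heartbeat sender running on an Octavia VM (sketch only).
  # It periodically reports whether the local HAProxy process is alive.
  import json
  import socket
  import subprocess
  import time

  CONTROLLER_ADDR = ("controller.example.com", 5555)  # illustrative endpoint

  def haproxy_is_alive():
      # Simple liveness check; a real check might query HAProxy's stats
      # socket instead of looking for the process.
      return subprocess.call(["pidof", "haproxy"]) == 0

  def send_heartbeat(sock):
      message = {
          "octavia_vm_id": "vm-1234",  # illustrative identifier
          "haproxy_alive": haproxy_is_alive(),
          "timestamp": time.time(),
      }
      sock.sendto(json.dumps(message).encode("utf-8"), CONTROLLER_ADDR)

  if __name__ == "__main__":
      sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
      while True:
          send_heartbeat(sock)
          time.sleep(10)  # the heartbeat interval is arbitrary here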


2014-08-06 Weekly meeting:

  1. Octavia Constitution and Project Direction Documents (Road map)
    1. The constitution and road map will potentially be adopted after another couple of days, giving those who were busy more time to review the information.
  2. Octavia Design Proposals
    1. The difference between versions 0.5 and 1.0 isn't huge.
    2. Version 2 has many network topology changes and Layer 4 routing.
      • This includes N-node active-active.
      • We would like to avoid Layer 2 connectivity with load balancers (it is included in version 1, however).
      • Layer 4 router driver
      • Layer 4 router controller
      • Long-term solution
    3. After refining the Version 1 document (with some scrutiny), all changes will be propagated to the Version 2 document.
    4. Version 0.5 is unpublished.
    5. The entire control layer (anything connected to the intermediate message bus in version 1) will be collapsed down to one daemon.
      • No scalable control, but scalable service delivery.
      • Version 1 will be the first large-operator-compatible version, with both scalable control and scalable service delivery.
      • 0.5 will be a good start
        • laying out the groundwork
        • a rough topology for end users
        • must be approved by the networking teams for each contributing company
    6. The portions under the control of Neutron LBaaS are the user API and the driver (for Neutron LBaaS).
    7. If Neutron LBaaS is a sufficient front end (i.e., the user API doesn't suck), then Octavia will be kept as a vendor driver.
    8. Potentially including a REST API on top of Octavia
      • Octavia is initially just a vendor driver; there is no real desire for another API in front of Octavia.
      • If someone wants it, the work is "trivial" and can be done in another project at another time
    9. Octavia should have a loose coupling with Neutron; use a shim for network connectivity (one specifically for Neutron communication at the start). See the driver sketch after this list.
      • This is going to hold any "dirty hacks" that would be required to get something done, keeping Octavia clean
        • Example: changing the mac address on a port
  3. Operator Network Topology Requirements
    1. One requirement is floating IPs.
    2. IPv6 is in demand, but is currently not supported reliably in Neutron.
      • IPv6 would be represented as a different load balancer entity, possibly co-located with another load balancer.
    3. Network interface pluggability (potentially).
    4. Sections concerning front-end connectivity should be forwarded to each company's network specialists for review
      • Share findings on the mailing list, dissect the proposals using that information, and comment on what requirements need to be added, etc.
  4. HA/Failover Options/Solutions
    1. Rackspace may have a solution to this, but the conversation will be pushed off to the next meeting (at least)
      • Will gather more information from another member at Rackspace to provide to the ML for initial discussions.
    2. One option for HA: Spare pool option (similar to Libra)
      • Poor recovery time is a big problem
    3. Another option for HA: Active/Passive
      • Blue Box uses a one-active, one-passive configuration and has sub-second failover; however, it is not resource-efficient.
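
Regarding item 2.9 above, here is a minimal sketch of what the network-connectivity shim could look like. The interface and its method names are illustrative assumptions, not the project's actual driver API:

  # Hypothetical network driver shim (sketch only). Octavia code would
  # program against NetworkDriver, while Neutron-specific "dirty hacks"
  # (e.g. changing the MAC address on a port) stay confined to the
  # Neutron implementation.
  import abc

  class NetworkDriver(abc.ABC):
      @abc.abstractmethod
      def allocate_vip(self, load_balancer_id):
          """Reserve a front-end (floating) IP for the load balancer."""

      @abc.abstractmethod
      def plug_port(self, vm_id, network_id):
          """Attach an Octavia VM to a network."""

      @abc.abstractmethod
      def unplug_port(self, vm_id, network_id):
          """Detach an Octavia VM from a network."""

  class NeutronNetworkDriver(NetworkDriver):
      """Neutron-specific shim; any workaround needed to get something
      done through Neutron would be confined to this class."""

      def allocate_vip(self, load_balancer_id):
          raise NotImplementedError("sketch only")

      def plug_port(self, vm_id, network_id):
          raise NotImplementedError("sketch only")

      def unplug_port(self, vm_id, network_id):
          raise NotImplementedError("sketch only")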

Questions:

  • Q: What is the expectation for a release time frame?
    • A: Wishful thinking; an Octavia version 0.5 beta for Juno (probably not, but it would be awesome to push for that).

Notes:

  • We need to pressure the Neutron core reviewers to review the Neutron LBaaS changes so they can get merged.
  • The Version 2 front-end topology is different from the Version 1 topology. Please review them individually and thoroughly.