https://wiki.openstack.org/w/api.php?action=feedcontributions&user=Russellb&feedformat=atomOpenStack - User contributions [en]2024-03-28T23:31:48ZUser contributionsMediaWiki 1.28.2https://wiki.openstack.org/w/index.php?title=Governance/InteropWG&diff=87236Governance/InteropWG2015-08-03T13:51:04Z<p>Russellb: /* Current Committee Participants */</p>
<hr />
<div>This committee was formed during the OpenStack Icehouse Summit in Hong Kong by board resolution on November 4, 2013.<br />
<br />
''DefCore sets base requirements by defining 1) capabilities, 2) code and 3) must-pass tests for all OpenStack products. This definition uses community resources and involvement to drive interoperability by creating the minimum standards for products labeled "OpenStack."''<br />
<br />
Our mission is to define "OpenStack Core" as chartered by the by-laws.<br />
<br />
== Important Artifacts ==<br />
# [https://github.com/openstack/defcore/blob/master/doc/source/process/Lexicon.rst Terms Definition]<br />
# [https://github.com/openstack/defcore/blob/master/doc/source/process/CoreDefinition.rst 10 Core Principles] (board approved Hong Kong Summit)<br />
# [https://github.com/openstack/defcore/blob/master/doc/source/process/PlatformCap.rst Capability Levels: Component and Platform] (board approved October 2014)<br />
# [https://github.com/openstack/defcore/blob/master/doc/source/process/CoreCriteria.rst 12 Scoring Criteria] (board approved Atlanta Summit)<br />
# [https://github.com/openstack/defcore/blob/master/doc/source/process/DesignatedSections.rst 10 Designated Sections Principles] (board approved December 2014)<br />
# [https://github.com/openstack/defcore/blob/master/doc/source/process/GovernanceProcess.rst DefCore Governance]<br />
# [https://github.com/openstack/defcore/blob/master/doc/source/process/2015A.rst DefCore Process]<br />
# Capabilities & Sections<br />
## [https://github.com/openstack/defcore/blob/master/2015.03.rst 2015.03] (review [https://github.com/openstack/defcore/blob/master/2015.03.json JSON] for details)<br />
## [https://github.com/openstack/defcore/blob/master/2015.04.rst 2015.04] (review [https://github.com/openstack/defcore/blob/master/2015.04.json JSON] for details)<br />
## [https://github.com/openstack/defcore/blob/master/2015.05.rst 2015.05] (review [https://github.com/openstack/defcore/blob/master/2015.05.json JSON] for details)<br />
## [https://github.com/openstack/defcore/blob/master/2015.next.json 2015.next ]<br />
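The guideline JSON files linked above are machine-readable, so a few lines of Python are enough to pull out required capabilities. The structure below is a simplified illustration only; the actual schema in the defcore repository has more fields:

```python
import json

# Illustrative stand-in for a DefCore-style guideline file such as
# 2015.03.json; the real schema differs in detail.
sample = json.loads("""
{
  "id": "2015.03",
  "components": {
    "compute": {"required": ["compute-servers"], "advisory": ["compute-keypairs"]},
    "object":  {"required": ["objectstore-object"], "advisory": []}
  }
}
""")

def required_capabilities(guideline):
    """Collect the required capabilities across all components."""
    caps = []
    for name in sorted(guideline["components"]):
        caps.extend(guideline["components"][name]["required"])
    return caps

print(required_capabilities(sample))  # → ['compute-servers', 'objectstore-object']
```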
<br />
== Objective / Scope ==<br />
<br />
The DefCore charter covers how the OpenStack brand is applied for commercial use. Initially, the focus is on "what is core" and sustaining that definition over time. The scope will likely expand, since the brand is an ongoing concern related to specialized marks and other use cases.<br />
<br />
The community uses the OpenStack brand in three ways, including when referring to projects:<br />
# General community use of the mark<br />
# Project-specific use associated with development activity<br />
# DefCore-governed commercial use<br />
<br />
While the first two of these uses are out of scope for DefCore, the committee needs to participate in the discussion to ensure consistent and clear use of the mark.<br />
<br />
== How to Engage? ==<br />
<br />
* Join the [http://lists.openstack.org/cgi-bin/mailman/listinfo/defcore-committee defcore-committee] list<br />
* Join #openstack-defcore on Freenode IRC<br />
* Follow the code at https://github.com/openstack/defcore<br />
* Join our weekly [[Governance/DefCoreCommittee#Meetings|meetings]]<br />
* Learn the [https://github.com/openstack/defcore/blob/master/HACKING.rst rules for submitting changes]<br />
<br />
== Meetings ==<br />
Meeting times and channels can be found on the [http://eavesdrop.openstack.org/#DefCore_Committee_Meeting official OpenStack IRC meeting list]. An [http://eavesdrop.openstack.org/irc-meetings.ical ICS file] of all OpenStack meetings is also available. <br />
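For scripting around the meeting calendar, the ICS feed can be scanned with plain string handling. This minimal sketch uses an inline sample rather than the live feed, and a real consumer would prefer a proper iCalendar library:

```python
# Sample iCalendar text in the shape of the eavesdrop.openstack.org feed;
# the event names and recurrence rules here are illustrative only.
sample_ics = """BEGIN:VCALENDAR
BEGIN:VEVENT
SUMMARY:DefCore Committee Meeting
RRULE:FREQ=WEEKLY;BYDAY=WE
END:VEVENT
BEGIN:VEVENT
SUMMARY:Infra Team Meeting
RRULE:FREQ=WEEKLY;BYDAY=TU
END:VEVENT
END:VCALENDAR"""

def list_meetings(ics_text):
    """Pair each VEVENT's SUMMARY with its RRULE recurrence rule."""
    meetings = []
    summary = None
    for line in ics_text.splitlines():
        if line.startswith("SUMMARY:"):
            summary = line[len("SUMMARY:"):]
        elif line.startswith("RRULE:") and summary:
            meetings.append((summary, line[len("RRULE:"):]))
            summary = None
    return meetings

print(list_meetings(sample_ics))
```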
<br />
DefCore Flag Cycle Process/Capabilities Combined Meetings:<br />
* No meeting July 29 due to [https://etherpad.openstack.org/p/DefCoreFlag.MidCycle DefCore midcycle meetup] in Austin, Texas, USA.<br />
* Flag.9 - July 22, 2015 at 15:00 UTC on #openstack-meeting-4: [https://etherpad.openstack.org/p/DefCoreFlag.9 Etherpad] | [http://eavesdrop.openstack.org/meetings/defcore/2015/defcore.2015-07-22-15.00.html Minutes] | [http://eavesdrop.openstack.org/meetings/defcore/2015/defcore.2015-07-22-15.00.txt Minutes (text)] | [http://eavesdrop.openstack.org/meetings/defcore/2015/defcore.2015-07-22-15.00.log.html Logs]<br />
* Flag.8 - July 15, 2015 at 15:00 UTC on #openstack-meeting-4: [https://etherpad.openstack.org/p/DefCoreFlag.8 Etherpad] | [http://eavesdrop.openstack.org/meetings/defcore/2015/defcore.2015-07-15-15.00.html Minutes] | [http://eavesdrop.openstack.org/meetings/defcore/2015/defcore.2015-07-15-15.00.txt Minutes (text)] | [http://eavesdrop.openstack.org/meetings/defcore/2015/defcore.2015-07-15-15.00.log.html Logs]<br />
* Flag.7 - July 8, 2015 at 01:00 UTC on #openstack-meeting-4: [https://etherpad.openstack.org/p/DefCoreFlag.7 Etherpad] | [http://eavesdrop.openstack.org/meetings/defcore/2015/defcore.2015-07-09-01.00.html Minutes] | [http://eavesdrop.openstack.org/meetings/defcore/2015/defcore.2015-07-09-01.00.txt Minutes (text)] | [http://eavesdrop.openstack.org/meetings/defcore/2015/defcore.2015-07-09-01.00.log.html Logs]<br />
* Flag.6 - July 1, 2015 at 15:00 UTC on #openstack-meeting-4: [https://etherpad.openstack.org/p/DefCoreFlag.6 Etherpad] | [http://eavesdrop.openstack.org/meetings/defcore/2015/defcore.2015-07-01-15.00.html Minutes] | [http://eavesdrop.openstack.org/meetings/defcore/2015/defcore.2015-07-01-15.00.txt Minutes (text)] | [http://eavesdrop.openstack.org/meetings/defcore/2015/defcore.2015-07-01-15.00.log.html Logs]<br />
* Flag.5 - June 24, 2015 at 01:00 UTC on #openstack-meeting-4: [https://etherpad.openstack.org/p/DefCoreFlag.5 Etherpad] | [http://eavesdrop.openstack.org/meetings/defcore/2015/defcore.2015-06-25-01.00.html Minutes] | [http://eavesdrop.openstack.org/meetings/defcore/2015/defcore.2015-06-25-01.00.txt Minutes (text)] | [http://eavesdrop.openstack.org/meetings/defcore/2015/defcore.2015-06-25-01.00.log.html Logs]<br />
* Flag.4 - June 17, 2015 at 15:00 UTC on #openstack-meeting-4: [https://etherpad.openstack.org/p/DefCoreFlag.4 Etherpad] | [http://eavesdrop.openstack.org/meetings/defcore/2015/defcore.2015-06-17-15.01.html Minutes] | [http://eavesdrop.openstack.org/meetings/defcore/2015/defcore.2015-06-17-15.01.txt Minutes (text)] | [http://eavesdrop.openstack.org/meetings/defcore/2015/defcore.2015-06-17-15.01.log.html Logs] <br />
* Flag.3 - June 10, 2015 at 01:00 UTC on #openstack-meeting-4: [https://etherpad.openstack.org/p/DefCoreFlag.3 Etherpad] | [http://eavesdrop.openstack.org/meetings/defcore/2015/defcore.2015-06-11-01.02.html Minutes] | [http://eavesdrop.openstack.org/meetings/defcore/2015/defcore.2015-06-11-01.02.txt Minutes (text)] | [http://eavesdrop.openstack.org/meetings/defcore/2015/defcore.2015-06-11-01.02.log.html Logs] <br />
* Flag.2 - June 3, 2015 at 15:00 UTC on #openstack-meeting-4: [https://etherpad.openstack.org/p/DefCoreFlag.2 Etherpad] | [http://eavesdrop.openstack.org/meetings/defcore/2015/defcore.2015-06-03-15.01.html Minutes] | [http://eavesdrop.openstack.org/meetings/defcore/2015/defcore.2015-06-03-15.01.txt Minutes (text)] | [http://eavesdrop.openstack.org/meetings/defcore/2015/defcore.2015-06-03-15.01.log.html Logs] <br />
* Flag.1 - May 20, 2015 at OpenStack Summit: [https://etherpad.openstack.org/p/DefCoreFlag.1 Etherpad]<br />
<br />
<br />
Meeting information (etherpads, etc) from past cycles can be found [https://github.com/openstack/defcore/blob/master/doc/source/process/ProcessCycles.rst here].<br />
<br />
== Process Cycles ==<br />
<br />
Defining OpenStack Core is a long-term process, and we are doing the work in progressive cycles. For reference, we have named the cycles. This helps describe concrete deliverables for a cycle while allowing discussion of the broader long-term issues. For example, we may say that "item X is important to DefCore but out of scope for Elephant." We have found that breaking the problem down this way is necessary to maintain community consensus, because we are taking smaller bites of the larger challenge (aka eating the elephant).<br />
<br />
See [https://github.com/openstack/defcore/blob/master/doc/source/process/ProcessCycles.rst Process Cycles]<br />
<br />
The current cycle is named the '''''Scale Cycle'''''.<br />
<br />
== Current Committee Participants ==<br />
* [http://www.openstack.org/community/members/profile/221 Rob Hirschfeld] (board member, co-chair)<br />
* [http://www.openstack.org/community/members/profile/3106 Egle Sigler] (board member, co-chair)<br />
* [http://www.openstack.org/community/members/profile/5869 Will Auld]<br />
* [http://www.openstack.org/community/members/profile/13748 Carol Barrett]<br />
* [https://www.openstack.org/community/members/profile/22792 Vince Brunssen]<br />
* [http://www.openstack.org/community/members/profile/9572 Kevin Carter]<br />
* [http://www.openstack.org/community/members/profile/11461 Catherine Diep]<br />
* [http://www.openstack.org/community/members/profile/10273 Rocky Grober]<br />
* [http://www.openstack.org/community/members/profile/10331 Chris Hoge]<br />
* [http://www.openstack.org/community/members/profile/31354 Chris Lee]<br />
* [http://www.openstack.org/community/members/profile/76 Van Lindberg] (board member)<br />
* [http://www.openstack.org/community/members/profile/18704 Jim Meyer]<br />
* [http://www.openstack.org/community/members/profile/1657 Adrian Otto]<br />
* [http://www.openstack.org/community/members/profile/164 Sean Roberts]<br />
* [http://www.openstack.org/community/members/profile/12876 Shamail Tahir]<br />
* [https://www.openstack.org/community/members/profile/54 Mark T. Voelker]<br />
* [https://www.openstack.org/community/members/profile/406 Alan Clark]<br />
<br />
<br />
<br />
[[category: defcore]]<br />
[[Category: Working_Groups]]</div>Russellbhttps://wiki.openstack.org/w/index.php?title=Meetings/InfraTeamMeeting&diff=81414Meetings/InfraTeamMeeting2015-05-19T00:09:11Z<p>Russellb: /* Upcoming Project Renames */</p>
<hr />
<div><br />
<!-- ## page was renamed from Meetings/CITeamMeeting --><br />
{{:Header}}<br />
<br />
= Weekly Project Infrastructure team meeting =<br />
<br />
The OpenStack Project Infrastructure Team holds public weekly meetings in <code><nowiki>#openstack-meeting</nowiki></code>, Tuesdays at 1900 UTC. Everyone interested in infrastructure and process surrounding automated testing and deployment is encouraged to attend.<br />
<br />
Please feel free to add agenda items (and your IRC nick in parentheses).<br />
<br />
== Agenda for next meeting ==<br />
<br />
* Announcements<br />
** Get your summit on! https://etherpad.openstack.org/p/infra-liberty-summit-planning<br />
* Actions from last meeting<br />
** fungi check our cinder quota in rax-dfw<br />
* Priority Specs<br />
** current list pending summit discussions<br />
* Priority Efforts<br />
** Swift logs<br />
** Nodepool DIB<br />
** Migration to Zanata<br />
** Downstream Puppet<br />
** Askbot migration<br />
** Upgrading Gerrit Saturday May 9<br />
*** http://lists.openstack.org/pipermail/openstack-dev/2015-April/061490.html<br />
*** https://etherpad.openstack.org/p/gerrit-2.10-upgrade<br />
** Docs publishing<br />
*** Holding pattern while Swift logs issues get ironed out<br />
* Open discussion<br />
<br />
== Upcoming Project Renames ==<br />
* (any additions should mention original->new full names and link to the corresponding change in Gerrit)<br />
* stackforge/mistral -> openstack/mistral: https://review.openstack.org/#/c/175328/<br />
* stackforge/ironic-discoverd -> openstack/ironic-discoverd: https://review.openstack.org/178067<br />
* stackforge/{fuel-plugin-group-based-policy,fuel-plugin-external-nfs} -> stackforge-attic/{fuel-plugin-group-based-policy,fuel-plugin-external-nfs} https://review.openstack.org/#/c/179714/<br />
* stackforge/zvm-driver -> stackforge-attic/zvm-driver: https://review.openstack.org/#/c/179738/<br />
* stackforge/octavia -> openstack/octavia: https://review.openstack.org/182748 (dougwig)<br />
* stackforge/networking-ovn -> openstack/networking-ovn: https://review.openstack.org/184159<br />
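The `original -> new` convention above lends itself to simple scripting. This hypothetical sketch parses such entries into a mapping; the entry strings are examples taken from the list:

```python
# Rename entries in the "original -> new" form used on this page.
entries = [
    "stackforge/mistral -> openstack/mistral",
    "stackforge/octavia -> openstack/octavia",
]

# Split each entry on the arrow and strip whitespace to build
# an old-name -> new-name dictionary.
renames = dict(
    tuple(part.strip() for part in entry.split("->"))
    for entry in entries
)

print(renames["stackforge/mistral"])  # → openstack/mistral
```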
<br />
== Previous meetings ==<br />
Previous meetings, with their notes and logs, can be found at http://eavesdrop.openstack.org/meetings/infra/ and earlier at http://eavesdrop.openstack.org/meetings/ci/</div>Russellbhttps://wiki.openstack.org/w/index.php?title=Network/Meetings&diff=77556Network/Meetings2015-04-15T13:46:35Z<p>Russellb: /* On Demand Agenda */</p>
<hr />
<div><br />
{{:Network/Header}}<br />
'''Meeting time: The OpenStack Networking Team ([[Neutron]]) holds public meetings alternating between Mondays at 2100 UTC (#openstack-meeting) and Tuesdays at 1400 UTC (#openstack-meeting). Everyone is encouraged to attend.'''<br />
<br />
== Apologies for Absence ==<br />
* Anita Kuno (50/50 chance I'll make it, depends on traffic)<br />
<br />
== Agenda for Next Neutron Team Meeting ==<br />
'''Tuesday (4/21/2015) at 1400 UTC on #openstack-meeting'''<br />
<br />
=== Announcements / Reminders ===<br />
* Kilo-RC1 is out:<br />
** https://launchpad.net/neutron/+milestone/kilo-rc1<br />
** Unless we find release-critical bugs, this will be all for Kilo<br />
* python-neutronclient 2.4.0 is out<br />
** https://launchpad.net/python-neutronclient/2.4/2.4.0<br />
* Neutron Liberty mid-cycle announcement<br />
** June 24-26 in Fort Collins, CO<br />
** http://lists.openstack.org/pipermail/openstack-dev/2015-April/060713.html<br />
* Neutron Policies are documented here:<br />
** http://git.openstack.org/cgit/openstack/neutron/tree/doc/source/policies<br />
<br />
=== Bugs ===<br />
<br />
<br><br />
Important bugs:<br />
** [https://bugs.launchpad.net/neutron/+bug/1432065] DBDeadlock: (OperationalError) (1213, 'Deadlock found when trying to get lock; try restarting transaction') 'DELETE FROM ipallocationpools WHERE ipallocationpools.id<br />
** [https://bugs.launchpad.net/neutron/+bug/1419723] DBDuplicateEntry when creating secgroup<br />
** [https://bugs.launchpad.net/neutron/+bug/1359523] Security group rules erroneously applied to all ports having the same IP addresses in different networks<br />
** [https://bugs.launchpad.net/neutron/+bug/1335375] ping still working after security group rule is created, updated, or deleted (related to previous one)<br />
<br />
<br/><br />
These bugs are in nova but are related to neutron. It would be great if we could get neutron reviews on:<br />
** Merged [https://review.openstack.org/#/c/80760/] Remove unneeded call to fetch network info on shutdown - arosen<br />
** Merged [https://review.openstack.org/#/c/81674/] remove unneeded call to network_api on detach_interface - arosen<br />
** Merged [https://review.openstack.org/#/c/81681/] remove unneeded call to network_api on rebuild_instance - arosen<br />
** Merged [https://review.openstack.org/#/c/80055/] Optimize validate_networks to query neutron only when needed - arosen<br />
** [https://review.openstack.org/#/c/80412/] deallocate_for_instance should delete all neutron ports on error - arosen<br />
** [https://review.openstack.org/#/c/59578/] Fix port_security_enabled neutron extension - arosen<br />
** [https://review.openstack.org/#/c/77043/] Fix pre-created ports in neutron from being deleted by nova - arosen<br />
<br />
=== Docs (emagana)===<br />
<br />
==== Networking Guide Doc Day: April 23rd 2015 ====<br />
* Etherpad: https://etherpad.openstack.org/p/networking-guide<br />
* ToC: https://wiki.openstack.org/wiki/NetworkingGuide/TOC<br />
* Gerrit: https://github.com/openstack/openstack-manuals/tree/master/doc/networking-guide<br />
* Minutes from 4/10/15 meeting: http://lists.openstack.org/pipermail/openstack-docs/2015-April/006265.html<br />
<br />
==== Open Items for Kilo: (feedback needed) ====<br />
* Legacy scenario OVS: https://github.com/ionosphere80/openstack-networking-guide/blob/master/scenario-legacy-ovs/scenario-legacy-ovs.md<br />
* Legacy scenario LB: https://github.com/ionosphere80/openstack-networking-guide/blob/master/scenario-legacy-lb/scenario-legacy-lb.md<br />
* New networking diagrams for DVR: https://github.com/ionosphere80/openstack-networking-guide/blob/master/scenario-dvr/scenario-dvr.md<br />
<br />
=== On Demand Agenda ===<br />
* Liberty Design Summit Sessions Discussion<br />
** Etherpad is here: https://etherpad.openstack.org/p/liberty-neutron-summit-topics<br />
** Please collect ideas there so we can work towards the final Summit agenda<br />
* Nova-Network to Neutron Migration<br />
** anteaya to give an update (http://lists.openstack.org/pipermail/openstack-dev/2014-December/053355.html)<br />
* Neutron as the default in DevStack<br />
** We need to document and create a single interface configuration - https://review.openstack.org/#/c/153208/<br />
* Moving stackforge/networking-* repos into openstack/ and under the Neutron team. (russellb)<br />
** 1) Request to consider stackforge/networking-ovn an effort owned by the Neutron team in terms of [http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml?id=77929ea107b18972dcef6bb5ca5b3ccddae071da#n141 OpenStack governance].<br />
** 2) If #1 makes sense, what other networking-foo repos make sense to be considered Neutron team efforts? What criteria should be used?<br />
<br />
== Previous meeting logs ==<br />
* Previous meetings, with their notes and logs, can be found [http://eavesdrop.openstack.org/meetings/networking/ here].<br />
** [http://eavesdrop.openstack.org/meetings/networking/2015/?C=M;O=D networking-2015]<br />
** [http://eavesdrop.openstack.org/meetings/networking/2014/?C=M;O=D networking-2014]<br />
** [http://eavesdrop.openstack.org/meetings/networking/2013/?C=M;O=D networking-2013]<br />
** [http://eavesdrop.openstack.org/meetings/quantum/2013/?C=M;O=D quantum-2013]<br />
** [http://eavesdrop.openstack.org/meetings/quantum/2012/?C=M;O=D quantum-2012]<br />
* Older meeting notes are here: ../MeetingLogs.</div>Russellbhttps://wiki.openstack.org/w/index.php?title=Release_Naming&diff=70036Release Naming2014-12-09T20:46:45Z<p>Russellb: /* L landmarks in British Columbia */</p>
<hr />
<div>__NOTOC__<br />
OpenStack releases are numbered using a YYYY.N time-based scheme. For example, the first release of 2012 has the 2012.1 version number. During the development cycle, each release is identified by a codename. Codenames are ordered alphabetically: Austin was the first release, Bexar the second, Cactus the third, etc.<br />
<br />
These codenames are chosen by popular vote using the basic Launchpad poll feature over the ~openstack group. Codenames are '''cities or counties near where the corresponding OpenStack design summit took place'''. An exception (called the ''Waldon exception'') is granted to ''elements of the state flag that sound especially cool''. That exception was extended to other major landmarks and reference points.<br />
<br />
* Austin: The first design summit took place in Austin, TX<br />
* Bexar: The second design summit took place in San Antonio, TX (Bexar county).<br />
* Cactus: Cactus is a city in Texas<br />
* [https://launchpad.net/~openstack/+poll/d-release-naming Diablo]: Diablo is a city in the Bay Area near Santa Clara, CA<br />
* [https://launchpad.net/~openstack/+poll/e-release-naming Essex]: Essex is a city near Boston, MA<br />
* [https://launchpad.net/~openstack/+poll/f-release-naming Folsom]: Folsom is a city near San Francisco, CA<br />
* [https://launchpad.net/~openstack/+poll/g-stands-for-grizzly Grizzly]: Grizzly is an element of the state flag of California (design summit takes place in San Diego, CA)<br />
* [https://launchpad.net/~openstack/+poll/h-release-naming Havana]: Havana is an unincorporated community in Oregon<br />
* [https://launchpad.net/~openstack/+poll/i-release-naming Icehouse]: Ice House is a street in Hong Kong<br />
* Juno: Juno is a locality in Georgia<br />
* Kilo: Paris (Sèvres, actually, but that's close enough) is home to the Kilogram, the only remaining SI unit tied to an artifact<br />
<br /><br />
Only single words with a maximum of 10 characters are good candidates for a name. Bonus points for sounding cool.<br />
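Those constraints, together with the convention (seen in the lists below) that candidates begin with the cycle's letter, can be expressed as a small check. This is an illustrative sketch, not an official tool:

```python
def is_candidate(name, cycle_letter):
    """Check a proposed codename: a single word (letters only) of at
    most 10 characters, starting with the cycle's letter."""
    return (
        name.isalpha()                          # a single word, letters only
        and len(name) <= 10                     # maximum of 10 characters
        and name.lower().startswith(cycle_letter.lower())
    )

# Names taken from this page's lists; "Juno" fails the L-cycle letter check.
for name in ("Langley", "Laurentian", "Juno"):
    print(name, is_candidate(name, "L"))
```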
<br />
<br />
= L naming =<br />
The Design Summit for the L cycle will take place in Vancouver, BC, Canada.<br />
<br />
=== L cities or villages in British Columbia ===<br />
* Langford<br />
* Langley<br />
* Lumby<br />
* Lytton<br />
<br />
=== L landmarks in British Columbia ===<br />
* Link (Island)<br />
* Lasqueti (Island)<br />
* Lanz (Island)<br />
* Lulu (Island)<br />
* Langara (Island)<br />
* Lady (Peak)<br />
* Lavender (Peak)<br />
* Lemming (Peak) (http://peakery.com/lemming-peak-canada/)<br />
* Lightning (Peak)<br />
* Lizard (Range)<br />
* Llangorse (Mountain)<br />
* (Mount) Lolo<br />
<br />
=== L villages or cities or districts or regions in the rest of Canada ===<br />
* Labrador<br />
* London (no kidding)<br />
* Lachute<br />
* Laval<br />
* Lavaltrie<br />
* Longueuil<br />
* Lorraine<br />
* Louiseville<br />
* Lindsay<br />
* Linden<br />
* Lomond<br />
* Longview<br />
* Lougheed<br />
* Laird<br />
* Lancer<br />
* Landis<br />
* Lang<br />
* Leask<br />
* Lebret<br />
* Leoville<br />
* Leross<br />
* Lestock<br />
* Liberty<br />
* Limerick<br />
* Lintlaw<br />
* Lipton<br />
* Loreburn<br />
* Love<br />
<br />
=== Other symbols ===<br />
* Laurier ([https://en.wikipedia.org/wiki/Wilfrid_Laurier Canadian Prime Minister])<br />
* Laurentian (Canadian Shield)</div>Russellbhttps://wiki.openstack.org/w/index.php?title=Sprints/NeutronKiloSprint&diff=68541Sprints/NeutronKiloSprint2014-11-20T13:48:21Z<p>Russellb: /* Attendees */</p>
<hr />
<div>=== Date ===<br />
Monday-Wednesday, December 8-10<br />
<br />
=== Location ===<br />
Adobe Systems Incorporated<br />
3900 Adobe Way<br />
Lehi, UT 84043<br />
Tel: 385-345-0000<br />
<br />
=== Nearby airport: ===<br />
SLC (Salt Lake City)<br />
<br />
=== Nearby Hotels ===<br />
<br />
[https://goo.gl/maps/F3jFN Google Map of Hotels nearby]<br />
<br />
=== Agenda ===<br />
This is going to be a coding sprint. The agenda for this mid-cycle is focused on the following things:<br />
<br />
* API/RPC layer refactor<br />
* Core Plugin refactor (this will rope in a few ML2ers)<br />
* API Test relocation <br />
* Building community by encouraging cross pollination and sharing of ideas.<br />
<br />
<br />
To this end, we'll break into groups and focus on targeted work to make progress on various refactoring tasks. We will have multiple core reviewers there to encourage quick iterations. We will also have a representative from QA and infra there to help onboard folks for things like gate triage, elastic-recheck, etc.<br />
<br />
=== Attendees ===<br />
# Kyle Mestery<br />
# Henry Gessau<br />
# Mark McClain<br />
# Sean M. Collins<br />
# Doug Wiegley<br />
# Armando Migliaccio<br />
# Carl Baldwin<br />
# Maru Newby<br />
# Paul Michali (pc_m)<br />
# Bob Kukura<br />
# Brian Haley<br />
# Don Kehn<br />
# Miguel Lavalle<br />
# Russell Bryant<br />
<br />
'''Please send email to "jun.park.earth at gmail dot com" with your name and company affiliation so he can get your wifi setup.'''</div>Russellbhttps://wiki.openstack.org/w/index.php?title=Design_Summit/Kilo/Etherpads&diff=67322Design Summit/Kilo/Etherpads2014-11-05T12:58:15Z<p>Russellb: /* Wed */</p>
<hr />
<div>[[Category:Summit]]<br />
[[Category:Kilo]]<br />
[[Category:Etherpad]]<br />
<br />
<div style="column-count:3;-moz-column-count:3;-webkit-column-count:3"><br />
__TOC__<br />
</div><br />
<br />
==Cross-Project workshops==<br />
All on Tuesday<br />
* 11:15 - 12:45 - [https://etherpad.openstack.org/p/kilo-crossproject-api-wg API Working Group] (double slot)<br />
* 11:15 - 11:55 - [https://etherpad.openstack.org/p/uuid_for_test_Cases RefStack, DefCore, RefStack Interoperability, and Tempest]<br />
* 12:05 - 12:45 - [https://etherpad.openstack.org/p/kilo-crossproject-move-func-tests-to-projects Moving Functional Tests to Projects]<br />
* 12:05 - 12:45 - [https://etherpad.openstack.org/p/kilo-crossproject-ha-integration How to provide HA in a central manner]<br />
* 14:00 - 14:40 - [https://etherpad.openstack.org/p/kilo-crossproject-specs Specs: The Good, Bad, and the Ugly]<br />
* 14:00 - 14:40 - [https://etherpad.openstack.org/p/kilo-crossproject-upgrades-and-versioning Dealing with RPC and DB changes during upgrade.]<br />
* 14:00 - 14:40 - [https://etherpad.openstack.org/p/kilo-crossproject-notifications Schema and Schema Validation for Notifications]<br />
* 14:50 - 15:30 - [https://etherpad.openstack.org/p/kilo-crossproject-scaling-docs Scaling Documentation across Projects]<br />
* 14:50 - 15:30 - [https://etherpad.openstack.org/p/kilo-crossproject-scale-out-openstack Approaches for Scaling Out]<br />
* 14:50 - 15:30 - [https://etherpad.openstack.org/p/kilo-crossproject-requirements Changes to our Requirements Management Policy]<br />
* 15:40 - 16:20 - [https://etherpad.openstack.org/p/kilo-crossproject-log-rationalization Log Rationalization]<br />
* 15:40 - 16:20 - [https://etherpad.openstack.org/p/kilo-how-to-tackle-debt How to Tackle Technical Debt in Kilo]<br />
* 15:40 - 16:20 - [https://etherpad.openstack.org/p/kilo-crossproject-end-user-experience-sdks End user experience, SDKs]<br />
<br />
* 16:40 - 18:00 - [https://etherpad.openstack.org/p/kilo-crossproject-growth-challenges Growth Challenges] (double slot)<br />
* 17:20 - 18:00 [https://etherpad.openstack.org/p/paris_translators_meetup Translation/i18n]<br />
<br />
* 17:30 - 18:20 - [https://etherpad.openstack.org/p/kilo-crossproject-larger-policy-discussion Larger Policy Discussion]<br />
<br />
==Barbican==<br />
* Tues 14:50 - 15:30 - [https://etherpad.openstack.org/p/barbican-kilo-integration Integration discussion with other OpenStack projects]<br />
* Tues 15:40 - 16:20 - [https://etherpad.openstack.org/p/barbican-kilo-certificate-orders Common Certificate Issuance API]<br />
* Tues 17:30 - 18:10 - [https://etherpad.openstack.org/p/barbican-kilo-entity-auth Per-User Entity-Level Authorization]<br />
<br />
==Ceilometer==<br />
* Wed 17:20 - 18:00 - [https://etherpad.openstack.org/p/kilo-ceilometer-functional-tests Switchover to in-tree functional tests]<br />
* Thurs 9:00 - 9:40 - [https://etherpad.openstack.org/p/kilo-ceilometer-gnocchi-integration Bringing Gnocchi and Ceilometer together]<br />
* Thurs 9:50 - 10:30 - [https://etherpad.openstack.org/p/gnocchi-influxdb-opentsdb-mapping Mapping gnocchi semantics to InfluxDB/OpenTSDB]<br />
* Thurs 11:00 - 11:40 - [https://etherpad.openstack.org/p/kilo-ceilometer-monasca-collaboration Areas to target for collaboration with Monasca]<br />
* Thurs 11:50 - 12:30 - [https://etherpad.openstack.org/p/kilo-ceilometer-persistent-pipeline-cfg Persistent pipeline config] / [https://etherpad.openstack.org/p/kilo-ceilometer-kafka-publisher Kafka publisher]<br />
* Thurs 13:40 - 14:20 - [https://etherpad.openstack.org/p/kilo-ceilometer-notifications-as-contract Notifications-as-a-contract]<br />
<br />
==Cinder==<br />
<br />
* Wednesday, November 5 • 9:00 - 9:40 - [https://etherpad.openstack.org/p/kilo-cinder-async-reporting Async Error Reporting]<br />
* Wednesday, November 5 • 9:50 - 10:30 - [https://etherpad.openstack.org/p/eAhp48wsIi Cinder Automated Discovery] <br />
* Wednesday, November 5 • 11:50 - 12:30 - [https://etherpad.openstack.org/p/cinder-rpc-version-clamp Cinder RPC version clamp]<br />
* Wednesday, November 5 • 13:50 - 14:30 - [https://etherpad.openstack.org/p/objectify-cinder Objectify Cinder]<br />
* Wednesday, November 5 • 14:40 - 15:20 - [https://etherpad.openstack.org/p/cinder-enforcement-of-states Cinder State Machine] [http://kilodesignsummit.sched.org/event/58baf2d2de8b4b32117c645c224c4fe1#.VFQZdPTF_3H Description]<br />
* Wednesday, November 5 • 15:30 - 16:10 - [https://etherpad.openstack.org/p/kilo-cinder-over-subscription Over Subscription in Thin Provisioning] / [https://etherpad.openstack.org/p/kilo-cinder-capacity-headroom Capacity Headroom]<br />
<br />
==Designate==<br />
* https://etherpad.openstack.org/p/designate-kilo-summit-ifxr-session<br />
* https://review.openstack.org/#/c/132638/ Incremental Zone transfer<br />
<br />
==Documentation==<br />
<br />
* Tues 14:50 - 15:30 - [https://etherpad.openstack.org/p/kilo-crossproject-scaling-docs Scaling Documentation across Projects]<br />
* Friday 09:00 - 12:00 [https://etherpad.openstack.org/p/docstopicsparissummit Remaining docs topics in the pod]<br />
<br />
==Glance==<br />
* tbd<br />
<br />
<br />
==Heat==<br />
* Wed 13:50 – 14:30 [https://etherpad.openstack.org/p/kilo-heat-testing Heat integration strategy, functional & unit tests]<br />
* Wed 14:40 – 15:20 [https://etherpad.openstack.org/p/kilo-heat-upgrades Heat Upgrades (testing, status, zero downtime)]<br />
* Wed 15:30 – 16:10 [https://etherpad.openstack.org/p/kilo-heat-convergence Heat Convergence - resolving any questions]<br />
* Wed 16:30 – 17:10 [https://etherpad.openstack.org/p/kilo-heat-autoscaling Heat Autoscaling API]<br />
* Wed 17:20 – 18:00 [https://etherpad.openstack.org/p/kilo-heat-containers Heat and Containers]<br />
<br />
* Thurs 09:00 – 9:40 [https://etherpad.openstack.org/p/kilo-heat-usability-template Heat Template format usability improvements]<br />
* Thurs 09:50 – 10:30 [https://etherpad.openstack.org/p/kilo-heat-usability-general Heat API and general usability improvements]<br />
<br />
==Horizon==<br />
* Wed 1440 – 1520 [https://etherpad.openstack.org/p/kilo-horizon-operators-users Operators, Deployers and Users]<br />
* Wed 1530 – 1610 [https://etherpad.openstack.org/p/kilo-keystone-horizon-cli-federation-sso Horizon/Keystone Cross Topic Session]<br />
<br />
<br />
* Thurs 0950 – 1030 [https://etherpad.openstack.org/p/kilo-horizon-django-angular Django-Angular Playing Nice]<br />
* Thurs 1100 – 1140 [https://etherpad.openstack.org/p/kilo-horizon-clientside-data Clientside Data]<br />
* Thurs 1630 – 1710 [https://etherpad.openstack.org/p/kilo-horizon-customize-extend Making Horizon easier to customize and extend]<br />
* Thurs 1720 – 1800 [https://etherpad.openstack.org/p/kilo-horizon-ux UX and Horizon]<br />
<br />
<br />
* Fri 1340 – 1710 [https://etherpad.openstack.org/p/kilo-horizon-contributors-meetup Contributors Meetup]<br />
<br />
<br />
==Infrastructure==<br />
<br />
* Tue 1640 - 1720 [https://etherpad.openstack.org/p/kilo-third-party-items Third-party CI]<br />
* Wed 0950 - 1030 [https://etherpad.openstack.org/p/kilo-infra-manual Infra Manual]<br />
* Wed 1350 - 1430 [https://etherpad.openstack.org/p/kilo-infra-afs Infra AFS]<br />
* Thu 1520 - 1600 [https://etherpad.openstack.org/p/StoryBoard_design_summit_Paris Storyboard]<br />
<br />
==Ironic==<br />
* Wed 0900 - 0940 [https://etherpad.openstack.org/kilo-ironic-exposing-different-capabilities Exposing Different Capabilities]<br />
* Wed 0950 - 1030 [https://etherpad.openstack.org/kilo-ironic-resource-locking Resource Locking]<br />
* Wed 1100 - 1140 [https://etherpad.openstack.org/kilo-ironic-making-it-simple Making Ironic Easier to Use]<br />
* Wed 1150 - 1230 [https://etherpad.openstack.org/kilo-ironic-understanding-hardware Understanding the Hardware we have]<br />
<br />
* Thu 0900 - 0940 [https://etherpad.openstack.org/kilo-ironic-asserting-ready-state Asserting Ready State]<br />
<br />
* Fri 1340 - 1710 [https://etherpad.openstack.org/kilo-ironic-contributor-meetup Contributor Meetup]<br />
<br />
==Keystone==<br />
* Wed 0900 – 0940 [https://etherpad.openstack.org/p/kilo-keystone-object-lifecycle Object Lifecycle / Object Dependencies]<br />
* Wed 0950 – 1030 [https://etherpad.openstack.org/p/hierarchical-multitenancy-kilo-summit Hierarchical Multitenancy]<br />
* Wed 1630 – 1710 [https://etherpad.openstack.org/p/kilo-keystone-horizon-cli-federation-sso Keystone-Horizon-CLI Federation/SSO]<br />
* Wed 1720 – 1800 [https://etherpad.openstack.org/p/kilo-keystone-operators-deployers-and-devops Operators, Deployers, and DevOps]<br />
<br />
<br />
* Thu 0900 – 0940 [https://etherpad.openstack.org/p/kilo-keystone-python-keystoneclient python-keystoneclient]<br />
* Thu 1430 – 1510 [https://etherpad.openstack.org/p/kilo-keystone-authorization Authorization]<br />
* Thu 1520 – 1600 [https://etherpad.openstack.org/p/kilo-keystone-policy-model-token-capabilities Policy Model and User/Token Capabilities]<br />
<br />
<br />
* Fri 1340 – 1710 [https://etherpad.openstack.org/p/kilo-keystone-meetup Keystone Contributor Meetup]<br />
<br />
==Manila==<br />
All on Tuesday<br />
* 15:40 - 16:20 - [https://etherpad.openstack.org/p/kilo-manila-networking-and-multitenancy Networking and Multitenancy]<br />
* 16:40 - 17:20 - [https://etherpad.openstack.org/p/kilo-manila-mount-automation Mount Automation]<br />
* 17:30 - 18:10 - [https://etherpad.openstack.org/p/kilo-manila-access-groups Access Groups]<br />
<br />
<br />
==Neutron==<br />
=== Tue ===<br />
Neutron contributors please attend [https://wiki.openstack.org/wiki/Summit/Kilo/Etherpads#Cross-Project_workshops Cross Project Workshops]<br />
<br />
* 12.05 - 12.45 - [https://etherpad.openstack.org/p/kilo-gbp-design-summit-topics Group-based Policy]<br />
<br />
=== Wed ===<br />
* 09:00 - 09:40 - [https://etherpad.openstack.org/p/neutron-processes Development Process and Procedures]<br />
* 09:50 - 10:30 - [https://etherpad.openstack.org/p/aE7ydRU35m Split Vendor Plugins and Drivers] Part 1<br />
* 11:00 - 11:50 - [https://etherpad.openstack.org/p/aE7ydRU35m Split Vendor Plugins and Drivers] Part 2<br />
* 11:50 - 12:30 - [https://etherpad.openstack.org/p/neutron-cli CLI and Client Lib]<br />
<br />
=== Thu ===<br />
* 11:00 - 11:40 - [https://etherpad.openstack.org/p/neutron-kilo-lightning-talks Lightning Talks]<br />
* 11:50 - 12:30 - [https://etherpad.openstack.org/p/neutron-services Advanced Services Spin Out]<br />
* 13:40 - 14:20 - [https://etherpad.openstack.org/p/neutron-rest-api REST/RPC/Plugin API] Part 1 (REST)<br />
* 14:30 - 15:10 - [https://etherpad.openstack.org/p/neutron-rpc-api REST/RPC/Plugin API] Part 2 (RPC)<br />
* 15:20 - 16:00 - [https://etherpad.openstack.org/p/neutron-plugin-api REST/RPC/Plugin API] Part 3 (Plugin)<br />
* 16:30 - 17:10 - [https://etherpad.openstack.org/p/neutron-ipam Pluggable IPAM]<br />
* 17:20 - 18:00 - Paying down technical debt in [https://etherpad.openstack.org/p/neutron-l2 L2 Agents] and [https://etherpad.openstack.org/p/neutron-l3 L3 Agents]<br />
<br />
=== Fri ===<br />
* 09:00 - 12:30 - [https://etherpad.openstack.org/p/neutron-kilo-meetup-slots Neutron contributors meetup]<br />
* 13:40 - 17:10 - [https://etherpad.openstack.org/p/neutron-kilo-meetup-slots Neutron contributors meetup]<br />
<br />
==Nova==<br />
=== Wed ===<br />
* 9:00 - 10:30 - [https://etherpad.openstack.org/p/kilo-nova-cells Cells 1 & 2] <br />
* 11:00 - 11:40 - [https://etherpad.openstack.org/p/kilo-nova-objects Nova Objects Status and Deadlines]<br />
* 11:50 - 12:30 - [https://etherpad.openstack.org/p/kilo-nova-glance Nova/Glance client library]<br />
* 13:50 - 14:30 - [https://etherpad.openstack.org/p/kilo-nova-summit-unconference Nova Unconference]<br />
* 14:40 - 15:20 - [https://etherpad.openstack.org/p/nova-ci-status-checkpoint-kilo Nova CI status checkpoint]<br />
* 15:30 - 16:10 - [https://etherpad.openstack.org/p/kilo-nova-virt-driver-consistency Virt Driver Consistency]<br />
* 16:30 - 17:10 - [https://etherpad.openstack.org/p/kilo-nova-functional-testing Nova Functional testing]<br />
* 17:20 - 18:00 - [https://etherpad.openstack.org/p/kilo-nova-nfv NFV proposed specs]<br />
<br />
=== Thurs ===<br />
* 9:00 - 9:40 - API Microversions<br />
* 9:50 - 10:30 - nova-network migration<br />
* 11:00 - 12:30 - [https://etherpad.openstack.org/p/kilo-nova-scheduler-rt Scheduler and resource tracker] <br />
* 13:40 - 14:30 - [https://etherpad.openstack.org/p/kilo-nova-summit-unconference Nova Unconference]<br />
* 14:30 - 15:10 - [https://etherpad.openstack.org/p/kilo-nova-zero-downtime-upgrades Nova Upgrades and DB migrations]<br />
* 15:20 - 16:00 - Containers service<br />
* 16:30 - 18:00 - [https://etherpad.openstack.org/p/kilo-nova-priorities Setting nova's roadmap for Kilo 1 & 2]<br />
<br />
==Oslo==<br />
* Wed 2014-11-05 11:00 - [https://etherpad.openstack.org/p/kilo-oslo-library-proposals Oslo graduation schedule]<br />
* Wed 2014-11-05 11:50 - [https://etherpad.openstack.org/p/kilo-oslo-oslo.messaging oslo.messaging ]<br />
* Wed 2014-11-05 13:50 - [https://etherpad.openstack.org/p/kilo-oslo-common-quota-library A Common Quota Management Library]<br />
* Thu 2014-11-06 11:50 - [https://etherpad.openstack.org/p/kilo-oslo-taskflow taskflow]<br />
* Thu 2014-11-06 13:40 - [https://etherpad.openstack.org/p/kilo-oslo-alpha-versioning Using alpha versioning for Oslo libraries]<br />
* Thu 2014-11-06 16:30 - [https://etherpad.openstack.org/p/kilo-oslo-python-3 Python 3 support in Oslo]<br />
* Thu 2014-11-06 17:20 - [https://etherpad.openstack.org/p/kilo-oslo-namespace-packages Moving Oslo away from namespace packages]<br />
<br />
<br />
==QA==<br />
* Wed 9:00 - 9:40 [https://etherpad.openstack.org/p/kilo-summit-gap-analysis-tempest-in-production Gap Analysis for using Tempest in Production]<br />
* Wed 11:00 - 11:40 [https://etherpad.openstack.org/p/kilo-summit-tempest-scope-in-the-brave-new-world Tempest Scope in the Brave New World]<br />
* Wed 14:40 - 15:20 [https://etherpad.openstack.org/p/kilo-gating-relationships QA/Infra Gating Relationships]<br />
* Wed 15:30 - 16:10 [https://etherpad.openstack.org/p/kilo-summit-post-merge-qa-ci QA and CI After Merge]<br />
* Wed 17:20 - 18:00 [https://etherpad.openstack.org/p/kilo-summit-tempest-lib-moving-forward Tempest-lib moving forward]<br />
* Thu 11:50 - 12:30 [https://etherpad.openstack.org/p/kilo-summit-future-auth-interface-design Future Auth Interface Design]<br />
* Thu 14:30 - 15:10 [https://etherpad.openstack.org/p/kilo-summit-devstack-grenade Devstack/Grenade]<br />
<br />
==Release Management==<br />
* Wed 1150 – 1230 [https://etherpad.openstack.org/p/kilo-relmgt-stable-branches Stable branches]<br />
* Thu 0950 – 1030 [https://etherpad.openstack.org/p/kilo-relmgt-vulnerability-management Vulnerability management]<br />
<br />
==Sahara==<br />
* https://etherpad.openstack.org/p/kilo-summit-sahara-edp<br />
* https://etherpad.openstack.org/p/kilo-summit-sahara-ux<br />
* https://etherpad.openstack.org/p/kilo-summit-sahara-production-clusters<br />
* https://etherpad.openstack.org/p/kilo-summit-sahara-production<br />
* https://etherpad.openstack.org/p/kilo-summit-sahara-integration-security<br />
<br />
==Swift==<br />
* tbd<br />
<br />
<br />
==TripleO==<br />
* Wednesday 16:30-17:10 TripleO [https://etherpad.openstack.org/p/kilo-summit-tripleo-onramps Onramps and new implementations]<br />
<br />
==Trove==<br />
Trovesday (Thursday)<br /><br />
14:30 - 15:10: [https://etherpad.openstack.org/p/kilo-summit-trove-clusters Building Out Trove Clusters]<br /><br />
15:20 - 16:00: [https://etherpad.openstack.org/p/kilo-summit-trove-replication-v2 Replication v2 and Scheduling Tasks in Trove]<br /><br />
16:30 - 17:10: [https://etherpad.openstack.org/p/kilo-summit-testing-trove Testing Trove]<br /><br />
17:20 - 18:00: [https://etherpad.openstack.org/p/kilo-summit-trove-openstack-tighter-integration Trove integration with other OpenStack components]<br /><br />
<br />
==Zaqar==<br />
* Tuesday 11:15 [https://etherpad.openstack.org/p/kilo-zaqar-summit-integration-with-services Zaqar integration with other services] <br />
* Tuesday 12:05 [https://etherpad.openstack.org/p/kilo-zaqar-summit-v2 Zaqar API v2: What? Why? When?]<br />
* Tuesday 14:00 [https://etherpad.openstack.org/p/kilo-zaqar-summit-persistent-transports Zaqar Persistent Transports]<br />
* Tuesday 14:50 [https://etherpad.openstack.org/p/kilo-zaqar-summit-infrastructure Zaqar Infrastructure Session]<br />
<br />
==Other Projects==<br />
* Monday 11:40 - 13:10 [https://etherpad.openstack.org/p/kilo-ceph Ceph]<br />
* Monday 14:30 - 16:00 [https://etherpad.openstack.org/p/kilo-ansible-for-openstack Ansible for OpenStack]<br />
* Monday 16:20 - 17:50 [https://etherpad.openstack.org/p/puppet-openstack-paris-agenda puppet-openstack design session]<br />
* Monday 16:30 - 18:30 [https://etherpad.openstack.org/p/paris-product-meeting Product Management session]<br />
* Tuesday 14:00 - 14:40 [https://etherpad.openstack.org/p/kilo-poppy-summit Poppy design session]<br />
* Tuesday 15.40 - 16.20 [https://etherpad.openstack.org/p/paris-2014-design-summit-mistral Mistral design session]<br />
* Tuesday 16:40 - 17:20 [https://etherpad.openstack.org/p/kilo-murano-design-session Murano design session]<br />
* Tuesday 16:40 - 18:10 [https://etherpad.openstack.org/p/par-kilo-congress-design-session Congress design session]<br />
<br />
==Event intro/closure==<br />
* Tue 1115 – 1155 [https://etherpad.openstack.org/p/kilo-summit-101 Summit 101]<br />
* Fri 1720 – 1800 [https://etherpad.openstack.org/p/kilo-summit-feedback Design Summit feedback]<br />
<br />
<br />
==Ops==<br />
* Mon 1140 – 1220 [https://etherpad.openstack.org/p/kilo-summit-ops-pain Top 10 Pain points from the user survey - how to fix them?]<br />
* Mon 1140 – 1220 [https://etherpad.openstack.org/p/kilo-summit-ops-pets What is the best practice for managing 'pets'?]<br />
* Mon 1230 – 1310 [https://etherpad.openstack.org/p/kilo-summit-ops-logging How do we fix logging?]<br />
* Mon 1230 – 1310 [https://etherpad.openstack.org/p/kilo-summit-ops-dvr Distributed Virtual Router - finally neutron HA!]<br />
* Mon 1430 – 1510 [https://etherpad.openstack.org/p/kilo-summit-ops-ironic What do you want from ironic/bare metal?]<br />
* Mon 1430 – 1510 [https://etherpad.openstack.org/p/kilo-summit-ops-app-eco The Application Ecosystem working group]<br />
* Mon 1520 – 1600 [https://etherpad.openstack.org/p/kilo-summit-ops-containers What do we want from containers/docker?]<br />
* Mon 1520 – 1600 [https://etherpad.openstack.org/p/kilo-summit-ops-architecture-upgrades Architecture Show and Tell - Upgrades Special Edition]<br />
* Mon 1620 – 1700 [https://etherpad.openstack.org/p/kilo-summit-ops-ha High availability - how do you do it?]<br />
* Mon 1620 – 1700 [https://etherpad.openstack.org/p/kilo-summit-ops-architecture Architecture Show and Tell]<br />
* Mon 1710 – 1750 [https://etherpad.openstack.org/p/kilo-summit-ops-get-involved How to get involved in Kilo?]<br />
* Mon 1710 – 1750 [https://etherpad.openstack.org/p/kilo-summit-ops-storage Storage - how do you do it?]<br />
<br />
* Wed 1440 - 1520 [http://kilodesignsummit.sched.org/event/3406e6331bd488329eaf9af7406bdc95 Horizon Operators, Deployers and End Users]<br />
* Wed 1720 - 1800 [http://kilodesignsummit.sched.org/event/e78766659871928319d63259167efdd6 Keystone Operators, Deployers, and DevOps]<br />
<br />
* Thu 0900 – 1030 [https://etherpad.openstack.org/p/kilo-summit-ops-log-rationalisation Log Rationalisation]<br />
* Thu 0900 – 1030 [https://etherpad.openstack.org/p/kilo-summit-ops-docs Documentation] <br />
* Thu 0900 – 1030 [https://etherpad.openstack.org/p/kilo-summit-ops-telco Telco WG]<br />
<br />
* Thu 1100 – 1230 [https://etherpad.openstack.org/p/kilo-summit-ops-monitoring Monitoring]<br />
* Thu 1100 – 1230 [https://etherpad.openstack.org/p/kilo-summit-ops-upgrades Upgrades]<br />
* Thu 1100 – 1140 [https://etherpad.openstack.org/p/kilo-summit-ops-stable-branch Stable Branch]<br />
* Thu 1150 – 1230 [https://etherpad.openstack.org/p/kilo-summit-ops-chef Chef]<br />
<br />
* Thu 1340 – 1550 [https://etherpad.openstack.org/p/kilo-summit-ops-app-eco The Application Ecosystem working group]<br />
* Thu 1340 – 1550 [https://etherpad.openstack.org/p/kilo-summit-ops-blueprints Blueprints Working Group]<br />
* Thu 1340 – 1500 [https://etherpad.openstack.org/p/kilo-summit-ops-tools Ops Tools working group]<br />
* Thu 1510 – 1550 [https://etherpad.openstack.org/p/kilo-summit-ops-puppet Puppet]<br />
<br />
* Thu 1630 – 1800 [https://etherpad.openstack.org/p/kilo-summit-ops-large-deployments Large Deployments Team]<br />
* Thu 1630 – 1800 [https://etherpad.openstack.org/p/kilo-summit-ops-api API Working Group]<br />
* Thu 1630 – 1800 [https://etherpad.openstack.org/p/kilo-summit-ops-enterprise Win The Enterprise Working Group]<br />
<br />
*Thu 1720 - 1800 [http://kilodesignsummit.sched.org/event/f4f7fe08c447c048a30065c5adb5ea79 Swift Ops Feedback Session]</div>
<br />
* Thu 1100 – 1230 [https://etherpad.openstack.org/p/kilo-summit-ops-monitoring Monitoring]<br />
* Thu 1100 – 1230 [https://etherpad.openstack.org/p/kilo-summit-ops-upgrades Upgrades]<br />
* Thu 1100 – 1140 [https://etherpad.openstack.org/p/kilo-summit-ops-stable-branch Stable Branch]<br />
* Thu 1150 – 1230 [https://etherpad.openstack.org/p/kilo-summit-ops-chef Chef]<br />
<br />
* Thu 1340 – 1550 [https://etherpad.openstack.org/p/kilo-summit-ops-app-eco The Application Ecosystem working group]<br />
* Thu 1340 – 1550 [https://etherpad.openstack.org/p/kilo-summit-ops-blueprints Blueprints Working Group]<br />
* Thu 1340 – 1500 [https://etherpad.openstack.org/p/kilo-summit-ops-tools Ops Tools working group]<br />
* Thu 1510 – 1550 [https://etherpad.openstack.org/p/kilo-summit-ops-puppet Puppet]<br />
<br />
* Thu 1630 – 1800 [https://etherpad.openstack.org/p/kilo-summit-ops-large-deployments Large Deployments Team]<br />
* Thu 1630 – 1800 [https://etherpad.openstack.org/p/kilo-summit-ops-api API Working Group]<br />
* Thu 1630 – 1800 [https://etherpad.openstack.org/p/kilo-summit-ops-enterprise Win The Enterprise Working Group]<br />
<br />
*Thu 1720 - 1800 [http://kilodesignsummit.sched.org/event/f4f7fe08c447c048a30065c5adb5ea79 Swift Ops Feedback Session]</div>Russellbhttps://wiki.openstack.org/w/index.php?title=TelcoWorkingGroup&diff=66816TelcoWorkingGroup2014-10-27T20:18:42Z<p>Russellb: /* Active Blueprints */ remove some blueprints that are no longer active</p>
<hr />
<div>= What is NFV? =<br />
<br />
NFV stands for Network Functions Virtualization. It refers to replacing the typically standalone appliances used for high- and low-level network functions (firewalls, network address translation, intrusion detection, caching, gateways, accelerators, etc.) with a virtual instance or set of virtual instances, called Virtual Network Functions (VNFs). In other words, it can be seen as replacing some hardware network appliances with high-performance software that takes advantage of high-performance para-virtual devices, other acceleration mechanisms, and smart placement of instances. NFV originated in a working group of the [http://www.etsi.org/ European Telecommunications Standards Institute (ETSI)] whose work is the basis of most current implementations. The main consumers of NFV are service providers (telecommunication providers and the like) who want to accelerate the deployment of new network services and, to do that, need to eliminate the constraint of the slow renewal cycle of hardware appliances, which do not autoscale and limit innovation.<br />
<br />
NFV support for OpenStack aims to provide the best possible infrastructure for such workloads to be deployed in, while respecting the design principles of an IaaS cloud. For VNFs to perform correctly in a cloud world, the underlying infrastructure needs to provide a number of capabilities ranging from scheduling to networking and from orchestration to monitoring. This means that correctly supporting NFV use cases in OpenStack may require changes across most, if not all, of the main OpenStack projects, starting with Neutron and Nova.<br />
<br />
For more details on NFV, the following references may be useful:<br />
* [http://www.etsi.org/technologies-clusters/technologies/nfv Definition of NFV by ETSI]<br />
* [http://en.wikipedia.org/wiki/Network_Functions_Virtualization Definition of NFV on Wikipedia]<br />
<br />
= Who we are =<br />
<br />
''' Add your name here if you're joining the meetings - IRC nicks are pretty anonymous unless you give us a clue! Please keep the list in alphabetical order by IRC nick. '''<br />
{| class="wikitable"<br />
|-<br />
<br />
<br />
| adrian-hoban || Adrian Hoban || Intel OpenStack team || NFV & SDN extensions across OpenStack projects<br />
|-<br />
| akhila-chetlapalle || Akhila Chetlapalle || TCS Openstack Team || NFV & SDN Test Framework with Openstack<br />
|-<br />
| alank35 || Alan Kavanagh || Ericsson Inc || NFV & SDN & Neutron and ODL<br />
|-<br />
| aleksandr_null || Aleksandr Shaposhnikov || Mirantis Inc || NFV, SDN, Core Networking, Neutron, Virtualization, Storage<br />
|-<br />
| Ali_Kafel || Ali Kafel || Stratus Technologies || State-full OpenStack, Cloud Orchestration with High Availability and Fault Tolerance for NFV / SDN<br />
|-<br />
| andrews || Andrew Sergeev || ADVA Optical Networking || NFV, Openstack<br />
|-<br />
| andrewv || Andrew Veitch || BTI Systems || NFV, SDN, Openstack<br />
|-<br />
| armax || Armando Migliaccio || HP || Neutron, NFV, SDN<br />
|-<br />
| arosen || Aaron Rosen || nicira/vmware || Automation, SDN/Neutron/NFV, Openstack<br />
|-<br />
| atylee || Andy Tylee || Metaswitch Networks || NFV, SR-IOV, data plane acceleration, orchestration<br />
|-<br />
| balajip || Balaji Padnala || Freescale OpenStack Team || NFV, SDN, SRIOV, Libvirt, Neutron, Nova, Service VMs and Service Chaining<br />
|-<br />
| banix || Mohammad Banikazemi || IBM || NFV, SDN, Neutron, OpenStack<br />
|-<br />
| bauzas || Sylvain Bauza || Red Hat || SLA and Scheduling in Nova<br />
|-<br />
| bertys || Bertrand Souville || DOCOMO Euro-Labs || NFV, SDN, OpenStack<br />
|-<br />
| cdub || Chris Wright || Red Hat || NFV and SDN work between OpenStack and OpenDaylight<br />
|-<br />
| cgoncalves || Carlos Goncalves || Instituto de Telecomunicacoes || Service Function Chaining, Traffic Steering<br />
|-<br />
| choonho || Choonho Son || Samsung || KVM, OpenStack & DPDK<br />
|-<br />
| cliljenstolpe || Christopher Liljenstolpe || Metaswitch Networks || Neutron, orchestration, network architecture<br />
|-<br />
| cloudon || Calum Loudon || Metaswitch Networks || Neutron, data plane acceleration, orchestration<br />
|-<br />
| ctlmcb || Kevin McBride || CenturyLink || OpenStack for NFV, SDN tech, Programmable Hardware/integrated circuits, etc.<br />
|-<br />
| dandrushko || Dmitriy Andrushko || Mirantis || SDN, NFV, OpenDaylight, network architecture<br />
|-<br />
| danpb || Daniel Berrange || Red Hat || Libvirt, KVM & Nova performance & enablement for NFV<br />
|-<br />
| davej || Dave Johnston|| Openwave Mobility || NFV on Openstack for the Telco industry<br />
|-<br />
| davidmck || David McKinley|| Oracle || OpenStack support for NFV<br />
|-<br />
| diga || Digambar Patil || Persistent System Ltd. || Neutron, Nova, SDN, NFV<br />
|-<br />
| djhunt || Jason Hunt || IBM || Orchestration, SDN, NFV<br />
|-<br />
| dmitry_huawei || Dmitry Meytin || Huawei || MANO integration with OpenStack<br />
|-<br />
| eranb || Eran Bello || ASOCS || NFV compute and accelerator resources integration with OpenStack<br />
|-<br />
| fjramons || Francisco-Javier Ramon Salguero || Telefonica || Libvirt, KVM & Nova performance & enablement for NFV<br />
|-<br />
| ggarcia || Gerardo Garcia || Telefonica || Libvirt, KVM & Nova performance & enablement for NFV<br />
|-<br />
| gc4rella || Giuseppe Carella || Fraunhofer FOKUS || NFV MANO, Service Function Chaining, SDN<br />
|-<br />
| heyongli || Yongli He || Intel Openstack team || nova enabling NFV SRIOV PCI passthrough<br />
|-<br />
| HiteshW || Hitesh Wadekar || Graduate Student at Clarkson University || SDN/Neutron/NFV, Openstack <br />
|-<br />
| ian_ott || Ian Jolliffe || Wind River || Openstack, NFV, Networking<br />
|-<br />
| ijw || Ian Wells || Cisco's Openstack team || Vendor neutral NFV infrastructure, Cisco NFV appliances<br />
|-<br />
| imendel || Itai Mendelsohn || Alcatel-Lucent || NFV in general and how OpenStack can enable it<br />
|-<br />
| irenab || Irena Berezovsky || Mellanox || NFV, SDN, NFV SRIOV PCI passthrough<br />
|-<br />
| JeremyLiu || Jeremy Liu || Huawei || NFV & OpenStack<br />
|-<br />
| jjstevensjj || Joe Stevens || HP Helion || NFV & Neutron<br />
|-<br />
| jmsoares || Joao Soares || Portugal Telecom || Service Function Chaining, Traffic Steering<br />
|-<br />
| kalyan || Kalyanjeet Gogoi || Juniper Networks || NFV integration with OpenStack<br />
|-<br />
| kranthi || Kranthi Molleti || Tech Mahindra || Neutron, NFV and Virtual Networking<br />
|-<br />
| KyleMacDonald || Kyle MacDonald || OpenNile || NFV / OpenStack / Carrier & Telco Deployment<br />
|-<br />
| LouisF || Louis Fourie || Huawei || NFV-MANO, Service Function chaining, Traffic steering<br />
|-<br />
| lukego || Luke Gorrie || Snabb || Making open source NFV work for Deutsche Telekom's TeraStream project<br />
|-<br />
| malini1 || Malini Bhandaru || Intel || NFV, Adv. Service VMs, compute node capabilities, security<br />
|-<br />
| martin_t || Martin Taylor || Metaswitch Networks || Neutron networking and data plane acceleration<br />
|-<br />
| matrohon || Mathieu Rohon || Orange || NFV, SDN, Neutron<br />
|-<br />
| mjbright || Mike Bright || HP || Openstack, NFV/SDN<br />
|-<br />
| mkashyap || Madhu Kashyap || Brocade || OpenStack, NFV / SDN<br />
|-<br />
| mpaolino || Michele Paolino || Virtual Open Systems || ARM, KVM, libvirt, Nova and Neutron<br />
|-<br />
| mpetrus || Margaret Petrus || VMware || NFV-MANO, OpenStack for Service Orchestration<br />
|-<br />
| nbal || Nuri Bal || Cyan || OpenStack support of NFV, MANO in particular<br />
|-<br />
| nbouthors || Nicolas Bouthors || Qosmos || Service Chaining, Classifier VNFC<br />
|-<br />
| nijaba || Nick Barcet || eNovance || NFV support on OpenStack<br />
|-<br />
| nnikolaev || Nikolay Nikolaev || Virtual Open Systems || vhost-user maintainer, Snabbswitch, Nova and Neutron<br />
|-<br />
| prabhu-nk || Prabhuling Kalyani || Global Edge || NFV, Service Chaining<br />
|-<br />
| PushkarU || Pushkar Umaranikar || Graduate student at San Jose State University || SDN/Neutron/NFV, Openstack<br />
|-<br />
| radek ||Radoslaw Smigielski||Alcatel-Lucent||OpenStack+NFV, SR-IOV, PCI passthrough, KVM performance<br />
|-<br />
| Rajesh|| R || HP || NFV MANO and Helion <br />
|-<br />
| ravirik || Ravi Virik || AT&T || Neutron and Flowspace for NFV<br />
|-<br />
| r-mibu || Ryota Mibu || NEC || Nova enhancement for NFV<br />
|-<br />
| ricky.bo || ricky.bo || Huawei || Help better support NFV<br />
|-<br />
| rohit404 ||Rohit Agarwalla||Cisco's OpenStack team||OpenStack+NFV<br />
|-<br />
| rseth|| Rajeev Seth || Sonus Networks || NFV integration with OpenStack<br />
|-<br />
| runarut|| Larry Pearson || AT&T || OpenStack as NFVI, VNF Service Chaining<br />
|-<br />
| russellb || Russell Bryant || Project: OpenStack TC, Nova. Corporate: Red Hat || Nova. Ensuring requirements and designs are consumable by OpenStack developers. Reviewing designs and implementations.<br />
|-<br />
| timmer || Tim Reddin || HP Cloud || NFV, Kernel , OpenStack<br />
|-<br />
| s3wong || Stephen Wong || Midokura || NFV support on OpenStack<br />
|-<br />
| safchain || Sylvain Afchain || eNovance || NFV, SDN, Neutron<br />
|-<br />
| sasud || S Sud || Intel || NFV and SDN use case PoCs<br />
|-<br />
| sean-k-mooney || Sean Mooney || Intel OpenStack team || NFV & SDN enabling<br />
|-<br />
| sgordon || Steve Gordon || Red Hat || NFV and SDN enablement across OpenStack projects but particularly Nova and the Libvirt driver.<br />
|-<br />
| shane-wang || Shane Wang || Intel || NFV support on OpenStack, VM QoS in Nova, PCI/SR-IOV support<br />
|-<br />
| smazziotta || Sandro Mazziotta || eNovance || OpenStack extensions required to meet NFV requirements<br />
|-<br />
| tapiotallgren || Tapio Tallgren || Nokia Networks || OpenStack enhancements for NFV<br />
|-<br />
| tcroteau || Tammy Croteau || HP Cloud || Neutron and NFV<br />
|-<br />
| thomnico || Nicolas Thomas || Canonical || Allowing OpenStack to be gradually used in NFV types of deployments. ETSI NFV ISG participant.<br />
|-<br />
| tidwellr || Ryan Tidwell || HP || NFV, Neutron, SDN<br />
|-<br />
| Torsten || Torsten Bottjer || Swisscom || Orchestrating VNFs on Openstack<br />
|-<br />
| tvvcox || Tomas Von Veschler || Red Hat || Openstack enablement for NFVi and VIM<br />
|-<br />
| ulikleber || Ulrich Kleber || Huawei || Help better support NFV<br />
|-<br />
| vikasd || Vikas Deolaliker || ... || NFV & SDN extensions across OpenStack projects<br />
|-<br />
| vpandari || Vinod Pandarinathan || Cisco Systems || NFV Framework, Service Chaining and Neutron<br />
|-<br />
| vjardin || Vincent JARDIN || 6WIND || Help using DPDK applications efficiently and ivshmem to start with (memnic)<br />
|-<br />
| Venkatesh || Venkatesh || Wipro Technologies || NFV & SDN extensions across OpenStack projects<br />
|-<br />
| wenjing || Wenjing Chu || Dell || NFV, Openstack for NFV, OPNFV<br />
|-<br />
| yamahata || Isaku Yamahata || Intel || Neutron, servicevm, service chaining, traffic steering<br />
|-<br />
| yjiang5 || Yunhong Jiang || Intel || Nova enablement for NFV<br />
|-<br />
| yukiarbel || Yuki Arbel || Alcatel Lucent || NFV, Openstack for NFV<br />
|-<br />
| zeddii || Bruce Ashfield || Wind River || KVM, libvirt, nova and platform awareness for NFV<br />
|-<br />
| zhyu || Yu Zhang || Huawei || OpenStack enhancement for enabling NFV<br />
|-<br />
| zhipengh || Zhipeng Huang || Huawei || OpenStack enhancement for enabling NFV<br />
|-<br />
| zuqiang || Zu Qiang || Ericsson || NFV support in OpenStack<br />
|}<br />
<br />
= Mission statement =<br />
<br />
<blockquote>The sub-team aims to define the use cases and identify and prioritise the requirements which are needed to run Network Function Virtualization (NFV) workloads on top of OpenStack. This work includes identifying functional gaps, creating blueprints, submitting and reviewing patches to the relevant OpenStack projects and tracking their completion in support of NFV.</blockquote><br />
<br />
<blockquote>Each requirement expressed by this group should have a test case which can be verified using an open source implementation. This ensures that tests can be run without any special hardware or proprietary software, which is key for continuous integration tests in the OpenStack gate. If a special setup is required that cannot be reproduced on the standard OpenStack gate, the use case's proponent will have to provide a third-party CI setup, accessible by OpenStack infra, which will be used to validate developments against.</blockquote><br />
<br />
[[IRC|OpenStack IRC details]]<br />
<br />
Chair: Russell Bryant (russellb)<br />
<br />
== Agenda for next meeting ==<br />
<br />
Alternating between Wednesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=14&min=0&sec=0&p1=0 1400 UTC] in #openstack-meeting-alt and Thursdays at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=16&min=0&sec=0&p1=0 1600 UTC] in #openstack-meeting. See schedule below.<br />
<br />
Agenda: [https://etherpad.openstack.org/p/nfv-meeting-agenda]<br />
<br />
{| class="wikitable"<br />
|-<br />
! Date !! Time !! IRC Channel<br />
|-<br />
| Wednesday 22nd October 2014 || 1400 UTC || #openstack-meeting-alt<br />
|-<br />
| Thursday 30th October 2014 || 1600 UTC || #openstack-meeting<br />
|-<br />
| Wednesday 5th November 2014 || 1400 UTC || No meeting, OpenStack Summit<br />
|-<br />
|}<br />
<br />
== Previous meetings ==<br />
<br />
* [http://eavesdrop.openstack.org/meetings/nfv/ Meeting logs]<br />
* [https://etherpad.openstack.org/p/juno-nfv-bof Juno Design Summit NFV BoF]<br />
<br />
=Use Cases=<br />
<br />
{| class="wikitable"<br />
|-<br />
! Workload Type !! Description !! Characteristics !! Examples !! Requirements<br />
|-<br />
| Data plane || Tasks related to packet handling in an end-to-end communication between edge applications. ||<br />
* Intensive I/O requirements - potentially millions of small VoIP packets per second per core<br />
* Intensive memory R/W requirements<br />
||<br />
* CDN cache node<br />
* Router<br />
* IPSec tunneller<br />
* Session Border Controller - media relay function<br />
|| - <br />
|-<br />
| Control plane || Any other communication between network functions that is not directly related to the end-to-end data communication between edge applications. ||<br />
* Less intensive I/O and R/W requirements than data plane, due to lower packets per second<br />
* More complicated transactions resulting in (potentially) higher CPU load per packet.<br />
||<br />
* PPP session management<br />
* Border Gateway Protocol (BGP) routing<br />
* Remote Authentication Dial In User Service (RADIUS) authentication in a Broadband Remote Access Server (BRAS) network function <br />
* Session Border Controller - SIP signaling function<br />
* IMS core functions (S-CSCF / I-CSCF / BGCF)<br />
|| - <br />
|-<br />
| Signal processing || All network function tasks related to digital processing<br />
|| <br />
* Very sensitive to CPU processing capacity.<br />
* Delay sensitive.<br />
||<br />
* Fast Fourier Transform (FFT) decoding<br />
* Encoding in a Cloud-Radio Access Network (C-RAN) Base Band Unit (BBU)<br />
* Audio transcoding in a Session Border Controller<br />
|| - <br />
|-<br />
| Storage || All tasks related to disk storage.<br />
||<br />
* Varying disk, SAN, or NAS I/O requirements depending on the application, ranging from low to extremely high intensity.<br />
||<br />
* Logger<br />
* Network probe<br />
|| - <br />
|-<br />
|}<br />
<br />
== ETSI-NFV Use Cases - High Level Description ==<br />
<br />
ETSI NFV gap analysis document: https://wiki.openstack.org/wiki/File:NFV%2814%29000154r2_NFV_LS_to_OpenStack.pdf<br />
<br />
===Use Case #1: Network Functions Virtualisation Infrastructure as a Service===<br />
<br />
This is a reasonably generic IaaS requirement. <br />
<br />
===Use Case #2: Virtual Network Function as a Service (VNFaaS)===<br />
This primarily targets Customer Premise Equipment (CPE) devices such as access routers, enterprise firewalls, WAN optimizers, etc., with some Provider Edge devices possible at a later date. ETSI-NFV performance & portability considerations will apply to deployments that strive for high performance and low latency.<br />
<br />
===Use Case #3: Virtual Network Platform as a Service (VNPaaS)===<br />
This is similar to #2 but at the service level, operating at larger scale and not only at the "app" level.<br />
<br />
===Use Case #4: VNF Forwarding Graphs===<br />
Dynamic connectivity between apps in a "service chain".<br />
<br />
===Use Case #5: Virtualisation of Mobile Core Network and IMS===<br />
Primarily focusing on Evolved Packet Core appliances such as the Mobility Management Entity (MME), Serving Gateway (S-GW), etc. and the IP Multimedia Subsystem (IMS).<br />
<br />
===Use Case #6: Virtualisation of Mobile base station===<br />
Focusing on parts of the Radio Access Network such as eNodeBs, Radio Link Control and Packet Data Convergence Protocol, etc.<br />
<br />
===Use Case #7: Virtualisation of the Home Environment===<br />
Similar to Use Case 2, but with a focus on virtualising residential devices instead of enterprise devices. Covers DHCP, NAT, PPPoE, Firewall devices, etc. <br />
<br />
===Use Case #8: Virtualisation of CDNs===<br />
Content Delivery Networks focusing on video traffic delivery.<br />
<br />
===Use Case #9: Fixed Access Network Functions Virtualisation===<br />
Wireline related access technologies.<br />
<br />
==Contributed Use Cases==<br />
<br />
===Session Border Controller===<br />
<br />
Contributed by: Calum Loudon<br />
<br />
====Description====<br />
<br />
Perimeta Session Border Controller, from Metaswitch Networks. It sits on the edge of a service provider's network and polices SIP and RTP (i.e. VoIP) control and media traffic passing over the access network between end-users and the core network, or over the trunk network between the core and another SP.<br />
<br />
====Characteristics====<br />
<br />
* Fast and guaranteed performance:<br />
** Performance in the order of several million VoIP packets (~64-220 bytes depending on codec) per second per core (achievable on COTS hardware).<br />
** Guarantees provided via SLAs.<br />
* Full high availability<br />
** No single point of failure, service continuity over both software and hardware failures.<br />
* Elastically scalable<br />
** NFV orchestrator adds and removes instances in response to network demands.<br />
* Traffic segregation (ideally)<br />
** Separate traffic from different customers via VLANs.<br />
<br />
====Requirements====<br />
<br />
* Fast & guaranteed performance (network)<br />
** Packets per second target -> either SR-IOV or an accelerated DPDK-like data plane:<br />
*** "SR-IOV Networking Support" (https://blueprints.launchpad.net/nova/+spec/pci-passthrough-sriov)<br />
*** "Open vSwitch to use patch ports" (https://blueprints.launchpad.net/neutron/+spec/openvswitch-patch-port-use)<br />
*** "userspace vhost in ovd vif bindings" (https://blueprints.launchpad.net/nova/+spec/libvirt-ovs-use-usvhost)<br />
*** "Snabb NFV driver" (https://blueprints.launchpad.net/neutron/+spec/snabb-nfv-mech-driver)<br />
*** "VIF_VHOSTUSER" (https://blueprints.launchpad.net/nova/+spec/vif-vhostuser)<br />
<br />
* Fast & guaranteed performance (compute):<br />
** To optimize data rate we need to keep all working data in L3 cache:<br />
***"Virt driver pinning guest vCPUs to host pCPUs" (https://blueprints.launchpad.net/nova/+spec/virt-driver-cpu-pinning)<br />
** To optimize data rate we need to bind to the NIC on the host CPU's bus:<br />
*** "I/O (PCIe) Based NUMA Scheduling" (https://blueprints.launchpad.net/nova/+spec/input-output-based-numa-scheduling)<br />
** To offer guaranteed performance, as opposed to 'best effort', we need to control placement of cores, minimise TLB misses and get accurate info about core topology (threads vs. hyperthreads etc.); this maps to the remaining blueprints on NUMA & vCPU topology:<br />
*** "Virt driver guest vCPU topology configuration" (https://blueprints.launchpad.net/nova/+spec/virt-driver-vcpu-topology)<br />
*** "Virt driver guest NUMA node placement & topology" (https://blueprints.launchpad.net/nova/+spec/virt-driver-numa-placement)<br />
*** "Virt driver large page allocation for guest RAM" (https://blueprints.launchpad.net/nova/+spec/virt-driver-large-pages)<br />
** May need support to prevent 'noisy neighbours' stealing L3 cache - unproven, and no blueprint we're aware of.<br />
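The compute-side requirements above can be pictured with a small placement sketch. This is a toy illustration, not Nova code; the extra-spec keys (hw:cpu_policy, hw:numa_nodes, hw:mem_page_size) are the ones proposed in the blueprints listed above and may change before they merge, and all host data here is made up.<br />

```python
# Toy sketch of the constraint the NUMA/pinning/huge-page blueprints address:
# a guest asking for dedicated pCPUs and huge pages must fit entirely inside
# one host NUMA cell to avoid cross-node memory access.

flavor_extra_specs = {
    "hw:cpu_policy": "dedicated",   # pin each vCPU to its own pCPU
    "hw:numa_nodes": "1",           # confine the guest to a single NUMA cell
    "hw:mem_page_size": "2048",     # back guest RAM with 2 MB (2048 KB) pages
}

# Free resources per host NUMA cell: free pCPU ids and free 2 MB pages.
host_cells = {
    0: {"free_pcpus": [2, 3], "free_2m_pages": 1024},
    1: {"free_pcpus": [6, 7, 8, 9], "free_2m_pages": 4096},
}

def pick_cell(cells, vcpus, mem_mb, page_kb=2048):
    """Return (cell_id, vCPU->pCPU pinning) for the first cell that fits."""
    pages_needed = mem_mb * 1024 // page_kb
    for cell_id, cell in sorted(cells.items()):
        if len(cell["free_pcpus"]) >= vcpus and cell["free_2m_pages"] >= pages_needed:
            pinning = dict(zip(range(vcpus), cell["free_pcpus"][:vcpus]))
            return cell_id, pinning
    return None

# Cell 0 has only 2 free pCPUs, so a 4-vCPU guest lands on cell 1.
print(pick_cell(host_cells, vcpus=4, mem_mb=4096))
```

The real scheduling logic lives in the Nova scheduler and virt drivers; the point of the sketch is only that pCPUs, NUMA cells and huge pages must be accounted for together, which is why the blueprints above are interdependent.<br />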
<br />
* High availability:<br />
** Requires anti-affinity rules to prevent active/passive being instantiated on same host - already supported, so no gap.<br />
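The pairwise anti-affinity check that server groups already provide amounts to the following; this is an illustrative reimplementation, not the actual Nova ServerGroupAntiAffinityFilter code, and all names are hypothetical.<br />

```python
# A host is acceptable for a new member of an anti-affinity group only if it
# currently hosts no other member of that group.

def acceptable_hosts(all_hosts, group_members, host_of):
    """Hosts on which a new group member may be placed.

    all_hosts:     iterable of host names
    group_members: instance ids already in the anti-affinity group
    host_of:       dict mapping instance id -> host name
    """
    used = {host_of[i] for i in group_members if i in host_of}
    return [h for h in all_hosts if h not in used]

hosts = ["compute1", "compute2", "compute3"]
placements = {"sbc-active": "compute1"}
# The passive instance may only go where the active one is not.
print(acceptable_hosts(hosts, ["sbc-active"], placements))
```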
<br />
* Elastic scaling:<br />
** Readily achievable using existing features - no gap.<br />
<br />
* VLAN trunking:<br />
** "VLAN trunking networks for NFV" (https://blueprints.launchpad.net/neutron/+spec/nfv-vlan-trunks et al).<br />
<br />
* Other:<br />
** Being able to offer apparent traffic separation (e.g. service traffic vs. application management) over a single network is also useful in some cases.<br />
*** "Support two interfaces from one VM attached to the same network" (https://blueprints.launchpad.net/nova/+spec/multiple-if-1-net)<br />
<br />
===Virtual IMS Core===<br />
<br />
Contributed by: Calum Loudon<br />
<br />
====Description====<br />
<br />
Project Clearwater, http://www.projectclearwater.org/. An open source implementation of an IMS core designed to run in the cloud and be massively scalable. It provides SIP-based call control for voice and video as well as SIP-based messaging apps. As an IMS core it provides P/I/S-CSCF function together with a BGCF and an HSS cache, and includes a WebRTC gateway providing interworking between WebRTC & SIP clients.<br />
<br />
====Characteristics relevant to NFV/OpenStack====<br />
<br />
* Mainly a compute application: modest demands on storage and networking.<br />
* Fully HA, with no SPOFs and service continuity over software and hardware failures; must be able to offer SLAs.<br />
* Elastically scalable by adding/removing instances under the control of the NFV orchestrator.<br />
<br />
====Requirements====<br />
<br />
* Compute application:<br />
** OpenStack already provides everything needed; in particular, there are no requirements for an accelerated data plane, nor for core pinning or NUMA placement<br />
<br />
*HA:<br />
** implemented as a series of N+k compute pools; meeting a given SLA requires being able to limit the impact of a single host failure <br />
** potentially a scheduler gap here: affinity/anti-affinity can be expressed pair-wise between VMs, which is sufficient for a 1:1 active/passive architecture, but an N+k pool needs a concept equivalent to "group anti-affinity" i.e. allowing the NFV orchestrator to assign each VM in a pool to one of X buckets, and requesting OpenStack to ensure no single host failure can affect more than one bucket<br />
** (there are other approaches which achieve the same end e.g. defining a group where the scheduler ensures every pair of VMs within that group are not instantiated on the same host)<br />
** whether this can be implemented using current scheduler hints is for further study<br />
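The "group anti-affinity" idea above, which is not expressible with today's pairwise hints, can be sketched as a placement constraint: no host may carry VMs from more than one bucket, so a single host failure hits at most one bucket of the N+k pool. The code below is a hypothetical greedy illustration, not a proposed scheduler implementation.<br />

```python
# Each VM of the pool is tagged by the orchestrator with a bucket number.
# Constraint: a host may only carry VMs from a single bucket.

def place(vms_with_bucket, hosts):
    """Greedy placement honoring the one-bucket-per-host constraint."""
    host_bucket = {}   # host -> the only bucket it is allowed to carry
    placement = {}
    for vm, bucket in vms_with_bucket:
        for host in hosts:
            # A host is usable if it is empty or already dedicated to this bucket.
            if host_bucket.get(host, bucket) == bucket:
                host_bucket[host] = bucket
                placement[vm] = host
                break
        else:
            raise RuntimeError("no host can take %s without mixing buckets" % vm)
    return placement

pool = [("vm1", 0), ("vm2", 1), ("vm3", 0), ("vm4", 1)]
# With two hosts, bucket 0 ends up on one host and bucket 1 on the other.
print(place(pool, ["hostA", "hostB"]))
```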
<br />
* Elastic scaling:<br />
** as for compute requirements there is no gap - OpenStack already provides everything needed.<br />
<br />
== References: ==<br />
* [http://docbox.etsi.org/ISG/NFV/Open/Latest_Drafts/NFV-PER001v009%20-%20NFV%20Performance%20&%20Portability%20Best%20Practises.pdf Network Functions Virtualization NFV Performance & Portability Best Practices - DRAFT]<br />
* ETSI-NFV Use Cases V1.1.1 [http://www.etsi.org/deliver/etsi_gs/NFV/001_099/001/01.01.01_60/gs_NFV001v010101p.pdf]<br />
<br />
== Related Teams and Projects ==<br />
* OpenStack Congress - Policy as a Service [https://wiki.openstack.org/wiki/Congress]<br />
<br />
= Development Efforts =<br />
<br />
== Active Bugs ==<br />
<br />
Add the "nfv" tag to bugs to have them appear in these queries:<br />
<br />
* Nova: https://bugs.launchpad.net/nova/+bugs?field.tag=nfv<br />
* Neutron: https://bugs.launchpad.net/neutron/+bugs?field.tag=nfv<br />
<br />
== Active Blueprints ==<br />
The NFV use case mappings identified below are from the perspective of higher-performing use cases. Please note that there are many possible configurations of devices for each of these use cases, and it is not implied that all of them will need the capability proposed in the relevant blueprint. <br />
<br />
There is an automatically updated gerrit dashboard for all specs and code under review here: http://nfv.russellbryant.net<br />
<br />
PRIORITY - repeatedly mentioned at the BOF as blockers:<br />
<br />
{| class="wikitable sortable"<br />
|-<br />
! Description !! Project(s) !! Status !! Blueprint(s) !! Design(s) !! ETSI-NFV Use Cases<br />
|-<br />
| VLAN trunking networks for NFV<br />
This line item now conflates several requirements:<br />
* VLAN tagged traffic transmissible over a tenant network is the most important (even if OpenStack is otherwise VLAN unaware)<br />
* decomposition of VLAN trunks to virtual networks<br />
* VLAN tagged traffic to a physical appliance<br />
* management of VLANs on ports as sub-ports (nice to have, not a blocker)<br />
| Neutron <br />
| New<br />
| https://blueprints.launchpad.net/neutron/+spec/nfv-vlan-trunks (tenant trunking) <br />
https://blueprints.launchpad.net/neutron/+spec/l2-gateway (physical appliance-specific decomposition)<br />
https://blueprints.launchpad.net/neutron/+spec/vlan-aware-vms (VLAN port management)<br />
| https://review.openstack.org/#/c/100278/ (physical appliance-specific decomposition)<br />
https://review.openstack.org/97714 (tenant trunking)<br />
https://review.openstack.org/#/c/94612/ (subports)<br />
https://review.openstack.org/#/c/92541/ (patch for subports)<br />
| * #1 is a broadly applicable IaaS requirement.<br />
* #2 TBD?<br />
* #3 TBD?<br />
* #4 TBD?<br />
* #5 TBD?<br />
* #6 TBD?<br />
* #7 TBD?<br />
* #8 TBD?<br />
* #9 TBD?<br />
|-<br />
| Permit unaddressed interfaces for NFV use cases || Neutron || New<br />
| https://blueprints.launchpad.net/neutron/+spec/nfv-unaddressed-interfaces https://blueprints.launchpad.net/neutron/+spec/ml2-ovs-portsecurity<br />
| https://review.openstack.org/97715 https://review.openstack.org/#/c/99873/ || <br />
* #1 is a broadly applicable IaaS requirement.<br />
* #2 TBD?<br />
* #3 TBD?<br />
* #4 TBD?<br />
* #5 TBD?<br />
* #6 TBD?<br />
* #7 TBD?<br />
* #8 TBD?<br />
* #9 TBD?<br />
|-<br />
|}<br />
<br />
The rest:<br />
<br />
Neutron port enhancements related to servicevm are summarized at https://wiki.openstack.org/wiki/ServiceVM/neutron-port-attributes<br />
<br />
{| class="wikitable sortable"<br />
|-<br />
! Description !! Project(s) !! Status !! Blueprint(s) !! Design(s) !! ETSI-NFV Use Cases<br />
|-<br />
|<br />
Virt driver guest NUMA node placement & topology<br />
|| Nova || Design Approved / Needs Code Review || https://blueprints.launchpad.net/nova/+spec/virt-driver-numa-placement || https://review.openstack.org/93636 ||<br />
* #1 is a broadly applicable IaaS requirement. <br />
* #2 Needed for performance reasons.<br />
* #3 TBD?<br />
* #4 TBD?<br />
* #5 Needed for performance reasons.<br />
* #6 Needed for performance reasons.<br />
* #7 Needed for performance reasons.<br />
* #8 Needed for performance reasons.<br />
* #9 Needed for performance reasons.<br />
|-<br />
|<br />
Virt driver large page allocation for guest RAM [[#dupe|*]]<br />
|| Nova || Design Approved / Needs Code Review || https://blueprints.launchpad.net/nova/+spec/virt-driver-large-pages || https://review.openstack.org/93653 ||<br />
* #1 is a broadly applicable IaaS requirement. <br />
* #2 Needed for performance reasons.<br />
* #3 TBD?<br />
* #4 TBD?<br />
* #5 Needed for performance reasons.<br />
* #6 Needed for performance reasons.<br />
* #7 Needed for performance reasons.<br />
* #8 Needed for performance reasons.<br />
* #9 Needed for performance reasons.<br />
|-<br />
|<br />
Virt driver pinning guest vCPUs to host pCPUs <br />
|| Nova || Design Approved / Needs Code Review || https://blueprints.launchpad.net/nova/+spec/virt-driver-cpu-pinning || https://review.openstack.org/93652 ||<br />
* #1 is a broadly applicable IaaS requirement. <br />
* #2 Needed for performance reasons.<br />
* #3 TBD?<br />
* #4 TBD?<br />
* #5 Needed for performance reasons.<br />
* #6 Needed for performance reasons.<br />
* #7 Needed for performance reasons.<br />
* #8 Needed for performance reasons.<br />
* #9 Needed for performance reasons.<br />
|-<br />
|<br />
I/O (PCIe) Based NUMA Scheduling<br />
|| Nova || Design Approved / Needs Code Review || https://blueprints.launchpad.net/nova/+spec/input-output-based-numa-scheduling || https://review.openstack.org/#/c/100871/ ||<br />
* #1 is a broadly applicable IaaS requirement. <br />
* #2 Needed for performance reasons.<br />
* #3 TBD?<br />
* #4 TBD?<br />
* #5 Needed for performance reasons.<br />
* #6 Needed for performance reasons.<br />
* #7 Needed for performance reasons.<br />
* #8 Needed for performance reasons.<br />
* #9 Needed for performance reasons.<br />
|-<br />
| Soft affinity support for server groups || Nova || Abandoned || https://blueprints.launchpad.net/nova/+spec/soft-affinity-for-server-group || https://review.openstack.org/91328 ||<br />
* #1 is a broadly applicable IaaS requirement. <br />
* #2 TBD?<br />
* #3 TBD?<br />
* #4 TBD?<br />
* #5 TBD?<br />
* #6 TBD?<br />
* #7 TBD?<br />
* #8 TBD?<br />
* #9 TBD?<br />
|-<br />
| Open vSwitch-based Security Groups: Open vSwitch Implementation of FirewallDriver || Neutron || Design review in progress || https://blueprints.launchpad.net/neutron/+spec/ovs-firewall-driver || https://review.openstack.org/89712 ||<br />
* #1 is a broadly applicable IaaS requirement. <br />
* #2 Needed in the non-SR-IOV based deployments.<br />
* #3 TBD?<br />
* #4 vSwitch configuration may be needed to complete the forwarding graph (service chain). <br />
* #5 Needed in the non-SR-IOV based deployments.<br />
* #6 TBD?<br />
* #7 Needed in the non-SR-IOV based deployments.<br />
* #8 TBD?<br />
* #9 Needed in the non-SR-IOV based deployments.<br />
|-<br />
| Framework for Advanced Services in Virtual Machines || Neutron || Under Discussion || https://blueprints.launchpad.net/neutron/+spec/adv-services-in-vms || ||<br />
* #1 is a broadly applicable IaaS requirement. <br />
* #2 Potential lifecycle management support<br />
* #3 TBD?<br />
* #4 TBD?<br />
* #5 Potential lifecycle management support<br />
* #6 Potential lifecycle management support<br />
* #7 Potential lifecycle management support<br />
* #8 Potential lifecycle management support<br />
* #9 Potential lifecycle management support<br />
|-<br />
| Neutron Services Insertion, Chaining, and Steering || Neutron || Design Approved / Needs Code Review || https://blueprints.launchpad.net/neutron/+spec/neutron-services-insertion-chaining-steering || https://review.openstack.org/93524 ||<br />
NOTE: this service chaining BP is all about chaining aaS services, not chaining tenant VNFs. Is this the one we want, or do we require a new BP?<br />
* #1 is a broadly applicable IaaS requirement. <br />
* #2 May need to chain multiple functions to deliver a service.<br />
* #3 TBD?<br />
* #4 Closely coupled requirement needed to deliver on a forwarding graph.<br />
* #5 May need to chain multiple functions to deliver a service.<br />
* #6 May need to chain multiple functions to deliver a service.<br />
* #7 May need to chain multiple functions to deliver a service.<br />
* #8 May need to chain multiple functions to deliver a service.<br />
* #9 May need to chain multiple functions to deliver a service.<br />
|-<br />
| OVF Meta-Data Import via Glance || Glance || New || https://blueprints.launchpad.net/glance/+spec/epa-ovf-meta-data-import || https://review.openstack.org/#/c/104904/ ||<br />
* #1 is a broadly applicable IaaS requirement. <br />
* #2 Needed as one optional path to auto import platform feature requests to meet performance targets. <br />
* #3 TBD?<br />
* #4 TBD?<br />
* #5 Needed as one optional path to auto import platform feature requests to meet performance targets. <br />
* #6 Needed as one optional path to auto import platform feature requests to meet performance targets. <br />
* #7 Needed as one optional path to auto import platform feature requests to meet performance targets. <br />
* #8 Needed as one optional path to auto import platform feature requests to meet performance targets. <br />
* #9 Needed as one optional path to auto import platform feature requests to meet performance targets. <br />
|-<br />
| colspan="6" | ''Support for high performance Intel(R) Data Plane Development Kit based vSwitches''<br />
|-<br />
|<br />
: Open vSwitch to use patch ports in place of veth pairs for vlan n/w <br />
|| Neutron || Superseded / Unknown || https://blueprints.launchpad.net/neutron/+spec/openvswitch-patch-port-use ||<br />
* https://review.openstack.org/96183 <br />
* https://bugs.launchpad.net/neutron/+bug/1331569 ||<br />
* #1 is a broadly applicable IaaS requirement. <br />
* #2 Needed in the non-SR-IOV based deployments.<br />
* #3 TBD?<br />
* #4 Closely coupled requirement needed to deliver on a forwarding graph.<br />
* #5 Needed in the non-SR-IOV based deployments.<br />
* #6 TBD?<br />
* #7 Needed in the non-SR-IOV based deployments.<br />
* #8 TBD?<br />
* #9 Needed in the non-SR-IOV based deployments.<br />
|-<br />
|<br />
: Support userspace vhost in ovs vif bindings <br />
|| Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/libvirt-ovs-use-usvhost || https://review.openstack.org/95805 ||<br />
* #1 is a broadly applicable IaaS requirement. <br />
* #2 Needed in the non-SR-IOV based deployments.<br />
* #3 TBD?<br />
* #4 Closely coupled requirement needed to deliver on a forwarding graph.<br />
* #5 Needed in the non-SR-IOV based deployments.<br />
* #6 TBD?<br />
* #7 Needed in the non-SR-IOV based deployments.<br />
* #8 TBD?<br />
* #9 Needed in the non-SR-IOV based deployments.<br />
|-<br />
| [http://snabb.co/nfv.html Snabb NFV] mechanism driver || Neutron || Approved || https://blueprints.launchpad.net/neutron/+spec/snabb-nfv-mech-driver || https://review.openstack.org/95711 ||<br />
* #1 is a broadly applicable IaaS requirement. <br />
* #2 Needed in the non-SR-IOV based deployments.<br />
* #3 TBD?<br />
* #4 TBD?<br />
* #5 Needed in the non-SR-IOV based deployments.<br />
* #6 TBD?<br />
* #7 Needed in the non-SR-IOV based deployments.<br />
* #8 TBD?<br />
* #9 Needed in the non-SR-IOV based deployments.<br />
|-<br />
| VIF_VHOSTUSER (qemu vhost-user) support || Nova || Approved || https://blueprints.launchpad.net/nova/+spec/vif-vhostuser || https://review.openstack.org/96138 ||<br />
* #1 is a broadly applicable IaaS requirement. <br />
* #2 Needed in the non-SR-IOV based deployments.<br />
* #3 TBD?<br />
* #4 TBD?<br />
* #5 Needed in the non-SR-IOV based deployments.<br />
* #6 TBD?<br />
* #7 Needed in the non-SR-IOV based deployments.<br />
* #8 TBD?<br />
* #9 Needed in the non-SR-IOV based deployments.<br />
|-<br />
|Solver Scheduler - complex constraints scheduler with NFV use cases || Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/solver-scheduler || https://review.openstack.org/#/c/96543/ ||<br />
* #1 is a broadly applicable IaaS requirement. <br />
* #2 Possibly needed for smarter scheduling decision making to help with performance. <br />
* #3 TBD?<br />
* #4 TBD?<br />
* #5 Possibly needed for smarter scheduling decision making to help with performance. <br />
* #6 Possibly needed for smarter scheduling decision making to help with performance.<br />
* #7 Possibly needed for smarter scheduling decision making to help with performance.<br />
* #8 Possibly needed for smarter scheduling decision making to help with performance.<br />
* #9 Possibly needed for smarter scheduling decision making to help with performance.<br />
|-<br />
| Diskless VM || Nova || Under discussion || https://blueprints.launchpad.net/nova/+spec/libvirt-empty-vm-boot-pxe || ||<br />
* #1 is a broadly applicable IaaS requirement. <br />
* #2 TBD?<br />
* #3 TBD?<br />
* #4 TBD?<br />
* #5 TBD?<br />
* #6 TBD?<br />
* #7 TBD?<br />
* #8 TBD?<br />
* #9 TBD?<br />
|-<br />
| Network QoS API || Neutron || Under discussion || https://blueprints.launchpad.net/neutron/+spec/quantum-qos-api || https://review.openstack.org/#/c/88599 ||<br />
* #1 is a broadly applicable IaaS requirement. <br />
* #2 Needed for performance reasons<br />
* #3 TBD?<br />
* #4 Needed to capture network QoS aspects of forwarding graph.<br />
* #5 Needed for performance reasons<br />
* #6 Needed for performance reasons<br />
* #7 Needed for performance reasons<br />
* #8 Needed for performance reasons<br />
* #9 Needed for performance reasons<br />
|-<br />
| Port mirroring || Neutron || Under discussion || https://blueprints.launchpad.net/neutron/+spec/port-mirroring || ||<br />
* #1 is a broadly applicable IaaS requirement. <br />
* #2 Needed for some specialized use cases.<br />
* #3 TBD?<br />
* #4 Needed based on forwarding graph for specialized use cases.<br />
* #5 Needed for some specialized use cases.<br />
* #6 Needed for some specialized use cases.<br />
* #7 Needed for some specialized use cases.<br />
* #8 TBD?<br />
* #9 Needed for some specialized use cases.<br />
|-<br />
| Traffic Steering Abstraction || Neutron || Design review in progress || https://blueprints.launchpad.net/neutron/+spec/traffic-steering-abstraction || https://review.openstack.org/92477/ ||<br />
* #1 is a broadly applicable IaaS requirement. <br />
* #2 Similar to "Neutron Services Insertion, Chaining, and Steering". May need to chain multiple functions to deliver a service.<br />
* #3 TBD?<br />
* #4 Closely coupled requirement needed to deliver on a forwarding graph.<br />
* #5 Similar to "Neutron Services Insertion, Chaining, and Steering". May need to chain multiple functions to deliver a service.<br />
* #6 Similar to "Neutron Services Insertion, Chaining, and Steering". May need to chain multiple functions to deliver a service.<br />
* #7 Similar to "Neutron Services Insertion, Chaining, and Steering". May need to chain multiple functions to deliver a service.<br />
* #8 Similar to "Neutron Services Insertion, Chaining, and Steering". May need to chain multiple functions to deliver a service.<br />
* #9 Similar to "Neutron Services Insertion, Chaining, and Steering". May need to chain multiple functions to deliver a service.<br />
|-<br />
|}<br />
<br />
=== Implemented (Juno) ===<br />
<br />
{| class="wikitable sortable"<br />
|-<br />
! Description !! Project(s) !! Status !! Blueprint(s) !! Design(s) !! ETSI-NFV Use Cases<br />
|-<br />
| Support two interfaces from one VM attached to the same network || Nova || Design Approved / Implemented || https://blueprints.launchpad.net/nova/+spec/multiple-if-1-net || <br />
* Spec: https://review.openstack.org/97716<br />
* Patch: https://review.openstack.org/98488<br />
|| <br />
* #1 is a broadly applicable IaaS requirement.<br />
* #2 TBD?<br />
* #3 TBD?<br />
* #4 TBD?<br />
* #5 TBD?<br />
* #6 TBD?<br />
* #7 TBD?<br />
* #8 TBD?<br />
* #9 TBD?<br />
|-<br />
| SR-IOV Networking Support || Nova || Design Approved / Needs Code Review || https://blueprints.launchpad.net/nova/+spec/pci-passthrough-sriov || https://review.openstack.org/#/c/86606/ ||<br />
* #1 is a broadly applicable IaaS requirement. <br />
* #2 Needed for performance reasons.<br />
* #3 TBD?<br />
* #4 ''Potential'' intersect if the forwarding graph makes any particular request about the port connectivity. <br />
* #5 Needed for performance reasons.<br />
* #6 Needed for performance reasons.<br />
* #7 Needed for performance reasons.<br />
* #8 TBD?<br />
* #9 Needed for performance reasons.<br />
|-<br />
| Virt driver guest vCPU topology configuration <br />
|| Nova || Design Approved / Implemented || https://blueprints.launchpad.net/nova/+spec/virt-driver-vcpu-topology || https://review.openstack.org/93510 ||<br />
* #1 is a broadly applicable IaaS requirement. <br />
* #2 Needed for performance reasons.<br />
* #3 TBD?<br />
* #4 TBD?<br />
* #5 Needed for performance reasons.<br />
* #6 Needed for performance reasons.<br />
* #7 Needed for performance reasons.<br />
* #8 Needed for performance reasons.<br />
* #9 Needed for performance reasons.<br />
|-<br />
| Evacuate instance to scheduled host || Nova || Approved / Implemented (juno-2) || https://blueprints.launchpad.net/nova/+spec/find-host-and-evacuate-instance || https://review.openstack.org/84429 ||<br />
|- <br />
|}<br />
<br />
== Needed Development Not Yet Started ==</div>Russellbhttps://wiki.openstack.org/w/index.php?title=BootstrappingHour/Nova_RPC_Layer&diff=65300BootstrappingHour/Nova RPC Layer2014-10-17T18:51:25Z<p>Russellb: /* Nova RPC Layer */</p>
<hr />
<div>== Nova RPC Layer ==<br />
<br />
An overview of the RPC layer inside of Nova:<br />
<br />
* '''Host(s):''' Jay Pipes, Dan Smith<br />
* '''Experts(s):''' Russell Bryant, Dan Smith<br />
* Etherpad: https://etherpad.openstack.org/p/obh-nova-rpc-layer<br />
* Slides: https://docs.google.com/presentation/d/1D2aTpDei_iaoGhCaXoKBfqd5JIcNMDUmH8Py6GmeGt0/edit?usp=sharing<br />
<br />
[[Category:Contribute]]</div>Russellbhttps://wiki.openstack.org/w/index.php?title=Nova/ReleaseChecklist&diff=64291Nova/ReleaseChecklist2014-10-03T20:50:53Z<p>Russellb: </p>
<hr />
<div>This page is for tracking things that we should be doing in the code before or after each release.<br />
<br />
=== Pre-release Checklist ===<br />
<br />
* Merge latest translations<br />
* Bump rpc major versions. See [[RpcMajorVersionUpdates]]<br />
<br />
=== Post-release Checklist ===<br />
<br />
* Add database migration placeholders to allow backportable DB migrations<br />
* Drop old rpc compat code. See [[RpcMajorVersionUpdates]]<br />
* Update version aliases for rpc outbound version control<br />
<br />
[[Category:Nova]]</div>Russellbhttps://wiki.openstack.org/w/index.php?title=ReleaseNotes/Juno&diff=59296ReleaseNotes/Juno2014-07-29T01:04:18Z<p>Russellb: /* Upgrade Notes */</p>
<hr />
<div>{| style="color:#000000; border:solid 1px #A8A8A8; padding:0.5em; margin:0.5em 0; background-color:#FFFFFF;font-size:95%; vertical-align:middle;"<br />
| style="padding:1em;width: 40px" | [[Image:Warning.svg|40px]]<br />
| '''Release Under Development'''<br />
This release of OpenStack is under development and has yet to be completed.<br />
<br />
The information on this page may not accurately reflect the state of the release at the current point in time.<br />
|}<br />
<br />
= OpenStack 2014.2 (Juno) Release Notes =<br />
<br />
<div style="column-count:3;-moz-column-count:3;-webkit-column-count:3"><br />
__TOC__<br />
</div><br />
<br />
== General Upgrade Notes ==<br />
<br />
* TBD<br />
<br />
== OpenStack Object Storage (Swift) ==<br />
<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Compute (Nova) ==<br />
<br />
There is a summary of specifications for the Juno release of Nova at [[Nova/Juno-Specs]].<br />
<br />
=== Key New Features ===<br />
<br />
==== Upgrade Support ====<br />
<br />
* TBD<br />
<br />
==== Compute Drivers ====<br />
<br />
===== Hyper-V =====<br />
<br />
* TBD<br />
<br />
===== Libvirt (KVM) =====<br />
<br />
* TBD<br />
<br />
===== VMware =====<br />
<br />
* TBD<br />
<br />
===== XenServer =====<br />
<br />
* TBD<br />
<br />
==== API ====<br />
<br />
* TBD<br />
<br />
==== Scheduler ====<br />
<br />
* TBD<br />
<br />
==== Other Features ====<br />
<br />
* TBD<br />
<br />
=== Known Issues ===<br />
<br />
* TBD<br />
<br />
=== Upgrade Notes ===<br />
<br />
* The nova-manage flavor subcommand is deprecated in Juno and will be removed in the 2015.1 (K) release: https://review.openstack.org/#/c/86122/<br />
* https://review.openstack.org/#/c/102212/<br />
<br />
== OpenStack Image Service (Glance) ==<br />
=== Key New Features ===<br />
* TBD<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
* The ability to upload a public image is now admin-only by default. To continue to use the previous behaviour, edit the publicize_image flag in etc/policy.json to remove the role restriction.<br />
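For illustration, a minimal sketch of the relevant entry in <code>etc/policy.json</code> (the exact default rule may vary between releases); the new default restricts publicizing images to the admin role:<br />

```json
{
    "publicize_image": "role:admin"
}
```

To restore the previous open behaviour, the rule can be relaxed to <code>"publicize_image": ""</code>.<br />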
<br />
== OpenStack Dashboard (Horizon) ==<br />
<br />
=== Key New Features ===<br />
* TBD<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Identity (Keystone) ==<br />
<br />
=== Key New Features ===<br />
<br />
* TBD<br />
<br />
=== Known Issues ===<br />
<br />
* TBD<br />
<br />
=== Upgrade Notes ===<br />
<br />
* LDAP/AD configuration: All configuration options containing the term "tenant" have been deprecated in favor of similarly named configuration options using the term "project" (for example, <code>tenant_id_attribute</code> has been replaced by <code>project_id_attribute</code>).<br />
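For example, a minimal sketch of the rename as it would appear in <code>keystone.conf</code> (the attribute value <code>ou</code> is illustrative only, not a recommended setting):<br />

```ini
[ldap]
# Deprecated option name (Icehouse and earlier):
# tenant_id_attribute = ou
# Replacement option name (Juno onward):
project_id_attribute = ou
```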
<br />
== OpenStack Network Service (Neutron) ==<br />
=== Key New Features ===<br />
Migration to oslo.messaging library for RPC communication.<br />
<br />
=== Known Issues ===<br />
None yet.<br />
<br />
=== Upgrade Notes ===<br />
<br />
Attribute-level policies dependent on resources are no longer enforced, meaning that some older policies from Icehouse are no longer needed (e.g. "get_port:binding:vnic_type": "rule:admin_or_owner").<br />
<br />
== OpenStack Block Storage (Cinder) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Telemetry (Ceilometer) ==<br />
=== Key New Features ===<br />
* TBD<br />
<br />
=== Known Issues ===<br />
* TBD<br />
<br />
=== Upgrade Notes ===<br />
* TBD<br />
<br />
== OpenStack Orchestration (Heat) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
None yet<br />
<br />
== OpenStack Database service (Trove) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
None yet<br />
<br />
== OpenStack Documentation ==<br />
<br />
=== Key New Features ===<br />
<br />
* TBD<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
None yet</div>Russellbhttps://wiki.openstack.org/w/index.php?title=TelcoWorkingGroup&diff=57240TelcoWorkingGroup2014-07-02T16:55:14Z<p>Russellb: /* Active Blueprints */</p>
<hr />
<div>= Weekly NFV sub-team IRC meeting =<br />
'''MEETING TIME: (Proposed, subject to change) Wednesdays, [http://www.timeanddate.com/worldclock/fixedtime.html?hour=14&min=0&sec=0&p1=0 1400 UTC], #openstack-meeting-alt, starting June 4'''<br />
<br />
= What is NFV? =<br />
<br />
NFV stands for Network Functions Virtualization. It refers to replacing the usually stand-alone appliances used for high- and low-level network functions, such as firewalls, network address translation, intrusion detection, caching, gateways and accelerators, with a virtual instance or set of virtual instances, called Virtual Network Functions (VNFs). In other words, it can be seen as replacing some hardware network appliances with high-performance software that takes advantage of high-performance para-virtual devices, other acceleration mechanisms, and smart placement of instances. NFV originated in a working group of the [http://www.etsi.org/ European Telecommunications Standards Institute (ETSI)] whose work is the basis of most current implementations. The main consumers of NFV are service providers (telecommunication providers and the like) who are looking to accelerate the deployment of new network services; to do that, they need to eliminate the constraint of the slow renewal cycle of hardware appliances, which do not autoscale and limit innovation.<br />
<br />
NFV support for OpenStack aims to provide the best possible infrastructure for such workloads to be deployed in, while respecting the design principles of an IaaS cloud. For a VNF to perform correctly in a cloud world, the underlying infrastructure needs to provide a number of capabilities which range from scheduling to networking and from orchestration to monitoring. This means that to correctly support NFV use cases in OpenStack, changes may be required across most, if not all, of the main OpenStack projects, starting with Neutron and Nova.<br />
<br />
For more details on NFV, the following references may be useful:<br />
* [http://www.etsi.org/technologies-clusters/technologies/nfv Definition of NFV by ETSI]<br />
* [http://en.wikipedia.org/wiki/Network_Functions_Virtualization Definition of NFV on Wikipedia]<br />
<br />
= Who we are =<br />
<br />
''' Add your name here if you're joining the meetings - IRC nicks are pretty anonymous unless you give us a clue! Please keep the list in alphabetical order by IRC nick. '''<br />
{| class="wikitable"<br />
|-<br />
! Nick !! Name !! Affiliation !! Interests<br />
|-<br />
| adrian-hoban || Adrian Hoban || Intel OpenStack team || NFV & SDN extensions across OpenStack projects<br />
|-<br />
| armax || Armando Migliaccio || HP || Neutron, NFV, SDN<br />
|-<br />
| arosen || Aaron Rosen || nicira/vmware || Automation, SDN/Neutron/NFV, Openstack<br />
|-<br />
| alank35 || Alan Kavanagh || Ericsson Inc || NFV & SDN & Neutron and ODL<br />
|-<br />
| balajip || Balaji Padnala || Freescale OpenStack Team || NFV, SDN, SRIOV, Libvirt, Neutron, Nova, Service VMs and Service Chaining<br />
|-<br />
| banix || Mohammad Banikazemi || IBM || NFV, SDN, Neutron, OpenStack<br />
|-<br />
| bauzas || Sylvain Bauza || Red Hat || SLA and Scheduling in Nova<br />
|-<br />
| boh.ricky || boh.ricky || Huawei || Help better support NFV<br />
|-<br />
| cdub || Chris Wright || Red Hat || NFV and SDN work between OpenStack and OpenDaylight<br />
|-<br />
| cgoncalves || Carlos Goncalves || Instituto de Telecomunicacoes || Service Function Chaining, Traffic Steering<br />
|-<br />
| cliljenstolpe || Christopher Liljenstolpe || Metaswitch Networks || Neutron, orchestration, network architecture<br />
|-<br />
| cloudon || Calum Loudon || Metaswitch Networks || Neutron, data plane acceleration, orchestration<br />
|-<br />
| danpb || Daniel Berrange || Red Hat || Libvirt, KVM & Nova performance & enablement for NFV<br />
|-<br />
| davidpc || David Perez Caparros || DOCOMO Euro-Labs || Supporting NFV in OpenStack<br />
|-<br />
| diga || Digambar Patil || Persistent System Ltd. || Neutron, Nova, SDN, NFV<br />
|-<br />
| dmitry_huawei || Dmitry Meytin || Huawei || MANO integration with OpenStack<br />
|-<br />
| eranb || Eran Bello || ASOCS || NFV compute and accelerator resources integration with OpenStack<br />
|-<br />
| fjramons || Francisco-Javier Ramon Salguero || Telefonica || Libvirt, KVM & Nova performance & enablement for NFV<br />
|-<br />
| ggarcia || Gerardo Garcia || Telefonica || Libvirt, KVM & Nova performance & enablement for NFV<br />
|-<br />
| heyongli || Yongli He || Intel Openstack team || nova enabling NFV SRIOV PCI passthrough<br />
|-<br />
| ian_ott || Ian Jolliffe || Wind River || Openstack, NFV, Networking<br />
|-<br />
| ijw || Ian Wells || Cisco's Openstack team || Vendor neutral NFV infrastructure, Cisco NFV appliances<br />
|-<br />
| imendel || Itai Mendelsohn || Alcatel-Lucent || NFV in general and how OpenStack can enable it<br />
|-<br />
| irenab || Irena Berezovsky || Mellanox || NFV, SDN, NFV SRIOV PCI passthrough<br />
|-<br />
| jmsoares || Joao Soares || Portugal Telecom || Service Function Chaining, Traffic Steering<br />
|-<br />
| kalyan || Kalyanjeet Gogoi || Juniper Networks || NFV integration with OpenStack<br />
|-<br />
| LouisF || Louis Fourie || Huawei || NFV-MANO, Service Function chaining, Traffic steering<br />
|-<br />
| lukego || Luke Gorrie || Snabb || Making open source NFV work for Deutsche Telekom's TeraStream project<br />
|-<br />
| malini1 || Malini Bhandaru || Intel || NFV, Adv. Service VMs, compute node capabilities, security<br />
|-<br />
| martin_t || Martin Taylor || Metaswitch Networks || Neutron networking and data plane acceleration<br />
|-<br />
| mjbright || Mike Bright || HP || Openstack, NFV/SDN<br />
|-<br />
| mpetrus || Margaret Petrus || VMware || NFV-MANO, OpenStack for Service Orchestration<br />
|-<br />
| nbal || Nuri Bal || Cyan || OpenStack support of NFV, MANO in particular<br />
|-<br />
| nbouthors || Nicolas Bouthors || Qosmos || Service Chaining, Classifier VNFC<br />
|-<br />
| nijaba || Nick Barcet || eNovance || NFV support on OpenStack<br />
|-<br />
| radek ||Radoslaw Smigielski||Alcatel-Lucent||OpenStack+NFV, SR-IOV, PCI passthrough, KVM performance<br />
|-<br />
| r-mibu || Ryota Mibu || NEC || Nova enhancement for NFV<br />
|-<br />
| rohit404 ||Rohit Agarwalla||Cisco's OpenStack team||OpenStack+NFV<br />
|-<br />
| rseth|| Rajeev Seth || Sonus Networks || NFV integration with OpenStack<br />
|-<br />
| runarut|| Larry Pearson || AT&T || OpenStack as NFVI, VNF Service Chaining<br />
|-<br />
| russellb || Russell Bryant || Project: OpenStack TC, Nova. Corporate: Red Hat || Nova. Ensuring requirements and designs are consumable by OpenStack developers. Reviewing designs and implementations.<br />
|-<br />
| s3wong || Stephen Wong || Midokura || NFV support on OpenStack<br />
|-<br />
| sasud || S Sud || Intel || NFV and SDN use case PoCs<br />
|-<br />
| sgordon || Steve Gordon || Red Hat || NFV and SDN enablement across OpenStack projects but particularly Nova and the Libvirt driver.<br />
|-<br />
| shane-wang || Shane Wang || Intel || NFV support on OpenStack, VM QoS in Nova, PCI/SR-IOV support<br />
|-<br />
| smazziotta || Sandro Mazziotta || eNovance || OpenStack extensions required to meet NFV requirements<br />
|-<br />
| thomnico || Nicolas Thomas || Canonical || Allowing OpenStack to be gradually used in NFV type of deployments ETSI NFV IG participant.<br />
|-<br />
| ulikleber || Ulrich Kleber || Huawei || Help better support NFV<br />
|-<br />
| vjardin || Vincent JARDIN || 6WIND || Help using DPDK applications efficiently and ivshmem to start with (memnic)<br />
|-<br />
| yamahata || Isaku Yamahata || Intel || Neutron, servicevm, service chaining, traffic steering<br />
|-<br />
| yjiang5 || Yunhong Jiang || Intel || Nova enablement for NFV<br />
|-<br />
| yukiarbel || Yuki Arbel || Alcatel Lucent || NFV, Openstack for NFV<br />
|-<br />
| zeddii || Bruce Ashfield || Wind River || KVM, libvirt, nova and platform awareness for NFV<br />
|-<br />
| zuqiang || Zu Qiang || Ericsson || NFV support in OpenStack<br />
|}<br />
<br />
= Mission statement =<br />
<br />
<blockquote>The sub-team aims to define the use cases and identify and prioritise the requirements which are needed to run Network Function Virtualization (NFV) workloads on top of OpenStack. This work includes identifying functional gaps, creating blueprints, submitting and reviewing patches to the relevant OpenStack projects and tracking their completion in support of NFV.</blockquote><br />
<br />
<blockquote>The requirements expressed by this group should each have a test case which can be verified using an open source implementation. This ensures that tests can be run without any special hardware or proprietary software, which is key for continuous integration testing in the OpenStack gate. If special setups are required which cannot be reproduced on the standard OpenStack gate, the use case's proponent will have to provide a third-party CI setup, accessible by OpenStack Infra, which will be used to validate developments against.</blockquote><br />
<br />
[[IRC|OpenStack IRC details]]<br />
<br />
Chair: Russell Bryant (russellb)<br />
<br />
== Agenda for next meeting ==<br />
<br />
Wednesday, June 4 at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=14&min=0&sec=0&p1=0 1400 UTC] in #openstack-meeting-alt.<br />
<br />
Agenda: [https://etherpad.openstack.org/p/nfv-meeting-agenda]<br />
<br />
== Previous meetings ==<br />
<br />
* [http://eavesdrop.openstack.org/meetings/nfv/ Meeting logs]<br />
* [https://etherpad.openstack.org/p/juno-nfv-bof Juno Design Summit NFV BoF]<br />
<br />
=Use Cases=<br />
<br />
{| class="wikitable"<br />
|-<br />
! Workload Type !! Description || Characteristics !! Examples !! Requirements<br />
|-<br />
| Data plane || Tasks related to packet handling in an end-to-end communication between edge applications. ||<br />
* Intensive I/O requirements - potentially millions of small VoIP packets per second per core<br />
* Intensive memory R/W requirements<br />
||<br />
* CDN cache node<br />
* Router<br />
* IPSec tunneller<br />
* Session Border Controller - media relay function<br />
|| - <br />
|-<br />
| Control plane || Any other communication between network functions that is not directly related to the end-to-end data communication between edge applications. ||<br />
* Less intensive I/O and R/W requirements than data plane, due to lower packets per second<br />
* More complicated transactions resulting in (potentially) higher CPU load per packet.<br />
||<br />
* PPP session management<br />
* Border Gateway Protocol (BGP) routing<br />
* Remote Authentication Dial In User Service (RADIUS) authentication in a Broadband Remote Access Server (BRAS) network function <br />
* Session Border Controller - SIP signaling function<br />
* IMS core functions (S-CSCF / I-CSCF / BGCF)<br />
|| - <br />
|-<br />
| Signal processing || All network function tasks related to digital processing<br />
|| <br />
* Very sensitive to CPU processing capacity.<br />
* Delay sensitive.<br />
||<br />
* Fast Fourier Transform (FFT) decoding<br />
* Encoding in a Cloud-Radio Access Network (C-RAN) Base Band Unit (BBU)<br />
* Audio transcoding in a Session Border Controller<br />
|| - <br />
|-<br />
| Storage || All tasks related to disk storage.<br />
||<br />
* Varying disk, SAN, or NAS, I/O requirements based on applications, ranging from low to extremely high intensity.<br />
||<br />
* Logger<br />
* Network probe<br />
|| - <br />
|-<br />
|}<br />
<br />
== ETSI-NFV Use Cases - High Level Description ==<br />
<br />
===Use Case #1: Network Functions Virtualisation Infrastructure as a Service===<br />
<br />
This is a reasonably generic IaaS requirement. <br />
<br />
===Use Case #2: Virtual Network Function as a Service (VNFaaS)===<br />
This primarily targets Customer Premise Equipment (CPE) devices such as access routers, enterprise firewall, WAN optimizers etc. with some Provider Edge devices possible at a later date. ETSI-NFV Performance & portability considerations will apply to deployments that strive to meet high performance and low latency considerations.<br />
<br />
===Use Case #3: Virtual Network Platform as a Service (VNPaaS)===<br />
This is similar to Use Case #2, but at the service level: larger in scale and not restricted to the "app" level.<br />
<br />
===Use Case #4: VNF Forwarding Graphs===<br />
Dynamic connectivity between apps in a "service chain".<br />
<br />
===Use Case #5: Virtualisation of Mobile Core Network and IMS===<br />
Primarily focusing on Evolved Packet Core appliances such as the Mobility Management Entity (MME), Serving Gateway (S-GW), etc. and the IP Multimedia Subsystem (IMS).<br />
<br />
===Use Case #6: Virtualisation of Mobile base station===<br />
Focusing on parts of the Radio Access Network such as eNodeBs, Radio Link Control and Packet Data Convergence Protocol, etc.<br />
<br />
===Use Case #7: Virtualisation of the Home Environment===<br />
Similar to Use Case 2, but with a focus on virtualising residential devices instead of enterprise devices. Covers DHCP, NAT, PPPoE, Firewall devices, etc. <br />
<br />
===Use Case #8: Virtualisation of CDNs===<br />
Content Delivery Networks focusing on video traffic delivery.<br />
<br />
===Use Case #9: Fixed Access Network Functions Virtualisation===<br />
Wireline-related access technologies.<br />
<br />
==Contributed Use Cases==<br />
<br />
===Session Border Controller===<br />
<br />
Contributed by: Calum Loudon<br />
<br />
====Description====<br />
<br />
Perimeta Session Border Controller, Metaswitch Networks. Sits on the edge of a service provider's network and polices SIP and RTP (i.e. VoIP) control and media traffic passing over the access network between end-users and the core network, or over the trunk network between the core and another SP.<br />
<br />
====Characteristics====<br />
<br />
* Fast and guaranteed performance:<br />
** Performance on the order of several million VoIP packets (~64-220 bytes depending on codec) per second per core, achievable on COTS hardware.<br />
** Guarantees provided via SLAs.<br />
* Full high availability<br />
** No single point of failure, service continuity over both software and hardware failures.<br />
* Elastically scalable<br />
** NFV orchestrator adds and removes instances in response to network demands.<br />
* Traffic segregation (ideally)<br />
** Separate traffic from different customers via VLANs.<br />
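As a rough sanity check on the first characteristic, the wire load implied by a given packet rate and packet size can be sketched as follows (the 2 million pps figure below is illustrative, not Metaswitch's number):<br />

```python
def voip_wire_load_gbps(packets_per_sec, packet_bytes):
    """Approximate wire load in Gbit/s for a VoIP packet stream."""
    return packets_per_sec * packet_bytes * 8 / 1e9

# Illustrative: 2 million 220-byte packets per second on one core
# works out to roughly 3.5 Gbit/s of media traffic.
load = voip_wire_load_gbps(2e6, 220)
print(round(load, 2))
```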
<br />
====Requirements====<br />
<br />
* Fast & guaranteed performance (network)<br />
** To meet the packets-per-second target we need either SR-IOV or an accelerated, DPDK-like data plane:<br />
*** "SR-IOV Networking Support" (https://blueprints.launchpad.net/nova/+spec/pci-passthrough-sriov)<br />
*** "Open vSwitch to use patch ports" (https://blueprints.launchpad.net/neutron/+spec/openvswitch-patch-port-use)<br />
*** "Support userspace vhost in ovs vif bindings" (https://blueprints.launchpad.net/nova/+spec/libvirt-ovs-use-usvhost)<br />
*** "Snabb NFV driver" (https://blueprints.launchpad.net/neutron/+spec/snabb-nfv-mech-driver)<br />
*** "VIF_VHOSTUSER" (https://blueprints.launchpad.net/nova/+spec/vif-vhostuser)<br />
<br />
* Fast & guaranteed performance (compute):<br />
** To optimize data rate we need to keep all working data in L3 cache:<br />
***"Virt driver pinning guest vCPUs to host pCPUs" (https://blueprints.launchpad.net/nova/+spec/virt-driver-cpu-pinning)<br />
** To optimize data rate we also need to bind to a NIC on the host CPU's bus:<br />
*** "I/O (PCIe) Based NUMA Scheduling" (https://blueprints.launchpad.net/nova/+spec/input-output-based-numa-scheduling)<br />
** To offer guaranteed performance, as opposed to 'best effort', we need:<br />
** To control placement of cores, minimise TLB misses and get accurate info about core topology (threads vs. hyperthreads etc.); this maps to the remaining blueprints on NUMA & vCPU topology:<br />
*** "Virt driver guest vCPU topology configuration" (https://blueprints.launchpad.net/nova/+spec/virt-driver-vcpu-topology)<br />
*** "Virt driver guest NUMA node placement & topology" (https://blueprints.launchpad.net/nova/+spec/virt-driver-numa-placement)<br />
*** "Virt driver large page allocation for guest RAM" (https://blueprints.launchpad.net/nova/+spec/virt-driver-large-pages)<br />
** May need support to prevent 'noisy neighbours' from stealing L3 cache - unproven, and no blueprint we're aware of.<br />
<br />
* High availability:<br />
** Requires anti-affinity rules to prevent the active and passive instances from being placed on the same host - already supported, so no gap.<br />
<br />
* Elastic scaling:<br />
** Readily achievable using existing features - no gap.<br />
<br />
* VLAN trunking:<br />
** "VLAN trunking networks for NFV" (https://blueprints.launchpad.net/neutron/+spec/nfv-vlan-trunks et al).<br />
<br />
* Other:<br />
** Being able to offer apparent traffic separation (e.g. service traffic vs. application management) over single network is also useful in some cases.<br />
*** "Support two interfaces from one VM attached to the same network" (https://blueprints.launchpad.net/nova/+spec/multiple-if-1-net)<br />
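The compute-side requirements above eventually boil down to flavor metadata plus a scheduler hint at boot time. A minimal sketch of what a Perimeta-style flavor might carry, using the extra-spec key names proposed in the CPU pinning, NUMA placement, and large-page blueprints linked above (the key names are assumptions until those specs merge; the server-group UUID is a placeholder):<br />

```python
def vnf_flavor_extra_specs(numa_nodes=1):
    """Extra specs for a performance-sensitive VNF flavor (sketch).

    Key names follow the virt-driver-cpu-pinning, virt-driver-numa-placement
    and virt-driver-large-pages blueprint proposals; they may change before
    those specs merge.
    """
    return {
        "hw:cpu_policy": "dedicated",      # pin guest vCPUs to host pCPUs
        "hw:numa_nodes": str(numa_nodes),  # confine guest to N host NUMA nodes
        "hw:mem_page_size": "large",       # back guest RAM with huge pages
    }

# High availability: boot active and passive units into an anti-affinity
# server group, passed to the scheduler as a hint (UUID is a placeholder).
boot_hints = {"group": "SERVER-GROUP-UUID"}

specs = vnf_flavor_extra_specs(numa_nodes=2)
```

The anti-affinity piece needs no new blueprint, matching the "already supported, so no gap" note above.<br />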
<br />
== References: ==<br />
* [http://docbox.etsi.org/ISG/NFV/Open/Latest_Drafts/NFV-PER001v009%20-%20NFV%20Performance%20&%20Portability%20Best%20Practises.pdf Network Functions Virtualization NFV Performance & Portability Best Practices - DRAFT]<br />
* ETSI-NFV Use Cases V1.1.1 [http://www.etsi.org/deliver/etsi_gs/NFV/001_099/001/01.01.01_60/gs_NFV001v010101p.pdf]<br />
<br />
== Related Teams and Projects ==<br />
* OpenStack Congress - Policy as a Service [https://wiki.openstack.org/wiki/Congress]<br />
<br />
= Development Efforts =<br />
<br />
== Active Bugs ==<br />
<br />
Add the "nfv" tag to bugs to have them appear in these queries:<br />
<br />
* Nova: https://bugs.launchpad.net/nova/+bugs?field.tag=nfv<br />
* Neutron: https://bugs.launchpad.net/neutron/+bugs?field.tag=nfv<br />
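The queries above are just the standard Launchpad bug search filtered on the tag, so the same URL can be generated for any project (pure-stdlib sketch, no Launchpad access needed):<br />

```python
from urllib.parse import urlencode

def tagged_bug_query(project, tag="nfv"):
    """Build the Launchpad bug-search URL filtering a project's bugs by tag."""
    return "https://bugs.launchpad.net/%s/+bugs?%s" % (
        project, urlencode({"field.tag": tag}))

for project in ("nova", "neutron"):
    print(tagged_bug_query(project))
```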
<br />
== Active Blueprints ==<br />
The NFV use case mappings identified below are from the perspective of higher-performing use cases. Please note that there are many possible configurations of devices for each of these use cases, and it is not implied that they will all need the proposed capability in the relevant blueprint. <br />
<br />
There is an automatically updated Gerrit dashboard for all specs and code under review here: http://nfv.russellbryant.net<br />
<br />
PRIORITY - repeatedly mentioned at the BOF as blockers:<br />
<br />
{| class="wikitable"<br />
|-<br />
! Description !! Project(s) !! Status !! Blueprint(s) !! Design(s) !! ETSI-NFV Use Cases<br />
|-<br />
| Support two interfaces from one VM attached to the same network || Nova || first BP submit || https://blueprints.launchpad.net/nova/+spec/multiple-if-1-net || https://review.openstack.org/97716 https://review.openstack.org/98488 (patch?) || <br />
* #1 is a broadly applicable IaaS requirement.<br />
* #2 TBD?<br />
* #3 TBD?<br />
* #4 TBD?<br />
* #5 TBD?<br />
* #6 TBD?<br />
* #7 TBD?<br />
* #8 TBD?<br />
* #9 TBD?<br />
|-<br />
| VLAN trunking networks for NFV || Neutron || first BP submit <br />
| https://blueprints.launchpad.net/neutron/+spec/nfv-vlan-trunks https://blueprints.launchpad.net/neutron/+spec/l2-gateway https://blueprints.launchpad.net/neutron/+spec/vlan-aware-vms<br />
| https://review.openstack.org/#/c/100278/ https://review.openstack.org/97714 https://review.openstack.org/#/c/94612/ https://review.openstack.org/#/c/92541/ (patch) || <br />
* #1 is a broadly applicable IaaS requirement.<br />
* #2 TBD?<br />
* #3 TBD?<br />
* #4 TBD?<br />
* #5 TBD?<br />
* #6 TBD?<br />
* #7 TBD?<br />
* #8 TBD?<br />
* #9 TBD?<br />
|-<br />
| Permit unaddressed interfaces for NFV use cases || Neutron || first BP submit <br />
| https://blueprints.launchpad.net/neutron/+spec/nfv-unaddressed-interfaces https://blueprints.launchpad.net/neutron/+spec/ml2-ovs-portsecurity<br />
| https://review.openstack.org/97715 https://review.openstack.org/#/c/99873/ || <br />
* #1 is a broadly applicable IaaS requirement.<br />
* #2 TBD?<br />
* #3 TBD?<br />
* #4 TBD?<br />
* #5 TBD?<br />
* #6 TBD?<br />
* #7 TBD?<br />
* #8 TBD?<br />
* #9 TBD?<br />
|-<br />
|}<br />
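For the first priority item, the gap is simply that Nova rejects a NIC list which repeats a network; the multiple-if-1-net blueprint would permit a boot request carrying a list like the following sketch (the net-id is a placeholder):<br />

```python
PLACEHOLDER_NET = "11111111-2222-3333-4444-555555555555"  # placeholder net-id

def nics_on_one_network(net_id, count=2):
    """NIC list for a boot request attaching `count` vNICs to one network.

    Nova currently rejects the duplicate net-id; multiple-if-1-net would
    permit it, which SBC-style VNFs need for separate legs on one network.
    """
    return [{"net-id": net_id} for _ in range(count)]

nics = nics_on_one_network(PLACEHOLDER_NET)
```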
<br />
The rest:<br />
<br />
Neutron port enhancements related to the ServiceVM effort are summarized at https://wiki.openstack.org/wiki/ServiceVM/neutron-port-attributes<br />
<br />
{| class="wikitable"<br />
|-<br />
! Description !! Project(s) !! Status !! Blueprint(s) !! Design(s) !! ETSI-NFV Use Cases<br />
|-<br />
| SR-IOV Networking Support || Nova || Design Approved || https://blueprints.launchpad.net/nova/+spec/pci-passthrough-sriov || https://review.openstack.org/#/c/86606/ ||<br />
* #1 is a broadly applicable IaaS requirement. <br />
* #2 Needed for performance reasons.<br />
* #3 TBD?<br />
* #4 _Potential_ intersect if forwarding graph makes any particular request about the port connectivity. <br />
* #5 Needed for performance reasons.<br />
* #6 Needed for performance reasons.<br />
* #7 Needed for performance reasons.<br />
* #8 TBD?<br />
* #9 Needed for performance reasons.<br />
|- <br />
| colspan="3" | ''Support for NUMA and VCPU topology configuration'' || ''https://blueprints.launchpad.net/nova/+spec/nova-virt-numa-and-vcpu-topology'' || ||<br />
* #1 is a broadly applicable IaaS requirement. <br />
* #2 Needed for performance reasons.<br />
* #3 TBD?<br />
* #4 TBD?<br />
* #5 Needed for performance reasons.<br />
* #6 Needed for performance reasons.<br />
* #7 Needed for performance reasons.<br />
* #8 Needed for performance reasons.<br />
* #9 Needed for performance reasons.<br />
|-<br />
|<br />
: Virt driver guest vCPU topology configuration <br />
|| Nova || Design Approved || https://blueprints.launchpad.net/nova/+spec/virt-driver-vcpu-topology || https://review.openstack.org/93510 ||<br />
* #1 is a broadly applicable IaaS requirement. <br />
* #2 Needed for performance reasons.<br />
* #3 TBD?<br />
* #4 TBD?<br />
* #5 Needed for performance reasons.<br />
* #6 Needed for performance reasons.<br />
* #7 Needed for performance reasons.<br />
* #8 Needed for performance reasons.<br />
* #9 Needed for performance reasons.<br />
|- <br />
|<br />
: Virt driver guest NUMA node placement & topology<br />
|| Nova || Design review Approved || https://blueprints.launchpad.net/nova/+spec/virt-driver-numa-placement || https://review.openstack.org/93636 ||<br />
* #1 is a broadly applicable IaaS requirement. <br />
* #2 Needed for performance reasons.<br />
* #3 TBD?<br />
* #4 TBD?<br />
* #5 Needed for performance reasons.<br />
* #6 Needed for performance reasons.<br />
* #7 Needed for performance reasons.<br />
* #8 Needed for performance reasons.<br />
* #9 Needed for performance reasons.<br />
|-<br />
|<br />
: Virt driver large page allocation for guest RAM [[#dupe|*]]<br />
|| Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/virt-driver-large-pages || https://review.openstack.org/93653 ||<br />
* #1 is a broadly applicable IaaS requirement. <br />
* #2 Needed for performance reasons.<br />
* #3 TBD?<br />
* #4 TBD?<br />
* #5 Needed for performance reasons.<br />
* #6 Needed for performance reasons.<br />
* #7 Needed for performance reasons.<br />
* #8 Needed for performance reasons.<br />
* #9 Needed for performance reasons.<br />
|-<br />
|<br />
: Virt driver pinning guest vCPUs to host pCPUs <br />
|| Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/virt-driver-cpu-pinning || https://review.openstack.org/93652 ||<br />
* #1 is a broadly applicable IaaS requirement. <br />
* #2 Needed for performance reasons.<br />
* #3 TBD?<br />
* #4 TBD?<br />
* #5 Needed for performance reasons.<br />
* #6 Needed for performance reasons.<br />
* #7 Needed for performance reasons.<br />
* #8 Needed for performance reasons.<br />
* #9 Needed for performance reasons.<br />
|-<br />
|<br />
: I/O (PCIe) Based NUMA Scheduling <br />
|| Nova || Design Approved || https://blueprints.launchpad.net/nova/+spec/input-output-based-numa-scheduling || https://review.openstack.org/#/c/100871/ ||<br />
* #1 is a broadly applicable IaaS requirement. <br />
* #2 Needed for performance reasons.<br />
* #3 TBD?<br />
* #4 TBD?<br />
* #5 Needed for performance reasons.<br />
* #6 Needed for performance reasons.<br />
* #7 Needed for performance reasons.<br />
* #8 Needed for performance reasons.<br />
* #9 Needed for performance reasons.<br />
|-<br />
| Soft affinity support for server groups || Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/soft-affinity-for-server-group || https://review.openstack.org/91328 ||<br />
* #1 is a broadly applicable IaaS requirement. <br />
* #2 TBD?<br />
* #3 TBD?<br />
* #4 TBD?<br />
* #5 TBD?<br />
* #6 TBD?<br />
* #7 TBD?<br />
* #8 TBD?<br />
* #9 TBD?<br />
|-<br />
| Open vSwitch-based Security Groups: Open vSwitch Implementation of FirewallDriver || Neutron || Design review in progress || https://blueprints.launchpad.net/neutron/+spec/ovs-firewall-driver || https://review.openstack.org/89712 ||<br />
* #1 is a broadly applicable IaaS requirement. <br />
* #2 Needed in the non-SR-IOV based deployments.<br />
* #3 TBD?<br />
* #4 vSwitch configuration may be needed to complete the forwarding graph (service chain). <br />
* #5 Needed in the non-SR-IOV based deployments.<br />
* #6 TBD?<br />
* #7 Needed in the non-SR-IOV based deployments.<br />
* #8 TBD?<br />
* #9 Needed in the non-SR-IOV based deployments.<br />
|-<br />
| Framework for Advanced Services in Virtual Machines || Neutron || || https://blueprints.launchpad.net/neutron/+spec/adv-services-in-vms || ||<br />
* #1 is a broadly applicable IaaS requirement. <br />
* #2 Potential lifecycle management support<br />
* #3 TBD?<br />
* #4 TBD?<br />
* #5 Potential lifecycle management support<br />
* #6 Potential lifecycle management support<br />
* #7 Potential lifecycle management support<br />
* #8 Potential lifecycle management support<br />
* #9 Potential lifecycle management support<br />
|-<br />
| Neutron Services Insertion, Chaining, and Steering || Neutron || Approved || https://blueprints.launchpad.net/neutron/+spec/neutron-services-insertion-chaining-steering || https://review.openstack.org/93524 ||<br />
NOTE: this service chaining BP is all about chaining aaS services, not chaining tenant VNFs. Is this the one we want, or do we require a new BP?<br />
* #1 is a broadly applicable IaaS requirement. <br />
* #2 May need to chain multiple functions to deliver a service.<br />
* #3 TBD?<br />
* #4 Closely coupled requirement needed to deliver on a forwarding graph.<br />
* #5 May need to chain multiple functions to deliver a service.<br />
* #6 May need to chain multiple functions to deliver a service.<br />
* #7 May need to chain multiple functions to deliver a service.<br />
* #8 May need to chain multiple functions to deliver a service.<br />
* #9 May need to chain multiple functions to deliver a service.<br />
|-<br />
| Schedule vms per flavour cpu overcommit || Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/flavor-cpu-overcommit || https://review.openstack.org/88286 ||<br />
* #1 is a broadly applicable IaaS requirement. <br />
* #2 Needed for performance reasons.<br />
* #3 TBD?<br />
* #4 TBD?<br />
* #5 Needed for performance reasons.<br />
* #6 Needed for performance reasons.<br />
* #7 Needed for performance reasons.<br />
* #8 Needed for performance reasons.<br />
* #9 Needed for performance reasons.<br />
|-<br />
| OVF Meta-Data Import via Glance || Glance || Submitted || https://blueprints.launchpad.net/glance/+spec/epa-ovf-meta-data-import || TBD ||<br />
* #1 is a broadly applicable IaaS requirement. <br />
* #2 Needed as one optional path to auto import platform feature requests to meet performance targets. <br />
* #3 TBD?<br />
* #4 TBD?<br />
* #5 Needed as one optional path to auto import platform feature requests to meet performance targets. <br />
* #6 Needed as one optional path to auto import platform feature requests to meet performance targets. <br />
* #7 Needed as one optional path to auto import platform feature requests to meet performance targets. <br />
* #8 Needed as one optional path to auto import platform feature requests to meet performance targets. <br />
* #9 Needed as one optional path to auto import platform feature requests to meet performance targets. <br />
|-<br />
| colspan="6" | ''Support for high performance Intel(R) Data Plane Development Kit based vSwitches''<br />
|-<br />
|<br />
: Open vSwitch to use patch ports in place of veth pairs for vlan n/w <br />
|| Neutron || Design review in progress || https://blueprints.launchpad.net/neutron/+spec/openvswitch-patch-port-use || https://review.openstack.org/96183 ||<br />
* #1 is a broadly applicable IaaS requirement. <br />
* #2 Needed in the non-SR-IOV based deployments.<br />
* #3 TBD?<br />
* #4 Closely coupled requirement needed to deliver on a forwarding graph.<br />
* #5 Needed in the non-SR-IOV based deployments.<br />
* #6 TBD?<br />
* #7 Needed in the non-SR-IOV based deployments.<br />
* #8 TBD?<br />
* #9 Needed in the non-SR-IOV based deployments.<br />
|-<br />
|<br />
: Support userspace vhost in ovs vif bindings <br />
|| Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/libvirt-ovs-use-usvhost || https://review.openstack.org/95805 ||<br />
* #1 is a broadly applicable IaaS requirement. <br />
* #2 Needed in the non-SR-IOV based deployments.<br />
* #3 TBD?<br />
* #4 Closely coupled requirement needed to deliver on a forwarding graph.<br />
* #5 Needed in the non-SR-IOV based deployments.<br />
* #6 TBD?<br />
* #7 Needed in the non-SR-IOV based deployments.<br />
* #8 TBD?<br />
* #9 Needed in the non-SR-IOV based deployments.<br />
|-<br />
| NIC state aware scheduling || Nova || Rejected || https://blueprints.launchpad.net/nova/+spec/nic-state-aware-scheduling || https://review.openstack.org/87978 ||<br />
* #1 is a broadly applicable IaaS requirement. <br />
* #2 Needed to help with service delivery<br />
* #3 TBD?<br />
* #4 Need to understand if the ports are up when deploying the service chain.<br />
* #5 Needed to help with service delivery<br />
* #6 Needed to help with service delivery<br />
* #7 Needed to help with service delivery<br />
* #8 Needed to help with service delivery<br />
* #9 Needed to help with service delivery<br />
|-<br />
| Add PCI and PCIe device capability aware scheduling || Nova || Abandoned || https://blueprints.launchpad.net/nova/+spec/pci-device-capability-aware-scheduling || https://review.openstack.org/92843 ||<br />
* #1 is a broadly applicable IaaS requirement. <br />
* #2 Needed for performance reasons<br />
* #3 TBD?<br />
* #4 TBD?<br />
* #5 Needed for performance reasons<br />
* #6 Needed for performance reasons<br />
* #7 Needed for performance reasons<br />
* #8 Needed for performance reasons<br />
* #9 Needed for performance reasons<br />
|-<br />
| [http://snabb.co/nfv.html Snabb NFV] mechanism driver || Neutron || Design review in progress || https://blueprints.launchpad.net/neutron/+spec/snabb-nfv-mech-driver || https://review.openstack.org/95711 ||<br />
* #1 is a broadly applicable IaaS requirement. <br />
* #2 Needed in the non-SR-IOV based deployments.<br />
* #3 TBD?<br />
* #4 TBD?<br />
* #5 Needed in the non-SR-IOV based deployments.<br />
* #6 TBD?<br />
* #7 Needed in the non-SR-IOV based deployments.<br />
* #8 TBD?<br />
* #9 Needed in the non-SR-IOV based deployments.<br />
|-<br />
| VIF_VHOSTUSER (qemu vhost-user) support || Nova || Submitted w/ code || https://blueprints.launchpad.net/nova/+spec/vif-vhostuser || https://review.openstack.org/96138 ||<br />
* #1 is a broadly applicable IaaS requirement. <br />
* #2 Needed in the non-SR-IOV based deployments.<br />
* #3 TBD?<br />
* #4 TBD?<br />
* #5 Needed in the non-SR-IOV based deployments.<br />
* #6 TBD?<br />
* #7 Needed in the non-SR-IOV based deployments.<br />
* #8 TBD?<br />
* #9 Needed in the non-SR-IOV based deployments.<br />
|-<br />
|Solver Scheduler - complex constraints scheduler with NFV use cases || Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/solver-scheduler || https://review.openstack.org/#/c/96543/ ||<br />
* #1 is a broadly applicable IaaS requirement. <br />
* #2 Possibly needed for smarter scheduling decision making to help with performance. <br />
* #3 TBD?<br />
* #4 TBD?<br />
* #5 Possibly needed for smarter scheduling decision making to help with performance. <br />
* #6 Possibly needed for smarter scheduling decision making to help with performance.<br />
* #7 Possibly needed for smarter scheduling decision making to help with performance.<br />
* #8 Possibly needed for smarter scheduling decision making to help with performance.<br />
* #9 Possibly needed for smarter scheduling decision making to help with performance.<br />
|-<br />
| Diskless VM || Nova || Under discussion || https://blueprints.launchpad.net/nova/+spec/libvirt-empty-vm-boot-pxe || ||<br />
* #1 is a broadly applicable IaaS requirement. <br />
* #2 TBD?<br />
* #3 TBD?<br />
* #4 TBD?<br />
* #5 TBD?<br />
* #6 TBD?<br />
* #7 TBD?<br />
* #8 TBD?<br />
* #9 TBD?<br />
|-<br />
| Network QoS API || Neutron || Under discussion || https://blueprints.launchpad.net/neutron/+spec/quantum-qos-api || https://review.openstack.org/#/c/88599 ||<br />
* #1 is a broadly applicable IaaS requirement. <br />
* #2 Needed for performance reasons<br />
* #3 TBD?<br />
* #4 Needed to capture network QoS aspects of forwarding graph.<br />
* #5 Needed for performance reasons<br />
* #6 Needed for performance reasons<br />
* #7 Needed for performance reasons<br />
* #8 Needed for performance reasons<br />
* #9 Needed for performance reasons<br />
|-<br />
| Persist scheduler hints || Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/persist-scheduler-hints || https://review.openstack.org/#/c/88983/ ||<br />
* #1 is a broadly applicable IaaS requirement. <br />
* #2 Needed for performance reasons on migration.<br />
* #3 TBD?<br />
* #4 TBD?<br />
* #5 Needed for performance reasons on migration.<br />
* #6 Needed for performance reasons on migration.<br />
* #7 Needed for performance reasons on migration.<br />
* #8 Needed for performance reasons on migration.<br />
* #9 Needed for performance reasons on migration.<br />
|-<br />
| Port mirroring || Neutron || Under discussion || https://blueprints.launchpad.net/neutron/+spec/port-mirroring || ||<br />
* #1 is a broadly applicable IaaS requirement. <br />
* #2 Needed for some specialized use cases.<br />
* #3 TBD?<br />
* #4 Needed based on forwarding graph for specialized use cases.<br />
* #5 Needed for some specialized use cases.<br />
* #6 Needed for some specialized use cases.<br />
* #7 Needed for some specialized use cases.<br />
* #8 TBD?<br />
* #9 Needed for some specialized use cases.<br />
|-<br />
| Traffic Steering Abstraction || Neutron || Design review in progress || https://blueprints.launchpad.net/neutron/+spec/traffic-steering-abstraction || https://review.openstack.org/92477/ ||<br />
* #1 is a broadly applicable IaaS requirement. <br />
* #2 Similar to "Neutron Services Insertion, Chaining, and Steering". May need to chain multiple functions to deliver a service.<br />
* #3 TBD?<br />
* #4 Closely coupled requirement needed to deliver on a forwarding graph.<br />
* #5 Similar to "Neutron Services Insertion, Chaining, and Steering". May need to chain multiple functions to deliver a service.<br />
* #6 Similar to "Neutron Services Insertion, Chaining, and Steering". May need to chain multiple functions to deliver a service.<br />
* #7 Similar to "Neutron Services Insertion, Chaining, and Steering". May need to chain multiple functions to deliver a service.<br />
* #8 Similar to "Neutron Services Insertion, Chaining, and Steering". May need to chain multiple functions to deliver a service.<br />
* #9 Similar to "Neutron Services Insertion, Chaining, and Steering". May need to chain multiple functions to deliver a service.<br />
|-<br />
| Evacuate instance to scheduled host || Nova || Needs code review || https://blueprints.launchpad.net/nova/+spec/find-host-and-evacuate-instance || https://review.openstack.org/84429 ||<br />
|-<br />
| Extensible resource tracker (dependency of other work) || Nova || Design Approved || https://blueprints.launchpad.net/nova/+spec/extensible-resource-tracking || https://review.openstack.org/#/c/86050/ ||<br />
|-<br />
|}<br />
<br />
== Needed Development Not Yet Started ==</div>Russellbhttps://wiki.openstack.org/w/index.php?title=Sprints/BeavertonJunoSprint&diff=56361Sprints/BeavertonJunoSprint2014-06-19T19:28:07Z<p>Russellb: /* Hotel */</p>
<hr />
<div>The Nova and Ironic teams are sharing a mid-cycle meetup venue in Juno. Note that these are separate sprints, held in the same location so the teams can consult rapidly when required. Details:<br />
<br />
* Where: Intel Campus, Beaverton, OR - 2111 Northeast 25th Avenue, Hillsboro, OR, 97124<br />
* When: July 28 - 30 2014<br />
<br />
== Hotel ==<br />
<br />
We are able to use the Intel rate at Larkspur Landing Hillsboro. Details:<br />
<br />
Larkspur Landing Hillsboro<br />
3133 NE Shute Rd, Hillsboro<br />
OR 97124<br />
<br />
The admin at Intel who is helping us with our booking is Cindy Sirianni; she recommends the hotel above. If you make the reservation by phone (503/681-2121) and tell them it's for an Intel face-to-face, they'll give you the Intel corporate rate. Feel free to use Cindy Sirianni's name if there are any issues.<br />
<br />
When booking on 2014-06-19, the Intel rate received was $121/night, or a grand total of $399.30 for 3 nights.<br />
<br />
Other convenient hotels are:<br />
* TownePlace Suites by Marriott, 6550 NE Brighton St., Hillsboro, OR 97124. Ph: 503/268-6000<br />
* SpringHill Suites by Marriott, 7351 NE Butler St., Hillsboro, OR 97124. Ph: 503/547-0202<br />
<br />
== Nova Specifics ==<br />
<br />
Topic ideas here: <br />
https://etherpad.openstack.org/p/juno-nova-mid-cycle-meetup<br />
<br />
Please RSVP for the Nova meetup here: [https://www.eventbrite.com.au/e/openstack-nova-juno-mid-cycle-developer-meetup-tickets-11878128803]<br />
<br />
== Ironic Specifics ==<br />
<br />
Please RSVP for the Ironic meetup here: [https://www.eventbrite.com/e/openstack-ironic-juno-mid-cycle-developer-meetup-tickets-11886066545]<br />
<br />
Goals and Schedule and other misc info should be tracked [https://etherpad.openstack.org/p/juno-ironic-sprint on this etherpad].</div>Russellbhttps://wiki.openstack.org/w/index.php?title=TelcoWorkingGroup&diff=54893TelcoWorkingGroup2014-06-04T17:02:02Z<p>Russellb: /* Who we are */</p>
<hr />
<div>= Weekly NFV sub-team IRC meeting =<br />
'''MEETING TIME: (Proposed, subject to change) Wednesdays, [http://www.timeanddate.com/worldclock/fixedtime.html?hour=14&min=0&sec=0&p1=0 1400 UTC], #openstack-meeting-alt, starting June 4'''<br />
<br />
= Who we are =<br />
<br />
''' Add your name here if you're joining the meetings - IRC nicks are pretty anonymous unless you give us a clue! Please keep the list in alphabetical order by IRC nick. '''<br />
{| class="wikitable"<br />
|-<br />
! Nick !! Name !! Affiliation !! Interests<br />
|-<br />
| adrian-hoban || Adrian Hoban || Intel OpenStack team || NFV & SDN extensions across OpenStack projects<br />
|-<br />
| cgoncalves || Carlos Goncalves || Instituto de Telecomunicacoes || Service Function Chaining, Traffic Steering<br />
|-<br />
| danpb || Daniel Berrange || Red Hat || Libvirt, KVM & Nova performance & enablement for NFV<br />
|-<br />
| ijw || Ian Wells || Cisco's OpenStack team || Vendor-neutral NFV infrastructure, Cisco NFV appliances<br />
|-<br />
| russellb || Russell Bryant || Project: OpenStack TC, Nova. Corporate: Red Hat || Nova. Ensuring requirements and designs are consumable by OpenStack developers. Reviewing designs and implementations.<br />
|}<br />
<br />
= Mission statement =<br />
<br />
<blockquote>The sub-team aims to define the use cases and identify and prioritise the requirements which are needed to run Network Function Virtualization (NFV) workloads on top of OpenStack. This work includes identifying functional gaps, creating blueprints, submitting and reviewing patches to the relevant OpenStack projects and tracking their completion in support of NFV.</blockquote><br />
<br />
<blockquote>The requirements expressed by this group should be written so that each of them has a test case which can be verified using an open-source implementation. This ensures that tests can be run without any special hardware or proprietary software, which is key for continuous-integration tests in the OpenStack gate. If a special setup is required which cannot be reproduced on the standard OpenStack gate, the use case's proponents will have to provide a third-party CI setup, accessible by OpenStack Infra, which will be used to validate developments against.</blockquote><br />
<br />
[[IRC|OpenStack IRC details]]<br />
<br />
Chair: Russell Bryant (russellb)<br />
<br />
== Agenda for next meeting ==<br />
<br />
Wednesday, June 4 at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=14&min=0&sec=0&p1=0 1400 UTC] in #openstack-meeting-alt.<br />
<br />
Agenda: [https://etherpad.openstack.org/p/nfv-meeting-agenda NFV meeting agenda etherpad]<br />
<br />
== Previous meetings ==<br />
<br />
* [http://eavesdrop.openstack.org/meetings/nfv/ Meeting logs]<br />
* [https://etherpad.openstack.org/p/juno-nfv-bof Juno Design Summit NFV BoF]<br />
<br />
=Use Cases=<br />
<br />
TBD<br />
<br />
= Development Efforts =<br />
<br />
== Active Bugs ==<br />
<br />
Add the "nfv" tag to bugs to have them appear in these queries:<br />
<br />
* Nova: https://bugs.launchpad.net/nova/+bugs?field.tag=nfv<br />
* Neutron: https://bugs.launchpad.net/neutron/+bugs?field.tag=nfv<br />
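The two queries above follow a single URL pattern, so they can be generated for any project or tag. As a quick sketch (the `nfv_bug_url` helper is ours for illustration, not part of Launchpad or OpenStack tooling):

```python
from urllib.parse import urlencode

def nfv_bug_url(project: str, tag: str = "nfv") -> str:
    # Launchpad's web UI filters a project's bug list with +bugs?field.tag=<tag>,
    # which is exactly the form of the hand-written queries above.
    return f"https://bugs.launchpad.net/{project}/+bugs?{urlencode({'field.tag': tag})}"

for project in ("nova", "neutron"):
    print(nfv_bug_url(project))
```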
<br />
== Active Blueprints ==<br />
<br />
PRIORITY (repeatedly mentioned at the BoF as blockers):<br />
<br />
{| class="wikitable"<br />
|-<br />
! Description !! Project(s) !! Status !! Blueprint(s) !! Design<br />
|-<br />
| Support two interfaces from one VM attached to the same network || Nova || First BP submitted || https://blueprints.launchpad.net/nova/+spec/2-if-1-net || https://review.openstack.org/97716<br />
|-<br />
| VLAN trunking networks for NFV || Neutron || First BP submitted || https://blueprints.launchpad.net/neutron/+spec/nfv-vlan-trunks || https://review.openstack.org/97714<br />
|-<br />
| Permit unaddressed interfaces for NFV use cases || Neutron || First BP submitted || https://blueprints.launchpad.net/neutron/+spec/nfv-unaddressed-interfaces || https://review.openstack.org/97715<br />
|-<br />
|}<br />
<br />
The rest:<br />
<br />
{| class="wikitable"<br />
|-<br />
! Description !! Project(s) !! Status !! Blueprint(s) !! Design(s)<br />
|-<br />
| SR-IOV Networking Support || Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/pci-passthrough-sriov || https://review.openstack.org/#/c/86606/<br />
|- <br />
| colspan="3" | ''Support for NUMA and VCPU topology configuration'' || ''https://blueprints.launchpad.net/nova/+spec/nova-virt-numa-and-vcpu-topology'' ||<br />
|-<br />
|<br />
: Virt driver guest vCPU topology configuration <br />
|| Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/virt-driver-vcpu-topology || https://review.openstack.org/93510<br />
|- <br />
|<br />
: Virt driver guest NUMA node placement & topology<br />
|| Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/virt-driver-numa-placement || https://review.openstack.org/93636<br />
|-<br />
|<br />
: Virt driver large page allocation for guest RAM [[#dupe|*]]<br />
|| Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/virt-driver-large-pages || https://review.openstack.org/93653<br />
|-<br />
|<br />
: Virt driver pinning guest vCPUs to host pCPUs <br />
|| Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/virt-driver-cpu-pinning || https://review.openstack.org/93652<br />
|-<br />
|<br />
: I/O (PCIe) Based NUMA Scheduling <br />
|| Nova || New || https://blueprints.launchpad.net/nova/+spec/input-output-based-numa-scheduling || TBD<br />
|-<br />
| Soft affinity support for server groups || Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/soft-affinity-for-server-group || https://review.openstack.org/91328<br />
|-<br />
| Open vSwitch-based Security Groups: Open vSwitch Implementation of FirewallDriver || Neutron || Design review in progress || https://blueprints.launchpad.net/neutron/+spec/ovs-firewall-driver || https://review.openstack.org/89712<br />
|-<br />
| Framework for Advanced Services in Virtual Machines || Neutron || || https://blueprints.launchpad.net/neutron/+spec/adv-services-in-vms ||<br />
|-<br />
| Neutron Services Insertion, Chaining, and Steering || Neutron || Approved || https://blueprints.launchpad.net/neutron/+spec/neutron-services-insertion-chaining-steering || https://review.openstack.org/93524<br />
|-<br />
| Schedule VMs using per-flavor CPU overcommit || Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/flavor-cpu-overcommit || https://review.openstack.org/88286<br />
|-<br />
| OVF Meta-Data Import via Glance || Glance || Submitted || https://blueprints.launchpad.net/glance/+spec/epa-ovf-meta-data-import || TBD<br />
|-<br />
| colspan="5" | ''Support for high-performance Intel(R) Data Plane Development Kit (DPDK) based vSwitches''<br />
|-<br />
|<br />
: Open vSwitch to use patch ports in place of veth pairs for VLAN networks <br />
|| Neutron || Submitted || https://blueprints.launchpad.net/neutron/+spec/openvswitch-patch-port-use || TBD<br />
|-<br />
|<br />
: Libvirt hugepage backed memory support <br />
|| Nova || Submitted || https://blueprints.launchpad.net/nova/+spec/libvirt-hugepage || TBD<br />
|-<br />
|<br />
: Support userspace vhost in ovs vif bindings <br />
|| Nova || Submitted || https://blueprints.launchpad.net/nova/+spec/libvirt-ovs-use-usvhost || TBD<br />
|-<br />
| NIC state aware scheduling || Nova || Rejected || https://blueprints.launchpad.net/nova/+spec/nic-state-aware-scheduling || https://review.openstack.org/87978<br />
|-<br />
| Add PCI and PCIe device capability aware scheduling || Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/pci-device-capability-aware-scheduling || https://review.openstack.org/92843<br />
|-<br />
| [http://snabb.co/nfv.html Snabb NFV] mechanism driver || Neutron || Design review in progress || https://blueprints.launchpad.net/neutron/+spec/snabb-nfv-mech-driver || https://review.openstack.org/95711<br />
|-<br />
| VIF_SNABB (qemu vhost-user) support || Nova || Submitted w/ code || https://blueprints.launchpad.net/nova/+spec/vif-snabb || https://review.openstack.org/96138<br />
|-<br />
|Solver Scheduler - complex constraints scheduler with NFV use cases || Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/solver-scheduler || https://review.openstack.org/#/c/96543/ <br />
|-<br />
| Diskless VM || Nova || Under discussion || https://blueprints.launchpad.net/nova/+spec/libvirt-empty-vm-boot-pxe || <br />
|-<br />
| Network QoS API || Neutron || Under discussion || https://blueprints.launchpad.net/neutron/+spec/quantum-qos-api || https://review.openstack.org/#/c/88599<br />
|-<br />
| Persist scheduler hints || Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/persist-scheduler-hints || https://review.openstack.org/#/c/88983/<br />
|-<br />
| Security groups using OpenFlow || Neutron || Design review in progress || https://blueprints.launchpad.net/neutron/+spec/ovs-firewall-driver || https://review.openstack.org/#/c/89712/<br />
|-<br />
| Port mirroring || Neutron || Under discussion || https://blueprints.launchpad.net/neutron/+spec/port-mirroring ||<br />
|-<br />
| Traffic Steering Abstraction || Neutron || Design review in progress || https://blueprints.launchpad.net/neutron/+spec/traffic-steering-abstraction || https://review.openstack.org/92477/<br />
|-<br />
|}<br />
<br />
== Needed Development Not Yet Started ==</div>Russellbhttps://wiki.openstack.org/w/index.php?title=TelcoWorkingGroup&diff=54865TelcoWorkingGroup2014-06-04T15:13:26Z<p>Russellb: /* Weekly NFV sub-team IRC meeting */</p>
<hr />
<div><br />
= Weekly NFV sub-team IRC meeting =<br />
'''MEETING TIME: (Proposed, subject to change) Wednesdays, [http://www.timeanddate.com/worldclock/fixedtime.html?hour=14&min=0&sec=0&p1=0 1400 UTC], #openstack-meeting-alt, starting June 4'''<br />
<br />
Mission statement:<br />
<br />
<blockquote>The sub-team aims to define the use cases and identify and prioritise the requirements needed to run Network Function Virtualization (NFV) workloads on top of OpenStack. This work includes identifying functional gaps, creating blueprints, submitting and reviewing patches to the relevant OpenStack projects, and tracking their completion in support of NFV.</blockquote><br />
<br />
<blockquote>Each requirement expressed by this group should have a test case that can be verified using an open source implementation. This ensures that tests can be run without special hardware or proprietary software, which is key for continuous integration testing in the OpenStack gate. If a special setup is required that cannot be reproduced on the standard OpenStack gate, the use case's proponent will have to provide a third-party CI setup, accessible by OpenStack infra, which will be used to validate developments.</blockquote><br />
<br />
[[IRC|OpenStack IRC details]]<br />
<br />
Chair: Russell Bryant (russellb)<br />
<br />
== Agenda for next meeting ==<br />
<br />
Wednesday, June 4 at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=14&min=0&sec=0&p1=0 1400 UTC] in #openstack-meeting-alt.<br />
<br />
Agenda: [https://etherpad.openstack.org/p/nfv-meeting-agenda NFV meeting agenda etherpad]<br />
<br />
== Previous meetings ==<br />
<br />
* [http://eavesdrop.openstack.org/meetings/nfv/ Meeting logs]<br />
* [https://etherpad.openstack.org/p/juno-nfv-bof Juno Design Summit NFV BoF]<br />
<br />
=Use Cases=<br />
<br />
TBD<br />
<br />
= Development Efforts =<br />
<br />
== Active Bugs ==<br />
<br />
Add the "nfv" tag to bugs to have them appear in these queries:<br />
<br />
* Nova: https://bugs.launchpad.net/nova/+bugs?field.tag=nfv<br />
* Neutron: https://bugs.launchpad.net/neutron/+bugs?field.tag=nfv<br />
<br />
== Active Blueprints ==<br />
<br />
PRIORITY (repeatedly mentioned at the BoF as blockers):<br />
<br />
{| class="wikitable"<br />
|-<br />
! Description !! Project(s) !! Status !! Blueprint(s) !! Design<br />
|-<br />
| Support two interfaces from one VM attached to the same network || Nova || First BP submitted || https://blueprints.launchpad.net/nova/+spec/2-if-1-net || https://review.openstack.org/97716<br />
|-<br />
| VLAN trunking networks for NFV || Neutron || First BP submitted || https://blueprints.launchpad.net/neutron/+spec/nfv-vlan-trunks || https://review.openstack.org/97714<br />
|-<br />
| Permit unaddressed interfaces for NFV use cases || Neutron || First BP submitted || https://blueprints.launchpad.net/neutron/+spec/nfv-unaddressed-interfaces || https://review.openstack.org/97715<br />
|-<br />
|}<br />
<br />
The rest:<br />
<br />
{| class="wikitable"<br />
|-<br />
! Description !! Project(s) !! Status !! Blueprint(s) !! Design(s)<br />
|-<br />
| SR-IOV Networking Support || Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/pci-passthrough-sriov || https://review.openstack.org/#/c/86606/<br />
|- <br />
| colspan="3" | ''Support for NUMA and VCPU topology configuration'' || ''https://blueprints.launchpad.net/nova/+spec/nova-virt-numa-and-vcpu-topology'' ||<br />
|-<br />
|<br />
: Virt driver guest vCPU topology configuration <br />
|| Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/virt-driver-vcpu-topology || https://review.openstack.org/93510<br />
|- <br />
|<br />
: Virt driver guest NUMA node placement & topology<br />
|| Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/virt-driver-numa-placement || https://review.openstack.org/93636<br />
|-<br />
|<br />
: Virt driver large page allocation for guest RAM [[#dupe|*]]<br />
|| Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/virt-driver-large-pages || https://review.openstack.org/93653<br />
|-<br />
|<br />
: Virt driver pinning guest vCPUs to host pCPUs <br />
|| Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/virt-driver-cpu-pinning || https://review.openstack.org/93652<br />
|-<br />
|<br />
: I/O (PCIe) Based NUMA Scheduling <br />
|| Nova || New || TBD || TBD<br />
|-<br />
| Soft affinity support for server groups || Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/soft-affinity-for-server-group || https://review.openstack.org/91328<br />
|-<br />
| Open vSwitch-based Security Groups: Open vSwitch Implementation of FirewallDriver || Neutron || Design review in progress || https://blueprints.launchpad.net/neutron/+spec/ovs-firewall-driver || https://review.openstack.org/89712<br />
|-<br />
| Framework for Advanced Services in Virtual Machines || Neutron || || https://blueprints.launchpad.net/neutron/+spec/adv-services-in-vms ||<br />
|-<br />
| Neutron Services Insertion, Chaining, and Steering || Neutron || Approved || https://blueprints.launchpad.net/neutron/+spec/neutron-services-insertion-chaining-steering || https://review.openstack.org/93524<br />
|-<br />
| Schedule VMs using per-flavor CPU overcommit || Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/flavor-cpu-overcommit || https://review.openstack.org/88286<br />
|-<br />
| OVF Meta-Data Import via Glance || Glance || Submitted || https://blueprints.launchpad.net/glance/+spec/epa-ovf-meta-data-import || TBD<br />
|-<br />
| colspan="5" | ''Support for high-performance Intel(R) Data Plane Development Kit (DPDK) based vSwitches''<br />
|-<br />
|<br />
: Open vSwitch to use patch ports in place of veth pairs for VLAN networks <br />
|| Neutron || Submitted || https://blueprints.launchpad.net/neutron/+spec/openvswitch-patch-port-use || TBD<br />
|-<br />
|<br />
: Libvirt hugepage backed memory support <br />
|| Nova || Submitted || https://blueprints.launchpad.net/nova/+spec/libvirt-hugepage || TBD<br />
|-<br />
|<br />
: Support userspace vhost in ovs vif bindings <br />
|| Nova || Submitted || https://blueprints.launchpad.net/nova/+spec/libvirt-ovs-use-usvhost || TBD<br />
|-<br />
| NIC state aware scheduling || Nova || Rejected || https://blueprints.launchpad.net/nova/+spec/nic-state-aware-scheduling || https://review.openstack.org/87978<br />
|-<br />
| Add PCI and PCIe device capability aware scheduling || Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/pci-device-capability-aware-scheduling || https://review.openstack.org/92843<br />
|-<br />
| [http://snabb.co/nfv.html Snabb NFV] mechanism driver || Neutron || Design review in progress || https://blueprints.launchpad.net/neutron/+spec/snabb-nfv-mech-driver || https://review.openstack.org/95711<br />
|-<br />
| VIF_SNABB (qemu vhost-user) support || Nova || Submitted w/ code || https://blueprints.launchpad.net/nova/+spec/vif-snabb || https://review.openstack.org/96138<br />
|-<br />
|Solver Scheduler - complex constraints scheduler with NFV use cases || Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/solver-scheduler || https://review.openstack.org/#/c/96543/ <br />
|-<br />
| Diskless VM || Nova || Under discussion || https://blueprints.launchpad.net/nova/+spec/libvirt-empty-vm-boot-pxe || <br />
|-<br />
| Network QoS API || Neutron || Under discussion || https://blueprints.launchpad.net/neutron/+spec/quantum-qos-api || https://review.openstack.org/#/c/88599<br />
|-<br />
| Persist scheduler hints || Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/persist-scheduler-hints || https://review.openstack.org/#/c/88983/<br />
|-<br />
| Security groups using OpenFlow || Neutron || Design review in progress || https://blueprints.launchpad.net/neutron/+spec/ovs-firewall-driver || https://review.openstack.org/#/c/89712/<br />
|-<br />
| Port mirroring || Neutron || Under discussion || https://blueprints.launchpad.net/neutron/+spec/port-mirroring ||<br />
|-<br />
| Traffic Steering Abstraction || Neutron || Design review in progress || https://blueprints.launchpad.net/neutron/+spec/traffic-steering-abstraction || https://review.openstack.org/92477/<br />
|-<br />
|}<br />
<br />
== Needed Development Not Yet Started ==</div>Russellbhttps://wiki.openstack.org/w/index.php?title=TelcoWorkingGroup&diff=54861TelcoWorkingGroup2014-06-04T14:34:44Z<p>Russellb: /* Agenda for next meeting */</p>
<hr />
<div><br />
= Weekly NFV sub-team IRC meeting =<br />
'''MEETING TIME: (Proposed, subject to change) Wednesdays, [http://www.timeanddate.com/worldclock/fixedtime.html?hour=14&min=0&sec=0&p1=0 1400 UTC], #openstack-meeting-alt, starting June 4'''<br />
<br />
This meeting is a weekly gathering of developers and operators interested in development activity in support of NFV use cases. We gather requirements to support these use cases and track active development efforts across OpenStack that relate to this area.<br />
<br />
[[IRC|OpenStack IRC details]]<br />
<br />
Chair: Russell Bryant (russellb)<br />
<br />
== Agenda for next meeting ==<br />
<br />
Wednesday, June 4 at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=14&min=0&sec=0&p1=0 1400 UTC] in #openstack-meeting-alt.<br />
<br />
Agenda: [https://etherpad.openstack.org/p/nfv-meeting-agenda NFV meeting agenda etherpad]<br />
<br />
== Previous meetings ==<br />
<br />
* [http://eavesdrop.openstack.org/meetings/nfv/ Meeting logs]<br />
* [https://etherpad.openstack.org/p/juno-nfv-bof Juno Design Summit NFV BoF]<br />
<br />
=Use Cases=<br />
<br />
TBD<br />
<br />
= Development Efforts =<br />
<br />
== Active Bugs ==<br />
<br />
Add the "nfv" tag to bugs to have them appear in these queries:<br />
<br />
* Nova: https://bugs.launchpad.net/nova/+bugs?field.tag=nfv<br />
* Neutron: https://bugs.launchpad.net/neutron/+bugs?field.tag=nfv<br />
<br />
== Active Blueprints ==<br />
<br />
PRIORITY (repeatedly mentioned at the BoF as blockers):<br />
<br />
{| class="wikitable"<br />
|-<br />
! Description !! Project(s) !! Status !! Blueprint(s) !! Design<br />
|-<br />
| Support two interfaces from one VM attached to the same network || Nova || First BP submitted || https://blueprints.launchpad.net/nova/+spec/2-if-1-net || https://review.openstack.org/97716<br />
|-<br />
| VLAN trunking networks for NFV || Neutron || First BP submitted || https://blueprints.launchpad.net/neutron/+spec/nfv-vlan-trunks || https://review.openstack.org/97714<br />
|-<br />
| Permit unaddressed interfaces for NFV use cases || Neutron || First BP submitted || https://blueprints.launchpad.net/neutron/+spec/nfv-unaddressed-interfaces || https://review.openstack.org/97715<br />
|-<br />
|}<br />
<br />
The rest:<br />
<br />
{| class="wikitable"<br />
|-<br />
! Description !! Project(s) !! Status !! Blueprint(s) !! Design(s)<br />
|-<br />
| SR-IOV Networking Support || Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/pci-passthrough-sriov || https://review.openstack.org/#/c/86606/<br />
|- <br />
| colspan="3" | ''Support for NUMA and VCPU topology configuration'' || ''https://blueprints.launchpad.net/nova/+spec/nova-virt-numa-and-vcpu-topology'' ||<br />
|-<br />
|<br />
: Virt driver guest vCPU topology configuration <br />
|| Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/virt-driver-vcpu-topology || https://review.openstack.org/93510<br />
|- <br />
|<br />
: Virt driver guest NUMA node placement & topology<br />
|| Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/virt-driver-numa-placement || https://review.openstack.org/93636<br />
|-<br />
|<br />
: Virt driver large page allocation for guest RAM [[#dupe|*]]<br />
|| Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/virt-driver-large-pages || https://review.openstack.org/93653<br />
|-<br />
|<br />
: Virt driver pinning guest vCPUs to host pCPUs <br />
|| Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/virt-driver-cpu-pinning || https://review.openstack.org/93652<br />
|-<br />
|<br />
: I/O (PCIe) Based NUMA Scheduling <br />
|| Nova || New || TBD || TBD<br />
|-<br />
| Soft affinity support for server groups || Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/soft-affinity-for-server-group || https://review.openstack.org/91328<br />
|-<br />
| Open vSwitch-based Security Groups: Open vSwitch Implementation of FirewallDriver || Neutron || Design review in progress || https://blueprints.launchpad.net/neutron/+spec/ovs-firewall-driver || https://review.openstack.org/89712<br />
|-<br />
| Framework for Advanced Services in Virtual Machines || Neutron || || https://blueprints.launchpad.net/neutron/+spec/adv-services-in-vms ||<br />
|-<br />
| Neutron Services Insertion, Chaining, and Steering || Neutron || Approved || https://blueprints.launchpad.net/neutron/+spec/neutron-services-insertion-chaining-steering || https://review.openstack.org/93524<br />
|-<br />
| Schedule VMs using per-flavor CPU overcommit || Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/flavor-cpu-overcommit || https://review.openstack.org/88286<br />
|-<br />
| OVF Meta-Data Import via Glance || Glance || Submitted || https://blueprints.launchpad.net/glance/+spec/epa-ovf-meta-data-import || TBD<br />
|-<br />
| colspan="5" | ''Support for high-performance Intel(R) Data Plane Development Kit (DPDK) based vSwitches''<br />
|-<br />
|<br />
: Open vSwitch to use patch ports in place of veth pairs for VLAN networks <br />
|| Neutron || Submitted || https://blueprints.launchpad.net/neutron/+spec/openvswitch-patch-port-use || TBD<br />
|-<br />
|<br />
: Libvirt hugepage backed memory support <br />
|| Nova || Submitted || https://blueprints.launchpad.net/nova/+spec/libvirt-hugepage || TBD<br />
|-<br />
|<br />
: Support userspace vhost in ovs vif bindings <br />
|| Nova || Submitted || https://blueprints.launchpad.net/nova/+spec/libvirt-ovs-use-usvhost || TBD<br />
|-<br />
| NIC state aware scheduling || Nova || Rejected || https://blueprints.launchpad.net/nova/+spec/nic-state-aware-scheduling || https://review.openstack.org/87978<br />
|-<br />
| Add PCI and PCIe device capability aware scheduling || Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/pci-device-capability-aware-scheduling || https://review.openstack.org/92843<br />
|-<br />
| [http://snabb.co/nfv.html Snabb NFV] mechanism driver || Neutron || Design review in progress || https://blueprints.launchpad.net/neutron/+spec/snabb-nfv-mech-driver || https://review.openstack.org/95711<br />
|-<br />
| VIF_SNABB (qemu vhost-user) support || Nova || Submitted w/ code || https://blueprints.launchpad.net/nova/+spec/vif-snabb || https://review.openstack.org/96138<br />
|-<br />
|Solver Scheduler - complex constraints scheduler with NFV use cases || Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/solver-scheduler || https://review.openstack.org/#/c/96543/ <br />
|-<br />
| Diskless VM || Nova || Under discussion || https://blueprints.launchpad.net/nova/+spec/libvirt-empty-vm-boot-pxe || <br />
|-<br />
| Network QoS API || Neutron || Under discussion || https://blueprints.launchpad.net/neutron/+spec/quantum-qos-api || https://review.openstack.org/#/c/88599<br />
|-<br />
| Persist scheduler hints || Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/persist-scheduler-hints || https://review.openstack.org/#/c/88983/<br />
|-<br />
| Security groups using OpenFlow || Neutron || Design review in progress || https://blueprints.launchpad.net/neutron/+spec/ovs-firewall-driver || https://review.openstack.org/#/c/89712/<br />
|-<br />
| Port mirroring || Neutron || Under discussion || https://blueprints.launchpad.net/neutron/+spec/port-mirroring ||<br />
|-<br />
| Traffic Steering Abstraction || Neutron || Design review in progress || https://blueprints.launchpad.net/neutron/+spec/traffic-steering-abstraction || https://review.openstack.org/92477/<br />
|-<br />
|}<br />
<br />
== Needed Development Not Yet Started ==</div>Russellbhttps://wiki.openstack.org/w/index.php?title=TelcoWorkingGroup&diff=54853TelcoWorkingGroup2014-06-04T13:42:41Z<p>Russellb: /* Weekly NFV sub-team IRC meeting */</p>
<hr />
<div><br />
= Weekly NFV sub-team IRC meeting =<br />
'''MEETING TIME: (Proposed, subject to change) Wednesdays, [http://www.timeanddate.com/worldclock/fixedtime.html?hour=14&min=0&sec=0&p1=0 1400 UTC], #openstack-meeting-alt, starting June 4'''<br />
<br />
This meeting is a weekly gathering of developers and operators interested in development activity in support of NFV use cases. We gather requirements to support these use cases and track active development efforts across OpenStack that relate to this area.<br />
<br />
[[IRC|OpenStack IRC details]]<br />
<br />
Chair: Russell Bryant (russellb)<br />
<br />
== Agenda for next meeting ==<br />
<br />
Wednesday, June 4 at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=14&min=0&sec=0&p1=0 1400 UTC] in #openstack-meeting-alt.<br />
<br />
Agenda:<br />
* First meeting!<br />
* Meet and greet<br />
* Review Mission<br />
** https://etherpad.openstack.org/p/nvf-subteam-mission-statement<br />
** Positioning relative to complementary projects and subteams, e.g. Nova, Neutron, Heat, ServiceVM, IPv6, etc.<br />
* Review our current blueprint list and fill in anything we're not tracking yet<br />
* Review use case prioritization<br />
* Discuss tracking approaches:<br />
** Use cases<br />
** Blueprints<br />
** Bugs<br />
<br />
== Previous meetings ==<br />
<br />
* [http://eavesdrop.openstack.org/meetings/nfv/ Meeting logs]<br />
* [https://etherpad.openstack.org/p/juno-nfv-bof Juno Design Summit NFV BoF]<br />
<br />
=Use Cases=<br />
<br />
TBD<br />
<br />
= Development Efforts =<br />
<br />
== Active Bugs ==<br />
<br />
Add the "nfv" tag to bugs to have them appear in these queries:<br />
<br />
* Nova: https://bugs.launchpad.net/nova/+bugs?field.tag=nfv<br />
* Neutron: https://bugs.launchpad.net/neutron/+bugs?field.tag=nfv<br />
<br />
== Active Blueprints ==<br />
<br />
PRIORITY (repeatedly mentioned at the BoF as blockers):<br />
<br />
{| class="wikitable"<br />
|-<br />
! Description !! Project(s) !! Status !! Blueprint(s) !! Design<br />
|-<br />
| Support two interfaces from one VM attached to the same network || Nova || First BP submitted || https://blueprints.launchpad.net/nova/+spec/2-if-1-net || https://review.openstack.org/97716<br />
|-<br />
| VLAN trunking networks for NFV || Neutron || First BP submitted || https://blueprints.launchpad.net/neutron/+spec/nfv-vlan-trunks || https://review.openstack.org/97714<br />
|-<br />
| Permit unaddressed interfaces for NFV use cases || Neutron || First BP submitted || https://blueprints.launchpad.net/neutron/+spec/nfv-unaddressed-interfaces || https://review.openstack.org/97715<br />
|-<br />
|}<br />
<br />
The rest:<br />
<br />
{| class="wikitable"<br />
|-<br />
! Description !! Project(s) !! Status !! Blueprint(s) !! Design(s)<br />
|-<br />
| SR-IOV Networking Support || Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/pci-passthrough-sriov || https://review.openstack.org/#/c/86606/<br />
|- <br />
| colspan="3" | ''Support for NUMA and VCPU topology configuration'' || ''https://blueprints.launchpad.net/nova/+spec/nova-virt-numa-and-vcpu-topology'' ||<br />
|-<br />
|<br />
: Virt driver guest vCPU topology configuration <br />
|| Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/virt-driver-vcpu-topology || https://review.openstack.org/93510<br />
|- <br />
|<br />
: Virt driver guest NUMA node placement & topology<br />
|| Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/virt-driver-numa-placement || https://review.openstack.org/93636<br />
|-<br />
|<br />
: Virt driver large page allocation for guest RAM [[#dupe|*]]<br />
|| Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/virt-driver-large-pages || https://review.openstack.org/93653<br />
|-<br />
|<br />
: Virt driver pinning guest vCPUs to host pCPUs <br />
|| Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/virt-driver-cpu-pinning || https://review.openstack.org/93652<br />
|-<br />
| Soft affinity support for server groups || Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/soft-affinity-for-server-group || https://review.openstack.org/91328<br />
|-<br />
| Open vSwitch-based Security Groups: Open vSwitch Implementation of FirewallDriver || Neutron || Design review in progress || https://blueprints.launchpad.net/neutron/+spec/ovs-firewall-driver || https://review.openstack.org/89712<br />
|-<br />
| Framework for Advanced Services in Virtual Machines || Neutron || || https://blueprints.launchpad.net/neutron/+spec/adv-services-in-vms ||<br />
|-<br />
| Neutron Services Insertion, Chaining, and Steering || Neutron || Approved || https://blueprints.launchpad.net/neutron/+spec/neutron-services-insertion-chaining-steering || https://review.openstack.org/93524<br />
|-<br />
| Schedule VMs using per-flavor CPU overcommit || Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/flavor-cpu-overcommit || https://review.openstack.org/88286<br />
|-<br />
| OVF Meta-Data Import via Glance || Glance || Submitted || https://blueprints.launchpad.net/glance/+spec/epa-ovf-meta-data-import || TBD<br />
|-<br />
| colspan="5" | ''Support for high-performance Intel(R) Data Plane Development Kit (DPDK) based vSwitches''<br />
|-<br />
|<br />
: Open vSwitch to use patch ports in place of veth pairs for VLAN networks <br />
|| Neutron || Submitted || https://blueprints.launchpad.net/neutron/+spec/openvswitch-patch-port-use || TBD<br />
|-<br />
|<br />
: Libvirt hugepage backed memory support <br />
|| Nova || Submitted || https://blueprints.launchpad.net/nova/+spec/libvirt-hugepage || TBD<br />
|-<br />
|<br />
: Support userspace vhost in ovs vif bindings <br />
|| Nova || Submitted || https://blueprints.launchpad.net/nova/+spec/libvirt-ovs-use-usvhost || TBD<br />
|-<br />
| NIC state aware scheduling || Nova || Rejected || https://blueprints.launchpad.net/nova/+spec/nic-state-aware-scheduling || https://review.openstack.org/87978<br />
|-<br />
| Add PCI and PCIe device capability aware scheduling || Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/pci-device-capability-aware-scheduling || https://review.openstack.org/92843<br />
|-<br />
| [http://snabb.co/nfv.html Snabb NFV] mechanism driver || Neutron || Design review in progress || https://blueprints.launchpad.net/neutron/+spec/snabb-nfv-mech-driver || https://review.openstack.org/95711<br />
|-<br />
| VIF_SNABB (qemu vhost-user) support || Nova || Submitted w/ code || https://blueprints.launchpad.net/nova/+spec/vif-snabb || https://review.openstack.org/96138<br />
|-<br />
|Solver Scheduler - complex constraints scheduler with NFV use cases || Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/solver-scheduler || https://review.openstack.org/#/c/96543/ <br />
|-<br />
| Diskless VM || Nova || Under discussion || https://blueprints.launchpad.net/nova/+spec/libvirt-empty-vm-boot-pxe || <br />
|-<br />
| Network QoS API || Neutron || Under discussion || https://blueprints.launchpad.net/neutron/+spec/quantum-qos-api || https://review.openstack.org/#/c/88599<br />
|-<br />
| Persist scheduler hints || Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/persist-scheduler-hints || https://review.openstack.org/#/c/88983/<br />
|-<br />
| Security groups using OpenFlow || Neutron || Design review in progress || https://blueprints.launchpad.net/neutron/+spec/ovs-firewall-driver || https://review.openstack.org/#/c/89712/<br />
|-<br />
| Port mirroring || Neutron || Under discussion || https://blueprints.launchpad.net/neutron/+spec/port-mirroring ||<br />
|-<br />
|}<br />
<br />
== Needed Development Not Yet Started ==</div>Russellbhttps://wiki.openstack.org/w/index.php?title=TelcoWorkingGroup&diff=54699TelcoWorkingGroup2014-06-03T14:59:55Z<p>Russellb: /* Weekly NFV sub-team IRC meeting */</p>
<hr />
<div><br />
= Weekly NFV sub-team IRC meeting =<br />
'''MEETING TIME: (Proposed, subject to change) Wednesdays, [http://www.timeanddate.com/worldclock/fixedtime.html?hour=14&min=0&sec=0&p1=0 1400 UTC], #openstack-meeting, starting June 4'''<br />
<br />
This meeting is a weekly gathering of developers and operators interested in development activity in support of NFV use cases. We gather requirements to support these use cases and track active development efforts across OpenStack that relate to this area.<br />
<br />
[[IRC|OpenStack IRC details]]<br />
<br />
Chair: Russell Bryant (russellb)<br />
<br />
== Agenda for next meeting ==<br />
<br />
Wednesday, June 4 at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=14&min=0&sec=0&p1=0 1400 UTC] in #openstack-meeting.<br />
<br />
Agenda:<br />
* First meeting!<br />
* Meet and greet<br />
* Review Mission<br />
* Review our current blueprint list and fill in anything we're not tracking yet<br />
* Review use case prioritization<br />
* Discuss tracking approaches:<br />
** Use cases<br />
** Blueprints<br />
** Bugs<br />
<br />
== Previous meetings ==<br />
<br />
* [http://eavesdrop.openstack.org/meetings/nfv/ Meeting logs]<br />
* [https://etherpad.openstack.org/p/juno-nfv-bof Juno Design Summit NFV BoF]<br />
<br />
=Use Cases=<br />
<br />
TBD<br />
<br />
= Development Efforts =<br />
<br />
== Active Bugs ==<br />
<br />
Add the "nfv" tag to bugs to have them appear in these queries:<br />
<br />
* Nova: https://bugs.launchpad.net/nova/+bugs?field.tag=nfv<br />
* Neutron: https://bugs.launchpad.net/neutron/+bugs?field.tag=nfv<br />
<br />
== Active Blueprints ==<br />
<br />
{| class="wikitable"<br />
|-<br />
! Description !! Project(s) !! Status !! Blueprint(s) !! Design(s)<br />
|-<br />
| SR-IOV Networking Support || Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/pci-passthrough-sriov || https://review.openstack.org/#/c/86606/<br />
|- <br />
| colspan="3" | ''Support for NUMA and VCPU topology configuration'' || ''https://blueprints.launchpad.net/nova/+spec/nova-virt-numa-and-vcpu-topology'' ||<br />
|-<br />
|<br />
: Virt driver guest vCPU topology configuration <br />
|| Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/virt-driver-vcpu-topology || https://review.openstack.org/93510<br />
|- <br />
|<br />
: Virt driver guest NUMA node placement & topology<br />
|| Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/virt-driver-numa-placement || https://review.openstack.org/93636<br />
|-<br />
|<br />
: Virt driver large page allocation for guest RAM [[#dupe|*]]<br />
|| Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/virt-driver-large-pages || https://review.openstack.org/93653<br />
|-<br />
|<br />
: Virt driver pinning guest vCPUs to host pCPUs <br />
|| Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/virt-driver-cpu-pinning || https://review.openstack.org/93652<br />
|-<br />
| Soft affinity support for server groups || Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/soft-affinity-for-server-group || https://review.openstack.org/91328<br />
|-<br />
| Open vSwitch-based Security Groups: Open vSwitch Implementation of FirewallDriver || Neutron || Design review in progress || https://blueprints.launchpad.net/neutron/+spec/ovs-firewall-driver || https://review.openstack.org/89712<br />
|-<br />
| Framework for Advanced Services in Virtual Machines || Neutron || || https://blueprints.launchpad.net/neutron/+spec/adv-services-in-vms ||<br />
|-<br />
| Neutron Services Insertion, Chaining, and Steering || Neutron || Approved || https://blueprints.launchpad.net/neutron/+spec/neutron-services-insertion-chaining-steering || https://review.openstack.org/93524<br />
|-<br />
| Schedule VMs per flavor CPU overcommit || Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/flavor-cpu-overcommit || https://review.openstack.org/88286<br />
|-<br />
| OVF Meta-Data Import via Glance || Glance || Submitted || https://blueprints.launchpad.net/glance/+spec/epa-ovf-meta-data-import || TBD<br />
|-<br />
| colspan="5" | ''Support for high performance Intel(R) Data Plane Development Kit based vSwitches''<br />
|-<br />
|<br />
: Open vSwitch to use patch ports in place of veth pairs for vlan n/w <br />
|| Neutron || Submitted || https://blueprints.launchpad.net/neutron/+spec/openvswitch-patch-port-use || TBD<br />
|-<br />
|<br />
: Libvirt hugepage backed memory support [[#dupe|*]]<br />
|| Nova || Submitted || https://blueprints.launchpad.net/nova/+spec/libvirt-hugepage || TBD<br />
|-<br />
|<br />
: Support userspace vhost in ovs vif bindings <br />
|| Nova || Submitted || https://blueprints.launchpad.net/nova/+spec/libvirt-ovs-use-usvhost || TBD<br />
|-<br />
| NIC state aware scheduling || Nova || Rejected || https://blueprints.launchpad.net/nova/+spec/nic-state-aware-scheduling || https://review.openstack.org/87978<br />
|-<br />
| Add PCI and PCIe device capability aware scheduling || Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/pci-device-capability-aware-scheduling || https://review.openstack.org/92843<br />
|-<br />
| Snabb NFV mechanism driver || Neutron || Design review in progress || https://blueprints.launchpad.net/neutron/+spec/snabb-nfv-mech-driver || https://review.openstack.org/95711<br />
|-<br />
| Solver Scheduler - complex constraints scheduler with NFV use cases || Nova || Design review in progress || https://blueprints.launchpad.net/nova/+spec/solver-scheduler || https://review.openstack.org/#/c/96543/<br />
|-<br />
|}<br />
<span id="dupe">* Possible duplicate</span><br />
<br />
== Needed Development Not Yet Started ==</div>Russellbhttps://wiki.openstack.org/w/index.php?title=Meetings&diff=52383Meetings2014-05-15T14:28:44Z<p>Russellb: </p>
<hr />
<div>The OpenStack project holds its various public meetings on '''IRC''', in the <code><nowiki>#openstack-meeting</nowiki></code>, <code><nowiki>#openstack-meeting-alt</nowiki></code> and <code><nowiki>#openstack-meeting-3</nowiki></code> channels on Freenode. Everyone is encouraged to attend.<br />
<br />
You can also access the [https://www.google.com/calendar/ical/bj05mroquq28jhud58esggqmh4@group.calendar.google.com/public/basic.ics iCal feed for all OpenStack meetings].<br />
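The iCal feed above can be consumed programmatically. A minimal sketch (plain string handling, no external ics library; the sample event below is hypothetical) that pulls meeting summaries and start times out of feed text:<br />

```python
# Minimal iCal (RFC 5545) event extractor -- a sketch that assumes the
# usual BEGIN:VEVENT/END:VEVENT layout. A real parser would also handle
# line folding, time zones, and escaped characters.
def parse_ical_events(text):
    events, current = [], None
    for line in text.splitlines():
        line = line.strip()
        if line == "BEGIN:VEVENT":
            current = {}
        elif line == "END:VEVENT":
            if current is not None:
                events.append(current)
            current = None
        elif current is not None and ":" in line:
            key, _, value = line.partition(":")
            # Property names may carry parameters, e.g. DTSTART;TZID=...
            current[key.split(";")[0]] = value
    return events

# Hypothetical sample in the feed's format, for illustration only.
sample = """BEGIN:VCALENDAR
BEGIN:VEVENT
SUMMARY:Nova team Meeting
DTSTART:20140515T140000Z
END:VEVENT
END:VCALENDAR"""

for ev in parse_ical_events(sample):
    print(ev["SUMMARY"], ev["DTSTART"])
```

To process the live feed, fetch the URL above and pass the response body to the same function.<br />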
<br />
== OpenStack Project & Release Status meeting ==<br />
* Weekly on Tuesdays at 2100 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair (to contact for more information): [[ThierryCarrez]]<br />
* See [[Meetings/ProjectMeeting]] for details<br />
<br />
== Technical Committee meeting ==<br />
* Weekly on Tuesdays at 2000 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair (to contact for more information): [[ThierryCarrez]]<br />
* See [[Governance/TechnicalCommittee]] for details<br />
<br />
== OpenStack Compute (Nova) ==<br />
=== Nova team Meeting ===<br />
* Weekly on Thursdays, alternating times - 1400 UTC and 2100 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting-alt</nowiki></code> (1400 UTC)<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code> (2100 UTC) <br />
* Chair (to contact for more information): Russell Bryant<br />
* See [[Meetings/Nova]] for an agenda<br />
<br />
=== Nova Bug Scrub Meeting ===<br />
* Weekly on Wednesday at 1630 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting-3</nowiki></code><br />
* Chair (to contact for more information): Tracy Jones<br />
* See [[Meetings/NovaBugScrub]] for an agenda<br />
<br />
=== XenAPI team meeting ===<br />
* Weekly on Wednesdays at 1500 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chaired by: [[JohnGarbutt]]<br />
* See [[Meetings/XenAPI]] for agenda<br />
<br />
=== Nova Hyper-V team meeting ===<br />
* Weekly on Tuesdays at 1600 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chaired by primeministerp (Peter Pouliot)<br />
<br />
=== Gantt (Scheduler) team meeting ===<br />
* Weekly on Tuesdays at 1500 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair (to contact for more information): n0ano (Don Dugger)<br />
* See [[Meetings/Scheduler]] for details<br />
<br />
=== VMwareAPI team meeting ===<br />
* Weekly on Wednesdays at 1700 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair: [[TracyJones]]<br />
* See [[Meetings/VMwareAPI]] for details<br />
<br />
=== PCI Passthrough Meeting ===<br />
* Weekly on Tuesday at [http://www.worldclock.com/world_clock.html 1300 UTC] <br />
* Will change back to once weekly after agreements are reached.<br />
* IRC channel: #openstack-meeting-alt<br />
* Chair: baoli (Robert Li)<br />
* See [[Meetings/Passthrough]] for details<br />
<br />
=== Nova API meeting ===<br />
* Weekly on Friday at 00:00 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair: Chris Yeoh<br />
* See [[Meetings/NovaAPI]] for details<br />
<br />
== Documentation team meeting ==<br />
* Every other Wednesday at alternating times, see [[Meetings/DocTeamMeeting]]<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair (to contact for more information): [[AnneGentle]]<br />
* See [[Meetings/DocTeamMeeting]] for an agenda<br />
<br />
== Project Infrastructure team meeting ==<br />
* Weekly on Tuesdays at 1900 UTC<br />
* IRC channel: #openstack-meeting<br />
* Chair (to contact for more information): [[User:Corvus|James E. Blair (jeblair)]]<br />
* See [[Meetings/InfraTeamMeeting]] for an agenda<br />
<br />
== QA team meeting ==<br />
* Weekly on Thursdays at 1700/2200 UTC (alternating)<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair (to contact for more information): Matt Treinish<br />
* See [[Meetings/QATeamMeeting]] for an agenda<br />
<br />
== DefCore / RefStack Development Meeting ==<br />
* Weekly on Thursdays at 10 am Pacific (will track daylight saving time) - now at 1700 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting-3</nowiki></code><br />
* PTL: David Lenwell (Piston Cloud)<br />
* DefCore Chairs (to contact for more information): Rob "zehicle" Hirschfeld & Joshua McKenty<br />
* See [[Meetings/DefCore]] for an agenda<br />
<br />
Face to Face Meetings planned (Piston HQ in SFO)<br />
* Friday 3/28 1pm for Web Front Page<br />
* Tuesday 4/15 10am for general working session<br />
<br />
== DefCore Progress Update Meeting ==<br />
* 4/1 DefCore meeting, 2pm PST, 90 minutes<br />
* Agenda & Connection Details at https://etherpad.openstack.org/p/DefCoreElephant.7<br />
* Chairs (to contact for more information): Rob "@zehicle" Hirschfeld & Joshua McKenty<br />
<br />
== State management team meeting ==<br />
* Weekly on Thursdays at 1900 UTC (http://weatherarc.com/utc-time-conversion) <br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair (to contact for more information): [[Harlowja]]<br />
* See [[Meetings/StateManagement]] for an agenda<br />
<br />
== Keystone team meeting ==<br />
* Weekly on Tuesdays at 1800 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair (to contact for more information): [[DolphMathews]]<br />
* See [[Meetings/KeystoneMeeting]] for an agenda<br />
<br />
== Ironic (Bare Metal) team meeting ==<br />
* Weekly on Mondays at 1900 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair (to contact for more information): Devananda van der Veen<br />
* see [[Meetings/Ironic]] for agenda<br />
<br />
== TripleO team meeting ==<br />
* Weekly on Tuesdays at 1900 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting-alt</nowiki></code><br />
* Chair (to contact for more information): Robert Collins (lifeless)<br />
* see [[Meetings/TripleO]] for agenda<br />
<br />
== OpenStack Networking (Neutron) ==<br />
=== Neutron team meeting ===<br />
* Weekly on Mondays at 2100 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair (to contact for more information): Kyle Mestery (mestery)<br />
* see [[Network/Meetings]] for agenda<br />
<br />
=== LBaaS meeting ===<br />
* Weekly on Thursdays at 1400 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair (to contact for more information): enikanorov (Eugene Nikanorov)<br />
* see [[Network/LBaaS]] for agenda<br />
<br />
=== ML2 Network sub-team meeting ===<br />
* Weekly on Wednesdays at 1600 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting-alt</nowiki></code><br />
* Chair (to contact for more information): rkukura/Suhkdev (Bob Kukura / Sukhdev Kapur)<br />
* See [[Meetings/ML2]] for details<br />
<br />
=== Firewall as a Service (FWaaS) team meeting ===<br />
* Weekly on Wednesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?msg=OpenStack+Neutron+FWaaS+IRC&iso=20140326T1830&p1=1440&ah=1 1830 UTC] <br />
* IRC channel: <code><nowiki>#openstack-meeting-3</nowiki></code><br />
* Chair: snaiksat (Sumit Naiksatam)<br />
* See [[Meetings/FWaaS]] for details<br />
<br />
=== Neutron Advanced Services' Common requirements team meeting ===<br />
* Weekly on Wednesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?msg=OpenStack+Neutron+Adv+Services+IRC&iso=20140326T1730&p1=1440&ah=1 1730 UTC] <br />
* IRC channel: <code><nowiki>#openstack-meeting-3</nowiki></code><br />
* Chair: snaiksat (Sumit Naiksatam)<br />
* See [[Meetings/AdvancedServices]] for details<br />
<br />
=== Neutron IPv6 sub-team Meeting === <br />
* Weekly on Tuesdays at [http://www.worldclock.com/world_clock.html 1400 UTC] <br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code> <br />
* Chair: sc68cal (Sean M. Collins)<br />
* See [[Meetings/Neutron-IPv6-Subteam]] for details<br />
<br />
=== Neutron Group Policy Sub-Team Meeting ===<br />
* Weekly on Thursdays at 1800 UTC<br />
* IRC channel: #openstack-meeting-3<br />
* Chair: SumitNaiksatam (Sumit Naiksatam)<br />
* See [[Meetings/Neutron_Group_Policy]] for details<br />
<br />
=== Neutron Distributed Virtual Router meeting ===<br />
* Weekly on Wednesdays at [http://www.worldclock.com/world_clock.html 1500 UTC]<br />
* IRC channel: <code><nowiki>#openstack-meeting-alt</nowiki></code><br />
* Chair: Swami (Swaminathan Vasudevan)<br />
* See [[Meetings/Distributed-Virtual-Router]] for details<br />
<br />
=== Neutron blueprint ovs-firewall-driver Meeting ===<br />
* Tentative: Monday, December 16 at 2000 UTC<br />
* IRC channel: #openstack-meeting<br />
* Chair: asadoughi (Amir Sadoughi)<br />
* Agenda: See [[Meetings/Neutron_blueprint_ovs-firewall-driver]]<br />
<br />
=== Neutron L3 Sub Team Meeting ===<br />
* Weekly on Thursday at 1500 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting-3</nowiki></code><br />
* Chair: carl_baldwin (Carl Baldwin)<br />
* Agenda: See [[Meetings/Neutron-L3-Subteam]]<br />
<br />
<br />
=== Neutron ServiceVM framework Sub Team Meeting ===<br />
* Weekly on Tuesdays at 0500 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair: yamahata (Isaku Yamahata)<br />
* Agenda: See [[Meetings/ServiceVM]]<br />
<br />
== Cinder team meeting ==<br />
* Weekly on Wednesdays at 1600 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chaired by [[JohnGriffith]]<br />
* see [[CinderMeetings]] for agenda<br />
<br />
== Ceilometer team meeting ==<br />
* '''Every''' week on Thursdays at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=15&min=0&sec=0 1500 UTC].<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chaired by eglynn (Eoghan Glynn)<br />
* see [[Meetings/Ceilometer]] for details<br />
<br />
== Designate (DNSaaS) meeting ==<br />
* Weekly Wednesdays at 1700 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting-alt</nowiki></code><br />
* Chair (to contact for more information): Kiall Mac Innes (kiall)<br />
* See [[Meetings/Designate]] for details<br />
<br />
== Trove (DBaaS) meeting ==<br />
* Weekly on Wednesdays at 1800 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting-alt</nowiki></code><br />
* Chair (to contact for more information): Michael Basnight (hub_cap) / Vipul Sabhaya (vipul) / Nikhil Manchanda (SlickNik) / Tim Simpson (grapex)<br />
* See [[Meetings/TroveMeeting]] for details<br />
* For BP Meeting, please see [[Meetings/TroveBPMeeting]] for more details<br />
<br />
== Marconi (queues) team meeting ==<br />
* Weekly on Tuesday at 1500 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting-alt</nowiki></code><br />
* Chair: kgriffs (Kurt Griffiths)<br />
* See [[Meetings/Marconi]] for details<br />
<br />
== OpenStack Data Processing (Sahara) team meeting ==<br />
* Weekly on Thursdays at 1800 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting-alt</nowiki></code><br />
* Chair (to contact for more info): SergeyLukjanov (Sergey Lukjanov)<br />
* See [[Meetings/SaharaAgenda]] for details<br />
<br />
== Mistral meeting ==<br />
* Weekly on Mondays at 1600 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair: rakhmerov (Renat Akhmerov)<br />
* See [[Meetings/MistralAgenda]] for details<br />
<br />
== Murano meeting ==<br />
* Weekly on Tuesday at 1700 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting-alt</nowiki></code><br />
* Chair: Georgiy Okrokvertskhov (Georgy_Ok)<br />
* See [[Meetings/MuranoAgenda]] for details<br />
<br />
== Heat (orchestration) team meeting ==<br />
* Weekly on Wednesdays at 2000 UTC in <code><nowiki>#openstack-meeting</nowiki></code> or Thursdays at 0000 UTC in <code><nowiki>#openstack-meeting-alt</nowiki></code> (alternate weeks)<br />
* Chair (to contact for more information): Steve Baker (stevebaker)<br />
* See [[Meetings/HeatAgenda]] for details<br />
<br />
== Horizon team meeting ==<br />
* Weekly on Tuesdays at 1600 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting-3</nowiki></code><br />
* Chair: David Lyle (david-lyle)<br />
* See [[Meetings/Horizon]] for details<br />
<br />
== Swift team meeting ==<br />
* Weekly on Wednesdays at 1900 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair: notmyname (John Dickinson)<br />
* See [[Meetings/Swift]] for details<br />
<br />
== OpenStack Security Group (OSSG) meeting ==<br />
* Weekly on Thursdays at 1800 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair (to contact for more information): bdpayne (Bryan Payne)<br />
* See [[Meetings/OpenStackSecurity]] for an agenda<br />
<br />
== Python3 Compatibility Team meeting ==<br />
* Not planned anymore<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair (to contact for more information): jd_ (Julien Danjou)<br />
* See [[Meetings/Python3]] for details<br />
<br />
== Glance Team meeting ==<br />
* Weekly on Thursdays at 1400/2000 UTC (alternating)<br />
* IRC channel: <code><nowiki>#openstack-meeting-alt</nowiki></code><br />
* Chair (to contact for more information): markwash (Mark Washenberger)<br />
* See [[Meetings/Glance]] for details<br />
<br />
== Oslo Team meeting ==<br />
* On demand on Fridays at 1600 UTC ([http://www.timeanddate.com/worldclock/converted.html?iso=20140425T16&p1=0&p2=2133&p3=195&p4=224 timeanddate.com])<br />
* IRC channel: <code><nowiki>#openstack-meeting-alt</nowiki></code><br />
* Chair (to contact for more information): dhellmann (Doug Hellmann)<br />
* See [[Meetings/Oslo]] for details<br />
<br />
== OpenStack Community team meeting ==<br />
* Weekly on Wednesday at [http://www.worldclock.com/world_clock.html 2300 UTC]<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair: reed ([http://www.openstack.org/community/members/profile/1372 Stefano Maffulli])<br />
* See [[Meetings/Community]] for details<br />
<br />
== I18N Team meeting ==<br />
* Bi-weekly on Thursday, alternating between 0800 UTC and 0000 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair: daisy<br />
* See [[Meetings/I18nTeamMeeting]] for details<br />
<br />
== Training-manuals Team meeting ==<br />
* Weekly on Monday at [http://www.worldclock.com/current-local-time-in-san-francisco_598.htm 1700 UTC] <br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair: sarob<br />
* See [[Meetings/training-manuals]] for details<br />
<br />
== Manila Team meeting ==<br />
* Weekly on Thursday at [http://www.worldclock.com/current-local-time-in-san-francisco_598.htm 1500 UTC] <br />
* IRC channel: <code><nowiki>#openstack-meeting-alt</nowiki></code><br />
* Chair: bswartz<br />
* See [[Manila/Meetings]] for details<br />
<br />
== Stackalytics team meeting ==<br />
* Bi-weekly on Mondays (starting from October 21st) at [http://www.worldclock.com/world_clock.html 1500 UTC] <br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair: ilyashakhat (Ilya Shakhat)<br />
* See [[Meetings/Stackalytics]] for details<br />
<br />
== Climate (Reservations) team meeting ==<br />
* Weekly on Fridays at [http://www.worldclock.com/world_clock.html 1500 UTC] <br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair: bauzas (Sylvain Bauza), DinaBelova (Dina Belova)<br />
* See [[Meetings/Climate]] for details<br />
<br />
== Rally meeting ==<br />
* Weekly on Tuesdays at [http://www.worldclock.com/world_clock.html 1700 UTC] <br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair: boris-42 (Boris Pavlovic)<br />
* See [[Meetings/Rally]] for details<br />
<br />
== Solum Team Meeting ==<br />
* Weekly on Tuesdays at 1600/2200 UTC (alternating)<br />
* IRC channel: <code><nowiki>#openstack-meeting-alt</nowiki></code><br />
* Chair: adrian_otto (Adrian Otto)<br />
* See [[Meetings/Solum]] for details<br />
<br />
== Congress Team Meeting ==<br />
* Bi-weekly on Tuesdays at [http://www.worldclock.com/world_clock.html 1700 UTC], e.g. Feb 25, 2014<br />
* IRC channel: #openstack-meeting-3<br />
* Chair: pballand (Pete Balland)<br />
* See [[Meetings/Congress]] for details<br />
<br />
== Barbican Meeting ==<br />
* Weekly on Mondays at [http://www.timeanddate.com/worldclock/fixedtime.html?iso=20130502T2000 2000 UTC]<br />
* IRC channel: <code><nowiki>#openstack-meeting-alt</nowiki></code><br />
* Chair (to contact for more information): jraim (#openstack-barbican @ Freenode)<br />
* See [[Meetings/Barbican]] for an agenda<br />
<br />
== Chef Cookbook meeting ==<br />
* Weekly on Mondays at 1600 UTC<br />
* IRC channel: <code><nowiki>#openstack-chef</nowiki></code><br />
* Chair: mattray (Matt Ray)<br />
* See [[Meetings/ChefCookbook]] for details<br />
<br />
== Milk Meeting ==<br />
* Weekly on Monday at [http://www.worldclock.com/current-local-time-in-san-francisco_598.htm 2000 UTC] <br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair: sarob<br />
* See [[Meetings/milk]] for details<br />
<br />
== StoryBoard Meeting ==<br />
* Weekly on Thursdays at 1600 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair: cody-somerville or ttx<br />
* See [[StoryBoard]] for details<br />
<br />
<br />
== Hierarchical Multitenancy Meeting ==<br />
* Weekly on Fridays at 1600 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair: vishy<br />
* See [[HierarchicalMultitenancy]] for details<br />
<br />
== python-openstacksdk Meeting ==<br />
* Weekly on Tuesdays at [http://www.worldtimebuddy.com/?qm=1&lid=6,0,4726206,100&h=6&date=2014-2-11&sln=13-14 1900 UTC] starting 2/19/2014<br />
* IRC channel: <code><nowiki>#openstack-meeting-3</nowiki></code><br />
* Chair: jnoller<br />
* See [[PythonOpenStackSDK]] for details<br />
<br />
== Satori Team Meeting ==<br />
* Weekly on Mondays at [http://www.worldtimebuddy.com/?qm=1&lid=6,0,4726206,100&h=6&date=2014-2-11&sln=9-10 1500 UTC] starting Feb 24, 2014<br />
* IRC channel: <code><nowiki>#openstack-meeting-alt</nowiki></code><br />
* Chair: Ziad_Sawalha<br />
* See [[Meetings/Satori]] for details<br />
<br />
== Fuel Team Meeting ==<br />
* Weekly on Thursdays at [http://www.worldtimebuddy.com/?qm=1&lid=100&h=100&date=2014-3-27&sln=16-17 1600 UTC] starting Feb 27, 2014<br />
* IRC channel: <code><nowiki>#openstack-meeting-alt</nowiki></code><br />
* Chair: vkozhukalov<br />
* See [[Meetings/Fuel]] for details<br />
<br />
== Third Party OpenStack CI Workshop and Q&A Meetings ==<br />
* Weekly on Mondays at 1800 UTC starting March 3rd, 2014<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair: JayPipes<br />
<br />
== MagnetoDB Team meeting ==<br />
* Every second week on Mondays at 0900 UTC starting April 14th, 2014<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair: isviridov<br />
<br />
== MagnetoDB Team daily scrum meeting notes ==<br />
* Every day at 1500 UTC starting April 8th, 2014<br />
* IRC channel: <code><nowiki>#magnetodb</nowiki></code><br />
* Chair: isviridov, setho, dukhlov, ikhudoshyn<br />
<br />
== PHP SDK Team Meeting ==<br />
* Weekly on Wednesdays at 1530 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting-3</nowiki></code> starting April 9, 2014<br />
* Chair: mfer (Matt Farina)<br />
* See [[Meetings/OpenStack-SDK-PHP]] for details<br />
<br />
== NFV Team Meeting ==<br />
* Weekly on Wednesdays at 1400 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code> starting June 4, 2014<br />
* Chair: Russell Bryant (russellb)<br />
* See [[Meetings/NFV]] for details</div>Russellbhttps://wiki.openstack.org/w/index.php?title=TelcoWorkingGroup&diff=52382TelcoWorkingGroup2014-05-15T14:28:13Z<p>Russellb: /* Weekly NFV sub-team meeting */</p>
<hr />
<div><br />
= Weekly NFV sub-team meeting =<br />
'''MEETING TIME: (Proposed, subject to change) Wednesdays, 1400 UTC, #openstack-meeting, starting June 4'''<br />
<br />
This meeting is a weekly gathering of developers and operators interested in development activity in support of NFV use cases. We gather requirements to support these use cases and track active development efforts across OpenStack that relate to this area.<br />
<br />
Chair: Russell Bryant (russellb)<br />
<br />
== Agenda for next meeting ==<br />
<br />
Proposed (subject to change) for Wednesday, June 4 at 1400 UTC in #openstack-meeting.<br />
<br />
Agenda:<br />
* First meeting!<br />
* Meet and greet<br />
* Review our current blueprint list and fill in anything we're not tracking yet<br />
<br />
== Previous meetings ==<br />
<br />
* [http://eavesdrop.openstack.org/meetings/nfv/ Meeting logs]<br />
<br />
= Development Efforts =<br />
<br />
== Active Blueprints ==<br />
<br />
* SR-IOV Networking Support<br />
** '''Status: Design review in progress'''<br />
** https://blueprints.launchpad.net/nova/+spec/pci-passthrough-sriov<br />
** Nova design: https://review.openstack.org/#/c/86606/<br />
* Support for NUMA and VCPU topology configuration<br />
** https://blueprints.launchpad.net/nova/+spec/nova-virt-numa-and-vcpu-topology<br />
** Virt driver guest vCPU topology configuration <br />
*** '''Status: Design review in progress'''<br />
*** https://blueprints.launchpad.net/nova/+spec/virt-driver-vcpu-topology <br />
*** Nova design: https://review.openstack.org/93510<br />
** Virt driver guest NUMA node placement & topology <br />
*** '''Status: Design review in progress'''<br />
*** https://blueprints.launchpad.net/nova/+spec/virt-driver-numa-placement<br />
*** Nova design: https://review.openstack.org/93636<br />
** Virt driver large page allocation for guest RAM<br />
*** '''Status: Design review in progress'''<br />
*** https://blueprints.launchpad.net/nova/+spec/virt-driver-large-pages<br />
*** Nova design: https://review.openstack.org/93653<br />
** Virt driver pinning guest vCPUs to host pCPUs<br />
*** '''Status: Design review in progress'''<br />
*** https://blueprints.launchpad.net/nova/+spec/virt-driver-cpu-pinning<br />
*** Nova design: https://review.openstack.org/93652<br />
* Soft affinity support for server groups<br />
** '''Status: Design review in progress'''<br />
** https://blueprints.launchpad.net/nova/+spec/soft-affinity-for-server-group <br />
** Nova design: https://review.openstack.org/#/c/91328/<br />
* Open vSwitch-based Security Groups: Open vSwitch Implementation of FirewallDriver<br />
** https://blueprints.launchpad.net/neutron/+spec/ovs-firewall-driver<br />
** '''Status: Design review in progress'''<br />
** Neutron design: https://review.openstack.org/#/c/89712/<br />
* Framework for Advanced Services in Virtual Machines<br />
** https://blueprints.launchpad.net/neutron/+spec/adv-services-in-vms<br />
* Neutron Services Insertion, Chaining, and Steering<br />
** '''Status: Design review in progress'''<br />
** https://blueprints.launchpad.net/neutron/+spec/neutron-services-insertion-chaining-steering<br />
** Neutron design: https://review.openstack.org/#/c/93524<br />
<br />
== Needed Development Not Yet Started ==</div>Russellbhttps://wiki.openstack.org/w/index.php?title=Meetings&diff=52365Meetings2014-05-15T13:47:57Z<p>Russellb: </p>
<hr />
<div>The OpenStack project holds its various public meetings on '''IRC''', in the <code><nowiki>#openstack-meeting</nowiki></code>, <code><nowiki>#openstack-meeting-alt</nowiki></code> and <code><nowiki>#openstack-meeting-3</nowiki></code> channels on Freenode. Everyone is encouraged to attend.<br />
<br />
You can also access the [https://www.google.com/calendar/ical/bj05mroquq28jhud58esggqmh4@group.calendar.google.com/public/basic.ics iCal feed for all OpenStack meetings].<br />
<br />
== OpenStack Project & Release Status meeting ==<br />
* Weekly on Tuesdays at 2100 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair (to contact for more information): [[ThierryCarrez]]<br />
* See [[Meetings/ProjectMeeting]] for details<br />
<br />
== Technical Committee meeting ==<br />
* Weekly on Tuesdays at 2000 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair (to contact for more information): [[ThierryCarrez]]<br />
* See [[Governance/TechnicalCommittee]] for details<br />
<br />
== OpenStack Compute (Nova) ==<br />
=== Nova team Meeting ===<br />
* Weekly on Thursdays, alternating times - 1400 UTC and 2100 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting-alt</nowiki></code> (1400 UTC)<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code> (2100 UTC) <br />
* Chair (to contact for more information): Russell Bryant<br />
* See [[Meetings/Nova]] for an agenda<br />
<br />
=== Nova Bug Scrub Meeting ===<br />
* Weekly on Wednesday at 1630 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting-3</nowiki></code><br />
* Chair (to contact for more information): Tracy Jones<br />
* See [[Meetings/NovaBugScrub]] for an agenda<br />
<br />
=== XenAPI team meeting ===<br />
* Weekly on Wednesdays at 1500 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chaired by: [[JohnGarbutt]]<br />
* See [[Meetings/XenAPI]] for agenda<br />
<br />
=== Nova Hyper-V team meeting ===<br />
* Weekly on Tuesdays at 1600 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chaired by primeministerp (Peter Pouliot)<br />
<br />
=== Gantt (Scheduler) team meeting ===<br />
* Weekly on Tuesdays at 1500 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair (to contact for more information): n0ano (Don Dugger)<br />
* See [[Meetings/Scheduler]] for details<br />
<br />
=== VMwareAPI team meeting ===<br />
* Weekly on Wednesdays at 1700 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair: [[TracyJones]]<br />
* See [[Meetings/VMwareAPI]] for details<br />
<br />
=== PCI Passthrough Meeting ===<br />
* Weekly on Tuesday at [http://www.worldclock.com/world_clock.html 1300 UTC] <br />
* Will change back to once weekly after agreements are reached.<br />
* IRC channel: <code><nowiki>#openstack-meeting-alt</nowiki></code><br />
* Chair: baoli (Robert Li)<br />
* See [[Meetings/Passthrough]] for details<br />
<br />
=== Nova API meeting ===<br />
* Weekly on Friday at 00:00 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair: Chris Yeoh<br />
* See [[Meetings/NovaAPI]] for details<br />
<br />
== Documentation team meeting ==<br />
* Every other Wednesday at alternating times, see [[Meetings/DocTeamMeeting]]<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair (to contact for more information): [[AnneGentle]]<br />
* See [[Meetings/DocTeamMeeting]] for an agenda<br />
<br />
== Project Infrastructure team meeting ==<br />
* Weekly on Tuesdays at 1900 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair (to contact for more information): [[User:Corvus|James E. Blair (jeblair)]]<br />
* See [[Meetings/InfraTeamMeeting]] for an agenda<br />
<br />
== QA team meeting ==<br />
* Weekly on Thursdays at 1700/2200 UTC (alternating)<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair (to contact for more information): Matt Treinish<br />
* See [[Meetings/QATeamMeeting]] for an agenda<br />
<br />
== DefCore / RefStack Development Meeting ==<br />
* Weekly on Thursdays at 10 am Pacific (tracks daylight saving time) - now at 1700 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting-3</nowiki></code><br />
* PTL: David Lenwell (Piston Cloud)<br />
* DefCore Chairs (to contact for more information): Rob "zehicle" Hirschfeld & Joshua McKenty<br />
* See [[Meetings/DefCore]] for an agenda<br />
<br />
Face-to-face meetings planned (Piston HQ in San Francisco)<br />
* Friday 3/28 1pm for Web Front Page<br />
* Tuesday 4/15 10am for general working session<br />
<br />
== DefCore Progress Update Meeting ==<br />
* 4/1 DefCore meeting, 2pm Pacific, 90 minutes<br />
* Agenda & Connection Details at https://etherpad.openstack.org/p/DefCoreElephant.7<br />
* Chairs (to contact for more information): Rob "@zehicle" Hirschfeld & Joshua McKenty<br />
<br />
== State management team meeting ==<br />
* Weekly on Thursdays at 1900 UTC (http://weatherarc.com/utc-time-conversion) <br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair (to contact for more information): [[Harlowja]]<br />
* See [[Meetings/StateManagement]] for an agenda<br />
<br />
== Keystone team meeting ==<br />
* Weekly on Tuesdays at 1800 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair (to contact for more information): [[DolphMathews]]<br />
* See [[Meetings/KeystoneMeeting]] for an agenda<br />
<br />
== Ironic (Bare Metal) team meeting ==<br />
* Weekly on Mondays at 1900 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair (to contact for more information): Devananda van der Veen<br />
* See [[Meetings/Ironic]] for an agenda<br />
<br />
== TripleO team meeting ==<br />
* Weekly on Tuesdays at 1900 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting-alt</nowiki></code><br />
* Chair (to contact for more information): Robert Collins (lifeless)<br />
* See [[Meetings/TripleO]] for an agenda<br />
<br />
== OpenStack Networking (Neutron) ==<br />
=== Neutron team meeting ===<br />
* Weekly on Mondays at 2100 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair (to contact for more information): Kyle Mestery (mestery)<br />
* See [[Network/Meetings]] for an agenda<br />
<br />
=== LBaaS meeting ===<br />
* Weekly on Thursdays at 1400 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair (to contact for more information): enikanorov (Eugene Nikanorov)<br />
* See [[Network/LBaaS]] for an agenda<br />
<br />
=== ML2 Network sub-team meeting ===<br />
* Weekly on Wednesdays at 1600 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting-alt</nowiki></code><br />
* Chair (to contact for more information): rkukura/Suhkdev (Bob Kukura / Sukhdev Kapur)<br />
* See [[Meetings/ML2]] for details<br />
<br />
=== Firewall as a Service (FWaaS) team meeting ===<br />
* Weekly on Wednesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?msg=OpenStack+Neutron+FWaaS+IRC&iso=20140326T1830&p1=1440&ah=1 1830 UTC] <br />
* IRC channel: <code><nowiki>#openstack-meeting-3</nowiki></code><br />
* Chair: snaiksat (Sumit Naiksatam)<br />
* See [[Meetings/FWaaS]] for details<br />
<br />
=== Neutron Advanced Services' Common requirements team meeting ===<br />
* Weekly on Wednesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?msg=OpenStack+Neutron+Adv+Services+IRC&iso=20140326T1730&p1=1440&ah=1 1730 UTC] <br />
* IRC channel: <code><nowiki>#openstack-meeting-3</nowiki></code><br />
* Chair: snaiksat (Sumit Naiksatam)<br />
* See [[Meetings/AdvancedServices]] for details<br />
<br />
=== Neutron IPv6 sub-team Meeting === <br />
* Weekly on Tuesdays at [http://www.worldclock.com/world_clock.html 1400 UTC] <br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code> <br />
* Chair: sc68cal (Sean M. Collins)<br />
* See [[Meetings/Neutron-IPv6-Subteam]] for details<br />
<br />
=== Neutron Group Policy Sub-Team Meeting ===<br />
* Weekly on Thursdays at 1800 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting-3</nowiki></code><br />
* Chair: SumitNaiksatam (Sumit Naiksatam)<br />
* See [[Meetings/Neutron_Group_Policy]] for details<br />
<br />
=== Neutron Distributed Virtual Router meeting ===<br />
* Weekly on Wednesdays at [http://www.worldclock.com/world_clock.html 1500 UTC]<br />
* IRC channel: <code><nowiki>#openstack-meeting-alt</nowiki></code><br />
* Chair: Swami (Swaminathan Vasudevan)<br />
* See [[Meetings/Distributed-Virtual-Router]] for details<br />
<br />
=== Neutron blueprint ovs-firewall-driver Meeting ===<br />
* Tentative: Monday, December 16 at 2000 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair: asadoughi (Amir Sadoughi)<br />
* Agenda: See [[Meetings/Neutron_blueprint_ovs-firewall-driver]]<br />
<br />
=== Neutron L3 Sub Team Meeting ===<br />
* Weekly on Thursday at 1500 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting-3</nowiki></code><br />
* Chair: carl_baldwin (Carl Baldwin)<br />
* Agenda: See [[Meetings/Neutron-L3-Subteam]]<br />
<br />
=== Neutron ServiceVM framework Sub Team Meeting ===<br />
* Weekly on Tuesdays at 0500 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair: yamahata (Isaku Yamahata)<br />
* Agenda: See [[Meetings/ServiceVM]]<br />
<br />
== Cinder team meeting ==<br />
* Weekly on Wednesdays at 1600 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chaired by [[JohnGriffith]]<br />
* See [[CinderMeetings]] for an agenda<br />
<br />
== Ceilometer team meeting ==<br />
* '''Every''' week on Thursdays at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=15&min=0&sec=0 1500 UTC].<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chaired by eglynn (Eoghan Glynn)<br />
* See [[Meetings/Ceilometer]] for details<br />
<br />
== Designate (DNSaaS) meeting ==<br />
* Weekly Wednesdays at 1700 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting-alt</nowiki></code><br />
* Chair (to contact for more information): Kiall Mac Innes (kiall)<br />
* See [[Meetings/Designate]] for details<br />
<br />
== Trove (DBaaS) meeting ==<br />
* Weekly on Wednesdays at 1800 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting-alt</nowiki></code><br />
* Chair (to contact for more information): Michael Basnight (hub_cap) / Vipul Sabhaya (vipul) / Nikhil Manchanda (SlickNik) / Tim Simpson (grapex)<br />
* See [[Meetings/TroveMeeting]] for details<br />
* For BP Meeting, please see [[Meetings/TroveBPMeeting]] for more details<br />
<br />
== Marconi (queues) team meeting ==<br />
* Weekly on Tuesday at 1500 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting-alt</nowiki></code><br />
* Chair: kgriffs (Kurt Griffiths)<br />
* See [[Meetings/Marconi]] for details<br />
<br />
== OpenStack Data Processing (Sahara) team meeting ==<br />
* Weekly on Thursdays at 1800 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting-alt</nowiki></code><br />
* Chair (to contact for more info): SergeyLukjanov (Sergey Lukjanov)<br />
* See [[Meetings/SaharaAgenda]] for details<br />
<br />
== Mistral meeting ==<br />
* Weekly on Mondays at 1600 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair: rakhmerov (Renat Akhmerov)<br />
* See [[Meetings/MistralAgenda]] for details<br />
<br />
== Murano meeting ==<br />
* Weekly on Tuesday at 1700 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting-alt</nowiki></code><br />
* Chair: Georgiy Okrokvertskhov (Georgy_Ok)<br />
* See [[Meetings/MuranoAgenda]] for details<br />
<br />
== Heat (orchestration) team meeting ==<br />
* Weekly on Wednesdays at 2000 UTC in <code><nowiki>#openstack-meeting</nowiki></code> or Thursdays at 0000 UTC in <code><nowiki>#openstack-meeting-alt</nowiki></code> (alternate weeks)<br />
* Chair (to contact for more information): Steve Baker (stevebaker)<br />
* See [[Meetings/HeatAgenda]] for details<br />
<br />
== Horizon team meeting ==<br />
* Weekly on Tuesdays at 1600 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting-3</nowiki></code><br />
* Chair: David Lyle (david-lyle)<br />
* See [[Meetings/Horizon]] for details<br />
<br />
== Swift team meeting ==<br />
* Weekly on Wednesdays at 1900 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair: notmyname (John Dickinson)<br />
* See [[Meetings/Swift]] for details<br />
<br />
== OpenStack Security Group (OSSG) meeting ==<br />
* Weekly on Thursdays at 1800 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair (to contact for more information): bdpayne (Bryan Payne)<br />
* See [[Meetings/OpenStackSecurity]] for an agenda<br />
<br />
== Python3 Compatibility Team meeting ==<br />
* No longer scheduled<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair (to contact for more information): jd_ (Julien Danjou)<br />
* See [[Meetings/Python3]] for details<br />
<br />
== Glance Team meeting ==<br />
* Weekly on Thursdays at 1400/2000 UTC (alternating)<br />
* IRC channel: <code><nowiki>#openstack-meeting-alt</nowiki></code><br />
* Chair (to contact for more information): markwash (Mark Washenberger)<br />
* See [[Meetings/Glance]] for details<br />
<br />
== Oslo Team meeting ==<br />
* On demand on Fridays at 1600 UTC ([http://www.timeanddate.com/worldclock/converted.html?iso=20140425T16&p1=0&p2=2133&p3=195&p4=224 timeanddate.com])<br />
* IRC channel: <code><nowiki>#openstack-meeting-alt</nowiki></code><br />
* Chair (to contact for more information): dhellmann (Doug Hellmann)<br />
* See [[Meetings/Oslo]] for details<br />
<br />
== OpenStack Community team meeting ==<br />
* Weekly on Wednesday at [http://www.worldclock.com/world_clock.html 2300 UTC]<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair: reed ([http://www.openstack.org/community/members/profile/1372 Stefano Maffulli])<br />
* See [[Meetings/Community]] for details<br />
<br />
== I18N Team meeting ==<br />
* Bi-weekly on Thursday, alternating between 0800 UTC and 0000 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair: daisy<br />
* See [[Meetings/I18nTeamMeeting]] for details<br />
<br />
== Training-manuals Team meeting ==<br />
* Weekly on Monday at [http://www.worldclock.com/current-local-time-in-san-francisco_598.htm 1700 UTC] <br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair: sarob<br />
* See [[Meetings/training-manuals]] for details<br />
<br />
== Manila Team meeting ==<br />
* Weekly on Thursday at [http://www.worldclock.com/current-local-time-in-san-francisco_598.htm 1500 UTC] <br />
* IRC channel: <code><nowiki>#openstack-meeting-alt</nowiki></code><br />
* Chair: bswartz<br />
* See [[Manila/Meetings]] for details<br />
<br />
== Stackalytics team meeting ==<br />
* Bi-weekly on Mondays (starting from October 21st) at [http://www.worldclock.com/world_clock.html 1500 UTC] <br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair: ilyashakhat (Ilya Shakhat)<br />
* See [[Meetings/Stackalytics]] for details<br />
<br />
== Climate (Reservations) team meeting ==<br />
* Weekly on Fridays at [http://www.worldclock.com/world_clock.html 1500 UTC] <br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair: bauzas (Sylvain Bauza), DinaBelova (Dina Belova)<br />
* See [[Meetings/Climate]] for details<br />
<br />
== Rally meeting ==<br />
* Weekly on Tuesdays at [http://www.worldclock.com/world_clock.html 1700 UTC] <br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair: boris-42 (Boris Pavlovic)<br />
* See [[Meetings/Rally]] for details<br />
<br />
== Solum Team Meeting ==<br />
* Weekly on Tuesdays at 1600/2200 UTC (alternating)<br />
* IRC channel: <code><nowiki>#openstack-meeting-alt</nowiki></code><br />
* Chair: adrian_otto (Adrian Otto)<br />
* See [[Meetings/Solum]] for details<br />
<br />
== Congress Team Meeting ==<br />
* Bi-weekly on Tuesdays at [http://www.worldclock.com/world_clock.html 1700 UTC], e.g. Feb 25, 2014<br />
* IRC channel: <code><nowiki>#openstack-meeting-3</nowiki></code><br />
* Chair: pballand (Pete Balland)<br />
* See [[Meetings/Congress]] for details<br />
<br />
== Barbican Meeting ==<br />
* Weekly on Mondays at [http://www.timeanddate.com/worldclock/fixedtime.html?iso=20130502T2000 2000 UTC]<br />
* IRC channel: <code><nowiki>#openstack-meeting-alt</nowiki></code><br />
* Chair (to contact for more information): jraim (#openstack-barbican @ Freenode)<br />
* See [[Meetings/Barbican]] for an agenda<br />
<br />
== Chef Cookbook meeting ==<br />
* Weekly on Mondays at 1600 UTC<br />
* IRC channel: <code><nowiki>#openstack-chef</nowiki></code><br />
* Chair: mattray (Matt Ray)<br />
* See [[Meetings/ChefCookbook]] for details<br />
<br />
== Milk Meeting ==<br />
* Weekly on Monday at [http://www.worldclock.com/current-local-time-in-san-francisco_598.htm 2000 UTC] <br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair: sarob<br />
* See [[Meetings/milk]] for details<br />
<br />
== StoryBoard Meeting ==<br />
* Weekly on Thursdays at 1600 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair: cody-somerville or ttx<br />
* See [[StoryBoard]] for details<br />
<br />
== Hierarchical Multitenancy Meeting ==<br />
* Weekly on Fridays at 1600 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair: vishy<br />
* See [[HierarchicalMultitenancy]] for details<br />
<br />
== python-openstacksdk Meeting ==<br />
* Weekly on Tuesdays at [http://www.worldtimebuddy.com/?qm=1&lid=6,0,4726206,100&h=6&date=2014-2-11&sln=13-14 1900 UTC] starting 2/19/2014<br />
* IRC channel: <code><nowiki>#openstack-meeting-3</nowiki></code><br />
* Chair: jnoller<br />
* See [[PythonOpenStackSDK]] for details<br />
<br />
== Satori Team Meeting ==<br />
* Weekly on Mondays at [http://www.worldtimebuddy.com/?qm=1&lid=6,0,4726206,100&h=6&date=2014-2-11&sln=9-10 1500 UTC] starting Feb 24, 2014<br />
* IRC channel: <code><nowiki>#openstack-meeting-alt</nowiki></code><br />
* Chair: Ziad_Sawalha<br />
* See [[Meetings/Satori]] for details<br />
<br />
== Fuel Team Meeting ==<br />
* Weekly on Thursdays at [http://www.worldtimebuddy.com/?qm=1&lid=100&h=100&date=2014-3-27&sln=16-17 1600 UTC] starting Feb 27, 2014<br />
* IRC channel: <code><nowiki>#openstack-meeting-alt</nowiki></code><br />
* Chair: vkozhukalov<br />
* See [[Meetings/Fuel]] for details<br />
<br />
== Third Party OpenStack CI Workshop and Q&A Meetings ==<br />
* Weekly on Mondays at 1800 UTC starting March 3rd, 2014<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair: JayPipes<br />
<br />
== MagnetoDB Team meeting ==<br />
* Every second week on Mondays at 0900 UTC starting April 14th, 2014<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code><br />
* Chair: isviridov<br />
<br />
== MagnetoDB Team daily scrum meeting notes ==<br />
* Every day at 1500 UTC starting April 8th, 2014<br />
* IRC channel: <code><nowiki>#magnetodb</nowiki></code><br />
* Chair: isviridov, setho, dukhlov, ikhudoshyn<br />
<br />
== PHP SDK Team Meeting ==<br />
* Weekly on Wednesdays at 1530 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting-3</nowiki></code> starting April 9, 2014<br />
* Chair: mfer (Matt Farina)<br />
* See [[Meetings/OpenStack-SDK-PHP]] for details<br />
<br />
== NFV Team Meeting ==<br />
* Weekly on Wednesdays at 1400 UTC<br />
* IRC channel: <code><nowiki>#openstack-meeting</nowiki></code> starting June 4, 2014<br />
* Chair: russellb (Russell Bryant)<br />
* See [[Meetings/NFV]] for details</div>Russellbhttps://wiki.openstack.org/w/index.php?title=TelcoWorkingGroup&diff=52330TelcoWorkingGroup2014-05-14T21:25:44Z<p>Russellb: /* Active Blueprints */</p>
<hr />
<div><br />
= Weekly NFV sub-team meeting =<br />
'''MEETING TIME: (Proposed, subject to change) Wednesdays, 1400 UTC, #openstack-meeting, starting June 4'''<br />
<br />
This meeting is a weekly gathering of developers and operators interested in development activity in support of NFV use cases. We gather requirements to support these use cases and track active development efforts across OpenStack that relate to this area.<br />
<br />
== Agenda for next meeting ==<br />
<br />
Proposed (subject to change) for Wednesday, June 4 at 1400 UTC in #openstack-meeting.<br />
<br />
Agenda:<br />
* First meeting!<br />
* Meet and greet<br />
* Review our current blueprint list and fill in anything we're not tracking yet<br />
<br />
== Previous meetings ==<br />
<br />
* [http://eavesdrop.openstack.org/meetings/nfv/ Meeting logs]<br />
<br />
= Development Efforts =<br />
<br />
== Active Blueprints ==<br />
<br />
* SR-IOV Networking Support<br />
** '''Status: Design review in progress'''<br />
** https://blueprints.launchpad.net/nova/+spec/pci-passthrough-sriov<br />
** Nova design: https://review.openstack.org/#/c/86606/<br />
* Support for NUMA and VCPU topology configuration<br />
** https://blueprints.launchpad.net/nova/+spec/nova-virt-numa-and-vcpu-topology<br />
** Guest vCPU topology configuration<br />
*** '''Status: Design review in progress'''<br />
*** https://blueprints.launchpad.net/nova/+spec/virt-driver-vcpu-topology <br />
*** Nova design: https://review.openstack.org/93510<br />
* Soft affinity support for server groups<br />
** '''Status: Design review in progress'''<br />
** https://blueprints.launchpad.net/nova/+spec/soft-affinity-for-server-group <br />
** Nova design: https://review.openstack.org/#/c/91328/<br />
* Open vSwitch-based Security Groups: Open vSwitch Implementation of FirewallDriver<br />
** https://blueprints.launchpad.net/neutron/+spec/ovs-firewall-driver<br />
** '''Status: Design review in progress'''<br />
** Neutron design: https://review.openstack.org/#/c/89712/<br />
* Framework for Advanced Services in Virtual Machines<br />
** https://blueprints.launchpad.net/neutron/+spec/adv-services-in-vms<br />
* Neutron Services Insertion, Chaining, and Steering<br />
** '''Status: Design review in progress'''<br />
** https://blueprints.launchpad.net/neutron/+spec/neutron-services-insertion-chaining-steering<br />
** Neutron design: https://review.openstack.org/#/c/93524<br />
<br />
== Needed Development Not Yet Started ==</div>Russellbhttps://wiki.openstack.org/w/index.php?title=TelcoWorkingGroup&diff=52322TelcoWorkingGroup2014-05-14T20:07:07Z<p>Russellb: /* Development Efforts */</p>
<hr />
<div><br />
= Weekly NFV sub-team meeting =<br />
'''MEETING TIME: TBD'''<br />
<br />
This meeting is a weekly gathering of developers and operators interested in development activity in support of NFV use cases. We gather requirements to support these use cases and track active development efforts across OpenStack that relate to this area.<br />
<br />
== Agenda for next meeting ==<br />
<br />
Scheduled for (TBD)<br />
<br />
Agenda:<br />
* TBD<br />
<br />
== Previous meetings ==<br />
<br />
* [http://eavesdrop.openstack.org/meetings/nfv/ Meeting logs]<br />
<br />
= Development Efforts =<br />
<br />
== Active Blueprints ==<br />
<br />
* SR-IOV Networking Support<br />
** '''Status: Design review in progress'''<br />
** https://blueprints.launchpad.net/nova/+spec/pci-passthrough-sriov<br />
** Nova design: https://review.openstack.org/#/c/86606/<br />
* Guest vCPU topology configuration<br />
** '''Status: Design review in progress'''<br />
** https://blueprints.launchpad.net/nova/+spec/virt-driver-vcpu-topology <br />
** Nova design: https://review.openstack.org/93510<br />
<br />
== Needed Development Not Yet Started ==</div>Russellbhttps://wiki.openstack.org/w/index.php?title=TelcoWorkingGroup&diff=52313TelcoWorkingGroup2014-05-14T19:30:03Z<p>Russellb: /* Active Blueprints */</p>
<hr />
<div><br />
= Weekly NFV sub-team meeting =<br />
'''MEETING TIME: TBD'''<br />
<br />
This meeting is a weekly gathering of developers and operators interested in development activity in support of NFV use cases. We gather requirements to support these use cases and track active development efforts across OpenStack that relate to this area.<br />
<br />
== Agenda for next meeting ==<br />
<br />
Scheduled for (TBD)<br />
<br />
Agenda:<br />
* TBD<br />
<br />
== Previous meetings ==<br />
<br />
* [http://eavesdrop.openstack.org/meetings/nfv/ Meeting logs]<br />
<br />
= Development Efforts =<br />
<br />
== Active Blueprints ==<br />
<br />
* SR-IOV Networking Support<br />
** https://blueprints.launchpad.net/nova/+spec/pci-passthrough-sriov <br />
** Nova design: https://review.openstack.org/#/c/86606/<br />
<br />
== Needed Development Not Yet Started ==</div>Russellbhttps://wiki.openstack.org/w/index.php?title=TelcoWorkingGroup&diff=52312TelcoWorkingGroup2014-05-14T19:27:18Z<p>Russellb: /* Active Blueprints */</p>
<hr />
<div><br />
= Weekly NFV sub-team meeting =<br />
'''MEETING TIME: TBD'''<br />
<br />
This meeting is a weekly gathering of developers and operators interested in development activity in support of NFV use cases. We gather requirements to support these use cases and track active development efforts across OpenStack that relate to this area.<br />
<br />
== Agenda for next meeting ==<br />
<br />
Scheduled for (TBD)<br />
<br />
Agenda:<br />
* TBD<br />
<br />
== Previous meetings ==<br />
<br />
* [http://eavesdrop.openstack.org/meetings/nfv/ Meeting logs]<br />
<br />
= Development Efforts =<br />
<br />
== Active Blueprints ==<br />
<br />
== Needed Development Not Yet Started ==</div>Russellbhttps://wiki.openstack.org/w/index.php?title=TelcoWorkingGroup&diff=52311TelcoWorkingGroup2014-05-14T19:21:06Z<p>Russellb: Initial page template</p>
<hr />
<div><br />
= Weekly NFV sub-team meeting =<br />
'''MEETING TIME: TBD'''<br />
<br />
This meeting is a weekly gathering of developers and operators interested in development activity in support of NFV use cases. We gather requirements to support these use cases and track active development efforts across OpenStack that relate to this area.<br />
<br />
== Agenda for next meeting ==<br />
<br />
Scheduled for (TBD)<br />
<br />
Agenda:<br />
* TBD<br />
<br />
== Previous meetings ==<br />
<br />
* [http://eavesdrop.openstack.org/meetings/nfv/ Meeting logs]<br />
<br />
= Active Blueprints =</div>Russellbhttps://wiki.openstack.org/w/index.php?title=ReleaseNotes/Icehouse&diff=48831ReleaseNotes/Icehouse2014-04-15T21:21:25Z<p>Russellb: /* Known Issues */</p>
<hr />
<div>= OpenStack 2014.1 (Icehouse) Release Notes =<br />
<br />
<div style="column-count:3;-moz-column-count:3;-webkit-column-count:3"><br />
__TOC__<br />
</div><br />
<br />
== General Upgrade Notes ==<br />
<br />
* Windows packagers should use pbr 0.8 to avoid [https://bugs.launchpad.net/pbr/+bug/1294246 bug 1294246]<br />
* The log-config option has been renamed log-config-append, and will now append any configuration specified, rather than completely overriding any other settings as currently occurs. (https://bugs.launchpad.net/oslo/+bug/1169328, https://bugs.launchpad.net/oslo/+bug/1238349)<br />
* To minimize downtime, OpenStack Networking must be upgraded and neutron-metadata-agent restarted before OpenStack Compute is upgraded. Compute must be able to verify the X-Tenant-ID which is now passed by the neutron-metadata-agent service. (https://bugs.launchpad.net/neutron/+bug/1235450)<br />
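<br />
The X-Tenant-ID check above relies on headers forwarded by neutron-metadata-agent; the related instance-identity check is based on an HMAC over a shared secret. The sketch below illustrates that style of verification only — the function names are invented and this is not Nova's actual code:<br />
<br />
```python
import hashlib
import hmac


def sign_instance_id(shared_secret, instance_id):
    # The metadata proxy signs the instance ID with a secret shared with
    # Nova; Nova recomputes the HMAC and compares it to the forwarded header.
    return hmac.new(shared_secret.encode(), instance_id.encode(),
                    hashlib.sha256).hexdigest()


def signature_valid(shared_secret, instance_id, forwarded_signature):
    expected = sign_instance_id(shared_secret, instance_id)
    # compare_digest gives a constant-time comparison, avoiding a
    # timing side channel on the signature check
    return hmac.compare_digest(expected, forwarded_signature)
```
<br />
A header forwarded by a proxy that does not know the shared secret fails this check, which is why both services must agree on configuration before the upgrade completes.<br />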
<br />
== OpenStack Object Storage (Swift) ==<br />
<br />
=== Key New Features ===<br />
<br />
* '''Discoverable capabilities''': By default (though this can be turned off), a Swift proxy server now responds to requests to /info. The response includes information about the cluster and can be used by clients to determine which features the cluster supports. This means one client can communicate with multiple Swift clusters and take advantage of the features available in each.<br />
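<br />
A client can feature-detect against the /info response roughly as follows. The response layout (top-level keys naming enabled features, plus a "swift" section of core constraints) follows the Swift documentation, but the sample payload here is invented for illustration:<br />
<br />
```python
import json


def supported_features(info_json):
    # /info returns a JSON object whose top-level keys name the
    # features/middleware enabled in the cluster
    info = json.loads(info_json)
    return set(info)


# invented sample response for illustration only
sample = json.dumps({
    "swift": {"max_file_size": 5368709120},
    "tempurl": {},
    "slo": {"min_segment_size": 1048576},
})
features = supported_features(sample)
```
<br />
A client could then enable, say, large-object uploads only when "slo" appears in the feature set.<br />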
<br />
* '''Generic way to persist system metadata''': Swift now supports system-level metadata on accounts and containers. System metadata provides a means to store internal custom metadata with associated Swift resources in a safe and secure fashion without actually having to plumb custom metadata through the core swift servers. The new gatekeeper middleware prevents this system metadata from leaking into the request or being set by a client.<br />
<br />
* '''Account-level ACLs and ACL format v2''': Accounts now have a new privileged header to represent ACLs or any other form of account-level access control. The value of the header is a JSON dictionary string to be interpreted by the auth system. A reference implementation is given in TempAuth. Please see the full docs at http://swift.openstack.org/overview_auth.html<br />
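<br />
Under the TempAuth reference implementation, the account ACL header value is a JSON dictionary grouping users by privilege level. A minimal sketch of building one — the header name and key names follow the TempAuth documentation, but treat the helper itself as illustrative:<br />
<br />
```python
import json


def make_account_acl(admins=(), read_write=(), read_only=()):
    # TempAuth's v2 ACL format is a JSON dict keyed by privilege level;
    # empty groups are simply omitted
    acl = {}
    if admins:
        acl["admin"] = list(admins)
    if read_write:
        acl["read-write"] = list(read_write)
    if read_only:
        acl["read-only"] = list(read_only)
    return json.dumps(acl)


# the value is sent in the privileged account header
headers = {"X-Account-Access-Control": make_account_acl(read_only=["bob"])}
```
<br />
How the dictionary is interpreted is ultimately up to the deployed auth system; TempAuth is only the reference implementation.<br />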
<br />
* '''Object replication ssync (an rsync alternative)''': A Swift storage node can now be configured to use Swift primitives for replication transport instead of rsync.<br />
<br />
* '''Automatic retry on read failures''': If a source object server times out during a read, another object server is tried with a modified range. This means that drive failures during a client request will not be visible to the end-user client.<br />
<br />
* '''Work on upcoming storage policies'''<br />
<br />
=== Known Issues ===<br />
<br />
None known at this time<br />
<br />
=== Upgrade Notes ===<br />
<br />
Read full change log notes at https://github.com/openstack/swift/blob/master/CHANGELOG to see any config changes that would affect upgrades.<br />
<br />
As always, Swift can be upgraded with no downtime. <br />
<br />
== OpenStack Compute (Nova) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== Upgrade Support ====<br />
<br />
* Limited live upgrades are now supported. This enables deployers to upgrade controller infrastructure first, and subsequently upgrade individual compute nodes without requiring downtime of the entire cloud to complete.<br />
<br />
==== Compute Drivers ====<br />
<br />
===== Hyper-V =====<br />
<br />
* Added RDP console support.<br />
<br />
===== Libvirt (KVM) =====<br />
<br />
* The Libvirt compute driver now supports providing modified kernel arguments to booting compute instances. The kernel arguments are retrieved from the <tt>os_command_line</tt> key in the image metadata as stored in Glance, if a value for the key was provided. Otherwise the default kernel arguments are used.<br />
* The Libvirt driver now supports using VirtIO SCSI (virtio-scsi) instead of VirtIO Block (virtio-blk) to provide block device access for instances. Virtio SCSI is a para-virtualized SCSI controller device designed as a future successor to VirtIO Block and aiming to provide improved scalability and performance.<br />
* The Libvirt Compute driver now supports adding a Virtio RNG device to compute instances to provide increased entropy. Virtio RNG is a paravirtual random number generation device. It allows the compute node to provide entropy to the compute instances in order to fill their entropy pool. The default entropy device used is /dev/random; however, a physical hardware RNG device attached to the host can also be used. The use of the Virtio RNG device is enabled using the hw_rng property in the metadata of the image used to build the instance.<br />
* The Libvirt driver now allows instances to be configured with a video driver other than the default (<tt>cirrus</tt>). This allows the specification of different video driver models, different amounts of video RAM, and different numbers of heads. These values are configured by setting the <tt>hw_video_model</tt>, <tt>hw_video_vram</tt>, and <tt>hw_video_head</tt> properties in the image metadata. Currently supported video driver models are <tt>vga</tt>, <tt>cirrus</tt>, <tt>vmvga</tt>, <tt>xen</tt> and <tt>qxl</tt>.<br />
* Watchdog support has been added to the Libvirt driver. The watchdog device used is <tt>i6300esb</tt>. It is enabled by setting the <tt>hw_watchdog_action</tt> property in the image properties or flavor extra specifications (<tt>extra_specs</tt>) to a value other than <tt>disabled</tt>. Supported <tt>hw_watchdog_action</tt> property values, which denote the action for the watchdog device to take in the event of an instance failure, are <tt>poweroff</tt>, <tt>reset</tt>, <tt>pause</tt>, and <tt>none</tt>.<br />
* The High Precision Event Timer (HPET) is now disabled for instances created using the Libvirt driver. The use of this option was found to lead to clock drift in Windows guests when under heavy load.<br />
* The libvirt driver now supports waiting for an event from Neutron during instance boot for better reliability. This requires a suitably new Neutron that supports sending these events, and avoids a race between the instance expecting networking to be ready and the actual plumbing that is required.<br />
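<br />
Several of the Libvirt features above are driven by image metadata properties. A hedged sketch of how such properties might be read and validated — the property names and allowed values come from the notes above, but the helpers are invented and are not Nova's actual code:<br />
<br />
```python
# values listed for hw_watchdog_action in the release notes
VALID_WATCHDOG_ACTIONS = {"disabled", "poweroff", "reset", "pause", "none"}


def guest_kernel_args(image_properties, default_args):
    # os_command_line overrides the default kernel arguments when the
    # image provides it; otherwise the driver default is used
    return image_properties.get("os_command_line") or default_args


def watchdog_action(image_properties):
    # the watchdog device is off unless the property selects an action
    action = image_properties.get("hw_watchdog_action", "disabled")
    if action not in VALID_WATCHDOG_ACTIONS:
        raise ValueError("unsupported hw_watchdog_action: %s" % action)
    return action
```
<br />
In a real deployment these properties are set on the Glance image (or, for the watchdog, optionally in flavor extra specs) rather than passed as plain dictionaries.<br />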
<br />
===== VMware =====<br />
<br />
* The VMware Compute drivers now support the virtual machine diagnostics call. Diagnostics can be retrieved using the "nova diagnostics INSTANCE" command, where INSTANCE is replaced by an instance name or instance identifier.<br />
* The VMware Compute drivers now support booting an instance from an ISO image.<br />
* The VMware Compute drivers now support the aging of cached images.<br />
<br />
===== XenServer =====<br />
<br />
* All XenServer specific configuration items have changed name, and moved to a [xenserver] section in nova.conf. While the old names will still work in this release, the old names are now deprecated, and support for them could well be removed in a future release of Nova.<br />
* Added initial support for [https://blueprints.launchpad.net/nova/+spec/pci-passthrough-xenapi PCI passthrough]<br />
* Maintained group B status through the introduction of the [[XenServer/XenServer_CI|XenServer CI]]<br />
* Improved support for ephemeral disks (including [https://blueprints.launchpad.net/nova/+spec/xenapi-migrate-ephemeral-disks migration] and [https://blueprints.launchpad.net/nova/+spec/xenapi-resize-ephemeral-disks resize up] of multiple ephemeral disks)<br />
* Support for [https://blueprints.launchpad.net/nova/+spec/xenapi-vcpu-pin-set vcpu_pin_set], essential when you pin CPU resources to Dom0<br />
* Numerous performance and stability enhancements<br />
<br />
==== API ====<br />
<br />
* In OpenStack Compute, the <tt>OS-DCF:diskConfig</tt> API attribute is no longer supported in V3 of the nova API.<br />
* The Compute API currently supports both XML and JSON formats. Support for the XML format is now deprecated and will be retired in a future release.<br />
* The Compute API now exposes a mechanism for permanently removing decommissioned compute nodes. Previously these would continue to be listed even where the compute service had been disabled and the system re-provisioned. This functionality is provided by the <tt>ExtendedServicesDelete</tt> API extension.<br />
* Separated the V3 API admin_actions plugin into logically separate plugins so operators can enable subsets of the functionality currently present in the plugin.<br />
* The Compute service now uses the tenant identifier instead of the tenant name when authenticating with OpenStack Networking (Neutron). This improves support for the OpenStack Identity API v3 which allows non-unique tenant names.<br />
* The Compute API now exposes the hypervisor IP address, allowing it to be retrieved by administrators using the <tt>nova hypervisor-show</tt> command.<br />
<br />
==== Scheduler ====<br />
<br />
* The scheduler now includes an initial implementation of a caching scheduler driver. The caching scheduler uses the existing facilities for applying scheduler filters and weights but caches the list of available hosts. When a user request is passed to the caching scheduler it attempts to perform scheduling based on the list of cached hosts, with a view to improving scheduler performance.<br />
* A new scheduler filter, <tt>AggregateImagePropertiesIsolation</tt>, has been introduced. The new filter schedules instances to hosts based on matching namespaced image properties with host aggregate properties. Hosts that do not belong to any host aggregate remain valid scheduling targets for instances based on all images. The new Compute service configuration keys <tt>aggregate_image_properties_isolation_namespace</tt> and <tt>aggregate_image_properties_isolation_separator</tt> are used to determine which image properties are examined by the filter.<br />
* Weight normalization in OpenStack Compute: See: <br />
** https://review.openstack.org/#/c/27160/ Weights are normalized, so there is no need to inflate multipliers artificially. The maximum weight that a weigher will put for a node is 1.0 and the minimum is 0.0. <br />
* The scheduler now supports server groups with two policy types: affinity and anti-affinity. Servers launched into a group are placed on hosts according to the group's predefined policy.<br />
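<br />
The weight-normalization change above means each weigher's raw values are rescaled into the range [0.0, 1.0] before multipliers are applied. A minimal sketch of the idea (illustrative only — in particular, mapping an all-equal list to 0.0 is a choice made here for the example, not a statement about Nova's implementation):<br />
<br />
```python
def normalize(weights):
    # rescale raw weigher output so the best host scores 1.0 and the
    # worst scores 0.0
    lo, hi = min(weights), max(weights)
    if hi == lo:
        # all hosts equal; illustrative choice: everyone gets 0.0
        return [0.0 for _ in weights]
    return [(w - lo) / float(hi - lo) for w in weights]


def weighed_scores(multiplier, raw_weights):
    # multipliers now scale a bounded [0, 1] range, so there is no need
    # to inflate them artificially to outvote weighers that happen to
    # return large raw values
    return [multiplier * w for w in normalize(raw_weights)]
```
<br />
With normalization, a multiplier of 2.0 makes a weigher exactly twice as influential as one with multiplier 1.0, regardless of the raw magnitudes each returns.<br />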
<br />
==== Other Features ====<br />
<br />
* Notifications are now generated upon the creation and deletion of keypairs.<br />
* Notifications are now generated when a Compute host is enabled, disabled, powered on, shut down, rebooted, put into maintenance mode and taken out of maintenance mode.<br />
* Compute services are now able to shut down gracefully by disabling processing of new requests when a service shutdown is requested, while allowing requests already in process to complete before terminating.<br />
* The Compute service determines what action to take when instances are found to be running that were previously marked deleted based on the value of the <tt>running_deleted_instance_action</tt> configuration key. A new <tt>shutdown</tt> value has been added. Using this new value allows administrators to optionally keep instances found in this state for diagnostics while still releasing the runtime resources.<br />
* File injection is now disabled by default in OpenStack Compute. Instead it is recommended that the ConfigDrive and metadata server facilities are used to modify guests at launch. To enable file injection modify the <tt>inject_key</tt> and <tt>inject_partition</tt> configuration keys in <tt>/etc/nova/nova.conf</tt> and restart the Compute services. The file injection mechanism is likely to be disabled in a future release.<br />
* A number of changes have been made to the expected format <tt>/etc/nova/nova.conf</tt> configuration file with a view to ensuring that all configuration groups in the file use descriptive names. A number of driver specific flags, including those for the Libvirt driver, have also been moved to their own option groups.<br />
<br />
=== Known Issues ===<br />
* OpenStack Compute has some features that use newer API versions from other projects, but the following are the only API versions tested in Icehouse:<br />
** Keystone v2<br />
** Cinder v1<br />
** Glance v1<br />
<br />
=== Upgrade Notes ===<br />
<br />
* Scheduler and weight normalization (https://review.openstack.org/#/c/27160/): In previous releases the Compute and Cells scheduler used raw weights (i.e. the weighers returned any value, and that value was used directly by the weighing process).<br />
** If you were using several weighers for Compute:<br />
*** If several weighers were used (in previous releases Nova shipped only one weigher for compute), your multipliers may have been inflated artificially so that an important weigher would prevail over weighers that returned large raw values. Check your weighers and take into account that the maximum and minimum weights for a host are now always <tt>1.0</tt> and <tt>0.0</tt>.<br />
** If you are using cells:<br />
*** <tt>nova.cells.weights.mute_child.MuteChild</tt>: The weigher returned the value <tt>mute_weight_value</tt> as the weight assigned to a child that didn't update its capabilities in a while. It can still be used, but will have no effect on the final weight that will be computed by the weighing process, that will be <tt>1.0</tt>. If you are using this weigher to mute a child cell you need to adjust the <tt>mute_weight_multiplier</tt>.<br />
*** <tt>nova.cells.weights.weight_offset.WeightOffsetWeigher</tt> introduces a new configuration option <tt>offset_weight_multiplier</tt>. This new option has to be adjusted. In previous releases, the weigher returned the value of the configured offset for each of the cells in the weighing process. While the winner of that process will still be the same, it will now get a weight of <tt>1.0</tt>. If you were using this weigher and relying on its value to make it prevail over other weighers, you need to adjust its multiplier accordingly.<br />
* An early Docker compute driver was included in the Havana release. This driver has been moved from Nova into its own repository. The new location is http://git.openstack.org/cgit/stackforge/nova-docker<br />
* https://review.openstack.org/50668 - The <tt>compute_api_class</tt> configuration option has been removed.<br />
* https://review.openstack.org/#/c/54290/ - The following deprecated configuration option aliases have been removed in favor of their new names:<br />
** <tt>service_quantum_metadata_proxy</tt><br />
** <tt>quantum_metadata_proxy_shared_secret</tt><br />
** <tt>use_quantum_default_nets</tt><br />
** <tt>quantum_default_tenant_id</tt><br />
** <tt>vpn_instance_type</tt><br />
** <tt>default_instance_type</tt><br />
** <tt>quantum_url</tt><br />
** <tt>quantum_url_timeout</tt><br />
** <tt>quantum_admin_username</tt><br />
** <tt>quantum_admin_password</tt><br />
** <tt>quantum_admin_tenant_name</tt><br />
** <tt>quantum_region_name</tt><br />
** <tt>quantum_admin_auth_url</tt><br />
** <tt>quantum_api_insecure</tt><br />
** <tt>quantum_auth_strategy</tt><br />
** <tt>quantum_ovs_bridge</tt><br />
** <tt>quantum_extension_sync_interval</tt><br />
** <tt>vmwareapi_host_ip</tt><br />
** <tt>vmwareapi_host_username</tt><br />
** <tt>vmwareapi_host_password</tt><br />
** <tt>vmwareapi_cluster_name</tt><br />
** <tt>vmwareapi_task_poll_interval</tt><br />
** <tt>vmwareapi_api_retry_count</tt><br />
** <tt>vnc_port</tt><br />
** <tt>vnc_port_total</tt><br />
** <tt>use_linked_clone</tt><br />
** <tt>vmwareapi_vlan_interface</tt><br />
** <tt>vmwareapi_wsdl_loc</tt><br />
* The PowerVM driver has been removed: https://review.openstack.org/#/c/57774/<br />
* The keystone_authtoken defaults changed in nova.conf: https://review.openstack.org/#/c/62815/<br />
* libvirt lvm names changed from using instance_name_template to instance uuid (https://review.openstack.org/#/c/76968). Possible manual cleanup required if using a non default instance_name_template.<br />
* rbd disk names changed from using instance_name_template to instance uuid. Manual cleanup required of old virtual disks after the transition. (TBD find review)<br />
* Icehouse brings libguestfs as a requirement. Installing icehouse dependencies on a system currently running havana may cause the havana node to begin using libguestfs and break unexpectedly. It is recommended that libvirt_inject_partition=-2 be set on havana nodes prior to starting an upgrade of packages on the system if the nova packages will be updated last.<br />
* Creating a private flavor now adds access to the tenant automatically. This was the documented behavior in Havana, but the actual implementation in Havana and previous versions of Nova did not add the tenant automatically to private flavors.<br />
* Nova previously included a nova.conf.sample. This file was automatically generated and is no longer included directly. If you are packaging Nova and wish to include the sample config file, see etc/nova/README.nova.conf for instructions on how to generate the file at build time.<br />
* Nova now defaults to requiring an event from Neutron when booting libvirt guests. If you upgrade Nova before Neutron, you must disable this feature in Nova until Neutron supports it by setting '''vif_plugging_is_fatal=False''' and '''vif_plugging_timeout=0'''. Recommended order is: Nova (with this disabled), Neutron (with the notifications enabled), and then enable vif_plugging_is_fatal=True with the default value of vif_plugging_timeout.<br />
* Nova supports a limited live upgrade model for the compute nodes in Icehouse. To do this, upgrade controller infrastructure (everything except nova-compute) first, but set the '''[upgrade_levels]/compute=icehouse-compat''' option. This will enable Icehouse controller services to talk to Havana compute services. Upgrades of individual compute nodes can then proceed normally. When all the computes are upgraded, unset the compute version option to restore the default and restart the controller services.<br />
* The following configuration options are marked as deprecated in this release. See <tt>nova.conf.sample</tt> for their replacements. <tt>[GROUP]/option</tt><br />
** <tt>[DEFAULT]/rabbit_durable_queues</tt><br />
** <tt>[rpc_notifier2]/topics</tt><br />
** <tt>[DEFAULT]/log_config</tt><br />
** <tt>[DEFAULT]/logfile</tt><br />
** <tt>[DEFAULT]/logdir</tt><br />
** <tt>[DEFAULT]/base_dir_name</tt><br />
** <tt>[DEFAULT]/instance_type_extra_specs</tt><br />
** <tt>[DEFAULT]/db_backend</tt><br />
** <tt>[DEFAULT]/sql_connection</tt><br />
** <tt>[DATABASE]/sql_connection</tt><br />
** <tt>[sql]/connection</tt><br />
** <tt>[DEFAULT]/sql_idle_timeout</tt><br />
** <tt>[DATABASE]/sql_idle_timeout</tt><br />
** <tt>[sql]/idle_timeout</tt><br />
** <tt>[DEFAULT]/sql_min_pool_size</tt><br />
** <tt>[DATABASE]/sql_min_pool_size</tt><br />
** <tt>[DEFAULT]/sql_max_pool_size</tt><br />
** <tt>[DATABASE]/sql_max_pool_size</tt><br />
** <tt>[DEFAULT]/sql_max_retries</tt><br />
** <tt>[DATABASE]/sql_max_retries</tt><br />
** <tt>[DEFAULT]/sql_retry_interval</tt><br />
** <tt>[DATABASE]/reconnect_interval</tt><br />
** <tt>[DEFAULT]/sql_max_overflow</tt><br />
** <tt>[DATABASE]/sqlalchemy_max_overflow</tt><br />
** <tt>[DEFAULT]/sql_connection_debug</tt><br />
** <tt>[DEFAULT]/sql_connection_trace</tt><br />
** <tt>[DATABASE]/sqlalchemy_pool_timeout</tt><br />
** <tt>[DEFAULT]/memcache_servers</tt><br />
** <tt>[DEFAULT]/libvirt_type</tt><br />
** <tt>[DEFAULT]/libvirt_uri</tt><br />
** <tt>[DEFAULT]/libvirt_inject_password</tt><br />
** <tt>[DEFAULT]/libvirt_inject_key</tt><br />
** <tt>[DEFAULT]/libvirt_inject_partition</tt><br />
** <tt>[DEFAULT]/libvirt_vif_driver</tt><br />
** <tt>[DEFAULT]/libvirt_volume_drivers</tt><br />
** <tt>[DEFAULT]/libvirt_disk_prefix</tt><br />
** <tt>[DEFAULT]/libvirt_wait_soft_reboot_seconds</tt><br />
** <tt>[DEFAULT]/libvirt_cpu_mode</tt><br />
** <tt>[DEFAULT]/libvirt_cpu_model</tt><br />
** <tt>[DEFAULT]/libvirt_snapshots_directory</tt><br />
** <tt>[DEFAULT]/libvirt_images_type</tt><br />
** <tt>[DEFAULT]/libvirt_images_volume_group</tt><br />
** <tt>[DEFAULT]/libvirt_sparse_logical_volumes</tt><br />
** <tt>[DEFAULT]/libvirt_images_rbd_pool</tt><br />
** <tt>[DEFAULT]/libvirt_images_rbd_ceph_conf</tt><br />
** <tt>[DEFAULT]/libvirt_snapshot_compression</tt><br />
** <tt>[DEFAULT]/libvirt_use_virtio_for_bridges</tt><br />
** <tt>[DEFAULT]/libvirt_iscsi_use_multipath</tt><br />
** <tt>[DEFAULT]/libvirt_iser_use_multipath</tt><br />
** <tt>[DEFAULT]/matchmaker_ringfile</tt><br />
** <tt>[DEFAULT]/agent_timeout</tt><br />
** <tt>[DEFAULT]/agent_version_timeout</tt><br />
** <tt>[DEFAULT]/agent_resetnetwork_timeout</tt><br />
** <tt>[DEFAULT]/xenapi_agent_path</tt><br />
** <tt>[DEFAULT]/xenapi_disable_agent</tt><br />
** <tt>[DEFAULT]/xenapi_use_agent_default</tt><br />
** <tt>[DEFAULT]/xenapi_login_timeout</tt><br />
** <tt>[DEFAULT]/xenapi_connection_concurrent</tt><br />
** <tt>[DEFAULT]/xenapi_connection_url</tt><br />
** <tt>[DEFAULT]/xenapi_connection_username</tt><br />
** <tt>[DEFAULT]/xenapi_connection_password</tt><br />
** <tt>[DEFAULT]/xenapi_vhd_coalesce_poll_interval</tt><br />
** <tt>[DEFAULT]/xenapi_check_host</tt><br />
** <tt>[DEFAULT]/xenapi_vhd_coalesce_max_attempts</tt><br />
** <tt>[DEFAULT]/xenapi_sr_base_path</tt><br />
** <tt>[DEFAULT]/target_host</tt><br />
** <tt>[DEFAULT]/target_port</tt><br />
** <tt>[DEFAULT]/iqn_prefix</tt><br />
** <tt>[DEFAULT]/xenapi_remap_vbd_dev</tt><br />
** <tt>[DEFAULT]/xenapi_remap_vbd_dev_prefix</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_base_url</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_seed_chance</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_seed_duration</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_max_last_accessed</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_listen_port_start</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_listen_port_end</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_download_stall_cutoff</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_max_seeder_processes_per_host</tt><br />
** <tt>[DEFAULT]/use_join_force</tt><br />
** <tt>[DEFAULT]/xenapi_ovs_integration_bridge</tt><br />
** <tt>[DEFAULT]/cache_images</tt><br />
** <tt>[DEFAULT]/xenapi_image_compression_level</tt><br />
** <tt>[DEFAULT]/default_os_type</tt><br />
** <tt>[DEFAULT]/block_device_creation_timeout</tt><br />
** <tt>[DEFAULT]/max_kernel_ramdisk_size</tt><br />
** <tt>[DEFAULT]/sr_matching_filter</tt><br />
** <tt>[DEFAULT]/xenapi_sparse_copy</tt><br />
** <tt>[DEFAULT]/xenapi_num_vbd_unplug_retries</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_images</tt><br />
** <tt>[DEFAULT]/xenapi_ipxe_network_name</tt><br />
** <tt>[DEFAULT]/xenapi_ipxe_boot_menu_url</tt><br />
** <tt>[DEFAULT]/xenapi_ipxe_mkisofs_cmd</tt><br />
** <tt>[DEFAULT]/xenapi_running_timeout</tt><br />
** <tt>[DEFAULT]/xenapi_vif_driver</tt><br />
** <tt>[DEFAULT]/xenapi_image_upload_handler</tt><br />
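<br />
The live-upgrade and Neutron-event notes above hinge on a handful of nova.conf settings. A minimal sketch of staging them with Python's configparser — the option names and values come from the notes above, but the file handling here is illustrative:<br />
<br />
```python
import configparser
import io

conf = configparser.RawConfigParser()
# Pin controller RPC to a Havana-compatible level during a live upgrade.
conf.add_section("upgrade_levels")
conf.set("upgrade_levels", "compute", "icehouse-compat")
# If Nova is upgraded before Neutron, disable fatal VIF-plugging events
# until Neutron can send the notifications.
conf.set("DEFAULT", "vif_plugging_is_fatal", "False")
conf.set("DEFAULT", "vif_plugging_timeout", "0")

# render the ini text that would be written to nova.conf
buf = io.StringIO()
conf.write(buf)
rendered = buf.getvalue()
```
<br />
Once every compute node runs Icehouse, the <tt>[upgrade_levels]/compute</tt> pin is removed so the services negotiate the current version again.<br />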
<br />
== OpenStack Image Service (Glance) ==<br />
=== Key New Features ===<br />
* The calculation of storage quotas has been improved. Deleted images are now excluded from the count (https://bugs.launchpad.net/glance/+bug/1261738), which may affect your existing usage figures.<br />
* Glance has moved to using 0-based indices for location entries, to be in line with JSON-pointer RFC6901 (https://bugs.launchpad.net/glance/+bug/1282437)<br />
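<br />
RFC 6901 JSON pointers use 0-based array indices, so the first image location is now addressed as /locations/0. A minimal resolver sketch (the image document below is invented for illustration):<br />
<br />
```python
def resolve_pointer(document, pointer):
    # Minimal RFC 6901 walk: split on "/", drop the leading empty token,
    # unescape ~1 -> "/" and ~0 -> "~", and treat tokens as list indices
    # when the current object is a list.
    obj = document
    for token in pointer.split("/")[1:]:
        token = token.replace("~1", "/").replace("~0", "~")
        obj = obj[int(token)] if isinstance(obj, list) else obj[token]
    return obj


# invented example image record
image = {"locations": [{"url": "rbd://pool/img"}, {"url": "file:///img"}]}
first = resolve_pointer(image, "/locations/0")
```
<br />
Clients that previously assumed 1-based location indices need to shift their pointers down by one.<br />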
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Dashboard (Horizon) ==<br />
<br />
=== Key New Features ===<br />
* Thanks to the [[I18nTeam]], Horizon is now available in Hindi, German and Serbian. Translations for Australian English, British English, Dutch, French, Japanese, Korean, Polish, Portuguese, Simplified and Traditional Chinese, Spanish and Russian have also been updated.<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Identity (Keystone) ==<br />
<br />
=== Key New Features ===<br />
<br />
* New v3 API features<br />
** <code>/v3/OS-FEDERATION/</code> allows Keystone to consume federated authentication via [https://shibboleth.net/ Shibboleth] for multiple Identity Providers, and mapping federated attributes into OpenStack group-based role assignments (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-federation-ext.md documentation]).<br />
** <code>POST /v3/users/{user_id}/password</code> allows API users to update their own passwords (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3.md#change-user-password-post-usersuser_idpassword documentation]).<br />
** <code>GET v3/auth/token?nocatalog</code> allows API users to opt-out of receiving the service catalog when performing online token validation (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3.md#validate-token-get-authtokensnocatalog documentation]).<br />
** <code>/v3/regions</code> provides a public interface for describing multi-region deployments (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3.md#regions-v3regions documentation]).<br />
** <code>/v3/OS-SIMPLECERT/</code> now publishes the certificates used for PKI token validation (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-simple-certs-ext.md documentation]).<br />
** <code>/v3/OS-TRUST/trusts</code> is now capable of providing limited-use delegation via the <code>remaining_uses</code> attribute of [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-trust-ext.md trusts].<br />
* The assignments backend (the source of authorization data) has now been completely separated from the identity backend (the source of authentication data). This means that you can now store your deployment's identity data in LDAP and your authorization data in SQL, for example.<br />
* The token KVS driver is now capable of writing to persistent Key-Value stores such as Redis, Cassandra, or MongoDB.<br />
* Keystone's driver interfaces are now implemented as Abstract Base Classes (ABCs) to make it easier to track compatibility of custom driver implementations across releases.<br />
* Keystone's default <code>etc/policy.json</code> has been rewritten in an easier to read format.<br />
* [http://docs.openstack.org/developer/keystone/event_notifications.html Notifications] are now emitted in response to create, update and delete events on roles, groups, and trusts.<br />
* Custom extensions and driver implementations may now subscribe to internal-only event notifications, including ''disable'' events (which are only exposed externally as part of ''update'' events).<br />
* Keystone now emits [http://www.dmtf.org/standards/cadf Cloud Auditing Data Federation (CADF)] event notifications in response to authentication events.<br />
* [https://review.openstack.org/#/c/50362/ Additional plugins] are provided to handle external authentication via <code>REMOTE_USER</code> with respect to single-domain versus multi-domain deployments.<br />
* <code>policy.json</code> can now perform enforcement on the target domain in a domain-aware operation using, for example, <code>%(target.{entity}.domain_id)s</code>.<br />
* The LDAP driver for the assignment backend now supports group-based role assignment operations.<br />
* Keystone now publishes token revocation ''events'' in addition to providing continued support for token revocation ''lists''. Token revocation events are designed to incur much less overhead (when compared to token revocation lists) and will enable Keystone to eliminate token persistence during the Juno release.<br />
* Deployers can now define arbitrary limits on the size of collections in API responses (for example, <code>GET /v3/users</code> might be configured to return only 100 users, rather than 10,000). Clients will be informed when truncation has occurred.<br />
* Lazy translation has been enabled, translating responses according to the requested <code>Accept-Language</code> header.<br />
* Keystone now emits i18n-ready log messages.<br />
* Collection filtering is now performed in the driver layer, where possible, for improved performance.<br />
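As an illustration of the self-service password change feature above, the request body for <code>POST /v3/users/{user_id}/password</code> can be sketched as follows (field names follow the linked v3 API documentation; the values are placeholders and no request is actually sent here):<br />

```python
import json

# Sketch of the body for POST /v3/users/{user_id}/password.
# "original_password" and "password" are the fields named in the
# v3 API documentation; the values below are placeholders.
body = {
    "user": {
        "original_password": "old-secret",
        "password": "new-secret",
    }
}
payload = json.dumps(body)
print(payload)
```

The resulting JSON string would be sent with a valid <code>X-Auth-Token</code> header for the user whose password is being changed.<br />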
<br />
=== Known Issues ===<br />
<br />
* [https://bugs.launchpad.net/keystone/+bug/1291157 Bug 1291157]: If using the <code>OS-FEDERATION</code> extension, deleting an Identity Provider or Protocol does ''not'' result in previously-issued tokens being revoked. This will not be fixed in the ''stable/icehouse'' branch.<br />
<br />
=== Upgrade Notes ===<br />
<br />
* The v2 API has been prepared for deprecation, but remains stable in the Icehouse release. It may be formally deprecated during the Juno release pending widespread support for the v3 API.<br />
* Backwards compatibility for <code>keystone.middleware.auth_token</code> has been removed. The <code>auth_token</code> middleware module is no longer provided by Keystone itself and must be imported from <code>keystoneclient.middleware.auth_token</code> instead.<br />
* The <code>s3_token</code> middleware module is no longer provided by Keystone itself, and must be imported from <code>keystoneclient.middleware.s3_token</code> instead. Backwards compatibility for <code>keystone.middleware.s3_token</code> will be removed in Juno.<br />
* The default token duration has been reduced from 24 hours to just 1 hour. This effectively reduces the number of tokens that must be persisted at any one time, and (for PKI deployments) reduces the overhead of the token revocation list.<br />
* <code>keystone.contrib.access.core.AccessLogMiddleware</code> has been deprecated in favor of either the eventlet debug access log or Apache httpd access log and may be removed in the K release.<br />
* <code>keystone.contrib.stats.core.StatsMiddleware</code> has been deprecated in favor of external tooling and may be removed in the K release.<br />
* <code>keystone.middleware.XmlBodyMiddleware</code> has been deprecated in favor of support for "application/json" only and may be removed in the K release.<br />
* A v3 API version of the EC2 Credential system has been implemented. To use this, the following section needs to be added to <code>keystone-paste.ini</code>:<br />
[filter:ec2_extension_v3]<br />
paste.filter_factory = keystone.contrib.ec2:Ec2ExtensionV3.factory<br />
... and <code>ec2_extension_v3</code> needs to be added to the pipeline variable in the <code>[pipeline:api_v3]</code> section of <code>keystone-paste.ini</code>.<br />
* <code>etc/policy.json</code> has been updated to provide rules for the new v3 EC2 Credential CRUD operations, as shown in the updated sample <code>policy.json</code> and <code>policy.v3cloudsample.json</code><br />
* Migration numbers 38, 39 and 40 move all role assignment data into a single, unified table with first-class columns for role references.<br />
* TODO: deprecations for the move to oslo-incubator db<br />
* A new configuration option, <code>mutable_domain_id</code>, defaults to <code>false</code> to harden security around domain-level administration boundaries. This may break API functionality that you depended on in Havana. If so, set this value to <code>true</code> and ''please'' voice your use case to the Keystone community.<br />
* TODO: any non-ideal default values that will be changed in the future<br />
* Keystone's move to ''oslo.messaging'' for emitting event notifications has resulted in new configuration options which are potentially incompatible with those from Havana (TODO: enumerate old/new config values)<br />
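For illustration, after the EC2 v3 change above, <code>keystone-paste.ini</code> might look like the following sketch; the other filters shown in the pipeline are placeholders standing in for whatever your existing pipeline already lists:<br />

```ini
[filter:ec2_extension_v3]
paste.filter_factory = keystone.contrib.ec2:Ec2ExtensionV3.factory

[pipeline:api_v3]
# Keep your existing filters as-is; add ec2_extension_v3 before the final app.
pipeline = sizelimit url_normalize token_auth json_body ec2_extension_v3 service_v3
```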
<br />
== OpenStack Network Service (Neutron) ==<br />
=== Key New Features ===<br />
During the Icehouse cycle, the team focused on stability and testing of the Neutron codebase.<br />
<br />
==== New Drivers/Plugins ====<br />
* Nuage<br />
* OneConvergence<br />
* OpenDaylight<br />
<br />
==== New Load Balancing as a Service Drivers ====<br />
* Embrane<br />
* NetScaler<br />
* Radware<br />
<br />
==== New VPN Driver ====<br />
* Cisco CSR<br />
<br />
=== Known Issues ===<br />
None yet.<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Block Storage (Cinder) ==<br />
=== Key New Features ===<br />
* Ability to change the type of an existing volume (retype)<br />
* Add volume metadata support to the Cinder Backup Object<br />
* Implement Multiple API workers<br />
* Add ability to delete Quota<br />
* Add ability to import/export backups into Cinder<br />
* Added Fibre Channel Zone manager for automated FC zoning during volume attach/detach<br />
<br />
=== New Backend Drivers/Plugins ===<br />
* EMC SMI-S FC Driver <br />
* EMC VNX iSCSI Direct Driver<br />
* HP MSA 2040<br />
* IBM SONAS and Storwize V7000 Unified Storage Systems<br />
<br />
=== Known Issues ===<br />
* Reconnect on failure for multiple servers always connects to the first server (Bug: #1261631)<br />
* Storwize/SVC driver crashes when checking volume copy status (Bug: #1304115)<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Telemetry (Ceilometer) ==<br />
=== Key New Features ===<br />
* API additions<br />
** arbitrarily complex combinations of query constraints for meters, samples and alarms<br />
** capabilities API for discovery of storage driver specific features<br />
** selectable aggregates for statistics, including new cardinality and standard deviation functions <br />
** direct access to samples decoupled from a specific meter<br />
** events API, in the style of [https://github.com/rackerlabs/stacktach StackTach]<br />
<br />
* Alarming improvements<br />
** time-constrained alarms, providing flexibility to set the bar higher or lower depending on time of day or day of the week<br />
** exclusion of weak data points with anomalously low sample counts <br />
** derived rate-based meters for disk & network, more suited to threshold-oriented alarming <br />
<br />
* Integration touch-points<br />
** split collector into notification agent solely responsible for consuming external notifications<br />
** redesign of pipeline configuration for pluggable resource discovery<br />
** configurable persistence of raw notification payloads, in the style of [https://github.com/rackerlabs/stacktach StackTach]<br />
<br />
* Storage drivers<br />
** approaching feature parity in HBase & SQLAlchemy & DB2 drivers<br />
** optimization of resource queries<br />
** HBase: add Alarm support<br />
<br />
* New sources of metrics<br />
** Neutron north-bound API on SDN controller<br />
** VMware vCenter Server API<br />
** SNMP daemons on baremetal hosts<br />
** OpenDaylight REST APIs<br />
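The new cardinality and standard-deviation aggregates mentioned above can be understood with a small local sketch (the sample values are made up, and the real aggregates are computed server-side by the storage driver, not client-side):<br />

```python
import statistics

# Toy sample set standing in for a meter's samples.
samples = [1.0, 2.0, 2.0, 4.0]

# "cardinality" counts distinct values; "stddev" is the sample
# standard deviation -- the two new selectable aggregate functions.
cardinality = len(set(samples))
stddev = statistics.stdev(samples)
print(cardinality, round(stddev, 3))
```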
<br />
=== Known Issues ===<br />
* SQLAlchemy storage driver is problematic with a scaled out collector service when run against PostgreSQL https://bugs.launchpad.net/ceilometer/+bug/1305332<br />
* HBase storage driver reports truncated list of meters: https://bugs.launchpad.net/ceilometer/+bug/1288284<br />
* HBase storage driver doesn't work with HappyBase version 0.7 <br />
* excessive load on nova-api service induced by compute agent: https://bugs.launchpad.net/ceilometer/+bug/1297528<br />
<br />
=== Upgrade Notes ===<br />
* the pre-existing collector service has been augmented with a new notification agent that must also be started up post-upgrade<br />
* MongoDB storage driver now requires the MongoDB installation to be version 2.4 or greater (the lower bound for Havana was 2.2), see [http://docs.mongodb.org/manual/release-notes/2.4-upgrade upgrade instructions].<br />
* The force detach API call is now admin-only, rather than the previous policy default of admin and owner. Force detach requires clean-up work by the admin, who would otherwise not know when an owner performed the operation.<br />
<br />
== OpenStack Orchestration (Heat) ==<br />
=== Key New Features ===<br />
* '''HOT templates''': The [http://docs.openstack.org/developer/heat/template_guide/hot_spec.html HOT template format] is now supported as the recommended format for authoring heat templates.<br />
* '''OpenStack resources''': There is now sufficient coverage of resource types to port any template to [http://docs.openstack.org/developer/heat/template_guide/openstack.html native OpenStack resources]<br />
* '''Software configuration''': New API and resources to allow software configuration to be performed using a variety of techniques and tools<br />
* '''Non-admin users''': It is now possible to launch any stack without requiring admin user credentials. See the upgrade notes on enabling this by configuring stack domain users.<br />
* '''Operator API''': Cloud operators now have a dedicated admin API to perform operations on all stacks<br />
* '''Autoscaling resources''': [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::AutoScalingGroup OS::Heat::AutoScalingGroup] and [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::ScalingPolicy OS::Heat::ScalingPolicy] now allow the autoscaling of any arbitrary collection of resources<br />
* '''Notifications''': Heat now sends RPC notifications for events such as stack state changes and autoscaling triggers<br />
* '''Heat engine scaling''': It is now possible to share orchestration load across multiple instances of heat-engine. Locking is coordinated by a pluggable distributed lock, with a SQL based default lock plugin.<br />
* '''File inclusion with get_file''': The [http://docs.openstack.org/developer/heat/template_guide/hot_spec.html#intrinsic-functions intrinsic function] get_file is used by python-heatclient and heat to allow files to be attached to stack create and update actions, which is useful for representing configuration files and nested stacks in separate files.<br />
* '''Cloud-init resources''': The [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::CloudConfig OS::Heat::CloudConfig] and [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::MultipartMime OS::Heat::MultipartMime] resources allow cloud-init configuration to be defined in templates and composed into instance user data.<br />
* '''Stack abandon and adopt''': It is now possible to abandon a stack, which deletes the stack from Heat without deleting the actual OpenStack resources. The resulting abandon data can also be used to adopt a stack, which creates a new stack based on already existing OpenStack resources. Adopt should be considered an experimental feature for the Icehouse release of Heat.<br />
* '''Stack preview''': The stack-preview action returns a list of resources which are expected to be created if a stack is created with the provided template<br />
* '''New resources''': The following new resources are implemented in this release:<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::CloudConfig OS::Heat::CloudConfig]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::MultipartMime OS::Heat::MultipartMime]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::SoftwareConfig OS::Heat::SoftwareConfig]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::SoftwareDeployment OS::Heat::SoftwareDeployment]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::StructuredConfig OS::Heat::StructuredConfig]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::StructuredDeployment OS::Heat::StructuredDeployment]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::RandomString OS::Heat::RandomString]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::ResourceGroup OS::Heat::ResourceGroup]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::AutoScalingGroup OS::Heat::AutoScalingGroup]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::ScalingPolicy OS::Heat::ScalingPolicy]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::SecurityGroup OS::Neutron::SecurityGroup]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::MeteringLabel OS::Neutron::MeteringLabel]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::MeteringRule OS::Neutron::MeteringRule]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::ProviderNet OS::Neutron::ProviderNet]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::NetworkGateway OS::Neutron::NetworkGateway]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::PoolMember OS::Neutron::PoolMember]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Nova::KeyPair OS::Nova::KeyPair]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Nova::FloatingIP OS::Nova::FloatingIP]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Nova::FloatingIPAssociation OS::Nova::FloatingIPAssociation]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Trove::Instance OS::Trove::Instance]<br />
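A minimal HOT fragment tying some of the features above together: <code>get_file</code> attaching a local script to an OS::Heat::SoftwareConfig resource (the file name <code>setup.sh</code> is a placeholder):<br />

```yaml
heat_template_version: 2013-05-23

resources:
  config:
    type: OS::Heat::SoftwareConfig
    properties:
      # python-heatclient reads setup.sh from disk and attaches its
      # contents to the stack create/update request.
      config: {get_file: setup.sh}
```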
<br />
=== Known Issues ===<br />
* Any error during a stack-update operation (for example from a transient cloud error, a heat bug, or a user template error) can lead to stacks going into an unrecoverable error state. Currently it is only recommended to attempt stack updates if it is practical to recover from errors by deleting and recreating the stack.<br />
* The new stack-adopt operation should be considered an experimental feature<br />
* CFN API returns HTTP status code 500 on all errors ([https://bugs.launchpad.net/heat/+bug/1291079 bug 1291079])<br />
* Deleting stacks containing volume attachments may need to be attempted multiple times due to a volume detachment race ([https://bugs.launchpad.net/heat/+bug/1298350 bug 1298350])<br />
<br />
=== Upgrade Notes ===<br />
Please read the general notes on [https://wiki.openstack.org/wiki/Security/Icehouse/Heat Heat's security model].<br />
<br />
==== Deferred authentication method ====<br />
The default <code>deferred_auth_method</code> of <code>password</code> is deprecated as of Icehouse, so although it is still the default, deployers are strongly encouraged to move to using <code>deferred_auth_method=trusts</code>, which is planned to become the default for Juno. This model has the following benefits:<br />
* It avoids storing user credentials in the heat database<br />
* It removes the need to provide a password as well as a token on stack create<br />
* It limits the actions the heat service user can perform on a user's behalf.<br />
<br />
To enable trusts for deferred operations:<br />
* Ensure the Keystone service that Heat is configured to use has the [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-trust-ext.md OS-TRUST extension] enabled<br />
* Set <code>deferred_auth_method = trusts</code> in <code>/etc/heat/heat.conf</code><br />
* Optionally specify the roles to be delegated to the heat service user via <code>trusts_delegated_roles</code> in <code>heat.conf</code>. This defaults to <code>heat_stack_owner</code>, which is the role referred to in the following instructions; you may wish to modify this list of roles to suit your local RBAC policies<br />
* Ensure the role(s) to be delegated exist, e.g. that <code>heat_stack_owner</code> appears in the output of <code>keystone role-list</code><br />
* All users creating heat stacks should possess this role in the project where they are creating the stack. A trust will be created by heat on stack creation between the stack owner (the user creating the stack) and the heat service user, delegating the <code>heat_stack_owner</code> role to the heat service user for the lifetime of the stack.<br />
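Putting the steps above together, the relevant <code>/etc/heat/heat.conf</code> settings would look roughly like this sketch (option names as given above; adjust the role list to suit your local RBAC policies):<br />

```ini
[DEFAULT]
# Use Keystone trusts instead of stored passwords for deferred operations.
deferred_auth_method = trusts
# Role(s) delegated to the heat service user for the stack's lifetime.
trusts_delegated_roles = heat_stack_owner
```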
<br />
==== Stack domain users ====<br />
(shardy TODO)<br />
<br />
== OpenStack Database service (Trove) ==<br />
=== Key New Features ===<br />
* User/Schema management<br />
** MySQL feature allowing users to perform CRUD management of users and schemas<br />
* Flavor / Cinder Volume resizes<br />
** Resize up/down the flavor that defines the instance<br />
** Resize up only the optional Cinder Volume size<br />
* Multiple datastore support<br />
** Full feature support for MySQL and Percona<br />
** Experimental (not full feature) support for MongoDB, Redis, Cassandra, and Couchbase<br />
* Configuration groups<br />
** Define a set of configuration options to attach to new or existing instances<br />
* Backups and Restore<br />
** Executes native backup software on a datastore and streams the output to a Swift container<br />
** Full and incremental backups<br />
* Optional DNS support via designate<br />
** Flag to define whether to provision DNS for an instance<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
* Trove Conductor is a new daemon to proxy database communication from guests. It needs to be installed and running.<br />
* The new Datastores feature requires operators to define (or remove) the datastores their installation will support<br />
* The new Configuration Groups feature allows operators to define a subset of configuration options for a particular datastore<br />
<br />
== OpenStack Documentation ==<br />
<br />
=== Key New Features ===<br />
<br />
* New manual: Command-Line Interface Reference<br />
* The API reference has been updated and now includes PDF files as well<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
None yet</div>

Russellb, Meetings/Nova, 2014-04-11T15:38:59Z:
https://wiki.openstack.org/w/index.php?title=Meetings/Nova&diff=48413
<p>Russellb: /* Agenda for next meeting */</p>
<hr />
<div><br />
= Weekly Nova team meeting =<br />
'''MEETING TIME: Thursdays alternating 14:00 UTC (#openstack-meeting-alt) and 21:00 UTC (#openstack-meeting)'''<br />
<br />
This meeting is a weekly gathering of developers working on [[Nova|OpenStack Compute (Nova)]]. We cover topics such as release planning and status, bugs, reviews, and other current topics worthy of real-time discussion.<br />
<br />
== Agenda for next meeting ==<br />
<br />
Scheduled for April 17, 2014, 21:00 UTC (http://www.timeanddate.com/worldclock/fixedtime.html?iso=20140417T140030)<br />
* icehouse<br />
* bugs<br />
* blueprints<br />
* open discussion<br />
<br />
=== Sub-teams ===<br />
<br />
There are also some Nova subteam meetings. See [[Teams#Nova_subteams]] for details.<br />
<br />
== Previous meetings ==<br />
<br />
* [http://eavesdrop.openstack.org/meetings/nova/ All other meetings are here]<br />
* [http://eavesdrop.openstack.org/meetings/openstack-meeting/2012/openstack-meeting.2012-08-16-21.01.html 2012-08-16]<br />
* [http://eavesdrop.openstack.org/meetings/openstack-meeting/2012/openstack-meeting.2012-08-09-21.00.html 2012-08-09]<br />
* [http://eavesdrop.openstack.org/meetings/openstack-meeting/2012/openstack-meeting.2012-08-02-21.40.html 2012-08-02]</div>

Russellb, ReleaseNotes/Icehouse, 2014-04-08T12:39:25Z:
https://wiki.openstack.org/w/index.php?title=ReleaseNotes/Icehouse&diff=48019
<p>Russellb: /* Compute Drivers */</p>
<hr />
<div>= OpenStack 2014.1 (Icehouse) Release Notes =<br />
<br />
<div style="column-count:3;-moz-column-count:3;-webkit-column-count:3"><br />
__TOC__<br />
</div><br />
<br />
== General Upgrade Notes ==<br />
<br />
* Windows packagers should use pbr 0.8 to avoid [https://bugs.launchpad.net/pbr/+bug/1294246 bug 1294246]<br />
* The <tt>log-config</tt> option has been renamed <tt>log-config-append</tt>, and now appends any configuration specified rather than completely overriding any other settings as previously occurred. (https://bugs.launchpad.net/oslo/+bug/1169328, https://bugs.launchpad.net/oslo/+bug/1238349)<br />
<br />
== OpenStack Object Storage (Swift) ==<br />
<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Compute (Nova) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== Upgrade Support ====<br />
<br />
* Limited live upgrades are now supported. This enables deployers to upgrade controller infrastructure first, and subsequently upgrade individual compute nodes without requiring downtime of the entire cloud to complete.<br />
<br />
==== Compute Drivers ====<br />
<br />
===== Hyper-V =====<br />
<br />
* Added RDP console support.<br />
<br />
===== Libvirt (KVM) =====<br />
<br />
* The Libvirt compute driver now supports providing modified kernel arguments to booting compute instances. The kernel arguments are retrieved from the <tt>os_command_line</tt> key in the image metadata as stored in Glance, if a value for the key was provided. Otherwise the default kernel arguments are used.<br />
* The Libvirt driver now supports using VirtIO SCSI (virtio-scsi) instead of VirtIO Block (virtio-blk) to provide block device access for instances. Virtio SCSI is a para-virtualized SCSI controller device designed as a future successor to VirtIO Block and aiming to provide improved scalability and performance.<br />
* The Libvirt Compute driver now supports adding a Virtio RNG device to compute instances to provide increased entropy. Virtio RNG is a paravirtual random number generation device. It allows the compute node to provide entropy to the compute instances in order to fill their entropy pool. The default entropy device used is /dev/random, however use of a physical hardware RNG device attached to the host is also possible. The use of the Virtio RNG device is enabled using the hw_rng property in the metadata of the image used to build the instance.<br />
* The Libvirt driver now allows instances to be configured to use a video driver other than the default (<tt>cirrus</tt>). This allows the specification of different video driver models, different amounts of video RAM, and different numbers of heads. These values are configured by setting the <tt>hw_video_model</tt>, <tt>hw_video_vram</tt>, and <tt>hw_video_head</tt> properties in the image metadata. Currently supported video driver models are <tt>vga</tt>, <tt>cirrus</tt>, <tt>vmvga</tt>, <tt>xen</tt> and <tt>qxl</tt>.<br />
* Watchdog support has been added to the Libvirt driver. The watchdog device used is <tt>i6300esb</tt>. It is enabled by setting the <tt>hw_watchdog_action</tt> property in the image properties or flavor extra specifications (<tt>extra_specs</tt>) to a value other than <tt>disabled</tt>. Supported <tt>hw_watchdog_action</tt> property values, which denote the action for the watchdog device to take in the event of an instance failure, are <tt>poweroff</tt>, <tt>reset</tt>, <tt>pause</tt>, and <tt>none</tt>.<br />
* The High Precision Event Timer (HPET) is now disabled for instances created using the Libvirt driver. The use of this option was found to lead to clock drift in Windows guests when under heavy load.<br />
* The libvirt driver now supports waiting for an event from Neutron during instance boot for better reliability. This requires a suitably new Neutron that supports sending these events, and avoids a race between the instance expecting networking to be ready and the actual plumbing that is required.<br />
<br />
===== VMware =====<br />
<br />
* The VMware Compute drivers now support the virtual machine diagnostics call. Diagnostics can be retrieved using the "nova diagnostics INSTANCE" command, where INSTANCE is replaced by an instance name or instance identifier.<br />
* The VMware Compute drivers now support booting an instance from an ISO image.<br />
* The VMware Compute drivers now support the aging of cached images.<br />
<br />
===== XenServer =====<br />
<br />
* Added initial support for [https://blueprints.launchpad.net/nova/+spec/pci-passthrough-xenapi PCI passthrough]<br />
* Maintained group B status through the introduction of the [[XenServer/XenServer_CI|XenServer CI]]<br />
* Improved support for ephemeral disks (including [https://blueprints.launchpad.net/nova/+spec/xenapi-migrate-ephemeral-disks migration] and [https://blueprints.launchpad.net/nova/+spec/xenapi-resize-ephemeral-disks resize up] of multiple ephemeral disks)<br />
* Support for [https://blueprints.launchpad.net/nova/+spec/xenapi-vcpu-pin-set vcpu_pin_set], essential when you pin CPU resources to Dom0<br />
* Numerous performance and stability enhancements<br />
<br />
==== API ====<br />
<br />
* In OpenStack Compute, the <tt>OS-DCF:diskConfig</tt> API attribute is no longer supported in V3 of the nova API.<br />
* The Compute API currently supports both XML and JSON formats. Support for the XML format is now deprecated and will be retired in a future release.<br />
* The Compute API now exposes a mechanism for permanently removing decommissioned compute nodes. Previously these would continue to be listed even where the compute service had been disabled and the system re-provisioned. This functionality is provided by the <tt>ExtendedServicesDelete</tt> API extension.<br />
* Separated the V3 API admin_actions plugin into logically separate plugins so operators can enable subsets of the functionality currently present in the plugin.<br />
* The Compute service now uses the tenant identifier instead of the tenant name when authenticating with OpenStack Networking (Neutron). This improves support for the OpenStack Identity API v3 which allows non-unique tenant names.<br />
* The Compute API now exposes the hypervisor IP address, allowing it to be retrieved by administrators using the <tt>nova hypervisor-show</tt> command.<br />
<br />
==== Scheduler ====<br />
<br />
* The scheduler now includes an initial implementation of a caching scheduler driver. The caching scheduler uses the existing facilities for applying scheduler filters and weights but caches the list of available hosts. When a user request is passed to the caching scheduler it attempts to perform scheduling based on the list of cached hosts, with a view to improving scheduler performance.<br />
* A new scheduler filter, <tt>AggregateImagePropertiesIsolation</tt>, has been introduced. The new filter schedules instances to hosts based on matching namespaced image properties with host aggregate properties. Hosts that do not belong to any host aggregate remain valid scheduling targets for instances based on all images. The new Compute service configuration keys <tt>aggregate_image_properties_isolation_namespace</tt> and <tt>aggregate_image_properties_isolation_separator</tt> are used to determine which image properties are examined by the filter.<br />
* Weight normalization in OpenStack Compute (see https://review.openstack.org/#/c/27160/): weights are now normalized, so there is no need to inflate multipliers artificially. The maximum weight that a weigher will assign to a node is 1.0 and the minimum is 0.0.<br />
* The scheduler now supports server groups, with both anti-affinity and affinity policies. That is, servers launched in a group are placed according to the group's predefined policy.<br />
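The weight normalization described above can be sketched as follows. This is an illustrative reimplementation, not the actual code shipped in Nova; in particular, the handling of an all-equal weight list is a choice made here for illustration:<br />

```python
def normalize_weights(raw_weights):
    """Map raw weigher outputs onto [0.0, 1.0]: the best host gets 1.0,
    the worst gets 0.0, and the rest scale linearly in between."""
    lo, hi = min(raw_weights), max(raw_weights)
    if hi == lo:
        # All hosts weighed equally; treat them all as equally good
        # (illustrative choice, not necessarily what Nova does).
        return [1.0 for _ in raw_weights]
    return [(w - lo) / (hi - lo) for w in raw_weights]

print(normalize_weights([10.0, 20.0, 40.0]))
```

With normalized outputs, a multiplier only expresses the relative importance of one weigher against another, rather than compensating for differing raw scales.<br />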
<br />
==== Other Features ====<br />
<br />
* Notifications are now generated upon the creation and deletion of keypairs.<br />
* Notifications are now generated when an Compute host is enabled, disabled, powered on, shut down, rebooted, put into maintenance mode and taken out of maintenance mode.<br />
* Compute services are now able to shutdown gracefully by disabling processing of new requests when a service shutdown is requested but allowing requests already in process to complete before terminating.<br />
* The Compute service determines what action to take when instances are found to be running that were previously marked deleted based on the value of the <tt>running_deleted_instance_action</tt> configuration key. A new <tt>shutdown</tt> value has been added. Using this new value allows administrators to optionally keep instances found in this state for diagnostics while still releasing the runtime resources.<br />
* File injection is now disabled by default in OpenStack Compute. Instead it is recommended that the ConfigDrive and metadata server facilities are used to modify guests at launch. To enable file injection modify the <tt>inject_key</tt> and <tt>inject_partition</tt> configuration keys in <tt>/etc/nova/nova.conf</tt> and restart the Compute services. The file injection mechanism is likely to be disabled in a future release.<br />
* A number of changes have been made to the expected format <tt>/etc/nova/nova.conf</tt> configuration file with a view to ensuring that all configuration groups in the file use descriptive names. A number of driver specific flags, including those for the Libvirt driver, have also been moved to their own option groups.<br />
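For the file injection change above, explicitly disabling injection in <code>/etc/nova/nova.conf</code> looks roughly like this sketch (assuming the libvirt driver with its options in the new <tt>[libvirt]</tt> group; <tt>-2</tt> is the conventional "disabled" value for the injection partition):<br />

```ini
[libvirt]
# Do not inject SSH keys or files into guest images; use the
# config drive or metadata server instead.
inject_key = false
inject_partition = -2
```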
<br />
=== Known Issues ===<br />
* OpenStack Compute has some features that use newer API versions from other projects, but the following are the only API versions tested in Icehouse:<br />
** Keystone v2<br />
** Cinder v1<br />
** Glance v1<br />
* The libvirt driver backed by Xen or LXC is an untested configuration (group C on [[HypervisorSupportMatrix]]). Since it is untested, a change was merged that broke both of these configurations. [https://bugs.launchpad.net/nova/+bug/1301453]<br />
<br />
=== Upgrade Notes ===<br />
<br />
* Scheduler and weight normalization (https://review.openstack.org/#/c/27160/): In previous releases the Compute and Cells scheduler used raw weights (i.e. the weighers returned any value, and that was the value used by the weighing process).<br />
** If you were using several weighers for Compute:<br />
*** If several weighers were used (in previous releases Nova only shipped one weigher for compute), it is possible that your multipliers were artificially inflated in order to make an important weigher prevail over other weighers that returned large raw values. You need to check your weighers and take into account that the maximum and minimum weights for a host will now always be <tt>1.0</tt> and <tt>0.0</tt>.<br />
** If you are using cells:<br />
*** <tt>nova.cells.weights.mute_child.MuteChild</tt>: The weigher returned the value <tt>mute_weight_value</tt> as the weight assigned to a child cell that had not updated its capabilities in a while. It can still be used, but this value no longer affects the final weight computed by the weighing process, which will be <tt>1.0</tt>. If you are using this weigher to mute a child cell you need to adjust the <tt>mute_weight_multiplier</tt>.<br />
*** <tt>nova.cells.weights.weight_offset.WeightOffsetWeigher</tt> introduces a new configuration option, <tt>offset_weight_multiplier</tt>, which has to be adjusted. In previous releases the weigher returned the value of the configured offset for each of the cells in the weighing process. While the winner of that process will still be the same, it will now get a weight of <tt>1.0</tt>. If you were using this weigher and relying on its raw value to make it prevail over other weighers, you need to adjust its multiplier accordingly.<br />
* An early Docker compute driver was included in the Havana release. This driver has been moved from Nova into its own repository. The new location is http://git.openstack.org/cgit/stackforge/nova-docker<br />
* https://review.openstack.org/50668 - The <tt>compute_api_class</tt> configuration option has been removed.<br />
* https://review.openstack.org/#/c/54290/ - The following deprecated configuration option aliases have been removed in favor of their new names:<br />
** <tt>service_quantum_metadata_proxy</tt><br />
** <tt>quantum_metadata_proxy_shared_secret</tt><br />
** <tt>use_quantum_default_nets</tt><br />
** <tt>quantum_default_tenant_id</tt><br />
** <tt>vpn_instance_type</tt><br />
** <tt>default_instance_type</tt><br />
** <tt>quantum_url</tt><br />
** <tt>quantum_url_timeout</tt><br />
** <tt>quantum_admin_username</tt><br />
** <tt>quantum_admin_password</tt><br />
** <tt>quantum_admin_tenant_name</tt><br />
** <tt>quantum_region_name</tt><br />
** <tt>quantum_admin_auth_url</tt><br />
** <tt>quantum_api_insecure</tt><br />
** <tt>quantum_auth_strategy</tt><br />
** <tt>quantum_ovs_bridge</tt><br />
** <tt>quantum_extension_sync_interval</tt><br />
** <tt>vmwareapi_host_ip</tt><br />
** <tt>vmwareapi_host_username</tt><br />
** <tt>vmwareapi_host_password</tt><br />
** <tt>vmwareapi_cluster_name</tt><br />
** <tt>vmwareapi_task_poll_interval</tt><br />
** <tt>vmwareapi_api_retry_count</tt><br />
** <tt>vnc_port</tt><br />
** <tt>vnc_port_total</tt><br />
** <tt>use_linked_clone</tt><br />
** <tt>vmwareapi_vlan_interface</tt><br />
** <tt>vmwareapi_wsdl_loc</tt><br />
* The PowerVM driver has been removed: https://review.openstack.org/#/c/57774/<br />
* The keystone_authtoken defaults changed in nova.conf: https://review.openstack.org/#/c/62815/<br />
* libvirt LVM disk names changed from using <tt>instance_name_template</tt> to the instance UUID (https://review.openstack.org/#/c/76968). Manual cleanup may be required if you use a non-default <tt>instance_name_template</tt>.<br />
* rbd disk names changed from using <tt>instance_name_template</tt> to the instance UUID. Manual cleanup of old virtual disks is required after the transition. (TBD find review)<br />
* Icehouse adds libguestfs as a dependency. Installing Icehouse dependencies on a system currently running Havana may cause the Havana node to begin using libguestfs and break unexpectedly. It is recommended that <tt>libvirt_inject_partition=-2</tt> be set on Havana nodes prior to starting an upgrade of packages on the system if the nova packages will be updated last.<br />
* Creating a private flavor now adds access to the tenant automatically. This was the documented behavior in Havana, but the actual implementation in Havana and previous versions of Nova did not add the tenant automatically to private flavors.<br />
* Nova previously included a nova.conf.sample. This file was automatically generated and is no longer included directly. If you are packaging Nova and wish to include the sample config file, see etc/nova/README.nova.conf for instructions on how to generate the file at build time.<br />
* Nova now defaults to requiring an event from Neutron when booting libvirt guests. If you upgrade Nova before Neutron, you must disable this feature in Nova until Neutron supports it by setting '''vif_plugging_is_fatal=False''' and '''vif_plugging_timeout=0'''. Recommended order is: Nova (with this disabled), Neutron (with the notifications enabled), and then enable vif_plugging_is_fatal=True with the default value of vif_plugging_timeout.<br />
* Nova supports a limited live upgrade model for the compute nodes in Icehouse. To do this, upgrade the controller infrastructure (everything except nova-compute) first, but set the '''[upgrade_levels]/compute=icehouse-compat''' option. This will enable Icehouse controller services to talk to Havana compute services. Upgrades of individual compute nodes can then proceed normally. When all the computes are upgraded, unset the compute version option to return to the default and restart the controller services.<br />
* The following configuration options are marked as deprecated in this release. See <tt>nova.conf.sample</tt> for their replacements. <tt>[GROUP]/option</tt><br />
** <tt>[DEFAULT]/rabbit_durable_queues</tt><br />
** <tt>[rpc_notifier2]/topics</tt><br />
** <tt>[DEFAULT]/log_config</tt><br />
** <tt>[DEFAULT]/logfile</tt><br />
** <tt>[DEFAULT]/logdir</tt><br />
** <tt>[DEFAULT]/base_dir_name</tt><br />
** <tt>[DEFAULT]/instance_type_extra_specs</tt><br />
** <tt>[DEFAULT]/db_backend</tt><br />
** <tt>[DEFAULT]/sql_connection</tt><br />
** <tt>[DATABASE]/sql_connection</tt><br />
** <tt>[sql]/connection</tt><br />
** <tt>[DEFAULT]/sql_idle_timeout</tt><br />
** <tt>[DATABASE]/sql_idle_timeout</tt><br />
** <tt>[sql]/idle_timeout</tt><br />
** <tt>[DEFAULT]/sql_min_pool_size</tt><br />
** <tt>[DATABASE]/sql_min_pool_size</tt><br />
** <tt>[DEFAULT]/sql_max_pool_size</tt><br />
** <tt>[DATABASE]/sql_max_pool_size</tt><br />
** <tt>[DEFAULT]/sql_max_retries</tt><br />
** <tt>[DATABASE]/sql_max_retries</tt><br />
** <tt>[DEFAULT]/sql_retry_interval</tt><br />
** <tt>[DATABASE]/reconnect_interval</tt><br />
** <tt>[DEFAULT]/sql_max_overflow</tt><br />
** <tt>[DATABASE]/sqlalchemy_max_overflow</tt><br />
** <tt>[DEFAULT]/sql_connection_debug</tt><br />
** <tt>[DEFAULT]/sql_connection_trace</tt><br />
** <tt>[DATABASE]/sqlalchemy_pool_timeout</tt><br />
** <tt>[DEFAULT]/memcache_servers</tt><br />
** <tt>[DEFAULT]/libvirt_type</tt><br />
** <tt>[DEFAULT]/libvirt_uri</tt><br />
** <tt>[DEFAULT]/libvirt_inject_password</tt><br />
** <tt>[DEFAULT]/libvirt_inject_key</tt><br />
** <tt>[DEFAULT]/libvirt_inject_partition</tt><br />
** <tt>[DEFAULT]/libvirt_vif_driver</tt><br />
** <tt>[DEFAULT]/libvirt_volume_drivers</tt><br />
** <tt>[DEFAULT]/libvirt_disk_prefix</tt><br />
** <tt>[DEFAULT]/libvirt_wait_soft_reboot_seconds</tt><br />
** <tt>[DEFAULT]/libvirt_cpu_mode</tt><br />
** <tt>[DEFAULT]/libvirt_cpu_model</tt><br />
** <tt>[DEFAULT]/libvirt_snapshots_directory</tt><br />
** <tt>[DEFAULT]/libvirt_images_type</tt><br />
** <tt>[DEFAULT]/libvirt_images_volume_group</tt><br />
** <tt>[DEFAULT]/libvirt_sparse_logical_volumes</tt><br />
** <tt>[DEFAULT]/libvirt_images_rbd_pool</tt><br />
** <tt>[DEFAULT]/libvirt_images_rbd_ceph_conf</tt><br />
** <tt>[DEFAULT]/libvirt_snapshot_compression</tt><br />
** <tt>[DEFAULT]/libvirt_use_virtio_for_bridges</tt><br />
** <tt>[DEFAULT]/libvirt_iscsi_use_multipath</tt><br />
** <tt>[DEFAULT]/libvirt_iser_use_multipath</tt><br />
** <tt>[DEFAULT]/matchmaker_ringfile</tt><br />
** <tt>[DEFAULT]/agent_timeout</tt><br />
** <tt>[DEFAULT]/agent_version_timeout</tt><br />
** <tt>[DEFAULT]/agent_resetnetwork_timeout</tt><br />
** <tt>[DEFAULT]/xenapi_agent_path</tt><br />
** <tt>[DEFAULT]/xenapi_disable_agent</tt><br />
** <tt>[DEFAULT]/xenapi_use_agent_default</tt><br />
** <tt>[DEFAULT]/xenapi_login_timeout</tt><br />
** <tt>[DEFAULT]/xenapi_connection_concurrent</tt><br />
** <tt>[DEFAULT]/xenapi_connection_url</tt><br />
** <tt>[DEFAULT]/xenapi_connection_username</tt><br />
** <tt>[DEFAULT]/xenapi_connection_password</tt><br />
** <tt>[DEFAULT]/xenapi_vhd_coalesce_poll_interval</tt><br />
** <tt>[DEFAULT]/xenapi_check_host</tt><br />
** <tt>[DEFAULT]/xenapi_vhd_coalesce_max_attempts</tt><br />
** <tt>[DEFAULT]/xenapi_sr_base_path</tt><br />
** <tt>[DEFAULT]/target_host</tt><br />
** <tt>[DEFAULT]/target_port</tt><br />
** <tt>[DEFAULT]/iqn_prefix</tt><br />
** <tt>[DEFAULT]/xenapi_remap_vbd_dev</tt><br />
** <tt>[DEFAULT]/xenapi_remap_vbd_dev_prefix</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_base_url</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_seed_chance</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_seed_duration</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_max_last_accessed</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_listen_port_start</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_listen_port_end</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_download_stall_cutoff</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_max_seeder_processes_per_host</tt><br />
** <tt>[DEFAULT]/use_join_force</tt><br />
** <tt>[DEFAULT]/xenapi_ovs_integration_bridge</tt><br />
** <tt>[DEFAULT]/cache_images</tt><br />
** <tt>[DEFAULT]/xenapi_image_compression_level</tt><br />
** <tt>[DEFAULT]/default_os_type</tt><br />
** <tt>[DEFAULT]/block_device_creation_timeout</tt><br />
** <tt>[DEFAULT]/max_kernel_ramdisk_size</tt><br />
** <tt>[DEFAULT]/sr_matching_filter</tt><br />
** <tt>[DEFAULT]/xenapi_sparse_copy</tt><br />
** <tt>[DEFAULT]/xenapi_num_vbd_unplug_retries</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_images</tt><br />
** <tt>[DEFAULT]/xenapi_ipxe_network_name</tt><br />
** <tt>[DEFAULT]/xenapi_ipxe_boot_menu_url</tt><br />
** <tt>[DEFAULT]/xenapi_ipxe_mkisofs_cmd</tt><br />
** <tt>[DEFAULT]/xenapi_running_timeout</tt><br />
** <tt>[DEFAULT]/xenapi_vif_driver</tt><br />
** <tt>[DEFAULT]/xenapi_image_upload_handler</tt><br />
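The weight normalization change described at the top of these upgrade notes can be illustrated with a short sketch. This is not Nova code; it only mirrors the behaviour the note describes (best host weighted <tt>1.0</tt>, worst <tt>0.0</tt>, multiplier applied afterwards).<br />

```python
def normalize(weights):
    """Map raw weigher outputs onto [0.0, 1.0], as Icehouse now does.

    The best host gets 1.0 and the worst 0.0, so multipliers no longer
    need to be inflated to outweigh weighers with large raw values.
    (Illustrative sketch of the described behaviour, not Nova code.)
    """
    lo, hi = min(weights), max(weights)
    if hi == lo:
        return [0.0 for _ in weights]
    return [(w - lo) / (hi - lo) for w in weights]

# Hypothetical raw weigher values (e.g. free RAM in MB) for three hosts:
raw = [512.0, 2048.0, 8192.0]
multiplier = 1.0
final = [multiplier * w for w in normalize(raw)]
print(final)  # [0.0, 0.2, 1.0]
```

With normalized weights a multiplier of <tt>1.0</tt> is usually sufficient; adjust <tt>offset_weight_multiplier</tt> or <tt>mute_weight_multiplier</tt> rather than relying on large raw values.<br />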
<br />
== OpenStack Image Service (Glance) ==<br />
=== Key New Features ===<br />
* The calculation of storage quotas has been improved. Deleted images are now excluded from the count (https://bugs.launchpad.net/glance/+bug/1261738), which may affect your existing usage figures.<br />
* Glance has moved to using 0-based indices for location entries, in line with the JSON Pointer specification, RFC 6901 (https://bugs.launchpad.net/glance/+bug/1282437)<br />
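The switch to 0-based location indices means a JSON Pointer such as <tt>/locations/0</tt> now addresses the first location entry. A minimal sketch follows; the image structure is illustrative, not the exact Glance schema.<br />

```python
# Illustrative image record with two location entries.
image = {
    "locations": [
        {"url": "rbd://pool/image-1"},
        {"url": "swift+http://store/image-1"},
    ]
}

def resolve(doc, pointer):
    """Minimal RFC 6901 resolver (no ~0/~1 escape handling)."""
    for token in pointer.lstrip("/").split("/"):
        # List elements are addressed by 0-based integer index.
        doc = doc[int(token)] if isinstance(doc, list) else doc[token]
    return doc

print(resolve(image, "/locations/0")["url"])  # rbd://pool/image-1
```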
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Dashboard (Horizon) ==<br />
<br />
=== Key New Features ===<br />
* Thanks to the [[I18nTeam]], Horizon is now available in Hindi, German and Serbian. Translations for Australian English, British English, Dutch, French, Japanese, Korean, Polish, Portuguese, Simplified and Traditional Chinese, Spanish and Russian have also been updated.<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Identity (Keystone) ==<br />
<br />
=== Key New Features ===<br />
<br />
* New API features<br />
** <code>/v3/OS-FEDERATION/</code> allows Keystone to consume federated authentication via [https://shibboleth.net/ Shibboleth] for multiple Identity Providers, and mapping federated attributes into OpenStack group-based role assignments (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-federation-ext.md documentation]).<br />
** <code>POST /v3/users/{user_id}/password</code> allows API users to update their own passwords (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3.md#change-user-password-post-usersuser_idpassword documentation]).<br />
** <code>GET /v3/auth/tokens?nocatalog</code> allows API users to opt out of receiving the service catalog when performing online token validation (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3.md#validate-token-get-authtokensnocatalog documentation]).<br />
** <code>/v3/regions</code> provides a public interface for describing multi-region deployments (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3.md#regions-v3regions documentation]).<br />
** <code>/v3/OS-SIMPLECERT/</code> now publishes the certificates used for PKI token validation (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-simple-certs-ext.md documentation]).<br />
** <code>/v3/OS-TRUST/trusts</code> is now capable of providing limited-use delegation via the <code>remaining_uses</code> attribute of [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-trust-ext.md trusts].<br />
* The assignments backend (the source of authorization data) has now been completely separated from the identity backend (the source of authentication data). This means that you can now, for example, store your deployment's identity data in LDAP and its authorization data in SQL.<br />
* KVS drivers are now capable of writing to persistent Key-Value stores such as Redis, Cassandra, or MongoDB.<br />
* Keystone's driver interfaces are now implemented as Abstract Base Classes (ABCs) to make it easier to track compatibility of custom driver implementations across releases.<br />
* Keystone's default <code>etc/policy.json</code> has been rewritten in an easier to read format.<br />
* [http://docs.openstack.org/developer/keystone/event_notifications.html Notifications] are now emitted in response to create, update and delete events on roles, groups, and trusts.<br />
* Custom extensions and driver implementations may now subscribe to internal-only event notifications, including ''disable'' events (which are only exposed externally as part of ''update'' events).<br />
* Keystone now emits [http://www.dmtf.org/standards/cadf Cloud Audit Data Federation (CADF)] event notifications in response to authentication events.<br />
* [https://review.openstack.org/#/c/50362/ Additional plugins] are provided to handle external authentication via <code>REMOTE_USER</code> with respect to single-domain versus multi-domain deployments.<br />
* <code>policy.json</code> can now perform enforcement on the target domain in a domain-aware operation using, for example, <code>%(target.{entity}.domain_id)s</code>.<br />
* The LDAP driver for the assignment backend now supports group-based role assignment operations.<br />
* Keystone now publishes token revocation ''events'' in addition to providing continued support for token revocation ''lists''. Token revocation events are designed to incur much less overhead (when compared to token revocation lists) and will enable Keystone to eliminate token persistence during the Juno release.<br />
* Deployers can now define arbitrary limits on the size of collections in API responses (for example, <code>GET /v3/users</code> might be configured to return only 100 users, rather than 10,000). Clients will be informed when truncation has occurred.<br />
* Lazy translation has been enabled, allowing responses to be translated according to the requested <code>Accept-Language</code> header.<br />
* Keystone now emits i18n-ready log messages.<br />
* Collection filtering is now performed in the driver layer, where possible, for improved performance.<br />
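As an illustration of the <code>nocatalog</code> option above, the following sketch builds a token-validation request without the service catalog. The endpoint and headers follow the Identity v3 API; the base URL and token values are placeholders.<br />

```python
def validation_request(base_url, admin_token, subject_token, nocatalog=True):
    """Build the URL and headers for online token validation.

    Sketch only: shows the request shape, not an actual HTTP call.
    """
    url = base_url.rstrip("/") + "/v3/auth/tokens"
    if nocatalog:
        # Opt out of receiving the service catalog in the response body.
        url += "?nocatalog"
    headers = {
        "X-Auth-Token": admin_token,       # token authorized to validate
        "X-Subject-Token": subject_token,  # token being validated
    }
    return url, headers

url, headers = validation_request("http://keystone:5000", "ADMIN", "abc123")
print(url)  # http://keystone:5000/v3/auth/tokens?nocatalog
```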
<br />
=== Known Issues ===<br />
<br />
* [https://bugs.launchpad.net/keystone/+bug/1291157 Bug 1291157]: If using the <code>OS-FEDERATION</code> extension, deleting an Identity Provider or Protocol does ''not'' result in previously-issued tokens being revoked. This will not be fixed in the ''stable/icehouse'' branch.<br />
<br />
=== Upgrade Notes ===<br />
<br />
* The v2 API has been prepared for deprecation, but remains stable in the Icehouse release. It may be formally deprecated during the Juno release pending widespread support for the v3 API.<br />
* Backwards compatibility for <code>keystone.middleware.auth_token</code> has been removed. The <code>auth_token</code> middleware module is no longer provided by Keystone itself, and must be imported from <code>keystoneclient.middleware.auth_token</code> instead.<br />
* The <code>s3_token</code> middleware module is no longer provided by Keystone itself, and must be imported from <code>keystoneclient.middleware.s3_token</code> instead. Backwards compatibility for <code>keystone.middleware.s3_token</code> will be removed in Juno.<br />
* The default token duration has been reduced from 24 hours to just 1 hour. This effectively reduces the number of tokens that must be persisted at any one time, and (for PKI deployments) reduces the overhead of the token revocation list.<br />
* <code>keystone.contrib.access.core.AccessLogMiddleware</code> has been deprecated in favor of either the eventlet debug access log or Apache httpd access log and may be removed in the K release.<br />
* <code>keystone.contrib.stats.core.StatsMiddleware</code> has been deprecated in favor of external tooling and may be removed in the K release.<br />
* <code>keystone.middleware.XmlBodyMiddleware</code> has been deprecated in favor of support for "application/json" only and may be removed in the K release.<br />
* A V3 API version of the EC2 Credential system has been implemented. To use it, the following section needs to be added to keystone-paste.ini:<br />
[filter:ec2_extension_v3]<br />
paste.filter_factory = keystone.contrib.ec2:Ec2ExtensionV3.factory<br />
: and 'ec2_extension_v3' needs to be added to the pipeline variable in the '[pipeline:api_v3]' section of keystone-paste.ini <br />
* policy.json has been updated to provide rules for the new V3 EC2 credential CRUD operations, as shown in the updated sample policy.json and policy.v3cloudsample.json.<br />
* TODO: unified assignment table migration<br />
* TODO: deprecations for the move to oslo-incubator db<br />
* TODO: mutable domain_id is false by default we opted (iirc) to say "this is security related" so if someone relies on that they need to re-enable it -morganfainberg<br />
* TODO: any non-ideal default values that will be changed in the future<br />
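The keystone-paste.ini change for the V3 EC2 extension above can be scripted. This is a sketch under the assumption that a plain <code>configparser</code> round-trip is acceptable (it drops comments, so editing by hand may be preferable); the sample pipeline below is illustrative, not a verbatim Icehouse default.<br />

```python
import configparser

# Illustrative keystone-paste.ini fragment (not the shipped default).
SAMPLE = """
[filter:ec2_extension]
paste.filter_factory = keystone.contrib.ec2:Ec2Extension.factory

[pipeline:api_v3]
pipeline = access_log sizelimit url_normalize token_auth json_body ec2_extension service_v3
"""

ini = configparser.ConfigParser()
ini.read_string(SAMPLE)

# Add the filter section from the release note above.
ini["filter:ec2_extension_v3"] = {
    "paste.filter_factory": "keystone.contrib.ec2:Ec2ExtensionV3.factory",
}

# Add 'ec2_extension_v3' to the v3 pipeline, before the final app.
pipeline = ini["pipeline:api_v3"]["pipeline"].split()
if "ec2_extension_v3" not in pipeline:
    pipeline.insert(-1, "ec2_extension_v3")
ini["pipeline:api_v3"]["pipeline"] = " ".join(pipeline)

print(ini["pipeline:api_v3"]["pipeline"].split()[-2])  # ec2_extension_v3
```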
<br />
== OpenStack Network Service (Neutron) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet.<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Block Storage (Cinder) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Metering (Ceilometer) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
https://bugs.launchpad.net/ceilometer/+bug/1297528<br />
<br />
=== Upgrade Notes ===<br />
None yet<br />
<br />
== OpenStack Orchestration (Heat) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
None yet<br />
<br />
== OpenStack Database service (Trove) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
None yet<br />
<br />
== OpenStack Documentation ==<br />
<br />
=== Key New Features ===<br />
<br />
* New manual: Command-Line Interface Reference<br />
* The API reference has been updated and now includes PDF files as well<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
None yet</div>Russellbhttps://wiki.openstack.org/w/index.php?title=Meetings/Nova&diff=47644Meetings/Nova2014-04-03T14:53:49Z<p>Russellb: /* Agenda for next meeting */</p>
<hr />
<div><br />
= Weekly Nova team meeting =<br />
'''MEETING TIME: Thursdays alternating 14:00 UTC (#openstack-meeting-alt) and 21:00 UTC (#openstack-meeting)'''<br />
<br />
This meeting is a weekly gathering of developers working on [[Nova|OpenStack Compute (Nova)]]. We cover topics such as release planning and status, bugs, reviews, and other current topics worthy of real-time discussion.<br />
<br />
== Agenda for next meeting ==<br />
<br />
Scheduled for April 10, 2014, 21:00 UTC (http://www.timeanddate.com/worldclock/fixedtime.html?iso=20140410T210030)<br />
* icehouse-rc<br />
* other bugs<br />
* blueprints<br />
* open discussion<br />
<br />
=== Sub-teams ===<br />
<br />
There are also some Nova subteam meetings. See [[Teams#Nova_subteams]] for details.<br />
<br />
== Previous meetings ==<br />
<br />
* [http://eavesdrop.openstack.org/meetings/nova/ All other meetings are here]<br />
* [http://eavesdrop.openstack.org/meetings/openstack-meeting/2012/openstack-meeting.2012-08-16-21.01.html 2012-08-16]<br />
* [http://eavesdrop.openstack.org/meetings/openstack-meeting/2012/openstack-meeting.2012-08-09-21.00.html 2012-08-09]<br />
* [http://eavesdrop.openstack.org/meetings/openstack-meeting/2012/openstack-meeting.2012-08-02-21.40.html 2012-08-02]</div>Russellbhttps://wiki.openstack.org/w/index.php?title=Meetings/Nova&diff=47619Meetings/Nova2014-04-03T12:54:39Z<p>Russellb: /* Agenda for next meeting */</p>
<hr />
<div><br />
= Weekly Nova team meeting =<br />
'''MEETING TIME: Thursdays alternating 14:00 UTC (#openstack-meeting-alt) and 21:00 UTC (#openstack-meeting)'''<br />
<br />
This meeting is a weekly gathering of developers working on [[Nova|OpenStack Compute (Nova)]]. We cover topics such as release planning and status, bugs, reviews, and other current topics worthy of real-time discussion.<br />
<br />
== Agenda for next meeting ==<br />
<br />
Scheduled for April 3, 2014, 14:00 UTC (http://www.timeanddate.com/worldclock/fixedtime.html?iso=20140403T140030)<br />
* icehouse-rc<br />
* other bugs<br />
* blueprints<br />
* open discussion<br />
** https://bugs.launchpad.net/openstack-ci/+bug/1282629<br />
<br />
=== Sub-teams ===<br />
<br />
There are also some Nova subteam meetings. See [[Teams#Nova_subteams]] for details.<br />
<br />
== Previous meetings ==<br />
<br />
* [http://eavesdrop.openstack.org/meetings/nova/ All other meetings are here]<br />
* [http://eavesdrop.openstack.org/meetings/openstack-meeting/2012/openstack-meeting.2012-08-16-21.01.html 2012-08-16]<br />
* [http://eavesdrop.openstack.org/meetings/openstack-meeting/2012/openstack-meeting.2012-08-09-21.00.html 2012-08-09]<br />
* [http://eavesdrop.openstack.org/meetings/openstack-meeting/2012/openstack-meeting.2012-08-02-21.40.html 2012-08-02]</div>Russellbhttps://wiki.openstack.org/w/index.php?title=ReleaseNotes/Icehouse&diff=47498ReleaseNotes/Icehouse2014-04-02T17:34:12Z<p>Russellb: /* Known Issues */</p>
<hr />
<div>= OpenStack 2014.1 (Icehouse) Release Notes =<br />
<br />
== General Upgrade Notes ==<br />
<br />
* Windows packagers should use pbr 0.8 to avoid [https://bugs.launchpad.net/pbr/+bug/1294246 bug 1294246]<br />
<br />
== OpenStack Object Storage (Swift) ==<br />
<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Compute (Nova) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== Upgrade Support ====<br />
<br />
* Limited live upgrades are now supported. This enables deployers to upgrade controller infrastructure first, and subsequently upgrade individual compute nodes without requiring downtime of the entire cloud to complete.<br />
<br />
==== Compute Drivers ====<br />
<br />
===== Libvirt (KVM) =====<br />
<br />
* The Libvirt compute driver now supports providing modified kernel arguments to booting compute instances. The kernel arguments are retrieved from the <tt>os_command_line</tt> key in the image metadata as stored in Glance, if a value for the key was provided. Otherwise the default kernel arguments are used.<br />
* The Libvirt driver now supports using VirtIO SCSI (virtio-scsi) instead of VirtIO Block (virtio-blk) to provide block device access for instances. Virtio SCSI is a para-virtualized SCSI controller device designed as a future successor to VirtIO Block and aiming to provide improved scalability and performance.<br />
* The Libvirt Compute driver now supports adding a Virtio RNG device to compute instances to provide increased entropy. Virtio RNG is a paravirtual random number generation device. It allows the compute node to provide entropy to the compute instances in order to fill their entropy pool. The default entropy device used is /dev/random, however use of a physical hardware RNG device attached to the host is also possible. The use of the Virtio RNG device is enabled using the hw_rng property in the metadata of the image used to build the instance.<br />
* The Libvirt driver now allows instances to be configured with a video driver other than the default (<tt>cirrus</tt>). This allows the specification of different video driver models, different amounts of video RAM, and different numbers of heads. These values are configured by setting the <tt>hw_video_model</tt>, <tt>hw_video_vram</tt>, and <tt>hw_video_head</tt> properties in the image metadata. Currently supported video driver models are <tt>vga</tt>, <tt>cirrus</tt>, <tt>vmvga</tt>, <tt>xen</tt> and <tt>qxl</tt>.<br />
* Watchdog support has been added to the Libvirt driver. The watchdog device used is <tt>i6300esb</tt>. It is enabled by setting the <tt>hw_watchdog_action</tt> property in the image properties or flavor extra specifications (<tt>extra_specs</tt>) to a value other than <tt>disabled</tt>. Supported <tt>hw_watchdog_action</tt> property values, which denote the action for the watchdog device to take in the event of an instance failure, are <tt>poweroff</tt>, <tt>reset</tt>, <tt>pause</tt>, and <tt>none</tt>.<br />
* The High Precision Event Timer (HPET) is now disabled for instances created using the Libvirt driver. The use of this option was found to lead to clock drift in Windows guests when under heavy load.<br />
* The libvirt driver now supports waiting for an event from Neutron during instance boot for better reliability. This requires a suitably new Neutron that supports sending these events, and avoids a race between the instance expecting networking to be ready and the actual plumbing that is required.<br />
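The image properties above (<tt>hw_watchdog_action</tt>, <tt>hw_video_model</tt>, <tt>hw_rng</tt>) can be sanity-checked before being set on an image. The helper below is illustrative, not part of Nova or Glance; the allowed values are taken from the notes above.<br />

```python
# Allowed values as listed in the release notes above.
ALLOWED = {
    "hw_watchdog_action": {"disabled", "poweroff", "reset", "pause", "none"},
    "hw_video_model": {"vga", "cirrus", "vmvga", "xen", "qxl"},
}

def validate_properties(props):
    """Reject unsupported values for the known guest-device properties."""
    for key, value in props.items():
        if key in ALLOWED and value not in ALLOWED[key]:
            raise ValueError("%s: unsupported value %r" % (key, value))
    return props

props = validate_properties({
    "hw_watchdog_action": "reset",  # reset the guest on failure
    "hw_video_model": "qxl",
    "hw_rng": "true",  # enable the Virtio RNG device (property name per the notes)
})
print(sorted(props))  # ['hw_rng', 'hw_video_model', 'hw_watchdog_action']
```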
<br />
===== VMware =====<br />
<br />
* The VMware Compute drivers now support the virtual machine diagnostics call. Diagnostics can be retrieved using the "nova diagnostics INSTANCE" command, where INSTANCE is replaced by an instance name or instance identifier.<br />
* The VMware Compute drivers now support booting an instance from an ISO image.<br />
* The VMware Compute drivers now support the aging of cached images.<br />
<br />
===== XenServer =====<br />
<br />
TODO...<br />
<br />
==== API ====<br />
<br />
* In OpenStack Compute, the <tt>OS-DCF:diskConfig</tt> API attribute is no longer supported in V3 of the nova API.<br />
* The Compute API currently supports both XML and JSON formats. Support for the XML format is now deprecated and will be retired in a future release.<br />
* The Compute API now exposes a mechanism for permanently removing decommissioned compute nodes. Previously these would continue to be listed even where the compute service had been disabled and the system re-provisioned. This functionality is provided by the <tt>ExtendedServicesDelete</tt> API extension.<br />
* Separated the V3 API admin_actions plugin into logically separate plugins so operators can enable subsets of the functionality currently present in the plugin.<br />
* The Compute service now uses the tenant identifier instead of the tenant name when authenticating with OpenStack Networking (Neutron). This improves support for the OpenStack Identity API v3 which allows non-unique tenant names.<br />
* The Compute API now exposes the hypervisor IP address, allowing it to be retrieved by administrators using the <tt>nova hypervisor-show</tt> command.<br />
<br />
==== Scheduler ====<br />
<br />
* The scheduler now includes an initial implementation of a caching scheduler driver. The caching scheduler uses the existing facilities for applying scheduler filters and weights but caches the list of available hosts. When a user request is passed to the caching scheduler it attempts to perform scheduling based on the list of cached hosts, with a view to improving scheduler performance.<br />
* A new scheduler filter, <tt>AggregateImagePropertiesIsolation</tt>, has been introduced. The new filter schedules instances to hosts based on matching namespaced image properties with host aggregate properties. Hosts that do not belong to any host aggregate remain valid scheduling targets for instances based on all images. The new Compute service configuration keys <tt>aggregate_image_properties_isolation_namespace</tt> and <tt>aggregate_image_properties_isolation_separator</tt> are used to determine which image properties are examined by the filter.<br />
* Weight normalization in OpenStack Compute: See: <br />
** https://review.openstack.org/#/c/27160/ Weights are normalized, so there is no need to inflate multipliers artificially. The maximum weight that a weigher will put for a node is 1.0 and the minimum is 0.0. <br />
** nova.cells.weights.weight_offset.WeightOffsetWeigher introduces a new configuration option 'offset_weight_multiplier' <br />
** https://review.openstack.org/#/c/36417/ Introduce stacking flags for weighers. Negative multipliers should not be used for stacking, but the weighers are still compatible (they issue a deprecation warning message).<br />
* The scheduler now supports server groups. Two policy types are supported: affinity and anti-affinity. Instances deployed in a group are placed on hosts according to the group's predefined policy.<br />
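The <tt>AggregateImagePropertiesIsolation</tt> filter described above can be sketched as follows. The data structures and the namespace value are illustrative assumptions, not the real filter implementation in nova.scheduler.<br />

```python
# Hypothetical value for aggregate_image_properties_isolation_namespace:
NAMESPACE = "filter_tenant:"

def host_passes(aggregate_props, image_props):
    """Sketch of AggregateImagePropertiesIsolation behaviour.

    A host passes if every namespaced image property matches the same
    property on the host's aggregates; hosts that belong to no aggregate
    remain valid targets for all images.
    """
    if not aggregate_props:  # host not in any host aggregate
        return True
    for key, value in image_props.items():
        if key.startswith(NAMESPACE) and aggregate_props.get(key, value) != value:
            return False
    return True

image = {"filter_tenant:ssd": "true"}
print(host_passes({"filter_tenant:ssd": "true"}, image))   # True
print(host_passes({"filter_tenant:ssd": "false"}, image))  # False
print(host_passes({}, image))                              # True
```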
<br />
==== Other Features ====<br />
<br />
* Notifications are now generated upon the creation and deletion of keypairs.<br />
* Notifications are now generated when an Compute host is enabled, disabled, powered on, shut down, rebooted, put into maintenance mode and taken out of maintenance mode.<br />
* Compute services are now able to shutdown gracefully by disabling processing of new requests when a service shutdown is requested but allowing requests already in process to complete before terminating.<br />
* The Compute service determines what action to take when instances are found to be running that were previously marked deleted based on the value of the <tt>running_deleted_instance_action</tt> configuration key. A new <tt>shutdown</tt> value has been added. Using this new value allows administrators to optionally keep instances found in this state for diagnostics while still releasing the runtime resources.<br />
* File injection is now disabled by default in OpenStack Compute. Instead it is recommended that the ConfigDrive and metadata server facilities are used to modify guests at launch. To enable file injection modify the <tt>inject_key</tt> and <tt>inject_partition</tt> configuration keys in <tt>/etc/nova/nova.conf</tt> and restart the Compute services. The file injection mechanism is likely to be disabled in a future release.<br />
* A number of changes have been made to the expected format of the <tt>/etc/nova/nova.conf</tt> configuration file, with a view to ensuring that all configuration groups in the file use descriptive names. A number of driver-specific flags, including those for the Libvirt driver, have also been moved to their own option groups.<br />
<br />
=== Known Issues ===<br />
* OpenStack Compute has some features that use newer API versions from other projects, but the following are the only API versions tested in Icehouse:<br />
** Keystone v2<br />
** Cinder v1<br />
** Glance v1<br />
* The libvirt driver backed by Xen or LXC is an untested configuration (group C on [[HypervisorSupportMatrix]]). Because these configurations are untested, a change was merged that broke both of them. [https://bugs.launchpad.net/nova/+bug/1301453]<br />
<br />
=== Upgrade Notes ===<br />
<br />
* An early Docker compute driver was included in the Havana release. This driver has been moved from Nova into its own repository. The new location is http://git.openstack.org/cgit/stackforge/nova-docker<br />
* https://review.openstack.org/50668 - The <tt>compute_api_class</tt> configuration option has been removed.<br />
* https://review.openstack.org/#/c/54290/ - The following deprecated configuration option aliases have been removed in favor of their new names:<br />
** <tt>service_quantum_metadata_proxy</tt><br />
** <tt>quantum_metadata_proxy_shared_secret</tt><br />
** <tt>use_quantum_default_nets</tt><br />
** <tt>quantum_default_tenant_id</tt><br />
** <tt>vpn_instance_type</tt><br />
** <tt>default_instance_type</tt><br />
** <tt>quantum_url</tt><br />
** <tt>quantum_url_timeout</tt><br />
** <tt>quantum_admin_username</tt><br />
** <tt>quantum_admin_password</tt><br />
** <tt>quantum_admin_tenant_name</tt><br />
** <tt>quantum_region_name</tt><br />
** <tt>quantum_admin_auth_url</tt><br />
** <tt>quantum_api_insecure</tt><br />
** <tt>quantum_auth_strategy</tt><br />
** <tt>quantum_ovs_bridge</tt><br />
** <tt>quantum_extension_sync_interval</tt><br />
** <tt>vmwareapi_host_ip</tt><br />
** <tt>vmwareapi_host_username</tt><br />
** <tt>vmwareapi_host_password</tt><br />
** <tt>vmwareapi_cluster_name</tt><br />
** <tt>vmwareapi_task_poll_interval</tt><br />
** <tt>vmwareapi_api_retry_count</tt><br />
** <tt>vnc_port</tt><br />
** <tt>vnc_port_total</tt><br />
** <tt>use_linked_clone</tt><br />
** <tt>vmwareapi_vlan_interface</tt><br />
** <tt>vmwareapi_wsdl_loc</tt><br />
* The PowerVM driver has been removed: https://review.openstack.org/#/c/57774/<br />
* The keystone_authtoken defaults changed in nova.conf: https://review.openstack.org/#/c/62815/<br />
* libvirt LVM names changed from using <tt>instance_name_template</tt> to the instance UUID (https://review.openstack.org/#/c/76968). Manual cleanup may be required if a non-default <tt>instance_name_template</tt> is in use.<br />
* rbd disk names changed from using <tt>instance_name_template</tt> to the instance UUID. Manual cleanup of old virtual disks is required after the transition. (TBD find review)<br />
* Icehouse introduces libguestfs as a requirement. Installing Icehouse dependencies on a system currently running Havana may cause the Havana node to begin using libguestfs and break unexpectedly. It is recommended that <tt>libvirt_inject_partition=-2</tt> be set on Havana nodes before starting the package upgrade if the Nova packages will be updated last.<br />
* Creating a private flavor now adds access to the tenant automatically. This was the documented behavior in Havana, but the actual implementation in Havana and previous versions of Nova did not add the tenant automatically to private flavors.<br />
* Nova previously included a nova.conf.sample. This file was automatically generated and is no longer included directly. If you are packaging Nova and wish to include the sample config file, see etc/nova/README.nova.conf for instructions on how to generate the file at build time.<br />
* Nova now defaults to requiring an event from Neutron when booting libvirt guests. If you upgrade Nova before Neutron, you must disable this feature in Nova until Neutron supports it by setting '''vif_plugging_is_fatal=False''' and '''vif_plugging_timeout=0'''. Recommended order is: Nova (with this disabled), Neutron (with the notifications enabled), and then enable vif_plugging_is_fatal=True with the default value of vif_plugging_timeout.<br />
* Nova supports a limited live upgrade model for the compute nodes in Icehouse. To do this, upgrade the controller infrastructure (everything except nova-compute) first, but set the '''[upgrade_levels]/compute=icehouse-compat''' option. This will enable Icehouse controller services to talk to Havana compute services. Upgrades of individual compute nodes can then proceed normally. When all of the compute nodes are upgraded, unset the compute version option to restore the default and restart the controller services. '''NB:''' Certain operations (such as resize/migrate) will not work while the compute version is pinned.<br />
* The following configuration options are marked as deprecated in this release. See <tt>nova.conf.sample</tt> for their replacements. <tt>[GROUP]/option</tt><br />
** <tt>[DEFAULT]/rabbit_durable_queues</tt><br />
** <tt>[rpc_notifier2]/topics</tt><br />
** <tt>[DEFAULT]/log_config</tt><br />
** <tt>[DEFAULT]/logfile</tt><br />
** <tt>[DEFAULT]/logdir</tt><br />
** <tt>[DEFAULT]/base_dir_name</tt><br />
** <tt>[DEFAULT]/instance_type_extra_specs</tt><br />
** <tt>[DEFAULT]/db_backend</tt><br />
** <tt>[DEFAULT]/sql_connection</tt><br />
** <tt>[DATABASE]/sql_connection</tt><br />
** <tt>[sql]/connection</tt><br />
** <tt>[DEFAULT]/sql_idle_timeout</tt><br />
** <tt>[DATABASE]/sql_idle_timeout</tt><br />
** <tt>[sql]/idle_timeout</tt><br />
** <tt>[DEFAULT]/sql_min_pool_size</tt><br />
** <tt>[DATABASE]/sql_min_pool_size</tt><br />
** <tt>[DEFAULT]/sql_max_pool_size</tt><br />
** <tt>[DATABASE]/sql_max_pool_size</tt><br />
** <tt>[DEFAULT]/sql_max_retries</tt><br />
** <tt>[DATABASE]/sql_max_retries</tt><br />
** <tt>[DEFAULT]/sql_retry_interval</tt><br />
** <tt>[DATABASE]/reconnect_interval</tt><br />
** <tt>[DEFAULT]/sql_max_overflow</tt><br />
** <tt>[DATABASE]/sqlalchemy_max_overflow</tt><br />
** <tt>[DEFAULT]/sql_connection_debug</tt><br />
** <tt>[DEFAULT]/sql_connection_trace</tt><br />
** <tt>[DATABASE]/sqlalchemy_pool_timeout</tt><br />
** <tt>[DEFAULT]/memcache_servers</tt><br />
** <tt>[DEFAULT]/libvirt_type</tt><br />
** <tt>[DEFAULT]/libvirt_uri</tt><br />
** <tt>[DEFAULT]/libvirt_inject_password</tt><br />
** <tt>[DEFAULT]/libvirt_inject_key</tt><br />
** <tt>[DEFAULT]/libvirt_inject_partition</tt><br />
** <tt>[DEFAULT]/libvirt_vif_driver</tt><br />
** <tt>[DEFAULT]/libvirt_volume_drivers</tt><br />
** <tt>[DEFAULT]/libvirt_disk_prefix</tt><br />
** <tt>[DEFAULT]/libvirt_wait_soft_reboot_seconds</tt><br />
** <tt>[DEFAULT]/libvirt_cpu_mode</tt><br />
** <tt>[DEFAULT]/libvirt_cpu_model</tt><br />
** <tt>[DEFAULT]/libvirt_snapshots_directory</tt><br />
** <tt>[DEFAULT]/libvirt_images_type</tt><br />
** <tt>[DEFAULT]/libvirt_images_volume_group</tt><br />
** <tt>[DEFAULT]/libvirt_sparse_logical_volumes</tt><br />
** <tt>[DEFAULT]/libvirt_images_rbd_pool</tt><br />
** <tt>[DEFAULT]/libvirt_images_rbd_ceph_conf</tt><br />
** <tt>[DEFAULT]/libvirt_snapshot_compression</tt><br />
** <tt>[DEFAULT]/libvirt_use_virtio_for_bridges</tt><br />
** <tt>[DEFAULT]/libvirt_iscsi_use_multipath</tt><br />
** <tt>[DEFAULT]/libvirt_iser_use_multipath</tt><br />
** <tt>[DEFAULT]/matchmaker_ringfile</tt><br />
** <tt>[DEFAULT]/agent_timeout</tt><br />
** <tt>[DEFAULT]/agent_version_timeout</tt><br />
** <tt>[DEFAULT]/agent_resetnetwork_timeout</tt><br />
** <tt>[DEFAULT]/xenapi_agent_path</tt><br />
** <tt>[DEFAULT]/xenapi_disable_agent</tt><br />
** <tt>[DEFAULT]/xenapi_use_agent_default</tt><br />
** <tt>[DEFAULT]/xenapi_login_timeout</tt><br />
** <tt>[DEFAULT]/xenapi_connection_concurrent</tt><br />
** <tt>[DEFAULT]/xenapi_connection_url</tt><br />
** <tt>[DEFAULT]/xenapi_connection_username</tt><br />
** <tt>[DEFAULT]/xenapi_connection_password</tt><br />
** <tt>[DEFAULT]/xenapi_vhd_coalesce_poll_interval</tt><br />
** <tt>[DEFAULT]/xenapi_check_host</tt><br />
** <tt>[DEFAULT]/xenapi_vhd_coalesce_max_attempts</tt><br />
** <tt>[DEFAULT]/xenapi_sr_base_path</tt><br />
** <tt>[DEFAULT]/target_host</tt><br />
** <tt>[DEFAULT]/target_port</tt><br />
** <tt>[DEFAULT]/iqn_prefix</tt><br />
** <tt>[DEFAULT]/xenapi_remap_vbd_dev</tt><br />
** <tt>[DEFAULT]/xenapi_remap_vbd_dev_prefix</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_base_url</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_seed_chance</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_seed_duration</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_max_last_accessed</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_listen_port_start</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_listen_port_end</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_download_stall_cutoff</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_max_seeder_processes_per_host</tt><br />
** <tt>[DEFAULT]/use_join_force</tt><br />
** <tt>[DEFAULT]/xenapi_ovs_integration_bridge</tt><br />
** <tt>[DEFAULT]/cache_images</tt><br />
** <tt>[DEFAULT]/xenapi_image_compression_level</tt><br />
** <tt>[DEFAULT]/default_os_type</tt><br />
** <tt>[DEFAULT]/block_device_creation_timeout</tt><br />
** <tt>[DEFAULT]/max_kernel_ramdisk_size</tt><br />
** <tt>[DEFAULT]/sr_matching_filter</tt><br />
** <tt>[DEFAULT]/xenapi_sparse_copy</tt><br />
** <tt>[DEFAULT]/xenapi_num_vbd_unplug_retries</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_images</tt><br />
** <tt>[DEFAULT]/xenapi_ipxe_network_name</tt><br />
** <tt>[DEFAULT]/xenapi_ipxe_boot_menu_url</tt><br />
** <tt>[DEFAULT]/xenapi_ipxe_mkisofs_cmd</tt><br />
** <tt>[DEFAULT]/xenapi_running_timeout</tt><br />
** <tt>[DEFAULT]/xenapi_vif_driver</tt><br />
** <tt>[DEFAULT]/xenapi_image_upload_handler</tt><br />
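Several of the upgrade notes above translate directly into <tt>nova.conf</tt> settings on the Icehouse controllers during a staged Havana-to-Icehouse upgrade. An illustrative fragment (example values only; remove each setting once the corresponding upgrade step is complete):<br />

```ini
[DEFAULT]
# While Neutron is still running Havana (no port-plugging notifications yet):
vif_plugging_is_fatal = false
vif_plugging_timeout = 0

[upgrade_levels]
# Let Icehouse controller services talk to Havana nova-compute nodes.
# Unset once every compute node has been upgraded, then restart the
# controller services.
compute = icehouse-compat
```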
<br />
== OpenStack Image Service (Glance) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Dashboard (Horizon) ==<br />
<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Identity (Keystone) ==<br />
<br />
=== Key New Features ===<br />
<br />
* New API features<br />
** <code>/v3/OS-FEDERATION/</code> allows Keystone to consume federated authentication via [https://shibboleth.net/ Shibboleth] for multiple Identity Providers, and mapping federated attributes into OpenStack group-based role assignments (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-federation-ext.md documentation]).<br />
** <code>POST /v3/users/{user_id}/password</code> allows API users to update their own passwords (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3.md#change-user-password-post-usersuser_idpassword documentation]).<br />
** <code>GET v3/auth/token?nocatalog</code> allows API users to opt-out of receiving the service catalog when performing online token validation (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3.md#validate-token-get-authtokensnocatalog documentation]).<br />
** <code>/v3/regions</code> provides a public interface for describing multi-region deployments (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3.md#regions-v3regions documentation]).<br />
** <code>/v3/OS-SIMPLECERT/</code> now publishes the certificates used for PKI token validation (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-simple-certs-ext.md documentation]).<br />
** <code>/v3/OS-TRUST/trusts</code> is now capable of providing limited-use delegation via the <code>remaining_uses</code> attribute of [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-trust-ext.md trusts].<br />
* The assignments backend (the source of authorization data) has now been completely separated from the identity backend (the source of authentication data). This means that you can now, for example, store your deployment's identity data in LDAP and its authorization data in SQL.<br />
* KVS drivers are now capable of writing to persistent Key-Value stores such as Redis, Cassandra, or MongoDB.<br />
* Keystone's driver interfaces are now implemented as Abstract Base Classes (ABCs) to make it easier to track compatibility of custom driver implementations across releases.<br />
* Keystone's default <code>etc/policy.json</code> has been rewritten in an easier to read format.<br />
* [http://docs.openstack.org/developer/keystone/event_notifications.html Notifications] are now emitted in response to create, update and delete events on roles, groups, and trusts.<br />
* Custom extensions and driver implementations may now subscribe to internal-only event notifications, including ''disable'' events (which are only exposed externally as part of ''update'' events).<br />
* Keystone now emits [http://www.dmtf.org/standards/cadf Cloud Audit Data Federation (CADF)] event notifications in response to authentication events.<br />
* [https://review.openstack.org/#/c/50362/ Additional plugins] are provided to handle external authentication via <code>REMOTE_USER</code> with respect to single-domain versus multi-domain deployments.<br />
* <code>policy.json</code> can now perform enforcement on the target domain in a domain-aware operation using, for example, <code>%(target.{entity}.domain_id)s</code>.<br />
* The LDAP driver for the assignment backend now supports group-based role assignment operations.<br />
* Keystone now publishes token revocation ''events'' in addition to providing continued support for token revocation ''lists''. Token revocation events are designed to consume much less overhead (when compared to token revocation lists) and will enable Keystone to eliminate token persistence during the Juno release.<br />
* Deployers can now define arbitrary limits on the size of collections in API responses (for example, <code>GET /v3/users</code> might be configured to return only 100 users, rather than 10,000). Clients will be informed when truncation has occurred.<br />
* Lazy translation has been enabled, translating responses according to the requested <code>Accept-Language</code> header.<br />
* Keystone now emits i18n-ready log messages.<br />
* Collection filtering is now performed in the driver layer, where possible, for improved performance.<br />
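As an illustration of the domain-aware <code>policy.json</code> enforcement mentioned above, rules can now reference the target object's domain. The fragment below is a hand-written example in the style of Keystone's v3 sample policy, not a drop-in file (the rule name is invented for illustration):<br />

```json
{
    "admin_and_matching_domain": "rule:admin_required and domain_id:%(target.user.domain_id)s",
    "identity:update_user": "rule:admin_and_matching_domain"
}
```

Here <code>identity:update_user</code> is granted only to admins whose scope matches the domain of the user being updated.<br />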
<br />
=== Known Issues ===<br />
<br />
* [https://bugs.launchpad.net/keystone/+bug/1291157 Bug 1291157]: If using the <code>OS-FEDERATION</code> extension, deleting an Identity Provider or Protocol does ''not'' result in previously-issued tokens being revoked. This will not be fixed in the ''stable/icehouse'' branch.<br />
<br />
=== Upgrade Notes ===<br />
<br />
* The v2 API has been prepared for deprecation, but remains stable in the Icehouse release. It may be formally deprecated during the Juno release pending widespread support for the v3 API.<br />
* Backwards compatibility for <code>keystone.middleware.auth_token</code> has been removed. The <code>auth_token</code> middleware module is no longer provided by Keystone itself, and must be imported from <code>keystoneclient.middleware.auth_token</code> instead.<br />
* The <code>s3_token</code> middleware module is no longer provided by Keystone itself, and must be imported from <code>keystoneclient.middleware.s3_token</code> instead. Backwards compatibility for <code>keystone.middleware.s3_token</code> will be removed in Juno.<br />
* The default token duration has been reduced from 24 hours to just 1 hour. This effectively reduces the number of tokens that must be persisted at any one time, and (for PKI deployments) reduces the overhead of the token revocation list.<br />
* <code>keystone.contrib.access.core.AccessLogMiddleware</code> has been deprecated in favor of either the eventlet debug access log or Apache httpd access log and may be removed in the K release.<br />
* <code>keystone.contrib.stats.core.StatsMiddleware</code> has been deprecated in favor of external tooling and may be removed in the K release.<br />
* <code>keystone.middleware.XmlBodyMiddleware</code> has been deprecated in favor of support for "application/json" only and may be removed in the K release.<br />
* TODO: unified assignment table migration<br />
* TODO: deprecations for the move to oslo-incubator db<br />
* TODO: mutable domain_id is false by default; we opted (IIRC) to say "this is security related", so if someone relies on that they need to re-enable it. -morganfainberg<br />
* TODO: any non-ideal default values that will be changed in the future<br />
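For deployments that depend on the previous 24-hour token lifetime, the Icehouse default can be overridden in <tt>keystone.conf</tt>. The fragment below is illustrative; it assumes the <tt>[token]</tt> section's <tt>expiration</tt> option carries the lifetime in seconds:<br />

```ini
[token]
# Icehouse default is 3600 (1 hour); Havana's default was 86400 (24 hours).
expiration = 86400
```

Note that longer lifetimes increase the number of tokens persisted at any one time and, for PKI deployments, the size of the token revocation list.<br />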
<br />
== OpenStack Network Service (Neutron) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet.<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Block Storage (Cinder) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Metering (Ceilometer) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
* [https://bugs.launchpad.net/ceilometer/+bug/1297528 Bug 1297528]<br />
<br />
=== Upgrade Notes ===<br />
None yet<br />
<br />
== OpenStack Orchestration (Heat) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
None yet<br />
<br />
== OpenStack Database service (Trove) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
None yet<br />
<br />
== OpenStack Documentation ==<br />
<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
None yet</div>Russellbhttps://wiki.openstack.org/w/index.php?title=ReleaseNotes/Icehouse&diff=47324ReleaseNotes/Icehouse2014-04-01T15:12:29Z<p>Russellb: /* Key New Features */</p>
<hr />
<div>= OpenStack 2014.1 (Icehouse) Release Notes =<br />
<br />
== General Upgrade Notes ==<br />
<br />
tbd<br />
<br />
== OpenStack Object Storage (Swift) ==<br />
<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Compute (Nova) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== Upgrade Support ====<br />
<br />
* Limited live upgrades are now supported. This enables deployers to upgrade controller infrastructure first, and subsequently upgrade individual compute nodes without requiring downtime of the entire cloud to complete.<br />
<br />
==== Compute Drivers ====<br />
<br />
===== Libvirt (KVM) =====<br />
<br />
* The Libvirt compute driver now supports providing modified kernel arguments to booting compute instances. The kernel arguments are retrieved from the <tt>os_command_line</tt> key in the image metadata as stored in Glance, if a value for the key was provided. Otherwise the default kernel arguments are used.<br />
* The Libvirt driver now supports using VirtIO SCSI (virtio-scsi) instead of VirtIO Block (virtio-blk) to provide block device access for instances. Virtio SCSI is a para-virtualized SCSI controller device designed as a future successor to VirtIO Block and aiming to provide improved scalability and performance.<br />
* The Libvirt Compute driver now supports adding a Virtio RNG device to compute instances to provide increased entropy. Virtio RNG is a paravirtual random number generation device that allows the compute node to provide entropy to fill the instances' entropy pools. The default entropy source is /dev/random; however, a physical hardware RNG device attached to the host can also be used. Use of the Virtio RNG device is enabled via the <tt>hw_rng</tt> property in the metadata of the image used to build the instance.<br />
* The Libvirt driver now allows instances to be configured with a video driver other than the default (<tt>cirrus</tt>). This allows the specification of different video driver models, different amounts of video RAM, and different numbers of heads. These values are configured by setting the <tt>hw_video_model</tt>, <tt>hw_video_vram</tt>, and <tt>hw_video_head</tt> properties in the image metadata. Currently supported video driver models are <tt>vga</tt>, <tt>cirrus</tt>, <tt>vmvga</tt>, <tt>xen</tt> and <tt>qxl</tt>.<br />
* Watchdog support has been added to the Libvirt driver. The watchdog device used is <tt>i6300esb</tt>. It is enabled by setting the <tt>hw_watchdog_action</tt> property in the image properties or flavor extra specifications (<tt>extra_specs</tt>) to a value other than <tt>disabled</tt>. Supported <tt>hw_watchdog_action</tt> property values, which denote the action for the watchdog device to take in the event of an instance failure, are <tt>poweroff</tt>, <tt>reset</tt>, <tt>pause</tt>, and <tt>none</tt>.<br />
* The High Precision Event Timer (HPET) is now disabled for instances created using the Libvirt driver. The use of this option was found to lead to clock drift in Windows guests when under heavy load.<br />
* The libvirt driver now supports waiting for an event from Neutron during instance boot for better reliability. This requires a suitably new Neutron that supports sending these events, and avoids a race between the instance expecting networking to be ready and the actual plumbing that is required.<br />
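The watchdog support described above ultimately becomes a <tt>&lt;watchdog&gt;</tt> element in the guest's libvirt domain XML. A rough, self-contained sketch of that mapping (illustrative only; this is not Nova's actual implementation, and the function name is invented):<br />

```python
# Illustrative mapping from the hw_watchdog_action image/flavor property
# to a libvirt <watchdog> domain-XML element. Not Nova's real code.

VALID_ACTIONS = {"poweroff", "reset", "pause", "none"}

def watchdog_xml(hw_watchdog_action):
    """Return a libvirt <watchdog> element, or '' when disabled."""
    if hw_watchdog_action == "disabled":
        return ""
    if hw_watchdog_action not in VALID_ACTIONS:
        raise ValueError("unsupported watchdog action: %s" % hw_watchdog_action)
    # i6300esb is the emulated watchdog device used by the Libvirt driver.
    return '<watchdog model="i6300esb" action="%s"/>' % hw_watchdog_action

element = watchdog_xml("reset")
```

Setting the property to <tt>disabled</tt> (the default) produces no device at all, matching the behaviour described in the Watchdog bullet above.<br />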
<br />
===== VMware =====<br />
<br />
* The VMware Compute drivers now support the virtual machine diagnostics call. Diagnostics can be retrieved using the "nova diagnostics INSTANCE" command, where INSTANCE is replaced by an instance name or instance identifier.<br />
* The VMware Compute drivers now support booting an instance from an ISO image.<br />
* The VMware Compute drivers now support the aging of cached images.<br />
<br />
===== XenServer =====<br />
<br />
TODO...<br />
<br />
==== API ====<br />
<br />
* In OpenStack Compute, the <tt>OS-DCF:diskConfig</tt> API attribute is no longer supported in V3 of the nova API.<br />
* The Compute API currently supports both XML and JSON formats. Support for the XML format is now deprecated and will be retired in a future release.<br />
* The Compute API now exposes a mechanism for permanently removing decommissioned compute nodes. Previously these would continue to be listed even where the compute service had been disabled and the system re-provisioned. This functionality is provided by the <tt>ExtendedServicesDelete</tt> API extension.<br />
* The V3 API admin_actions plugin has been separated into logically distinct plugins, so operators can enable subsets of the functionality it previously bundled.<br />
* The Compute service now uses the tenant identifier instead of the tenant name when authenticating with OpenStack Networking (Neutron). This improves support for the OpenStack Identity API v3 which allows non-unique tenant names.<br />
* The Compute API now exposes the hypervisor IP address, allowing it to be retrieved by administrators using the <tt>nova hypervisor-show</tt> command.<br />
<br />
==== Scheduler ====<br />
<br />
* The scheduler now includes an initial implementation of a caching scheduler driver. The caching scheduler uses the existing facilities for applying scheduler filters and weights but caches the list of available hosts. When a user request is passed to the caching scheduler it attempts to perform scheduling based on the list of cached hosts, with a view to improving scheduler performance.<br />
* A new scheduler filter, <tt>AggregateImagePropertiesIsolation</tt>, has been introduced. The new filter schedules instances to hosts based on matching namespaced image properties with host aggregate properties. Hosts that do not belong to any host aggregate remain valid scheduling targets for instances based on all images. The new Compute service configuration keys <tt>aggregate_image_properties_isolation_namespace</tt> and <tt>aggregate_image_properties_isolation_separator</tt> are used to determine which image properties are examined by the filter.<br />
* Weight normalization in OpenStack Compute:<br />
** https://review.openstack.org/#/c/27160/ Weights are now normalized, so there is no need to inflate multipliers artificially. The maximum weight a weigher will assign to a node is 1.0 and the minimum is 0.0.<br />
** nova.cells.weights.weight_offset.WeightOffsetWeigher introduces a new configuration option, <tt>offset_weight_multiplier</tt>.<br />
** https://review.openstack.org/#/c/36417/ introduces stacking flags for weighers. Negative multipliers should not be used for stacking, but such weighers remain compatible (they issue a deprecation warning).<br />
* The scheduler now supports server groups, with two policy types: affinity and anti-affinity. Instances that are members of a group are scheduled onto hosts according to the group's policy.<br />
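The weight normalization noted above maps each weigher's raw scores into the [0.0, 1.0] range before the configured multiplier is applied. A minimal, self-contained sketch of that scaling (illustrative names; not the scheduler's actual code):<br />

```python
# Illustrative min-max weight normalization, as used conceptually by the
# scheduler's weighers. Not Nova's real implementation.

def normalize(weights):
    """Scale raw weights into [0.0, 1.0]; equal weights all map to 0.0."""
    lo, hi = min(weights), max(weights)
    if hi == lo:
        # No spread between hosts: nothing to differentiate on.
        return [0.0] * len(weights)
    return [(w - lo) / float(hi - lo) for w in weights]

raw = [10.0, 30.0, 20.0]
normalized = normalize(raw)
```

For the raw scores above, the best host gets weight 1.0, the worst 0.0, and the middle host 0.5, so multipliers only need to express relative importance between weighers.<br />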
<br />
==== Other Features ====<br />
<br />
* Notifications are now generated upon the creation and deletion of keypairs.<br />
* Notifications are now generated when a Compute host is enabled, disabled, powered on, shut down, rebooted, put into maintenance mode, or taken out of maintenance mode.<br />
* Compute services can now shut down gracefully: when a service shutdown is requested, processing of new requests is disabled, while requests already in progress are allowed to complete before the service terminates.<br />
* The value of the <tt>running_deleted_instance_action</tt> configuration key determines what action the Compute service takes when it finds running instances that were previously marked deleted. A new <tt>shutdown</tt> value has been added, which allows administrators to keep such instances available for diagnostics while still releasing their runtime resources.<br />
* File injection is now disabled by default in OpenStack Compute. Instead it is recommended that the ConfigDrive and metadata server facilities are used to modify guests at launch. To enable file injection modify the <tt>inject_key</tt> and <tt>inject_partition</tt> configuration keys in <tt>/etc/nova/nova.conf</tt> and restart the Compute services. The file injection mechanism is likely to be disabled in a future release.<br />
* A number of changes have been made to the expected format of the <tt>/etc/nova/nova.conf</tt> configuration file, with a view to ensuring that all configuration groups in the file use descriptive names. A number of driver-specific flags, including those for the Libvirt driver, have also been moved to their own option groups.<br />
<br />
=== Known Issues ===<br />
* OpenStack Compute has some features that use newer API versions from other projects, but the following are the only API versions tested in Icehouse:<br />
** Keystone v2<br />
** Cinder v1<br />
** Glance v1<br />
<br />
=== Upgrade Notes ===<br />
<br />
* An early Docker compute driver was included in the Havana release. This driver has been moved from Nova into its own repository. The new location is http://git.openstack.org/cgit/stackforge/nova-docker<br />
* https://review.openstack.org/50668 - The <tt>compute_api_class</tt> configuration option has been removed.<br />
* https://review.openstack.org/#/c/54290/ - The following deprecated configuration option aliases have been removed in favor of their new names:<br />
** <tt>service_quantum_metadata_proxy</tt><br />
** <tt>quantum_metadata_proxy_shared_secret</tt><br />
** <tt>use_quantum_default_nets</tt><br />
** <tt>quantum_default_tenant_id</tt><br />
** <tt>vpn_instance_type</tt><br />
** <tt>default_instance_type</tt><br />
** <tt>quantum_url</tt><br />
** <tt>quantum_url_timeout</tt><br />
** <tt>quantum_admin_username</tt><br />
** <tt>quantum_admin_password</tt><br />
** <tt>quantum_admin_tenant_name</tt><br />
** <tt>quantum_region_name</tt><br />
** <tt>quantum_admin_auth_url</tt><br />
** <tt>quantum_api_insecure</tt><br />
** <tt>quantum_auth_strategy</tt><br />
** <tt>quantum_ovs_bridge</tt><br />
** <tt>quantum_extension_sync_interval</tt><br />
** <tt>vmwareapi_host_ip</tt><br />
** <tt>vmwareapi_host_username</tt><br />
** <tt>vmwareapi_host_password</tt><br />
** <tt>vmwareapi_cluster_name</tt><br />
** <tt>vmwareapi_task_poll_interval</tt><br />
** <tt>vmwareapi_api_retry_count</tt><br />
** <tt>vnc_port</tt><br />
** <tt>vnc_port_total</tt><br />
** <tt>use_linked_clone</tt><br />
** <tt>vmwareapi_vlan_interface</tt><br />
** <tt>vmwareapi_wsdl_loc</tt><br />
* The PowerVM driver has been removed: https://review.openstack.org/#/c/57774/<br />
* The keystone_authtoken defaults changed in nova.conf: https://review.openstack.org/#/c/62815/<br />
* libvirt LVM names changed from using <tt>instance_name_template</tt> to the instance UUID (https://review.openstack.org/#/c/76968). Manual cleanup may be required if a non-default <tt>instance_name_template</tt> is in use.<br />
* rbd disk names changed from using <tt>instance_name_template</tt> to the instance UUID. Manual cleanup of old virtual disks is required after the transition. (TBD find review)<br />
* Icehouse introduces libguestfs as a requirement. Installing Icehouse dependencies on a system currently running Havana may cause the Havana node to begin using libguestfs and break unexpectedly. It is recommended that <tt>libvirt_inject_partition=-2</tt> be set on Havana nodes before starting the package upgrade if the Nova packages will be updated last.<br />
* Creating a private flavor now adds access to the tenant automatically. This was the documented behavior in Havana, but the actual implementation in Havana and previous versions of Nova did not add the tenant automatically to private flavors.<br />
* Nova previously included a nova.conf.sample. This file was automatically generated and is no longer included directly. If you are packaging Nova and wish to include the sample config file, see etc/nova/README.nova.conf for instructions on how to generate the file at build time.<br />
* Nova now defaults to requiring an event from Neutron when booting libvirt guests. If you upgrade Nova before Neutron, you must disable this feature in Nova until Neutron supports it by setting '''vif_plugging_is_fatal=False''' and '''vif_plugging_timeout=0'''. Recommended order is: Nova (with this disabled), Neutron (with the notifications enabled), and then enable vif_plugging_is_fatal=True with the default value of vif_plugging_timeout.<br />
* Nova supports a limited live upgrade model for the compute nodes in Icehouse. To do this, upgrade the controller infrastructure (everything except nova-compute) first, but set the '''[upgrade_levels]/compute=icehouse-compat''' option. This will enable Icehouse controller services to talk to Havana compute services. Upgrades of individual compute nodes can then proceed normally. When all of the compute nodes are upgraded, unset the compute version option to restore the default and restart the controller services. '''NB:''' Certain operations (such as resize/migrate) will not work while the compute version is pinned.<br />
* The following configuration options are marked as deprecated in this release. See <tt>nova.conf.sample</tt> for their replacements. <tt>[GROUP]/option</tt><br />
** <tt>[DEFAULT]/rabbit_durable_queues</tt><br />
** <tt>[rpc_notifier2]/topics</tt><br />
** <tt>[DEFAULT]/log_config</tt><br />
** <tt>[DEFAULT]/logfile</tt><br />
** <tt>[DEFAULT]/logdir</tt><br />
** <tt>[DEFAULT]/base_dir_name</tt><br />
** <tt>[DEFAULT]/instance_type_extra_specs</tt><br />
** <tt>[DEFAULT]/db_backend</tt><br />
** <tt>[DEFAULT]/sql_connection</tt><br />
** <tt>[DATABASE]/sql_connection</tt><br />
** <tt>[sql]/connection</tt><br />
** <tt>[DEFAULT]/sql_idle_timeout</tt><br />
** <tt>[DATABASE]/sql_idle_timeout</tt><br />
** <tt>[sql]/idle_timeout</tt><br />
** <tt>[DEFAULT]/sql_min_pool_size</tt><br />
** <tt>[DATABASE]/sql_min_pool_size</tt><br />
** <tt>[DEFAULT]/sql_max_pool_size</tt><br />
** <tt>[DATABASE]/sql_max_pool_size</tt><br />
** <tt>[DEFAULT]/sql_max_retries</tt><br />
** <tt>[DATABASE]/sql_max_retries</tt><br />
** <tt>[DEFAULT]/sql_retry_interval</tt><br />
** <tt>[DATABASE]/reconnect_interval</tt><br />
** <tt>[DEFAULT]/sql_max_overflow</tt><br />
** <tt>[DATABASE]/sqlalchemy_max_overflow</tt><br />
** <tt>[DEFAULT]/sql_connection_debug</tt><br />
** <tt>[DEFAULT]/sql_connection_trace</tt><br />
** <tt>[DATABASE]/sqlalchemy_pool_timeout</tt><br />
** <tt>[DEFAULT]/memcache_servers</tt><br />
** <tt>[DEFAULT]/libvirt_type</tt><br />
** <tt>[DEFAULT]/libvirt_uri</tt><br />
** <tt>[DEFAULT]/libvirt_inject_password</tt><br />
** <tt>[DEFAULT]/libvirt_inject_key</tt><br />
** <tt>[DEFAULT]/libvirt_inject_partition</tt><br />
** <tt>[DEFAULT]/libvirt_vif_driver</tt><br />
** <tt>[DEFAULT]/libvirt_volume_drivers</tt><br />
** <tt>[DEFAULT]/libvirt_disk_prefix</tt><br />
** <tt>[DEFAULT]/libvirt_wait_soft_reboot_seconds</tt><br />
** <tt>[DEFAULT]/libvirt_cpu_mode</tt><br />
** <tt>[DEFAULT]/libvirt_cpu_model</tt><br />
** <tt>[DEFAULT]/libvirt_snapshots_directory</tt><br />
** <tt>[DEFAULT]/libvirt_images_type</tt><br />
** <tt>[DEFAULT]/libvirt_images_volume_group</tt><br />
** <tt>[DEFAULT]/libvirt_sparse_logical_volumes</tt><br />
** <tt>[DEFAULT]/libvirt_images_rbd_pool</tt><br />
** <tt>[DEFAULT]/libvirt_images_rbd_ceph_conf</tt><br />
** <tt>[DEFAULT]/libvirt_snapshot_compression</tt><br />
** <tt>[DEFAULT]/libvirt_use_virtio_for_bridges</tt><br />
** <tt>[DEFAULT]/libvirt_iscsi_use_multipath</tt><br />
** <tt>[DEFAULT]/libvirt_iser_use_multipath</tt><br />
** <tt>[DEFAULT]/matchmaker_ringfile</tt><br />
** <tt>[DEFAULT]/agent_timeout</tt><br />
** <tt>[DEFAULT]/agent_version_timeout</tt><br />
** <tt>[DEFAULT]/agent_resetnetwork_timeout</tt><br />
** <tt>[DEFAULT]/xenapi_agent_path</tt><br />
** <tt>[DEFAULT]/xenapi_disable_agent</tt><br />
** <tt>[DEFAULT]/xenapi_use_agent_default</tt><br />
** <tt>[DEFAULT]/xenapi_login_timeout</tt><br />
** <tt>[DEFAULT]/xenapi_connection_concurrent</tt><br />
** <tt>[DEFAULT]/xenapi_connection_url</tt><br />
** <tt>[DEFAULT]/xenapi_connection_username</tt><br />
** <tt>[DEFAULT]/xenapi_connection_password</tt><br />
** <tt>[DEFAULT]/xenapi_vhd_coalesce_poll_interval</tt><br />
** <tt>[DEFAULT]/xenapi_check_host</tt><br />
** <tt>[DEFAULT]/xenapi_vhd_coalesce_max_attempts</tt><br />
** <tt>[DEFAULT]/xenapi_sr_base_path</tt><br />
** <tt>[DEFAULT]/target_host</tt><br />
** <tt>[DEFAULT]/target_port</tt><br />
** <tt>[DEFAULT]/iqn_prefix</tt><br />
** <tt>[DEFAULT]/xenapi_remap_vbd_dev</tt><br />
** <tt>[DEFAULT]/xenapi_remap_vbd_dev_prefix</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_base_url</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_seed_chance</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_seed_duration</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_max_last_accessed</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_listen_port_start</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_listen_port_end</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_download_stall_cutoff</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_max_seeder_processes_per_host</tt><br />
** <tt>[DEFAULT]/use_join_force</tt><br />
** <tt>[DEFAULT]/xenapi_ovs_integration_bridge</tt><br />
** <tt>[DEFAULT]/cache_images</tt><br />
** <tt>[DEFAULT]/xenapi_image_compression_level</tt><br />
** <tt>[DEFAULT]/default_os_type</tt><br />
** <tt>[DEFAULT]/block_device_creation_timeout</tt><br />
** <tt>[DEFAULT]/max_kernel_ramdisk_size</tt><br />
** <tt>[DEFAULT]/sr_matching_filter</tt><br />
** <tt>[DEFAULT]/xenapi_sparse_copy</tt><br />
** <tt>[DEFAULT]/xenapi_num_vbd_unplug_retries</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_images</tt><br />
** <tt>[DEFAULT]/xenapi_ipxe_network_name</tt><br />
** <tt>[DEFAULT]/xenapi_ipxe_boot_menu_url</tt><br />
** <tt>[DEFAULT]/xenapi_ipxe_mkisofs_cmd</tt><br />
** <tt>[DEFAULT]/xenapi_running_timeout</tt><br />
** <tt>[DEFAULT]/xenapi_vif_driver</tt><br />
** <tt>[DEFAULT]/xenapi_image_upload_handler</tt><br />
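The staged-upgrade settings above can be sketched as a <tt>nova.conf</tt> fragment. This is an illustrative sketch only; the option and section names are taken from these notes, and placement should be verified against your release's <tt>nova.conf.sample</tt>:<br />

```ini
# /etc/nova/nova.conf -- staged-upgrade sketch (controller side)

[DEFAULT]
# Disable the Neutron VIF-plugging event requirement until Neutron
# has been upgraded to emit the notifications.
vif_plugging_is_fatal = False
vif_plugging_timeout = 0

[upgrade_levels]
# Let Icehouse controller services talk to Havana nova-compute
# services; unset this once every compute node has been upgraded.
compute = icehouse-compat
```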
<br />
== OpenStack Image Service (Glance) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Dashboard (Horizon) ==<br />
<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Identity (Keystone) ==<br />
<br />
=== Key New Features ===<br />
<br />
* New API features<br />
** <code>/v3/OS-FEDERATION/</code> allows Keystone to consume federated authentication via [https://shibboleth.net/ Shibboleth] for multiple Identity Providers, and mapping federated attributes into OpenStack group-based role assignments (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-federation-ext.md documentation]).<br />
** <code>POST /v3/users/{user_id}/password</code> allows API users to update their own passwords (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3.md#change-user-password-post-usersuser_idpassword documentation]).<br />
** <code>GET v3/auth/token?nocatalog</code> allows API users to opt-out of receiving the service catalog when performing online token validation (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3.md#validate-token-get-authtokensnocatalog documentation]).<br />
** <code>/v3/regions</code> provides a public interface for describing multi-region deployments (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3.md#regions-v3regions documentation]).<br />
** <code>/v3/OS-SIMPLECERT/</code> now publishes the certificates used for PKI token validation (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-simple-certs-ext.md documentation]).<br />
** <code>/v3/OS-TRUST/trusts</code> is now capable of providing limited-use delegation via the <code>remaining_uses</code> attribute of [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-trust-ext.md trusts].<br />
* The assignments backend (the source of authorization data) has now been completely separated from the identity backend (the source of authentication data). This means that you can now store your deployment's identity data in LDAP and your authorization data in SQL, for example.<br />
* KVS drivers are now capable of writing to persistent Key-Value stores such as Redis, Cassandra, or MongoDB.<br />
* Keystone's driver interfaces are now implemented as Abstract Base Classes (ABCs) to make it easier to track compatibility of custom driver implementations across releases.<br />
* Keystone's default <code>etc/policy.json</code> has been rewritten in an easier to read format.<br />
* [http://docs.openstack.org/developer/keystone/event_notifications.html Notifications] are now emitted in response to create, update and delete events on roles, groups, and trusts.<br />
* Custom extensions and driver implementations may now subscribe to internal-only event notifications, including ''disable'' events (which are only exposed externally as part of ''update'' events).<br />
* Keystone now emits [http://www.dmtf.org/standards/cadf Cloud Audit Data Federation (CADF)] event notifications in response to authentication events.<br />
* [https://review.openstack.org/#/c/50362/ Additional plugins] are provided to handle external authentication via <code>REMOTE_USER</code> with respect to single-domain versus multi-domain deployments.<br />
* <code>policy.json</code> can now perform enforcement on the target domain in a domain-aware operation using, for example, <code>%(target.{entity}.domain_id)s</code>.<br />
* The LDAP driver for the assignment backend now supports group-based role assignment operations.<br />
* Keystone now publishes token revocation ''events'' in addition to providing continued support for token revocation ''lists''. Token revocation events incur much less overhead (when compared to token revocation lists) and will enable Keystone to eliminate token persistence during the Juno release.<br />
* Deployers can now define arbitrary limits on the size of collections in API responses (for example, <code>GET /v3/users</code> might be configured to return only 100 users, rather than 10,000). Clients will be informed when truncation has occurred.<br />
* Lazy translation has been enabled so that responses are translated according to the requested Accept-Language header.<br />
* Keystone now emits i18n-ready log messages.<br />
* Collection filtering is now performed in the driver layer, where possible, for improved performance.<br />
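As a concrete illustration of the self-service password API listed above, the request can be sketched as follows. The endpoint URL and user ID are hypothetical, and the body schema should be checked against the linked v3 API documentation:<br />

```python
import json

KEYSTONE = "http://keystone.example.com:5000"  # hypothetical endpoint


def change_password_request(user_id, original, new):
    """Build the URL and JSON body for POST /v3/users/{user_id}/password."""
    url = "%s/v3/users/%s/password" % (KEYSTONE, user_id)
    body = {"user": {"original_password": original, "password": new}}
    return url, json.dumps(body)


url, body = change_password_request("u123", "old-secret", "new-secret")
# Send with any HTTP client, passing a valid X-Auth-Token header.
```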
<br />
=== Known Issues ===<br />
<br />
* [https://bugs.launchpad.net/keystone/+bug/1291157 Bug 1291157]: If using the <code>OS-FEDERATION</code> extension, deleting an Identity Provider or Protocol does ''not'' result in previously-issued tokens being revoked. This will not be fixed in the ''stable/icehouse'' branch.<br />
<br />
=== Upgrade Notes ===<br />
<br />
* The v2 API has been prepared for deprecation, but remains stable in the Icehouse release. It may be formally deprecated during the Juno release pending widespread support for the v3 API.<br />
* Backwards compatibility for <code>keystone.middleware.auth_token</code> has been removed. The <code>auth_token</code> middleware module is no longer provided by Keystone itself and must be imported from <code>keystoneclient.middleware.auth_token</code> instead.<br />
* The <code>s3_token</code> middleware module is no longer provided by Keystone itself, and must be imported from <code>keystoneclient.middleware.s3_token</code> instead. Backwards compatibility for <code>keystone.middleware.s3_token</code> will be removed in Juno.<br />
* The default token duration has been reduced from 24 hours to just 1 hour. This effectively reduces the number of tokens that must be persisted at any one time, and (for PKI deployments) reduces the overhead of the token revocation list.<br />
* <code>keystone.contrib.access.core.AccessLogMiddleware</code> has been deprecated in favor of either the eventlet debug access log or Apache httpd access log and may be removed in the K release.<br />
* <code>keystone.contrib.stats.core.StatsMiddleware</code> has been deprecated in favor of external tooling and may be removed in the K release.<br />
* <code>keystone.middleware.XmlBodyMiddleware</code> has been deprecated in favor of support for "application/json" only and may be removed in the K release.<br />
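The middleware moves above typically surface in a service's paste pipeline configuration. A minimal sketch of the updated <tt>auth_token</tt> filter entry, assuming the standard <tt>filter_factory</tt> entry point exported by keystoneclient:<br />

```ini
# api-paste.ini fragment (sketch): the auth_token filter now comes
# from keystoneclient rather than keystone itself.
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
```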
<br />
== OpenStack Network Service (Neutron) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet.<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Block Storage (Cinder) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Metering (Ceilometer) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
* [https://bugs.launchpad.net/ceilometer/+bug/1297528 Bug 1297528]<br />
<br />
=== Upgrade Notes ===<br />
None yet<br />
<br />
== OpenStack Orchestration (Heat) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
None yet<br />
<br />
== OpenStack Database service (Trove) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
None yet<br />
<br />
== OpenStack Documentation ==<br />
<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
None yet</div>Russellbhttps://wiki.openstack.org/w/index.php?title=ReleaseNotes/Icehouse&diff=47320ReleaseNotes/Icehouse2014-04-01T15:02:55Z<p>Russellb: /* Upgrade Notes */</p>
<hr />
<div>= OpenStack 2014.1 (Icehouse) Release Notes =<br />
<br />
== General Upgrade Notes ==<br />
<br />
tbd<br />
<br />
== OpenStack Object Storage (Swift) ==<br />
<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Compute (Nova) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== Compute Drivers ====<br />
<br />
===== Libvirt (KVM) =====<br />
<br />
* The Libvirt compute driver now supports providing modified kernel arguments to booting compute instances. The kernel arguments are retrieved from the <tt>os_command_line</tt> key in the image metadata as stored in Glance, if a value for the key was provided. Otherwise the default kernel arguments are used.<br />
* The Libvirt driver now supports using VirtIO SCSI (virtio-scsi) instead of VirtIO Block (virtio-blk) to provide block device access for instances. Virtio SCSI is a para-virtualized SCSI controller device designed as a future successor to VirtIO Block and aiming to provide improved scalability and performance.<br />
* The Libvirt Compute driver now supports adding a Virtio RNG device to compute instances to provide increased entropy. Virtio RNG is a paravirtual random number generation device that allows the compute node to provide entropy to the compute instances in order to fill their entropy pool. The default entropy source is /dev/random; however, a physical hardware RNG device attached to the host can also be used. The Virtio RNG device is enabled using the hw_rng property in the metadata of the image used to build the instance.<br />
* The Libvirt driver now allows instances to be configured with a video driver other than the default (<tt>cirrus</tt>). This allows the specification of different video driver models, different amounts of video RAM, and different numbers of heads. These values are configured by setting the <tt>hw_video_model</tt>, <tt>hw_video_vram</tt>, and <tt>hw_video_head</tt> properties in the image metadata. Currently supported video driver models are <tt>vga</tt>, <tt>cirrus</tt>, <tt>vmvga</tt>, <tt>xen</tt> and <tt>qxl</tt>.<br />
* Watchdog support has been added to the Libvirt driver. The watchdog device used is <tt>i6300esb</tt>. It is enabled by setting the <tt>hw_watchdog_action</tt> property in the image properties or flavor extra specifications (<tt>extra_specs</tt>) to a value other than <tt>disabled</tt>. Supported <tt>hw_watchdog_action</tt> property values, which denote the action for the watchdog device to take in the event of an instance failure, are <tt>poweroff</tt>, <tt>reset</tt>, <tt>pause</tt>, and <tt>none</tt>.<br />
* The High Precision Event Timer (HPET) is now disabled for instances created using the Libvirt driver. The use of this option was found to lead to clock drift in Windows guests when under heavy load.<br />
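The Libvirt features above are all driven by Glance image metadata. A minimal sketch of assembling those properties, using only the property names and allowed values listed in these notes (the helper itself is illustrative, not part of Nova):<br />

```python
# Allowed values are taken from the release notes above.
VALID_VIDEO_MODELS = {"vga", "cirrus", "vmvga", "xen", "qxl"}
VALID_WATCHDOG_ACTIONS = {"disabled", "poweroff", "reset", "pause", "none"}


def libvirt_image_properties(kernel_args=None, video_model=None,
                             watchdog_action=None):
    """Build an image-metadata properties dict for the Libvirt driver."""
    props = {}
    if kernel_args is not None:
        props["os_command_line"] = kernel_args
    if video_model is not None:
        if video_model not in VALID_VIDEO_MODELS:
            raise ValueError("unsupported video model: %s" % video_model)
        props["hw_video_model"] = video_model
    if watchdog_action is not None:
        if watchdog_action not in VALID_WATCHDOG_ACTIONS:
            raise ValueError("unsupported watchdog action: %s" % watchdog_action)
        props["hw_watchdog_action"] = watchdog_action
    return props


props = libvirt_image_properties(kernel_args="console=ttyS0",
                                 video_model="qxl",
                                 watchdog_action="reset")
```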
<br />
===== VMware =====<br />
<br />
* The VMware Compute drivers now support the virtual machine diagnostics call. Diagnostics can be retrieved using the "nova diagnostics INSTANCE" command, where INSTANCE is replaced by an instance name or instance identifier.<br />
* The VMware Compute drivers now support booting an instance from an ISO image.<br />
* The VMware Compute drivers now support the aging of cached images.<br />
<br />
===== XenServer =====<br />
<br />
TODO...<br />
<br />
==== API ====<br />
<br />
* In OpenStack Compute, the <tt>OS-DCF:diskConfig</tt> API attribute is no longer supported in V3 of the nova API.<br />
* The Compute API currently supports both XML and JSON formats. Support for the XML format is now deprecated and will be retired in a future release.<br />
* The Compute API now exposes a mechanism for permanently removing decommissioned compute nodes. Previously these would continue to be listed even where the compute service had been disabled and the system re-provisioned. This functionality is provided by the <tt>ExtendedServicesDelete</tt> API extension.<br />
* The V3 API admin_actions plugin has been separated into logically distinct plugins so operators can enable subsets of the functionality it previously provided.<br />
* The Compute service now uses the tenant identifier instead of the tenant name when authenticating with OpenStack Networking (Neutron). This improves support for the OpenStack Identity API v3 which allows non-unique tenant names.<br />
* The Compute API now exposes the hypervisor IP address, allowing it to be retrieved by administrators using the <tt>nova hypervisor-show</tt> command.<br />
<br />
==== Scheduler ====<br />
<br />
* The scheduler now includes an initial implementation of a caching scheduler driver. The caching scheduler uses the existing facilities for applying scheduler filters and weights but caches the list of available hosts. When a user request is passed to the caching scheduler it attempts to perform scheduling based on the list of cached hosts, with a view to improving scheduler performance.<br />
* A new scheduler filter, <tt>AggregateImagePropertiesIsolation</tt>, has been introduced. The new filter schedules instances to hosts based on matching namespaced image properties with host aggregate properties. Hosts that do not belong to any host aggregate remain valid scheduling targets for instances based on all images. The new Compute service configuration keys <tt>aggregate_image_properties_isolation_namespace</tt> and <tt>aggregate_image_properties_isolation_separator</tt> are used to determine which image properties are examined by the filter.<br />
* Weight normalization in OpenStack Compute:<br />
** https://review.openstack.org/#/c/27160/ Weights are now normalized, so there is no need to inflate multipliers artificially. The maximum weight that a weigher will assign to a node is 1.0 and the minimum is 0.0.<br />
** nova.cells.weights.weight_offset.WeightOffsetWeigher introduces a new configuration option, 'offset_weight_multiplier'.<br />
** https://review.openstack.org/#/c/36417/ Introduces stacking flags for weighers. Negative multipliers should not be used for stacking, but such weighers remain compatible (they issue a deprecation warning).<br />
* The scheduler now supports server groups with affinity and anti-affinity policies; servers launched in a group are placed according to the group's predefined policy.<br />
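The normalization described above can be sketched as follows. The min-max formula is an assumption consistent with the stated 0.0-1.0 range, and the function names are illustrative rather than Nova's own:<br />

```python
def normalize(weights):
    """Min-max normalize raw weigher output into the range [0.0, 1.0]."""
    lo, hi = min(weights), max(weights)
    if lo == hi:  # all hosts weighed equally
        return [0.0 for _ in weights]
    return [(w - lo) / float(hi - lo) for w in weights]


def combined_weight(per_weigher, multipliers):
    """Sum multiplier * normalized weight across weighers, per host."""
    normalized = [normalize(ws) for ws in per_weigher]
    return [sum(m * col[i] for m, col in zip(multipliers, normalized))
            for i in range(len(per_weigher[0]))]


# Two weighers over three hosts; the raw scales differ wildly, but after
# normalization the multipliers alone control their relative influence.
raw = [[2048.0, 4096.0, 8192.0],   # e.g. free RAM in MB
       [0.1, 0.5, 0.9]]            # e.g. some utilization metric
scores = combined_weight(raw, [1.0, 1.0])
```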
<br />
==== Other Features ====<br />
<br />
* Notifications are now generated upon the creation and deletion of keypairs.<br />
* Notifications are now generated when an Compute host is enabled, disabled, powered on, shut down, rebooted, put into maintenance mode and taken out of maintenance mode.<br />
* Compute services are now able to shut down gracefully by disabling processing of new requests when a service shutdown is requested, while allowing requests already in progress to complete before terminating.<br />
* The Compute service determines what action to take when instances are found to be running that were previously marked deleted based on the value of the <tt>running_deleted_instance_action</tt> configuration key. A new <tt>shutdown</tt> value has been added. Using this new value allows administrators to optionally keep instances found in this state for diagnostics while still releasing the runtime resources.<br />
* File injection is now disabled by default in OpenStack Compute. Instead it is recommended that the ConfigDrive and metadata server facilities are used to modify guests at launch. To enable file injection modify the <tt>inject_key</tt> and <tt>inject_partition</tt> configuration keys in <tt>/etc/nova/nova.conf</tt> and restart the Compute services. The file injection mechanism is likely to be disabled in a future release.<br />
* A number of changes have been made to the expected format <tt>/etc/nova/nova.conf</tt> configuration file with a view to ensuring that all configuration groups in the file use descriptive names. A number of driver specific flags, including those for the Libvirt driver, have also been moved to their own option groups.<br />
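Two of the behaviors above are controlled from <tt>/etc/nova/nova.conf</tt>. A sketch of the relevant options, using the key names given in these notes (section placement should be verified against <tt>nova.conf.sample</tt>):<br />

```ini
[DEFAULT]
# Power off, rather than reap, running instances already marked deleted,
# keeping them available for diagnostics.
running_deleted_instance_action = shutdown

# File injection is disabled by default; prefer ConfigDrive or the
# metadata server. Re-enable explicitly only if required.
inject_key = false
inject_partition = -2
```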
<br />
=== Known Issues ===<br />
* OpenStack Compute has some features that use newer API versions from other projects, but the following are the only API versions tested in Icehouse:<br />
** Keystone v2<br />
** Cinder v1<br />
** Glance v1<br />
<br />
=== Upgrade Notes ===<br />
<br />
* An early Docker compute driver was included in the Havana release. This driver has been moved from Nova into its own repository. The new location is http://git.openstack.org/cgit/stackforge/nova-docker<br />
* https://review.openstack.org/50668 - The <tt>compute_api_class</tt> configuration option has been removed.<br />
* https://review.openstack.org/#/c/54290/ - The following deprecated configuration option aliases have been removed in favor of their new names:<br />
** <tt>service_quantum_metadata_proxy</tt><br />
** <tt>quantum_metadata_proxy_shared_secret</tt><br />
** <tt>use_quantum_default_nets</tt><br />
** <tt>quantum_default_tenant_id</tt><br />
** <tt>vpn_instance_type</tt><br />
** <tt>default_instance_type</tt><br />
** <tt>quantum_url</tt><br />
** <tt>quantum_url_timeout</tt><br />
** <tt>quantum_admin_username</tt><br />
** <tt>quantum_admin_password</tt><br />
** <tt>quantum_admin_tenant_name</tt><br />
** <tt>quantum_region_name</tt><br />
** <tt>quantum_admin_auth_url</tt><br />
** <tt>quantum_api_insecure</tt><br />
** <tt>quantum_auth_strategy</tt><br />
** <tt>quantum_ovs_bridge</tt><br />
** <tt>quantum_extension_sync_interval</tt><br />
** <tt>vmwareapi_host_ip</tt><br />
** <tt>vmwareapi_host_username</tt><br />
** <tt>vmwareapi_host_password</tt><br />
** <tt>vmwareapi_cluster_name</tt><br />
** <tt>vmwareapi_task_poll_interval</tt><br />
** <tt>vmwareapi_api_retry_count</tt><br />
** <tt>vnc_port</tt><br />
** <tt>vnc_port_total</tt><br />
** <tt>use_linked_clone</tt><br />
** <tt>vmwareapi_vlan_interface</tt><br />
** <tt>vmwareapi_wsdl_loc</tt><br />
* The PowerVM driver has been removed: https://review.openstack.org/#/c/57774/<br />
* The keystone_authtoken defaults changed in nova.conf: https://review.openstack.org/#/c/62815/<br />
* libvirt LVM disk names changed from using instance_name_template to the instance uuid (https://review.openstack.org/#/c/76968). Manual cleanup may be required if using a non-default instance_name_template.<br />
* rbd disk names changed from using instance_name_template to instance uuid. Manual cleanup required of old virtual disks after the transition. (TBD find review)<br />
* Icehouse adds libguestfs as a dependency. Installing Icehouse dependencies on a system currently running Havana may cause the Havana node to begin using libguestfs and break unexpectedly. It is recommended that libvirt_inject_partition=-2 be set on Havana nodes before starting a package upgrade if the Nova packages will be updated last.<br />
* Creating a private flavor now adds access to the tenant automatically. This was the documented behavior in Havana, but the actual implementation in Havana and previous versions of Nova did not add the tenant automatically to private flavors.<br />
* Nova previously included a nova.conf.sample. This file was automatically generated and is no longer included directly. If you are packaging Nova and wish to include the sample config file, see etc/nova/README.nova.conf for instructions on how to generate the file at build time.<br />
<br />
== OpenStack Image Service (Glance) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Dashboard (Horizon) ==<br />
<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Identity (Keystone) ==<br />
<br />
=== Key New Features ===<br />
<br />
* New API features<br />
** <code>/v3/OS-FEDERATION/</code> allows Keystone to consume federated authentication via [https://shibboleth.net/ Shibboleth] for multiple Identity Providers, and mapping federated attributes into OpenStack group-based role assignments (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-federation-ext.md documentation]).<br />
** <code>POST /v3/users/{user_id}/password</code> allows API users to update their own passwords (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3.md#change-user-password-post-usersuser_idpassword documentation]).<br />
** <code>GET v3/auth/token?nocatalog</code> allows API users to opt-out of receiving the service catalog when performing online token validation (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3.md#validate-token-get-authtokensnocatalog documentation]).<br />
** <code>/v3/regions</code> provides a public interface for describing multi-region deployments (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3.md#regions-v3regions documentation]).<br />
** <code>/v3/OS-SIMPLECERT/</code> now publishes the certificates used for PKI token validation (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-simple-certs-ext.md documentation]).<br />
** <code>/v3/OS-TRUST/trusts</code> is now capable of providing limited-use delegation via the <code>remaining_uses</code> attribute of [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-trust-ext.md trusts].<br />
* The assignments backend (the source of authorization data) has now been completely separated from the identity backend (the source of authentication data). This means that you can now back your deployment's identity data to LDAP, and your authorization data to SQL, for example.<br />
* KVS drivers are now capable of writing to persistent Key-Value stores such as Redis, Cassandra, or MongoDB.<br />
* Keystone's driver interfaces are now implemented as Abstract Base Classes (ABCs) to make it easier to track compatibility of custom driver implementations across releases.<br />
* Keystone's default <code>etc/policy.json</code> has been rewritten in an easier to read format.<br />
* [http://docs.openstack.org/developer/keystone/event_notifications.html Notifications] are now emitted in response to create, update and delete events on roles, groups, and trusts.<br />
* Custom extensions and driver implementations may now subscribe to internal-only event notifications, including ''disable'' events (which are only exposed externally as part of ''update'' events).<br />
* Keystone now emits [http://www.dmtf.org/standards/cadf Cloud Audit Data Federation (CADF)] event notifications in response to authentication events.<br />
* [https://review.openstack.org/#/c/50362/ Additional plugins] are provided to handle external authentication via <code>REMOTE_USER</code> with respect to single-domain versus multi-domain deployments.<br />
* <code>policy.json</code> can now perform enforcement on the target domain in a domain-aware operation using, for example, <code>%(target.{entity}.domain_id)s</code>.<br />
* The LDAP driver for the assignment backend now supports group-based role assignment operations.<br />
* Keystone now publishes token revocation ''events'' in addition to providing continued support for token revocation ''lists''. Token revocation events are designed to incur much less overhead (compared to token revocation lists) and will enable Keystone to eliminate token persistence during the Juno release.<br />
* Deployers can now define arbitrary limits on the size of collections in API responses (for example, <code>GET /v3/users</code> might be configured to return only 100 users, rather than 10,000). Clients will be informed when truncation has occurred.<br />
* Lazy translation has been enabled so that responses are translated according to the requested <code>Accept-Language</code> header.<br />
* Keystone now emits i18n-ready log messages.<br />
* Collection filtering is now performed in the driver layer, where possible, for improved performance.<br />
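The self-service API additions above can be sketched with a small client-side helper. This is a sketch only: the request shapes follow the linked identity-api v3 documents, and the helper function names are hypothetical, not part of any OpenStack library.<br />

```python
import json

def password_change_request(user_id, original_password, new_password):
    """Build the self-service password change call: POST /v3/users/{user_id}/password.

    Field names ("original_password", "password") follow the identity-api v3
    document linked above.
    """
    url = "/v3/users/%s/password" % user_id
    body = {"user": {"original_password": original_password,
                     "password": new_password}}
    return url, json.dumps(body)

def validate_token_url(nocatalog=True):
    """Token validation URL; ?nocatalog omits the service catalog from the response."""
    return "/v3/auth/tokens" + ("?nocatalog" if nocatalog else "")
```

Sending these requests still requires a live Keystone endpoint and a valid <code>X-Auth-Token</code> header; the helpers only construct the URL and body.<br />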
<br />
=== Known Issues ===<br />
<br />
* [https://bugs.launchpad.net/keystone/+bug/1291157 Bug 1291157]: If using the <code>OS-FEDERATION</code> extension, deleting an Identity Provider or Protocol does ''not'' result in previously-issued tokens being revoked. This will not be fixed in the ''stable/icehouse'' branch.<br />
<br />
=== Upgrade Notes ===<br />
<br />
* The v2 API has been prepared for deprecation, but remains stable in the Icehouse release. It may be formally deprecated during the Juno release pending widespread support for the v3 API.<br />
* Backwards compatibility for <code>keystone.middleware.auth_token</code> has been removed. The <code>auth_token</code> middleware module is no longer provided by Keystone itself, and must be imported from <code>keystoneclient.middleware.auth_token</code> instead.<br />
* The <code>s3_token</code> middleware module is no longer provided by Keystone itself, and must be imported from <code>keystoneclient.middleware.s3_token</code> instead. Backwards compatibility for <code>keystone.middleware.s3_token</code> will be removed in Juno.<br />
* The default token duration has been reduced from 24 hours to just 1 hour. This effectively reduces the number of tokens that must be persisted at any one time, and (for PKI deployments) reduces the overhead of the token revocation list.<br />
* <code>keystone.contrib.access.core.AccessLogMiddleware</code> has been deprecated in favor of either the eventlet debug access log or Apache httpd access log and may be removed in the K release.<br />
* <code>keystone.contrib.stats.core.StatsMiddleware</code> has been deprecated in favor of external tooling and may be removed in the K release.<br />
* <code>keystone.middleware.XmlBodyMiddleware</code> has been deprecated in favor of support for "application/json" only and may be removed in the K release.<br />
<br />
== OpenStack Network Service (Neutron) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet.<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Block Storage (Cinder) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Metering (Ceilometer) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
* [https://bugs.launchpad.net/ceilometer/+bug/1297528 Bug 1297528]<br />
<br />
=== Upgrade Notes ===<br />
None yet<br />
<br />
== OpenStack Orchestration (Heat) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
None yet<br />
<br />
== OpenStack Database service (Trove) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
None yet<br />
<br />
== OpenStack Documentation ==<br />
<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
None yet</div>Russellbhttps://wiki.openstack.org/w/index.php?title=ReleaseNotes/Icehouse&diff=47319ReleaseNotes/Icehouse2014-04-01T15:00:33Z<p>Russellb: /* API */</p>
<hr />
<div>= OpenStack 2014.1 (Icehouse) Release Notes =<br />
<br />
== General Upgrade Notes ==<br />
<br />
tbd<br />
<br />
== OpenStack Object Storage (Swift) ==<br />
<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Compute (Nova) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== Compute Drivers ====<br />
<br />
===== Libvirt (KVM) =====<br />
<br />
* The Libvirt compute driver now supports passing modified kernel arguments to compute instances at boot. The kernel arguments are retrieved from the <tt>os_command_line</tt> key in the image metadata as stored in Glance, if a value for the key was provided. Otherwise the default kernel arguments are used.<br />
* The Libvirt driver now supports using VirtIO SCSI (virtio-scsi) instead of VirtIO Block (virtio-blk) to provide block device access for instances. Virtio SCSI is a para-virtualized SCSI controller device designed as a future successor to VirtIO Block and aiming to provide improved scalability and performance.<br />
* The Libvirt Compute driver now supports adding a Virtio RNG device to compute instances to provide increased entropy. Virtio RNG is a paravirtual random number generation device that allows the compute node to provide entropy to compute instances in order to fill their entropy pool. The default entropy device used is <tt>/dev/random</tt>; however, use of a physical hardware RNG device attached to the host is also possible. The Virtio RNG device is enabled using the <tt>hw_rng</tt> property in the metadata of the image used to build the instance.<br />
* The Libvirt driver now allows instances to be configured with a video driver other than the default (<tt>cirrus</tt>). This allows the specification of different video driver models, different amounts of video RAM, and different numbers of heads. These values are configured by setting the <tt>hw_video_model</tt>, <tt>hw_video_vram</tt>, and <tt>hw_video_head</tt> properties in the image metadata. Currently supported video driver models are <tt>vga</tt>, <tt>cirrus</tt>, <tt>vmvga</tt>, <tt>xen</tt> and <tt>qxl</tt>.<br />
* Watchdog support has been added to the Libvirt driver. The watchdog device used is <tt>i6300esb</tt>. It is enabled by setting the <tt>hw_watchdog_action</tt> property in the image properties or flavor extra specifications (<tt>extra_specs</tt>) to a value other than <tt>disabled</tt>. Supported <tt>hw_watchdog_action</tt> property values, which denote the action for the watchdog device to take in the event of an instance failure, are <tt>poweroff</tt>, <tt>reset</tt>, <tt>pause</tt>, and <tt>none</tt>.<br />
* The High Precision Event Timer (HPET) is now disabled for instances created using the Libvirt driver. The use of this option was found to lead to clock drift in Windows guests when under heavy load.<br />
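Most of the Libvirt features above are driven by image metadata. As an illustrative sketch, the properties might be set along these lines; the property names are those given in the notes above, while the values are examples only, not recommendations:<br />

```ini
# Glance image properties (key = value), set e.g. via the image metadata API
# or "--property" flags on image upload. Names from the notes above; values
# are illustrative only.
os_command_line = console=ttyS0 root=/dev/vda1
hw_video_model = qxl
hw_video_vram = 16
hw_video_head = 1
hw_watchdog_action = reset
```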
<br />
===== VMware =====<br />
<br />
* The VMware Compute drivers now support the virtual machine diagnostics call. Diagnostics can be retrieved using the "nova diagnostics INSTANCE" command, where INSTANCE is replaced by an instance name or instance identifier.<br />
* The VMware Compute drivers now support booting an instance from an ISO image.<br />
* The VMware Compute drivers now support the aging of cached images.<br />
<br />
===== XenServer =====<br />
<br />
TODO...<br />
<br />
==== API ====<br />
<br />
* In OpenStack Compute, the <tt>OS-DCF:diskConfig</tt> API attribute is no longer supported in V3 of the nova API.<br />
* The Compute API currently supports both XML and JSON formats. Support for the XML format is now deprecated and will be retired in a future release.<br />
* The Compute API now exposes a mechanism for permanently removing decommissioned compute nodes. Previously these would continue to be listed even where the compute service had been disabled and the system re-provisioned. This functionality is provided by the <tt>ExtendedServicesDelete</tt> API extension.<br />
* Separated the V3 API admin_actions plugin into logically separate plugins so operators can enable subsets of the functionality currently present in the plugin.<br />
* The Compute service now uses the tenant identifier instead of the tenant name when authenticating with OpenStack Networking (Neutron). This improves support for the OpenStack Identity API v3 which allows non-unique tenant names.<br />
* The Compute API now exposes the hypervisor IP address, allowing it to be retrieved by administrators using the <tt>nova hypervisor-show</tt> command.<br />
<br />
==== Scheduler ====<br />
<br />
* The scheduler now includes an initial implementation of a caching scheduler driver. The caching scheduler uses the existing facilities for applying scheduler filters and weights but caches the list of available hosts. When a user request is passed to the caching scheduler it attempts to perform scheduling based on the list of cached hosts, with a view to improving scheduler performance.<br />
* A new scheduler filter, <tt>AggregateImagePropertiesIsolation</tt>, has been introduced. The new filter schedules instances to hosts based on matching namespaced image properties with host aggregate properties. Hosts that do not belong to any host aggregate remain valid scheduling targets for instances based on all images. The new Compute service configuration keys <tt>aggregate_image_properties_isolation_namespace</tt> and <tt>aggregate_image_properties_isolation_separator</tt> are used to determine which image properties are examined by the filter.<br />
* Weight normalization in OpenStack Compute:<br />
** https://review.openstack.org/#/c/27160/ Weights are normalized, so there is no need to inflate multipliers artificially. The maximum weight that a weigher will put for a node is 1.0 and the minimum is 0.0. <br />
** nova.cells.weights.weight_offset.WeightOffsetWeigher introduces a new configuration option 'offset_weight_multiplier' <br />
** https://review.openstack.org/#/c/36417/ Introduce stacking flags for weighers. Negative multipliers should not be used for stacking, but such weighers remain compatible (they issue a deprecation warning).<br />
* The scheduler now supports server groups with two policy types, affinity and anti-affinity; servers in a group are placed according to the group's predefined policy.<br />
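As a sketch, the new aggregate-isolation filter is wired up in <tt>/etc/nova/nova.conf</tt> using the two configuration keys named above. The filter-list option name and the other filters shown are assumptions about a typical deployment, not prescribed values:<br />

```ini
[DEFAULT]
# Enable the new filter alongside whatever filters the deployment already uses
# (deployment-specific; shown here for illustration).
scheduler_default_filters = RetryFilter,ComputeFilter,AggregateImagePropertiesIsolation
# Only image properties under this namespace are compared with host aggregate
# metadata; the separator splits the namespace from the property name.
aggregate_image_properties_isolation_namespace = isolate
aggregate_image_properties_isolation_separator = .
```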
<br />
==== Other Features ====<br />
<br />
* Notifications are now generated upon the creation and deletion of keypairs.<br />
* Notifications are now generated when a Compute host is enabled, disabled, powered on, shut down, rebooted, put into maintenance mode and taken out of maintenance mode.<br />
* Compute services are now able to shut down gracefully by disabling processing of new requests when a service shutdown is requested, while allowing requests already in progress to complete before terminating.<br />
* The Compute service determines what action to take when it finds running instances that were previously marked deleted, based on the value of the <tt>running_deleted_instance_action</tt> configuration key. A new <tt>shutdown</tt> value has been added. Using this new value allows administrators to optionally keep instances found in this state for diagnostics while still releasing the runtime resources.<br />
* File injection is now disabled by default in OpenStack Compute. Instead it is recommended that the ConfigDrive and metadata server facilities are used to modify guests at launch. To enable file injection modify the <tt>inject_key</tt> and <tt>inject_partition</tt> configuration keys in <tt>/etc/nova/nova.conf</tt> and restart the Compute services. The file injection mechanism is likely to be disabled in a future release.<br />
* A number of changes have been made to the expected format <tt>/etc/nova/nova.conf</tt> configuration file with a view to ensuring that all configuration groups in the file use descriptive names. A number of driver specific flags, including those for the Libvirt driver, have also been moved to their own option groups.<br />
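For deployments that still need file injection after this change, the two keys named above can be re-enabled in <tt>/etc/nova/nova.conf</tt>. This is a sketch only; whether these options live in <tt>[DEFAULT]</tt> or the new <tt>[libvirt]</tt> group depends on the option-group reorganization also described above:<br />

```ini
[libvirt]
# Re-enable key injection into guest images (discouraged; prefer ConfigDrive
# or the metadata server, as recommended in the notes above).
inject_key = true
# -1: auto-locate a partition to inject into; -2 disables injection entirely.
inject_partition = -1
```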
<br />
=== Known Issues ===<br />
* OpenStack Compute has some features that use newer API versions from other projects, but the following are the only API versions tested in Icehouse:<br />
** Keystone v2<br />
** Cinder v1<br />
** Glance v1<br />
<br />
=== Upgrade Notes ===<br />
<br />
* https://review.openstack.org/50668 - The <tt>compute_api_class</tt> configuration option has been removed.<br />
* https://review.openstack.org/#/c/54290/ - The following deprecated configuration option aliases have been removed in favor of their new names:<br />
** <tt>service_quantum_metadata_proxy</tt><br />
** <tt>quantum_metadata_proxy_shared_secret</tt><br />
** <tt>use_quantum_default_nets</tt><br />
** <tt>quantum_default_tenant_id</tt><br />
** <tt>vpn_instance_type</tt><br />
** <tt>default_instance_type</tt><br />
** <tt>quantum_url</tt><br />
** <tt>quantum_url_timeout</tt><br />
** <tt>quantum_admin_username</tt><br />
** <tt>quantum_admin_password</tt><br />
** <tt>quantum_admin_tenant_name</tt><br />
** <tt>quantum_region_name</tt><br />
** <tt>quantum_admin_auth_url</tt><br />
** <tt>quantum_api_insecure</tt><br />
** <tt>quantum_auth_strategy</tt><br />
** <tt>quantum_ovs_bridge</tt><br />
** <tt>quantum_extension_sync_interval</tt><br />
** <tt>vmwareapi_host_ip</tt><br />
** <tt>vmwareapi_host_username</tt><br />
** <tt>vmwareapi_host_password</tt><br />
** <tt>vmwareapi_cluster_name</tt><br />
** <tt>vmwareapi_task_poll_interval</tt><br />
** <tt>vmwareapi_api_retry_count</tt><br />
** <tt>vnc_port</tt><br />
** <tt>vnc_port_total</tt><br />
** <tt>use_linked_clone</tt><br />
** <tt>vmwareapi_vlan_interface</tt><br />
** <tt>vmwareapi_wsdl_loc</tt><br />
* The PowerVM driver has been removed: https://review.openstack.org/#/c/57774/<br />
* The keystone_authtoken defaults changed in nova.conf: https://review.openstack.org/#/c/62815/<br />
* libvirt lvm names changed from using instance_name_template to instance uuid (https://review.openstack.org/#/c/76968). Possible manual cleanup required if using a non default instance_name_template.<br />
* rbd disk names changed from using instance_name_template to instance uuid. Manual cleanup required of old virtual disks after the transition. (TBD find review)<br />
* Icehouse adds libguestfs as a requirement. Installing Icehouse dependencies on a system currently running Havana may cause the Havana node to begin using libguestfs and break unexpectedly. It is recommended that <tt>libvirt_inject_partition=-2</tt> be set on Havana nodes prior to starting an upgrade of packages on the system, if the nova packages will be updated last.<br />
* Creating a private flavor now adds access to the tenant automatically. This was the documented behavior in Havana, but the actual implementation in Havana and previous versions of Nova did not add the tenant automatically to private flavors.<br />
* Nova previously included a nova.conf.sample. This file was automatically generated and is no longer included directly. If you are packaging Nova and wish to include the sample config file, see etc/nova/README.nova.conf for instructions on how to generate the file at build time.<br />
* Nova now defaults to requiring an event from Neutron when booting libvirt guests. If you upgrade Nova before Neutron, you must disable this feature in Nova until Neutron supports it by setting '''vif_plugging_is_fatal=False''' and '''vif_plugging_timeout=0'''. Recommended order is: Nova (with this disabled), Neutron (with the notifications enabled), and then enable vif_plugging_is_fatal=True with the default value of vif_plugging_timeout.<br />
* Nova supports a limited live upgrade model for the compute nodes in Icehouse. To do this, upgrade the controller infrastructure (everything except nova-compute) first, but set the '''[upgrade_levels]/compute=icehouse-compat''' option. This will enable Icehouse controller services to talk to Havana compute services. Upgrades of individual compute nodes can then proceed normally. When all the computes are upgraded, unset the compute version option to restore the default and restart the controller services. '''NB:''' Certain operations (such as resize/migrate) will not work while the compute version is pinned.<br />
* The following configuration options are marked as deprecated in this release. See <tt>nova.conf.sample</tt> for their replacements. <tt>[GROUP]/option</tt><br />
** <tt>[DEFAULT]/rabbit_durable_queues</tt><br />
** <tt>[rpc_notifier2]/topics</tt><br />
** <tt>[DEFAULT]/log_config</tt><br />
** <tt>[DEFAULT]/logfile</tt><br />
** <tt>[DEFAULT]/logdir</tt><br />
** <tt>[DEFAULT]/base_dir_name</tt><br />
** <tt>[DEFAULT]/instance_type_extra_specs</tt><br />
** <tt>[DEFAULT]/db_backend</tt><br />
** <tt>[DEFAULT]/sql_connection</tt><br />
** <tt>[DATABASE]/sql_connection</tt><br />
** <tt>[sql]/connection</tt><br />
** <tt>[DEFAULT]/sql_idle_timeout</tt><br />
** <tt>[DATABASE]/sql_idle_timeout</tt><br />
** <tt>[sql]/idle_timeout</tt><br />
** <tt>[DEFAULT]/sql_min_pool_size</tt><br />
** <tt>[DATABASE]/sql_min_pool_size</tt><br />
** <tt>[DEFAULT]/sql_max_pool_size</tt><br />
** <tt>[DATABASE]/sql_max_pool_size</tt><br />
** <tt>[DEFAULT]/sql_max_retries</tt><br />
** <tt>[DATABASE]/sql_max_retries</tt><br />
** <tt>[DEFAULT]/sql_retry_interval</tt><br />
** <tt>[DATABASE]/reconnect_interval</tt><br />
** <tt>[DEFAULT]/sql_max_overflow</tt><br />
** <tt>[DATABASE]/sqlalchemy_max_overflow</tt><br />
** <tt>[DEFAULT]/sql_connection_debug</tt><br />
** <tt>[DEFAULT]/sql_connection_trace</tt><br />
** <tt>[DATABASE]/sqlalchemy_pool_timeout</tt><br />
** <tt>[DEFAULT]/memcache_servers</tt><br />
** <tt>[DEFAULT]/libvirt_type</tt><br />
** <tt>[DEFAULT]/libvirt_uri</tt><br />
** <tt>[DEFAULT]/libvirt_inject_password</tt><br />
** <tt>[DEFAULT]/libvirt_inject_key</tt><br />
** <tt>[DEFAULT]/libvirt_inject_partition</tt><br />
** <tt>[DEFAULT]/libvirt_vif_driver</tt><br />
** <tt>[DEFAULT]/libvirt_volume_drivers</tt><br />
** <tt>[DEFAULT]/libvirt_disk_prefix</tt><br />
** <tt>[DEFAULT]/libvirt_wait_soft_reboot_seconds</tt><br />
** <tt>[DEFAULT]/libvirt_cpu_mode</tt><br />
** <tt>[DEFAULT]/libvirt_cpu_model</tt><br />
** <tt>[DEFAULT]/libvirt_snapshots_directory</tt><br />
** <tt>[DEFAULT]/libvirt_images_type</tt><br />
** <tt>[DEFAULT]/libvirt_images_volume_group</tt><br />
** <tt>[DEFAULT]/libvirt_sparse_logical_volumes</tt><br />
** <tt>[DEFAULT]/libvirt_images_rbd_pool</tt><br />
** <tt>[DEFAULT]/libvirt_images_rbd_ceph_conf</tt><br />
** <tt>[DEFAULT]/libvirt_snapshot_compression</tt><br />
** <tt>[DEFAULT]/libvirt_use_virtio_for_bridges</tt><br />
** <tt>[DEFAULT]/libvirt_iscsi_use_multipath</tt><br />
** <tt>[DEFAULT]/libvirt_iser_use_multipath</tt><br />
** <tt>[DEFAULT]/matchmaker_ringfile</tt><br />
** <tt>[DEFAULT]/agent_timeout</tt><br />
** <tt>[DEFAULT]/agent_version_timeout</tt><br />
** <tt>[DEFAULT]/agent_resetnetwork_timeout</tt><br />
** <tt>[DEFAULT]/xenapi_agent_path</tt><br />
** <tt>[DEFAULT]/xenapi_disable_agent</tt><br />
** <tt>[DEFAULT]/xenapi_use_agent_default</tt><br />
** <tt>[DEFAULT]/xenapi_login_timeout</tt><br />
** <tt>[DEFAULT]/xenapi_connection_concurrent</tt><br />
** <tt>[DEFAULT]/xenapi_connection_url</tt><br />
** <tt>[DEFAULT]/xenapi_connection_username</tt><br />
** <tt>[DEFAULT]/xenapi_connection_password</tt><br />
** <tt>[DEFAULT]/xenapi_vhd_coalesce_poll_interval</tt><br />
** <tt>[DEFAULT]/xenapi_check_host</tt><br />
** <tt>[DEFAULT]/xenapi_vhd_coalesce_max_attempts</tt><br />
** <tt>[DEFAULT]/xenapi_sr_base_path</tt><br />
** <tt>[DEFAULT]/target_host</tt><br />
** <tt>[DEFAULT]/target_port</tt><br />
** <tt>[DEFAULT]/iqn_prefix</tt><br />
** <tt>[DEFAULT]/xenapi_remap_vbd_dev</tt><br />
** <tt>[DEFAULT]/xenapi_remap_vbd_dev_prefix</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_base_url</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_seed_chance</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_seed_duration</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_max_last_accessed</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_listen_port_start</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_listen_port_end</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_download_stall_cutoff</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_max_seeder_processes_per_host</tt><br />
** <tt>[DEFAULT]/use_join_force</tt><br />
** <tt>[DEFAULT]/xenapi_ovs_integration_bridge</tt><br />
** <tt>[DEFAULT]/cache_images</tt><br />
** <tt>[DEFAULT]/xenapi_image_compression_level</tt><br />
** <tt>[DEFAULT]/default_os_type</tt><br />
** <tt>[DEFAULT]/block_device_creation_timeout</tt><br />
** <tt>[DEFAULT]/max_kernel_ramdisk_size</tt><br />
** <tt>[DEFAULT]/sr_matching_filter</tt><br />
** <tt>[DEFAULT]/xenapi_sparse_copy</tt><br />
** <tt>[DEFAULT]/xenapi_num_vbd_unplug_retries</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_images</tt><br />
** <tt>[DEFAULT]/xenapi_ipxe_network_name</tt><br />
** <tt>[DEFAULT]/xenapi_ipxe_boot_menu_url</tt><br />
** <tt>[DEFAULT]/xenapi_ipxe_mkisofs_cmd</tt><br />
** <tt>[DEFAULT]/xenapi_running_timeout</tt><br />
** <tt>[DEFAULT]/xenapi_vif_driver</tt><br />
** <tt>[DEFAULT]/xenapi_image_upload_handler</tt><br />
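The live-upgrade options described in the notes above combine into a controller-side <tt>nova.conf</tt> fragment along these lines. This is a sketch for the Havana-to-Icehouse transition window only, using just the option names given in the notes:<br />

```ini
[upgrade_levels]
# Let Icehouse controller services talk to not-yet-upgraded Havana computes;
# unset once every nova-compute node runs Icehouse.
compute = icehouse-compat

[DEFAULT]
# Until Neutron is upgraded and emits VIF plug notifications:
vif_plugging_is_fatal = false
vif_plugging_timeout = 0
```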
<br />
== OpenStack Image Service (Glance) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Dashboard (Horizon) ==<br />
<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Identity (Keystone) ==<br />
<br />
=== Key New Features ===<br />
<br />
* New API features<br />
** <code>/v3/OS-FEDERATION/</code> allows Keystone to consume federated authentication via [https://shibboleth.net/ Shibboleth] for multiple Identity Providers, and mapping federated attributes into OpenStack group-based role assignments (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-federation-ext.md documentation]).<br />
** <code>POST /v3/users/{user_id}/password</code> allows API users to update their own passwords (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3.md#change-user-password-post-usersuser_idpassword documentation]).<br />
** <code>GET /v3/auth/tokens?nocatalog</code> allows API users to opt out of receiving the service catalog when performing online token validation (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3.md#validate-token-get-authtokensnocatalog documentation]).<br />
** <code>/v3/regions</code> provides a public interface for describing multi-region deployments (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3.md#regions-v3regions documentation]).<br />
** <code>/v3/OS-SIMPLECERT/</code> now publishes the certificates used for PKI token validation (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-simple-certs-ext.md documentation]).<br />
** <code>/v3/OS-TRUST/trusts</code> is now capable of providing limited-use delegation via the <code>remaining_uses</code> attribute of [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-trust-ext.md trusts].<br />
* The assignments backend (the source of authorization data) has now been completely separated from the identity backend (the source of authentication data). This means that you can now, for example, store your deployment's identity data in LDAP and your authorization data in SQL.<br />
* KVS drivers are now capable of writing to persistent Key-Value stores such as Redis, Cassandra, or MongoDB.<br />
* Keystone's driver interfaces are now implemented as Abstract Base Classes (ABCs) to make it easier to track compatibility of custom driver implementations across releases.<br />
* Keystone's default <code>etc/policy.json</code> has been rewritten in an easier to read format.<br />
* [http://docs.openstack.org/developer/keystone/event_notifications.html Notifications] are now emitted in response to create, update and delete events on roles, groups, and trusts.<br />
* Custom extensions and driver implementations may now subscribe to internal-only event notifications, including ''disable'' events (which are only exposed externally as part of ''update'' events).<br />
* Keystone now emits [http://www.dmtf.org/standards/cadf Cloud Audit Data Federation (CADF)] event notifications in response to authentication events.<br />
* [https://review.openstack.org/#/c/50362/ Additional plugins] are provided to handle external authentication via <code>REMOTE_USER</code> with respect to single-domain versus multi-domain deployments.<br />
* <code>policy.json</code> can now perform enforcement on the target domain in a domain-aware operation using, for example, <code>%(target.{entity}.domain_id)s</code>.<br />
* The LDAP driver for the assignment backend now supports group-based role assignment operations.<br />
* Keystone now publishes token revocation ''events'' in addition to providing continued support for token revocation ''lists''. Token revocation events are designed to incur much less overhead (compared to token revocation lists) and will enable Keystone to eliminate token persistence during the Juno release.<br />
* Deployers can now define arbitrary limits on the size of collections in API responses (for example, <code>GET /v3/users</code> might be configured to return only 100 users, rather than 10,000). Clients will be informed when truncation has occurred.<br />
* Lazy translation has been enabled so that responses are translated according to the requested <code>Accept-Language</code> header.<br />
* Keystone now emits i18n-ready log messages.<br />
* Collection filtering is now performed in the driver layer, where possible, for improved performance.<br />
<br />
=== Known Issues ===<br />
<br />
* [https://bugs.launchpad.net/keystone/+bug/1291157 Bug 1291157]: If using the <code>OS-FEDERATION</code> extension, deleting an Identity Provider or Protocol does ''not'' result in previously-issued tokens being revoked. This will not be fixed in the ''stable/icehouse'' branch.<br />
<br />
=== Upgrade Notes ===<br />
<br />
* The v2 API has been prepared for deprecation, but remains stable in the Icehouse release. It may be formally deprecated during the Juno release pending widespread support for the v3 API.<br />
* Backwards compatibility for <code>keystone.middleware.auth_token</code> has been removed. The <code>auth_token</code> middleware module is no longer provided by Keystone itself, and must be imported from <code>keystoneclient.middleware.auth_token</code> instead.<br />
* The <code>s3_token</code> middleware module is no longer provided by Keystone itself, and must be imported from <code>keystoneclient.middleware.s3_token</code> instead. Backwards compatibility for <code>keystone.middleware.s3_token</code> will be removed in Juno.<br />
* The default token duration has been reduced from 24 hours to just 1 hour. This effectively reduces the number of tokens that must be persisted at any one time, and (for PKI deployments) reduces the overhead of the token revocation list.<br />
* <code>keystone.contrib.access.core.AccessLogMiddleware</code> has been deprecated in favor of either the eventlet debug access log or Apache httpd access log and may be removed in the K release.<br />
* <code>keystone.contrib.stats.core.StatsMiddleware</code> has been deprecated in favor of external tooling and may be removed in the K release.<br />
* <code>keystone.middleware.XmlBodyMiddleware</code> has been deprecated in favor of support for "application/json" only and may be removed in the K release.<br />
<br />
== OpenStack Network Service (Neutron) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet.<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Block Storage (Cinder) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Metering (Ceilometer) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
* [https://bugs.launchpad.net/ceilometer/+bug/1297528 Bug 1297528]<br />
<br />
=== Upgrade Notes ===<br />
None yet<br />
<br />
== OpenStack Orchestration (Heat) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
None yet<br />
<br />
== OpenStack Database service (Trove) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
None yet<br />
<br />
== OpenStack Documentation ==<br />
<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
None yet</div>Russellbhttps://wiki.openstack.org/w/index.php?title=ReleaseNotes/Icehouse&diff=47317ReleaseNotes/Icehouse2014-04-01T14:59:59Z<p>Russellb: /* Scheduler */</p>
<hr />
<div>= OpenStack 2014.1 (Icehouse) Release Notes =<br />
<br />
== General Upgrade Notes ==<br />
<br />
tbd<br />
<br />
== OpenStack Object Storage (Swift) ==<br />
<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Compute (Nova) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== Compute Drivers ====<br />
<br />
===== Libvirt (KVM) =====<br />
<br />
* The Libvirt compute driver now supports passing modified kernel arguments to compute instances at boot. The kernel arguments are retrieved from the <tt>os_command_line</tt> key in the image metadata as stored in Glance, if a value for the key was provided. Otherwise the default kernel arguments are used.<br />
* The Libvirt driver now supports using VirtIO SCSI (virtio-scsi) instead of VirtIO Block (virtio-blk) to provide block device access for instances. Virtio SCSI is a para-virtualized SCSI controller device designed as a future successor to VirtIO Block and aiming to provide improved scalability and performance.<br />
* The Libvirt Compute driver now supports adding a Virtio RNG device to compute instances to provide increased entropy. Virtio RNG is a paravirtual random number generation device that allows the compute node to provide entropy to compute instances in order to fill their entropy pool. The default entropy device used is <tt>/dev/random</tt>; however, use of a physical hardware RNG device attached to the host is also possible. The Virtio RNG device is enabled using the <tt>hw_rng</tt> property in the metadata of the image used to build the instance.<br />
* The Libvirt driver now allows instances to be configured with a video driver other than the default (<tt>cirrus</tt>). This allows the specification of different video driver models, different amounts of video RAM, and different numbers of heads. These values are configured by setting the <tt>hw_video_model</tt>, <tt>hw_video_vram</tt>, and <tt>hw_video_head</tt> properties in the image metadata. Currently supported video driver models are <tt>vga</tt>, <tt>cirrus</tt>, <tt>vmvga</tt>, <tt>xen</tt> and <tt>qxl</tt>.<br />
* Watchdog support has been added to the Libvirt driver. The watchdog device used is <tt>i6300esb</tt>. It is enabled by setting the <tt>hw_watchdog_action</tt> property in the image properties or flavor extra specifications (<tt>extra_specs</tt>) to a value other than <tt>disabled</tt>. Supported <tt>hw_watchdog_action</tt> property values, which denote the action for the watchdog device to take in the event of an instance failure, are <tt>poweroff</tt>, <tt>reset</tt>, <tt>pause</tt>, and <tt>none</tt>.<br />
* The High Precision Event Timer (HPET) is now disabled for instances created using the Libvirt driver. The use of this option was found to lead to clock drift in Windows guests when under heavy load.<br />
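The image-metadata properties above ultimately surface as libvirt guest devices. The following is a simplified, hypothetical sketch (not Nova's actual driver code) of how such properties might map to libvirt device XML; the default VRAM and heads values shown are illustrative assumptions:<br />

```python
# Hypothetical sketch: translating image metadata properties such as
# hw_rng, hw_watchdog_action and hw_video_model into libvirt-style
# guest device XML snippets. Not Nova's actual implementation.

def guest_devices(image_props):
    """Return libvirt-style device XML snippets for the given image metadata."""
    devices = []

    # hw_rng: expose a virtio RNG backed by /dev/random on the host (the default).
    if image_props.get("hw_rng") == "true":
        devices.append(
            "<rng model='virtio'>"
            "<backend model='random'>/dev/random</backend></rng>"
        )

    # hw_watchdog_action: add an i6300esb watchdog unless the action is 'disabled'.
    action = image_props.get("hw_watchdog_action", "disabled")
    if action != "disabled":
        devices.append("<watchdog model='i6300esb' action='%s'/>" % action)

    # hw_video_model / hw_video_vram / hw_video_head: override the default
    # video device (fallback values here are illustrative assumptions).
    model = image_props.get("hw_video_model", "cirrus")
    vram = image_props.get("hw_video_vram", "9216")
    heads = image_props.get("hw_video_head", "1")
    devices.append(
        "<video><model type='%s' vram='%s' heads='%s'/></video>" % (model, vram, heads)
    )
    return devices


props = {"hw_rng": "true", "hw_watchdog_action": "reset", "hw_video_model": "qxl"}
for xml in guest_devices(props):
    print(xml)
```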
<br />
===== VMware =====<br />
<br />
* The VMware Compute drivers now support the virtual machine diagnostics call. Diagnostics can be retrieved using the "nova diagnostics INSTANCE" command, where INSTANCE is replaced by an instance name or instance identifier.<br />
* The VMware Compute drivers now support booting an instance from an ISO image.<br />
* The VMware Compute drivers now support the aging of cached images.<br />
<br />
===== XenServer =====<br />
<br />
TODO...<br />
<br />
==== API ====<br />
<br />
* In OpenStack Compute, the <tt>OS-DCF:diskConfig</tt> API attribute is no longer supported in V3 of the nova API.<br />
* The Compute API currently supports both XML and JSON formats. Support for the XML format is now deprecated and will be retired in a future release.<br />
* The Compute API now exposes a mechanism for permanently removing decommissioned compute nodes. Previously these would continue to be listed even after the compute service had been disabled and the system re-provisioned. This functionality is provided by the "ExtendedServicesDelete" API extension.<br />
* The V3 API admin_actions plugin has been separated into logically distinct plugins so that operators can enable subsets of the functionality it previously provided.<br />
* The Compute service now uses the tenant identifier instead of the tenant name when authenticating with OpenStack Networking (Neutron). This improves support for the OpenStack Identity API v3 which allows non-unique tenant names.<br />
* The Compute API now exposes the hypervisor IP address, allowing it to be retrieved by administrators using the "nova hypervisor-show" command.<br />
<br />
==== Scheduler ====<br />
<br />
* The scheduler now includes an initial implementation of a caching scheduler driver. The caching scheduler uses the existing facilities for applying scheduler filters and weights but caches the list of available hosts. When a user request is passed to the caching scheduler it attempts to perform scheduling based on the list of cached hosts, with a view to improving scheduler performance.<br />
* A new scheduler filter, <tt>AggregateImagePropertiesIsolation</tt>, has been introduced. The new filter schedules instances to hosts based on matching namespaced image properties with host aggregate properties. Hosts that do not belong to any host aggregate remain valid scheduling targets for instances based on all images. The new Compute service configuration keys <tt>aggregate_image_properties_isolation_namespace</tt> and <tt>aggregate_image_properties_isolation_separator</tt> are used to determine which image properties are examined by the filter.<br />
* Weight normalization in OpenStack Compute:<br />
** https://review.openstack.org/#/c/27160/ Weights are normalized, so there is no need to inflate multipliers artificially. The maximum weight that a weigher will assign to a node is 1.0 and the minimum is 0.0.<br />
** nova.cells.weights.weight_offset.WeightOffsetWeigher introduces a new configuration option, 'offset_weight_multiplier'.<br />
** https://review.openstack.org/#/c/36417/ Introduces stacking flags for weighers. Negative multipliers should not be used for stacking, but such weighers remain compatible (they issue a deprecation warning message).<br />
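The normalization behaviour described above can be sketched as follows (a simplified illustration, not the actual nova.scheduler.weights implementation): each weigher's raw weights are rescaled so the best host receives 1.0 and the worst 0.0, and the weigher's multiplier then scales the normalized values.<br />

```python
# Simplified sketch of scheduler weight normalization (illustrative only).

def normalize(weights):
    """Rescale raw weights into the [0.0, 1.0] range."""
    lo, hi = min(weights), max(weights)
    if hi == lo:
        # All hosts weighed equally; normalization yields 0.0 for each.
        return [0.0 for _ in weights]
    return [(w - lo) / float(hi - lo) for w in weights]


def weigh_hosts(raw_weights, multiplier=1.0):
    """Apply a weigher's multiplier to the normalized weights."""
    return [multiplier * w for w in normalize(raw_weights)]


print(weigh_hosts([10, 30, 20]))       # [0.0, 1.0, 0.5]
print(weigh_hosts([10, 30, 20], 2.0))  # [0.0, 2.0, 1.0]
```

Because weights are bounded after normalization, a multiplier only expresses the relative importance of one weigher versus another, which is why inflating multipliers artificially is no longer necessary.<br />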
<br />
==== Other Features ====<br />
<br />
* Notifications are now generated upon the creation and deletion of keypairs.<br />
* Notifications are now generated when a Compute host is enabled, disabled, powered on, shut down, rebooted, put into maintenance mode, or taken out of maintenance mode.<br />
* Compute services are now able to shut down gracefully: when a service shutdown is requested, processing of new requests is disabled while requests already in progress are allowed to complete before the service terminates.<br />
* The value of the <tt>running_deleted_instance_action</tt> configuration key determines what action the Compute service takes when it finds running instances that were previously marked deleted. A new <tt>shutdown</tt> value has been added, allowing administrators to optionally keep such instances for diagnostics while still releasing the runtime resources.<br />
* File injection is now disabled by default in OpenStack Compute. Instead it is recommended that the ConfigDrive and metadata server facilities are used to modify guests at launch. To enable file injection modify the <tt>inject_key</tt> and <tt>inject_partition</tt> configuration keys in <tt>/etc/nova/nova.conf</tt> and restart the Compute services. The file injection mechanism is likely to be disabled in a future release.<br />
* A number of changes have been made to the expected format of the <tt>/etc/nova/nova.conf</tt> configuration file to ensure that all configuration groups in the file use descriptive names. A number of driver-specific flags, including those for the Libvirt driver, have also been moved to their own option groups.<br />
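For example, re-enabling file injection might look like the following <tt>/etc/nova/nova.conf</tt> fragment. The option names come from the note above; the <tt>-1</tt> value, meaning automatic partition detection, is an assumption carried over from earlier releases, so consult <tt>nova.conf.sample</tt> for authoritative names and values:<br />

```ini
# Illustrative fragment only -- values are examples, not recommendations.
[DEFAULT]
# Re-enable file injection (disabled by default in Icehouse).
inject_key = true
# Assumed semantics: -1 = auto-detect partition, -2 = disable injection.
inject_partition = -1
```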
<br />
=== Known Issues ===<br />
* OpenStack Compute has some features that use newer API versions from other projects, but the following are the only API versions tested in Icehouse:<br />
** Keystone v2<br />
** Cinder v1<br />
** Glance v1<br />
<br />
=== Upgrade Notes ===<br />
<br />
* https://review.openstack.org/50668 - The <tt>compute_api_class</tt> configuration option has been removed.<br />
* https://review.openstack.org/#/c/54290/ - The following deprecated configuration option aliases have been removed in favor of their new names:<br />
** <tt>service_quantum_metadata_proxy</tt><br />
** <tt>quantum_metadata_proxy_shared_secret</tt><br />
** <tt>use_quantum_default_nets</tt><br />
** <tt>quantum_default_tenant_id</tt><br />
** <tt>vpn_instance_type</tt><br />
** <tt>default_instance_type</tt><br />
** <tt>quantum_url</tt><br />
** <tt>quantum_url_timeout</tt><br />
** <tt>quantum_admin_username</tt><br />
** <tt>quantum_admin_password</tt><br />
** <tt>quantum_admin_tenant_name</tt><br />
** <tt>quantum_region_name</tt><br />
** <tt>quantum_admin_auth_url</tt><br />
** <tt>quantum_api_insecure</tt><br />
** <tt>quantum_auth_strategy</tt><br />
** <tt>quantum_ovs_bridge</tt><br />
** <tt>quantum_extension_sync_interval</tt><br />
** <tt>vmwareapi_host_ip</tt><br />
** <tt>vmwareapi_host_username</tt><br />
** <tt>vmwareapi_host_password</tt><br />
** <tt>vmwareapi_cluster_name</tt><br />
** <tt>vmwareapi_task_poll_interval</tt><br />
** <tt>vmwareapi_api_retry_count</tt><br />
** <tt>vnc_port</tt><br />
** <tt>vnc_port_total</tt><br />
** <tt>use_linked_clone</tt><br />
** <tt>vmwareapi_vlan_interface</tt><br />
** <tt>vmwareapi_wsdl_loc</tt><br />
* The PowerVM driver has been removed: https://review.openstack.org/#/c/57774/<br />
* The keystone_authtoken defaults changed in nova.conf: https://review.openstack.org/#/c/62815/<br />
* libvirt lvm names changed from using instance_name_template to instance uuid (https://review.openstack.org/#/c/76968). Possible manual cleanup required if using a non default instance_name_template.<br />
* rbd disk names changed from using instance_name_template to instance uuid. Manual cleanup required of old virtual disks after the transition. (TBD find review)<br />
* Icehouse brings libguestfs as a requirement. Installing icehouse dependencies on a system currently running havana may cause the havana node to begin using libguestfs and break unexpectedly. It is recommended that libvirt_inject_partition=-2 be set on havana nodes prior to starting an upgrade of packages on the system if the nova packages will be updated last.<br />
* Creating a private flavor now adds access to the tenant automatically. This was the documented behavior in Havana, but the actual implementation in Havana and previous versions of Nova did not add the tenant automatically to private flavors.<br />
* Nova previously included a nova.conf.sample. This file was automatically generated and is no longer included directly. If you are packaging Nova and wish to include the sample config file, see etc/nova/README.nova.conf for instructions on how to generate the file at build time.<br />
* Nova now defaults to requiring an event from Neutron when booting libvirt guests. If you upgrade Nova before Neutron, you must disable this feature in Nova until Neutron supports it by setting '''vif_plugging_is_fatal=False''' and '''vif_plugging_timeout=0'''. Recommended order is: Nova (with this disabled), Neutron (with the notifications enabled), and then enable vif_plugging_is_fatal=True with the default value of vif_plugging_timeout.<br />
* Nova supports a limited live upgrade model for the compute nodes in Icehouse. To do this, upgrade controller infrastructure (everything except nova-compute) first, but set the '''[upgrade_levels]/compute=icehouse-compat''' option. This will enable Icehouse controller services to talk to Havana compute services. Upgrades of individual compute nodes can then proceed normally. When all the computes are upgraded, unset the compute version option to retain the default and restart the controller services. '''NB:''' Certain operations (such as resize/migrate) will not work while the compute version is pinned.<br />
* The following configuration options are marked as deprecated in this release. See <tt>nova.conf.sample</tt> for their replacements. <tt>[GROUP]/option</tt><br />
** <tt>[DEFAULT]/rabbit_durable_queues</tt><br />
** <tt>[rpc_notifier2]/topics</tt><br />
** <tt>[DEFAULT]/log_config</tt><br />
** <tt>[DEFAULT]/logfile</tt><br />
** <tt>[DEFAULT]/logdir</tt><br />
** <tt>[DEFAULT]/base_dir_name</tt><br />
** <tt>[DEFAULT]/instance_type_extra_specs</tt><br />
** <tt>[DEFAULT]/db_backend</tt><br />
** <tt>[DEFAULT]/sql_connection</tt><br />
** <tt>[DATABASE]/sql_connection</tt><br />
** <tt>[sql]/connection</tt><br />
** <tt>[DEFAULT]/sql_idle_timeout</tt><br />
** <tt>[DATABASE]/sql_idle_timeout</tt><br />
** <tt>[sql]/idle_timeout</tt><br />
** <tt>[DEFAULT]/sql_min_pool_size</tt><br />
** <tt>[DATABASE]/sql_min_pool_size</tt><br />
** <tt>[DEFAULT]/sql_max_pool_size</tt><br />
** <tt>[DATABASE]/sql_max_pool_size</tt><br />
** <tt>[DEFAULT]/sql_max_retries</tt><br />
** <tt>[DATABASE]/sql_max_retries</tt><br />
** <tt>[DEFAULT]/sql_retry_interval</tt><br />
** <tt>[DATABASE]/reconnect_interval</tt><br />
** <tt>[DEFAULT]/sql_max_overflow</tt><br />
** <tt>[DATABASE]/sqlalchemy_max_overflow</tt><br />
** <tt>[DEFAULT]/sql_connection_debug</tt><br />
** <tt>[DEFAULT]/sql_connection_trace</tt><br />
** <tt>[DATABASE]/sqlalchemy_pool_timeout</tt><br />
** <tt>[DEFAULT]/memcache_servers</tt><br />
** <tt>[DEFAULT]/libvirt_type</tt><br />
** <tt>[DEFAULT]/libvirt_uri</tt><br />
** <tt>[DEFAULT]/libvirt_inject_password</tt><br />
** <tt>[DEFAULT]/libvirt_inject_key</tt><br />
** <tt>[DEFAULT]/libvirt_inject_partition</tt><br />
** <tt>[DEFAULT]/libvirt_vif_driver</tt><br />
** <tt>[DEFAULT]/libvirt_volume_drivers</tt><br />
** <tt>[DEFAULT]/libvirt_disk_prefix</tt><br />
** <tt>[DEFAULT]/libvirt_wait_soft_reboot_seconds</tt><br />
** <tt>[DEFAULT]/libvirt_cpu_mode</tt><br />
** <tt>[DEFAULT]/libvirt_cpu_model</tt><br />
** <tt>[DEFAULT]/libvirt_snapshots_directory</tt><br />
** <tt>[DEFAULT]/libvirt_images_type</tt><br />
** <tt>[DEFAULT]/libvirt_images_volume_group</tt><br />
** <tt>[DEFAULT]/libvirt_sparse_logical_volumes</tt><br />
** <tt>[DEFAULT]/libvirt_images_rbd_pool</tt><br />
** <tt>[DEFAULT]/libvirt_images_rbd_ceph_conf</tt><br />
** <tt>[DEFAULT]/libvirt_snapshot_compression</tt><br />
** <tt>[DEFAULT]/libvirt_use_virtio_for_bridges</tt><br />
** <tt>[DEFAULT]/libvirt_iscsi_use_multipath</tt><br />
** <tt>[DEFAULT]/libvirt_iser_use_multipath</tt><br />
** <tt>[DEFAULT]/matchmaker_ringfile</tt><br />
** <tt>[DEFAULT]/agent_timeout</tt><br />
** <tt>[DEFAULT]/agent_version_timeout</tt><br />
** <tt>[DEFAULT]/agent_resetnetwork_timeout</tt><br />
** <tt>[DEFAULT]/xenapi_agent_path</tt><br />
** <tt>[DEFAULT]/xenapi_disable_agent</tt><br />
** <tt>[DEFAULT]/xenapi_use_agent_default</tt><br />
** <tt>[DEFAULT]/xenapi_login_timeout</tt><br />
** <tt>[DEFAULT]/xenapi_connection_concurrent</tt><br />
** <tt>[DEFAULT]/xenapi_connection_url</tt><br />
** <tt>[DEFAULT]/xenapi_connection_username</tt><br />
** <tt>[DEFAULT]/xenapi_connection_password</tt><br />
** <tt>[DEFAULT]/xenapi_vhd_coalesce_poll_interval</tt><br />
** <tt>[DEFAULT]/xenapi_check_host</tt><br />
** <tt>[DEFAULT]/xenapi_vhd_coalesce_max_attempts</tt><br />
** <tt>[DEFAULT]/xenapi_sr_base_path</tt><br />
** <tt>[DEFAULT]/target_host</tt><br />
** <tt>[DEFAULT]/target_port</tt><br />
** <tt>[DEFAULT]/iqn_prefix</tt><br />
** <tt>[DEFAULT]/xenapi_remap_vbd_dev</tt><br />
** <tt>[DEFAULT]/xenapi_remap_vbd_dev_prefix</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_base_url</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_seed_chance</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_seed_duration</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_max_last_accessed</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_listen_port_start</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_listen_port_end</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_download_stall_cutoff</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_max_seeder_processes_per_host</tt><br />
** <tt>[DEFAULT]/use_join_force</tt><br />
** <tt>[DEFAULT]/xenapi_ovs_integration_bridge</tt><br />
** <tt>[DEFAULT]/cache_images</tt><br />
** <tt>[DEFAULT]/xenapi_image_compression_level</tt><br />
** <tt>[DEFAULT]/default_os_type</tt><br />
** <tt>[DEFAULT]/block_device_creation_timeout</tt><br />
** <tt>[DEFAULT]/max_kernel_ramdisk_size</tt><br />
** <tt>[DEFAULT]/sr_matching_filter</tt><br />
** <tt>[DEFAULT]/xenapi_sparse_copy</tt><br />
** <tt>[DEFAULT]/xenapi_num_vbd_unplug_retries</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_images</tt><br />
** <tt>[DEFAULT]/xenapi_ipxe_network_name</tt><br />
** <tt>[DEFAULT]/xenapi_ipxe_boot_menu_url</tt><br />
** <tt>[DEFAULT]/xenapi_ipxe_mkisofs_cmd</tt><br />
** <tt>[DEFAULT]/xenapi_running_timeout</tt><br />
** <tt>[DEFAULT]/xenapi_vif_driver</tt><br />
** <tt>[DEFAULT]/xenapi_image_upload_handler</tt><br />
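Taken together, the staged-upgrade settings mentioned in the notes above might be combined in <tt>/etc/nova/nova.conf</tt> as follows (an illustrative fragment using only option names given in these notes):<br />

```ini
# On controllers upgraded ahead of the compute nodes: allow Icehouse
# services to talk to Havana nova-compute. Unset once all computes are
# upgraded, then restart the controller services.
[upgrade_levels]
compute = icehouse-compat

# If Nova is upgraded before Neutron: disable the "wait for the Neutron
# VIF plugging event" behaviour until Neutron supports it.
[DEFAULT]
vif_plugging_is_fatal = False
vif_plugging_timeout = 0
```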
<br />
== OpenStack Image Service (Glance) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Dashboard (Horizon) ==<br />
<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Identity (Keystone) ==<br />
<br />
=== Key New Features ===<br />
<br />
* New API features<br />
** <code>/v3/OS-FEDERATION/</code> allows Keystone to consume federated authentication via [https://shibboleth.net/ Shibboleth] for multiple Identity Providers, and mapping federated attributes into OpenStack group-based role assignments (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-federation-ext.md documentation]).<br />
** <code>POST /v3/users/{user_id}/password</code> allows API users to update their own passwords (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3.md#change-user-password-post-usersuser_idpassword documentation]).<br />
** <code>GET /v3/auth/tokens?nocatalog</code> allows API users to opt out of receiving the service catalog when performing online token validation (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3.md#validate-token-get-authtokensnocatalog documentation]).<br />
** <code>/v3/regions</code> provides a public interface for describing multi-region deployments (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3.md#regions-v3regions documentation]).<br />
** <code>/v3/OS-SIMPLECERT/</code> now publishes the certificates used for PKI token validation (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-simple-certs-ext.md documentation]).<br />
** <code>/v3/OS-TRUST/trusts</code> is now capable of providing limited-use delegation via the <code>remaining_uses</code> attribute of [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-trust-ext.md trusts].<br />
* The assignments backend (the source of authorization data) has now been completely separated from the identity backend (the source of authentication data). This means that you can now back your deployment's identity data with LDAP and its authorization data with SQL, for example.<br />
* KVS drivers are now capable of writing to persistent Key-Value stores such as Redis, Cassandra, or MongoDB.<br />
* Keystone's driver interfaces are now implemented as Abstract Base Classes (ABCs) to make it easier to track compatibility of custom driver implementations across releases.<br />
* Keystone's default <code>etc/policy.json</code> has been rewritten in an easier to read format.<br />
* [http://docs.openstack.org/developer/keystone/event_notifications.html Notifications] are now emitted in response to create, update and delete events on roles, groups, and trusts.<br />
* Custom extensions and driver implementations may now subscribe to internal-only event notifications, including ''disable'' events (which are only exposed externally as part of ''update'' events).<br />
* Keystone now emits [http://www.dmtf.org/standards/cadf Cloud Audit Data Federation (CADF)] event notifications in response to authentication events.<br />
* [https://review.openstack.org/#/c/50362/ Additional plugins] are provided to handle external authentication via <code>REMOTE_USER</code> with respect to single-domain versus multi-domain deployments.<br />
* <code>policy.json</code> can now perform enforcement on the target domain in a domain-aware operation using, for example, <code>%(target.{entity}.domain_id)s</code>.<br />
* The LDAP driver for the assignment backend now supports group-based role assignment operations.<br />
* Keystone now publishes token revocation ''events'' in addition to providing continued support for token revocation ''lists''. Token revocation events are designed to consume much less overhead (when compared to token revocation lists) and will enable Keystone to eliminate token persistence during the Juno release.<br />
* Deployers can now define arbitrary limits on the size of collections in API responses (for example, <code>GET /v3/users</code> might be configured to return only 100 users, rather than 10,000). Clients will be informed when truncation has occurred.<br />
* Lazy translation has been enabled, so responses are translated according to the requested Accept-Language header.<br />
* Keystone now emits i18n-ready log messages.<br />
* Collection filtering is now performed in the driver layer, where possible, for improved performance.<br />
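As an illustration of the <code>nocatalog</code> option above, validating a token without receiving the service catalog is a single request against the Identity v3 API (the host name and token values below are placeholders):<br />

```http
GET /v3/auth/tokens?nocatalog HTTP/1.1
Host: identity.example.com:5000
X-Auth-Token: {validator's own token}
X-Subject-Token: {token being validated}
```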
<br />
=== Known Issues ===<br />
<br />
* [https://bugs.launchpad.net/keystone/+bug/1291157 Bug 1291157]: If using the <code>OS-FEDERATION</code> extension, deleting an Identity Provider or Protocol does ''not'' result in previously-issued tokens being revoked. This will not be fixed in the ''stable/icehouse'' branch.<br />
<br />
=== Upgrade Notes ===<br />
<br />
* The v2 API has been prepared for deprecation, but remains stable in the Icehouse release. It may be formally deprecated during the Juno release pending widespread support for the v3 API.<br />
* Backwards compatibility for <code>keystone.middleware.auth_token</code> has been removed. The <code>auth_token</code> middleware module is no longer provided by Keystone itself and must be imported from <code>keystoneclient.middleware.auth_token</code> instead.<br />
* The <code>s3_token</code> middleware module is no longer provided by Keystone itself, and must be imported from <code>keystoneclient.middleware.s3_token</code> instead. Backwards compatibility for <code>keystone.middleware.s3_token</code> will be removed in Juno.<br />
* The default token duration has been reduced from 24 hours to just 1 hour. This effectively reduces the number of tokens that must be persisted at any one time, and (for PKI deployments) reduces the overhead of the token revocation list.<br />
* <code>keystone.contrib.access.core.AccessLogMiddleware</code> has been deprecated in favor of either the eventlet debug access log or Apache httpd access log and may be removed in the K release.<br />
* <code>keystone.contrib.stats.core.StatsMiddleware</code> has been deprecated in favor of external tooling and may be removed in the K release.<br />
* <code>keystone.middleware.XmlBodyMiddleware</code> has been deprecated in favor of support for "application/json" only and may be removed in the K release.<br />
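For deployments using paste, the <code>auth_token</code> import change noted above typically amounts to pointing the filter at the keystoneclient module, along the following lines (an illustrative <tt>api-paste.ini</tt> fragment; check your existing configuration for the exact factory entry point):<br />

```ini
# Illustrative: source auth_token from python-keystoneclient rather than
# the removed keystone.middleware.auth_token path.
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
```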
<br />
== OpenStack Network Service (Neutron) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet.<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Block Storage (Cinder) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Metering (Ceilometer) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
https://bugs.launchpad.net/ceilometer/+bug/1297528<br />
<br />
=== Upgrade Notes ===<br />
None yet<br />
<br />
== OpenStack Orchestration (Heat) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
None yet<br />
<br />
== OpenStack Database service (Trove) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
None yet<br />
<br />
== OpenStack Documentation ==<br />
<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
None yet</div>
<hr />
<div>= OpenStack 2014.1 (Icehouse) Release Notes =<br />
<br />
== General Upgrade Notes ==<br />
<br />
tbd<br />
<br />
== OpenStack Object Storage (Swift) ==<br />
<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Compute (Nova) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== Compute Drivers ====<br />
<br />
===== Libvirt (KVM) =====<br />
<br />
* The Libvirt compute driver now supports providing modified kernel arguments to booting compute instances. The kernel arguments are retrieved from the <tt>os_command_line</tt> key in the image metadata as stored in Glance, if a value for the key was provided. Otherwise the default kernel arguments are used.<br />
* The Libvirt driver now supports using VirtIO SCSI (virtio-scsi) instead of VirtIO Block (virtio-blk) to provide block device access for instances. Virtio SCSI is a para-virtualized SCSI controller device designed as a future successor to VirtIO Block and aiming to provide improved scalability and performance.<br />
* The Libvirt Compute driver now supports adding a Virtio RNG device to compute instances to provide increased entropy. Virtio RNG is a paravirtual random number generation device. It allows the compute node to provide entropy to the compute instances in order to fill their entropy pool. The default entropy device used is /dev/random, however use of a physical hardware RNG device attached to the host is also possible. The use of the Virtio RNG device is enabled using the hw_rng property in the metadata of the image used to build the instance.<br />
* The Libvirt driver now allows the configuration of instances to use video driver other than the default (cirros). This allows the specification of different video driver models, different amounts of video RAM, and different numbers of heads. These values are configured by setting the <tt>hw_video_model</tt>, <tt>hw_video_vram</tt>, and <tt>hw_video_head</tt> properties in the image metadata. Currently supported video driver models are <tt>vga</tt>, <tt>cirrus</tt>, <tt>vmvga</tt>, <tt>xen</tt> and <tt>qxl</tt>. <br />
* Watchdog support has been added to the Libvirt driver. The watchdog device used is <tt>i6300esb</tt>. It is enabled by setting the <tt>hw_watchdog_action</tt> property in the image properties or flavor extra specifications (<tt>extra_specs</tt>) to a value other than <tt>disabled</tt>. Supported <tt>hw_watchdog_action</tt> property values, which denote the action for the watchdog device to take in the event of an instance failure, are <tt>poweroff</tt>, <tt>reset</tt>, <tt>pause</tt>, and <tt>none</tt>.<br />
* The High Precision Event Timer (HPET) is now disabled for instances created using the Libvirt driver. The use of this option was found to lead to clock drift in Windows guests when under heavy load.<br />
<br />
===== VMware =====<br />
<br />
* The VMware Compute drivers now support the virtual machine diagnostics call. Diagnostics can be retrieved using the "nova diagnostics INSTANCE" command, where INSTANCE is replaced by an instance name or instance identifier.<br />
* The VMware Compute drivers now booting an instance from an ISO image.<br />
* The VMware Compute drivers now support the aging of cached images.<br />
<br />
===== XenServer =====<br />
<br />
TODO...<br />
<br />
==== API ====<br />
<br />
* In OpenStack Compute, the <tt>OS-DCF:diskConfig</tt> API attribute is no longer supported in V3 of the nova API.<br />
* The Compute API currently supports both XML and JSON formats. Support for the XML format is now deprecated and will be retired in a future release.<br />
* The Compute API now exposes a mechanism for permanently removing decommissioned compute nodes. Previously these would continue to be listed even where the compute service had had been disabled and the system re-provisioned. This functionality is provided by the "ExtendedServicesDelete" API extension.<br />
* Separated the V3 API admin_actions plugin into logically separate plugins so operators can enable subsets of the functionality currently present in the plugin.<br />
* The Compute service now uses the tenant identifier instead of the tenant name when authenticating with OpenStack Networking (Neutron). This improves support for the OpenStack Identity API v3 which allows non-unique tenant names.<br />
* The Compute API now exposes the hypervisor IP address, allowing it to be retrieved by administrators using the "nova hypervisor-show" command.<br />
<br />
==== Scheduler ====<br />
<br />
* The scheduler now includes an initial implementation of a caching scheduler driver. The caching scheduler uses the existing facilities for applying scheduler filters and weights but caches the list of available hosts. When a user request is passed to the caching scheduler it attempts to perform scheduling based on the list of cached hosts, with a view to improving scheduler performance.<br />
* A new scheduler filter, "AggregateImagePropertiesIsolation", has been introduced. The new filter schedules instances to hosts based on matching namespaced image properties with host aggregate properties. Hosts that do not belong to any host aggregate remain valid scheduling targets for instances based on all images. The new Compute service configuration keys "aggregate_image_properties_isolation_namespace" and "aggregate_image_properties_isolation_separator" are used to determine which image properties are examined by the filter.<br />
* Weight normalization in OpenStack Compute: See: <br />
** https://review.openstack.org/#/c/27160/ Weights are normalized, so there is no need to inflate multipliers artificially. The maximum weight that a weigher will put for a node is 1.0 and the minimum is 0.0. <br />
** nova.cells.weights.weight_offset.WeightOffsetWeigher introduces a new configuration option 'offset_weight_multiplier' <br />
** https://review.openstack.org/#/c/36417/ Introduce stacking flags for weighers. Negative multipliers should not be using for stacking, but the weighers are still compatible (the issue a deprecation warning message).<br />
<br />
==== Other Features ====<br />
<br />
* Notifications are now generated upon the creation and deletion of keypairs.<br />
* Notifications are now generated when an Compute host is enabled, disabled, powered on, shut down, rebooted, put into maintenance mode and taken out of maintenance mode.<br />
* Compute services are now able to shutdown gracefully by disabling processing of new requests when a service shutdown is requested but allowing requests already in process to complete before terminating.<br />
* The Compute service determines what action to take when instances are found to be running that were previously marked deleted based on the value of the <tt>running_deleted_instance_action</tt> configuration key. A new <tt>shutdown</tt> value has been added. Using this new value allows administrators to optionally keep instances found in this state for diagnostics while still releasing the runtime resources.<br />
* File injection is now disabled by default in OpenStack Compute. Instead it is recommended that the ConfigDrive and metadata server facilities are used to modify guests at launch. To enable file injection modify the <tt>inject_key</tt> and <tt>inject_partition</tt> configuration keys in <tt>/etc/nova/nova.conf</tt> and restart the Compute services. The file injection mechanism is likely to be disabled in a future release.<br />
* A number of changes have been made to the expected format <tt>/etc/nova/nova.conf</tt> configuration file with a view to ensuring that all configuration groups in the file use descriptive names. A number of driver specific flags, including those for the Libvirt driver, have also been moved to their own option groups.<br />
<br />
=== Known Issues ===<br />
* OpenStack Compute has some features that use newer API versions from other projects, but the following are the only API versions tested in Icehouse:<br />
** Keystone v2<br />
** Cinder v1<br />
** Glance v1<br />
<br />
=== Upgrade Notes ===<br />
<br />
* https://review.openstack.org/50668 - The <tt>compute_api_class</tt> configuration option has been removed.<br />
* https://review.openstack.org/#/c/54290/ - The following deprecated configuration option aliases have been removed in favor of their new names:<br />
** <tt>service_quantum_metadata_proxy</tt><br />
** <tt>quantum_metadata_proxy_shared_secret</tt><br />
** <tt>use_quantum_default_nets</tt><br />
** <tt>quantum_default_tenant_id</tt><br />
** <tt>vpn_instance_type</tt><br />
** <tt>default_instance_type</tt><br />
** <tt>quantum_url</tt><br />
** <tt>quantum_url_timeout</tt><br />
** <tt>quantum_admin_username</tt><br />
** <tt>quantum_admin_password</tt><br />
** <tt>quantum_admin_tenant_name</tt><br />
** <tt>quantum_region_name</tt><br />
** <tt>quantum_admin_auth_url</tt><br />
** <tt>quantum_api_insecure</tt><br />
** <tt>quantum_auth_strategy</tt><br />
** <tt>quantum_ovs_bridge</tt><br />
** <tt>quantum_extension_sync_interval</tt><br />
** <tt>vmwareapi_host_ip</tt><br />
** <tt>vmwareapi_host_username</tt><br />
** <tt>vmwareapi_host_password</tt><br />
** <tt>vmwareapi_cluster_name</tt><br />
** <tt>vmwareapi_task_poll_interval</tt><br />
** <tt>vmwareapi_api_retry_count</tt><br />
** <tt>vnc_port</tt><br />
** <tt>vnc_port_total</tt><br />
** <tt>use_linked_clone</tt><br />
** <tt>vmwareapi_vlan_interface</tt><br />
** <tt>vmwareapi_wsdl_loc</tt><br />
* The PowerVM driver has been removed: https://review.openstack.org/#/c/57774/<br />
* The keystone_authtoken defaults changed in nova.conf: https://review.openstack.org/#/c/62815/<br />
* Libvirt LVM volume names changed from using instance_name_template to the instance UUID (https://review.openstack.org/#/c/76968). Manual cleanup may be required if using a non-default instance_name_template.<br />
* RBD disk names changed from using instance_name_template to the instance UUID. Manual cleanup of old virtual disks is required after the transition. (TBD find review)<br />
* Icehouse adds libguestfs as a requirement. Installing Icehouse dependencies on a system currently running Havana may cause the Havana node to begin using libguestfs and break unexpectedly. It is recommended that libvirt_inject_partition=-2 be set on Havana nodes before starting a package upgrade if the nova packages will be updated last.<br />
* Creating a private flavor now adds access to the tenant automatically. This was the documented behavior in Havana, but the actual implementation in Havana and previous versions of Nova did not add the tenant automatically to private flavors.<br />
* Nova previously included a nova.conf.sample. This file was automatically generated and is no longer included directly. If you are packaging Nova and wish to include the sample config file, see etc/nova/README.nova.conf for instructions on how to generate the file at build time.<br />
* Nova now defaults to requiring an event from Neutron when booting libvirt guests. If you upgrade Nova before Neutron, you must disable this feature in Nova until Neutron supports it by setting '''vif_plugging_is_fatal=False''' and '''vif_plugging_timeout=0'''. The recommended order is: upgrade Nova (with this feature disabled), upgrade Neutron (with the notifications enabled), and then set vif_plugging_is_fatal=True with the default value of vif_plugging_timeout.<br />
* The following configuration options are marked as deprecated in this release. See <tt>nova.conf.sample</tt> for their replacements. <tt>[GROUP]/option</tt><br />
** <tt>[DEFAULT]/rabbit_durable_queues</tt><br />
** <tt>[rpc_notifier2]/topics</tt><br />
** <tt>[DEFAULT]/log_config</tt><br />
** <tt>[DEFAULT]/logfile</tt><br />
** <tt>[DEFAULT]/logdir</tt><br />
** <tt>[DEFAULT]/base_dir_name</tt><br />
** <tt>[DEFAULT]/instance_type_extra_specs</tt><br />
** <tt>[DEFAULT]/db_backend</tt><br />
** <tt>[DEFAULT]/sql_connection</tt><br />
** <tt>[DATABASE]/sql_connection</tt><br />
** <tt>[sql]/connection</tt><br />
** <tt>[DEFAULT]/sql_idle_timeout</tt><br />
** <tt>[DATABASE]/sql_idle_timeout</tt><br />
** <tt>[sql]/idle_timeout</tt><br />
** <tt>[DEFAULT]/sql_min_pool_size</tt><br />
** <tt>[DATABASE]/sql_min_pool_size</tt><br />
** <tt>[DEFAULT]/sql_max_pool_size</tt><br />
** <tt>[DATABASE]/sql_max_pool_size</tt><br />
** <tt>[DEFAULT]/sql_max_retries</tt><br />
** <tt>[DATABASE]/sql_max_retries</tt><br />
** <tt>[DEFAULT]/sql_retry_interval</tt><br />
** <tt>[DATABASE]/reconnect_interval</tt><br />
** <tt>[DEFAULT]/sql_max_overflow</tt><br />
** <tt>[DATABASE]/sqlalchemy_max_overflow</tt><br />
** <tt>[DEFAULT]/sql_connection_debug</tt><br />
** <tt>[DEFAULT]/sql_connection_trace</tt><br />
** <tt>[DATABASE]/sqlalchemy_pool_timeout</tt><br />
** <tt>[DEFAULT]/memcache_servers</tt><br />
** <tt>[DEFAULT]/libvirt_type</tt><br />
** <tt>[DEFAULT]/libvirt_uri</tt><br />
** <tt>[DEFAULT]/libvirt_inject_password</tt><br />
** <tt>[DEFAULT]/libvirt_inject_key</tt><br />
** <tt>[DEFAULT]/libvirt_inject_partition</tt><br />
** <tt>[DEFAULT]/libvirt_vif_driver</tt><br />
** <tt>[DEFAULT]/libvirt_volume_drivers</tt><br />
** <tt>[DEFAULT]/libvirt_disk_prefix</tt><br />
** <tt>[DEFAULT]/libvirt_wait_soft_reboot_seconds</tt><br />
** <tt>[DEFAULT]/libvirt_cpu_mode</tt><br />
** <tt>[DEFAULT]/libvirt_cpu_model</tt><br />
** <tt>[DEFAULT]/libvirt_snapshots_directory</tt><br />
** <tt>[DEFAULT]/libvirt_images_type</tt><br />
** <tt>[DEFAULT]/libvirt_images_volume_group</tt><br />
** <tt>[DEFAULT]/libvirt_sparse_logical_volumes</tt><br />
** <tt>[DEFAULT]/libvirt_images_rbd_pool</tt><br />
** <tt>[DEFAULT]/libvirt_images_rbd_ceph_conf</tt><br />
** <tt>[DEFAULT]/libvirt_snapshot_compression</tt><br />
** <tt>[DEFAULT]/libvirt_use_virtio_for_bridges</tt><br />
** <tt>[DEFAULT]/libvirt_iscsi_use_multipath</tt><br />
** <tt>[DEFAULT]/libvirt_iser_use_multipath</tt><br />
** <tt>[DEFAULT]/matchmaker_ringfile</tt><br />
** <tt>[DEFAULT]/agent_timeout</tt><br />
** <tt>[DEFAULT]/agent_version_timeout</tt><br />
** <tt>[DEFAULT]/agent_resetnetwork_timeout</tt><br />
** <tt>[DEFAULT]/xenapi_agent_path</tt><br />
** <tt>[DEFAULT]/xenapi_disable_agent</tt><br />
** <tt>[DEFAULT]/xenapi_use_agent_default</tt><br />
** <tt>[DEFAULT]/xenapi_login_timeout</tt><br />
** <tt>[DEFAULT]/xenapi_connection_concurrent</tt><br />
** <tt>[DEFAULT]/xenapi_connection_url</tt><br />
** <tt>[DEFAULT]/xenapi_connection_username</tt><br />
** <tt>[DEFAULT]/xenapi_connection_password</tt><br />
** <tt>[DEFAULT]/xenapi_vhd_coalesce_poll_interval</tt><br />
** <tt>[DEFAULT]/xenapi_check_host</tt><br />
** <tt>[DEFAULT]/xenapi_vhd_coalesce_max_attempts</tt><br />
** <tt>[DEFAULT]/xenapi_sr_base_path</tt><br />
** <tt>[DEFAULT]/target_host</tt><br />
** <tt>[DEFAULT]/target_port</tt><br />
** <tt>[DEFAULT]/iqn_prefix</tt><br />
** <tt>[DEFAULT]/xenapi_remap_vbd_dev</tt><br />
** <tt>[DEFAULT]/xenapi_remap_vbd_dev_prefix</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_base_url</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_seed_chance</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_seed_duration</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_max_last_accessed</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_listen_port_start</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_listen_port_end</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_download_stall_cutoff</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_max_seeder_processes_per_host</tt><br />
** <tt>[DEFAULT]/use_join_force</tt><br />
** <tt>[DEFAULT]/xenapi_ovs_integration_bridge</tt><br />
** <tt>[DEFAULT]/cache_images</tt><br />
** <tt>[DEFAULT]/xenapi_image_compression_level</tt><br />
** <tt>[DEFAULT]/default_os_type</tt><br />
** <tt>[DEFAULT]/block_device_creation_timeout</tt><br />
** <tt>[DEFAULT]/max_kernel_ramdisk_size</tt><br />
** <tt>[DEFAULT]/sr_matching_filter</tt><br />
** <tt>[DEFAULT]/xenapi_sparse_copy</tt><br />
** <tt>[DEFAULT]/xenapi_num_vbd_unplug_retries</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_images</tt><br />
** <tt>[DEFAULT]/xenapi_ipxe_network_name</tt><br />
** <tt>[DEFAULT]/xenapi_ipxe_boot_menu_url</tt><br />
** <tt>[DEFAULT]/xenapi_ipxe_mkisofs_cmd</tt><br />
** <tt>[DEFAULT]/xenapi_running_timeout</tt><br />
** <tt>[DEFAULT]/xenapi_vif_driver</tt><br />
** <tt>[DEFAULT]/xenapi_image_upload_handler</tt><br />
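Of the upgrade notes above, the Neutron VIF-plugging ordering requires an explicit configuration step on nodes where Nova is upgraded before Neutron. A sketch of the temporary settings (restore the defaults once the upgraded Neutron emits the notifications):

```ini
# Illustrative /etc/nova/nova.conf fragment: disable the fatal wait for
# Neutron VIF-plugging events until the upgraded Neutron can emit them.
[DEFAULT]
vif_plugging_is_fatal = false
vif_plugging_timeout = 0
```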
<br />
== OpenStack Image Service (Glance) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Dashboard (Horizon) ==<br />
<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Identity (Keystone) ==<br />
<br />
=== Key New Features ===<br />
<br />
* New API features<br />
** <code>/v3/OS-FEDERATION/</code> allows Keystone to consume federated authentication via [https://shibboleth.net/ Shibboleth] for multiple Identity Providers, and to map federated attributes into OpenStack group-based role assignments (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-federation-ext.md documentation]).<br />
** <code>POST /v3/users/{user_id}/password</code> allows API users to update their own passwords (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3.md#change-user-password-post-usersuser_idpassword documentation]).<br />
** <code>GET v3/auth/token?nocatalog</code> allows API users to opt-out of receiving the service catalog when performing online token validation (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3.md#validate-token-get-authtokensnocatalog documentation]).<br />
** <code>/v3/regions</code> provides a public interface for describing multi-region deployments (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3.md#regions-v3regions documentation]).<br />
** <code>/v3/OS-SIMPLECERT/</code> now publishes the certificates used for PKI token validation (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-simple-certs-ext.md documentation]).<br />
** <code>/v3/OS-TRUST/trusts</code> is now capable of providing limited-use delegation via the <code>remaining_uses</code> attribute of [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-trust-ext.md trusts].<br />
* The assignments backend (the source of authorization data) has now been completely separated from the identity backend (the source of authentication data). This means that you can now, for example, store your deployment's identity data in LDAP and its authorization data in SQL.<br />
* KVS drivers are now capable of writing to persistent Key-Value stores such as Redis, Cassandra, or MongoDB.<br />
* Keystone's driver interfaces are now implemented as Abstract Base Classes (ABCs) to make it easier to track compatibility of custom driver implementations across releases.<br />
* Keystone's default <code>etc/policy.json</code> has been rewritten in an easier-to-read format.<br />
* [http://docs.openstack.org/developer/keystone/event_notifications.html Notifications] are now emitted in response to create, update and delete events on roles, groups, and trusts.<br />
* Custom extensions and driver implementations may now subscribe to internal-only event notifications, including ''disable'' events (which are only exposed externally as part of ''update'' events).<br />
* Keystone now emits [http://www.dmtf.org/standards/cadf Cloud Audit Data Federation (CADF)] event notifications in response to authentication events.<br />
* [https://review.openstack.org/#/c/50362/ Additional plugins] are provided to handle external authentication via <code>REMOTE_USER</code> with respect to single-domain versus multi-domain deployments.<br />
* <code>policy.json</code> can now perform enforcement on the target domain in a domain-aware operation using, for example, <code>%(target.{entity}.domain_id)s</code>.<br />
* The LDAP driver for the assignment backend now supports group-based role assignment operations.<br />
* Keystone now publishes token revocation ''events'' in addition to providing continued support for token revocation ''lists''. Token revocation events are designed to incur much less overhead (when compared to token revocation lists) and will enable Keystone to eliminate token persistence during the Juno release.<br />
* Deployers can now define arbitrary limits on the size of collections in API responses (for example, <code>GET /v3/users</code> might be configured to return only 100 users, rather than 10,000). Clients will be informed when truncation has occurred.<br />
* Lazy translation has been enabled, so responses are translated according to the requested Accept-Language header.<br />
* Keystone now emits i18n-ready log messages.<br />
* Collection filtering is now performed in the driver layer, where possible, for improved performance.<br />
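The domain-aware <code>policy.json</code> enforcement above can be sketched as a rule fragment. The target <code>identity:update_user</code> and the admin fallback are illustrative choices, not the shipped defaults:

```json
{
    "identity:update_user": "rule:admin_required or domain_id:%(target.user.domain_id)s"
}
```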
<br />
=== Known Issues ===<br />
<br />
* [https://bugs.launchpad.net/keystone/+bug/1291157 Bug 1291157]: If using the <code>OS-FEDERATION</code> extension, deleting an Identity Provider or Protocol does ''not'' result in previously-issued tokens being revoked. This will not be fixed in the ''stable/icehouse'' branch.<br />
<br />
=== Upgrade Notes ===<br />
<br />
* The v2 API has been prepared for deprecation, but remains stable in the Icehouse release. It may be formally deprecated during the Juno release pending widespread support for the v3 API.<br />
* Backwards compatibility for <code>keystone.middleware.auth_token</code> has been removed. The <code>auth_token</code> middleware module is no longer provided by Keystone itself and must be imported from <code>keystoneclient.middleware.auth_token</code> instead.<br />
* The <code>s3_token</code> middleware module is no longer provided by Keystone itself, and must be imported from <code>keystoneclient.middleware.s3_token</code> instead. Backwards compatibility for <code>keystone.middleware.s3_token</code> will be removed in Juno.<br />
* The default token duration has been reduced from 24 hours to just 1 hour. This effectively reduces the number of tokens that must be persisted at any one time, and (for PKI deployments) reduces the overhead of the token revocation list.<br />
* <code>keystone.contrib.access.core.AccessLogMiddleware</code> has been deprecated in favor of either the eventlet debug access log or Apache httpd access log and may be removed in the K release.<br />
* <code>keystone.contrib.stats.core.StatsMiddleware</code> has been deprecated in favor of external tooling and may be removed in the K release.<br />
* <code>keystone.middleware.XmlBodyMiddleware</code> has been deprecated in favor of support for "application/json" only and may be removed in the K release.<br />
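The <code>auth_token</code> move above typically amounts to a one-line change in each service's paste pipeline configuration. A sketch, assuming the usual <code>paste.filter_factory</code> layout of the period:

```ini
# Illustrative api-paste.ini fragment: the filter factory now lives in
# python-keystoneclient rather than in Keystone itself.
[filter:authtoken]
# removed: paste.filter_factory = keystone.middleware.auth_token:filter_factory
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
```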
<br />
== OpenStack Network Service (Neutron) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet.<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Block Storage (Cinder) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Metering (Ceilometer) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
https://bugs.launchpad.net/ceilometer/+bug/1297528<br />
<br />
=== Upgrade Notes ===<br />
None yet<br />
<br />
== OpenStack Orchestration (Heat) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
None yet<br />
<br />
== OpenStack Database service (Trove) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
None yet<br />
<br />
== OpenStack Documentation ==<br />
<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
None yet</div>Russellbhttps://wiki.openstack.org/w/index.php?title=ReleaseNotes/Icehouse&diff=47314ReleaseNotes/Icehouse2014-04-01T14:58:51Z<p>Russellb: /* Libvirt (KVM) */</p>
<hr />
<div>= OpenStack 2014.1 (Icehouse) Release Notes =<br />
<br />
== General Upgrade Notes ==<br />
<br />
tbd<br />
<br />
== OpenStack Object Storage (Swift) ==<br />
<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Compute (Nova) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== Compute Drivers ====<br />
<br />
===== Libvirt (KVM) =====<br />
<br />
* The Libvirt compute driver now supports providing modified kernel arguments to booting compute instances. The kernel arguments are retrieved from the <tt>os_command_line</tt> key in the image metadata as stored in Glance, if a value for the key was provided. Otherwise the default kernel arguments are used.<br />
* The Libvirt driver now supports using VirtIO SCSI (virtio-scsi) instead of VirtIO Block (virtio-blk) to provide block device access for instances. VirtIO SCSI is a para-virtualized SCSI controller device designed as a future successor to VirtIO Block, aiming to provide improved scalability and performance.<br />
* The Libvirt Compute driver now supports adding a VirtIO RNG device to compute instances to provide increased entropy. VirtIO RNG is a paravirtual random number generation device that allows the compute node to provide entropy to compute instances in order to fill their entropy pool. The default entropy device used is <tt>/dev/random</tt>; however, a physical hardware RNG device attached to the host can also be used. The VirtIO RNG device is enabled using the <tt>hw_rng</tt> property in the metadata of the image used to build the instance.<br />
* The Libvirt driver now allows instances to be configured to use a video driver other than the default (<tt>cirrus</tt>). This allows the specification of different video driver models, different amounts of video RAM, and different numbers of heads. These values are configured by setting the <tt>hw_video_model</tt>, <tt>hw_video_vram</tt>, and <tt>hw_video_head</tt> properties in the image metadata. Currently supported video driver models are <tt>vga</tt>, <tt>cirrus</tt>, <tt>vmvga</tt>, <tt>xen</tt> and <tt>qxl</tt>.<br />
* Watchdog support has been added to the Libvirt driver. The watchdog device used is <tt>i6300esb</tt>. It is enabled by setting the <tt>hw_watchdog_action</tt> property in the image properties or flavor extra specifications (<tt>extra_specs</tt>) to a value other than <tt>disabled</tt>. Supported <tt>hw_watchdog_action</tt> property values, which denote the action for the watchdog device to take in the event of an instance failure, are <tt>poweroff</tt>, <tt>reset</tt>, <tt>pause</tt>, and <tt>none</tt>.<br />
* The High Precision Event Timer (HPET) is now disabled for instances created using the Libvirt driver. The use of this option was found to lead to clock drift in Windows guests when under heavy load.<br />
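The Libvirt features above are all driven by Glance image properties. The following toy sketch collects the property names and value sets given in the notes; the validation helper is a stand-in for illustration, not Nova's actual code, and the example values are assumptions:

```python
# Toy illustration: the Glance image properties that drive the new
# Libvirt features described above, with the value sets the notes name.
SUPPORTED_VIDEO_MODELS = {"vga", "cirrus", "vmvga", "xen", "qxl"}
SUPPORTED_WATCHDOG_ACTIONS = {"disabled", "poweroff", "reset", "pause", "none"}

def validate_image_properties(props):
    """Return a list of problems found in Libvirt-related image properties."""
    problems = []
    model = props.get("hw_video_model")
    if model is not None and model not in SUPPORTED_VIDEO_MODELS:
        problems.append("unsupported hw_video_model: %s" % model)
    action = props.get("hw_watchdog_action")
    if action is not None and action not in SUPPORTED_WATCHDOG_ACTIONS:
        problems.append("unsupported hw_watchdog_action: %s" % action)
    return problems

# Illustrative (assumed) metadata for an image booting with a serial
# console, a VirtIO RNG device, a reset watchdog, and a QXL adapter.
example = {
    "os_command_line": "console=ttyS0",   # kernel args passed at boot
    "hw_rng": "true",                     # attach a VirtIO RNG device
    "hw_watchdog_action": "reset",        # i6300esb watchdog behaviour
    "hw_video_model": "qxl",
}
```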
<br />
===== VMware =====<br />
<br />
* The VMware Compute drivers now support the virtual machine diagnostics call. Diagnostics can be retrieved using the "nova diagnostics INSTANCE" command, where INSTANCE is replaced by an instance name or instance identifier.<br />
* The VMware Compute drivers now support booting an instance from an ISO image.<br />
* The VMware Compute drivers now support the aging of cached images.<br />
<br />
===== XenServer =====<br />
<br />
TODO...<br />
<br />
==== API ====<br />
<br />
* In OpenStack Compute, the OS-DCF:diskConfig API attribute is no longer supported in V3 of the nova API.<br />
* The Compute API currently supports both XML and JSON formats. Support for the XML format is now deprecated and will be retired in a future release.<br />
* The Compute API now exposes a mechanism for permanently removing decommissioned compute nodes. Previously these would continue to be listed even after the compute service had been disabled and the system re-provisioned. This functionality is provided by the "ExtendedServicesDelete" API extension.<br />
* The V3 API admin_actions plugin has been separated into logically distinct plugins so that operators can enable subsets of the functionality currently present in the plugin.<br />
* The Compute service now uses the tenant identifier instead of the tenant name when authenticating with OpenStack Networking (Neutron). This improves support for the OpenStack Identity API v3 which allows non-unique tenant names.<br />
* The Compute API now exposes the hypervisor IP address, allowing it to be retrieved by administrators using the "nova hypervisor-show" command.<br />
<br />
==== Scheduler ====<br />
<br />
* The scheduler now includes an initial implementation of a caching scheduler driver. The caching scheduler uses the existing facilities for applying scheduler filters and weights but caches the list of available hosts. When a user request is passed to the caching scheduler it attempts to perform scheduling based on the list of cached hosts, with a view to improving scheduler performance.<br />
* A new scheduler filter, "AggregateImagePropertiesIsolation", has been introduced. The new filter schedules instances to hosts based on matching namespaced image properties with host aggregate properties. Hosts that do not belong to any host aggregate remain valid scheduling targets for instances based on all images. The new Compute service configuration keys "aggregate_image_properties_isolation_namespace" and "aggregate_image_properties_isolation_separator" are used to determine which image properties are examined by the filter.<br />
* Weight normalization in OpenStack Compute; see:<br />
** https://review.openstack.org/#/c/27160/ Weights are normalized, so there is no need to inflate multipliers artificially. The maximum weight that a weigher will put for a node is 1.0 and the minimum is 0.0. <br />
** nova.cells.weights.weight_offset.WeightOffsetWeigher introduces a new configuration option 'offset_weight_multiplier' <br />
** https://review.openstack.org/#/c/36417/ Introduces stacking flags for weighers. Negative multipliers should not be used for stacking, but the weighers remain compatible (they issue a deprecation warning message).<br />
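The normalization described above can be sketched as follows: each weigher's raw weights are rescaled to [0.0, 1.0] before its multiplier is applied, so multipliers express relative importance rather than raw magnitude. This is a simplified stand-in, not Nova's weigher code:

```python
def normalize(weights):
    """Rescale one weigher's raw outputs to the [0.0, 1.0] range."""
    lo, hi = min(weights), max(weights)
    if lo == hi:                      # all hosts weigh the same
        return [0.0 for _ in weights]
    return [(w - lo) / (hi - lo) for w in weights]

def combined_weight(per_weigher_raw, multipliers):
    """Sum normalized weights across weighers, scaled by each multiplier,
    so there is no need to inflate multipliers artificially."""
    totals = [0.0] * len(per_weigher_raw[0])
    for raw, mult in zip(per_weigher_raw, multipliers):
        for i, w in enumerate(normalize(raw)):
            totals[i] += mult * w
    return totals
```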
<br />
==== Other Features ====<br />
<br />
* Notifications are now generated upon the creation and deletion of keypairs.<br />
* Notifications are now generated when a Compute host is enabled, disabled, powered on, shut down, rebooted, put into maintenance mode, or taken out of maintenance mode.<br />
* Compute services are now able to shut down gracefully by disabling processing of new requests when a service shutdown is requested, while allowing requests already in process to complete before terminating.<br />
* The Compute service determines what action to take when instances are found to be running that were previously marked deleted based on the value of the <tt>running_deleted_instance_action</tt> configuration key. A new <tt>shutdown</tt> value has been added. Using this new value allows administrators to optionally keep instances found in this state for diagnostics while still releasing the runtime resources.<br />
* File injection is now disabled by default in OpenStack Compute. Instead, it is recommended that the ConfigDrive and metadata server facilities be used to modify guests at launch. To enable file injection, modify the <tt>inject_key</tt> and <tt>inject_partition</tt> configuration keys in <tt>/etc/nova/nova.conf</tt> and restart the Compute services. The file injection mechanism is likely to be disabled in a future release.<br />
* A number of changes have been made to the expected format of the <tt>/etc/nova/nova.conf</tt> configuration file with a view to ensuring that all configuration groups in the file use descriptive names. A number of driver-specific flags, including those for the Libvirt driver, have also been moved to their own option groups.<br />
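The <tt>running_deleted_instance_action</tt> note above can be sketched as a <tt>nova.conf</tt> fragment; the poll-interval option shown is a separate, pre-existing setting, and its value here is illustrative:

```ini
# Illustrative /etc/nova/nova.conf fragment: power off (rather than reap)
# instances found running but already marked deleted, keeping them
# available for diagnostics while releasing runtime resources.
[DEFAULT]
running_deleted_instance_action = shutdown
running_deleted_instance_poll_interval = 1800   # seconds between checks
```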
* [http://docs.openstack.org/developer/keystone/event_notifications.html Notifications] are now emitted in response to create, update and delete events on roles, groups, and trusts.<br />
* Custom extensions and driver implementations may now subscribe to internal-only event notifications, including ''disable'' events (which are only exposed externally as part of ''update'' events).<br />
* Keystone now emits [http://www.dmtf.org/standards/cadf Cloud Audit Data Federation (CADF)] event notifications in response to authentication events.<br />
* [https://review.openstack.org/#/c/50362/ Additional plugins] are provided to handle external authentication via <code>REMOTE_USER</code> with respect to single-domain versus multi-domain deployments.<br />
* <code>policy.json</code> can now perform enforcement on the target domain in a domain-aware operation using, for example, <code>%(target.{entity}.domain_id)s</code>.<br />
* The LDAP driver for the assignment backend now supports group-based role assignment operations.<br />
* Keystone now publishes token revocation ''events'' in addition to providing continued support for token revocation ''lists''. Token revocation events are designed to incur much less overhead (when compared to token revocation lists) and will enable Keystone to eliminate token persistence during the Juno release.<br />
* Deployers can now define arbitrary limits on the size of collections in API responses (for example, <code>GET /v3/users</code> might be configured to return only 100 users, rather than 10,000). Clients will be informed when truncation has occurred.<br />
* Lazy translation has been enabled, translating responses according to the requested Accept-Language header.<br />
* Keystone now emits i18n-ready log messages.<br />
* Collection filtering is now performed in the driver layer, where possible, for improved performance.<br />
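As a sketch of the new self-service password API above, the request can be assembled as follows (the user id and passwords are hypothetical placeholders; the body shape follows the v3 identity API document linked above):<br />

```python
import json

def password_change_request(user_id, original_password, new_password):
    """Build the self-service password change call described above.

    The user id and passwords are hypothetical placeholders; the body
    shape follows the v3 identity API document linked above.
    """
    url = "/v3/users/%s/password" % user_id
    body = {"user": {"original_password": original_password,
                     "password": new_password}}
    return url, json.dumps(body)

url, body = password_change_request("u123", "old-secret", "new-secret")
print(url)  # /v3/users/u123/password
```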
<br />
=== Known Issues ===<br />
<br />
* [https://bugs.launchpad.net/keystone/+bug/1291157 Bug 1291157]: If using the <code>OS-FEDERATION</code> extension, deleting an Identity Provider or Protocol does ''not'' result in previously-issued tokens being revoked. This will not be fixed in the ''stable/icehouse'' branch.<br />
<br />
=== Upgrade Notes ===<br />
<br />
* The v2 API has been prepared for deprecation, but remains stable in the Icehouse release. It may be formally deprecated during the Juno release pending widespread support for the v3 API.<br />
* Backwards compatibility for <code>keystone.middleware.auth_token</code> has been removed. The <code>auth_token</code> middleware module is no longer provided by Keystone itself, and must be imported from <code>keystoneclient.middleware.auth_token</code> instead.<br />
* The <code>s3_token</code> middleware module is no longer provided by Keystone itself, and must be imported from <code>keystoneclient.middleware.s3_token</code> instead. Backwards compatibility for <code>keystone.middleware.s3_token</code> will be removed in Juno.<br />
* The default token duration has been reduced from 24 hours to just 1 hour. This effectively reduces the number of tokens that must be persisted at any one time, and (for PKI deployments) reduces the overhead of the token revocation list.<br />
* <code>keystone.contrib.access.core.AccessLogMiddleware</code> has been deprecated in favor of either the eventlet debug access log or Apache httpd access log and may be removed in the K release.<br />
* <code>keystone.contrib.stats.core.StatsMiddleware</code> has been deprecated in favor of external tooling and may be removed in the K release.<br />
* <code>keystone.middleware.XmlBodyMiddleware</code> has been deprecated in favor of support for "application/json" only and may be removed in the K release.<br />
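For deployments that referenced the middleware by its old path in a Paste pipeline, the upgrade amounts to updating the filter factory; the fragment below is illustrative (the filter section name may differ per deployment):<br />

```ini
[filter:authtoken]
# Old (removed): keystone.middleware.auth_token:filter_factory
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
```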
<br />
== OpenStack Network Service (Neutron) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet.<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Block Storage (Cinder) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Metering (Ceilometer) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
* [https://bugs.launchpad.net/ceilometer/+bug/1297528 Bug 1297528]<br />
<br />
=== Upgrade Notes ===<br />
None yet<br />
<br />
== OpenStack Orchestration (Heat) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
None yet<br />
<br />
== OpenStack Database service (Trove) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
None yet<br />
<br />
== OpenStack Documentation ==<br />
<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
None yet</div>Russellbhttps://wiki.openstack.org/w/index.php?title=ReleaseNotes/Icehouse&diff=47309ReleaseNotes/Icehouse2014-04-01T14:50:52Z<p>Russellb: /* Upgrade Notes */</p>
<hr />
<div>= OpenStack 2014.1 (Icehouse) Release Notes =<br />
<br />
== General Upgrade Notes ==<br />
<br />
tbd<br />
<br />
== OpenStack Object Storage (Swift) ==<br />
<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Compute (Nova) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== Compute Drivers ====<br />
<br />
===== Libvirt (KVM) =====<br />
<br />
* The Libvirt compute driver now supports providing modified kernel arguments to booting compute instances. The kernel arguments are retrieved from the "os_command_line" key in the image metadata as stored in Glance, if a value for the key was provided. Otherwise the default kernel arguments are used.<br />
* The Libvirt driver now supports using VirtIO SCSI (virtio-scsi) instead of VirtIO Block (virtio-blk) to provide block device access for instances. Virtio SCSI is a para-virtualized SCSI controller device designed as a future successor to VirtIO Block and aiming to provide improved scalability and performance.<br />
* The Libvirt Compute driver now supports adding a Virtio RNG device to compute instances to provide increased entropy. Virtio RNG is a paravirtual random number generation device. It allows the compute node to provide entropy to the compute instances in order to fill their entropy pool. The default entropy device used is /dev/random; however, use of a physical hardware RNG device attached to the host is also possible. The use of the Virtio RNG device is enabled using the hw_rng property in the metadata of the image used to build the instance.<br />
* The Libvirt driver now allows instances to be configured with a video driver other than the default (cirrus). This allows the specification of different video driver models, different amounts of video RAM, and different numbers of heads. These values are configured by setting the "hw_video_model", "hw_video_vram", and "hw_video_head" properties in the image metadata. Currently supported video driver models are "vga", "cirrus", "vmvga", "xen" and "qxl".<br />
* Watchdog support has been added to the Libvirt driver. The watchdog device used is "i6300esb". It is enabled by setting the "hw_watchdog_action" property in the image properties or flavor extra specifications ("extra_specs") to a value other than "disabled". Supported "hw_watchdog_action" property values, which denote the action for the watchdog device to take in the event of an instance failure, are "poweroff", "reset", "pause", and "none".<br />
* The High Precision Event Timer (HPET) is now disabled for instances created using the Libvirt driver. The use of this option was found to lead to clock drift in Windows guests when under heavy load.<br />
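The Libvirt features above are driven by image metadata properties, typically set with the Glance client. The sketch below only checks values against the supported sets quoted in these notes; the property values shown are illustrative examples, not defaults:<br />

```python
# Supported values quoted from the notes above; the property values in
# image_properties are illustrative examples, not defaults.
SUPPORTED_VIDEO_MODELS = {"vga", "cirrus", "vmvga", "xen", "qxl"}
SUPPORTED_WATCHDOG_ACTIONS = {"disabled", "poweroff", "reset", "pause", "none"}

image_properties = {
    "os_command_line": "console=ttyS0",  # example custom kernel arguments
    "hw_watchdog_action": "reset",       # i6300esb action on instance failure
    "hw_video_model": "qxl",
    "hw_video_vram": "64",
    "hw_video_head": "1",
}

def validate(props):
    """Check the driver-related properties against the supported values."""
    if props.get("hw_video_model", "cirrus") not in SUPPORTED_VIDEO_MODELS:
        raise ValueError("unsupported video model")
    if props.get("hw_watchdog_action", "disabled") not in SUPPORTED_WATCHDOG_ACTIONS:
        raise ValueError("unsupported watchdog action")
    return True

print(validate(image_properties))  # True
```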
<br />
===== VMware =====<br />
<br />
* The VMware Compute drivers now support the virtual machine diagnostics call. Diagnostics can be retrieved using the "nova diagnostics INSTANCE" command, where INSTANCE is replaced by an instance name or instance identifier.<br />
* The VMware Compute drivers now support booting an instance from an ISO image.<br />
* The VMware Compute drivers now support the aging of cached images.<br />
<br />
===== XenServer =====<br />
<br />
TODO...<br />
<br />
==== API ====<br />
<br />
* In OpenStack Compute, the OS-DCF:diskConfig API attribute is no longer supported in V3 of the nova API.<br />
* The Compute API currently supports both XML and JSON formats. Support for the XML format is now deprecated and will be retired in a future release.<br />
* The Compute API now exposes a mechanism for permanently removing decommissioned compute nodes. Previously these would continue to be listed even where the compute service had been disabled and the system re-provisioned. This functionality is provided by the "ExtendedServicesDelete" API extension.<br />
* The V3 API admin_actions plugin has been separated into logically distinct plugins, so operators can enable subsets of the functionality previously bundled in the single plugin.<br />
* The Compute service now uses the tenant identifier instead of the tenant name when authenticating with OpenStack Networking (Neutron). This improves support for the OpenStack Identity API v3 which allows non-unique tenant names.<br />
* The Compute API now exposes the hypervisor IP address, allowing it to be retrieved by administrators using the "nova hypervisor-show" command.<br />
<br />
==== Scheduler ====<br />
<br />
* The scheduler now includes an initial implementation of a caching scheduler driver. The caching scheduler uses the existing facilities for applying scheduler filters and weights but caches the list of available hosts. When a user request is passed to the caching scheduler it attempts to perform scheduling based on the list of cached hosts, with a view to improving scheduler performance.<br />
* A new scheduler filter, "AggregateImagePropertiesIsolation", has been introduced. The new filter schedules instances to hosts based on matching namespaced image properties with host aggregate properties. Hosts that do not belong to any host aggregate remain valid scheduling targets for instances based on all images. The new Compute service configuration keys "aggregate_image_properties_isolation_namespace" and "aggregate_image_properties_isolation_separator" are used to determine which image properties are examined by the filter.<br />
* Weight normalization in OpenStack Compute:<br />
** https://review.openstack.org/#/c/27160/ Weights are normalized, so there is no need to inflate multipliers artificially. The maximum weight that a weigher will assign to a node is 1.0 and the minimum is 0.0.<br />
** nova.cells.weights.weight_offset.WeightOffsetWeigher introduces a new configuration option, 'offset_weight_multiplier'.<br />
** https://review.openstack.org/#/c/36417/ introduces stacking flags for weighers. Negative multipliers should not be used for stacking, but the weighers are still compatible (they issue a deprecation warning message).<br />
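The normalization described above can be sketched as a simple min-max scaling; this is a simplified illustration, not Nova's actual weigher classes (which live under nova.scheduler.weights):<br />

```python
def normalize(weights):
    """Scale raw weigher outputs into [0.0, 1.0] via min-max normalization."""
    lo, hi = min(weights), max(weights)
    if lo == hi:  # all hosts weighed equally
        return [0.0 for _ in weights]
    return [(w - lo) / float(hi - lo) for w in weights]

def weigh_hosts(raw_weights, multiplier=1.0):
    # The multiplier scales the normalized weight; since the result is
    # already in [0, 1], there is no need to inflate it artificially.
    return [multiplier * w for w in normalize(raw_weights)]

print(weigh_hosts([10, 30, 20]))  # [0.0, 1.0, 0.5]
```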
<br />
==== Other Features ====<br />
<br />
* Notifications are now generated upon the creation and deletion of keypairs.<br />
* Notifications are now generated when a Compute host is enabled, disabled, powered on, shut down, rebooted, put into maintenance mode and taken out of maintenance mode.<br />
* Compute services are now able to shut down gracefully by disabling processing of new requests when a service shutdown is requested, while allowing requests already in process to complete before terminating.<br />
* The Compute service determines what action to take when instances are found to be running that were previously marked deleted based on the value of the <tt>running_deleted_instance_action</tt> configuration key. A new <tt>shutdown</tt> value has been added. Using this new value allows administrators to optionally keep instances found in this state for diagnostics while still releasing the runtime resources.<br />
* File injection is now disabled by default in OpenStack Compute. Instead it is recommended that the ConfigDrive and metadata server facilities are used to modify guests at launch. To enable file injection modify the <tt>inject_key</tt> and <tt>inject_partition</tt> configuration keys in <tt>/etc/nova/nova.conf</tt> and restart the Compute services. The file injection mechanism is likely to be disabled in a future release.<br />
* A number of changes have been made to the expected format <tt>/etc/nova/nova.conf</tt> configuration file with a view to ensuring that all configuration groups in the file use descriptive names. A number of driver specific flags, including those for the Libvirt driver, have also been moved to their own option groups.<br />
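An illustrative nova.conf fragment reflecting the notes above; the values shown are examples rather than recommended defaults, and group placement may vary since driver-specific flags moved to their own option groups in this release:<br />

```ini
[DEFAULT]
# Keep the runtime resources of instances found running but marked
# deleted, for diagnostics, instead of reaping them immediately.
running_deleted_instance_action = shutdown
# File injection is now disabled by default; prefer ConfigDrive or the
# metadata server for modifying guests at launch.
inject_key = false
inject_partition = -2
```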
<br />
=== Known Issues ===<br />
* OpenStack Compute has some features that use newer API versions from other projects, but the following are the only API versions tested in Icehouse:<br />
** Keystone v2<br />
** Cinder v1<br />
** Glance v1<br />
<br />
=== Upgrade Notes ===<br />
<br />
* https://review.openstack.org/50668 - The <tt>compute_api_class</tt> configuration option has been removed.<br />
* https://review.openstack.org/#/c/54290/ - The following deprecated configuration option aliases have been removed in favor of their new names:<br />
** <tt>service_quantum_metadata_proxy</tt><br />
** <tt>quantum_metadata_proxy_shared_secret</tt><br />
** <tt>use_quantum_default_nets</tt><br />
** <tt>quantum_default_tenant_id</tt><br />
** <tt>vpn_instance_type</tt><br />
** <tt>default_instance_type</tt><br />
** <tt>quantum_url</tt><br />
** <tt>quantum_url_timeout</tt><br />
** <tt>quantum_admin_username</tt><br />
** <tt>quantum_admin_password</tt><br />
** <tt>quantum_admin_tenant_name</tt><br />
** <tt>quantum_region_name</tt><br />
** <tt>quantum_admin_auth_url</tt><br />
** <tt>quantum_api_insecure</tt><br />
** <tt>quantum_auth_strategy</tt><br />
** <tt>quantum_ovs_bridge</tt><br />
** <tt>quantum_extension_sync_interval</tt><br />
** <tt>vmwareapi_host_ip</tt><br />
** <tt>vmwareapi_host_username</tt><br />
** <tt>vmwareapi_host_password</tt><br />
** <tt>vmwareapi_cluster_name</tt><br />
** <tt>vmwareapi_task_poll_interval</tt><br />
** <tt>vmwareapi_api_retry_count</tt><br />
** <tt>vnc_port</tt><br />
** <tt>vnc_port_total</tt><br />
** <tt>use_linked_clone</tt><br />
** <tt>vmwareapi_vlan_interface</tt><br />
** <tt>vmwareapi_wsdl_loc</tt><br />
* The PowerVM driver has been removed: https://review.openstack.org/#/c/57774/<br />
* The keystone_authtoken defaults changed in nova.conf: https://review.openstack.org/#/c/62815/<br />
* libvirt lvm names changed from using instance_name_template to instance uuid (https://review.openstack.org/#/c/76968). Possible manual cleanup required if using a non default instance_name_template.<br />
* rbd disk names changed from using instance_name_template to instance uuid. Manual cleanup required of old virtual disks after the transition. (TBD find review)<br />
* Icehouse brings libguestfs as a requirement. Installing icehouse dependencies on a system currently running havana may cause the havana node to begin using libguestfs and break unexpectedly. It is recommended that libvirt_inject_partition=-2 be set on havana nodes prior to starting an upgrade of packages on the system if the nova packages will be updated last.<br />
* Creating a private flavor now adds access to the tenant automatically. This was the documented behavior in Havana, but the actual implementation in Havana and previous versions of Nova did not add the tenant automatically to private flavors.<br />
* Nova previously included a nova.conf.sample. This file was automatically generated and is no longer included directly. If you are packaging Nova and wish to include the sample config file, see etc/nova/README.nova.conf for instructions on how to generate the file at build time.<br />
* The following configuration options are marked as deprecated in this release. See <tt>nova.conf.sample</tt> for their replacements. <tt>[GROUP]/option</tt><br />
** <tt>[DEFAULT]/rabbit_durable_queues</tt><br />
** <tt>[rpc_notifier2]/topics</tt><br />
** <tt>[DEFAULT]/log_config</tt><br />
** <tt>[DEFAULT]/logfile</tt><br />
** <tt>[DEFAULT]/logdir</tt><br />
** <tt>[DEFAULT]/base_dir_name</tt><br />
** <tt>[DEFAULT]/instance_type_extra_specs</tt><br />
** <tt>[DEFAULT]/db_backend</tt><br />
** <tt>[DEFAULT]/sql_connection</tt><br />
** <tt>[DATABASE]/sql_connection</tt><br />
** <tt>[sql]/connection</tt><br />
** <tt>[DEFAULT]/sql_idle_timeout</tt><br />
** <tt>[DATABASE]/sql_idle_timeout</tt><br />
** <tt>[sql]/idle_timeout</tt><br />
** <tt>[DEFAULT]/sql_min_pool_size</tt><br />
** <tt>[DATABASE]/sql_min_pool_size</tt><br />
** <tt>[DEFAULT]/sql_max_pool_size</tt><br />
** <tt>[DATABASE]/sql_max_pool_size</tt><br />
** <tt>[DEFAULT]/sql_max_retries</tt><br />
** <tt>[DATABASE]/sql_max_retries</tt><br />
** <tt>[DEFAULT]/sql_retry_interval</tt><br />
** <tt>[DATABASE]/reconnect_interval</tt><br />
** <tt>[DEFAULT]/sql_max_overflow</tt><br />
** <tt>[DATABASE]/sqlalchemy_max_overflow</tt><br />
** <tt>[DEFAULT]/sql_connection_debug</tt><br />
** <tt>[DEFAULT]/sql_connection_trace</tt><br />
** <tt>[DATABASE]/sqlalchemy_pool_timeout</tt><br />
** <tt>[DEFAULT]/memcache_servers</tt><br />
** <tt>[DEFAULT]/libvirt_type</tt><br />
** <tt>[DEFAULT]/libvirt_uri</tt><br />
** <tt>[DEFAULT]/libvirt_inject_password</tt><br />
** <tt>[DEFAULT]/libvirt_inject_key</tt><br />
** <tt>[DEFAULT]/libvirt_inject_partition</tt><br />
** <tt>[DEFAULT]/libvirt_vif_driver</tt><br />
** <tt>[DEFAULT]/libvirt_volume_drivers</tt><br />
** <tt>[DEFAULT]/libvirt_disk_prefix</tt><br />
** <tt>[DEFAULT]/libvirt_wait_soft_reboot_seconds</tt><br />
** <tt>[DEFAULT]/libvirt_cpu_mode</tt><br />
** <tt>[DEFAULT]/libvirt_cpu_model</tt><br />
** <tt>[DEFAULT]/libvirt_snapshots_directory</tt><br />
** <tt>[DEFAULT]/libvirt_images_type</tt><br />
** <tt>[DEFAULT]/libvirt_images_volume_group</tt><br />
** <tt>[DEFAULT]/libvirt_sparse_logical_volumes</tt><br />
** <tt>[DEFAULT]/libvirt_images_rbd_pool</tt><br />
** <tt>[DEFAULT]/libvirt_images_rbd_ceph_conf</tt><br />
** <tt>[DEFAULT]/libvirt_snapshot_compression</tt><br />
** <tt>[DEFAULT]/libvirt_use_virtio_for_bridges</tt><br />
** <tt>[DEFAULT]/libvirt_iscsi_use_multipath</tt><br />
** <tt>[DEFAULT]/libvirt_iser_use_multipath</tt><br />
** <tt>[DEFAULT]/matchmaker_ringfile</tt><br />
** <tt>[DEFAULT]/agent_timeout</tt><br />
** <tt>[DEFAULT]/agent_version_timeout</tt><br />
** <tt>[DEFAULT]/agent_resetnetwork_timeout</tt><br />
** <tt>[DEFAULT]/xenapi_agent_path</tt><br />
** <tt>[DEFAULT]/xenapi_disable_agent</tt><br />
** <tt>[DEFAULT]/xenapi_use_agent_default</tt><br />
** <tt>[DEFAULT]/xenapi_login_timeout</tt><br />
** <tt>[DEFAULT]/xenapi_connection_concurrent</tt><br />
** <tt>[DEFAULT]/xenapi_connection_url</tt><br />
** <tt>[DEFAULT]/xenapi_connection_username</tt><br />
** <tt>[DEFAULT]/xenapi_connection_password</tt><br />
** <tt>[DEFAULT]/xenapi_vhd_coalesce_poll_interval</tt><br />
** <tt>[DEFAULT]/xenapi_check_host</tt><br />
** <tt>[DEFAULT]/xenapi_vhd_coalesce_max_attempts</tt><br />
** <tt>[DEFAULT]/xenapi_sr_base_path</tt><br />
** <tt>[DEFAULT]/target_host</tt><br />
** <tt>[DEFAULT]/target_port</tt><br />
** <tt>[DEFAULT]/iqn_prefix</tt><br />
** <tt>[DEFAULT]/xenapi_remap_vbd_dev</tt><br />
** <tt>[DEFAULT]/xenapi_remap_vbd_dev_prefix</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_base_url</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_seed_chance</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_seed_duration</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_max_last_accessed</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_listen_port_start</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_listen_port_end</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_download_stall_cutoff</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_max_seeder_processes_per_host</tt><br />
** <tt>[DEFAULT]/use_join_force</tt><br />
** <tt>[DEFAULT]/xenapi_ovs_integration_bridge</tt><br />
** <tt>[DEFAULT]/cache_images</tt><br />
** <tt>[DEFAULT]/xenapi_image_compression_level</tt><br />
** <tt>[DEFAULT]/default_os_type</tt><br />
** <tt>[DEFAULT]/block_device_creation_timeout</tt><br />
** <tt>[DEFAULT]/max_kernel_ramdisk_size</tt><br />
** <tt>[DEFAULT]/sr_matching_filter</tt><br />
** <tt>[DEFAULT]/xenapi_sparse_copy</tt><br />
** <tt>[DEFAULT]/xenapi_num_vbd_unplug_retries</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_images</tt><br />
** <tt>[DEFAULT]/xenapi_ipxe_network_name</tt><br />
** <tt>[DEFAULT]/xenapi_ipxe_boot_menu_url</tt><br />
** <tt>[DEFAULT]/xenapi_ipxe_mkisofs_cmd</tt><br />
** <tt>[DEFAULT]/xenapi_running_timeout</tt><br />
** <tt>[DEFAULT]/xenapi_vif_driver</tt><br />
** <tt>[DEFAULT]/xenapi_image_upload_handler</tt><br />
<br />
== OpenStack Image Service (Glance) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Dashboard (Horizon) ==<br />
<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Identity (Keystone) ==<br />
<br />
=== Key New Features ===<br />
<br />
* New API features<br />
** <code>/v3/OS-FEDERATION/</code> allows Keystone to consume federated authentication via [https://shibboleth.net/ Shibboleth] for multiple Identity Providers, and mapping federated attributes into OpenStack group-based role assignments (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-federation-ext.md documentation]).<br />
** <code>POST /v3/users/{user_id}/password</code> allows API users to update their own passwords (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3.md#change-user-password-post-usersuser_idpassword documentation]).<br />
** <code>GET v3/auth/token?nocatalog</code> allows API users to opt-out of receiving the service catalog when performing online token validation (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3.md#validate-token-get-authtokensnocatalog documentation]).<br />
** <code>/v3/regions</code> provides a public interface for describing multi-region deployments (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3.md#regions-v3regions documentation]).<br />
** <code>/v3/OS-SIMPLECERT/</code> now publishes the certificates used for PKI token validation (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-simple-certs-ext.md documentation]).<br />
** <code>/v3/OS-TRUST/trusts</code> is now capable of providing limited-use delegation via the <code>remaining_uses</code> attribute of [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-trust-ext.md trusts].<br />
* The assignments backend (the source of authorization data) has now been completely separated from the identity backend (the source of authentication data). This means that you can now back your deployment's identity data to LDAP, and your authorization data to SQL, for example.<br />
* KVS drivers are now capable of writing to persistent Key-Value stores such as Redis, Cassandra, or MongoDB.<br />
* Keystone's driver interfaces are now implemented as Abstract Base Classes (ABCs) to make it easier to track compatibility of custom driver implementations across releases.<br />
* Keystone's default <code>etc/policy.json</code> has been rewritten in an easier to read format.<br />
* [http://docs.openstack.org/developer/keystone/event_notifications.html Notifications] are now emitted in response to create, update and delete events on roles, groups, and trusts.<br />
* Custom extensions and driver implementations may now subscribe to internal-only event notifications, including ''disable'' events (which are only exposed externally as part of ''update'' events).<br />
* Keystone now emits [http://www.dmtf.org/standards/cadf Cloud Audit Data Federation (CADF)] event notifications in response to authentication events.<br />
* [https://review.openstack.org/#/c/50362/ Additional plugins] are provided to handle external authentication via <code>REMOTE_USER</code> with respect to single-domain versus multi-domain deployments.<br />
** <code>policy.json</code> can now perform enforcement on the target domain in a domain-aware operation using, for example, <code>%(target.{entity}.domain_id)s</code>.<br />
* The LDAP driver for the assignment backend now supports group-based role assignment operations.<br />
* Keystone now publishes token revocation ''events'' in addition to providing continued support for token revocation ''lists''. Token revocation events are designed to incur much less overhead (when compared to token revocation lists) and will enable Keystone to eliminate token persistence during the Juno release.<br />
* Deployers can now define arbitrary limits on the size of collections in API responses (for example, <code>GET /v3/users</code> might be configured to return only 100 users, rather than 10,000). Clients will be informed when truncation has occurred.<br />
* Lazy translation has been enabled, translating responses according to the requested Accept-Language header.<br />
* Keystone now emits i18n-ready log messages.<br />
* Collection filtering is now performed in the driver layer, where possible, for improved performance.<br />
<br />
=== Known Issues ===<br />
<br />
* [https://bugs.launchpad.net/keystone/+bug/1291157 Bug 1291157]: If using the <code>OS-FEDERATION</code> extension, deleting an Identity Provider or Protocol does ''not'' result in previously-issued tokens being revoked. This will not be fixed in the ''stable/icehouse'' branch.<br />
<br />
=== Upgrade Notes ===<br />
<br />
* The v2 API has been prepared for deprecation, but remains stable in the Icehouse release. It may be formally deprecated during the Juno release pending widespread support for the v3 API.<br />
* Backwards compatibility for <code>keystone.middleware.auth_token</code> has been removed. The <code>auth_token</code> middleware module is no longer provided by Keystone itself, and must be imported from <code>keystoneclient.middleware.auth_token</code> instead.<br />
* The <code>s3_token</code> middleware module is no longer provided by Keystone itself, and must be imported from <code>keystoneclient.middleware.s3_token</code> instead. Backwards compatibility for <code>keystone.middleware.s3_token</code> will be removed in Juno.<br />
* The default token duration has been reduced from 24 hours to just 1 hour. This effectively reduces the number of tokens that must be persisted at any one time, and (for PKI deployments) reduces the overhead of the token revocation list.<br />
* <code>keystone.contrib.access.core.AccessLogMiddleware</code> has been deprecated in favor of either the eventlet debug access log or Apache httpd access log and may be removed in the K release.<br />
* <code>keystone.contrib.stats.core.StatsMiddleware</code> has been deprecated in favor of external tooling and may be removed in the K release.<br />
* <code>keystone.middleware.XmlBodyMiddleware</code> has been deprecated in favor of support for "application/json" only and may be removed in the K release.<br />
<br />
== OpenStack Network Service (Neutron) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet.<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Block Storage (Cinder) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Metering (Ceilometer) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
* [https://bugs.launchpad.net/ceilometer/+bug/1297528 Bug 1297528]<br />
<br />
=== Upgrade Notes ===<br />
None yet<br />
<br />
== OpenStack Orchestration (Heat) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
None yet<br />
<br />
== OpenStack Database service (Trove) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
None yet<br />
<br />
== OpenStack Documentation ==<br />
<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
None yet</div>Russellbhttps://wiki.openstack.org/w/index.php?title=ReleaseNotes/Icehouse&diff=47308ReleaseNotes/Icehouse2014-04-01T14:48:23Z<p>Russellb: /* Upgrade Notes */</p>
<hr />
<div>= OpenStack 2014.1 (Icehouse) Release Notes =<br />
<br />
== General Upgrade Notes ==<br />
<br />
tbd<br />
<br />
== OpenStack Object Storage (Swift) ==<br />
<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Compute (Nova) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== Compute Drivers ====<br />
<br />
===== Libvirt (KVM) =====<br />
<br />
* The Libvirt compute driver now supports providing modified kernel arguments to booting compute instances. The kernel arguments are retrieved from the "os_command_line" key in the image metadata as stored in Glance, if a value for the key was provided. Otherwise the default kernel arguments are used.<br />
* The Libvirt driver now supports using VirtIO SCSI (virtio-scsi) instead of VirtIO Block (virtio-blk) to provide block device access for instances. Virtio SCSI is a para-virtualized SCSI controller device designed as a future successor to VirtIO Block and aiming to provide improved scalability and performance.<br />
* The Libvirt Compute driver now supports adding a Virtio RNG device to compute instances to provide increased entropy. Virtio RNG is a paravirtual random number generation device. It allows the compute node to provide entropy to the compute instances in order to fill their entropy pool. The default entropy device used is /dev/random; however, use of a physical hardware RNG device attached to the host is also possible. The use of the Virtio RNG device is enabled using the hw_rng property in the metadata of the image used to build the instance.<br />
* The Libvirt driver now allows instances to be configured to use a video driver other than the default (cirrus). This allows the specification of different video driver models, different amounts of video RAM, and different numbers of heads. These values are configured by setting the "hw_video_model", "hw_video_vram", and "hw_video_head" properties in the image metadata. Currently supported video driver models are "vga", "cirrus", "vmvga", "xen" and "qxl".<br />
* Watchdog support has been added to the Libvirt driver. The watchdog device used is "i6300esb". It is enabled by setting the "hw_watchdog_action" property in the image properties or flavor extra specifications ("extra_specs") to a value other than "disabled". Supported "hw_watchdog_action" property values, which denote the action for the watchdog device to take in the event of an instance failure, are "poweroff", "reset", "pause", and "none".<br />
* The High Precision Event Timer (HPET) is now disabled for instances created using the Libvirt driver. The use of this option was found to lead to clock drift in Windows guests when under heavy load.<br />
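All of the image-metadata driven features above are enabled by setting properties on the image in Glance. For example (the image name and property values are illustrative; see the notes above for the supported values):<br />

```console
# Pass extra kernel arguments to instances booted from this image.
$ glance image-update --property os_command_line='console=ttyS0' my-image

# Use the qxl video model instead of the default cirrus.
$ glance image-update --property hw_video_model=qxl my-image

# Reset the instance if the i6300esb watchdog fires.
$ glance image-update --property hw_watchdog_action=reset my-image
```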
<br />
===== VMware =====<br />
<br />
* The VMware Compute drivers now support the virtual machine diagnostics call. Diagnostics can be retrieved using the "nova diagnostics INSTANCE" command, where INSTANCE is replaced by an instance name or instance identifier.<br />
* The VMware Compute drivers now support booting an instance from an ISO image.<br />
* The VMware Compute drivers now support the aging of cached images.<br />
<br />
===== XenServer =====<br />
<br />
TODO...<br />
<br />
==== API ====<br />
<br />
* In OpenStack Compute, the OS-DCF:diskConfig API attribute is no longer supported in V3 of the nova API.<br />
* The Compute API currently supports both XML and JSON formats. Support for the XML format is now deprecated and will be retired in a future release.<br />
* The Compute API now exposes a mechanism for permanently removing decommissioned compute nodes. Previously these would continue to be listed even where the compute service had been disabled and the system re-provisioned. This functionality is provided by the "ExtendedServicesDelete" API extension.<br />
* The V3 API admin_actions plugin has been separated into logically distinct plugins so that operators can enable subsets of the functionality it currently provides.<br />
* The Compute service now uses the tenant identifier instead of the tenant name when authenticating with OpenStack Networking (Neutron). This improves support for the OpenStack Identity API v3 which allows non-unique tenant names.<br />
* The Compute API now exposes the hypervisor IP address, allowing it to be retrieved by administrators using the "nova hypervisor-show" command.<br />
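The service removal and hypervisor details features above can be exercised from the nova client; for example (identifiers are placeholders, and the exact client syntax may vary with the novaclient version installed):<br />

```console
# Permanently remove a decommissioned compute service
# (find the id with `nova service-list`).
$ nova service-delete SERVICE_ID

# Show hypervisor details, including the hypervisor IP address.
$ nova hypervisor-show HYPERVISOR_ID
```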
<br />
==== Scheduler ====<br />
<br />
* The scheduler now includes an initial implementation of a caching scheduler driver. The caching scheduler uses the existing facilities for applying scheduler filters and weights but caches the list of available hosts. When a user request is passed to the caching scheduler it attempts to perform scheduling based on the list of cached hosts, with a view to improving scheduler performance.<br />
* A new scheduler filter, "AggregateImagePropertiesIsolation", has been introduced. The new filter schedules instances to hosts based on matching namespaced image properties with host aggregate properties. Hosts that do not belong to any host aggregate remain valid scheduling targets for instances based on all images. The new Compute service configuration keys "aggregate_image_properties_isolation_namespace" and "aggregate_image_properties_isolation_separator" are used to determine which image properties are examined by the filter.<br />
* Weight normalization in OpenStack Compute:<br />
** https://review.openstack.org/#/c/27160/ Weights are normalized, so there is no need to inflate multipliers artificially. The maximum weight that a weigher will put for a node is 1.0 and the minimum is 0.0. <br />
** nova.cells.weights.weight_offset.WeightOffsetWeigher introduces a new configuration option 'offset_weight_multiplier' <br />
** https://review.openstack.org/#/c/36417/ Introduces stacking flags for weighers. Negative multipliers should not be used for stacking, but the weighers remain compatible (they issue a deprecation warning message).<br />
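The normalization behaviour described above can be sketched as a simple min-max scaling (a simplified illustration, not the actual nova.scheduler code):<br />

```python
def normalize_weights(weights):
    """Scale raw weigher scores into the [0.0, 1.0] range.

    The best host receives 1.0 and the worst 0.0, so multipliers
    no longer need to be inflated to dominate other weighers.
    """
    lo, hi = min(weights), max(weights)
    if hi == lo:
        # All hosts scored equally; every normalized weight is 0.0.
        return [0.0 for _ in weights]
    return [(w - lo) / (hi - lo) for w in weights]

print(normalize_weights([200.0, 350.0, 500.0]))  # -> [0.0, 0.5, 1.0]
```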
<br />
==== Other Features ====<br />
<br />
* Notifications are now generated upon the creation and deletion of keypairs.<br />
* Notifications are now generated when a Compute host is enabled, disabled, powered on, shut down, rebooted, put into maintenance mode, or taken out of maintenance mode.<br />
* Compute services are now able to shut down gracefully by disabling the processing of new requests when a service shutdown is requested, while allowing requests already in progress to complete before terminating.<br />
* The Compute service determines what action to take when instances are found to be running that were previously marked deleted based on the value of the <tt>running_deleted_instance_action</tt> configuration key. A new <tt>shutdown</tt> value has been added. Using this new value allows administrators to optionally keep instances found in this state for diagnostics while still releasing the runtime resources.<br />
* File injection is now disabled by default in OpenStack Compute. Instead it is recommended that the ConfigDrive and metadata server facilities are used to modify guests at launch. To enable file injection modify the <tt>inject_key</tt> and <tt>inject_partition</tt> configuration keys in <tt>/etc/nova/nova.conf</tt> and restart the Compute services. The file injection mechanism is likely to be disabled in a future release.<br />
* A number of changes have been made to the expected format <tt>/etc/nova/nova.conf</tt> configuration file with a view to ensuring that all configuration groups in the file use descriptive names. A number of driver specific flags, including those for the Libvirt driver, have also been moved to their own option groups.<br />
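The shutdown value and file-injection defaults described above correspond to settings such as the following in <tt>/etc/nova/nova.conf</tt> (a sketch; the group placement follows the option-group reorganization mentioned above, so verify the names against <tt>nova.conf.sample</tt>):<br />

```ini
[DEFAULT]
# Release runtime resources but keep deleted-but-running instances
# around for diagnostics instead of reaping them immediately.
running_deleted_instance_action = shutdown

[libvirt]
# File injection is disabled by default; prefer ConfigDrive or the
# metadata server for customizing guests at launch.
inject_partition = -2
inject_key = false
```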
<br />
=== Known Issues ===<br />
* OpenStack Compute has some features that use newer API versions from other projects, but the following are the only API versions tested in Icehouse:<br />
** Keystone v2<br />
** Cinder v1<br />
** Glance v1<br />
<br />
=== Upgrade Notes ===<br />
<br />
* https://review.openstack.org/50668 - The <tt>compute_api_class</tt> configuration option has been removed.<br />
* https://review.openstack.org/#/c/54290/ - The following deprecated configuration option aliases have been removed in favor of their new names:<br />
** <tt>service_quantum_metadata_proxy</tt><br />
** <tt>quantum_metadata_proxy_shared_secret</tt><br />
** <tt>use_quantum_default_nets</tt><br />
** <tt>quantum_default_tenant_id</tt><br />
** <tt>vpn_instance_type</tt><br />
** <tt>default_instance_type</tt><br />
** <tt>quantum_url</tt><br />
** <tt>quantum_url_timeout</tt><br />
** <tt>quantum_admin_username</tt><br />
** <tt>quantum_admin_password</tt><br />
** <tt>quantum_admin_tenant_name</tt><br />
** <tt>quantum_region_name</tt><br />
** <tt>quantum_admin_auth_url</tt><br />
** <tt>quantum_api_insecure</tt><br />
** <tt>quantum_auth_strategy</tt><br />
** <tt>quantum_ovs_bridge</tt><br />
** <tt>quantum_extension_sync_interval</tt><br />
** <tt>vmwareapi_host_ip</tt><br />
** <tt>vmwareapi_host_username</tt><br />
** <tt>vmwareapi_host_password</tt><br />
** <tt>vmwareapi_cluster_name</tt><br />
** <tt>vmwareapi_task_poll_interval</tt><br />
** <tt>vmwareapi_api_retry_count</tt><br />
** <tt>vnc_port</tt><br />
** <tt>vnc_port_total</tt><br />
** <tt>use_linked_clone</tt><br />
** <tt>vmwareapi_vlan_interface</tt><br />
** <tt>vmwareapi_wsdl_loc</tt><br />
* The PowerVM driver has been removed: https://review.openstack.org/#/c/57774/<br />
* The keystone_authtoken defaults changed in nova.conf: https://review.openstack.org/#/c/62815/<br />
* libvirt lvm names changed from using instance_name_template to instance uuid (https://review.openstack.org/#/c/76968). Possible manual cleanup required if using a non default instance_name_template.<br />
* rbd disk names changed from using instance_name_template to instance uuid. Manual cleanup required of old virtual disks after the transition. (TBD find review)<br />
* Icehouse adds libguestfs as a requirement. Installing Icehouse dependencies on a system currently running Havana may cause the Havana node to begin using libguestfs and break unexpectedly. It is recommended that libvirt_inject_partition=-2 be set on Havana nodes prior to starting an upgrade of packages on the system if the nova packages will be updated last.<br />
* Creating a private flavor now adds access to the tenant automatically. This was the documented behavior in Havana, but the actual implementation in Havana and previous versions of Nova did not add the tenant automatically to private flavors.<br />
* Nova previously included a nova.conf.sample. This file was automatically generated and is no longer included directly. If you are packaging Nova and wish to include the sample config file, see etc/nova/README.nova.conf for instructions on how to generate the file at build time.<br />
* The following configuration options are marked as deprecated in this release (listed as <tt>[GROUP]/option</tt>). See <tt>nova.conf.sample</tt> for their replacements.<br />
** <tt>[DEFAULT]/rabbit_durable_queues</tt><br />
** <tt>[rpc_notifier2]/topics</tt><br />
** <tt>[DEFAULT]/log_config</tt><br />
** <tt>[DEFAULT]/logfile</tt><br />
** <tt>[DEFAULT]/logdir</tt><br />
** <tt>[DEFAULT]/base_dir_name</tt><br />
** <tt>[DEFAULT]/instance_type_extra_specs</tt><br />
** <tt>[DEFAULT]/db_backend</tt><br />
** <tt>[DEFAULT]/sql_connection</tt><br />
** <tt>[DATABASE]/sql_connection</tt><br />
** <tt>[sql]/connection</tt><br />
** <tt>[DEFAULT]/sql_idle_timeout</tt><br />
** <tt>[DATABASE]/sql_idle_timeout</tt><br />
** <tt>[sql]/idle_timeout</tt><br />
** <tt>[DEFAULT]/sql_min_pool_size</tt><br />
** <tt>[DATABASE]/sql_min_pool_size</tt><br />
** <tt>[DEFAULT]/sql_max_pool_size</tt><br />
** <tt>[DATABASE]/sql_max_pool_size</tt><br />
** <tt>[DEFAULT]/sql_max_retries</tt><br />
** <tt>[DATABASE]/sql_max_retries</tt><br />
** <tt>[DEFAULT]/sql_retry_interval</tt><br />
** <tt>[DATABASE]/reconnect_interval</tt><br />
** <tt>[DEFAULT]/sql_max_overflow</tt><br />
** <tt>[DATABASE]/sqlalchemy_max_overflow</tt><br />
** <tt>[DEFAULT]/sql_connection_debug</tt><br />
** <tt>[DEFAULT]/sql_connection_trace</tt><br />
** <tt>[DATABASE]/sqlalchemy_pool_timeout</tt><br />
** <tt>[DEFAULT]/memcache_servers</tt><br />
** <tt>[DEFAULT]/libvirt_type</tt><br />
** <tt>[DEFAULT]/libvirt_uri</tt><br />
** <tt>[DEFAULT]/libvirt_inject_password</tt><br />
** <tt>[DEFAULT]/libvirt_inject_key</tt><br />
** <tt>[DEFAULT]/libvirt_inject_partition</tt><br />
** <tt>[DEFAULT]/libvirt_vif_driver</tt><br />
** <tt>[DEFAULT]/libvirt_volume_drivers</tt><br />
** <tt>[DEFAULT]/libvirt_disk_prefix</tt><br />
** <tt>[DEFAULT]/libvirt_wait_soft_reboot_seconds</tt><br />
** <tt>[DEFAULT]/libvirt_cpu_mode</tt><br />
** <tt>[DEFAULT]/libvirt_cpu_model</tt><br />
** <tt>[DEFAULT]/libvirt_snapshots_directory</tt><br />
** <tt>[DEFAULT]/libvirt_images_type</tt><br />
** <tt>[DEFAULT]/libvirt_images_volume_group</tt><br />
** <tt>[DEFAULT]/libvirt_sparse_logical_volumes</tt><br />
** <tt>[DEFAULT]/libvirt_images_rbd_pool</tt><br />
** <tt>[DEFAULT]/libvirt_images_rbd_ceph_conf</tt><br />
** <tt>[DEFAULT]/libvirt_snapshot_compression</tt><br />
** <tt>[DEFAULT]/libvirt_use_virtio_for_bridges</tt><br />
** <tt>[DEFAULT]/libvirt_iscsi_use_multipath</tt><br />
** <tt>[DEFAULT]/libvirt_iser_use_multipath</tt><br />
** <tt>[DEFAULT]/matchmaker_ringfile</tt><br />
** <tt>[DEFAULT]/agent_timeout</tt><br />
** <tt>[DEFAULT]/agent_version_timeout</tt><br />
** <tt>[DEFAULT]/agent_resetnetwork_timeout</tt><br />
** <tt>[DEFAULT]/xenapi_agent_path</tt><br />
** <tt>[DEFAULT]/xenapi_disable_agent</tt><br />
** <tt>[DEFAULT]/xenapi_use_agent_default</tt><br />
** <tt>[DEFAULT]/xenapi_login_timeout</tt><br />
** <tt>[DEFAULT]/xenapi_connection_concurrent</tt><br />
** <tt>[DEFAULT]/xenapi_connection_url</tt><br />
** <tt>[DEFAULT]/xenapi_connection_username</tt><br />
** <tt>[DEFAULT]/xenapi_connection_password</tt><br />
** <tt>[DEFAULT]/xenapi_vhd_coalesce_poll_interval</tt><br />
** <tt>[DEFAULT]/xenapi_check_host</tt><br />
** <tt>[DEFAULT]/xenapi_vhd_coalesce_max_attempts</tt><br />
** <tt>[DEFAULT]/xenapi_sr_base_path</tt><br />
** <tt>[DEFAULT]/target_host</tt><br />
** <tt>[DEFAULT]/target_port</tt><br />
** <tt>[DEFAULT]/iqn_prefix</tt><br />
** <tt>[DEFAULT]/xenapi_remap_vbd_dev</tt><br />
** <tt>[DEFAULT]/xenapi_remap_vbd_dev_prefix</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_base_url</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_seed_chance</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_seed_duration</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_max_last_accessed</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_listen_port_start</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_listen_port_end</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_download_stall_cutoff</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_max_seeder_processes_per_host</tt><br />
** <tt>[DEFAULT]/use_join_force</tt><br />
** <tt>[DEFAULT]/xenapi_ovs_integration_bridge</tt><br />
** <tt>[DEFAULT]/cache_images</tt><br />
** <tt>[DEFAULT]/xenapi_image_compression_level</tt><br />
** <tt>[DEFAULT]/default_os_type</tt><br />
** <tt>[DEFAULT]/block_device_creation_timeout</tt><br />
** <tt>[DEFAULT]/max_kernel_ramdisk_size</tt><br />
** <tt>[DEFAULT]/sr_matching_filter</tt><br />
** <tt>[DEFAULT]/xenapi_sparse_copy</tt><br />
** <tt>[DEFAULT]/xenapi_num_vbd_unplug_retries</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_images</tt><br />
** <tt>[DEFAULT]/xenapi_ipxe_network_name</tt><br />
** <tt>[DEFAULT]/xenapi_ipxe_boot_menu_url</tt><br />
** <tt>[DEFAULT]/xenapi_ipxe_mkisofs_cmd</tt><br />
** <tt>[DEFAULT]/xenapi_running_timeout</tt><br />
** <tt>[DEFAULT]/xenapi_vif_driver</tt><br />
** <tt>[DEFAULT]/xenapi_image_upload_handler</tt><br />
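As an example of the pattern behind these deprecations, a Havana-era <tt>[DEFAULT]</tt> option such as <tt>libvirt_type</tt> now lives in a driver-specific group (the replacement shown is believed correct for Icehouse, but confirm the exact name against <tt>nova.conf.sample</tt>):<br />

```ini
# Deprecated spelling:
[DEFAULT]
libvirt_type = kvm

# Icehouse replacement:
[libvirt]
virt_type = kvm
```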
<br />
== OpenStack Image Service (Glance) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Dashboard (Horizon) ==<br />
<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Identity (Keystone) ==<br />
<br />
=== Key New Features ===<br />
<br />
* New API features<br />
** <code>/v3/OS-FEDERATION/</code> allows Keystone to consume federated authentication via [https://shibboleth.net/ Shibboleth] for multiple Identity Providers, and mapping federated attributes into OpenStack group-based role assignments (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-federation-ext.md documentation]).<br />
** <code>POST /v3/users/{user_id}/password</code> allows API users to update their own passwords (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3.md#change-user-password-post-usersuser_idpassword documentation]).<br />
** <code>GET v3/auth/token?nocatalog</code> allows API users to opt-out of receiving the service catalog when performing online token validation (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3.md#validate-token-get-authtokensnocatalog documentation]).<br />
** <code>/v3/regions</code> provides a public interface for describing multi-region deployments (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3.md#regions-v3regions documentation]).<br />
** <code>/v3/OS-SIMPLECERT/</code> now publishes the certificates used for PKI token validation (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-simple-certs-ext.md documentation]).<br />
** <code>/v3/OS-TRUST/trusts</code> is now capable of providing limited-use delegation via the <code>remaining_uses</code> attribute of [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-trust-ext.md trusts].<br />
* The assignments backend (the source of authorization data) has now been completely separated from the identity backend (the source of authentication data). This means that you can now, for example, store your deployment's identity data in LDAP and your authorization data in SQL.<br />
* KVS drivers are now capable of writing to persistent Key-Value stores such as Redis, Cassandra, or MongoDB.<br />
* Keystone's driver interfaces are now implemented as Abstract Base Classes (ABCs) to make it easier to track compatibility of custom driver implementations across releases.<br />
* Keystone's default <code>etc/policy.json</code> has been rewritten in an easier to read format.<br />
* [http://docs.openstack.org/developer/keystone/event_notifications.html Notifications] are now emitted in response to create, update and delete events on roles, groups, and trusts.<br />
* Custom extensions and driver implementations may now subscribe to internal-only event notifications, including ''disable'' events (which are only exposed externally as part of ''update'' events).<br />
* Keystone now emits [http://www.dmtf.org/standards/cadf Cloud Audit Data Federation (CADF)] event notifications in response to authentication events.<br />
* [https://review.openstack.org/#/c/50362/ Additional plugins] are provided to handle external authentication via <code>REMOTE_USER</code> with respect to single-domain versus multi-domain deployments.<br />
* <code>policy.json</code> can now perform enforcement on the target domain in a domain-aware operation using, for example, <code>%(target.{entity}.domain_id)s</code>.<br />
* The LDAP driver for the assignment backend now supports group-based role assignment operations.<br />
* Keystone now publishes token revocation ''events'' in addition to providing continued support for token revocation ''lists''. Token revocation events are designed to consume much less overhead (when compared to token revocation lists) and will enable Keystone to eliminate token persistence during the Juno release.<br />
* Deployers can now define arbitrary limits on the size of collections in API responses (for example, <code>GET /v3/users</code> might be configured to return only 100 users, rather than 10,000). Clients will be informed when truncation has occurred.<br />
* Lazy translation has been enabled, translating responses according to the requested Accept-Language header.<br />
* Keystone now emits i18n-ready log messages.<br />
* Collection filtering is now performed in the driver layer, where possible, for improved performance.<br />
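As an example of the new API surface, the self-service password change above can be called directly (the endpoint, token, and user id are placeholders; the request body follows the v3 API documentation linked above):<br />

```console
$ curl -X POST http://keystone:5000/v3/users/USER_ID/password \
    -H "X-Auth-Token: $TOKEN" \
    -H "Content-Type: application/json" \
    -d '{"user": {"original_password": "old-secret", "password": "new-secret"}}'
```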
<br />
=== Known Issues ===<br />
<br />
* [https://bugs.launchpad.net/keystone/+bug/1291157 Bug 1291157]: If using the <code>OS-FEDERATION</code> extension, deleting an Identity Provider or Protocol does ''not'' result in previously-issued tokens being revoked. This will not be fixed in the ''stable/icehouse'' branch.<br />
<br />
=== Upgrade Notes ===<br />
<br />
* The v2 API has been prepared for deprecation, but remains stable in the Icehouse release. It may be formally deprecated during the Juno release pending widespread support for the v3 API.<br />
* Backwards compatibility for <code>keystone.middleware.auth_token</code> has been removed. <code>auth_token</code> middleware module is no longer provided by Keystone itself, and must be imported from <code>keystoneclient.middleware.auth_token</code> instead.<br />
* The <code>s3_token</code> middleware module is no longer provided by Keystone itself, and must be imported from <code>keystoneclient.middleware.s3_token</code> instead. Backwards compatibility for <code>keystone.middleware.s3_token</code> will be removed in Juno.<br />
* The default token duration has been reduced from 24 hours to just 1 hour. This effectively reduces the number of tokens that must be persisted at any one time, and (for PKI deployments) reduces the overhead of the token revocation list.<br />
* <code>keystone.contrib.access.core.AccessLogMiddleware</code> has been deprecated in favor of either the eventlet debug access log or Apache httpd access log and may be removed in the K release.<br />
* <code>keystone.contrib.stats.core.StatsMiddleware</code> has been deprecated in favor of external tooling and may be removed in the K release.<br />
* <code>keystone.middleware.XmlBodyMiddleware</code> has been deprecated in favor of support for "application/json" only and may be removed in the K release.<br />
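For services still loading <tt>auth_token</tt> from the old location, the change amounts to updating the paste filter factory (an illustrative <tt>api-paste.ini</tt> fragment):<br />

```ini
[filter:authtoken]
# No longer available:
#   paste.filter_factory = keystone.middleware.auth_token:filter_factory
# Import from python-keystoneclient instead:
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
```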
<br />
== OpenStack Network Service (Neutron) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet.<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Block Storage (Cinder) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Metering (Ceilometer) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
* https://bugs.launchpad.net/ceilometer/+bug/1297528<br />
<br />
=== Upgrade Notes ===<br />
None yet<br />
<br />
== OpenStack Orchestration (Heat) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
None yet<br />
<br />
== OpenStack Database service (Trove) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
None yet<br />
<br />
== OpenStack Documentation ==<br />
<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
None yet</div>Russellb