https://wiki.openstack.org/w/api.php?action=feedcontributions&user=Asalkeld&feedformat=atomOpenStack - User contributions [en]2024-03-28T15:28:07ZUser contributionsMediaWiki 1.28.2https://wiki.openstack.org/w/index.php?title=CrossProjectLiaisons&diff=93345CrossProjectLiaisons2015-10-18T22:43:04Z<p>Asalkeld: /* Release management */</p>
<hr />
<div>Many of our cross-project teams need focused help for communicating with the other project teams. This page lists the people who have volunteered for that work.<br />
<br />
== Oslo ==<br />
<br />
There are now more projects consuming code from the Oslo incubator than we have Oslo contributors. That means we are going to need your help to make these migrations happen. We are asking for one person from each project to serve as a liaison between the project and Oslo, and to assist with integrating changes as we move code out of the incubator into libraries.<br />
<br />
* The liaison should be active in the project and familiar with the project-specific requirements for having patches accepted, but does not need to be a core reviewer or the PTL.<br />
* The liaison should be prepared to assist with writing and reviewing patches in their project as libraries are adopted, and with discussions of API changes to the libraries to make them easier to use within the project.<br />
* Liaisons should pay attention to [Oslo] tagged messages on the openstack-dev mailing list.<br />
* It is also useful for liaisons to be able to attend the Oslo team meeting ([[Meetings/Oslo]]) to participate in discussions and raise issues for real-time discussion.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Barbican || Douglas Mendizábal || redrobot<br />
|-<br />
| Ceilometer || Julien Danjou || jd__<br />
|-<br />
| Cinder || Jay Bryant || jungleboyj<br />
|-<br />
| Congress || Tim Hinrichs || thinrichs<br />
|-<br />
| Designate || || <br />
|-<br />
| Glance || Flavio Percoco || flaper87<br />
|-<br />
| Heat || Thomas Herve || therve<br />
|-<br />
| Horizon || Kirill Zaitsev || kzaitsev_<br />
|-<br />
| Ironic || Lin Tan || lintan<br />
|-<br />
| Keystone || Brant Knudson || bknudson<br />
|-<br />
| Manila || Thomas Bechtold || toabctl<br />
|-<br />
| Murano || Kirill Zaitsev || kzaitsev_<br />
|-<br />
| Neutron || Ihar Hrachyshka || ihrachyshka<br />
|-<br />
| Nova || Victor Stinner|| haypo<br />
|-<br />
| [[Octavia]] || Michael Johnson || johnsom<br />
|-<br />
| Sahara || Sergey Reshetnyak || sreshetnyak<br />
|-<br />
| Swift || || <br />
|-<br />
| TripleO || Ben Nemec || bnemec<br />
|-<br />
| Trove || Amrith Kumar || amrith<br />
|-<br />
| Zaqar || Flavio Percoco || flaper87<br />
|-<br />
|}<br />
<br />
== Release management ==<br />
<br />
The Release Management Liaison is responsible for communication with the Release Management team, attending the weekly 1:1 syncs in #openstack-relmgr-office, keeping milestone plans up to date, and signing off milestone and release tags. That role has [[PTL_Guide#Interactions_with_the_Release_team|traditionally been filled by the PTL]], but they may now delegate it if they wish.<br />
<br />
* By default, the liaison will be the PTL.<br />
* The Release Management Liaison is considered a contributor to the Release Cycle Management Program and is therefore allowed to vote in the election of its PTL.<br />
* The liaison may further delegate work to other subject matter experts.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Nova || John Garbutt || johnthetubaguy<br />
|-<br />
| Cinder || Sean McGinnis || smcginnis<br />
|-<br />
| Swift || John Dickinson || notmyname<br />
|-<br />
| Neutron || Kyle Mestery || mestery<br />
|-<br />
| Keystone || Morgan Fainberg || morganfainberg<br />
|-<br />
| Horizon || David Lyle || david-lyle<br />
|-<br />
| Glance || Flavio Percoco || flaper87<br />
|-<br />
| Ceilometer || gordon chung || gordc<br />
|-<br />
| Heat || Sergey Kraynev || skraynev<br />
|-<br />
| Oslo || Davanum Srinivas || dims<br />
|-<br />
| Trove || Nikhil Manchanda || SlickNik<br />
|-<br />
| Sahara || Sergey Lukjanov || SergeyLukjanov<br />
|-<br />
| Ironic || Jim Rollenhagen || jroll<br />
|-<br />
| Zaqar || || <br />
|-<br />
| Designate || Graham Hayes || mugsie<br />
|-<br />
| Barbican || Douglas Mendizábal || redrobot<br />
|-<br />
| Manila || Ben Swartzlander || bswartz<br />
|-<br />
| Murano || Serg Melikyan || sergmelikyan<br />
|-<br />
| Congress || Tim Hinrichs || thinrichs<br />
|}<br />
<br />
== QA ==<br />
<br />
There are now more projects tested by Tempest and Grenade, or deployable by Devstack, than we have QA contributors. That means we are going to need your help to keep on top of everything. We are asking for one person from each project to serve as a liaison between the project and QA, and to assist with integrating changes as we move forward.<br />
<br />
The liaison should be a core reviewer for the project, but does not need to be the PTL. The liaison should be prepared to assist with writing and reviewing patches that interact with their project, and with discussions of changes to the QA projects to make them easier to use within the project.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Nova || Matt Riedemann || mriedem<br />
|-<br />
| Cinder || || <br />
|-<br />
| Swift || || <br />
|-<br />
| Neutron || Salvatore Orlando || salv-orlando<br />
|-<br />
| Keystone || David Stanek || dstanek<br />
|-<br />
| Horizon || || <br />
|-<br />
| Glance || Nikhil Komawar || nikhil_k<br />
|-<br />
| Ceilometer || Chris Dent || cdent<br />
|-<br />
| Heat || Steve Baker || stevebaker<br />
|-<br />
| Oslo || Davanum Srinivas || dims <br />
|-<br />
| Trove || Nikhil Manchanda and Peter Stachowski || SlickNik and peterstac<br />
|-<br />
| Sahara || Luigi Toscano and Sergey Lukjanov || tosky and SergeyLukjanov<br />
|-<br />
| Ironic || John Villalovos || jlvillal<br />
|-<br />
| Zaqar || || <br />
|-<br />
| Barbican || Steve Heyman || hockeynut <br />
|-<br />
| Manila || Valeriy Ponomaryov || vponomaryov<br />
|}<br />
<br />
== Documentation ==<br />
<br />
The OpenStack Documentation is centralized on docs.openstack.org but often there's a need for specialty information when reviewing patches or triaging doc bugs. A doc liaison should be available to triage doc bugs when the docs team members don't know enough to triage accurately, and be added to doc reviews that affect your project. You'd be notified through email when you're added either to a doc bug or a doc review. We also would appreciate attendance at the [https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting weekly doc team meeting]. We meet in #openstack-meeting every Wednesday, at alternating times for different timezones:<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Nova || Joe Gordon or Michael Still || jog0 or mikal<br />
|-<br />
| Cinder || Mike Perez || thingee <br />
|-<br />
| Swift || || <br />
|-<br />
| Neutron || Edgar Magana || emagana <br />
|-<br />
| Keystone || Steve Martinelli || stevemar<br />
|-<br />
| Horizon || Rob Cresswell || robcresswell<br />
|-<br />
| Glance || Brian Rosmaita || rosmaita<br />
|-<br />
| Ceilometer || Ildiko Vancsa || ildikov<br />
|-<br />
| Heat || Randall Burt || randallburt<br />
|-<br />
| Oslo || Doug Hellmann || dhellmann<br />
|-<br />
| Trove || Laurel Michaels, Matt Griffin || laurelm mattgriffin<br />
|-<br />
| Sahara || Chad Roberts || crobertsrh<br />
|-<br />
| Ironic || Mitsuhiro SHIGEMATSU || pshige<br />
|-<br />
| Zaqar || || <br />
|-<br />
| Barbican || Constanze Kratel || constanze <br />
|-<br />
| Murano || Ekaterina Chernova || katyafervent <br />
|-<br />
| Manila || || <br />
|}<br />
<br />
== Stable Branch ==<br />
<br />
The Stable Branch Liaison is responsible for making sure backports are proposed for critical issues in their project, and for making sure proposed backports are reviewed. They are also the contact point for stable branch release managers around point release times.<br />
<br />
* By default, the liaison will be the PTL.<br />
* The Stable Branch Liaison is considered a contributor to the Release Cycle Management Program and is therefore allowed to vote in the election of its PTL.<br />
* The liaison may further delegate work to other subject matter experts.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Ceilometer || Eoghan Glynn || eglynn<br />
|-<br />
| Cinder || Jay Bryant || jungleboyj<br />
|-<br />
| Glance || Erno Kuvaja || jokke_<br />
|-<br />
| Heat || Zane Bitter || zaneb<br />
|-<br />
| Horizon || Matthias Runge || mrunge <br />
|-<br />
| Ironic || Adam Gandelman || adam_g<br />
|-<br />
| Keystone || Dolph Mathews || dolphm<br />
|-<br />
| Murano || Kirill Zaitsev || kzaitsev_<br />
|-<br />
| Neutron || Ihar Hrachyshka || ihrachys<br />
|-<br />
| Nova || || <br />
|-<br />
| Sahara || Sergey Lukjanov || SergeyLukjanov<br />
|-<br />
| Swift || || <br />
|-<br />
| Trove || Amrith Kumar || amrith<br />
|-<br />
|}<br />
<br />
== Vulnerability management ==<br />
<br />
The [[Vulnerability Management]] Team needs domain specialists to help assess the impact of reported issues, coordinate the development of patches, review proposed patches, and propose backports. The liaison should be familiar with the [[Vulnerability Management]] process and embargo rules, and have a good grasp of security issues in software design.<br />
<br />
* The liaison should be a core reviewer for the project, but does not need to be the PTL.<br />
* By default, the liaison will be the PTL.<br />
* The liaison is the first line of contact for the Vulnerability Management team members.<br />
* The liaison is considered a contributor to the Release Cycle Management Program and is therefore allowed to vote in the election of its PTL.<br />
* The liaison may further delegate work to other subject matter experts.<br />
* The liaison maintains the members of the $PROJECT-coresec team in Launchpad (which can be given access to embargoed vulnerabilities)<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Barbican || Douglas Mendizábal or Charles Neill || redrobot / ccneill<br />
|-<br />
| Ceilometer || Lianhao Lu or Gordon Chung || llu/gordc <br />
|-<br />
| Cinder || || <br />
|-<br />
| Glance || Stuart McLaren or Nikhil Komawar || mclaren or nikhil_k <br />
|-<br />
| Heat || Steve Hardy || shardy<br />
|-<br />
| Horizon || Lin Hua Cheng || lhcheng <br />
|-<br />
| Ironic || Jim Rollenhagen || jroll<br />
|-<br />
| Keystone || Dolph Mathews || dolphm<br />
|-<br />
| Neutron || Salvatore Orlando || salv-orlando<br />
|-<br />
| Nova || Michael Still || mikal<br />
|-<br />
| Sahara || Michael McCune or Sergey Lukjanov || elmiko or SergeyLukjanov<br />
|-<br />
| Swift || || <br />
|-<br />
| Trove || Nikhil Manchanda || SlickNik <br />
|-<br />
|}<br />
<br />
== API Working Group ==<br />
<br />
The [[API_Working_Group|API Working Group]] seeks API subject matter experts for each project to communicate plans for API updates, review API guidelines with their project's view in mind, and review the API Working Group guidelines as they are drafted. The liaison should be familiar with the project's REST API design and future planning for changes to it.<br />
<br />
The members of the [http://specs.openstack.org/openstack/api-wg/liaisons.html API Working Group Cross-Project Liaisons] are maintained in our repo. If you want to read the entire list of CPLs or add/remove yourself from the list, you'll need to update the [http://git.openstack.org/cgit/openstack/api-wg/tree/doc/source/liaisons.json liaisons.json] file. If you don't want to make the update yourself, please ask in #openstack-api on IRC and someone can make the change for you.<br />
<br />
== Logging Working Group ==<br />
<br />
The [[LogWorkingGroup|Log Working Group]] seeks experts for each project to assist with making the logging in projects match the new [http://specs.openstack.org/openstack/openstack-specs/specs/log-guidelines.html Logging Guidelines].<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Glance || Erno Kuvaja || jokke_<br />
|-<br />
| Oslo || Doug Hellmann || dhellmann<br />
|-<br />
| Nova || John Garbutt || johnthetubaguy<br />
|-<br />
| Murano || Nikolay Starodubtsev || Nikolay_St<br />
|}<br />
<br />
== Infra ==<br />
<br />
These are the project-specific groups of people that Infra will look to for ACKs on changes to that project's test configuration. Changes to project-config and devstack-gate should be +1'd by these groups when they are related to their project. Note that in an emergency this may not always be possible and Infra will ask for forgiveness, but generally we should look for these +1s.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Glance || Flavio Percoco, Nikhil Komawar|| flaper87, nikhil_k<br />
<br />
|-<br />
| Neutron || Kyle Mestery, Armando Migliaccio, Doug Wiegley|| mestery, armax, dougwig<br />
|-<br />
| Documentation || Andreas Jaeger|| AJaeger<br />
|}<br />
<br />
== Product Working Group ==<br />
The product working group consists of product managers, technologists, and operators from a diverse set of organizations. The group is working to aggregate user stories from the market-focused teams (Enterprise, Telco, etc.) and cross-project functional teams (e.g. logging, upgrades, etc.), partner with the development community on resourcing, and help gather data to generate a multi-release roadmap. Most of the user stories being tracked by this team consist of items that can span multiple releases and usually have cross-project dependencies.<br />
<br />
More information about the team can be found on the [[ProductTeam|Product WG wiki]].<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Ceilometer || Krish Ragurham || <br />
|-<br />
| Cinder || Shamail Tahir || shamail<br />
|-<br />
| Glance || Nate Ziemann || nate_zman<br />
|-<br />
| Horizon || Carol Barrett || barrett1<br />
|-<br />
| Keystone || Sheena Gregson || <br />
|-<br />
| Kolla || Carol Barrett || barrett1<br />
|-<br />
| Neutron || Mike Cohen, Duane DeCapite || DuaneDeC7<br />
|-<br />
| Manila || Pete Chadwick || <br />
|-<br />
| Nova || Hugh Blemings || hughhalf <br />
|-<br />
| OSClient || Megan Rossetti || <br />
|-<br />
| Swift || Phil Willains || philipw<br />
|-<br />
|}<br />
<br />
== Inter-project Liaisons ==<br />
<br />
In some cases, it is useful to have liaisons between projects. [http://lists.openstack.org/pipermail/openstack-dev/2015-April/062327.html For example, it is useful for the Nova and Neutron projects to have liaisons, because the projects have complex interactions and dependencies.] Ideally, a cross-project effort should have two members, one from each project, to facilitate communication and knowledge transfer.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Projects !! Name !! IRC Handle !! Role<br />
|-<br />
| Nova / Neutron || || ||<br />
|-<br />
| || Sean M. Collins || sc68cal || Neutron liaison for Nova<br />
|-<br />
| || Brent Eagles || beagles || Nova liaison for Neutron<br />
|-<br />
| Nova / Glance || || ||<br />
|-<br />
| || Flavio Percoco, Mike Fedosin || flaper87, mfedosin || Glance liaison for Nova<br />
|-<br />
| || Jay Pipes || jaypipes || Nova liaison for Glance<br />
|-<br />
| Nova / Cinder || || ||<br />
|-<br />
| || Scott DAngelo || scottda || Cinder liaison for Nova<br />
|-<br />
| || Matt Riedemann || mriedem || Nova liaison for Cinder<br />
|-<br />
| Nova / Ironic || || ||<br />
|-<br />
| || John Villalovos || jlvillal || Ironic liaison for Nova<br />
|-<br />
| || Michael Davies || mrda || Ironic liaison for Nova<br />
|-<br />
| Neutron / Ironic || || ||<br />
|-<br />
| || Sukhdev Kapur || sukhdev || Neutron liaison for Ironic<br />
|-<br />
| || Mitsuhiro SHIGEMATSU and Jim Rollenhagen || pshige and jroll || Ironic liaison for Neutron<br />
|-<br />
| Murano / Glance || || ||<br />
|-<br />
| || Alexander Tivelkov || ativelkov || Glance liaison for Murano, Murano liaison for Glance<br />
|-<br />
| Horizon / i18n || || ||<br />
|-<br />
| || Doug Fish || doug-fish || Horizon liaison for i18n<br />
|}<br />
<br />
=== Etherpads ===<br />
<br />
The following is a list of etherpads that are used for inter-project liaisons, and are continuously updated.<br />
<br />
Nova - Neutron: https://etherpad.openstack.org/p/nova-neutron</div>Asalkeldhttps://wiki.openstack.org/w/index.php?title=Design_Summit/Liberty/Etherpads&diff=80822Design Summit/Liberty/Etherpads2015-05-11T04:30:51Z<p>Asalkeld: /* Cross-Project workshops */</p>
<hr />
<div>[[Category:Summit]]<br />
[[Category:Liberty]]<br />
[[Category:Etherpad]]<br />
<br />
<div style="column-count:3;-moz-column-count:3;-webkit-column-count:3"><br />
__TOC__<br />
</div><br />
==Cross-Project workshops==<br />
Tuesday:<br />
<br />
* 11:15 - 11:55<br />
** [https://etherpad.openstack.org/p/liberty-cross-project-python3 Moving apps to Python 3]<br />
** [https://etherpad.openstack.org/p/liberty-cross-project-user-notifications Async status updates]<br />
* 12:05 - 12:45<br />
** Improving UX across all projects<br />
** Functional Testing Show & Tell<br />
* 2:00 - 2:40<br />
** OpenStack SDK<br />
** Modern JavaScript<br />
* 2:50 - 3:30<br />
** In-team scaling<br />
** Service Catalog Standardization<br />
* 3:40 - 4:20<br />
** API Working Group<br />
** Nova & Neutron network migration<br />
* 4:40 - 5:20<br />
** OpenStack release model(s)<br />
** Unified Policy File<br />
* 5:30 - 6:10<br />
** OpenStack Documentation<br />
** [https://etherpad.openstack.org/p/liberty-cross-project-managing-concurrency Managing concurrency]<br />
<br />
==Barbican==<br />
==Ceilometer==<br />
Wednesday:<br />
* 0900 - 0940: [https://etherpad.openstack.org/p/ceilo-multi-identity componentisation / multi identity]<br />
* 0950 - 1030: [https://etherpad.openstack.org/p/event_alarm event alarms]<br />
* 1100 - 1140: [https://etherpad.openstack.org/p/liberty-ceilometer-pipeline-config pipeline configuration]<br />
* 1150 - 1230: [https://etherpad.openstack.org/p/ceilo-declarative-notifications declarative notification meters]<br />
* 1440 - 1520: [https://etherpad.openstack.org/p/YVR-ops-meetup ops followup]<br />
* 1530 - 1610: [https://etherpad.openstack.org/p/ceilo-multi-identity componentization carry over]<br />
<br />
<br/>Thursday:<br />
* 0900 - 0940: [https://etherpad.openstack.org/p/liberty-ceilometer-meter-deprecation meter deprecation]<br />
* 0950 - 1030: [https://etherpad.openstack.org/p/liberty-ceilometer-meter-event samples/events integration]<br />
* 1100 - 1140: [https://etherpad.openstack.org/p/liberty-ceilometer-versioned-objects versioned objects]<br />
* 1150 - 1230: ops followup<br />
<br />
<br/>Friday:<br />
* 0900 - 1200: [https://etherpad.openstack.org/p/liberty-ceilometer-contributors-meetup contributor meetup]<br />
<br />
==Cinder==<br />
==Designate==<br />
==Documentation==<br />
==Glance==<br />
==Heat==<br />
==Horizon==<br />
==Infrastructure==<br />
==Ironic==<br />
==Keystone==<br />
==Manila==<br />
==Neutron==<br />
=== Tue ===<br />
=== Wed ===<br />
* 09:00 - 09:40: [https://etherpad.openstack.org/p/YVR-neutron-liberty-development Neutron Liberty Development]<br />
* 09:50 - 10:30: [https://etherpad.openstack.org/p/YVR-neutron-use-case-discussion Neutron Use Case Discussion]<br />
* 11:00 - 11:40: [https://etherpad.openstack.org/p/YVR-neutron-does-openstack-need-neutron Does OpenStack Need Neutron]<br />
* 11:50 - 12:30: Neutron Lightning Talks (no etherpad)<br />
* 13:50 - 14:30: [https://etherpad.openstack.org/p/YVR-neutron-octavia Octavia]<br />
* 14:40 - 15:20: [https://etherpad.openstack.org/p/YVR-neutron-opendaylight OpenDaylight]<br />
* 15:30 - 16:10: [https://etherpad.openstack.org/p/YVR-neutron-ovn OVN]<br />
* 16:30 - 17:10: [https://etherpad.openstack.org/p/YVR-neutron-ironic Ironic and Neutron integration]<br />
* 17:20 - 18:00: [https://etherpad.openstack.org/p/YVR-neutron-get-me-a-network Get Me a Network!]<br />
<br />
=== Thu ===<br />
* 09:00 - 09:40: [https://etherpad.openstack.org/p/YVR-neutron-lbaas-use-cases Neutron LBaaS Use Cases]<br />
* 09:50 - 10:30: [https://etherpad.openstack.org/p/YVR-neutron-l3 Neutron L3]<br />
* 11:00 - 11:40: [https://etherpad.openstack.org/p/YVR-neutron-qos QoS]<br />
* 11:50 - 12:30: [https://etherpad.openstack.org/p/YVR-neutron-third-party-ci-liberty Third Party CI in Liberty and Beyond]<br />
* 13:30 - 14:10: [https://etherpad.openstack.org/p/YVR-neutron-testing-in-liberty Testing In Liberty]<br />
* 14:20 - 15:00: [https://etherpad.openstack.org/p/YVR-neutron-RBAC Neutron RBAC]<br />
* 15:10 - 15:50: [https://etherpad.openstack.org/p/YVR-neutron-sg-fwaas-future-direction SG and FWaaS Future Directions]<br />
* 16:10 - 16:50: [https://etherpad.openstack.org/p/YVR-neutron-nfv-enhancements Neutron NFV Enhancements]<br />
=== Fri ===<br />
* 09:00-12:20: [https://etherpad.openstack.org/p/YVR-neutron-contributor-meetup Contributor Meetup]<br />
<br />
==Nova==<br />
<br />
'''Wednesday:'''<br />
<br />
* 09:00 - 09:40: [https://etherpad.openstack.org/p/YVR-nova-scheduler-in-liberty Scheduler in Liberty]<br />
* 09:50 - 10:30: [https://etherpad.openstack.org/p/YVR-nova-scalling-out-scheduler-for-cells Scaling out scheduler for cells ]<br />
<br />
* 11:00 - 11:40: [https://etherpad.openstack.org/p/YVR-nova-cells-v2 Cells v2]<br />
* 11:50 - 12:30: [https://etherpad.openstack.org/p/YVR-nova-resource-tracker Resource Tracker, Clustered Hypervisors and NFV]<br />
<br />
* 13:50 - 14:30: [https://etherpad.openstack.org/p/YVR-nova-spec-blueprint-unconference Nova Spec/Blueprint Unconference]<br />
* 14:40 - 15:20: [https://etherpad.openstack.org/p/YVR-nova-database-internals Database (part 1)]<br />
* 15:30 - 16:10: [https://etherpad.openstack.org/p/YVR-nova-instance-ha-evacuate-resize Dealing with compute host failure: Instance HA, Evacuate, Resize]<br />
<br />
* 16:30 - 17:10: [https://etherpad.openstack.org/p/YVR-nova-functional-testing-feature-classification Functional Testing and Feature Classification]<br />
* 17:20 - 18:00: [https://etherpad.openstack.org/p/YVR-nova-api-2.1-in-liberty Nova API v2.1 in Liberty]<br />
<br />
<br /><br />
'''Thursday:'''<br />
<br />
* 09:00 - 09:40: [https://etherpad.openstack.org/p/YVR-nova-api-2.0-3rd-party Future of Nova API v2.0 and 3rd Party APIs]<br />
* 09:50 - 10:30: [https://etherpad.openstack.org/p/YVR-nova-quotas-and-database Quotas and Database (part 2)]<br />
<br />
* 11:00 - 11:40: [https://etherpad.openstack.org/p/YVR-nova-flavors-and-image-properties Flavors and Image Properties]<br />
* 11:50 - 12:30: [https://etherpad.openstack.org/p/YVR-nova-error-handling Error Handling]<br />
<br />
* 13:30 - 14:10: [https://etherpad.openstack.org/p/YVR-nova-spec-blueprint-unconference Nova Spec/Blueprint Unconference]<br />
* 14:20 - 15:00: [https://etherpad.openstack.org/p/YVR-nova-liberty-priorities Liberty Priorities (part 1)]<br />
* 15:10 - 15:50: [https://etherpad.openstack.org/p/YVR-nova-liberty-priorities Liberty Priorities (part 2)]<br />
<br />
* 16:10 - 16:50: [https://etherpad.openstack.org/p/YVR-nova-liberty-process Liberty Process and Scaling out Reviews]<br />
* 17:00 - 17:40: [https://etherpad.openstack.org/p/YVR-nova-network Future of Nova's networking and nova-network]<br />
<br />
<br /><br />
'''Friday:'''<br />
<br />
* 09:00 - 12:20 and 13:20 - 16:40: [https://etherpad.openstack.org/p/YVR-nova-contributor-meetup Nova Contributor Meetup]<br />
<br />
==Oslo==<br />
<br />
'''Wednesday:'''<br />
<br />
<br />
* 09:00 - 09:40: [https://etherpad.openstack.org/p/YVR-oslo-versioned-objects-intro (F) Get to know your objects and learn how to version them (an introduction to oslo.versionedobjects)!]<br />
* 09:50 - 10:30: [https://etherpad.openstack.org/p/YVR-oslo-taskflow-plans (F) Give me liberty, or give me taskflow (come learn about taskflow liberty plans)!]<br />
* 11:00 - 11:40: [https://etherpad.openstack.org/p/YVR-oslo-rootwrap-plans (W) Give me liberty, or give me wraps (the future of oslo.rootwrap) ]<br />
* 11:50 - 12:30: [https://etherpad.openstack.org/p/YVR-oslo-functional-testing (W) The cost of liberty is less than the price of functional testing]<br />
* 13:50 - 14:30: [https://etherpad.openstack.org/p/YVR-oslo-graduation-schedule (W) Give me more oslo in liberty or else! Schedule & new libraries]<br />
* 14:40 - 15:20: [https://etherpad.openstack.org/p/YVR-oslo-messaging-zmq-status (F) State of zmq in oslo.messaging]<br />
* 15:30 - 16:10: [https://etherpad.openstack.org/p/YVR-oslo-optional-dependencies (W) Emancipate/liberate your optional dependencies - Optional Dependencies]<br />
* 16:30 - 17:10: [https://etherpad.openstack.org/p/YVR-oslo-release-process-review (W) Reviewing our release processes]<br />
* 17:20 - 18:00: [https://etherpad.openstack.org/p/YVR-oslo-config-filter (W) Configuration Filters in oslo.config]<br />
<br />
<br /><br />
'''Thursday:'''<br />
<br />
* 09:50 - 10:30: [https://etherpad.openstack.org/p/YVR-oslo-strategy-discussion (F) The oslo liberty proclamation (and associated strategy discussion)]<br />
* 11:00 - 11:40: [https://etherpad.openstack.org/p/YVR-oslo-config-plans (F) Enfranchise oslo.config, let's discuss alternative data sources in oslo.config]<br />
* 11:50 - 12:30: [https://etherpad.openstack.org/p/YVR-oslo-log-plans (W) Life, Liberty, and the pursuit of oslo.log changes]<br />
* 13:30 - 14:10: [https://etherpad.openstack.org/p/YVR-oslo-versioned-objects-deep-dive (W) Deep dive on oslo.versionedobjects, bring your wet suits.]<br />
* 14:20 - 15:00: [https://etherpad.openstack.org/p/YVR-oslo-db-plans (F) For a people who are free, and who mean to remain so, a well organized and armed 'oslo.db' is their best security]<br />
* 15:10 - 15:50: [https://etherpad.openstack.org/p/YVR-oslo-asyncio (F) Event loops, coroutines, yield from, futures, a discussion on asyncio (and triollus?)]<br />
* 16:10 - 16:50: [https://etherpad.openstack.org/p/YVR-oslo-messaging-plans (F) Ping pong, oslo.messaging plans for liberty.]<br />
* 17:00 - 17:40: [https://etherpad.openstack.org/p/YVR-oslo-tech-debt-deprecation (F) How to clean up your tech-debt; let's discuss best practices on how to deprecate things in oslo libraries]<br />
<br />
<br /><br />
<br />
==QA==<br />
<br />
=== Wed. ===<br />
<br />
* 1150-1230 - Testing outside the gate<br />
* 1350-1430 - Devstack Roadmap<br />
* 1440-1520 - Work Session: Tempest service clients<br />
* 1630-1710 - QA in the Big Tent<br />
<br />
=== Thurs. ===<br />
* 0900-0940 - Work Session: Idempotent ID<br />
* 1330-1410 - Work Session: Tempest as System Program<br />
* 1420-1500 - Work Session: Tempest CLI<br />
* 1610-1650 - Tempest Scope Revisited<br />
* 1700-1740 - Liberty Priorities<br />
<br />
==Release Management==<br />
==Sahara==<br />
==Swift==<br />
==TripleO==<br />
==Trove==<br />
==Zaqar==<br />
==Other Projects==<br />
==Event intro/closure==<br />
==Ops==<br />
Tuesday:<br />
* https://etherpad.openstack.org/p/YVR-ops-101<br />
* https://etherpad.openstack.org/p/YVR-ops-federation<br />
* https://etherpad.openstack.org/p/YVR-ops-rabbitmq<br />
* https://etherpad.openstack.org/p/YVR-ops-logging<br />
* https://etherpad.openstack.org/p/YVR-ops-arch-show-tell<br />
* https://etherpad.openstack.org/p/YVR-ops-ceilometer<br />
* https://etherpad.openstack.org/p/YVR-ops-billing<br />
* https://etherpad.openstack.org/p/YVR-ops-cinder<br />
* https://etherpad.openstack.org/p/YVR-ops-legacy-apps<br />
* https://etherpad.openstack.org/p/YVR-ops-user-committee<br />
* https://etherpad.openstack.org/p/YVR-ops-hypervisor-tuning<br />
* https://etherpad.openstack.org/p/YVR-ops-security<br />
* https://etherpad.openstack.org/p/YVR-ops-deployment<br />
* https://etherpad.openstack.org/p/YVR-ops-database<br />
* https://etherpad.openstack.org/p/YVR-ops-evangelism<br />
* https://etherpad.openstack.org/p/YVR-ops-multi-site<br />
* https://etherpad.openstack.org/p/YVR-ops-nova<br />
* https://etherpad.openstack.org/p/YVR-ops-customer-onboarding<br />
* https://etherpad.openstack.org/p/YVR-ops-containers<br />
* https://etherpad.openstack.org/p/YVR-ops-neutron<br />
<br />
Wednesday:<br />
* https://etherpad.openstack.org/p/YVR-ops-telco<br />
* https://etherpad.openstack.org/p/YVR-ops-puppet<br />
* https://etherpad.openstack.org/p/YVR-ops-chef<br />
* https://etherpad.openstack.org/p/YVR-ops-hpc<br />
* https://etherpad.openstack.org/p/YVR-ops-tools<br />
* https://etherpad.openstack.org/p/YVR-ops-ansible<br />
* https://etherpad.openstack.org/p/YVR-ops-ceph<br />
* https://etherpad.openstack.org/p/YVR-ops-tags<br />
* https://etherpad.openstack.org/p/YVR-ops-large-deployments<br />
* https://etherpad.openstack.org/p/YVR-ops-burning-issues<br />
* https://etherpad.openstack.org/p/YVR-ops-docs<br />
* https://etherpad.openstack.org/p/YVR-ops-tech-choices<br />
* https://etherpad.openstack.org/p/YVR-ops-cmdb<br />
* https://etherpad.openstack.org/p/YVR-ops-data-plane-transitions<br />
* https://etherpad.openstack.org/p/YVR-ops-upgrades<br />
* https://etherpad.openstack.org/p/YVR-ops-packaging<br />
* https://etherpad.openstack.org/p/YVR-ops-nova-network</div>Asalkeldhttps://wiki.openstack.org/w/index.php?title=ReleaseNotes/Kilo&diff=78654ReleaseNotes/Kilo2015-04-30T07:34:24Z<p>Asalkeld: /* Key New Features */</p>
<hr />
<div>= OpenStack 2015.1.0 (Kilo) Release Notes =<br />
<br />
<br />
{| style="color:#000000; border:solid 3px #A8A8A8; padding:2em; margin:2em 0; background-color:#FFFFFF; vertical-align:middle;"<br />
| style="padding:1em" | The Kilo release of OpenStack is dedicated to the loving memory of '''Chris Yeoh''', who left his family and us way too soon.<br />
|}<br />
<br />
<br />
<div style="column-count:3;-moz-column-count:3;-webkit-column-count:3"><br />
__TOC__<br />
</div><br />
<br />
<br />
== General Upgrade Notes ==<br />
<br />
* TBD<br />
<br />
== OpenStack Object Storage (Swift) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== Erasure Code (beta) ====<br />
Swift now supports an erasure-code (EC) storage policy type. This allows deployers to achieve very high durability with less raw capacity than is used in replicated storage. However, EC requires more CPU and network resources, so it is not a good fit for every use case. EC is great for storing large, infrequently accessed data in a single region.<br />
<br />
Swift's implementation of erasure codes is meant to be transparent to end users. There is no API difference between replicated storage and EC storage.<br />
<br />
To support erasure codes, Swift now depends on PyECLib and liberasurecode. liberasurecode is a pluggable library that allows for the actual EC algorithm to be implemented in a library of your choosing.<br />
<br />
Full docs are at http://swift.openstack.org/overview_erasure_code.html<br />
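As a rough illustration, an EC policy is declared in swift.conf alongside the replicated default. The fragment below is a hypothetical sketch: the policy names and fragment counts are illustrative, and the ec_type backend names available depend on how PyECLib and liberasurecode were built on your system, so consult the docs linked above before deploying.<br />

```ini
# Hypothetical swift.conf fragment: policy 0 stays replicated,
# policy 1 uses erasure coding via PyECLib/liberasurecode.
[storage-policy:0]
name = gold
default = yes

[storage-policy:1]
name = ec-policy
policy_type = erasure_coding
ec_type = liberasurecode_rs_vand    # backend name depends on your install
ec_num_data_fragments = 10          # each object split into 10 data fragments
ec_num_parity_fragments = 4         # plus 4 parity fragments
```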
<br />
==== Composite tokens ====<br />
Composite tokens allow other OpenStack services to store data in Swift on behalf of a client so that neither the client nor the service can update the data without both parties' consent.<br />
<br />
An example of this is a user requesting that Nova save a snapshot of a VM. Nova passes the request to Glance, and Glance writes the image to a Swift container as a set of objects. In this case, the user cannot modify the snapshot without also having a valid token from the service, nor can the service update the data without a valid token from the user. But the data is still stored in the user's account in Swift, which makes accounting simpler.<br />
<br />
Full docs are at http://swift.openstack.org/overview_backing_store.html<br />
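The mechanics can be sketched in a few lines. This is an illustrative sketch rather than real Swift client code: the helper name and token values are made up, but the two header names are the ones the service-token machinery uses, and Swift validates each token independently.<br />

```python
# Hypothetical sketch of a composite-token request: the service sends
# the user's token plus its own, so Swift can require consent from both.

def backing_store_headers(user_token, service_token):
    # Neither header alone is sufficient; the backing-store container is
    # configured so that access needs a valid token from each party.
    return {
        "X-Auth-Token": user_token,        # the end user's token
        "X-Service-Token": service_token,  # the acting service's token
    }

headers = backing_store_headers("user-token", "service-token")
```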
<br />
==== Data placement updates for smaller, unbalanceable clusters ====<br />
Swift's data placement now accounts for device weight. This allows operators to gradually add new zones and regions without immediately causing a large amount of data to be moved. Also, if a cluster is unbalanced (e.g. a two-zone cluster where one zone has twice the capacity of the other), Swift will more efficiently use the available space and warn when replicas are placed without enough dispersion in the cluster.<br />
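As a toy illustration of weight-aware placement (a sketch only, not Swift's actual ring builder, which also handles zones, regions, and dispersion), partitions can be assigned in proportion to device weight, so a device added at a low weight initially attracts little data:<br />

```python
from collections import Counter

def place_partitions(num_partitions, device_weights):
    """Assign partitions to devices in proportion to their weights.

    Toy sketch of weight-aware placement; the real ring builder also
    accounts for zones, regions, and replica dispersion.
    """
    total = sum(device_weights.values())
    counts = {dev: int(num_partitions * w / total)
              for dev, w in device_weights.items()}
    # Hand any leftover partitions to the heaviest devices first.
    leftover = num_partitions - sum(counts.values())
    for dev, _ in sorted(device_weights.items(), key=lambda kv: -kv[1])[:leftover]:
        counts[dev] += 1
    return counts

# A device added at a fraction of the others' weight takes only a
# proportional share, so the initial rebalance moves little data.
print(place_partitions(100, {"d1": 100, "d2": 100, "d3": 20}))
```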
<br />
==== Global cluster replication improvements ====<br />
Replication between regions will now only move one replica per replication run. This gives the remote region a chance to replicate internally, and thus avoids moving more data over the WAN.<br />
<br />
<br />
=== Known Issues ===<br />
* As a beta release, EC support is nearly feature-complete, but it lacks support for some features (such as multi-range reads) and has not had a full performance characterization. This feature relies on ssync for durability. Deployers are urged to do extensive testing and not deploy production data using an erasure code storage policy.<br />
<br />
=== Upgrade Notes ===<br />
As always, you can upgrade to this version of Swift with no end-user downtime.<br />
<br />
* In order to support erasure codes, Swift has a new dependency on PyECLib (and liberasurecode, transitively). Also, the minimum required version of eventlet has been raised.<br />
<br />
<br />
<br />
<br />
<br />
== OpenStack Compute (Nova) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== API v2.1 ====<br />
<br />
* We have the first release of the next generation of the Nova API, v2.1. v2.1 is designed to be backwards compatible with v2.0, with the addition of strong API validation. All changes to the API are discoverable via the advertised microversion. For more details see: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/api-microversions.html<br />
<br />
* For Kilo, by default we are still using the v2.0 API code to serve v2.0 API requests. It is hoped that in Liberty v2.1 will be used to serve requests for both v2.0 and v2.1.<br />
<br />
* Looking ahead to Liberty, v2.0 is now frozen, and all new features will be added to the v2.1 API using the microversions mechanism. Microversion increments released with Kilo are:<br />
** Extending the keypair API to support x509 certificates, to be used with Windows WinRM; this is one of the first API features added as a microversion in the v2.1 API.<br />
** Exposing additional attributes in os-extended-server-attributes<br />
<br />
* python-novaclient does not yet have support for the v2.1 API<br />
<br />
* Policy enforcement in the Nova v2.1 API has been improved.<br />
** Policy is now enforced only at the entry point of each API.<br />
** There are no longer duplicated rules for a single API.<br />
** All v2.1 API policy rules use the 'os_compute_api' prefix, which distinguishes them from v2 API rules.<br />
** Previously, parts of the Nova API could not be configured by policy due to hard-coded permission checks at the db layer, which always required an admin user. Some of these hard-coded checks have been removed from the v2.1 API, making its policy configurable; the rest will be removed in Liberty.<br />
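For example, the new prefix shows up in policy.json entries like the following (a hedged sketch; the rule names and values here are illustrative, so consult the generated sample policy file for the authoritative entries):<br />

```json
{
    "os_compute_api:servers:create": "rule:admin_or_owner",
    "os_compute_api:os-evacuate": "rule:admin_api"
}
```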
<br />
==== Upgrade Support ====<br />
<br />
* We have reduced the data migrations that happen in the DB migration scripts; these now happen in a "lazy" way inside the DB objects code. There are nova-manage commands to help force migration of the data. For more details see: http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/flavor-from-sysmeta-to-blob.html<br />
<br />
* Change https://review.openstack.org/#/c/97946/ adds database migration 267, which scans for null instances.uuid records and will fail if any are found, since the migration ultimately needs to make instances.uuid non-nullable and adds a UniqueConstraint on that column. A helper script is provided to search for null instances.uuid records before running the database migrations. Before running 'nova-manage db sync', run the helper script 'nova-manage db null_instance_uuid_scan', which by default will just search and dump results without changing anything. Pass the --delete option to the null_instance_uuid_scan command to automatically remove any records where instances.uuid is null.<br />
<br />
==== Scheduler ====<br />
<br />
* A selection of performance optimisations<br />
* We are in the process of making structural changes to the scheduler that will help improve our ability to evolve and improve scheduling. This should not be visible from an end user perspective.<br />
<br />
==== Cells v2 ====<br />
<br />
* Some initial parts of cells v2 support have been added, but this feature is not yet ready to use.<br />
* New 'nova-manage api_db sync' and 'nova-manage api_db version' commands were added for working with the new API database for cells; nothing uses this database yet, so it is not necessary to set it up.<br />
<br />
==== Compute Drivers ====<br />
<br />
===== Hyper-V =====<br />
<br />
* Support for generation 2 VMs: https://blueprints.launchpad.net/nova/+spec/hyper-v-generation-2-vms<br />
* Support for SMB based volumes, to sit along side existing iSCSI volume support: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/hyper-v-smbfs-volume-support.html<br />
* Support for x509 certificate based key pairs: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/keypair-x509-certificates.html<br />
* Host power actions now work with Hyper-V: https://blueprints.launchpad.net/nova/+spec/hyper-v-host-power-actions<br />
<br />
===== Libvirt (KVM) =====<br />
<br />
* NFV related features:<br />
** NUMA based scheduling: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/input-output-based-numa-scheduling.html<br />
** Pinning guest vCPUs: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/virt-driver-cpu-pinning.html<br />
** Large page support: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/virt-driver-large-pages.html<br />
* vhostuser VIF driver: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/libvirt_vif_vhostuser.html<br />
* Support for running KVM on IBM System z: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/libvirt-kvm-systemz.html<br />
* Support for Parallels: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/pcs-support.html<br />
* Support for SMB based volumes: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/libvirt-smbfs-volume-support.html<br />
* Quiesce using QEMU guest agent: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/quiesced-image-snapshots-with-qemu-guest-agent.html<br />
* Quobyte volume support: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/quobyte-nova-driver.html<br />
* Support for QEMU iSCSI initiator: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/qemu-built-in-iscsi-initiator.html<br />
<br />
===== VMware =====<br />
<br />
* Support for Ephemeral disks: http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/vmware-ephemeral-disk-support.html<br />
* Support for vSAN: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/vmware-vsan-support.html<br />
* Support for OVA images: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/vmware-driver-ova-support.html<br />
* Support for SPBM based storage policies: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/vmware-spbm-support.html<br />
<br />
===== Ironic =====<br />
<br />
* Support to pass flavor capabilities to ironic: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/pass-flavor-capabilities-to-ironic-virt-driver.html<br />
<br />
=== Known Issues ===<br />
<br />
* Evacuate recovery code has the potential to destroy data. On nova-compute startup, instances reported by the hypervisor are examined to see if they have moved (i.e. been evacuated) from the current host during the outage. If the determination is made that they were, then they are destroyed locally. This has the potential to choose incorrectly and destroy instances unexpectedly. On libvirt-like nodes, this can be triggered by changing the system hostname. On vmware-like nodes, this can be triggered by attempting to manage a single vcenter deployment from two different hosts (with different hostnames). This will be fixed properly in Liberty, but for now deployments that wish to disable this behavior as a preventive measure can set workarounds.destroy_after_evacuate=False. NOTE: This is not a regression and has been a flaw in the design of the evacuate feature since its introduction. There is no easy fix for this, hence this workaround to limit the potential for damage. The proposed fix in liberty is here: https://review.openstack.org/#/c/161444/.<br />
<br />
* The generated example configuration files may be missing some oslo-related configuration options.<br />
<br />
=== Upgrade Notes ===<br />
<br />
Below are changes you should be aware of when upgrading. Where possible, the git commit hash is provided so you can find more information:<br />
<br />
* Neutron ports are no longer deleted after your server is deleted, if you created them outside of Nova: 1153a46738fc3ffff98a1df9d94b5a55fdd58777<br />
* EC2 API support has been deprecated, and is likely to be removed in kilo, see f098398a836e3671c49bb884b4a1a1988053f4b2<br />
* Websocket proxies need to be upgraded in lockstep with the API nodes, as older API nodes will not be sending the access_url when authorizing console access, and newer proxy services (this commit and onward) will fail to authorize such requests: 9621ccaf05900009d67cdadeb1aac27368114a61<br />
* After fully upgrading to kilo (i.e. all nodes are running kilo code), you should start a background migration of flavor information from its old home to its new home. Kilo conductor nodes will do this on the fly when necessary, but the rest of the idle data needs to be migrated in the background. This is critical to complete before the Liberty release, where support for the old location will be dropped. Use "nova-manage migrate-flavor-data" to perform this transition.<br />
* Due to the improvements to Nova v2.1 API policy enforcement, many changes have been made to the v2.1 API policy. Because the v2.1 API was not released before Kilo, these changes are not backwards compatible; it is better to start from the new sample policy configuration than from the old one.<br />
* VMware rescue VM behaviour no longer creates a new VM and instead happens in place: cd1765459a24e52e1b933c8e05517fed75ac9d41<br />
* force_config_drive = always has been deprecated, and force_config_drive = True should be used instead: c12a78b35dc910fa97df888960ef2b9a64557254<br />
* If you are running Hyper-V and deployed code from after this commit: b4d57ab65836460d0d9cb8889ec2e6c3986c0a9b but before this commit: c8e9f8e71de64273f10498c5ad959634bfe79975, you may have problems that need to be resolved manually; see: c8e9f8e71de64273f10498c5ad959634bfe79975<br />
* Changed the default value of multi_instance_display_name_template, see: 609b2df339785bff9e30a9d67d5c853562ae3344<br />
* Please use "nova-manage db null_instance_uuid_scan" to ensure the DB migrations will apply cleanly, see: c0ea53ce353684b48303fc59393930c3fa5ade58<br />
<br />
== OpenStack Image Service (Glance) ==<br />
=== Key New Features ===<br />
* Glance now uses the graduated oslo.policy library. This accounts for changes to config options and updates the in-tree etc/config files. http://specs.openstack.org/openstack/glance-specs/specs/kilo/pass-targets-to-policy-enforcer.html<br />
* Ability to deactivate an image. Adds 2 new API calls and may require policy changes. http://specs.openstack.org/openstack/glance-specs/specs/kilo/deactivate-image.html<br />
* Basic support for Image conversion during the import process of an Image. http://specs.openstack.org/openstack/glance-specs/specs/kilo/conversion-of-images.html<br />
* Glance sorting enhancements. Images v2 API supports new sorting syntax including ability to specify the sort dir for each key. http://specs.openstack.org/openstack/glance-specs/specs/kilo/sorting-enhancements.html<br />
* Notifications support for metadefs. http://specs.openstack.org/openstack/glance-specs/specs/kilo/metadefs-notifications.html<br />
* Multiple datastore support for VMware Storage driver. http://specs.openstack.org/openstack/glance-specs/specs/kilo/vmware-store-multiple-datastores.html<br />
* Glance Image Introspection during the import process of an Image. http://specs.openstack.org/openstack/glance-specs/specs/kilo/introspection-of-images.html<br />
* Support in Metadefs for multivalue operators. http://specs.openstack.org/openstack/glance-specs/specs/kilo/metadata-multivalue-operators-support.html<br />
* Adding new taskflow executor and removing the old eventlet executor. http://specs.openstack.org/openstack/glance-specs/specs/kilo/taskflow-integration.html<br />
* The digest algorithm is now configurable. Since SHA-1 is not suitable for general-purpose digital signature applications that require 112 bits of security (as per FIPS), a configuration option is provided to choose between these standards.<br />
* Metadef Tag support. http://specs.openstack.org/openstack/glance-specs/specs/kilo/metadefs-tags.html<br />
* Allow None values to be returned from the API. Glance's API v2 now also returns fields that have value None.<br />
* Catalog Index Service experimental API. http://specs.openstack.org/openstack/glance-specs/specs/kilo/catalog-index-service.html<br />
* More granular capabilities optional support to storage drivers. http://specs.openstack.org/openstack/glance-specs/specs/kilo/store-capabilities.html<br />
* Semver Utility for DB storage. http://specs.openstack.org/openstack/glance-specs/specs/kilo/semver-support.html<br />
* Reload configuration files on SIGHUP signal. Zero downtime config reload. http://specs.openstack.org/openstack/glance-specs/specs/kilo/sighup-conf-reload.html<br />
* Software Metadata Definitions. http://specs.openstack.org/openstack/glance-specs/specs/kilo/software-metadefs.html<br />
* Glance Swift Store to use Multiple Containers for Storing Images. http://specs.openstack.org/openstack/glance-specs/specs/kilo/swift-store-multiple-containers.html<br />
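The SIGHUP-based zero-downtime configuration reload listed above follows a standard POSIX pattern; a minimal stdlib sketch (illustrative only, not Glance's implementation, and POSIX-only):<br />

```python
import os
import signal

# Simulated in-memory configuration; a real service would re-parse
# its config files from disk inside the handler.
CONFIG = {"debug": False}

def reload_config(signum, frame):
    """Handler invoked on SIGHUP: refresh configuration in place."""
    CONFIG["debug"] = True

signal.signal(signal.SIGHUP, reload_config)
os.kill(os.getpid(), signal.SIGHUP)  # what `kill -HUP <pid>` would do
print(CONFIG["debug"])  # the new value is live, with no restart
```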
<br />
=== Known Issues ===<br />
* Adding image member throws 500 when the member name is longer than 255 characters. https://bugs.launchpad.net/glance/+bug/1424038<br />
* Glance v2 API is incompatible with v1 API for owner change. https://bugs.launchpad.net/glance/+bug/1420008<br />
* Glance scrubber doesn't work when registry operates in trusted-auth mode. https://bugs.launchpad.net/glance/+bug/1439666<br />
<br />
=== Upgrade Notes ===<br />
* Removed deprecated option db_enforce_mysql_charset. Corresponding commit: efeb69f9033a57a1c806f71ee3ed9fd3f4d2475e<br />
* Notifications for metadef resources are now supported. Corresponding commit: fd547e3717dc4a3a92c1cb2104c18608a4f4872a<br />
* VMware multiple datastores can be enabled by a few config changes. Corresponding commit: 96fb31d7459bd4e05e052053177dce4d38cdaf90<br />
* Removed the eventlet executor and added a new taskflow executor for async tasks. Corresponding commits: ae3135e1d67df77697a24fddaee3efeadb34a0dd and a39debfd55f6872e5f4f955b75728c936d1cee4b<br />
* Replace snet config with endpoint config. Corresponding commit: 41a9a065531ec946b4a9baf999f97d10fa493826<br />
* Digest algorithm is now configurable. Corresponding commit: 82194e0c422966422f7a4e2157125c7ad8fbc5b5<br />
* Cleanup chunks for deleted image that was in 'saving' state while deleting. Corresponding commit: 0dc8fbb3479a53c5bba8475d14f4c7206904c5ea<br />
* Glance now uses graduated oslo.policy. Corresponding commit: cb7d5a4795bbdaf4dc3eaaf0a6fb1add52c09011<br />
* An image can now be deactivated. A new state called deactivated has been added to the Image data asset. Corresponding commit: b000c85b7fabbe944b4df3ab57ff73883328f40d<br />
<br />
== OpenStack Dashboard (Horizon) ==<br />
<br />
=== Key New Features ===<br />
* Support for Federated authentication via Web Single-Sign-On -- When configured in keystone, the user will be able to choose the authentication mechanism to use from those supported by the deployment. This feature must be enabled by changes to local_settings.py to be utilized. The related settings to enable and configure it can be found [http://docs.openstack.org/developer/horizon/topics/settings.html#websso-enabled here].<br />
<br />
* Support for Theming -- A simpler mechanism to specify a custom theme for Horizon has been included, allowing the use of custom CSS values for Bootstrap and Horizon variables, as well as the inclusion of custom CSS. More details are available [http://docs.openstack.org/developer/horizon/topics/settings.html#custom-theme-path here].<br />
<br />
* Sahara UX Improvements -- Dramatic improvements to the Sahara user experience have been made with the addition of guided cluster creation and guided job creation pages.<br />
<br />
* Launch Instance Wizard (beta) -- A full replacement for the launch instance workflow has been implemented in AngularJS to address usability issues in the existing launch instance workflow. Due to the late inclusion date and limited testing, this feature is marked as beta for Kilo and not enabled by default. To use the new workflow, the following change to local_settings.py is required: <code>LAUNCH_INSTANCE_NG_ENABLED = True</code>. Additionally, you can disable the default launch instance wizard with the following: <code>LAUNCH_INSTANCE_LEGACY_ENABLED = False</code>. This new work is a view into future development in Horizon. <br />
<br />
* Nova<br />
** allow service disable/enable on Hypervisor<br />
** Migrate all instances from host<br />
** expose serial console<br />
<br />
* Cinder<br />
** Cinder v2 by default<br />
** Managed/Unmanaged volume support -- allows admin to manage existing volumes not managed by cinder, as well as unmanage volumes.<br />
** Volume transfer support between projects<br />
** Volume encryption metadata support<br />
<br />
* Glance<br />
** View added to allow administrators to view/add/update Glance Metadata definitions<br />
<br />
* Heat<br />
** Stack Template view<br />
** Orchestration Resources Panel<br />
** Suspend/Resume actions for Stacks<br />
** Preview Stack view allows users to preview stacks specified in templates before creating them.<br />
<br />
* Trove<br />
** Resizing of Trove instances -- changing instance flavor<br />
<br />
* Ceilometer<br />
** Display IPMI meters values from Ceilometer<br />
<br />
* New Reusable AngularJS widgets in Horizon:<br />
** AngularJS table implementation<br />
*** Table drawers -- expandable table content<br />
*** improved client/server search<br />
** Transfer table widget <br />
<br />
* Configurable web root for Horizon beyond just '/'<br />
<br />
=== Known Issues ===<br />
* Volumes created from snapshots are empty - https://bugs.launchpad.net/horizon/+bug/1447288<br />
* Django 1.8 is not fully supported yet.<br />
<br />
=== Upgrade Notes ===<br />
* Django 1.7 is now supported.<br />
<br />
== OpenStack Identity (Keystone) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== Hierarchical multitenancy ====<br />
<br />
[http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#projects-v3-projects Projects] can be nested under other projects by setting the <code>parent_id</code> attribute to an existing project when [http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#create-project creating a new project]. You can also [http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#get-project discover] the parent-child hierarchy through the existing <code>/v3/projects</code> API.<br />
<br />
Role assignments can now be assigned to both [https://github.com/openstack/keystone-specs/blob/master/api/v3/identity-api-v3-os-inherit-ext.rst#assign-role-to-user-on-projects-in-a-subtree users] and [https://github.com/openstack/keystone-specs/blob/master/api/v3/identity-api-v3-os-inherit-ext.rst#assign-role-to-group-on-projects-in-a-subtree groups] on subtrees in the project hierarchy.<br />
<br />
This feature will require corresponding support across other OpenStack services (such as hierarchical quotas) in order to become broadly useful.<br />
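The nesting model can be sketched with plain data (illustrative records, not Keystone's internal schema): each project carries a <code>parent_id</code>, and a subtree is recovered by following those links, which is what inherited role assignments on a parent apply to:<br />

```python
# Illustrative project records; in Keystone these come from the
# /v3/projects API, with parent_id set at creation time.
projects = [
    {"id": "a", "name": "acme", "parent_id": None},
    {"id": "b", "name": "dev", "parent_id": "a"},
    {"id": "c", "name": "qa", "parent_id": "a"},
    {"id": "d", "name": "dev-ci", "parent_id": "b"},
]

def subtree(root_id, projects):
    """Return ids of all projects nested under root_id, depth first."""
    children = [p["id"] for p in projects if p["parent_id"] == root_id]
    result = []
    for child in children:
        result.append(child)
        result.extend(subtree(child, projects))
    return result

print(subtree("a", projects))  # the whole hierarchy below "acme"
```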
<br />
==== Fernet tokens ====<br />
<br />
Unlike UUID tokens which must be persisted to a database, Fernet tokens are entirely non-persistent. Deployers can enable the Fernet token provider using <code> [token] provider = keystone.token.providers.fernet.Provider</code> in <code>keystone.conf</code>.<br />
<br />
Fernet tokens require symmetric encryption keys which can be established using <code>keystone-manage fernet_setup</code> and periodically rotated using <code>keystone-manage fernet_rotate</code>. These keys must be shared by all Keystone nodes in a multi-node (or multi-region) deployment, such that tokens generated by one node can be immediately validated against another.<br />
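Conceptually, a non-persistent token is authenticated data: any node holding the shared key can validate it without a database lookup. A stdlib sketch of that idea follows (illustrative only; Keystone's actual Fernet provider uses the cryptography library's Fernet format, which also encrypts the payload and supports key rotation):<br />

```python
import base64
import hashlib
import hmac

# Stand-in for the keys distributed by keystone-manage fernet_setup;
# every node must hold the same key material.
SHARED_KEY = b"replace-with-random-key-shared-by-all-nodes"

def issue_token(payload: bytes, key: bytes) -> bytes:
    """Sign the payload; nothing is written to a database."""
    sig = hmac.new(key, payload, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(payload + sig)

def validate_token(token: bytes, key: bytes) -> bytes:
    """Any node with the same key can validate and recover the payload."""
    raw = base64.urlsafe_b64decode(token)
    payload, sig = raw[:-32], raw[-32:]  # sha256 digests are 32 bytes
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("invalid token")
    return payload

token = issue_token(b"user=alice,project=demo", SHARED_KEY)
print(validate_token(token, SHARED_KEY))
```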
<br />
==== Identity federation ====<br />
<br />
* Keystone can now act as a [http://docs.openstack.org/developer/keystone/configure_federation.html#keystone-as-an-identity-provider-idp federated identity provider (IdP)] for another instance of Keystone by issuing SAML assertions for local users, which may be ECP-wrapped.<br />
* Added support for [http://docs.openstack.org/developer/keystone/extensions/openidc.html OpenID Connect] as a federated identity authentication mechanism.<br />
* Added the ability to associate many "Remote IDs" to a single identity provider in Keystone. This will help in a case where many identity providers use a common mapping.<br />
* Added the ability for a user to authenticate via a web browser with an existing IdP, through a Single Sign-On page.<br />
* Federated tokens now use the <code>token</code> authentication method, although both <code>mapped</code> and <code>saml2</code> remain available.<br />
* Federated users may now be mapped to existing local identities.<br />
* Groups appearing in federated identity assertions may now be automatically created as local groups with local user membership mappings.<br />
<br />
==== LDAP ====<br />
<br />
* Filter parameters specified by API users are now processed by LDAP itself, instead of by Keystone.<br />
* ''Experimental'' support was added to store domain-specific identity backend [http://docs.openstack.org/developer/keystone/configuration.html#domain-specific-drivers configuration in SQL] using the HTTP API. The primary use case for this is to create a new domain with the HTTP API, and then immediately configure a domain-specific LDAP driver for it without restarting Keystone.<br />
<br />
==== Authorization ====<br />
<br />
* The "assignment" backend has been split into a "resource" backend (containing domains, projects, and roles) and an "assignment" backend, containing the authorization mapping model.<br />
* Added support for trust redelegation. If allowed when the trust is initially created, a trustee can redelegate the roles from the trust via another trust.<br />
* Added support for explicitly requesting an unscoped token from Keystone, even if the user has a <code>default_project_id</code> attribute set.<br />
* Deployers may now opt into disallowing the re-scoping of scoped tokens by setting <code>[token] allow_rescope_scoped_token = false</code> in <code>keystone.conf</code>.<br />
<br />
=== Known Issues ===<br />
<br />
* TBD<br />
<br />
=== Upgrade Notes ===<br />
<br />
* XML support in Keystone has been removed as of Kilo. When upgrading from Juno to Kilo, it is recommended that references to XML and XmlBodyMiddleware be removed from the [https://github.com/openstack/keystone/blob/master/etc/keystone-paste.ini Keystone Paste configuration]. This includes removing the XML middleware filters and the references from the public_api, admin_api, api_v3, public_version_api, admin_version_api and any other pipelines that may contain the XML filters.<br />
* All previous extensions (OS-FEDERATION, OS-OAUTH1, OS-ENDPOINT-POLICY and OS-EP-FILTER) are now enabled by default, and are [http://docs.openstack.org/developer/keystone/extensions.html correspondingly marked] as either "experimental" or "stable".<br />
* [http://specs.openstack.org/openstack/openstack-specs/specs/no-downward-sql-migration.html SQL Schema Downgrades are no longer supported]. This change is the result of evaluation that downward SQL migrations are not well tested and become increasingly difficult to support with the volume of data-change that occurs in many of the migrations.<br />
* The following python libraries are now required: [https://pypi.python.org/pypi/cryptography cryptography], [https://pypi.python.org/pypi/msgpack-python msgpack-python], [https://pypi.python.org/pypi/pysaml2 pysaml2] and [https://pypi.python.org/pypi/oauthlib oauthlib].<br />
* <code>keystone.middleware.RequestBodySizeLimiter</code> is now deprecated in favor of <code>oslo_middleware.sizelimit.RequestBodySizeLimiter</code> and will be removed in Liberty.<br />
* Eventlet-specific configuration options such as <code>public_bind_host</code>, <code>bind_host</code>, <code>admin_bind_host</code>, <code>admin_port</code>, <code>public_port</code>, <code>public_workers</code>, <code>admin_workers</code>, <code>tcp_keepalive</code>, <code>tcp_keepidle</code> have been moved from the <code>[DEFAULT]</code> configuration section to a new configuration section called <code>[eventlet_server]</code>. Similarly, Eventlet-specific SSL configuration options such as <code>enable</code>, <code>certfile</code>, <code>keyfile</code>, <code>ca_certs</code>, <code>cert_required</code> have been moved from the <code>[ssl]</code> configuration section to a new configuration section called <code>[eventlet_server_ssl]</code>.<br />
* <code>keystone.token.backends.sql</code> has been removed in favor of <code>keystone.token.persistence.backends.sql</code>.<br />
* <code>keystone.token.backends.kvs</code> has been removed in favor of <code>keystone.token.persistence.backends.kvs</code>.<br />
* <code>keystone.token.backends.memcache</code> has been removed in favor of <code>keystone.token.persistence.backends.memcache</code>.<br />
* <code>keystone.assignment.backends.kvs</code> has been removed in favor of <code>keystone.assignment.backends.sql</code>.<br />
* <code>keystone.identity.backends.kvs</code> has been removed in favor of <code>keystone.identity.backends.sql</code>.<br />
* <code>keystone.contrib.stats.core.StatsMiddleware</code> has been removed in favor of external tooling.<br />
* <code>keystone.catalog.backends.templated.TemplatedCatalog</code> has been removed in favor of <code>keystone.catalog.backends.templated.Catalog</code>.<br />
* <code>keystone.contrib.access.core.AccessLogMiddleware</code> has been removed in favor of external access logging.<br />
* <code>keystone.trust.backends.kvs</code> has been removed in favor of <code>keystone.trust.backends.sql</code>.<br />
* <code>[catalog] endpoint_substitution_whitelist</code> has been removed from <code>keystone.conf</code> as part of a related security hardening effort.<br />
* <code>[signing] token_format</code> has been removed from <code>keystone.conf</code> in favor of <code>[token] provider</code>.<br />
<br />
== OpenStack Network Service (Neutron) ==<br />
=== Key New Features ===<br />
* DVR now supports VLANs in addition to VXLAN/GRE<br />
* ML2 Hierarchical Port Binding<br />
* New LBaaS Version 2 API<br />
* Portsecurity support for the OVS ML2 Driver<br />
<br />
* New Plugins supported in Kilo include the following:<br />
** A10 Networks LBaaS V2 Driver<br />
** Brocade LBaaS V2 Driver<br />
** Brocade ML2 driver for MLX and ICX switches<br />
** Brocade L3 routing plugin for MLX switch<br />
** Brocade Vyatta vRouter L3 Plugin<br />
** Brocade Vyatta vRouter Firewall Driver<br />
** Brocade Vyatta vRouter VPN Driver<br />
** Cisco CSR VPNaaS Driver<br />
** Dragonflow SDN based Distributed Virtual Router L3 Plugin<br />
** Freescale FWaaS Driver<br />
** Intel McAfee NGFW FWaaS Driver<br />
** IPSEC Strongswan VPNaaS Driver<br />
<br />
=== Known Issues ===<br />
* The Firewall-as-a-Service project is still marked as experimental for the Kilo release.<br />
* Bug [https://bugs.launchpad.net/neutron/+bug/1438819 1438819]<br />
** When a new subnet is created on an external network, all existing routers with gateways on the network will get a new address allocated from it. For IPv4 networks, this could consume the entire subnet for router gateway ports.<br />
<br />
=== Upgrade Notes ===<br />
<br />
Since Havana, Neutron has no longer supported an explicit lease database (https://bugs.launchpad.net/bugs/1202392). This left dead code, including an unused environment variable. In order to remove the dead code (https://review.openstack.org/#/c/152398/), a change to the dhcp.filter file is required, so that the line:<br />
<br />
'''dnsmasq: EnvFilter, dnsmasq, root, NEUTRON_NETWORK_ID='''<br />
<br />
is replaced by:<br />
<br />
'''dnsmasq: CommandFilter, dnsmasq, root'''<br />
<br />
After advanced services were split into separate packages and received their own service configuration files (specifically, etc/neutron/neutron_lbaas.conf, etc/neutron/neutron_fwaas.conf and etc/neutron/neutron_vpnaas.conf), active service provider configuration can be different after upgrade (specifically, default load balancer (haproxy) and vpn (openswan) providers can be enabled for you even though you previously disabled them in neutron.conf). Please make sure you review configuration after upgrade so that it reflects the desired state of service providers.<br />
<br />
Note: this will have no effect if the related service plugin is not loaded in neutron.conf.<br />
<br />
<br />
* The default value of api_workers is now equal to the number of CPUs in the host. If you currently use the default, ensure you set api_workers to a reasonable number for your installation. (https://review.openstack.org/#/c/140493/)<br />
* The neutron.allow_duplicate_networks config option is deprecated in Kilo and will be removed in Liberty where the default behavior will be to just allow multiple ports attached to an instance on the same network in Neutron. (https://review.openstack.org/163581)<br />
* The linuxbridge agent now enables VXLAN by default (https://review.openstack.org/160826)<br />
* neutron-ns-metadata-proxy can now be run as non-root (https://review.openstack.org/147437)<br />
<br />
=== Other Notes (Deprecation/EOL etc) ===<br />
<br />
* Deprecation<br />
** Brocade Classic plugin for Brocade's VDX/VCS series of hardware switches will be deprecated in the L-Release. The functionality provided by this plugin is now addressed by the ML2 Driver available for the VDX series of hardware. The plugin is slated for removal after this release cycle.<br />
<br />
<br />
== OpenStack Block Storage (Cinder) ==<br />
=== Key New Features ===<br />
* From this point forward, any new database schema upgrades will not require restarting Cinder services right away. The services are now independent of schema upgrades. This is part one of Cinder supporting rolling upgrades!<br />
* Ability to add/remove volumes from an existing consistency group. [http://docs.openstack.org/admin-guide-cloud/content/consistency-groups.html Read docs for more info].<br />
* Ability to create a consistency group from an existing consistency group snapshot. [http://docs.openstack.org/admin-guide-cloud/content/consistency-groups.html Read docs for more info].<br />
* Create more fine tuned filters/weighers to set how the scheduler will choose a volume backend. [http://docs.openstack.org/admin-guide-cloud/content/driver_filter_weighing.html Read the docs for more info].<br />
* Encrypted volumes can now be backed up using the Cinder backup service. [http://docs.openstack.org/admin-guide-cloud/content/volume-backup-restore.html Read the docs for more info].<br />
* Ability to create private volume types. This is useful when you want to make a volume type available only to a specific tenant, or to test it before making it available to your cloud. To do so, use ''cinder type-create <name> --is-public false''.<br />
* Oversubscription with thin provision is configurable. [http://docs.openstack.org/admin-guide-cloud/content/over_subscription.html Read docs for more info].<br />
* Ability to add descriptions to volume types. To do so use ''cinder type-create <name> <description>''.<br />
* Cinder now can return multiple iSCSI paths information so that the connector can attach volumes even when the primary path is down ([https://review.openstack.org/#/c/134681/ when connector's multipath feature is enabled] or [https://review.openstack.org/#/c/140877/ not enabled]).<br />
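As a back-of-the-envelope illustration of the configurable oversubscription above (the names and the 20x ratio are illustrative, not Cinder's internals): a thin-provisioned backend can accept a volume while the provisioned capacity stays within physical capacity times the oversubscription ratio.<br />

```python
def can_provision(total_gb, provisioned_gb, new_volume_gb,
                  max_over_subscription_ratio=20.0):
    """Thin-provisioning admission check: virtual capacity is physical
    capacity scaled by the oversubscription ratio."""
    virtual_capacity = total_gb * max_over_subscription_ratio
    return provisioned_gb + new_volume_gb <= virtual_capacity

# A 100 GB backend at 20x can hold up to 2000 GB of thin volumes:
print(can_provision(100, 1900, 100))  # True
print(can_provision(100, 1950, 100))  # False
```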
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
* The 'host' config option for multiple-storage backends in cinder.conf has been renamed to 'backend_host' to avoid a naming conflict with the 'host' option used to locate redis. If you use this option, please ensure your configuration files are updated.<br />
<br />
== OpenStack Telemetry (Ceilometer) ==<br />
=== Key New Features ===<br />
* Support for adding jitter to polling cycles to ensure pollsters are not all querying a service's API at the same time<br />
* Ceilometer API RBAC support<br />
* Improved Event support:<br />
** Multi-pipeline support to enable unique processing and publishing of events<br />
** Enabled ability to capture raw notification messages for auditing and postmortem analysis<br />
** Support for persisting events into ElasticSearch<br />
** Publishing support to database, http, file, kafka and oslo.messaging supported message queues<br />
** Option to split off the events persistence into a separate database<br />
** Telemetry now supports collecting and storing all event-type meters as events. A new option, ''disable_non_metric_meters'', was added to the configuration to make it possible to turn off storing these events as samples. For further information please see the [http://docs.openstack.org/trunk/config-reference/content/ch_configuring-openstack-telemetry.html Telemetry Configuration Reference]<br />
** The Administrator Guide in OpenStack Manuals was updated with a new [http://docs.openstack.org/admin-guide-cloud/content/section_telemetry-events.html Events section], where you can find further information about this functionality.<br />
* Improved pipeline publishing support:<br />
** Support to publish events and samples to Kafka or HTTP targets<br />
** Publish data to multiple queues<br />
* Additional meters<br />
** memory and disk meters for Hyper-V<br />
** disk meters for LibVirt<br />
** power and thermal related IPMI meters, more meters from NodeManager<br />
** ability to meter Ceph<br />
* IPv6 support enabled in Ceilometer udp publisher and collector<br />
* [http://launchpad.net/gnocchi Gnocchi] dispatch support for ceilometer-collector<br />
* Self-disabled pollster mechanism<br />
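The polling jitter in the first bullet simply staggers each polling cycle by a random offset so pollsters do not all hit a service's API at once; a toy sketch (not Ceilometer's actual implementation):<br />

```python
import random

def jittered_delay(interval, max_jitter_fraction=0.5, rng=random.random):
    """Delay the start of a polling cycle by a random fraction of the
    interval so agents started together do not poll in lock-step."""
    return interval * max_jitter_fraction * rng()

# Two agents with a 60s interval start their cycles at different offsets:
delay = jittered_delay(60)
assert 0 <= delay <= 30
```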
<br />
=== Known Issues ===<br />
* TBD<br />
<br />
=== Upgrade Notes ===<br />
* Deprecated meters:<br />
** The instance:<flavor> meter is deprecated in the Kilo release. In order to retrieve samples or statistics based on flavor you can use the following queries:<br />
** statistics: ''ceilometer statistics -m instance -g resource_metadata.instance_type''<br />
** samples: ''ceilometer sample-list -m instance -q metadata.instance_type=<value>''<br />
* Middleware used to meter Swift was previously packaged in Ceilometer and is now deprecated there. It has been separated into its own library: ceilometermiddleware.<br />
** Juno configuration: http://docs.openstack.org/juno/install-guide/install/apt/content/ceilometer-swift.html<br />
** Kilo configuration: http://docs.openstack.org/kilo/install-guide/install/apt/content/ceilometer-swift.html<br />
<br />
== OpenStack Orchestration (Heat) ==<br />
<br />
=== Key New Features ===<br />
<br />
* Improved scaling using nested stacks<br />
** Heat now uses RPC calls for actions on any resource that is based on a template. This should help to spread the load when dealing with large, complex stacks.<br />
* oslo versioned objects<br />
** The database layer now uses oslo versioned objects to aid in future upgrades. This will allow a newly upgraded heat-engine to use a database with an older schema. Note that this does not help with upgrading to Kilo itself.<br />
* New template functions<br />
** There is a new HOT template version "20150430" which includes two new functions "digest" and "repeat". <br />
* Multiregion stacks<br />
** http://docs.openstack.org/hot-reference/content/OS__Heat__Stack.html<br />
* Software config signalling using swift<br />
** https://blueprints.launchpad.net/heat/+spec/software-config-swift-signal<br />
* Trigger new software deployments from heatclient<br />
* stack snapshots<br />
** https://blueprints.launchpad.net/heat/+spec/stack-snapshot<br />
* Access to Heat services<br />
** Admins now have access to service status similar to other projects, in the form of "heat-manage service-list" and via Horizon. This feature reports the active heat-engines.<br />
* Improved validation for nova and neutron properties.<br />
* Pause stack creation/update on a given resource (stack hooks)<br />
** http://specs.openstack.org/openstack/heat-specs/specs/juno/stack-breakpoint.html<br />
** http://docs.openstack.org/developer/heat/template_guide/environment.html?highlight=hooks#pause-stack-creation-update-on-a-given-resource<br />
* New contributed resources<br />
** Mistral resources<br />
** gnocchi alarms https://blueprints.launchpad.net/heat/+spec/ceilometer-gnocchi-alarm<br />
** Keystone resources (Project, Role, User and Group; supported with a Keystone v3 server)<br />
* Stack lifecycle scheduler hints<br />
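The new "digest" and "repeat" functions can be illustrated in plain Python: "digest" hashes a value with a named algorithm, while "repeat" expands a snippet once per list item. The helpers below are rough sketches of the semantics, not Heat's code:<br />

```python
import hashlib

def digest(algorithm, value):
    """Roughly what the HOT 'digest' function does: hash a string with
    the named algorithm and return the hex digest."""
    return hashlib.new(algorithm, value.encode()).hexdigest()

def repeat(template, for_each):
    """Roughly what 'repeat' does: substitute each value from a list
    into the template, producing one copy per item."""
    placeholder, values = next(iter(for_each.items()))
    return [template.replace(placeholder, v) for v in values]

print(digest("sha512", "password")[:8])
print(repeat("port-%p%", {"%p%": ["80", "443"]}))
# ['port-80', 'port-443']
```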
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
* The default of the configuration option "num_engine_workers" has changed from 1 to a number based on the number of CPUs. This is now the same as the way other projects set the number of workers.<br />
* The default for the configuration option "max_nested_stack_depth" has been increased to 5.<br />
* There is a new configuration option "convergence", which is off by default. This feature is not yet complete and the option should remain off.<br />
* In preparation for an upcoming major feature (convergence) there have been some significant DB schema changes. It is suggested that heat-engine be shut down during schema upgrades.<br />
<br />
=== Other Notes (Deprecation/EOL etc) ===<br />
==== Deprecation ====<br />
* The following resources are deprecated: OS::Heat::HARestarter and OS::Heat::CWLiteAlarm<br />
* The CloudWatch API (heat-api-cw)<br />
<br />
== OpenStack Database service (Trove) ==<br />
=== Key New Features ===<br />
<br />
* Support for a new replication strategy based on async GTID replication (new in MySQL 5.6)<br />
** We now support creating n replicas from a single master in one API call<br />
** Failover from an unresponsive master to the most up-to-date slave can now be achieved using the new 'eject-master' API call.<br />
* New Trove guest managers supporting the following datastores:<br />
**Vertica, and Vertica Cluster<br />
**DB2<br />
**CouchDB <br />
* Extended the current management API layer:<br />
** We now have a new management API to support listing and viewing deleted trove instances<br />
** We also added a new management API to ping a datastore guestagent via the RPC mechanism<br />
* Users now have the ability to edit/update the names of Trove instances<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
None yet<br />
<br />
== OpenStack Data Processing service (Sahara) ==<br />
=== Key New Features ===<br />
* New plugins, their features and versions:<br />
** MAPR<br />
** Apache Storm<br />
** Apache Hadoop 2.6.0 was added, Apache Hadoop 2.4.1 deprecated<br />
** New services added to the CDH plugin, including HDFS, YARN, Spark, Oozie, HBase, ZooKeeper and others<br />
* Added indirect VM access for better utilization of floating IPs<br />
* Added event log support to have detailed info about provisioning progress<br />
* Optional default node group and cluster templates per plugin<br />
* Horizon updates:<br />
** Guided cluster creation and job execution<br />
** Filtering on search for objects<br />
* Editing of Node Group templates and Cluster templates implemented<br />
* Added Shell Job Type for clusters running Oozie<br />
* New Job Types endpoint to query list of the supported Job Types<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
Details: http://docs.openstack.org/developer/sahara/userdoc/upgrade.guide.html#juno-kilo<br />
<br />
* Sahara now requires a policy.json configuration file.<br />
<br />
== OpenStack Bare Metal service (Ironic) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== State Machine ====<br />
Ironic now uses a formal model for the logical state of each node it manages.<ref name="states">[http://specs.openstack.org/openstack/ironic-specs/specs/kilo/new-ironic-state-machine.html#proposed-change]New Ironic State Machine</ref> This has enabled the addition of two new processes: '''cleaning''' and '''inspection'''.<br />
* Automatic disk erasure between tenants is now enabled by default. This may be extended to perform additional '''cleaning''' steps, such as re-applying firmware, resetting BIOS settings, etc.<ref name="cleaning">[http://docs.openstack.org/developer/ironic/deploy/cleaning.html]Node Cleaning</ref><br />
* Both in-band and out-of-band methods are available to '''inspect''' hardware. These methods may be used to update Node properties automatically.<ref name="inspect">[http://docs.openstack.org/developer/ironic/deploy/install-guide.html#hardware-inspection]Hardware Inspection</ref><br />
<br />
==== Version Headers ====<br />
The Ironic REST API expects a new ''X-OpenStack-Ironic-API-Version'' header to be passed with each HTTP[S] request. This header allows the client and server to negotiate a mutually supported interface.<ref name="api-version">[http://specs.openstack.org/openstack/ironic-specs/specs/kilo/api-microversions.html]REST API "micro" versions </ref> In the absence of this header, the REST service defaults to a compatibility mode and yields responses compatible with Juno clients. This mode, however, prevents access to most features introduced in Kilo.<br />
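Conceptually the negotiation looks like this (the version numbers are illustrative, and the helper is a sketch rather than Ironic's implementation):<br />

```python
def negotiate_version(requested, server_min=(1, 1), server_max=(1, 6),
                      base=(1, 1)):
    """Pick the API version to serve. 'requested' is the parsed
    X-OpenStack-Ironic-API-Version header, or None if absent."""
    if requested is None:
        return base                      # Juno-compatible mode
    if not (server_min <= requested <= server_max):
        raise ValueError("unsupported version %d.%d" % requested)
    return requested

assert negotiate_version(None) == (1, 1)   # no header: compatibility mode
assert negotiate_version((1, 4)) == (1, 4)
```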
<br />
==== Hardware Driver Changes ====<br />
The following new drivers were added:<br />
* [http://docs.openstack.org/developer/ironic/drivers/amt.html AMT]<br />
* [http://docs.openstack.org/developer/ironic/deploy/drivers.html#irmc iRMC]<br />
* [http://docs.openstack.org/developer/ironic/drivers/vbox.html VirtualBox (testing driver only)]<br />
<br />
<br />
The following enhancements were made to existing drivers:<br />
* [http://docs.openstack.org/developer/ironic/deploy/install-guide.html#enabling-the-configuration-drive-configdrive Configdrives] may be used with the "agent" drivers in lieu of a metadata service, if desired.<br />
* SeaMicro driver supports serial console<br />
* [http://docs.openstack.org/developer/ironic/drivers/ilo.html#uefi-secure-boot-support iLO driver supports UEFI secure boot]<br />
* [http://docs.openstack.org/developer/ironic/drivers/ilo.html#hardware-inspection iLO driver supports out-of-band node inspection]<br />
* [http://docs.openstack.org/developer/ironic/drivers/ilo.html#ilo-node-cleaning iLO driver supports resetting ilo and bios during cleaning]<br />
<br />
<br />
Support for third-party and out-of-tree drivers is enhanced by the following changes:<br />
* Drivers may store their own "internal" information about Nodes.<br />
* Drivers may register their own periodic tasks to be run by the Conductor.<br />
* ''vendor_passthru'' methods now support additional HTTP methods (eg, PUT and POST).<br />
* ''vendor_passthru'' methods are now discoverable in the REST API. See [http://docs.openstack.org/developer/ironic/dev/drivers.html#node-vendor-passthru node vendor passthru] and [http://docs.openstack.org/developer/ironic/dev/drivers.html#driver-vendor-passthru driver vendor passthru]<br />
<br />
==== Other Changes ====<br />
* [http://docs.openstack.org/developer/ironic/deploy/install-guide.html#logical-names Logical names] may be used to address Nodes, in addition to their canonical UUID. <br />
* For servers with varied local disks, [http://docs.openstack.org/developer/ironic/deploy/install-guide.html#specifying-the-disk-for-deployment ''hints''] may be supplied that affect which disk device the OS is provisioned to.<br />
* Support for fetching kernel, ramdisk, and instance images from HTTP[S] sources directly has been added to remove the dependency on Glance. [http://docs.openstack.org/developer/ironic/deploy/install-guide.html#using-ironic-as-a-standalone-service Using ironic as a standalone service]<br />
* Nodes may be placed into ''[http://docs.openstack.org/developer/ironic/deploy/install-guide.html#maintenance-mode maintenance mode]'' via REST API calls. An optional ''maintenance reason'' may be specified when doing so.<br />
<br />
=== Known Issues ===<br />
* '''Running more than one nova-compute process is not officially supported.'''<br />
** While Ironic does include a ClusteredComputeManager, which allows running more than one nova-compute process with Ironic, it should be considered experimental and has many known problems.<br />
* Drivers using the "agent" deploy mechanism do not support "rebuild --preserve-ephemeral"<br />
<br />
=== Upgrade Notes ===<br />
* IPMI Passwords are now obfuscated in REST API responses. This may be disabled by changing API policy settings.<br />
* The "agent" class of drivers now supports both whole-disk and partition-based images.<br />
* The driver_info parameters of "pxe_deploy_kernel" and "pxe_deploy_ramdisk" are deprecated in favour of "deploy_kernel" and "deploy_ramdisk". <br />
* Implementing a custom version of the vendor_passthru() method in drivers has been deprecated in favour of the new @passthru decorator.<br />
<br />
==== Juno to Kilo ====<br />
The recommended upgrade process is documented here:<br />
* http://docs.openstack.org/developer/ironic/deploy/upgrade-guide.html#upgrading-from-juno-to-kilo<br />
<br />
==== Upgrading from Icehouse "nova-baremetal" ====<br />
<br />
An upgrade from an Icehouse Nova installation using the "baremetal" driver directly to Kilo Ironic is untested and unsupported. Instead, please follow this upgrade path:<br />
# Icehouse Nova "baremetal" -> Juno Nova "baremetal"<br />
# Juno Nova "baremetal" -> Juno Ironic<br />
# Juno Ironic -> Kilo Ironic<br />
<br />
Documentation for steps 1 and 2 is available at: https://wiki.openstack.org/wiki/Ironic/NovaBaremetalIronicMigration<br />
<br />
== OpenStack Documentation ==<br />
<br />
* New [http://docs.openstack.org docs.openstack.org] landing page and new web design for [http://docs.openstack.org/user-guide/ End User Guide] and [http://docs.openstack.org/user-guide-admin/ Admin User Guide]<br />
* First release of the [http://docs.openstack.org/networking-guide/ Networking Guide]<br />
* Migration to RST for [http://docs.openstack.org/user-guide/ End User Guide] and [http://docs.openstack.org/user-guide-admin/ Admin User Guide]<br />
* New specialty teams:<br />
** Install Guides<br />
** Networking Guide<br />
** High Availability Guide<br />
** User Guides (Admin and End User)<br />
* First App Tutorial sprint<br />
* Driver documentation clarification and connections</div>Asalkeldhttps://wiki.openstack.org/w/index.php?title=ReleaseNotes/Kilo&diff=78630ReleaseNotes/Kilo2015-04-30T00:35:07Z<p>Asalkeld: /* Key New Features */</p>
<hr />
<div>= OpenStack 2015.1.0 (Kilo) Release Notes =<br />
<br />
<br />
{| style="color:#000000; border:solid 3px #A8A8A8; padding:2em; margin:2em 0; background-color:#FFFFFF; vertical-align:middle;"<br />
| style="padding:1em" | The Kilo release of OpenStack is dedicated to the loving memory of '''Chris Yeoh''', who left his family and us way too soon.<br />
|}<br />
<br />
<br />
<div style="column-count:3;-moz-column-count:3;-webkit-column-count:3"><br />
__TOC__<br />
</div><br />
<br />
<br />
== General Upgrade Notes ==<br />
<br />
* TBD<br />
<br />
== OpenStack Object Storage (Swift) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== Erasure Code (beta) ====<br />
Swift now supports an erasure-code (EC) storage policy type. This allows deployers to achieve very high durability with less raw capacity than is used in replicated storage. However, EC requires more CPU and network resources, so it is not a good fit for every use case. EC is great for storing large, infrequently accessed data in a single region.<br />
<br />
Swift's implementation of erasure codes is meant to be transparent to end users. There is no API difference between replicated storage and EC storage.<br />
<br />
To support erasure codes, Swift now depends on PyECLib and liberasurecode. liberasurecode is a pluggable library that allows for the actual EC algorithm to be implemented in a library of your choosing.<br />
<br />
Full docs are at http://swift.openstack.org/overview_erasure_code.html<br />
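The capacity saving follows from the code's geometry: with k data fragments and m parity fragments, one object consumes (k+m)/k times its size in raw space, versus 3x for triple replication. A quick illustrative calculation (the 10+4 scheme is just an example, not a recommended default):<br />

```python
def raw_bytes_stored(object_size, k, m):
    """Raw capacity consumed by one object under a k+m erasure code."""
    return object_size * (k + m) / k

# A 1 GB object under a 10+4 code vs. 3-replica storage:
gb = 10**9
print(raw_bytes_stored(gb, 10, 4) / gb)  # 1.4x raw capacity
print(3 * gb / gb)                       # 3.0x for replication
```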
<br />
==== Composite tokens ====<br />
Composite tokens allow other OpenStack services to store data in Swift on behalf of a client so that neither the client nor the service can update the data without both parties' consent.<br />
<br />
An example: a user requests that Nova save a snapshot of a VM; Nova passes the request to Glance, and Glance writes the image to a Swift container as a set of objects. In this case, the user cannot modify the snapshot without also having a valid token from the service, nor can the service update the data without a valid token from the user. But the data is still stored in the user's account in Swift, which makes accounting simpler.<br />
<br />
Full docs are at http://swift.openstack.org/overview_backing_store.html<br />
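The consent property above reduces to a dual-authorization rule: an update must present both a valid user token and a valid service token. A deliberately simplified sketch (not Swift's actual middleware):<br />

```python
def may_update(user_token_valid, service_token_valid):
    """Composite-token rule: neither party can change the data
    without the other's consent."""
    return user_token_valid and service_token_valid

assert may_update(True, True)
assert not may_update(True, False)   # user alone cannot update
assert not may_update(False, True)   # service alone cannot update
```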
<br />
==== Data placement updates for smaller, unbalanceable clusters ====<br />
Swift's data placement now accounts for device weight. This allows operators to gradually add new zones and regions without immediately causing a large amount of data to be moved. Also, if a cluster is unbalanced (e.g. a two-zone cluster where one zone has twice the capacity of the other), Swift will more efficiently use the available space and warn when replicas are placed without enough dispersion in the cluster.<br />
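Weight-aware placement can be pictured as handing partitions to devices in proportion to their weights; a toy model (nothing like Swift's real ring-builder, which also handles zones and dispersion):<br />

```python
from collections import Counter

def place_partitions(num_parts, device_weights):
    """Toy weight-proportional placement: repeatedly give the next
    partition to the device furthest below its target share."""
    total = sum(device_weights.values())
    targets = {d: num_parts * w / total for d, w in device_weights.items()}
    counts = Counter()
    for _ in range(num_parts):
        dev = max(targets, key=lambda d: targets[d] - counts[d])
        counts[dev] += 1
    return counts

# A half-weight device ends up with half a full device's share:
print(place_partitions(100, {"d1": 1.0, "d2": 1.0, "d3": 0.5}))
```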
<br />
==== Global cluster replication improvements ====<br />
Replication between regions now moves only one replica per replication run. This gives the remote region a chance to replicate internally and thus avoids moving more data over the WAN.<br />
<br />
<br />
=== Known Issues ===<br />
* As a beta release, EC support is nearly feature-complete, but it lacks support for some features (such as multi-range reads) and has not had a full performance characterization. The feature relies on ssync for durability. Deployers are urged to do extensive testing and not to put production data in an erasure-code storage policy.<br />
<br />
=== Upgrade Notes ===<br />
As always, you can upgrade to this version of Swift with no end-user downtime<br />
<br />
* In order to support erasure codes, Swift has a new dependency on PyECLib (and liberasurecode, transitively). Also, the minimum required version of eventlet has been raised.<br />
<br />
<br />
<br />
<br />
<br />
== OpenStack Compute (Nova) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== API v2.1 ====<br />
<br />
* We have the first release of the next generation of the Nova API, v2.1. v2.1 is designed to be backwards compatible with v2.0, with the addition of strong API validation. All changes to the API are discoverable via the advertised microversion. For more details see: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/api-microversions.html<br />
<br />
* For Kilo, by default we are still using the v2.0 API code to serve v2.0 API requests. It is hoped that in Liberty v2.1 will be used to serve requests for both v2.0 and v2.1.<br />
<br />
* For Liberty, v2.0 is now frozen, and all new features will be added to the v2.1 API using the microversions mechanism. Microversion increments released with Kilo are:<br />
** Extending the keypair API to support for x509 certificates, to be used with Windows WinRM, is one of the first API features added as a microversion in the v2.1 API.<br />
** Exposing additional attributes in os-extended-server-attributes<br />
<br />
* python-novaclient does not yet have support for the v2.1 API<br />
<br />
* Policy enforcement in the Nova v2.1 API has been improved.<br />
** Policy is only enforced at the entry point of the API.<br />
** There are no longer duplicated rules for a single API.<br />
** All v2.1 API policy rules use the 'os_compute_api' prefix, which distinguishes them from v2 API rules.<br />
** Due to hard-coded permission checks at the db layer, part of the Nova API was previously not configurable by policy and always required an admin user. Some of these hard-coded permission checks have been removed in v2.1, making the API policy configurable. The remaining hard-coded permission checks will be removed in Liberty.<br />
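Microversions compare numerically, not lexically, which matters once the minor version reaches two digits. A hedged sketch of how a client might pick the highest mutually supported version (the helper is illustrative, not python-novaclient code):<br />

```python
def parse_version(s):
    """Parse an "X.Y" microversion string into a comparable tuple."""
    major, minor = s.split(".")
    return int(major), int(minor)

def choose_microversion(client_max, server_min, server_max):
    """Return the highest microversion both sides support, or None."""
    lo, hi, want = map(parse_version, (server_min, server_max, client_max))
    if want < lo:
        return None          # client too old for this server
    picked = min(want, hi)   # cap at what the server offers
    return "%d.%d" % picked

# Note 2.10 > 2.9: versions compare numerically, not as strings.
print(choose_microversion("2.10", "2.1", "2.9"))  # 2.9
```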
<br />
==== Upgrade Support ====<br />
<br />
* We have reduced the data migrations that happen in the DB migration scripts, this now happens in a "lazy" way inside the DB objects code. There are nova-manage commands to help force migration of the data. For more details see: http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/flavor-from-sysmeta-to-blob.html<br />
<br />
* Change https://review.openstack.org/#/c/97946/ adds database migration 267, which scans for null instances.uuid records and will fail if any are found, since the migration ultimately needs to make instances.uuid non-nullable and adds a UniqueConstraint on that column. A helper script is provided to search for null instances.uuid records before running the database migrations. Before running 'nova-manage db sync', run the helper script 'nova-manage db null_instance_uuid_scan', which by default will just search and dump results without changing anything. Pass the --delete option to the null_instance_uuid_scan command to automatically remove any records where instances.uuid is null.<br />
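The scan-then-optionally-delete behaviour of the helper can be mimicked with plain SQL; a toy reproduction against an in-memory SQLite table (the real command of course operates on Nova's database):<br />

```python
import sqlite3

def null_uuid_scan(conn, delete=False):
    """Report ids of rows with a NULL uuid; remove them only when asked."""
    rows = conn.execute(
        "SELECT id FROM instances WHERE uuid IS NULL ORDER BY id").fetchall()
    if delete:
        conn.execute("DELETE FROM instances WHERE uuid IS NULL")
    return [r[0] for r in rows]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE instances (id INTEGER PRIMARY KEY, uuid TEXT)")
conn.executemany("INSERT INTO instances (uuid) VALUES (?)",
                 [("abc-123",), (None,), (None,)])

print(null_uuid_scan(conn))               # dry run: [2, 3]
print(null_uuid_scan(conn, delete=True))  # reports and deletes: [2, 3]
print(null_uuid_scan(conn))               # []
```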
<br />
==== Scheduler ====<br />
<br />
* A selection of performance optimisations<br />
* We are in the process of making structural changes to the scheduler that will help improve our ability to evolve and improve scheduling. This should not be visible from an end user perspective.<br />
<br />
==== Cells v2 ====<br />
<br />
* Some initial parts of cells v2 support have been added, but this feature is not yet ready to use.<br />
* New 'nova-manage api_db sync' and 'nova-manage api_db version' commands are available for working with the new api database for cells, but nothing is using this database yet so it is not necessary to set it up.<br />
<br />
==== Compute Drivers ====<br />
<br />
===== Hyper-V =====<br />
<br />
* Support for generation 2 VMs: https://blueprints.launchpad.net/nova/+spec/hyper-v-generation-2-vms<br />
* Support for SMB based volumes, to sit alongside existing iSCSI volume support: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/hyper-v-smbfs-volume-support.html<br />
* Support for x509 certificate based key pairs: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/keypair-x509-certificates.html<br />
* Host power actions now work with Hyper-V: https://blueprints.launchpad.net/nova/+spec/hyper-v-host-power-actions<br />
<br />
===== Libvirt (KVM) =====<br />
<br />
* NFV related features:<br />
** NUMA based scheduling: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/input-output-based-numa-scheduling.html<br />
** Pinning guest vCPUs: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/virt-driver-cpu-pinning.html<br />
** Large page support: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/virt-driver-large-pages.html<br />
* vhostuser VIF driver: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/libvirt_vif_vhostuser.html<br />
* Support for running KVM on IBM System z: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/libvirt-kvm-systemz.html<br />
* Support for parallels: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/pcs-support.html<br />
* Support for SMB based volumes: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/libvirt-smbfs-volume-support.html<br />
* Quiesce using QEMU guest agent: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/quiesced-image-snapshots-with-qemu-guest-agent.html<br />
* Quobyte volume support: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/quobyte-nova-driver.html<br />
* Support for QEMU iSCSI initiator: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/qemu-built-in-iscsi-initiator.html<br />
<br />
===== VMware =====<br />
<br />
* Support for Ephemeral disks: http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/vmware-ephemeral-disk-support.html<br />
* Support for vSAN: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/vmware-vsan-support.html<br />
* Support for OVA images: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/vmware-driver-ova-support.html<br />
* Support for SPBM based storage policies: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/vmware-spbm-support.html<br />
<br />
===== Ironic =====<br />
<br />
* Support to pass flavor capabilities to ironic: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/pass-flavor-capabilities-to-ironic-virt-driver.html<br />
<br />
=== Known Issues ===<br />
<br />
* Evacuate recovery code has the potential to destroy data. On nova-compute startup, instances reported by the hypervisor are examined to see if they have moved (i.e. been evacuated) from the current host during the outage. If the determination is made that they were, then they are destroyed locally. This has the potential to choose incorrectly and destroy instances unexpectedly. On libvirt-like nodes, this can be triggered by changing the system hostname. On vmware-like nodes, this can be triggered by attempting to manage a single vcenter deployment from two different hosts (with different hostnames). This will be fixed properly in Liberty, but for now deployments that wish to disable this behavior as a preventive measure can set workarounds.destroy_after_evacuate=False. NOTE: This is not a regression and has been a flaw in the design of the evacuate feature since its introduction. There is no easy fix for this, hence this workaround to limit the potential for damage. The proposed fix in liberty is here: https://review.openstack.org/#/c/161444/.<br />
<br />
* The generated config examples may be missing some oslo-related configuration options<br />
<br />
=== Upgrade Notes ===<br />
<br />
Below are changes you should be aware of when upgrading. Where possible, the git commit hash is provided for you to find more information:<br />
<br />
* Neutron ports are no longer deleted after your server is deleted, if you created them outside of Nova: 1153a46738fc3ffff98a1df9d94b5a55fdd58777<br />
* EC2 API support has been deprecated, and is likely to be removed in a future release, see f098398a836e3671c49bb884b4a1a1988053f4b2<br />
* Websocket proxies need to be upgraded in a lockstep with the API nodes, as older API nodes will not be sending the access_url when authorizing console access, and newer proxy services (this commit and onward) would fail to authorize such requests 9621ccaf05900009d67cdadeb1aac27368114a61<br />
* After fully upgrading to Kilo (i.e. all nodes are running Kilo code), you should start a background migration of flavor information from its old home to its new home. Kilo conductor nodes will do this on the fly when necessary, but the rest of the idle data needs to be migrated in the background. This is critical to complete before the Liberty release, where support for the old location will be dropped. Use "nova-manage migrate-flavor-data" to perform this transition.<br />
* Due to the improvements in Nova v2.1 API policy enforcement, there have been many changes to the v2.1 API policy. Because the v2.1 API was not released before, these changes are not kept backwards compatible. It is better to use the new policy sample configuration instead of the old one.<br />
* VMware rescue VM behaviour no longer creates a new VM and instead happens in place: cd1765459a24e52e1b933c8e05517fed75ac9d41<br />
* force_config_drive = always has been deprecated, and force_config_drive = True should be used instead: c12a78b35dc910fa97df888960ef2b9a64557254<br />
* If running Hyper-V and you deployed code that was past this commit: b4d57ab65836460d0d9cb8889ec2e6c3986c0a9b but before this commit: c8e9f8e71de64273f10498c5ad959634bfe79975, you may have problems to resolve manually; see: c8e9f8e71de64273f10498c5ad959634bfe79975<br />
* Changed the default value of: multi_instance_display_name_template see: 609b2df339785bff9e30a9d67d5c853562ae3344<br />
* Please use "nova-manage db null_instance_uuid_scan" to ensure the DB migrations will apply cleanly, see: c0ea53ce353684b48303fc59393930c3fa5ade58<br />
<br />
== OpenStack Image Service (Glance) ==<br />
=== Key New Features ===<br />
* TBD<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Dashboard (Horizon) ==<br />
<br />
=== Key New Features ===<br />
* Support for Federated authentication via Web Single-Sign-On -- When configured in keystone, the user will be able to choose the authentication mechanism to use from those supported by the deployment. This feature must be enabled by changes to local_settings.py to be utilized. The related settings to enable and configure it can be found [http://docs.openstack.org/developer/horizon/topics/settings.html#websso-enabled here].<br />
<br />
* Support for Theming -- A simpler mechanism to specify a custom theme for Horizon has been included, allowing the use of CSS values for Bootstrap and Horizon variables, as well as the inclusion of custom CSS. More details are available [http://docs.openstack.org/developer/horizon/topics/settings.html#custom-theme-path here].<br />
<br />
* Sahara UX Improvements -- Dramatic improvements to the Sahara user experience have been made with the addition of guided cluster creation and guided job creation pages.<br />
<br />
* Launch Instance Wizard (beta) -- A full replacement for the launch instance workflow has been implemented in AngularJS to address usability issues in the existing launch instance workflow. Due to the late inclusion date and limited testing, this feature is marked as beta for Kilo and not enabled by default. To use the new workflow, the following change to local_settings.py is required: <code>LAUNCH_INSTANCE_NG_ENABLED = True</code>. Additionally, you can disable the default launch instance wizard with the following: <code>LAUNCH_INSTANCE_LEGACY_ENABLED = False</code>. This new work is a view into future development in Horizon. <br />
<br />
* Nova<br />
** allow service disable/enable on Hypervisor<br />
** Migrate all instances from host<br />
** expose serial console<br />
<br />
* Cinder<br />
** Cinder v2 by default<br />
** Managed/Unmanaged volume support -- allows admins to manage existing volumes not managed by cinder, as well as to unmanage volumes.<br />
** Volume transfer support between projects<br />
** Volume encryption metadata support<br />
<br />
* Glance<br />
** View added to allow administrators to view/add/update Glance Metadata definitions<br />
<br />
* Heat<br />
** Stack Template view<br />
** Orchestration Resources Panel<br />
** Suspend/Resume actions for Stacks<br />
** Preview Stack view allows users to preview stacks specified in templates before creating them.<br />
<br />
* Trove<br />
** Resizing of Trove instances -- changing instance flavor<br />
<br />
* Ceilometer<br />
** Display IPMI meters values from Ceilometer<br />
<br />
* New Reusable AngularJS widgets in Horizon:<br />
** AngularJS table implementation<br />
*** Table drawers -- expandable table content<br />
*** improved client/server search<br />
** Transfer table widget <br />
<br />
* Configurable web root for Horizon beyond just '/'<br />
<br />
=== Known Issues ===<br />
* Volumes created from snapshots are empty - https://bugs.launchpad.net/horizon/+bug/1447288<br />
* Django 1.8 is not fully supported yet.<br />
<br />
=== Upgrade Notes ===<br />
* Django 1.7 is now supported.<br />
<br />
== OpenStack Identity (Keystone) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== Hierarchical multitenancy ====<br />
<br />
[http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#projects-v3-projects Projects] can be nested under other projects by setting the <code>parent_id</code> attribute to an existing project when [http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#create-project creating a new project]. You can also [http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#get-project discover] the parent-child hierarchy through the existing <code>/v3/projects</code> API.<br />
<br />
Role assignments can now be assigned to both [https://github.com/openstack/keystone-specs/blob/master/api/v3/identity-api-v3-os-inherit-ext.rst#assign-role-to-user-on-projects-in-a-subtree users] and [https://github.com/openstack/keystone-specs/blob/master/api/v3/identity-api-v3-os-inherit-ext.rst#assign-role-to-group-on-projects-in-a-subtree groups] on subtrees in the project hierarchy.<br />
<br />
This feature will require corresponding support across other OpenStack services (such as hierarchical quotas) in order to become broadly useful.<br />
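To illustrate the hierarchy, the parent-child relationships expressed through <code>parent_id</code> can be walked to find the subtree a role assignment would cover. The sketch below operates on hypothetical project records in plain Python; it is not a keystone API call:<br />

```python
# Simulate discovering a project subtree from parent_id attributes,
# as used for subtree role assignments in hierarchical multitenancy.
projects = [
    {"id": "prod", "parent_id": None},
    {"id": "team-a", "parent_id": "prod"},
    {"id": "team-b", "parent_id": "prod"},
    {"id": "ci", "parent_id": "team-a"},
]

def subtree(root_id, projects):
    """Return the ids of all descendants of root_id, depth-first."""
    children = [p["id"] for p in projects if p["parent_id"] == root_id]
    result = []
    for child in children:
        result.append(child)
        result.extend(subtree(child, projects))
    return result

# A role assigned on "prod" with the subtree option would also apply to:
print(subtree("prod", projects))  # ['team-a', 'ci', 'team-b']
```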
<br />
==== Fernet tokens ====<br />
<br />
Unlike UUID tokens, which must be persisted to a database, Fernet tokens are entirely non-persistent. Deployers can enable the Fernet provider by setting <code>[token] provider = keystone.token.providers.fernet.Provider</code> in <code>keystone.conf</code>.<br />
<br />
Fernet tokens require symmetric encryption keys which can be established using <code>keystone-manage fernet_setup</code> and periodically rotated using <code>keystone-manage fernet_rotate</code>. These keys must be shared by all keystone nodes in a multi-node (or multi-region) deployment, such that tokens generated by one node can be immediately validated against another.<br />
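A minimal <code>keystone.conf</code> fragment for this might look like the following sketch; the key repository path shown is an assumption based on common defaults, so verify it against your deployment:<br />

```ini
[token]
provider = keystone.token.providers.fernet.Provider

[fernet_tokens]
# Directory holding the symmetric keys; populate it with
# `keystone-manage fernet_setup` and rotate with `keystone-manage fernet_rotate`.
# This directory must be kept in sync across all keystone nodes.
key_repository = /etc/keystone/fernet-keys/
```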
<br />
==== Identity federation ====<br />
<br />
* Keystone can now act as a [http://docs.openstack.org/developer/keystone/configure_federation.html#keystone-as-an-identity-provider-idp federated identity provider (IdP)] for another instance of keystone by issuing SAML assertions for local users, which may be ECP-wrapped.<br />
* Added support for [http://docs.openstack.org/developer/keystone/extensions/openidc.html OpenID Connect] as a federated identity authentication mechanism.<br />
* Added the ability to associate many "Remote IDs" to a single identity provider in keystone. This will help in a case where many identity providers use a common mapping.<br />
* Added the ability for a user to authenticate via a web browser with an existing IdP, through a Single Sign-On page.<br />
* Federated tokens now use the <code>token</code> authentication method, although both <code>mapped</code> and <code>saml2</code> remain available.<br />
* Federated users may now be mapped to existing local identities.<br />
* Groups appearing in federated identity assertions may now be automatically created as local groups with local user membership mappings.<br />
<br />
==== LDAP ====<br />
<br />
* Filter parameters specified by API users are now processed by LDAP itself, instead of by keystone.<br />
* ''Experimental'' support was added to store domain-specific identity backend [http://docs.openstack.org/developer/keystone/configuration.html#domain-specific-drivers configuration in SQL] using the HTTP API. The primary use case for this is to create a new domain via the HTTP API, and then immediately configure a domain-specific LDAP driver for it without restarting keystone.<br />
<br />
==== Authorization ====<br />
<br />
* The "assignment" backend has been split into a "resource" backend (containing domains, projects, and roles) and an "assignments" backend, containing the authorization mapping model.<br />
* Added support for trust redelegation. If allowed when the trust is initially created, a trustee can redelegate the roles from the trust via another trust.<br />
* Added support for explicitly requesting an unscoped token from Keystone, even if the user has a <code>default_project_id</code> attribute set.<br />
* Deployers may now opt into disallowing the re-scoping of scoped tokens by setting <code>[token] allow_rescope_scoped_token = false</code> in <code>keystone.conf</code>.<br />
<br />
=== Known Issues ===<br />
<br />
* TBD<br />
<br />
=== Upgrade Notes ===<br />
<br />
* XML support in Keystone has been removed as of Kilo. When upgrading from Juno to Kilo, it is recommended that references to XML and XmlBodyMiddleware be removed from the [https://github.com/openstack/keystone/blob/master/etc/keystone-paste.ini Keystone Paste configuration]. This includes removing the XML middleware filters and their references from the public_api, admin_api, api_v3, public_version_api, admin_version_api and any other pipelines that may contain the XML filters.<br />
* All previous extensions (OS-FEDERATION, OS-OAUTH1, OS-ENDPOINT-POLICY and OS-EP-FILTER) are now enabled by default, and are [http://docs.openstack.org/developer/keystone/extensions.html correspondingly marked] as either "experimental" or "stable".<br />
* [http://specs.openstack.org/openstack/openstack-specs/specs/no-downward-sql-migration.html SQL Schema Downgrades are no longer supported]. This change is the result of an evaluation that downward SQL migrations are not well tested and become increasingly difficult to support with the volume of data changes that occur in many of the migrations.<br />
* The following python libraries are now required: [https://pypi.python.org/pypi/cryptography cryptography], [https://pypi.python.org/pypi/msgpack-python msgpack-python], [https://pypi.python.org/pypi/pysaml2 pysaml2] and [https://pypi.python.org/pypi/oauthlib oauthlib]<br />
* <code>keystone.middleware.RequestBodySizeLimiter</code> is now deprecated in favor of <code>oslo_middleware.sizelimit.RequestBodySizeLimiter</code> and will be removed in Liberty.<br />
* Eventlet-specific configuration options such as <code>public_bind_host</code>, <code>bind_host</code>, <code>admin_bind_host</code>, <code>admin_port</code>, <code>public_port</code>, <code>public_workers</code>, <code>admin_workers</code>, <code>tcp_keepalive</code>, <code>tcp_keepidle</code> have been moved from the <code>[DEFAULT]</code> configuration section to a new configuration section called <code>[eventlet_server]</code>. Similarly, Eventlet-specific SSL configuration options such as <code>enable</code>, <code>certfile</code>, <code>keyfile</code>, <code>ca_certs</code>, <code>cert_required</code> have been moved from the <code>[ssl]</code> configuration section to a new configuration section called <code>[eventlet_server_ssl]</code>.<br />
* <code>keystone.token.backends.sql</code> has been removed in favor of <code>keystone.token.persistence.backends.sql</code>.<br />
* <code>keystone.token.backends.kvs</code> has been removed in favor of <code>keystone.token.persistence.backends.kvs</code>.<br />
* <code>keystone.token.backends.memcache</code> has been removed in favor of <code>keystone.token.persistence.backends.memcache</code>.<br />
* <code>keystone.assignment.backends.kvs</code> has been removed in favor of <code>keystone.assignment.backends.sql</code>.<br />
* <code>keystone.identity.backends.kvs</code> has been removed in favor of <code>keystone.identity.backends.sql</code>.<br />
* <code>keystone.contrib.stats.core.StatsMiddleware</code> has been removed in favor of external tooling.<br />
* <code>keystone.catalog.backends.templated.TemplatedCatalog</code> has been removed in favor of <code>keystone.catalog.backends.templated.Catalog</code>.<br />
* <code>keystone.contrib.access.core.AccessLogMiddleware</code> has been removed in favor of external access logging.<br />
* <code>keystone.trust.backends.kvs</code> has been removed in favor of <code>keystone.trust.backends.sql</code>.<br />
* <code>[catalog] endpoint_substitution_whitelist</code> has been removed from <code>keystone.conf</code> as part of a related security hardening effort.<br />
* <code>[signing] token_format</code> has been removed from <code>keystone.conf</code> in favor of <code>[token] provider</code>.<br />
<br />
== OpenStack Network Service (Neutron) ==<br />
=== Key New Features ===<br />
* DVR now supports VLANs in addition to VXLAN/GRE<br />
* ML2 Hierarchical Port Binding<br />
* New LBaaS Version 2 API<br />
* Portsecurity support for the OVS ML2 Driver<br />
<br />
* New Plugins supported in Kilo include the following:<br />
** A10 Networks LBaaS V2 Driver<br />
** Brocade LBaaS V2 Driver<br />
** Brocade ML2 driver for MLX and ICX switches<br />
** Brocade L3 routing plugin for MLX switch<br />
** Brocade Vyatta vRouter L3 Plugin<br />
** Brocade Vyatta vRouter Firewall Driver<br />
** Brocade Vyatta vRouter VPN Driver<br />
** Cisco CSR VPNaaS Driver<br />
** Dragonflow SDN based Distributed Virtual Router L3 Plugin<br />
** Freescale FWaaS Driver<br />
** Intel Mcafee NGFW FWaaS Driver<br />
** IPSEC Strongswan VPNaaS Driver<br />
<br />
=== Known Issues ===<br />
* The Firewall-as-a-Service project is still marked as experimental for the Kilo release.<br />
* Bug [https://bugs.launchpad.net/neutron/+bug/1438819 1438819]<br />
** When a new subnet is created on an external network, all existing routers with gateways on the network will get a new address allocated from it. For IPv4 networks, this could consume the entire subnet for router gateway ports.<br />
<br />
=== Upgrade Notes ===<br />
<br />
Since Havana, Neutron no longer supports an explicit lease database (https://bugs.launchpad.net/bugs/1202392). This left dead code, including an unused environment variable. In order to remove the dead code (https://review.openstack.org/#/c/152398/), a change to the dhcp.filter file is required, so that the line:<br />
<br />
'''dnsmasq: EnvFilter, dnsmasq, root, NEUTRON_NETWORK_ID='''<br />
<br />
is replaced by:<br />
<br />
'''dnsmasq: CommandFilter, dnsmasq, root'''<br />
<br />
After advanced services were split into separate packages and received their own service configuration files (specifically, etc/neutron/neutron_lbaas.conf, etc/neutron/neutron_fwaas.conf and etc/neutron/neutron_vpnaas.conf), the active service provider configuration can be different after an upgrade (specifically, the default load balancer (haproxy) and VPN (openswan) providers can be enabled for you even though you previously disabled them in neutron.conf). Please review your configuration after upgrading so that it reflects the desired state of service providers.<br />
<br />
Note: this will have no effect if the related service plugin is not loaded in neutron.conf.<br />
<br />
<br />
* The default value of api_workers is now equal to the number of CPUs in the host. If you currently use the default, ensure you set api_workers to a reasonable number for your installation. (https://review.openstack.org/#/c/140493/)<br />
* The neutron.allow_duplicate_networks config option is deprecated in Kilo and will be removed in Liberty, where the default behavior will be to allow an instance to have multiple ports attached on the same network. (https://review.openstack.org/163581)<br />
* The linuxbridge agent now enables VXLAN by default (https://review.openstack.org/160826)<br />
* neutron-ns-metadata-proxy can now be run as non-root (https://review.openstack.org/147437)<br />
<br />
=== Other Notes (Deprecation/EOL etc) ===<br />
<br />
* Deprecation<br />
** Brocade Classic plugin for Brocade's VDX/VCS series of hardware switches will be deprecated in the L-Release. The functionality provided by this plugin is now addressed by the ML2 Driver available for the VDX series of hardware. The plugin is slated for removal after this release cycle.<br />
<br />
<br />
== OpenStack Block Storage (Cinder) ==<br />
=== Key New Features ===<br />
* From this point forward, new database schema upgrades will not require restarting Cinder services right away; the services are now independent of schema upgrades. This is part one of Cinder's support for rolling upgrades!<br />
* Ability to add/remove volumes from an existing consistency group. [http://docs.openstack.org/admin-guide-cloud/content/consistency-groups.html Read docs for more info].<br />
* Ability to create a consistency group from an existing consistency group snapshot. [http://docs.openstack.org/admin-guide-cloud/content/consistency-groups.html Read docs for more info].<br />
* Create more fine tuned filters/weighers to set how the scheduler will choose a volume backend. [http://docs.openstack.org/admin-guide-cloud/content/driver_filter_weighing.html Read the docs for more info].<br />
* Encrypted volumes can now be backed up using the Cinder backup service. [http://docs.openstack.org/admin-guide-cloud/content/volume-backup-restore.html Read the docs for more info].<br />
* Ability to create private volume types. This is perfect when you want to make a volume type available only to a specific tenant, or to test it before making it available to your cloud. To do so, use ''cinder type-create <name> --is-public false''.<br />
* Oversubscription with thin provision is configurable. [http://docs.openstack.org/admin-guide-cloud/content/over_subscription.html Read docs for more info].<br />
* Ability to add descriptions to volume types. To do so, use ''cinder type-create <name> <description>''.<br />
* Cinder now can return multiple iSCSI paths information so that the connector can attach volumes even when the primary path is down ([https://review.openstack.org/#/c/134681/ when connector's multipath feature is enabled] or [https://review.openstack.org/#/c/140877/ not enabled]).<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
* The 'host' config option for multiple-storage backends in cinder.conf is renamed to 'backend_host' in order to avoid a naming conflict with the 'host' to locate redis. If you use this option, please ensure your configuration files are updated.<br />
<br />
== OpenStack Telemetry (Ceilometer) ==<br />
=== Key New Features ===<br />
* Support for adding jitter to polling cycles to ensure pollsters are not all querying a service's API at the same time<br />
* Ceilometer API RBAC support<br />
* Improved Event support:<br />
** Multi-pipeline support to enable unique processing and publishing of events<br />
** Enabled ability to capture raw notification messages for auditing and postmortem analysis<br />
** Support for persisting events into ElasticSearch<br />
** Publishing support to database, HTTP, file, Kafka and oslo.messaging supported message queues<br />
** Option to split off the events persistence into a separate database<br />
** Telemetry now supports collecting and storing all event type meters as events. A new option, ''disable_non_metric_meters'', was added to the configuration to make it possible to turn off storing these events as samples. For further information, please see the [http://docs.openstack.org/trunk/config-reference/content/ch_configuring-openstack-telemetry.html Telemetry Configuration Reference]<br />
** The Administrator Guide in OpenStack Manuals was updated with a new [http://docs.openstack.org/admin-guide-cloud/content/section_telemetry-events.html Events section], where you can find further information about this functionality.<br />
* Improved pipeline publishing support:<br />
** Support to publish events and samples to Kafka or HTTP targets<br />
** Publish data to multiple queues<br />
* Additional meters<br />
** memory and disk meters for Hyper-V<br />
** disk meters for LibVirt<br />
** power and thermal related IPMI meters, more meters from NodeManager<br />
** ability to meter Ceph<br />
* IPv6 support enabled in Ceilometer udp publisher and collector<br />
* [http://launchpad.net/gnocchi Gnocchi] dispatch support for ceilometer-collector<br />
* Self-disabled pollster mechanism<br />
<br />
=== Known Issues ===<br />
* TBD<br />
<br />
=== Upgrade Notes ===<br />
* Deprecated meters:<br />
** The instance:<flavor> meter is deprecated in the Kilo release. In order to retrieve samples or statistics based on flavor, you can use the following queries:<br />
*** statistics: ceilometer statistics -m instance -g resource_metadata.instance_type<br />
*** samples: ceilometer sample-list -m instance -q metadata.instance_type=<value><br />
* Middleware used to meter Swift was previously packaged in Ceilometer and is now deprecated there; it has been separated into its own library: ceilometermiddleware.<br />
** Juno configuration: http://docs.openstack.org/juno/install-guide/install/apt/content/ceilometer-swift.html<br />
** Kilo configuration: http://docs.openstack.org/kilo/install-guide/install/apt/content/ceilometer-swift.html<br />
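To illustrate what the flavor-grouped statistics query above computes, the following sketch aggregates hypothetical instance samples by their <code>instance_type</code> metadata in plain Python; no ceilometer deployment is involved:<br />

```python
from collections import defaultdict

# Hypothetical instance samples, mimicking resource_metadata.instance_type
samples = [
    {"meter": "instance", "volume": 1, "metadata": {"instance_type": "m1.small"}},
    {"meter": "instance", "volume": 1, "metadata": {"instance_type": "m1.small"}},
    {"meter": "instance", "volume": 1, "metadata": {"instance_type": "m1.large"}},
]

# Rough equivalent of:
#   ceilometer statistics -m instance -g resource_metadata.instance_type
counts = defaultdict(int)
for s in samples:
    if s["meter"] == "instance":
        counts[s["metadata"]["instance_type"]] += s["volume"]

print(dict(counts))  # {'m1.small': 2, 'm1.large': 1}
```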
<br />
== OpenStack Orchestration (Heat) ==<br />
<br />
=== Key New Features ===<br />
<br />
* Improved scaling using nested stacks<br />
** Heat now dispatches actions over RPC for any resource that is based on a template. This should help to spread the load when dealing with large, complex stacks.<br />
* oslo versioned objects<br />
** The database layer now uses oslo versioned objects to aid in future upgrades. This will allow a newly upgraded heat-engine to use a database with an older schema. Note that this will not help with the upgrade to Kilo itself.<br />
* New template functions<br />
** There is a new HOT template version "20150430" which includes two new functions "digest" and "repeat". <br />
* Multiregion stacks<br />
** http://docs.openstack.org/hot-reference/content/OS__Heat__Stack.html<br />
* Software config signalling using swift<br />
** https://blueprints.launchpad.net/heat/+spec/software-config-swift-signal<br />
* Triggering new software deployments from heatclient<br />
* stack snapshots<br />
** https://blueprints.launchpad.net/heat/+spec/stack-snapshot<br />
* Access to Heat services<br />
** Admins now have access to information about Heat's own services, similar to other projects, in the form of "heat-manage service-list" and via horizon. This feature reports the active heat-engines.<br />
* Improved validation for nova and neutron properties.<br />
* Pause stack creation/update on a given resource (stack hooks)<br />
** http://specs.openstack.org/openstack/heat-specs/specs/juno/stack-breakpoint.html<br />
** http://docs.openstack.org/developer/heat/template_guide/environment.html?highlight=hooks#pause-stack-creation-update-on-a-given-resource<br />
* New contributed resources<br />
** Mistral resources<br />
** gnocchi alarms https://blueprints.launchpad.net/heat/+spec/ceilometer-gnocchi-alarm<br />
** keystone resources -- Project, Role, User and Group resources, supported with a Keystone v3 server<br />
* Stack lifecycle scheduler hints<br />
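Of the new template functions above, "digest" computes a checksum of a value inside a template, e.g. {digest: [sha512, <value>]}. The sketch below shows the equivalent computation in Python; the assumption that the algorithm names map onto hashlib-style algorithms is mine, so check the HOT specification for the exact behavior:<br />

```python
import hashlib

def digest(algorithm, value):
    """Rough equivalent of HOT's {digest: [<algorithm>, <value>]} (a sketch)."""
    h = hashlib.new(algorithm)  # e.g. "sha512", "md5"
    h.update(value.encode("utf-8"))
    return h.hexdigest()

checksum = digest("sha512", "secret")
print(len(checksum))  # a sha512 hex digest is 128 characters
```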
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
* The default of the configuration option "num_engine_workers" has changed from 1 to a number based on the number of CPUs. This matches the way other projects set the number of workers.<br />
* The default for the configuration option "max_nested_stack_depth" has been increased to 5.<br />
* There is a new configuration option "convergence", which is off by default. This feature is not yet complete, so this option should remain off.<br />
* In preparation for an upcoming major feature (convergence), there have been some significant DB schema changes. It is suggested that heat-engine be shut down during schema upgrades.<br />
<br />
=== Other Notes (Deprecation/EOL etc) ===<br />
==== Deprecation ====<br />
* The following resources are deprecated: OS::Heat::HARestarter and OS::Heat::CWLiteAlarm<br />
* The CloudWatch API (heat-api-cw)<br />
<br />
== OpenStack Database service (Trove) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
None yet<br />
<br />
== OpenStack Data Processing service (Sahara) ==<br />
=== Key New Features ===<br />
* New plugins, their features and versions:<br />
** MAPR<br />
** Apache Storm<br />
** Apache Hadoop 2.6.0 was added, Apache Hadoop 2.4.1 deprecated<br />
** New services were added to the CDH plugin, including HDFS, YARN, Spark, Oozie, HBase, ZooKeeper and others<br />
* Added indirect VM access for better utilization of floating IPs<br />
* Added event log support to have detailed info about provisioning progress<br />
* Optional default node group and cluster templates per plugin<br />
* Horizon updates:<br />
** Guided cluster creation and job execution<br />
** Filtering on search for objects<br />
* Editing of Node Group templates and Cluster templates implemented<br />
* Added Shell Job Type for clusters running Oozie<br />
* New Job Types endpoint to query list of the supported Job Types<br />
<br />
=== Known Issues ===<br />
-<br />
<br />
=== Upgrade Notes ===<br />
Details: http://docs.openstack.org/developer/sahara/userdoc/upgrade.guide.html#juno-kilo<br />
<br />
* Sahara now requires policy.json configuration file.<br />
<br />
== OpenStack Bare Metal service (Ironic) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== State Machine ====<br />
Ironic now uses a formal model for the logical state of each node it manages.<ref name="states">[http://specs.openstack.org/openstack/ironic-specs/specs/kilo/new-ironic-state-machine.html#proposed-change]New Ironic State Machine</ref> This has enabled the addition of two new processes: '''cleaning''' and '''inspection'''.<br />
* Automatic disk erasure between tenants is now enabled by default. This may be extended to perform additional '''cleaning''' steps, such as re-applying firmware, resetting BIOS settings, etc.<ref name="cleaning">[http://docs.openstack.org/developer/ironic/deploy/cleaning.html]Node Cleaning</ref><br />
* Both in-band and out-of-band methods are available to '''inspect''' hardware. These methods may be used to update Node properties automatically.<ref name="inspect">[http://docs.openstack.org/developer/ironic/deploy/install-guide.html#hardware-inspection]Hardware Inspection</ref><br />
<br />
==== Version Headers ====<br />
The Ironic REST API expects a new ''X-OpenStack-Ironic-API-Version'' header be passed with each HTTP[S] request. This header allows client and server to negotiate a mutually supported interface.<ref name="api-version">[http://specs.openstack.org/openstack/ironic-specs/specs/kilo/api-microversions.html]REST API "micro" versions </ref> In the absence of this header, the REST service will default to a compatibility mode and yield responses compatible with Juno clients. This mode, however, prevents access to most features introduced in Kilo.<br />
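A client opting into Kilo features simply includes the header on every request; the sketch below only constructs the headers, and the version value "1.6" is illustrative rather than a statement of the exact Kilo maximum:<br />

```python
def ironic_headers(token, api_version=None):
    """Build Ironic REST request headers (a sketch). Omitting the version
    header leaves the server in its Juno-compatible mode."""
    headers = {"X-Auth-Token": token, "Accept": "application/json"}
    if api_version:
        headers["X-OpenStack-Ironic-API-Version"] = api_version
    return headers

print(ironic_headers("token123", "1.6")["X-OpenStack-Ironic-API-Version"])  # 1.6
```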
<br />
==== Hardware Driver Changes ====<br />
The following new drivers were added:<br />
* [http://docs.openstack.org/developer/ironic/drivers/amt.html AMT]<br />
* [http://docs.openstack.org/developer/ironic/deploy/drivers.html#irmc iRMC]<br />
* [http://docs.openstack.org/developer/ironic/drivers/vbox.html VirtualBox (testing driver only)]<br />
<br />
<br />
The following enhancements were made to existing drivers:<br />
* [http://docs.openstack.org/developer/ironic/deploy/install-guide.html#enabling-the-configuration-drive-configdrive Configdrives] may be used with the "agent" drivers in lieu of a metadata service, if desired.<br />
* SeaMicro driver supports serial console<br />
* [http://docs.openstack.org/developer/ironic/drivers/ilo.html#uefi-secure-boot-support iLO driver supports UEFI secure boot]<br />
* [http://docs.openstack.org/developer/ironic/drivers/ilo.html#hardware-inspection iLO driver supports out-of-band node inspection]<br />
* [http://docs.openstack.org/developer/ironic/drivers/ilo.html#ilo-node-cleaning iLO driver supports resetting ilo and bios during cleaning]<br />
<br />
<br />
Support for third-party and out-of-tree drivers is enhanced by the following changes:<br />
* Drivers may store their own "internal" information about Nodes.<br />
* Drivers may register their own periodic tasks to be run by the Conductor.<br />
* ''vendor_passthru'' methods now support additional HTTP methods (eg, PUT and POST).<br />
* ''vendor_passthru'' methods are now discoverable in the REST API. See [http://docs.openstack.org/developer/ironic/dev/drivers.html#node-vendor-passthru node vendor passthru] and [http://docs.openstack.org/developer/ironic/dev/drivers.html#driver-vendor-passthru driver vendor passthru]<br />
<br />
==== Other Changes ====<br />
* [http://docs.openstack.org/developer/ironic/deploy/install-guide.html#logical-names Logical names] may be used to address Nodes, in addition to their canonical UUID. <br />
* For servers with varied local disks, [http://docs.openstack.org/developer/ironic/deploy/install-guide.html#specifying-the-disk-for-deployment ''hints''] may be supplied that affect which disk device the OS is provisioned to.<br />
* Support for fetching kernel, ramdisk, and instance images from HTTP[S] sources directly has been added to remove the dependency on Glance. [http://docs.openstack.org/developer/ironic/deploy/install-guide.html#using-ironic-as-a-standalone-service Using ironic as a standalone service]<br />
* Nodes may be placed into ''[http://docs.openstack.org/developer/ironic/deploy/install-guide.html#maintenance-mode maintenance mode]'' via REST API calls. An optional ''maintenance reason'' may be specified when doing so.<br />
<br />
=== Known Issues ===<br />
* '''Running more than one nova-compute process is not officially supported.'''<br />
** While Ironic does include a ClusteredComputeManager, which allows running more than one nova-compute process with Ironic, it should be considered experimental and has many known problems.<br />
* Drivers using the "agent" deploy mechanism do not support "rebuild --preserve-ephemeral"<br />
<br />
=== Upgrade Notes ===<br />
* IPMI Passwords are now obfuscated in REST API responses. This may be disabled by changing API policy settings.<br />
* The "agent" class of drivers now support both whole-disk and partition based images.<br />
* The driver_info parameters of "pxe_deploy_kernel" and "pxe_deploy_ramdisk" are deprecated in favour of "deploy_kernel" and "deploy_ramdisk". <br />
* Drivers implementing their own version of the vendor_passthru() method are deprecated in favour of the new @passthru decorator.<br />
<br />
==== Juno to Kilo ====<br />
The recommended upgrade process is documented here:<br />
* http://docs.openstack.org/developer/ironic/deploy/upgrade-guide.html#upgrading-from-juno-to-kilo<br />
<br />
==== Upgrading from Icehouse "nova-baremetal" ====<br />
<br />
An upgrade from an Icehouse Nova installation using the "baremetal" driver directly to Kilo Ironic is untested and unsupported. Instead, please use the following upgrade path:<br />
# Icehouse Nova "baremetal" -> Juno Nova "baremetal"<br />
# Juno Nova "baremetal" -> Juno Ironic<br />
# Juno Ironic -> Kilo Ironic<br />
<br />
Documentation for steps 1 and 2 is available at: https://wiki.openstack.org/wiki/Ironic/NovaBaremetalIronicMigration<br />
<br />
== OpenStack Documentation ==<br />
<br />
* New [http://docs.openstack.org docs.openstack.org] landing page and new web design for [http://docs.openstack.org/user-guide/ End User Guide] and [http://docs.openstack.org/user-guide-admin/ Admin User Guide]<br />
* First release of the [http://docs.openstack.org/networking-guide/ Networking Guide]<br />
* Migration to RST for [http://docs.openstack.org/user-guide/ End User Guide] and [http://docs.openstack.org/user-guide-admin/ Admin User Guide]<br />
* New specialty teams:<br />
** Install Guides<br />
** Networking Guide<br />
** High Availability Guide<br />
** User Guides (Admin and End User)<br />
* First App Tutorial sprint<br />
* Driver documentation clarification and connections</div>
<hr />
<div>= OpenStack 2015.1.0 (Kilo) Release Notes =<br />
<br />
<br />
{| style="color:#000000; border:solid 3px #A8A8A8; padding:2em; margin:2em 0; background-color:#FFFFFF; vertical-align:middle;"<br />
| style="padding:1em" | The Kilo release of OpenStack is dedicated to the loving memory of '''Chris Yeoh''', who left his family and us way too soon.<br />
|}<br />
<br />
<br />
<div style="column-count:3;-moz-column-count:3;-webkit-column-count:3"><br />
__TOC__<br />
</div><br />
<br />
<br />
== General Upgrade Notes ==<br />
<br />
* TBD<br />
<br />
== OpenStack Object Storage (Swift) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== Erasure Code (beta) ====<br />
Swift now supports an erasure-code (EC) storage policy type. This allows deployers to achieve very high durability with less raw capacity than is used in replicated storage. However, EC requires more CPU and network resources, so it is not a good fit for every use case. EC is great for storing large, infrequently accessed data in a single region.<br />
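To make the capacity trade-off concrete, here is a back-of-the-envelope comparison between 3-replica storage and a hypothetical 10+4 EC scheme; the numbers are illustrative, and actual overhead depends on the EC parameters you choose:<br />

```python
def raw_bytes(user_bytes, data_fragments, parity_fragments):
    """Raw capacity consumed for user_bytes under a k+m erasure-code scheme."""
    return user_bytes * (data_fragments + parity_fragments) / data_fragments

user_tb = 100
replicated = user_tb * 3                   # 3 full copies -> 3.0x overhead
erasure_coded = raw_bytes(user_tb, 10, 4)  # 10 data + 4 parity -> 1.4x overhead

print(replicated, erasure_coded)  # 300 140.0
```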
<br />
Swift's implementation of erasure codes is meant to be transparent to end users. There is no API difference between replicated storage and EC storage.<br />
<br />
To support erasure codes, Swift now depends on PyECLib and liberasurecode. liberasurecode is a pluggable library that allows for the actual EC algorithm to be implemented in a library of your choosing.<br />
<br />
Full docs are at http://swift.openstack.org/overview_erasure_code.html<br />
<br />
==== Composite tokens ====<br />
Composite tokens allow other OpenStack services to store data in Swift on behalf of a client so that neither the client nor the service can update the data without both parties' consent.<br />
<br />
An example of this is a user requesting that Nova save a snapshot of a VM. Nova passes the request to Glance, and Glance writes the image to a Swift container as a set of objects. In this case, the user cannot modify the snapshot without also having a valid token from the service, nor can the service update the data without a valid token from the user. But the data is still stored in the user's account in Swift, which makes accounting simpler.<br />
<br />
Full docs are at http://swift.openstack.org/overview_backing_store.html<br />
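The mechanics rely on the request carrying two tokens. Below is a sketch of the headers a service would send when writing on a user's behalf; the header names follow Swift's service token convention as I understand it, so treat them as an assumption and confirm against the docs linked above:<br />

```python
def composite_headers(user_token, service_token):
    """Headers for a request made by a service on behalf of a user (a sketch).
    Swift authorizes the write only if both tokens check out."""
    return {
        "X-Auth-Token": user_token,        # proves the user's consent
        "X-Service-Token": service_token,  # proves the service's involvement
    }

headers = composite_headers("user-tok", "svc-tok")
```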
<br />
==== Data placement updates for smaller, unbalanceable clusters ====<br />
Swift's data placement now accounts for device weight. This allows operators to gradually add new zones and regions without immediately causing a large amount of data to be moved. Also, if a cluster is unbalanced (eg a two-zone cluster where one zone has twice the capacity of the other), Swift will more efficiently use the available space and warn when replicas are placed without enough dispersion in the cluster.<br />
<br />
==== Global cluster replication improvements ====<br />
Replication between regions will now only move one replica per replication run. This gives the remote region a chance to replicate internally and thus avoids moving more data over the WAN.<br />
<br />
<br />
=== Known Issues ===<br />
* As a beta release, EC support is nearly feature-complete, but it lacks support for some features (such as multi-range reads) and has not had a full performance characterization. This feature relies on ssync for durability. Deployers are urged to do extensive testing and not to deploy production data using an erasure code storage policy.<br />
<br />
=== Upgrade Notes ===<br />
As always, you can upgrade to this version of Swift with no end-user downtime.<br />
<br />
* In order to support erasure codes, Swift has a new dependency on PyECLib (and liberasurecode, transitively). Also, the minimum required version of eventlet has been raised.<br />
<br />
<br />
<br />
<br />
<br />
== OpenStack Compute (Nova) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== API v2.1 ====<br />
<br />
* We have the first release of the next generation of the Nova API, v2.1. v2.1 is designed to be backwards compatible with v2.0, with the addition of strong API validation. All changes to the API are discoverable via the advertised microversion. For more details see: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/api-microversions.html<br />
<br />
* For Kilo, by default we are still using the v2.0 API code to serve v2.0 API requests. It is hoped that in Liberty v2.1 will be used to serve requests for both v2.0 and v2.1.<br />
<br />
* From Liberty, v2.0 is frozen, and all new features will be added to the v2.1 API using the microversions mechanism. Microversion increments released in Kilo are:<br />
** Extending the keypair API to support x509 certificates, to be used with Windows WinRM; this is one of the first API features added as a microversion in the v2.1 API.<br />
** Exposing additional attributes in os-extended-server-attributes<br />
<br />
* python-novaclient does not yet have support for the v2.1 API<br />
<br />
* Policy enforcement in the Nova v2.1 API has been improved.<br />
** Policy is now enforced only at the entry point of each API.<br />
** There are no longer duplicated rules for a single API.<br />
** All v2.1 API policy rules use the 'os_compute_api' prefix, which distinguishes them from v2.0 rules.<br />
** Due to hard-coded permission checks at the DB layer, parts of the Nova API were previously not configurable by policy and always required an admin user. Some of these hard-coded permission checks have been removed in v2.1, making the corresponding API policies configurable. The remaining hard-coded permission checks will be removed in Liberty.<br />
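Clients opt into a microversion by sending the X-OpenStack-Nova-API-Version header with each request. A minimal sketch using the standard library (the endpoint URL is a placeholder):<br />

```python
import urllib.request


def make_request(url: str, microversion: str) -> urllib.request.Request:
    """Build a Nova v2.1 request pinned to a specific microversion."""
    req = urllib.request.Request(url)
    # Nova v2.1 selects behaviour based on this header; omitting it
    # yields the minimum supported microversion.
    req.add_header("X-OpenStack-Nova-API-Version", microversion)
    return req


# Placeholder endpoint; nothing is sent until urlopen() is called.
req = make_request("http://nova.example.com/v2.1/servers", "2.2")
```

Microversion 2.2 is the increment that added x509 keypair support, so a client sending this header against a Kilo deployment gets the extended keypair behaviour.<br />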
<br />
==== Upgrade Support ====<br />
<br />
* We have reduced the data migrations that happen in the DB migration scripts; data is now migrated in a "lazy" way inside the DB objects code. There are nova-manage commands to help force migration of the data. For more details see: http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/flavor-from-sysmeta-to-blob.html<br />
<br />
* Change https://review.openstack.org/#/c/97946/ adds database migration 267, which scans for null instances.uuid records and will fail if any are found, since the migration ultimately needs to make instances.uuid non-nullable and adds a UniqueConstraint on that column. A helper script is provided to search for null instances.uuid records before running the database migrations. Before running 'nova-manage db sync', run the helper script 'nova-manage db null_instance_uuid_scan', which by default will just search and dump results without changing anything. Pass the --delete option to the null_instance_uuid_scan command to automatically remove any records where instances.uuid is null.<br />
<br />
==== Scheduler ====<br />
<br />
* A selection of performance optimisations<br />
* We are in the process of making structural changes to the scheduler that will help improve our ability to evolve and improve scheduling. This should not be visible from an end user perspective.<br />
<br />
==== Cells v2 ====<br />
<br />
* Some initial parts of cells v2 support have been added, but this feature is not yet ready to use.<br />
* New 'nova-manage api_db sync' and 'nova-manage api_db version' commands have been added for working with the new API database for cells; nothing is using this database yet, so it is not necessary to set it up.<br />
<br />
==== Compute Drivers ====<br />
<br />
===== Hyper-V =====<br />
<br />
* Support for generation 2 VMs: https://blueprints.launchpad.net/nova/+spec/hyper-v-generation-2-vms<br />
* Support for SMB based volumes, to sit alongside existing iSCSI volume support: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/hyper-v-smbfs-volume-support.html<br />
* Support for x509 certificate based key pairs: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/keypair-x509-certificates.html<br />
* Host power actions now work with Hyper-V: https://blueprints.launchpad.net/nova/+spec/hyper-v-host-power-actions<br />
<br />
===== Libvirt (KVM) =====<br />
<br />
* NFV related features:<br />
** NUMA based scheduling: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/input-output-based-numa-scheduling.html<br />
** Pinning guest vCPUs: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/virt-driver-cpu-pinning.html<br />
** Large page support: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/virt-driver-large-pages.html<br />
* vhostuser VIF driver: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/libvirt_vif_vhostuser.html<br />
* Support for running KVM on IBM System z: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/libvirt-kvm-systemz.html<br />
* Support for parallels: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/pcs-support.html<br />
* Support for SMB based volumes: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/libvirt-smbfs-volume-support.html<br />
* Quiesce using QEMU guest agent: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/quiesced-image-snapshots-with-qemu-guest-agent.html<br />
* Quobyte volume support: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/quobyte-nova-driver.html<br />
* Support for QEMU iSCSI initiator: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/qemu-built-in-iscsi-initiator.html<br />
<br />
===== VMware =====<br />
<br />
* Support for Ephemeral disks: http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/vmware-ephemeral-disk-support.html<br />
* Support for vSAN: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/vmware-vsan-support.html<br />
* Support for OVA images: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/vmware-driver-ova-support.html<br />
* Support for SPBM based storage policies: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/vmware-spbm-support.html<br />
<br />
===== Ironic =====<br />
<br />
* Support for passing flavor capabilities to Ironic: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/pass-flavor-capabilities-to-ironic-virt-driver.html<br />
<br />
=== Known Issues ===<br />
<br />
* Evacuate recovery code has the potential to destroy data. On nova-compute startup, instances reported by the hypervisor are examined to see if they have moved (i.e. been evacuated) from the current host during the outage. If the determination is made that they were, then they are destroyed locally. This has the potential to choose incorrectly and destroy instances unexpectedly. On libvirt-like nodes, this can be triggered by changing the system hostname. On vmware-like nodes, this can be triggered by attempting to manage a single vcenter deployment from two different hosts (with different hostnames). This will be fixed properly in Liberty, but for now deployments that wish to disable this behavior as a preventive measure can set workarounds.destroy_after_evacuate=False. NOTE: This is not a regression and has been a flaw in the design of the evacuate feature since its introduction. There is no easy fix for this, hence this workaround to limit the potential for damage. The proposed fix in liberty is here: https://review.openstack.org/#/c/161444/.<br />
<br />
* The generated example configuration files may be missing some oslo-related configuration options.<br />
<br />
=== Upgrade Notes ===<br />
<br />
Below are changes you should be aware of when upgrading. Where possible, the git commit hash is provided so you can find more information:<br />
<br />
* Neutron ports are no longer deleted after your server is deleted, if you created them outside of Nova: 1153a46738fc3ffff98a1df9d94b5a55fdd58777<br />
* EC2 API support has been deprecated and is likely to be removed in a future release; see f098398a836e3671c49bb884b4a1a1988053f4b2<br />
* Websocket proxies need to be upgraded in lockstep with the API nodes, as older API nodes will not send the access_url when authorizing console access, and newer proxy services (this commit and onward) will fail to authorize such requests: 9621ccaf05900009d67cdadeb1aac27368114a61<br />
* After fully upgrading to Kilo (i.e. all nodes are running Kilo code), you should start a background migration of flavor information from its old home to its new home. Kilo conductor nodes will do this on the fly when necessary, but the rest of the idle data needs to be migrated in the background. This is critical to complete before the Liberty release, where support for the old location will be dropped. Use "nova-manage migrate-flavor-data" to perform this transition.<br />
* Due to the improvements to Nova v2.1 API policy enforcement, many changes have been made to the v2.1 API policies. Because the v2.1 API was not released before Kilo, these changes are not backwards compatible. It is better to start from the new sample policy configuration than from the old one.<br />
* VMware rescue VM behaviour no longer creates a new VM and instead happens in place: cd1765459a24e52e1b933c8e05517fed75ac9d41<br />
* force_config_drive = always has been deprecated, and force_config_drive = True should be used instead: c12a78b35dc910fa97df888960ef2b9a64557254<br />
* If you are running Hyper-V and deployed code that was past this commit: b4d57ab65836460d0d9cb8889ec2e6c3986c0a9b but before this commit: c8e9f8e71de64273f10498c5ad959634bfe79975, you may have problems that need to be resolved manually; see: c8e9f8e71de64273f10498c5ad959634bfe79975<br />
* Changed the default value of: multi_instance_display_name_template see: 609b2df339785bff9e30a9d67d5c853562ae3344<br />
* Please use "nova-manage db null_instance_uuid_scan" to ensure the DB migrations will apply cleanly, see: c0ea53ce353684b48303fc59393930c3fa5ade58<br />
<br />
== OpenStack Image Service (Glance) ==<br />
=== Key New Features ===<br />
* TBD<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Dashboard (Horizon) ==<br />
<br />
=== Key New Features ===<br />
* Support for Federated authentication via Web Single-Sign-On -- When configured in keystone, the user will be able to choose the authentication mechanism to use from those supported by the deployment. This feature must be enabled by changes to local_settings.py to be utilized. The related settings to enable and configure it can be found [http://docs.openstack.org/developer/horizon/topics/settings.html#websso-enabled here].<br />
<br />
* Support for Theming -- A simpler mechanism to specify a custom theme for Horizon has been included. It allows overriding Bootstrap and Horizon CSS variables, as well as including custom CSS. More details are available [http://docs.openstack.org/developer/horizon/topics/settings.html#custom-theme-path here].<br />
<br />
* Sahara UX Improvements -- Dramatic improvements to the Sahara user experience have been made with the addition of guided cluster creation and guided job creation pages.<br />
<br />
* Launch Instance Wizard (beta) -- A full replacement for the launch instance workflow has been implemented in AngularJS to address usability issues in the existing launch instance workflow. Due to the late inclusion date and limited testing, this feature is marked as beta for Kilo and not enabled by default. To use the new workflow, the following change to local_settings.py is required: <code>LAUNCH_INSTANCE_NG_ENABLED = True</code>. Additionally, you can disable the default launch instance wizard with the following: <code>LAUNCH_INSTANCE_LEGACY_ENABLED = False</code>. This new work is a view into future development in Horizon. <br />
<br />
* Nova<br />
** allow service disable/enable on Hypervisor<br />
** Migrate all instances from host<br />
** expose serial console<br />
<br />
* Cinder<br />
** Cinder v2 by default<br />
** Managed/Unmanaged volume support -- allows an admin to bring existing volumes not created by Cinder under Cinder management, as well as to unmanage volumes.<br />
** Volume transfer support between projects<br />
** Volume encryption metadata support<br />
<br />
* Glance<br />
** View added to allow administrators to view/add/update Glance Metadata definitions<br />
<br />
* Heat<br />
** Stack Template view<br />
** Orchestration Resources Panel<br />
** Suspend/Resume actions for Stacks<br />
** Preview Stack view allows users to preview stacks specified in templates before creating them.<br />
<br />
* Trove<br />
** Resizing of Trove instances -- changing instance flavor<br />
<br />
* Ceilometer<br />
** Display IPMI meters values from Ceilometer<br />
<br />
* New Reusable AngularJS widgets in Horizon:<br />
** AngularJS table implementation<br />
*** Table drawers -- expandable table content<br />
*** improved client/server search<br />
** Transfer table widget <br />
<br />
* Configurable web root for Horizon beyond just '/'<br />
<br />
=== Known Issues ===<br />
* Volumes created from snapshots are empty - https://bugs.launchpad.net/horizon/+bug/1447288<br />
* Django 1.8 is not fully supported yet.<br />
<br />
=== Upgrade Notes ===<br />
* Django 1.7 is now supported.<br />
<br />
== OpenStack Identity (Keystone) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== Hierarchical multitenancy ====<br />
<br />
[http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#projects-v3-projects Projects] can be nested under other projects by setting the <code>parent_id</code> attribute to an existing project when [http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#create-project creating a new project]. You can also [http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#get-project discover] the parent-child hierarchy through the existing <code>/v3/projects</code> API.<br />
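Nesting a project is done by including the parent's ID in the create-project request body. A sketch of the JSON payload for POST /v3/projects (all IDs below are illustrative):<br />

```python
import json

# Request body for POST /v3/projects: setting parent_id nests the new
# project under an existing one.  The parent_id value is a made-up
# example; omit the attribute entirely for a top-level project.
body = {
    "project": {
        "name": "dev-team-a",
        "domain_id": "default",
        "parent_id": "84c3a8d0f3524f4fb4e43af644019b74",
    }
}

payload = json.dumps(body)
```

The same hierarchy can then be read back with GET /v3/projects, where each project carries its <code>parent_id</code>.<br />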
<br />
Role assignments can now be assigned to both [https://github.com/openstack/keystone-specs/blob/master/api/v3/identity-api-v3-os-inherit-ext.rst#assign-role-to-user-on-projects-in-a-subtree users] and [https://github.com/openstack/keystone-specs/blob/master/api/v3/identity-api-v3-os-inherit-ext.rst#assign-role-to-group-on-projects-in-a-subtree groups] on subtrees in the project hierarchy.<br />
<br />
This feature will require corresponding support across other OpenStack services (such as hierarchical quotas) in order to become broadly useful.<br />
<br />
==== Fernet tokens ====<br />
<br />
Unlike UUID tokens, which must be persisted to a database, Fernet tokens are entirely non-persistent. Deployers can enable them by setting <code>[token] provider = keystone.token.providers.fernet.Provider</code> in <code>keystone.conf</code>.<br />
<br />
Fernet tokens require symmetric encryption keys which can be established using <code>keystone-manage fernet_setup</code> and periodically rotated using <code>keystone-manage fernet_rotate</code>. These keys must be shared by all keystone nodes in a multi-node (or multi-region) deployment, such that tokens generated by one node can be immediately validated against another.<br />
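The rotation scheme can be modelled in a few lines: index 0 holds the staged key, the highest index is the primary key used to sign new tokens, and older keys remain usable for validating existing tokens until pruned. A sketch of the bookkeeping (a model of the behaviour, not keystone's actual code):<br />

```python
import os


def rotate(keys: dict, max_active_keys: int = 3) -> dict:
    """Rotate a fernet key repository, modelled as {index: key_bytes}.

    Models the keystone-manage fernet_rotate bookkeeping: the staged
    key (index 0) is promoted to primary (highest index), a fresh
    staged key takes index 0, and the oldest keys are pruned.
    """
    rotated = dict(keys)
    new_primary_index = max(rotated) + 1
    rotated[new_primary_index] = rotated.pop(0)   # promote staged key
    rotated[0] = os.urandom(32)                   # new staged key
    # Prune the oldest secondary keys beyond the retention limit.
    while len(rotated) > max_active_keys:
        oldest = min(i for i in rotated if i != 0)
        del rotated[oldest]
    return rotated


repo = {0: b"staged-key", 1: b"primary-key"}   # as left by fernet_setup
repo = rotate(repo)   # the old staged key is now the primary (index 2)
```

Distributing the staged key to all nodes before promoting it is what allows tokens signed anywhere in a multi-node deployment to validate everywhere.<br />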
<br />
==== Identity federation ====<br />
<br />
* Keystone can now act as a [http://docs.openstack.org/developer/keystone/configure_federation.html#keystone-as-an-identity-provider-idp federated identity provider (IdP)] for another instance of keystone by issuing SAML assertions for local users, which may be ECP-wrapped.<br />
* Added support for [http://docs.openstack.org/developer/keystone/extensions/openidc.html OpenID Connect] as a federated identity authentication mechanism.<br />
* Added the ability to associate many "Remote IDs" to a single identity provider in keystone. This will help in a case where many identity providers use a common mapping.<br />
* Added the ability for a user to authenticate via a web browser with an existing IdP, through a Single Sign-On page.<br />
* Federated tokens now use the <code>token</code> authentication method, although both <code>mapped</code> and <code>saml2</code> remain available.<br />
* Federated users may now be mapped to existing local identities.<br />
* Groups appearing in federated identity assertions may now be automatically created as local groups with local user membership mappings.<br />
<br />
==== LDAP ====<br />
<br />
* Filter parameters specified by API users are now processed by LDAP itself, instead of by keystone.<br />
* ''Experimental'' support was added to store domain-specific identity backend [http://docs.openstack.org/developer/keystone/configuration.html#domain-specific-drivers configuration in SQL] using the HTTP API. The primary use case for this is to create a new domain via the HTTP API, and then immediately configure a domain-specific LDAP driver for it without restarting keystone.<br />
<br />
==== Authorization ====<br />
<br />
* The "assignment" backend has been split into a "resource" backend (containing domains, projects, and roles) and an "assignments" backend, containing the authorization mapping model.<br />
* Added support for trust redelegation: if allowed when the trust is initially created, a trustee can redelegate the roles from the trust via another trust.<br />
* Added support for explicitly requesting an unscoped token from Keystone, even if the user has a <code>default_project_id</code> attribute set.<br />
* Deployers may now opt into disallowing the re-scoping of scoped tokens by setting <code>[token] allow_rescope_scoped_token = false</code> in <code>keystone.conf</code>.<br />
<br />
=== Known Issues ===<br />
<br />
* TBD<br />
<br />
=== Upgrade Notes ===<br />
<br />
* XML support in Keystone has been removed as of Kilo. When upgrading from Juno to Kilo, it is recommended that references to XML and XmlBodyMiddleware be removed from the [https://github.com/openstack/keystone/blob/master/etc/keystone-paste.ini Keystone Paste configuration]. This includes removing the XML middleware filters and their references from the public_api, admin_api, api_v3, public_version_api, admin_version_api and any other pipelines that may contain the XML filters.<br />
* All previous extensions (OS-FEDERATION, OS-OAUTH1, OS-ENDPOINT-POLICY and OS-EP-FILTER) are now enabled by default, and are [http://docs.openstack.org/developer/keystone/extensions.html correspondingly marked] as either "experimental" or "stable".<br />
* [http://specs.openstack.org/openstack/openstack-specs/specs/no-downward-sql-migration.html SQL Schema Downgrades are no longer supported]. This change is the result of evaluation that downward SQL migrations are not well tested and become increasingly difficult to support with the volume of data-change that occurs in many of the migrations.<br />
* The following python libraries are now required: [https://pypi.python.org/pypi/cryptography cryptography], [https://pypi.python.org/pypi/msgpack-python msgpack-python], [https://pypi.python.org/pypi/pysaml2 pysaml2] and [https://pypi.python.org/pypi/oauthlib oauthlib]<br />
* <code>keystone.middleware.RequestBodySizeLimiter</code> is now deprecated in favor of <code>oslo_middleware.sizelimit.RequestBodySizeLimiter</code> and will be removed in Liberty.<br />
* Eventlet-specific configuration options such as <code>public_bind_host</code>, <code>bind_host</code>, <code>admin_bind_host</code>, <code>admin_port</code>, <code>public_port</code>, <code>public_workers</code>, <code>admin_workers</code>, <code>tcp_keepalive</code>, <code>tcp_keepidle</code> have been moved from the <code>[DEFAULT]</code> configuration section to a new configuration section called <code>[eventlet_server]</code>. Similarly, Eventlet-specific SSL configuration options such as <code>enable</code>, <code>certfile</code>, <code>keyfile</code>, <code>ca_certs</code>, <code>cert_required</code> have been moved from the <code>[ssl]</code> configuration section to a new configuration section called <code>[eventlet_server_ssl]</code>.<br />
* <code>keystone.token.backends.sql</code> has been removed in favor of <code>keystone.token.persistence.backends.sql</code>.<br />
* <code>keystone.token.backends.kvs</code> has been removed in favor of <code>keystone.token.persistence.backends.kvs</code>.<br />
* <code>keystone.token.backends.memcache</code> has been removed in favor of <code>keystone.token.persistence.backends.memcache</code>.<br />
* <code>keystone.assignment.backends.kvs</code> has been removed in favor of <code>keystone.assignment.backends.sql</code>.<br />
* <code>keystone.identity.backends.kvs</code> has been removed in favor of <code>keystone.identity.backends.sql</code>.<br />
* <code>keystone.contrib.stats.core.StatsMiddleware</code> has been removed in favor of external tooling.<br />
* <code>keystone.catalog.backends.templated.TemplatedCatalog</code> has been removed in favor of <code>keystone.catalog.backends.templated.Catalog</code>.<br />
* <code>keystone.contrib.access.core.AccessLogMiddleware</code> has been removed in favor of external access logging.<br />
* <code>keystone.trust.backends.kvs</code> has been removed in favor of <code>keystone.trust.backends.sql</code>.<br />
* <code>[catalog] endpoint_substitution_whitelist</code> has been removed from <code>keystone.conf</code> as part of a related security hardening effort.<br />
* <code>[signing] token_format</code> has been removed from <code>keystone.conf</code> in favor of <code>[token] provider</code>.<br />
<br />
== OpenStack Network Service (Neutron) ==<br />
=== Key New Features ===<br />
* DVR now supports VLANs in addition to VXLAN/GRE<br />
* ML2 Hierarchical Port Binding<br />
* New LBaaS Version 2 API<br />
* Port security support for the OVS ML2 driver<br />
<br />
* New Plugins supported in Kilo include the following:<br />
** A10 Networks LBaaS V2 Driver<br />
** Brocade LBaaS V2 Driver<br />
** Brocade ML2 driver for MLX and ICX switches<br />
** Brocade L3 routing plugin for MLX switch<br />
** Brocade Vyatta vRouter L3 Plugin<br />
** Brocade Vyatta vRouter Firewall Driver<br />
** Brocade Vyatta vRouter VPN Driver<br />
** Cisco CSR VPNaaS Driver<br />
** Dragonflow SDN based Distributed Virtual Router L3 Plugin<br />
** Freescale FWaaS Driver<br />
** Intel Mcafee NGFW FWaaS Driver<br />
** IPSEC Strongswan VPNaaS Driver<br />
<br />
=== Known Issues ===<br />
* The Firewall-as-a-Service project is still marked as experimental for the Kilo release.<br />
* Bug [https://bugs.launchpad.net/neutron/+bug/1438819 1438819]<br />
** When a new subnet is created on an external network, all existing routers with gateways on the network will get a new address allocated from it. For IPv4 networks, this could consume the entire subnet for router gateway ports.<br />
<br />
=== Upgrade Notes ===<br />
<br />
Since Havana, Neutron has not supported an explicit lease database (https://bugs.launchpad.net/bugs/1202392). This left dead code, including an unused environment variable. In order to remove the dead code (https://review.openstack.org/#/c/152398/), a change to the dhcp.filter is required, so that the line:<br />
<br />
'''dnsmasq: EnvFilter, dnsmasq, root, NEUTRON_NETWORK_ID='''<br />
<br />
is replaced by:<br />
<br />
'''dnsmasq: CommandFilter, dnsmasq, root'''<br />
<br />
After advanced services were split into separate packages with their own service configuration files (specifically etc/neutron/neutron_lbaas.conf, etc/neutron/neutron_fwaas.conf and etc/neutron/neutron_vpnaas.conf), the active service provider configuration can differ after upgrade (specifically, the default load balancer (haproxy) and VPN (openswan) providers may be enabled for you even if you previously disabled them in neutron.conf). Please review your configuration after upgrade to make sure it reflects the desired set of service providers.<br />
<br />
Note: this will have no effect if the related service plugin is not loaded in neutron.conf.<br />
<br />
<br />
* The default value of api_workers is now equal to the number of CPUs in the host. If you currently use the default, ensure you set api_workers to a reasonable number for your installation. (https://review.openstack.org/#/c/140493/)<br />
* The neutron.allow_duplicate_networks config option is deprecated in Kilo and will be removed in Liberty where the default behavior will be to just allow multiple ports attached to an instance on the same network in Neutron. (https://review.openstack.org/163581)<br />
* The linuxbridge agent now enables VXLAN by default (https://review.openstack.org/160826)<br />
* neutron-ns-metadata-proxy can now be run as non-root (https://review.openstack.org/147437)<br />
<br />
=== Other Notes (Deprecation/EOL etc) ===<br />
<br />
* Deprecation<br />
** Brocade Classic plugin for Brocade's VDX/VCS series of hardware switches will be deprecated in the L-Release. The functionality provided by this plugin is now addressed by the ML2 Driver available for the VDX series of hardware. The plugin is slated for removal after this release cycle.<br />
<br />
<br />
== OpenStack Block Storage (Cinder) ==<br />
=== Key New Features ===<br />
* From this point forward, new database schema upgrades will not require restarting Cinder services right away; the services are now independent of schema upgrades. This is the first step toward Cinder supporting rolling upgrades!<br />
* Ability to add/remove volumes from an existing consistency group. [http://docs.openstack.org/admin-guide-cloud/content/consistency-groups.html Read docs for more info].<br />
* Ability to create a consistency group from an existing consistency group snapshot. [http://docs.openstack.org/admin-guide-cloud/content/consistency-groups.html Read docs for more info].<br />
* Create more fine tuned filters/weighers to set how the scheduler will choose a volume backend. [http://docs.openstack.org/admin-guide-cloud/content/driver_filter_weighing.html Read the docs for more info].<br />
* Encrypted volumes can now be backed up using the Cinder backup service. [http://docs.openstack.org/admin-guide-cloud/content/volume-backup-restore.html Read the docs for more info].<br />
* Ability to create private volume types. This is perfect when you want to make a volume type available only to a specific tenant, or to test it before making it available to your cloud. To do so use ''cinder type-create <name> --is-public''.<br />
* Oversubscription with thin provision is configurable. [http://docs.openstack.org/admin-guide-cloud/content/over_subscription.html Read docs for more info].<br />
* Ability to add descriptions to volume types. To do so use ''cinder type-create <name> <description>''.<br />
* Cinder now can return multiple iSCSI paths information so that the connector can attach volumes even when the primary path is down ([https://review.openstack.org/#/c/134681/ when connector's multipath feature is enabled] or [https://review.openstack.org/#/c/140877/ not enabled]).<br />
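The filter-and-weigh model the scheduler uses can be sketched as follows (a toy illustration, not Cinder's actual filter and weigher classes):<br />

```python
def schedule(backends: list, size_gb: int) -> dict:
    """Pick a backend: filter out hosts without enough free capacity,
    then weigh the survivors by free space (most free space wins).

    Toy model of filter/weigher scheduling; the real scheduler is
    driven by configurable filter and weigher classes.
    """
    eligible = [b for b in backends if b["free_gb"] >= size_gb]
    if not eligible:
        raise RuntimeError("no backend can fit a %d GB volume" % size_gb)
    return max(eligible, key=lambda b: b["free_gb"])


backends = [
    {"name": "lvm-1", "free_gb": 50},
    {"name": "ceph-1", "free_gb": 500},
    {"name": "nfs-1", "free_gb": 120},
]
best = schedule(backends, size_gb=100)   # picks "ceph-1"
```

The Kilo work lets operators express both stages -- which backends pass the filter and how the survivors are ranked -- as configurable expressions rather than code.<br />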
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
* The 'host' config option for multiple-storage backends in cinder.conf is renamed to 'backend_host' in order to avoid a naming conflict with the 'host' to locate redis. If you use this option, please ensure your configuration files are updated.<br />
<br />
== OpenStack Telemetry (Ceilometer) ==<br />
=== Key New Features ===<br />
* Support for adding jitter to polling cycles, to ensure pollsters are not all querying a service's API at the same time<br />
* Ceilometer API RBAC support<br />
* Improved Event support:<br />
** Multi-pipeline support to enable unique processing and publishing of events<br />
** Enabled ability to capture raw notification messages for auditing and postmortem analysis<br />
** Support for persisting events into ElasticSearch<br />
** Publishing support to database, http, file, Kafka and oslo.messaging supported message queues<br />
** Option to split off the events persistence into a separate database<br />
** Telemetry now supports collecting and storing all event type meters as events. A new option, ''disable_non_metric_meters'', was added to the configuration in order to provide the possibility to turn off storing these events as samples. For further information please see the [http://docs.openstack.org/trunk/config-reference/content/ch_configuring-openstack-telemetry.html Telemetry Configuration Reference]<br />
** The Administrator Guide in OpenStack Manuals was updated with a new [http://docs.openstack.org/admin-guide-cloud/content/section_telemetry-events.html Events section], where you can find further information about this functionality.<br />
* Improved pipeline publishing support:<br />
** Support to publish events and samples to Kafka or HTTP targets<br />
** Publish data to multiple queues<br />
* Additional meters<br />
** memory and disk meters for Hyper-V<br />
** disk meters for LibVirt<br />
** power and thermal related IPMI meters, more meters from NodeManager<br />
** ability to meter Ceph<br />
* IPv6 support enabled in Ceilometer udp publisher and collector<br />
* [http://launchpad.net/gnocchi Gnocchi] dispatch support for ceilometer-collector<br />
* Self-disabled pollster mechanism<br />
<br />
=== Known Issues ===<br />
* TBD<br />
<br />
=== Upgrade Notes ===<br />
* Deprecated meters:<br />
** The instance:<flavor> meter is deprecated in the Kilo release. In order to retrieve samples or statistics based on flavor you can use the following queries:<br />
*** Statistics: <code>ceilometer statistics -m instance -g resource_metadata.instance_type</code><br />
*** Samples: <code>ceilometer sample-list -m instance -q metadata.instance_type=<value></code><br />
* The middleware used to meter Swift was previously packaged in Ceilometer and is now deprecated there. It has been separated into its own library: ceilometermiddleware.<br />
** Juno configuration: http://docs.openstack.org/juno/install-guide/install/apt/content/ceilometer-swift.html<br />
** Kilo configuration: http://docs.openstack.org/kilo/install-guide/install/apt/content/ceilometer-swift.html<br />
<br />
== OpenStack Orchestration (Heat) ==<br />
<br />
=== Key New Features ===<br />
<br />
* Improved scaling using nested stacks<br />
** Heat now makes RPC calls for actions on any resource that is based on a template. This should help to spread the load when dealing with large, complex stacks.<br />
* oslo versioned objects<br />
** The database layer now uses oslo versioned objects to aid future upgrades. This will allow a newly upgraded heat-engine to use a database with an older schema. Note that this will not help with upgrading to Kilo.<br />
* New template functions<br />
** There is a new HOT template version "20150430" which includes two new functions "digest" and "repeat". <br />
* Multiregion stacks<br />
** http://docs.openstack.org/hot-reference/content/OS__Heat__Stack.html<br />
* Software config signalling using Swift<br />
* Triggering new software deployments from heatclient<br />
* Stack snapshots<br />
* Access to Heat services<br />
** The admin now has access to service status similar to other projects, in the form of "heat-manage service-list" and via Horizon. This feature reports the active heat-engines.<br />
* Improved validation for nova and neutron properties.<br />
* Pause stack creation/update on a given resource (stack hooks)<br />
** http://specs.openstack.org/openstack/heat-specs/specs/juno/stack-breakpoint.html<br />
** http://docs.openstack.org/developer/heat/template_guide/environment.html?highlight=hooks#pause-stack-creation-update-on-a-given-resource<br />
* New contributed resources<br />
** Mistral resources<br />
** Keystone resources, supported with a Keystone v3 server, for Project, Role, User and Group<br />
* Stack lifecycle scheduler hints<br />
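The stack-hooks feature above is driven from an environment file; a hedged sketch (syntax per the environment documentation linked above; the resource name is illustrative):<br />

```yaml
# environment.yaml -- pause stack creation at the resource "my_server"
# (illustrative name); creation waits there until the hook is cleared
resource_registry:
  resources:
    my_server:
      hooks: pre-create
```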
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
* The default of the configuration option "num_engine_workers" has changed from 1 to a number based on the number of CPUs. This matches the way other projects set the number of workers.<br />
* The default for the configuration option "max_nested_stack_depth" has been increased to 5.<br />
* There is a new configuration option "convergence", which is off by default. This feature is not yet complete, and the option should remain off.<br />
* In preparation for an upcoming major feature (convergence), there have been some significant DB schema changes. It is suggested that heat-engine be shut down during schema upgrades.<br />
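Putting the upgrade notes above in one place, a hedged heat.conf sketch (option names as given in these notes; section placement in [DEFAULT] is an assumption — verify against your heat.conf sample):<br />

```ini
[DEFAULT]
# Restore the Juno behaviour of a single engine worker, if desired
num_engine_workers = 1
# New default nesting limit
max_nested_stack_depth = 5
# Convergence is incomplete in Kilo and should remain off
convergence = false
```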
<br />
=== Other Notes (Deprecation/EOL etc) ===<br />
==== Deprecation ====<br />
* The following resources are deprecated: OS::Heat::HARestarter and OS::Heat::CWLiteAlarm<br />
* The CloudWatch API (heat-api-cw)<br />
<br />
== OpenStack Database service (Trove) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
None yet<br />
<br />
== OpenStack Data Processing service (Sahara) ==<br />
=== Key New Features ===<br />
* New plugins, their features and versions:<br />
** MAPR<br />
** Apache Storm<br />
** Apache Hadoop 2.6.0 was added, Apache Hadoop 2.4.1 deprecated<br />
** New services added to the CDH plugin, including HDFS, YARN, Spark, Oozie, HBase, ZooKeeper and others<br />
* Added indirect VM access for better utilization of floating IPs<br />
* Added event log support to have detailed info about provisioning progress<br />
* Optional default node group and cluster templates per plugin<br />
* Horizon updates:<br />
** Guided cluster creation and job execution<br />
** Filtering on search for objects<br />
* Editing of Node Group templates and Cluster templates implemented<br />
* Added Shell Job Type for clusters running Oozie<br />
* New Job Types endpoint to query list of the supported Job Types<br />
<br />
=== Known Issues ===<br />
-<br />
<br />
=== Upgrade Notes ===<br />
Details: http://docs.openstack.org/developer/sahara/userdoc/upgrade.guide.html#juno-kilo<br />
<br />
* Sahara now requires a policy.json configuration file.<br />
<br />
== OpenStack Bare Metal service (Ironic) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== State Machine ====<br />
Ironic now uses a formal model for the logical state of each node it manages.<ref name="states">[http://specs.openstack.org/openstack/ironic-specs/specs/kilo/new-ironic-state-machine.html#proposed-change]New Ironic State Machine</ref> This has enabled the addition of two new processes: '''cleaning''' and '''inspection'''.<br />
* Automatic disk erasure between tenants is now enabled by default. This may be extended to perform additional '''cleaning''' steps, such as re-applying firmware, resetting BIOS settings, etc.<ref name="cleaning">[http://docs.openstack.org/developer/ironic/deploy/cleaning.html]Node Cleaning</ref><br />
* Both in-band and out-of-band methods are available to '''inspect''' hardware. These methods may be used to update Node properties automatically.<ref name="inspect">[http://docs.openstack.org/developer/ironic/deploy/install-guide.html#hardware-inspection]Hardware Inspection</ref><br />
<br />
==== Version Headers ====<br />
The Ironic REST API expects a new ''X-OpenStack-Ironic-API-Version'' header to be passed with each HTTP[S] request. This header allows the client and server to negotiate a mutually supported interface.<ref name="api-version">[http://specs.openstack.org/openstack/ironic-specs/specs/kilo/api-microversions.html]REST API "micro" versions </ref> In the absence of this header, the REST service defaults to a compatibility mode and yields responses compatible with Juno clients. This mode, however, prevents access to most features introduced in Kilo.<br />
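The negotiation idea can be sketched as follows; this is an illustrative model of the client/server handshake described above, not Ironic's actual implementation:<br />

```python
def negotiate(client_min, client_max, server_min, server_max):
    """Pick the highest microversion both sides support, or None.

    Versions are (major, minor) tuples. Illustrative model only; real
    clients send the requested version in the
    X-OpenStack-Ironic-API-Version header.
    """
    low = max(client_min, server_min)    # neither side can go below this
    high = min(client_max, server_max)   # neither side can go above this
    return high if high >= low else None

# A Kilo-era server talking to a newer client agrees on the server's maximum.
print(negotiate((1, 1), (1, 9), (1, 1), (1, 6)))  # -> (1, 6)
```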
<br />
==== Hardware Driver Changes ====<br />
The following new drivers were added:<br />
* [http://docs.openstack.org/developer/ironic/drivers/amt.html AMT]<br />
* [http://docs.openstack.org/developer/ironic/deploy/drivers.html#irmc iRMC]<br />
* [http://docs.openstack.org/developer/ironic/drivers/vbox.html VirtualBox (testing driver only)]<br />
<br />
<br />
The following enhancements were made to existing drivers:<br />
* [http://docs.openstack.org/developer/ironic/deploy/install-guide.html#enabling-the-configuration-drive-configdrive Configdrives] may be used with the "agent" drivers in lieu of a metadata service, if desired.<br />
* SeaMicro driver supports serial console<br />
* [http://docs.openstack.org/developer/ironic/drivers/ilo.html#uefi-secure-boot-support iLO driver supports UEFI secure boot]<br />
* [http://docs.openstack.org/developer/ironic/drivers/ilo.html#hardware-inspection iLO driver supports out-of-band node inspection]<br />
* [http://docs.openstack.org/developer/ironic/drivers/ilo.html#ilo-node-cleaning iLO driver supports resetting ilo and bios during cleaning]<br />
<br />
<br />
Support for third-party and out-of-tree drivers is enhanced by the following changes:<br />
* Drivers may store their own "internal" information about Nodes.<br />
* Drivers may register their own periodic tasks to be run by the Conductor.<br />
* ''vendor_passthru'' methods now support additional HTTP methods (e.g., PUT and POST).<br />
* ''vendor_passthru'' methods are now discoverable in the REST API. See [http://docs.openstack.org/developer/ironic/dev/drivers.html#node-vendor-passthru node vendor passthru] and [http://docs.openstack.org/developer/ironic/dev/drivers.html#driver-vendor-passthru driver vendor passthru]<br />
<br />
==== Other Changes ====<br />
* [http://docs.openstack.org/developer/ironic/deploy/install-guide.html#logical-names Logical names] may be used to address Nodes, in addition to their canonical UUID. <br />
* For servers with varied local disks, [http://docs.openstack.org/developer/ironic/deploy/install-guide.html#specifying-the-disk-for-deployment ''hints''] may be supplied that affect which disk device the OS is provisioned to.<br />
* Support for fetching kernel, ramdisk, and instance images from HTTP[S] sources directly has been added to remove the dependency on Glance. [http://docs.openstack.org/developer/ironic/deploy/install-guide.html#using-ironic-as-a-standalone-service Using ironic as a standalone service]<br />
* Nodes may be placed into ''[http://docs.openstack.org/developer/ironic/deploy/install-guide.html#maintenance-mode maintenance mode]'' via REST API calls. An optional ''maintenance reason'' may be specified when doing so.<br />
<br />
=== Known Issues ===<br />
* '''Running more than one nova-compute process is not officially supported.'''<br />
** While Ironic does include a ClusteredComputeManager, which allows running more than one nova-compute process with Ironic, it should be considered experimental, as it has many known problems.<br />
* Drivers using the "agent" deploy mechanism do not support "rebuild --preserve-ephemeral"<br />
<br />
=== Upgrade Notes ===<br />
* IPMI Passwords are now obfuscated in REST API responses. This may be disabled by changing API policy settings.<br />
* The "agent" class of drivers now supports both whole-disk and partition-based images.<br />
* The driver_info parameters of "pxe_deploy_kernel" and "pxe_deploy_ramdisk" are deprecated in favour of "deploy_kernel" and "deploy_ramdisk". <br />
* Implementing a custom version of the vendor_passthru() method in a driver has been deprecated in favour of the new @passthru decorator.<br />
<br />
==== Juno to Kilo ====<br />
The recommended upgrade process is documented here:<br />
* http://docs.openstack.org/developer/ironic/deploy/upgrade-guide.html#upgrading-from-juno-to-kilo<br />
<br />
==== Upgrading from Icehouse "nova-baremetal" ====<br />
<br />
An upgrade from an Icehouse Nova installation using the "baremetal" driver directly to Kilo Ironic is untested and unsupported. Instead, please follow this upgrade path:<br />
# Icehouse Nova "baremetal" -> Juno Nova "baremetal"<br />
# Juno Nova "baremetal" -> Juno Ironic<br />
# Juno Ironic -> Kilo Ironic<br />
<br />
Documentation for steps 1 and 2 is available at: https://wiki.openstack.org/wiki/Ironic/NovaBaremetalIronicMigration<br />
<br />
== OpenStack Documentation ==<br />
<br />
* New [http://docs.openstack.org docs.openstack.org] landing page and new web design for [http://docs.openstack.org/user-guide/ End User Guide] and [http://docs.openstack.org/user-guide-admin/ Admin User Guide]<br />
* First release of the [http://docs.openstack.org/networking-guide/ Networking Guide]<br />
* Migration to RST for [http://docs.openstack.org/user-guide/ End User Guide] and [http://docs.openstack.org/user-guide-admin/ Admin User Guide]<br />
* New specialty teams:<br />
** Install Guides<br />
** Networking Guide<br />
** High Availability Guide<br />
** User Guides (Admin and End User)<br />
* First App Tutorial sprint<br />
* Driver documentation clarification and connections</div>Asalkeldhttps://wiki.openstack.org/w/index.php?title=ReleaseNotes/Kilo&diff=78628ReleaseNotes/Kilo2015-04-30T00:17:35Z<p>Asalkeld: /* Upgrade Notes */</p>
<hr />
<div>= OpenStack 2015.1.0 (Kilo) Release Notes =<br />
<br />
<br />
{| style="color:#000000; border:solid 3px #A8A8A8; padding:2em; margin:2em 0; background-color:#FFFFFF; vertical-align:middle;"<br />
| style="padding:1em" | The Kilo release of OpenStack is dedicated to the loving memory of '''Chris Yeoh''', who left his family and us way too soon.<br />
|}<br />
<br />
<br />
<div style="column-count:3;-moz-column-count:3;-webkit-column-count:3"><br />
__TOC__<br />
</div><br />
<br />
<br />
== General Upgrade Notes ==<br />
<br />
* TBD<br />
<br />
== OpenStack Object Storage (Swift) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== Erasure Code (beta) ====<br />
Swift now supports an erasure-code (EC) storage policy type. This allows deployers to achieve very high durability with less raw capacity than is used in replicated storage. However, EC requires more CPU and network resources, so it is not a good fit for every use case. EC is great for storing large, infrequently accessed data in a single region.<br />
<br />
Swift's implementation of erasure codes is meant to be transparent to end users. There is no API difference between replicated storage and EC storage.<br />
<br />
To support erasure codes, Swift now depends on PyECLib and liberasurecode. liberasurecode is a pluggable library that allows for the actual EC algorithm to be implemented in a library of your choosing.<br />
<br />
Full docs are at http://swift.openstack.org/overview_erasure_code.html<br />
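As a hedged sketch of what this looks like to a deployer (parameter names follow the erasure-code documentation linked above; the ec_type value and fragment counts are illustrative and depend on the backends your liberasurecode build provides):<br />

```ini
# /etc/swift/swift.conf (sketch)
[storage-policy:1]
name = ec104
policy_type = erasure_coding
ec_type = jerasure_rs_vand
ec_num_data_fragments = 10
ec_num_parity_fragments = 4
ec_object_segment_size = 1048576
```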
<br />
==== Composite tokens ====<br />
Composite tokens allow other OpenStack services to store data in Swift on behalf of a client so that neither the client nor the service can update the data without both parties' consent.<br />
<br />
An example of this is that a user requests that Nova save a snapshot of a VM. Nova passes the request to Glance, Glance writes the image to a Swift container as a set of objects. In this case, the user cannot modify the snapshot without also having a valid token from the service. Nor can the service update the data without a valid token from the user. But the data is still stored in the user's account in Swift, which makes accounting simpler.<br />
<br />
Full docs are at http://swift.openstack.org/overview_backing_store.html<br />
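As a hedged illustration of the request shape (header names per the backing-store documentation linked above; the token values are placeholders), a service writing to Swift on the user's behalf sends both tokens:<br />

```python
def composite_headers(user_token, service_token):
    """Build headers carrying both tokens (illustrative sketch).

    Both tokens accompany the request, so neither party alone can
    later modify the stored data.
    """
    return {
        "X-Auth-Token": user_token,        # the end user's token
        "X-Service-Token": service_token,  # e.g. Glance's service token
    }

headers = composite_headers("USER_TOKEN", "SERVICE_TOKEN")
```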
<br />
==== Data placement updates for smaller, unbalanceable clusters ====<br />
Swift's data placement now accounts for device weight. This allows operators to gradually add new zones and regions without immediately causing a large amount of data to be moved. Also, if a cluster is unbalanced (e.g. a two-zone cluster where one zone has twice the capacity of the other), Swift will more efficiently use the available space and warn when replicas are placed without enough dispersion in the cluster.<br />
<br />
==== Global cluster replication improvements ====<br />
Replication between regions now moves only one replica per replication run. This gives the remote region a chance to replicate internally and thus avoids moving more data over the WAN.<br />
<br />
<br />
=== Known Issues ===<br />
* As a beta release, EC support is nearly feature-complete, but it lacks support for some features (such as multi-range reads) and has not had a full performance characterization. This feature relies on ssync for durability. Deployers are urged to do extensive testing and not deploy production data using an erasure code storage policy.<br />
<br />
=== Upgrade Notes ===<br />
As always, you can upgrade to this version of Swift with no end-user downtime.<br />
<br />
* In order to support erasure codes, Swift has a new dependency on PyECLib (and liberasurecode, transitively). Also, the minimum required version of eventlet has been raised.<br />
<br />
<br />
<br />
<br />
<br />
== OpenStack Compute (Nova) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== API v2.1 ====<br />
<br />
* We have the first release of the next generation of the Nova API, v2.1. The v2.1 API is designed to be backwards compatible with v2.0, with the addition of strong API validation. All changes to the API are discoverable via the advertised microversion. For more details see: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/api-microversions.html<br />
<br />
* For Kilo, by default we are still using the v2.0 API code to serve v2.0 API requests. It is hoped that in Liberty, v2.1 will be used to serve requests for both v2.0 and v2.1.<br />
<br />
* For Liberty, v2.0 is now frozen, and all new features will be added to the v2.1 API using the microversions mechanism. Microversion increments released with Kilo are:<br />
** Extending the keypair API to support x509 certificates, for use with Windows WinRM, is one of the first API features added as a microversion in the v2.1 API.<br />
** Exposing additional attributes in os-extended-server-attributes<br />
<br />
* python-novaclient does not yet have support for the v2.1 API<br />
<br />
* Policy enforcement in the Nova v2.1 API has been improved.<br />
** Policy is now enforced only at the entry point of each API.<br />
** Duplicated rules for a single API have been removed.<br />
** All v2.1 API policy rules use the 'os_compute_api' prefix, which distinguishes them from v2 API rules.<br />
** Previously, parts of the Nova API were not configurable by policy because of hard-coded permission checks at the DB layer, which always required an admin user. Some of these hard-coded checks have been removed from the v2.1 API, making its policy configurable; the rest will be removed in Liberty.<br />
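Clients opt into a microversion with a request header; a hedged sketch of the request shape (header name from the microversions spec linked above; the version value shown and its feature mapping are assumptions):<br />

```python
# Illustrative headers for a Nova v2.1 API request; the microversion value
# is an assumption -- check the spec for the feature you need.
headers = {
    "X-OpenStack-Nova-API-Version": "2.2",
    "X-Auth-Token": "TOKEN",  # placeholder
}
```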
<br />
==== Upgrade Support ====<br />
<br />
* We have reduced the data migrations that happen in the DB migration scripts, this now happens in a "lazy" way inside the DB objects code. There are nova-manage commands to help force migration of the data. For more details see: http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/flavor-from-sysmeta-to-blob.html<br />
<br />
* Change https://review.openstack.org/#/c/97946/ adds database migration 267, which scans for null instances.uuid records and will fail if any are found, since the migration ultimately makes instances.uuid non-nullable and adds a UniqueConstraint on that column. A helper script is provided to search for null instances.uuid records before running the database migrations. Before running 'nova-manage db sync', run 'nova-manage db null_instance_uuid_scan', which by default only searches and dumps results without changing anything. Pass the --delete option to the null_instance_uuid_scan command to automatically remove any records where instances.uuid is null.<br />
<br />
==== Scheduler ====<br />
<br />
* A selection of performance optimisations<br />
* We are in the process of making structural changes to the scheduler that will make it easier to evolve and improve scheduling. This should not be visible from an end-user perspective.<br />
<br />
==== Cells v2 ====<br />
<br />
* Some initial parts of Cells v2 support have been added, but this feature is not yet ready to use.<br />
* New 'nova-manage api_db sync' and 'nova-manage api_db version' commands have been added for working with the new API database for cells; nothing is using this database yet, so it is not necessary to set it up.<br />
<br />
==== Compute Drivers ====<br />
<br />
===== Hyper-V =====<br />
<br />
* Support for generation 2 VMs: https://blueprints.launchpad.net/nova/+spec/hyper-v-generation-2-vms<br />
* Support for SMB based volumes, to sit along side existing iSCSI volume support: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/hyper-v-smbfs-volume-support.html<br />
* Support for x509 certificate based key pairs: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/keypair-x509-certificates.html<br />
* Host power actions now work with Hyper-V: https://blueprints.launchpad.net/nova/+spec/hyper-v-host-power-actions<br />
<br />
===== Libvirt (KVM) =====<br />
<br />
* NFV related features:<br />
** NUMA based scheduling: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/input-output-based-numa-scheduling.html<br />
** Pinning guest vCPUs: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/virt-driver-cpu-pinning.html<br />
** Large page support: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/virt-driver-large-pages.html<br />
* vhostuser VIF driver: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/libvirt_vif_vhostuser.html<br />
* Support for running KVM on IBM System z: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/libvirt-kvm-systemz.html<br />
* Support for parallels: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/pcs-support.html<br />
* Support for SMB based volumes: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/libvirt-smbfs-volume-support.html<br />
* Quiesce using QEMU guest agent: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/quiesced-image-snapshots-with-qemu-guest-agent.html<br />
* Quobyte volume support: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/quobyte-nova-driver.html<br />
* Support for QEMU iSCSI initiator: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/qemu-built-in-iscsi-initiator.html<br />
<br />
===== VMware =====<br />
<br />
* Support for Ephemeral disks: http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/vmware-ephemeral-disk-support.html<br />
* Support for vSAN: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/vmware-vsan-support.html<br />
* Support for OVA images: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/vmware-driver-ova-support.html<br />
* Support for SPBM based storage policies: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/vmware-spbm-support.html<br />
<br />
===== Ironic =====<br />
<br />
* Support to pass flavor capabilities to ironic: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/pass-flavor-capabilities-to-ironic-virt-driver.html<br />
<br />
=== Known Issues ===<br />
<br />
* Evacuate recovery code has the potential to destroy data. On nova-compute startup, instances reported by the hypervisor are examined to see if they have moved (i.e. been evacuated) from the current host during the outage. If the determination is made that they were, then they are destroyed locally. This has the potential to choose incorrectly and destroy instances unexpectedly. On libvirt-like nodes, this can be triggered by changing the system hostname. On vmware-like nodes, this can be triggered by attempting to manage a single vcenter deployment from two different hosts (with different hostnames). This will be fixed properly in Liberty, but for now deployments that wish to disable this behavior as a preventive measure can set workarounds.destroy_after_evacuate=False. NOTE: This is not a regression and has been a flaw in the design of the evacuate feature since its introduction. There is no easy fix for this, hence this workaround to limit the potential for damage. The proposed fix in liberty is here: https://review.openstack.org/#/c/161444/.<br />
<br />
* The generated config examples may be missing some oslo-related configuration options.<br />
<br />
=== Upgrade Notes ===<br />
<br />
Below are changes you should be aware of when upgrading. Where possible, the git commit hash is provided so you can find more information:<br />
<br />
* Neutron ports are no longer deleted after your server is deleted, if you created them outside of Nova: 1153a46738fc3ffff98a1df9d94b5a55fdd58777<br />
* EC2 API support has been deprecated and is likely to be removed in a future release, see f098398a836e3671c49bb884b4a1a1988053f4b2<br />
* Websocket proxies need to be upgraded in a lockstep with the API nodes, as older API nodes will not be sending the access_url when authorizing console access, and newer proxy services (this commit and onward) would fail to authorize such requests 9621ccaf05900009d67cdadeb1aac27368114a61<br />
* After fully upgrading to kilo (i.e. all nodes are running kilo code), you should start a background migration of flavor information from its old home to its new home. Kilo conductor nodes will do this on the fly when necessary, but the rest of the idle data needs to be migrated in the background. This is critical to complete before the Liberty release, where support for the old location will be dropped. Use "nova-manage migrate-flavor-data" to perform this transition.<br />
* Due to the improvements in Nova v2.1 API policy enforcement, many changes were made to the v2.1 API policy. Because the v2.1 API had not been released before, these changes are not backwards compatible. It is better to start from the new policy sample configuration rather than the old one.<br />
* VMware rescue VM behaviour no longer creates a new VM and instead happens in place: cd1765459a24e52e1b933c8e05517fed75ac9d41<br />
* force_config_drive = always has been deprecated, and force_config_drive = True should be used instead: c12a78b35dc910fa97df888960ef2b9a64557254<br />
* If you are running Hyper-V and deployed code after commit b4d57ab65836460d0d9cb8889ec2e6c3986c0a9b but before commit c8e9f8e71de64273f10498c5ad959634bfe79975, you may have problems to resolve manually; see: c8e9f8e71de64273f10498c5ad959634bfe79975<br />
* Changed the default value of: multi_instance_display_name_template see: 609b2df339785bff9e30a9d67d5c853562ae3344<br />
* Please use "nova-manage db null_instance_uuid_scan" to ensure the DB migrations will apply cleanly, see: c0ea53ce353684b48303fc59393930c3fa5ade58<br />
<br />
== OpenStack Image Service (Glance) ==<br />
=== Key New Features ===<br />
* TBD<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Dashboard (Horizon) ==<br />
<br />
=== Key New Features ===<br />
* Support for Federated authentication via Web Single-Sign-On -- When configured in keystone, the user will be able to choose the authentication mechanism to use from those supported by the deployment. This feature must be enabled by changes to local_settings.py to be utilized. The related settings to enable and configure can be found [http://docs.openstack.org/developer/horizon/topics/settings.html#websso-enabled here].<br />
<br />
* Support for Theming -- A simpler mechanism to specify a custom theme for Horizon has been included, allowing the use of CSS values for Bootstrap and Horizon variables, as well as the inclusion of custom CSS. More details available [http://docs.openstack.org/developer/horizon/topics/settings.html#custom-theme-path here].<br />
<br />
* Sahara UX Improvements -- Dramatic improvements to the Sahara user experience have been made with the addition of guided cluster creation and guided job creation pages.<br />
<br />
* Launch Instance Wizard (beta) -- A full replacement for the launch instance workflow has been implemented in AngularJS to address usability issues in the existing launch instance workflow. Due to the late inclusion date and limited testing, this feature is marked as beta for Kilo and not enabled by default. To use the new workflow, the following change to local_settings.py is required: <code>LAUNCH_INSTANCE_NG_ENABLED = True</code>. Additionally, you can disable the default launch instance wizard with the following: <code>LAUNCH_INSTANCE_LEGACY_ENABLED = False</code>. This new work is a view into future development in Horizon. <br />
<br />
* Nova<br />
** allow service disable/enable on Hypervisor<br />
** Migrate all instances from host<br />
** expose serial console<br />
<br />
* Cinder<br />
** Cinder v2 by default<br />
** Managed/Unmanaged volume support -- allows admin to manage existing volumes not managed by cinder, as well as unmanage volumes.<br />
** Volume transfer support between projects<br />
** Volume encryption metadata support<br />
<br />
* Glance<br />
** View added to allow administrators to view/add/update Glance Metadata definitions<br />
<br />
* Heat<br />
** Stack Template view<br />
** Orchestration Resources Panel<br />
** Suspend/Resume actions for Stacks<br />
** Preview Stack view allows users to preview stacks specified in templates before creating them.<br />
<br />
* Trove<br />
** Resizing of Trove instances -- changing instance flavor<br />
<br />
* Ceilometer<br />
** Display IPMI meters values from Ceilometer<br />
<br />
* New Reusable AngularJS widgets in Horizon:<br />
** AngularJS table implementation<br />
*** Table drawers -- expandable table content<br />
*** improved client/server search<br />
** Transfer table widget <br />
<br />
* Configurable web root for Horizon beyond just '/'<br />
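The beta launch-instance wizard mentioned above is toggled in local_settings.py; a minimal sketch using exactly the settings named in these notes:<br />

```python
# local_settings.py -- enable the AngularJS launch-instance wizard and
# disable the legacy one (settings named in the release notes above)
LAUNCH_INSTANCE_NG_ENABLED = True
LAUNCH_INSTANCE_LEGACY_ENABLED = False
```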
<br />
=== Known Issues ===<br />
* Volumes created from snapshots are empty - https://bugs.launchpad.net/horizon/+bug/1447288<br />
* Django 1.8 is not fully supported yet.<br />
<br />
=== Upgrade Notes ===<br />
* Django 1.7 is now supported.<br />
<br />
== OpenStack Identity (Keystone) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== Hierarchical multitenancy ====<br />
<br />
[http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#projects-v3-projects Projects] can be nested under other projects by setting the <code>parent_id</code> attribute to an existing project when [http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#create-project creating a new project]. You can also [http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#get-project discover] the parent-child hierarchy through the existing <code>/v3/projects</code> API.<br />
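A hedged sketch of the request body for creating a nested project via POST /v3/projects (field names per the Identity v3 API linked above; the name and parent ID values are illustrative):<br />

```python
import json

def child_project_body(name, parent_id):
    """Build a create-project request body with a parent set (illustrative)."""
    return {"project": {"name": name, "parent_id": parent_id, "enabled": True}}

# Illustrative parent project ID
print(json.dumps(child_project_body("dev-team", "df18a5ebf2954b3e9384e1ccd71966bb")))
```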
<br />
Role assignments can now be assigned to both [https://github.com/openstack/keystone-specs/blob/master/api/v3/identity-api-v3-os-inherit-ext.rst#assign-role-to-user-on-projects-in-a-subtree users] and [https://github.com/openstack/keystone-specs/blob/master/api/v3/identity-api-v3-os-inherit-ext.rst#assign-role-to-group-on-projects-in-a-subtree groups] on subtrees in the project hierarchy.<br />
<br />
This feature will require corresponding support across other OpenStack services (such as hierarchical quotas) in order to become broadly useful.<br />
<br />
==== Fernet tokens ====<br />
<br />
Unlike UUID tokens, which must be persisted to a database, Fernet tokens are entirely non-persistent. Deployers can enable them by setting <code>[token] provider = keystone.token.providers.fernet.Provider</code> in <code>keystone.conf</code>.<br />
<br />
Fernet tokens require symmetric encryption keys which can be established using <code>keystone-manage fernet_setup</code> and periodically rotated using <code>keystone-manage fernet_rotate</code>. These keys must be shared by all keystone nodes in a multi-node (or multi-region) deployment, such that tokens generated by one node can be immediately validated against another.<br />
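Putting the steps above together, a hedged keystone.conf sketch (the provider value is as given in these notes; the key repository path is an illustrative assumption):<br />

```ini
# keystone.conf (sketch)
[token]
provider = keystone.token.providers.fernet.Provider

[fernet_tokens]
# illustrative path; keys are created with `keystone-manage fernet_setup`
# and rotated with `keystone-manage fernet_rotate`
key_repository = /etc/keystone/fernet-keys/
```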
<br />
==== Identity federation ====<br />
<br />
* Keystone can now act as a [http://docs.openstack.org/developer/keystone/configure_federation.html#keystone-as-an-identity-provider-idp federated identity provider (IdP)] for another instance of keystone by issuing SAML assertions for local users, which may be ECP-wrapped.<br />
* Added support for [http://docs.openstack.org/developer/keystone/extensions/openidc.html OpenID Connect] as a federated identity authentication mechanism.<br />
* Added the ability to associate many "Remote IDs" to a single identity provider in keystone. This will help in a case where many identity providers use a common mapping.<br />
* Added the ability for a user to authenticate via a web browser with an existing IdP, through a Single Sign-On page.<br />
* Federated tokens now use the <code>token</code> authentication method, although both <code>mapped</code> and <code>saml2</code> remain available.<br />
* Federated users may now be mapped to existing local identities.<br />
* Groups appearing in federated identity assertions may now be automatically created as local groups with local user membership mappings.<br />
<br />
==== LDAP ====<br />
<br />
* Filter parameters specified by API users are now processed by LDAP itself, instead of by keystone.<br />
* ''Experimental'' support was added to store domain-specific identity backend [http://docs.openstack.org/developer/keystone/configuration.html#domain-specific-drivers configuration in SQL] using the HTTP API. The primary use case for this is to create a new domain via the HTTP API, and then immediately configure a domain-specific LDAP driver for it without restarting keystone.<br />
<br />
==== Authorization ====<br />
<br />
* The "assignment" backend has been split into a "resource" backend (containing domains, projects, and roles) and an "assignments" backend, containing the authorization mapping model.<br />
* Added support for trust redelegation. If allowed when the trust is initially created, a trustee can redelegate the roles from the trust via another trust.<br />
* Added support for explicitly requesting an unscoped token from Keystone, even if the user has a <code>default_project_id</code> attribute set.<br />
* Deployers may now opt into disallowing the re-scoping of scoped tokens by setting <code>[token] allow_rescope_scoped_token = false</code> in <code>keystone.conf</code>.<br />
<br />
=== Known Issues ===<br />
<br />
* TBD<br />
<br />
=== Upgrade Notes ===<br />
<br />
* XML support in Keystone has been removed as of Kilo. When upgrading from Juno to Kilo, it is recommended that references to XML and XmlBodyMiddleware be removed from the [https://github.com/openstack/keystone/blob/master/etc/keystone-paste.ini Keystone Paste configuration]. This includes removing the XML middleware filters and their references from the public_api, admin_api, api_v3, public_version_api, admin_version_api and any other pipelines that may contain the XML filters.<br />
* All previous extensions (OS-FEDERATION, OS-OAUTH1, OS-ENDPOINT-POLICY and OS-EP-FILTER) are now enabled by default, and are [http://docs.openstack.org/developer/keystone/extensions.html correspondingly marked] as either "experimental" or "stable".<br />
* [http://specs.openstack.org/openstack/openstack-specs/specs/no-downward-sql-migration.html SQL Schema Downgrades are no longer supported]. This change is the result of evaluation that downward SQL migrations are not well tested and become increasingly difficult to support with the volume of data-change that occurs in many of the migrations.<br />
* The following python libraries are now required: [https://pypi.python.org/pypi/cryptography cryptography], [https://pypi.python.org/pypi/msgpack-python msgpack-python], [https://pypi.python.org/pypi/pysaml2 pysaml2] and [https://pypi.python.org/pypi/oauthlib oauthlib]<br />
* <code>keystone.middleware.RequestBodySizeLimiter</code> is now deprecated in favor of <code>oslo_middleware.sizelimit.RequestBodySizeLimiter</code> and will be removed in Liberty.<br />
* Eventlet-specific configuration options such as <code>public_bind_host</code>, <code>bind_host</code>, <code>admin_bind_host</code>, <code>admin_port</code>, <code>public_port</code>, <code>public_workers</code>, <code>admin_workers</code>, <code>tcp_keepalive</code>, <code>tcp_keepidle</code> have been moved from the <code>[DEFAULT]</code> configuration section to a new configuration section called <code>[eventlet_server]</code>. Similarly, Eventlet-specific SSL configuration options such as <code>enable</code>, <code>certfile</code>, <code>keyfile</code>, <code>ca_certs</code>, <code>cert_required</code> have been moved from the <code>[ssl]</code> configuration section to a new configuration section called <code>[eventlet_server_ssl]</code>.<br />
* <code>keystone.token.backends.sql</code> has been removed in favor of <code>keystone.token.persistence.backends.sql</code>.<br />
* <code>keystone.token.backends.kvs</code> has been removed in favor of <code>keystone.token.persistence.backends.kvs</code>.<br />
* <code>keystone.token.backends.memcache</code> has been removed in favor of <code>keystone.token.persistence.backends.memcache</code>.<br />
* <code>keystone.assignment.backends.kvs</code> has been removed in favor of <code>keystone.assignment.backends.sql</code>.<br />
* <code>keystone.identity.backends.kvs</code> has been removed in favor of <code>keystone.identity.backends.sql</code>.<br />
* <code>keystone.contrib.stats.core.StatsMiddleware</code> has been removed in favor of external tooling.<br />
* <code>keystone.catalog.backends.templated.TemplatedCatalog</code> has been removed in favor of <code>keystone.catalog.backends.templated.Catalog</code>.<br />
* <code>keystone.contrib.access.core.AccessLogMiddleware</code> has been removed in favor of external access logging.<br />
* <code>keystone.trust.backends.kvs</code> has been removed in favor of <code>keystone.trust.backends.sql</code>.<br />
* <code>[catalog] endpoint_substitution_whitelist</code> has been removed from <code>keystone.conf</code> as part of a related security hardening effort.<br />
* <code>[signing] token_format</code> has been removed from <code>keystone.conf</code> in favor of <code>[token] provider</code>.<br />
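The Eventlet configuration-section moves above can be sketched as a keystone.conf fragment (the option values shown are illustrative assumptions):<br />
<pre>
# Before (Juno):
[DEFAULT]
public_port = 5000
admin_port = 35357

# After (Kilo):
[eventlet_server]
public_port = 5000
admin_port = 35357
</pre>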
<br />
== OpenStack Network Service (Neutron) ==<br />
=== Key New Features ===<br />
* DVR now supports VLANs in addition to VXLAN/GRE<br />
* ML2 Hierarchical Port Binding<br />
* New LBaaS Version 2 API<br />
* Port security support for the OVS ML2 Driver<br />
<br />
* New Plugins supported in Kilo include the following:<br />
** A10 Networks LBaaS V2 Driver<br />
** Brocade LBaaS V2 Driver<br />
** Brocade ML2 driver for MLX and ICX switches<br />
** Brocade L3 routing plugin for MLX switch<br />
** Brocade Vyatta vRouter L3 Plugin<br />
** Brocade Vyatta vRouter Firewall Driver<br />
** Brocade Vyatta vRouter VPN Driver<br />
** Cisco CSR VPNaaS Driver<br />
** Dragonflow SDN based Distributed Virtual Router L3 Plugin<br />
** Freescale FWaaS Driver<br />
** Intel McAfee NGFW FWaaS Driver<br />
** IPSEC Strongswan VPNaaS Driver<br />
<br />
=== Known Issues ===<br />
* The Firewall-as-a-Service project is still marked as experimental for the Kilo release.<br />
* Bug [https://bugs.launchpad.net/neutron/+bug/1438819 1438819]<br />
** When a new subnet is created on an external network, all existing routers with gateways on the network will get a new address allocated from it. For IPv4 networks, this could consume the entire subnet for router gateway ports.<br />
<br />
=== Upgrade Notes ===<br />
<br />
Since Havana, Neutron has not supported an explicit lease database (https://bugs.launchpad.net/bugs/1202392). This left dead code, including an unused environment variable. To remove the dead code (https://review.openstack.org/#/c/152398/), a change to the dhcp filters file is required, so that the line:<br />
<br />
'''dnsmasq: EnvFilter, dnsmasq, root, NEUTRON_NETWORK_ID='''<br />
<br />
is replaced by:<br />
<br />
'''dnsmasq: CommandFilter, dnsmasq, root'''<br />
<br />
After the advanced services were split into separate packages and received their own service configuration files (specifically, etc/neutron/neutron_lbaas.conf, etc/neutron/neutron_fwaas.conf and etc/neutron/neutron_vpnaas.conf), the active service provider configuration can differ after upgrade (specifically, the default load balancer (haproxy) and VPN (openswan) providers can be enabled for you even though you previously disabled them in neutron.conf). Please make sure you review the configuration after upgrade so that it reflects the desired state of service providers.<br />
<br />
Note: this will have no effect if the related service plugin is not loaded in neutron.conf.<br />
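As an illustrative sketch of the per-service configuration described above (the driver class path below is an assumption, not quoted from this page), a provider can be pinned explicitly in, e.g., etc/neutron/neutron_lbaas.conf:<br />
<pre>
[service_providers]
# Explicitly pin the provider you want after upgrade; this has no effect
# if the corresponding service plugin is not loaded in neutron.conf.
service_provider = LOADBALANCER:Haproxy:neutron_lbaas.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
</pre>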
<br />
<br />
* The default value of api_workers is now equal to the number of CPUs in the host. If you currently use the default, ensure you set api_workers to a reasonable number for your installation. (https://review.openstack.org/#/c/140493/)<br />
* The neutron.allow_duplicate_networks config option is deprecated in Kilo and will be removed in Liberty where the default behavior will be to just allow multiple ports attached to an instance on the same network in Neutron. (https://review.openstack.org/163581)<br />
* The linuxbridge agent now enables VXLAN by default (https://review.openstack.org/160826)<br />
* neutron-ns-metadata-proxy can now be run as non-root (https://review.openstack.org/147437)<br />
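For the api_workers change above, a minimal neutron.conf sketch (the worker count is an illustrative assumption):<br />
<pre>
[DEFAULT]
# Pin explicitly rather than relying on the new CPU-count-based default.
api_workers = 4
</pre>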
<br />
=== Other Notes (Deprecation/EOL etc) ===<br />
<br />
* Deprecation<br />
** Brocade Classic plugin for Brocade's VDX/VCS series of hardware switches will be deprecated in the L-Release. The functionality provided by this plugin is now addressed by the ML2 Driver available for the VDX series of hardware. The plugin is slated for removal after this release cycle.<br />
<br />
<br />
== OpenStack Block Storage (Cinder) ==<br />
=== Key New Features ===<br />
* From this point forward, any new database schema upgrades will not require restarting Cinder services right away; the services are now independent of schema upgrades. This is part one of Cinder supporting rolling upgrades!<br />
* Ability to add/remove volumes from an existing consistency group. [http://docs.openstack.org/admin-guide-cloud/content/consistency-groups.html Read docs for more info].<br />
* Ability to create a consistency group from an existing consistency group snapshot. [http://docs.openstack.org/admin-guide-cloud/content/consistency-groups.html Read docs for more info].<br />
* Create more fine tuned filters/weighers to set how the scheduler will choose a volume backend. [http://docs.openstack.org/admin-guide-cloud/content/driver_filter_weighing.html Read the docs for more info].<br />
* Encrypted volumes can now be backed up using the Cinder backup service. [http://docs.openstack.org/admin-guide-cloud/content/volume-backup-restore.html Read the docs for more info].<br />
* Ability to create private volume types. This is perfect when you want to make a volume type available only to a specific tenant, or to test it before making it available to your cloud. To do so, use ''cinder type-create <name> --is-public false''.<br />
* Oversubscription with thin provision is configurable. [http://docs.openstack.org/admin-guide-cloud/content/over_subscription.html Read docs for more info].<br />
* Ability to add descriptions to volume types. To do so, use ''cinder type-create <name> <description>''<br />
* Cinder now can return multiple iSCSI paths information so that the connector can attach volumes even when the primary path is down ([https://review.openstack.org/#/c/134681/ when connector's multipath feature is enabled] or [https://review.openstack.org/#/c/140877/ not enabled]).<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
* The 'host' config option for multiple storage backends in cinder.conf has been renamed to 'backend_host' in order to avoid a naming conflict with the 'host' option used to locate redis. If you use this option, please ensure your configuration files are updated.<br />
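The rename above amounts to a cinder.conf fragment like the following sketch (the backend name and value are illustrative assumptions):<br />
<pre>
[DEFAULT]
enabled_backends = lvm-1

[lvm-1]
# Previously: host = mystack
backend_host = mystack
</pre>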
<br />
== OpenStack Telemetry (Ceilometer) ==<br />
=== Key New Features ===<br />
* Support for adding jitter to polling cycles, to ensure pollsters are not all querying a service's API at the same time<br />
* Ceilometer API RBAC support<br />
* Improved Event support:<br />
** Multi-pipeline support to enable unique processing and publishing of events<br />
** Enabled ability to capture raw notification messages for auditing and postmortem analysis<br />
** Support for persisting events into ElasticSearch<br />
** Publishing support to database, HTTP, file, Kafka and oslo.messaging-supported message queues<br />
** Option to split off the events persistence into a separate database<br />
** Telemetry now supports collecting and storing all the event-type meters as events. A new option, ''disable_non_metric_meters'', was added to the configuration to make it possible to turn off storing these events as samples. For further information please see the [http://docs.openstack.org/trunk/config-reference/content/ch_configuring-openstack-telemetry.html Telemetry Configuration Reference]<br />
** The Administrator Guide in OpenStack Manuals was updated with a new [http://docs.openstack.org/admin-guide-cloud/content/section_telemetry-events.html Events section], where you can find further information about this functionality.<br />
* Improved pipeline publishing support:<br />
** Support to publish events and samples to Kafka or HTTP targets<br />
** Publish data to multiple queues<br />
* Additional meters<br />
** memory and disk meters for Hyper-V<br />
** disk meters for LibVirt<br />
** power and thermal related IPMI meters, more meters from NodeManager<br />
** ability to meter Ceph<br />
* IPv6 support enabled in Ceilometer udp publisher and collector<br />
* [http://launchpad.net/gnocchi Gnocchi] dispatch support for ceilometer-collector<br />
* Self-disabled pollster mechanism<br />
<br />
=== Known Issues ===<br />
* TBD<br />
<br />
=== Upgrade Notes ===<br />
* Deprecated meters:<br />
** The instance:<flavor> meter is deprecated in the Kilo release. In order to retrieve samples or statistics based on flavor, you can use the following queries:<br />
*** Statistics: ''ceilometer statistics -m instance -g resource_metadata.instance_type''<br />
*** Samples: ''ceilometer sample-list -m instance -q metadata.instance_type=<value>''<br />
* Middleware used to meter Swift was previously packaged in Ceilometer and is now deprecated. It has been separated into its own library: ceilometermiddleware.<br />
** Juno configuration: http://docs.openstack.org/juno/install-guide/install/apt/content/ceilometer-swift.html<br />
** Kilo configuration: http://docs.openstack.org/kilo/install-guide/install/apt/content/ceilometer-swift.html<br />
<br />
== OpenStack Orchestration (Heat) ==<br />
<br />
=== Key New Features ===<br />
<br />
* Improved scaling using nested stacks<br />
** Heat now dispatches actions over RPC for any resource that is based on a template. This should help to spread the load when dealing with large, complex stacks.<br />
* oslo versioned objects<br />
** The database layer now uses oslo versioned objects to aid in future upgrades. This will allow a newly upgraded heat-engine to use a database with an older schema. Note that this will not help with upgrading to Kilo itself.<br />
* New template functions<br />
** There is a new HOT template version "20150430" which includes two new functions "digest" and "repeat". <br />
* Multiregion stacks<br />
** http://docs.openstack.org/hot-reference/content/OS__Heat__Stack.html<br />
* Software config signalling using Swift<br />
** Software deployments can now signal their status through Swift temporary URLs, as an alternative to signalling via the Heat API.<br />
* Triggering new software deployments from heatclient<br />
** Software deployments can now be created directly from python-heatclient, without requiring a stack update.<br />
* Stack snapshots<br />
** A stack can now be snapshotted, and later restored to a saved snapshot.<br />
* Access to Heat services<br />
** Admins now have access to the status of heat services, similar to other projects, in the form of "heat-manage service-list" and via Horizon. This feature reports the active heat-engines.<br />
* Improved validation for nova and neutron properties.<br />
* Pause stack creation/update on a given resource (stack hooks)<br />
** http://specs.openstack.org/openstack/heat-specs/specs/juno/stack-breakpoint.html<br />
** http://docs.openstack.org/developer/heat/template_guide/environment.html?highlight=hooks#pause-stack-creation-update-on-a-given-resource<br />
* New contributed resources<br />
** Mistral resources<br />
** Keystone resources for Project, Role, User and Group (supported with a Keystone v3 server)<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
* The default of the configuration option "num_engine_workers" has changed from 1 to a number based on the number of CPUs. This is now the same as the way other projects set the number of workers.<br />
* The default for the configuration option "max_nested_stack_depth" has been increased to 5.<br />
* There is a new configuration option, "convergence", which is off by default. This feature is not yet complete, and the option should remain off.<br />
* In preparation for an upcoming major feature (convergence), there have been some significant DB schema changes. It is suggested that heat-engine be shut down during schema upgrades.<br />
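The worker-count change above can be pinned explicitly with a heat.conf fragment like this sketch (the value is an illustrative assumption):<br />
<pre>
[DEFAULT]
# Pin explicitly rather than relying on the new CPU-count-based default.
num_engine_workers = 4
</pre>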
<br />
=== Other Notes (Deprecation/EOL etc) ===<br />
==== Deprecation ====<br />
* The following resources are deprecated: OS::Heat::HARestarter and OS::Heat::CWLiteAlarm<br />
* The CloudWatch API (heat-api-cw)<br />
<br />
== OpenStack Database service (Trove) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
None yet<br />
<br />
== OpenStack Data Processing service (Sahara) ==<br />
=== Key New Features ===<br />
* New plugins, their features and versions:<br />
** MAPR<br />
** Apache Storm<br />
** Apache Hadoop 2.6.0 was added, Apache Hadoop 2.4.1 deprecated<br />
** New services were added to the CDH plugin, including HDFS, YARN, Spark, Oozie, HBase, ZooKeeper and others<br />
* Added indirect VM access for better utilization of floating IPs<br />
* Added event log support to have detailed info about provisioning progress<br />
* Optional default node group and cluster templates per plugin<br />
* Horizon updates:<br />
** Guided cluster creation and job execution<br />
** Filtering on search for objects<br />
* Editing of Node Group templates and Cluster templates implemented<br />
* Added Shell Job Type for clusters running Oozie<br />
* New Job Types endpoint to query list of the supported Job Types<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
Details: http://docs.openstack.org/developer/sahara/userdoc/upgrade.guide.html#juno-kilo<br />
<br />
* Sahara now requires policy.json configuration file.<br />
<br />
== OpenStack Bare Metal service (Ironic) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== State Machine ====<br />
Ironic now uses a formal model for the logical state of each node it manages.<ref name="states">[http://specs.openstack.org/openstack/ironic-specs/specs/kilo/new-ironic-state-machine.html#proposed-change]New Ironic State Machine</ref> This has enabled the addition of two new processes: '''cleaning''' and '''inspection'''.<br />
* Automatic disk erasure between tenants is now enabled by default. This may be extended to perform additional '''cleaning''' steps, such as re-applying firmware, resetting BIOS settings, etc.<ref name="cleaning">[http://docs.openstack.org/developer/ironic/deploy/cleaning.html]Node Cleaning</ref><br />
* Both in-band and out-of-band methods are available to '''inspect''' hardware. These methods may be used to update Node properties automatically.<ref name="inspect">[http://docs.openstack.org/developer/ironic/deploy/install-guide.html#hardware-inspection]Hardware Inspection</ref><br />
<br />
==== Version Headers ====<br />
The Ironic REST API expects a new ''X-OpenStack-Ironic-API-Version'' header to be passed with each HTTP[S] request. This header allows the client and server to negotiate a mutually supported interface.<ref name="api-version">[http://specs.openstack.org/openstack/ironic-specs/specs/kilo/api-microversions.html]REST API "micro" versions </ref> In the absence of this header, the REST service will default to a compatibility mode and yield responses compatible with Juno clients. This mode, however, prevents access to most features introduced in Kilo.<br />
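For illustration (the version number and host below are assumptions, not taken from this page), a request negotiating a specific API version looks like:<br />
<pre>
GET /v1/nodes HTTP/1.1
Host: ironic.example.com
X-OpenStack-Ironic-API-Version: 1.6
</pre>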
<br />
==== Hardware Driver Changes ====<br />
The following new drivers were added:<br />
* [http://docs.openstack.org/developer/ironic/drivers/amt.html AMT]<br />
* [http://docs.openstack.org/developer/ironic/deploy/drivers.html#irmc iRMC]<br />
* [http://docs.openstack.org/developer/ironic/drivers/vbox.html VirtualBox (testing driver only)]<br />
<br />
<br />
The following enhancements were made to existing drivers:<br />
* [http://docs.openstack.org/developer/ironic/deploy/install-guide.html#enabling-the-configuration-drive-configdrive Configdrives] may be used with the "agent" drivers in lieu of a metadata service, if desired.<br />
* SeaMicro driver supports serial console<br />
* [http://docs.openstack.org/developer/ironic/drivers/ilo.html#uefi-secure-boot-support iLO driver supports UEFI secure boot]<br />
* [http://docs.openstack.org/developer/ironic/drivers/ilo.html#hardware-inspection iLO driver supports out-of-band node inspection]<br />
* [http://docs.openstack.org/developer/ironic/drivers/ilo.html#ilo-node-cleaning iLO driver supports resetting ilo and bios during cleaning]<br />
<br />
<br />
Support for third-party and out-of-tree drivers is enhanced by the following changes:<br />
* Drivers may store their own "internal" information about Nodes.<br />
* Drivers may register their own periodic tasks to be run by the Conductor.<br />
* ''vendor_passthru'' methods now support additional HTTP methods (eg, PUT and POST).<br />
* ''vendor_passthru'' methods are now discoverable in the REST API. See [http://docs.openstack.org/developer/ironic/dev/drivers.html#node-vendor-passthru node vendor passthru] and [http://docs.openstack.org/developer/ironic/dev/drivers.html#driver-vendor-passthru driver vendor passthru]<br />
<br />
==== Other Changes ====<br />
* [http://docs.openstack.org/developer/ironic/deploy/install-guide.html#logical-names Logical names] may be used to address Nodes, in addition to their canonical UUID. <br />
* For servers with varied local disks, [http://docs.openstack.org/developer/ironic/deploy/install-guide.html#specifying-the-disk-for-deployment ''hints''] may be supplied that affect which disk device the OS is provisioned to.<br />
* Support for fetching kernel, ramdisk, and instance images from HTTP[S] sources directly has been added to remove the dependency on Glance. [http://docs.openstack.org/developer/ironic/deploy/install-guide.html#using-ironic-as-a-standalone-service Using ironic as a standalone service]<br />
* Nodes may be placed into ''[http://docs.openstack.org/developer/ironic/deploy/install-guide.html#maintenance-mode maintenance mode]'' via REST API calls. An optional ''maintenance reason'' may be specified when doing so.<br />
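A sketch of the maintenance-mode call described above (the endpoint shape and body key are assumptions, not quoted from this page):<br />
<pre>
PUT /v1/nodes/{node_uuid}/maintenance

{
  "reason": "replacing failed disk"
}
</pre>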
<br />
=== Known Issues ===<br />
* '''Running more than one nova-compute process is not officially supported.'''<br />
** While Ironic does include a ClusteredComputeManager, which allows running more than one nova-compute process with Ironic, it should be considered experimental and has many known problems.<br />
* Drivers using the "agent" deploy mechanism do not support "rebuild --preserve-ephemeral"<br />
<br />
=== Upgrade Notes ===<br />
* IPMI Passwords are now obfuscated in REST API responses. This may be disabled by changing API policy settings.<br />
* The "agent" class of drivers now supports both whole-disk and partition-based images.<br />
* The driver_info parameters of "pxe_deploy_kernel" and "pxe_deploy_ramdisk" are deprecated in favour of "deploy_kernel" and "deploy_ramdisk". <br />
* Implementing a custom version of the vendor_passthru() method in drivers has been deprecated in favour of the new @passthru decorator.<br />
<br />
==== Juno to Kilo ====<br />
The recommended upgrade process is documented here:<br />
* http://docs.openstack.org/developer/ironic/deploy/upgrade-guide.html#upgrading-from-juno-to-kilo<br />
<br />
==== Upgrading from Icehouse "nova-baremetal" ====<br />
<br />
An upgrade from an Icehouse Nova installation using the "baremetal" driver directly to Kilo Ironic is untested and unsupported. Instead, please follow this upgrade path:<br />
# Icehouse Nova "baremetal" -> Juno Nova "baremetal"<br />
# Juno Nova "baremetal" -> Juno Ironic<br />
# Juno Ironic -> Kilo Ironic<br />
<br />
Documentation for steps 1 and 2 is available at: https://wiki.openstack.org/wiki/Ironic/NovaBaremetalIronicMigration<br />
<br />
== OpenStack Documentation ==<br />
<br />
* New [http://docs.openstack.org docs.openstack.org] landing page and new web design for [http://docs.openstack.org/user-guide/ End User Guide] and [http://docs.openstack.org/user-guide-admin/ Admin User Guide]<br />
* First release of the [http://docs.openstack.org/networking-guide/ Networking Guide]<br />
* Migration to RST for [http://docs.openstack.org/user-guide/ End User Guide] and [http://docs.openstack.org/user-guide-admin/ Admin User Guide]<br />
* New specialty teams:<br />
** Install Guides<br />
** Networking Guide<br />
** High Availability Guide<br />
** User Guides (Admin and End User)<br />
* First App Tutorial sprint<br />
* Driver documentation clarification and connections</div>Asalkeldhttps://wiki.openstack.org/w/index.php?title=ReleaseNotes/Kilo&diff=78626ReleaseNotes/Kilo2015-04-30T00:14:20Z<p>Asalkeld: /* OpenStack Orchestration (Heat) */</p>
<hr />
<div>= OpenStack 2015.1.0 (Kilo) Release Notes =<br />
<br />
<br />
{| style="color:#000000; border:solid 3px #A8A8A8; padding:2em; margin:2em 0; background-color:#FFFFFF; vertical-align:middle;"<br />
| style="padding:1em" | The Kilo release of OpenStack is dedicated to the loving memory of '''Chris Yeoh''', who left his family and us way too soon.<br />
|}<br />
<br />
<br />
<div style="column-count:3;-moz-column-count:3;-webkit-column-count:3"><br />
__TOC__<br />
</div><br />
<br />
<br />
== General Upgrade Notes ==<br />
<br />
* TBD<br />
<br />
== OpenStack Object Storage (Swift) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== Erasure Code (beta) ====<br />
Swift now supports an erasure-code (EC) storage policy type. This allows deployers to achieve very high durability with less raw capacity than is used in replicated storage. However, EC requires more CPU and network resources, so it is not good for every use case. EC is great for storing large, infrequently accessed data in a single region.<br />
<br />
Swift's implementation of erasure codes is meant to be transparent to end users. There is no API difference between replicated storage and EC storage.<br />
<br />
To support erasure codes, Swift now depends on PyECLib and liberasurecode. liberasurecode is a pluggable library that allows for the actual EC algorithm to be implemented in a library of your choosing.<br />
<br />
Full docs are at http://swift.openstack.org/overview_erasure_code.html<br />
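An illustrative swift.conf storage-policy sketch (the policy name, index and EC parameters are assumptions; see the full docs linked above for the supported options):<br />
<pre>
[storage-policy:1]
name = ec-policy
policy_type = erasure_coding
ec_type = jerasure_rs_vand
ec_num_data_fragments = 10
ec_num_parity_fragments = 4
</pre>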
<br />
==== Composite tokens ====<br />
Composite tokens allow other OpenStack services to store data in Swift on behalf of a client so that neither the client nor the service can update the data without both parties' consent.<br />
<br />
An example of this is that a user requests that Nova save a snapshot of a VM. Nova passes the request to Glance, and Glance writes the image to a Swift container as a set of objects. In this case, the user cannot modify the snapshot without also having a valid token from the service, nor can the service update the data without a valid token from the user. But the data is still stored in the user's account in Swift, which makes accounting simpler.<br />
<br />
Full docs are at http://swift.openstack.org/overview_backing_store.html<br />
<br />
==== Data placement updates for smaller, unbalanceable clusters ====<br />
Swift's data placement now accounts for device weight. This allows operators to gradually add new zones and regions without immediately causing a large amount of data to be moved. Also, if a cluster is unbalanced (e.g. a two-zone cluster where one zone has twice the capacity of the other), Swift will more efficiently use the available space and warn when replicas are placed without enough dispersion in the cluster.<br />
<br />
==== Global cluster replication improvements ====<br />
Replication between regions will now only move one replica per replication run. This gives the remote region a chance to replicate internally and thus avoid more data moving over the WAN.<br />
<br />
<br />
=== Known Issues ===<br />
* As a beta release, EC support is nearly feature-complete, but it lacks support for some features (such as multi-range reads) and has not had a full performance characterization. This feature relies on ssync for durability. Deployers are urged to do extensive testing and not to deploy production data using an erasure code storage policy.<br />
<br />
=== Upgrade Notes ===<br />
As always, you can upgrade to this version of Swift with no end-user downtime.<br />
<br />
* In order to support erasure codes, Swift has a new dependency on PyECLib (and liberasurecode, transitively). Also, the minimum required version of eventlet has been raised.<br />
<br />
<br />
<br />
<br />
<br />
== OpenStack Compute (Nova) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== API v2.1 ====<br />
<br />
* We have the first release of the next generation of the Nova API, v2.1. v2.1 is designed to be backwards compatible with v2.0, with the addition of strong API validation. All changes to the API are discoverable via the advertised microversion. For more details see: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/api-microversions.html<br />
<br />
* For Kilo, by default we are still using the v2.0 API code to serve v2.0 API requests. It is hoped that in Liberty, v2.1 will be used to serve requests for both v2.0 and v2.1.<br />
<br />
* Going into Liberty, v2.0 is now frozen, and all new features will be added to the v2.1 API using the microversions mechanism. Microversion increments released with Kilo are:<br />
** Extending the keypair API to support for x509 certificates, to be used with Windows WinRM, is one of the first API features added as a microversion in the v2.1 API.<br />
** Exposing additional attributes in os-extended-server-attributes<br />
<br />
* python-novaclient does not yet have support for the v2.1 API<br />
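For illustration (the version value and host below are assumptions, not quoted from this page), a client requests a specific microversion like this:<br />
<pre>
GET /v2.1/servers HTTP/1.1
Host: nova.example.com
X-OpenStack-Nova-API-Version: 2.2
</pre>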
<br />
* Policy enforcement in the Nova v2.1 API has been improved:<br />
** Policy is now enforced only at the entry point of each API.<br />
** There are no longer duplicated rules for a single API.<br />
** All v2.1 API policy rules use the 'os_compute_api' prefix, which distinguishes them from v2 API rules.<br />
** Due to hard-coded permission checks at the db layer, part of the Nova API was previously not configurable by policy and always required an admin user. Some of these hard-coded permission checks have been removed from the v2.1 API, making its policy configurable. The remaining hard-coded permission checks will be removed in Liberty.<br />
<br />
==== Upgrade Support ====<br />
<br />
* We have reduced the data migrations that happen in the DB migration scripts, this now happens in a "lazy" way inside the DB objects code. There are nova-manage commands to help force migration of the data. For more details see: http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/flavor-from-sysmeta-to-blob.html<br />
<br />
* Change https://review.openstack.org/#/c/97946/ adds database migration 267, which scans for null instances.uuid records and will fail if any are found, since the migration ultimately needs to make instances.uuid non-nullable and adds a UniqueConstraint on that column. A helper script is provided to search for null instances.uuid records before running the database migrations. Before running 'nova-manage db sync', run the helper script 'nova-manage db null_instance_uuid_scan', which, by default, will just search and dump results; it does not change anything. Pass the --delete option to the null_instance_uuid_scan command to automatically remove any records where instances.uuid is null.<br />
<br />
==== Scheduler ====<br />
<br />
* A selection of performance optimisations<br />
* We are in the process of making structural changes to the scheduler that will help improve our ability to evolve and improve scheduling. This should not be visible from an end user perspective.<br />
<br />
==== Cells v2 ====<br />
<br />
* Some initial parts of cells v2 support have been added, but this feature is not yet ready to use.<br />
* New 'nova-manage api_db sync' and 'nova-manage api_db version' commands have been added for working with the new API database for cells, but nothing is using this database yet, so it is not necessary to set it up.<br />
<br />
==== Compute Drivers ====<br />
<br />
===== Hyper-V =====<br />
<br />
* Support for generation 2 VMs: https://blueprints.launchpad.net/nova/+spec/hyper-v-generation-2-vms<br />
* Support for SMB based volumes, to sit along side existing iSCSI volume support: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/hyper-v-smbfs-volume-support.html<br />
* Support for x509 certificate based key pairs: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/keypair-x509-certificates.html<br />
* Host power actions now work with Hyper-V: https://blueprints.launchpad.net/nova/+spec/hyper-v-host-power-actions<br />
<br />
===== Libvirt (KVM) =====<br />
<br />
* NFV related features:<br />
** NUMA based scheduling: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/input-output-based-numa-scheduling.html<br />
** Pinning guest vCPUs: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/virt-driver-cpu-pinning.html<br />
** Large page support: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/virt-driver-large-pages.html<br />
* vhostuser VIF driver: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/libvirt_vif_vhostuser.html<br />
* Support for running KVM on IBM System z: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/libvirt-kvm-systemz.html<br />
* Support for parallels: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/pcs-support.html<br />
* Support for SMB based volumes: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/libvirt-smbfs-volume-support.html<br />
* Quiesce using QEMU guest agent: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/quiesced-image-snapshots-with-qemu-guest-agent.html<br />
* Quobyte volume support: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/quobyte-nova-driver.html<br />
* Support for QEMU iSCSI initiator: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/qemu-built-in-iscsi-initiator.html<br />
<br />
===== VMware =====<br />
<br />
* Support for Ephemeral disks: http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/vmware-ephemeral-disk-support.html<br />
* Support for vSAN: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/vmware-vsan-support.html<br />
* Support for OVA images: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/vmware-driver-ova-support.html<br />
* Support for SPBM based storage policies: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/vmware-spbm-support.html<br />
<br />
===== Ironic =====<br />
<br />
* Support to pass flavor capabilities to ironic: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/pass-flavor-capabilities-to-ironic-virt-driver.html<br />
<br />
=== Known Issues ===<br />
<br />
* Evacuate recovery code has the potential to destroy data. On nova-compute startup, instances reported by the hypervisor are examined to see if they have moved (i.e. been evacuated) from the current host during the outage. If the determination is made that they were, then they are destroyed locally. This has the potential to choose incorrectly and destroy instances unexpectedly. On libvirt-like nodes, this can be triggered by changing the system hostname. On vmware-like nodes, this can be triggered by attempting to manage a single vcenter deployment from two different hosts (with different hostnames). This will be fixed properly in Liberty, but for now deployments that wish to disable this behavior as a preventive measure can set workarounds.destroy_after_evacuate=False. NOTE: This is not a regression and has been a flaw in the design of the evacuate feature since its introduction. There is no easy fix for this, hence this workaround to limit the potential for damage. The proposed fix in Liberty is here: https://review.openstack.org/#/c/161444/.<br />
<br />
* The generated config examples may be missing some Oslo related configuration options.<br />
<br />
=== Upgrade Notes ===<br />
<br />
Below are changes you should be aware of when upgrading. Where possible, the git commit hash is provided so you can find more information:<br />
<br />
* Neutron ports are no longer deleted after your server is deleted, if you created them outside of Nova: 1153a46738fc3ffff98a1df9d94b5a55fdd58777<br />
* EC2 API support has been deprecated and is likely to be removed in a future release; see f098398a836e3671c49bb884b4a1a1988053f4b2<br />
* Websocket proxies need to be upgraded in lockstep with the API nodes, as older API nodes will not send the access_url when authorizing console access, and newer proxy services (this commit and onward) will fail to authorize such requests: 9621ccaf05900009d67cdadeb1aac27368114a61<br />
* After fully upgrading to Kilo (i.e. all nodes are running Kilo code), you should start a background migration of flavor information from its old home to its new home. Kilo conductor nodes will do this on the fly when necessary, but the rest of the idle data needs to be migrated in the background. This is critical to complete before the Liberty release, where support for the old location will be dropped. Use "nova-manage migrate-flavor-data" to perform this transition.<br />
* Due to improvements in Nova v2.1 API policy enforcement, many changes were made to the v2.1 API policies. Because the v2.1 API had not been released before, these changes are not backward compatible. It is better to use the new policy sample configuration instead of the old one.<br />
* VMware rescue VM behaviour no longer creates a new VM and instead happens in place: cd1765459a24e52e1b933c8e05517fed75ac9d41<br />
* force_config_drive = always has been deprecated, and force_config_drive = True should be used instead: c12a78b35dc910fa97df888960ef2b9a64557254<br />
* If you are running Hyper-V and deployed code that was past this commit: b4d57ab65836460d0d9cb8889ec2e6c3986c0a9b but before this commit: c8e9f8e71de64273f10498c5ad959634bfe79975, you may have problems to resolve manually; see: c8e9f8e71de64273f10498c5ad959634bfe79975<br />
* Changed the default value of multi_instance_display_name_template; see: 609b2df339785bff9e30a9d67d5c853562ae3344<br />
* Please use "nova-manage db null_instance_uuid_scan" to ensure the DB migrations will apply cleanly, see: c0ea53ce353684b48303fc59393930c3fa5ade58<br />
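<br />
The two data-migration steps above can be sketched as a short command sequence; the ordering shown is illustrative, and deployments should confirm it against their own upgrade procedure:<br />
<pre>
# Before applying DB migrations, check for NULL instance uuids:
nova-manage db null_instance_uuid_scan

# Once all nodes are running Kilo code, migrate flavor data in the
# background (must complete before upgrading to Liberty):
nova-manage migrate-flavor-data
</pre>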
<br />
== OpenStack Image Service (Glance) ==<br />
=== Key New Features ===<br />
* TBD<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Dashboard (Horizon) ==<br />
<br />
=== Key New Features ===<br />
* Support for Federated authentication via Web Single-Sign-On -- When configured in keystone, the user will be able to choose the authentication mechanism to use from those supported by the deployment. This feature must be enabled by changes to local_settings.py to be utilized. The related settings to enable and configure can be found [http://docs.openstack.org/developer/horizon/topics/settings.html#websso-enabled here].<br />
<br />
* Support for Theming -- A simpler mechanism to specify a custom theme for Horizon has been included, allowing the use of CSS values for Bootstrap and Horizon variables, as well as the inclusion of custom CSS. More details are available [http://docs.openstack.org/developer/horizon/topics/settings.html#custom-theme-path here].<br />
<br />
* Sahara UX Improvements -- Dramatic improvements to the Sahara user experience have been made with the addition of guided cluster creation and guided job creation pages.<br />
<br />
* Launch Instance Wizard (beta) -- A full replacement for the launch instance workflow has been implemented in AngularJS to address usability issues in the existing launch instance workflow. Due to the late inclusion date and limited testing, this feature is marked as beta for Kilo and not enabled by default. To use the new workflow, the following change to local_settings.py is required: <code>LAUNCH_INSTANCE_NG_ENABLED = True</code>. Additionally, you can disable the default launch instance wizard with the following: <code>LAUNCH_INSTANCE_LEGACY_ENABLED = False</code>. This new work is a view into future development in Horizon. <br />
<br />
* Nova<br />
** allow service disable/enable on Hypervisor<br />
** Migrate all instances from host<br />
** expose serial console<br />
<br />
* Cinder<br />
** Cinder v2 by default<br />
** Managed/Unmanaged volume support -- allows admin to manage existing volumes not managed by cinder, as well as unmanage volumes.<br />
** Volume transfer support between projects<br />
** Volume encryption metadata support<br />
<br />
* Glance<br />
** View added to allow administrators to view/add/update Glance Metadata definitions<br />
<br />
* Heat<br />
** Stack Template view<br />
** Orchestration Resources Panel<br />
** Suspend/Resume actions for Stacks<br />
** Preview Stack view allows users to preview stacks specified in templates before creating them.<br />
<br />
* Trove<br />
** Resizing of Trove instances -- changing instance flavor<br />
<br />
* Ceilometer<br />
** Display IPMI meters values from Ceilometer<br />
<br />
* New Reusable AngularJS widgets in Horizon:<br />
** AngularJS table implementation<br />
*** Table drawers -- expandable table content<br />
*** improved client/server search<br />
** Transfer table widget <br />
<br />
* Configurable web root for Horizon beyond just '/'<br />
<br />
=== Known Issues ===<br />
* Volumes created from snapshots are empty - https://bugs.launchpad.net/horizon/+bug/1447288<br />
* Django 1.8 is not fully supported yet.<br />
<br />
=== Upgrade Notes ===<br />
* Django 1.7 is now supported.<br />
<br />
== OpenStack Identity (Keystone) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== Hierarchical multitenancy ====<br />
<br />
[http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#projects-v3-projects Projects] can be nested under other projects by setting the <code>parent_id</code> attribute to an existing project when [http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#create-project creating a new project]. You can also [http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#get-project discover] the parent-child hierarchy through the existing <code>/v3/projects</code> API.<br />
<br />
Role assignments can now be assigned to both [https://github.com/openstack/keystone-specs/blob/master/api/v3/identity-api-v3-os-inherit-ext.rst#assign-role-to-user-on-projects-in-a-subtree users] and [https://github.com/openstack/keystone-specs/blob/master/api/v3/identity-api-v3-os-inherit-ext.rst#assign-role-to-group-on-projects-in-a-subtree groups] on subtrees in the project hierarchy.<br />
<br />
This feature will require corresponding support across other OpenStack services (such as hierarchical quotas) in order to become broadly useful.<br />
<br />
==== Fernet tokens ====<br />
<br />
Unlike UUID tokens, which must be persisted to a database, Fernet tokens are entirely non-persistent. Deployers can enable the Fernet token provider by setting <code>[token] provider = keystone.token.providers.fernet.Provider</code> in <code>keystone.conf</code>.<br />
<br />
Fernet tokens require symmetric encryption keys which can be established using <code>keystone-manage fernet_setup</code> and periodically rotated using <code>keystone-manage fernet_rotate</code>. These keys must be shared by all keystone nodes in a multi-node (or multi-region) deployment, such that tokens generated by one node can be immediately validated against another.<br />
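<br />
A minimal configuration sketch using the provider and commands named above (file locations and rotation frequency are deployment-specific):<br />
<pre>
# keystone.conf
[token]
provider = keystone.token.providers.fernet.Provider

# Establish the key repository once, then rotate periodically:
#   keystone-manage fernet_setup
#   keystone-manage fernet_rotate
# Copy the resulting key repository to all keystone nodes so that
# tokens issued by one node validate on the others.
</pre>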
<br />
==== Identity federation ====<br />
<br />
* Keystone can now act as a [http://docs.openstack.org/developer/keystone/configure_federation.html#keystone-as-an-identity-provider-idp federated identity provider (IdP)] for another instance of keystone by issuing SAML assertions for local users, which may be ECP-wrapped.<br />
* Added support for [http://docs.openstack.org/developer/keystone/extensions/openidc.html OpenID Connect] as a federated identity authentication mechanism.<br />
* Added the ability to associate many "Remote IDs" to a single identity provider in keystone. This will help in a case where many identity providers use a common mapping.<br />
* Added the ability for a user to authenticate via a web browser with an existing IdP, through a Single Sign-On page.<br />
* Federated tokens now use the <code>token</code> authentication method, although both <code>mapped</code> and <code>saml2</code> remain available.<br />
* Federated users may now be mapped to existing local identities.<br />
* Groups appearing in federated identity assertions may now be automatically created as local groups with local user membership mappings.<br />
<br />
==== LDAP ====<br />
<br />
* Filter parameters specified by API users are now processed by LDAP itself, instead of by keystone.<br />
* ''Experimental'' support was added to store domain-specific identity backend [http://docs.openstack.org/developer/keystone/configuration.html#domain-specific-drivers configuration in SQL] using the HTTP API. The primary use case for this is to create a new domain via the HTTP API, and then immediately configure a domain-specific LDAP driver for it without restarting keystone.<br />
<br />
==== Authorization ====<br />
<br />
* The "assignment" backend has been split into a "resource" backend (containing domains, projects, and roles) and an "assignments" backend, containing the authorization mapping model.<br />
* Added support for trust redelegation. If allowed when the trust is initially created, a trustee can redelegate the roles from the trust via another trust.<br />
* Added support for explicitly requesting an unscoped token from Keystone, even if the user has a <code>default_project_id</code> attribute set.<br />
* Deployers may now opt into disallowing the re-scoping of scoped tokens by setting <code>[token] allow_rescope_scoped_token = false</code> in <code>keystone.conf</code>.<br />
<br />
=== Known Issues ===<br />
<br />
* TBD<br />
<br />
=== Upgrade Notes ===<br />
<br />
* XML support in Keystone has been removed as of Kilo. When upgrading from Juno to Kilo, it is recommended that references to XML and XmlBodyMiddleware be removed from the [https://github.com/openstack/keystone/blob/master/etc/keystone-paste.ini Keystone Paste configuration]. This includes removing the XML middleware filters and their references from the public_api, admin_api, api_v3, public_version_api, admin_version_api and any other pipelines that may contain the XML filters.<br />
* All previous extensions (OS-FEDERATION, OS-OAUTH1, OS-ENDPOINT-POLICY and OS-EP-FILTER) are now enabled by default, and are [http://docs.openstack.org/developer/keystone/extensions.html correspondingly marked] as either "experimental" or "stable".<br />
* [http://specs.openstack.org/openstack/openstack-specs/specs/no-downward-sql-migration.html SQL Schema Downgrades are no longer supported]. This change is the result of evaluation that downward SQL migrations are not well tested and become increasingly difficult to support with the volume of data-change that occurs in many of the migrations.<br />
* The following python libraries are now required: [https://pypi.python.org/pypi/cryptography cryptography], [https://pypi.python.org/pypi/msgpack-python msgpack-python], [https://pypi.python.org/pypi/pysaml2 pysaml2] and [https://pypi.python.org/pypi/oauthlib oauthlib]<br />
* <code>keystone.middleware.RequestBodySizeLimiter</code> is now deprecated in favor of <code>oslo_middleware.sizelimit.RequestBodySizeLimiter</code> and will be removed in Liberty.<br />
* Eventlet-specific configuration options such as <code>public_bind_host</code>, <code>bind_host</code>, <code>admin_bind_host</code>, <code>admin_port</code>, <code>public_port</code>, <code>public_workers</code>, <code>admin_workers</code>, <code>tcp_keepalive</code>, <code>tcp_keepidle</code> have been moved from the <code>[DEFAULT]</code> configuration section to a new configuration section called <code>[eventlet_server]</code>. Similarly, Eventlet-specific SSL configuration options such as <code>enable</code>, <code>certfile</code>, <code>keyfile</code>, <code>ca_certs</code>, <code>cert_required</code> have been moved from the <code>[ssl]</code> configuration section to a new configuration section called <code>[eventlet_server_ssl]</code>.<br />
* All modules in <code>keystone.token.backends</code>, including <code>sql</code>, <code>kvs</code>, and <code>memcache</code> have been removed in favor of those in <code>keystone.token.persistence.backends</code>.<br />
<br />
== OpenStack Network Service (Neutron) ==<br />
=== Key New Features ===<br />
* DVR now supports VLANs in addition to VXLAN/GRE<br />
* ML2 Hierarchical Port Binding<br />
* New LBaaS Version 2 API<br />
* Portsecurity support for the OVS ML2 Driver<br />
<br />
* New Plugins supported in Kilo include the following:<br />
** A10 Networks LBaaS V2 Driver<br />
** Brocade LBaaS V2 Driver<br />
** Brocade ML2 driver for MLX and ICX switches<br />
** Brocade L3 routing plugin for MLX switch<br />
** Brocade Vyatta vRouter L3 Plugin<br />
** Brocade Vyatta vRouter Firewall Driver<br />
** Brocade Vyatta vRouter VPN Driver<br />
** Cisco CSR VPNaaS Driver<br />
** Dragonflow SDN based Distributed Virtual Router L3 Plugin<br />
** Freescale FWaaS Driver<br />
** Intel Mcafee NGFW FWaaS Driver<br />
** IPSEC Strongswan VPNaaS Driver<br />
<br />
=== Known Issues ===<br />
* The Firewall-as-a-Service project is still marked as experimental for the Kilo release.<br />
* Bug [https://bugs.launchpad.net/neutron/+bug/1438819 1438819]<br />
** When a new subnet is created on an external network, all existing routers with gateways on the network will get a new address allocated from it. For IPv4 networks, this could consume the entire subnet for router gateway ports.<br />
<br />
=== Upgrade Notes ===<br />
<br />
Since Havana, Neutron no longer supports an explicit lease database (https://bugs.launchpad.net/bugs/1202392). This left dead code, including an unused environment variable. In order to remove the dead code (https://review.openstack.org/#/c/152398/), a change to the dhcp.filters file is required, so that the line:<br />
<br />
'''dnsmasq: EnvFilter, dnsmasq, root, NEUTRON_NETWORK_ID='''<br />
<br />
Be replaced by:<br />
<br />
'''dnsmasq: CommandFilter, dnsmasq, root'''<br />
<br />
After the advanced services were split into separate packages and received their own service configuration files (specifically, etc/neutron/neutron_lbaas.conf, etc/neutron/neutron_fwaas.conf and etc/neutron/neutron_vpnaas.conf), the active service provider configuration can be different after upgrade: specifically, the default load balancer (haproxy) and VPN (openswan) providers can be enabled for you even though you previously disabled them in neutron.conf. Please make sure you review the configuration after upgrade so that it reflects the desired state of service providers.<br />
<br />
Note: this will have no effect if the related service plugin is not loaded in neutron.conf.<br />
<br />
<br />
* The default value of api_workers is now equal to the number of CPUs in the host. If you currently use the default, ensure you set api_workers to a reasonable number for your installation. (https://review.openstack.org/#/c/140493/)<br />
* The neutron.allow_duplicate_networks config option is deprecated in Kilo and will be removed in Liberty where the default behavior will be to just allow multiple ports attached to an instance on the same network in Neutron. (https://review.openstack.org/163581)<br />
* The linuxbridge agent now enables VXLAN by default (https://review.openstack.org/160826)<br />
* neutron-ns-metadata-proxy can now be run as non-root (https://review.openstack.org/147437)<br />
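<br />
For deployments that relied on the old api_workers default, the value can be pinned explicitly in neutron.conf; the value 4 below is only illustrative:<br />
<pre>
[DEFAULT]
# Default is now the number of CPUs on the host; set explicitly
# if that is not appropriate for your installation.
api_workers = 4
</pre>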
<br />
=== Other Notes (Deprecation/EOL etc) ===<br />
<br />
* Deprecation<br />
** Brocade Classic plugin for Brocade's VDX/VCS series of hardware switches will be deprecated in the L-Release. The functionality provided by this plugin is now addressed by the ML2 Driver available for the VDX series of hardware. The plugin is slated for removal after this release cycle.<br />
<br />
<br />
== OpenStack Block Storage (Cinder) ==<br />
=== Key New Features ===<br />
* From this point forward, new database schema upgrades will not require restarting Cinder services right away; the services are now independent of schema upgrades. This is the first step toward Cinder supporting rolling upgrades!<br />
* Ability to add/remove volumes from an existing consistency group. [http://docs.openstack.org/admin-guide-cloud/content/consistency-groups.html Read docs for more info].<br />
* Ability to create a consistency group from an existing consistency group snapshot. [http://docs.openstack.org/admin-guide-cloud/content/consistency-groups.html Read docs for more info].<br />
* Create more fine tuned filters/weighers to set how the scheduler will choose a volume backend. [http://docs.openstack.org/admin-guide-cloud/content/driver_filter_weighing.html Read the docs for more info].<br />
* Encrypted volumes can now be backed up using the Cinder backup service. [http://docs.openstack.org/admin-guide-cloud/content/volume-backup-restore.html Read the docs for more info].<br />
* Ability to create private volume types. This is useful when you want to make a volume type available only to a specific tenant, or to test it before making it available to your cloud. To do so, use the ''cinder type-create <name> --is-public'' command.<br />
* Oversubscription with thin provision is configurable. [http://docs.openstack.org/admin-guide-cloud/content/over_subscription.html Read docs for more info].<br />
* Ability to add descriptions to volume types. To do so, use ''cinder type-create <name> <description>''.<br />
* Cinder now can return multiple iSCSI paths information so that the connector can attach volumes even when the primary path is down ([https://review.openstack.org/#/c/134681/ when connector's multipath feature is enabled] or [https://review.openstack.org/#/c/140877/ not enabled]).<br />
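<br />
Some of the abilities above map to simple CLI invocations; the sketch below uses illustrative names and placeholder UUIDs, and flag syntax should be checked against your python-cinderclient version:<br />
<pre>
# Create a private volume type, visible only where explicitly granted:
cinder type-create my-ssd-type --is-public false

# Add and remove volumes from an existing consistency group:
cinder consisgroup-update my-group --add-volumes <volume-uuid>
cinder consisgroup-update my-group --remove-volumes <volume-uuid>
</pre>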
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
* The 'host' config option for multiple-storage backends in cinder.conf is renamed to 'backend_host' in order to avoid a naming conflict with the 'host' to locate redis. If you use this option, please ensure your configuration files are updated.<br />
<br />
== OpenStack Telemetry (Ceilometer) ==<br />
=== Key New Features ===<br />
* Support for adding jitter to polling cycles, to ensure pollsters are not all querying a service's API at the same time<br />
* Ceilometer API RBAC support<br />
* Improved Event support:<br />
** Multi-pipeline support to enable unique processing and publishing of events<br />
** Enabled ability to capture raw notification messages for auditing and postmortem analysis<br />
** Support for persisting events into ElasticSearch<br />
** Publishing support to database, http, file, Kafka and oslo.messaging supported message queues<br />
** Option to split off the events persistence into a separate database<br />
** Telemetry now supports collecting and storing all event type meters as events. A new option, ''disable_non_metric_meters'', was added to the configuration in order to provide the ability to turn off storing these events as samples. For further information please see the [http://docs.openstack.org/trunk/config-reference/content/ch_configuring-openstack-telemetry.html Telemetry Configuration Reference]<br />
** The Administrator Guide in OpenStack Manuals was updated with a new [http://docs.openstack.org/admin-guide-cloud/content/section_telemetry-events.html Events section], where you can find further information about this functionality.<br />
* Improved pipeline publishing support:<br />
** Support to publish events and samples to Kafka or HTTP targets<br />
** Publish data to multiple queues<br />
* Additional meters<br />
** memory and disk meters for Hyper-V<br />
** disk meters for LibVirt<br />
** power and thermal related IPMI meters, more meters from NodeManager<br />
** ability to meter Ceph<br />
* IPv6 support enabled in Ceilometer udp publisher and collector<br />
* [http://launchpad.net/gnocchi Gnocchi] dispatch support for ceilometer-collector<br />
* Self-disabled pollster mechanism<br />
<br />
=== Known Issues ===<br />
* TBD<br />
<br />
=== Upgrade Notes ===<br />
* Deprecated meters:<br />
** The instance:<flavor> meter is deprecated in the Kilo release. In order to retrieve samples or statistics based on flavor, you can use the following queries:<br />
** For statistics: ''ceilometer statistics -m instance -g resource_metadata.instance_type''<br />
** For samples: ''ceilometer sample-list -m instance -q metadata.instance_type=<value>''<br />
* Middleware used to meter Swift was previously packaged in Ceilometer and is now deprecated. It has been separated into its own library: ceilometermiddleware.<br />
** Juno configuration: http://docs.openstack.org/juno/install-guide/install/apt/content/ceilometer-swift.html<br />
** Kilo configuration: http://docs.openstack.org/kilo/install-guide/install/apt/content/ceilometer-swift.html<br />
<br />
== OpenStack Orchestration (Heat) ==<br />
<br />
=== Key New Features ===<br />
<br />
* Improved scaling using nested stacks<br />
** Heat now uses RPC to perform actions on any resource that is based on a template. This should help spread the load when dealing with large, complex stacks.<br />
* oslo versioned objects<br />
** The database layer now uses oslo versioned objects to aid in future upgrades. This will allow a newly upgraded heat-engine to use a database with an older schema. Note that this will not help with upgrading to kilo.<br />
* New template functions<br />
** There is a new HOT template version "20150430" which includes two new functions "digest" and "repeat". <br />
* Multiregion stacks<br />
** http://docs.openstack.org/hot-reference/content/OS__Heat__Stack.html<br />
* Software config signalling using swift<br />
** MORE DETAIL HERE<br />
* Triggering new software deployments from heatclient<br />
** MORE DETAIL HERE<br />
* stack snapshots<br />
** MORE DETAIL HERE<br />
* Access to Heat services<br />
** Admins now have access to service status similar to that available in other projects, in the form of "heat-manage service-list" and via Horizon. This feature reports the active heat-engines.<br />
* Improved validation for nova and neutron properties.<br />
* Pause stack creation/update on a given resource (stack hooks)<br />
** http://specs.openstack.org/openstack/heat-specs/specs/juno/stack-breakpoint.html<br />
** http://docs.openstack.org/developer/heat/template_guide/environment.html?highlight=hooks#pause-stack-creation-update-on-a-given-resource<br />
* New contributed resources<br />
** Mistral resources<br />
** Keystone resources, supported with a Keystone v3 server, for Project, Role, User and Group<br />
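<br />
The new "repeat" function in the "20150430" HOT version can be sketched with a minimal template; the resource and parameter names below are illustrative:<br />
<pre>
heat_template_version: 2015-04-30

parameters:
  ports:
    type: comma_delimited_list
    default: [80, 443]

resources:
  security_group:
    type: OS::Neutron::SecurityGroup
    properties:
      rules:
        # "repeat" expands the template once per listed port
        repeat:
          for_each:
            <%port%>: { get_param: ports }
          template:
            protocol: tcp
            port_range_min: <%port%>
            port_range_max: <%port%>
</pre>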
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
* The default of the configuration option "num_engine_workers" has changed from 1 to a number based on the number of CPUs. This is now the same as the way other projects set the number of workers.<br />
* The default for the configuration option "max_nested_stack_depth" has been increased to 5.<br />
* There is a new configuration option, "convergence", which is off by default. This feature is not yet complete, and the option should remain off.<br />
<br />
=== Other Notes (Deprecation/EOL etc) ===<br />
==== Deprecation ====<br />
* The following resources are deprecated: OS::Heat::HARestarter and OS::Heat::CWLiteAlarm<br />
* The CloudWatch API (heat-api-cw) is deprecated<br />
<br />
== OpenStack Database service (Trove) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
None yet<br />
<br />
== OpenStack Data Processing service (Sahara) ==<br />
=== Key New Features ===<br />
* New plugins, their features and versions:<br />
** MAPR<br />
** Apache Storm<br />
** Apache Hadoop 2.6.0 was added, Apache Hadoop 2.4.1 deprecated<br />
** New services were added to the CDH plugin, including HDFS, YARN, Spark, Oozie, HBase, ZooKeeper and others<br />
* Added indirect VM access for better utilization of floating IPs<br />
* Added event log support to have detailed info about provisioning progress<br />
* Optional default node group and cluster templates per plugin<br />
* Horizon updates:<br />
** Guided cluster creation and job execution<br />
** Filtering on search for objects<br />
* Editing of Node Group templates and Cluster templates implemented<br />
* Added Shell Job Type for clusters running Oozie<br />
* New Job Types endpoint to query list of the supported Job Types<br />
<br />
=== Known Issues ===<br />
-<br />
<br />
=== Upgrade Notes ===<br />
Details: http://docs.openstack.org/developer/sahara/userdoc/upgrade.guide.html#juno-kilo<br />
<br />
* Sahara now requires a policy.json configuration file.<br />
<br />
== OpenStack Bare Metal service (Ironic) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== State Machine ====<br />
Ironic now uses a formal model for the logical state of each node it manages.<ref name="states">[http://specs.openstack.org/openstack/ironic-specs/specs/kilo/new-ironic-state-machine.html#proposed-change]New Ironic State Machine</ref> This has enabled the addition of two new processes: '''cleaning''' and '''inspection'''.<br />
* Automatic disk erasure between tenants is now enabled by default. This may be extended to perform additional '''cleaning''' steps, such as re-applying firmware, resetting BIOS settings, etc.<ref name="cleaning">[http://docs.openstack.org/developer/ironic/deploy/cleaning.html]Node Cleaning</ref><br />
* Both in-band and out-of-band methods are available to '''inspect''' hardware. These methods may be used to update Node properties automatically.<ref name="inspect">[http://docs.openstack.org/developer/ironic/deploy/install-guide.html#hardware-inspection]Hardware Inspection</ref><br />
<br />
==== Version Headers ====<br />
The Ironic REST API expects a new ''X-OpenStack-Ironic-API-Version'' header to be passed with each HTTP[S] request. This header allows client and server to negotiate a mutually supported interface.<ref name="api-version">[http://specs.openstack.org/openstack/ironic-specs/specs/kilo/api-microversions.html]REST API "micro" versions </ref> In the absence of this header, the REST service will default to a compatibility mode and yield responses compatible with Juno clients. This mode, however, prevents access to most features introduced in Kilo.<br />
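<br />
For example, a client can opt into Kilo-era behaviour by sending the header explicitly; the endpoint and version value below are illustrative, and the server advertises the versions it actually supports:<br />
<pre>
curl -H "X-OpenStack-Ironic-API-Version: 1.6" \
     -H "X-Auth-Token: $TOKEN" \
     http://ironic.example.com:6385/v1/nodes
</pre>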
<br />
==== Hardware Driver Changes ====<br />
The following new drivers were added:<br />
* [http://docs.openstack.org/developer/ironic/drivers/amt.html AMT]<br />
* [http://docs.openstack.org/developer/ironic/deploy/drivers.html#irmc iRMC]<br />
* [http://docs.openstack.org/developer/ironic/drivers/vbox.html VirtualBox (testing driver only)]<br />
<br />
<br />
The following enhancements were made to existing drivers:<br />
* [http://docs.openstack.org/developer/ironic/deploy/install-guide.html#enabling-the-configuration-drive-configdrive Configdrives] may be used with the "agent" drivers in lieu of a metadata service, if desired.<br />
* SeaMicro driver supports serial console<br />
* [http://docs.openstack.org/developer/ironic/drivers/ilo.html#uefi-secure-boot-support iLO driver supports UEFI secure boot]<br />
* [http://docs.openstack.org/developer/ironic/drivers/ilo.html#hardware-inspection iLO driver supports out-of-band node inspection]<br />
* [http://docs.openstack.org/developer/ironic/drivers/ilo.html#ilo-node-cleaning iLO driver supports resetting ilo and bios during cleaning]<br />
<br />
<br />
Support for third-party and out-of-tree drivers is enhanced by the following changes:<br />
* Drivers may store their own "internal" information about Nodes.<br />
* Drivers may register their own periodic tasks to be run by the Conductor.<br />
* ''vendor_passthru'' methods now support additional HTTP methods (e.g., PUT and POST).<br />
* ''vendor_passthru'' methods are now discoverable in the REST API. See [http://docs.openstack.org/developer/ironic/dev/drivers.html#node-vendor-passthru node vendor passthru] and [http://docs.openstack.org/developer/ironic/dev/drivers.html#driver-vendor-passthru driver vendor passthru]<br />
<br />
==== Other Changes ====<br />
* [http://docs.openstack.org/developer/ironic/deploy/install-guide.html#logical-names Logical names] may be used to address Nodes, in addition to their canonical UUID. <br />
* For servers with varied local disks, [http://docs.openstack.org/developer/ironic/deploy/install-guide.html#specifying-the-disk-for-deployment ''hints''] may be supplied that affect which disk device the OS is provisioned to.<br />
* Support for fetching kernel, ramdisk, and instance images from HTTP[S] sources directly has been added to remove the dependency on Glance. [http://docs.openstack.org/developer/ironic/deploy/install-guide.html#using-ironic-as-a-standalone-service Using ironic as a standalone service]<br />
* Nodes may be placed into ''[http://docs.openstack.org/developer/ironic/deploy/install-guide.html#maintenance-mode maintenance mode]'' via REST API calls. An optional ''maintenance reason'' may be specified when doing so.<br />
<br />
=== Known Issues ===<br />
* '''Running more than one nova-compute process is not officially supported.'''<br />
** While Ironic does include a ClusteredComputeManager, which allows running more than one nova-compute process with Ironic, it should be considered experimental and has many known problems.<br />
* Drivers using the "agent" deploy mechanism do not support "rebuild --preserve-ephemeral"<br />
<br />
=== Upgrade Notes ===<br />
* IPMI Passwords are now obfuscated in REST API responses. This may be disabled by changing API policy settings.<br />
* The "agent" class of drivers now supports both whole-disk and partition-based images.<br />
* The driver_info parameters of "pxe_deploy_kernel" and "pxe_deploy_ramdisk" are deprecated in favour of "deploy_kernel" and "deploy_ramdisk". <br />
* Implementing a custom version of the vendor_passthru() method is deprecated in favour of the new @passthru decorator.<br />
<br />
==== Juno to Kilo ====<br />
The recommended upgrade process is documented here:<br />
* http://docs.openstack.org/developer/ironic/deploy/upgrade-guide.html#upgrading-from-juno-to-kilo<br />
<br />
==== Upgrading from Icehouse "nova-baremetal" ====<br />
<br />
An upgrade from an Icehouse Nova installation using the "baremetal" driver directly to Kilo Ironic is untested and unsupported. Instead, please follow this upgrade path:<br />
# Icehouse Nova "baremetal" -> Juno Nova "baremetal"<br />
# Juno Nova "baremetal" -> Juno Ironic<br />
# Juno Ironic -> Kilo Ironic<br />
<br />
Documentation for steps 1 and 2 is available at: https://wiki.openstack.org/wiki/Ironic/NovaBaremetalIronicMigration<br />
<br />
== OpenStack Documentation ==<br />
<br />
* New [http://docs.openstack.org docs.openstack.org] landing page and new web design for [http://docs.openstack.org/user-guide/ End User Guide] and [http://docs.openstack.org/user-guide-admin/ Admin User Guide]<br />
* First release of the [http://docs.openstack.org/networking-guide/ Networking Guide]<br />
* Migration to RST for [http://docs.openstack.org/user-guide/ End User Guide] and [http://docs.openstack.org/user-guide-admin/ Admin User Guide]<br />
* New specialty teams:<br />
** Install Guides<br />
** Networking Guide<br />
** High Availability Guide<br />
** User Guides (Admin and End User)<br />
* First App Tutorial sprint<br />
* Driver documentation clarification and connections</div>
<hr />
<div>= OpenStack 2015.1.0 (Kilo) Release Notes =<br />
<br />
<br />
{| style="color:#000000; border:solid 3px #A8A8A8; padding:2em; margin:2em 0; background-color:#FFFFFF; vertical-align:middle;"<br />
| style="padding:1em" | The Kilo release of OpenStack is dedicated to the loving memory of '''Chris Yeoh''', who left his family and us way too soon.<br />
|}<br />
<br />
<br />
<div style="column-count:3;-moz-column-count:3;-webkit-column-count:3"><br />
__TOC__<br />
</div><br />
<br />
<br />
== General Upgrade Notes ==<br />
<br />
* TBD<br />
<br />
== OpenStack Object Storage (Swift) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== Erasure Code (beta) ====<br />
Swift now supports an erasure-code (EC) storage policy type. This allows deployers to achieve very high durability with less raw capacity than is used in replicated storage. However, EC requires more CPU and network resources, so it is not a good fit for every use case. EC is great for storing large, infrequently accessed data in a single region.<br />
<br />
Swift's implementation of erasure codes is meant to be transparent to end users. There is no API difference between replicated storage and EC storage.<br />
<br />
To support erasure codes, Swift now depends on PyECLib and liberasurecode. liberasurecode is a pluggable library that allows for the actual EC algorithm to be implemented in a library of your choosing.<br />
<br />
Full docs are at http://swift.openstack.org/overview_erasure_code.html<br />
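To make the capacity trade-off concrete, here is a toy calculation comparing 3x replication with an illustrative k=10, m=4 EC scheme (the parameters are assumptions for illustration, not Swift defaults):<br />

```python
# Toy raw-capacity overhead comparison: replication vs. erasure coding.
# The k/m values below are illustrative, not Swift defaults.

def replication_overhead(replicas):
    """Raw bytes stored per byte of user data under N-way replication."""
    return float(replicas)

def ec_overhead(k, m):
    """Raw bytes stored per byte of user data under a k+m erasure-code
    scheme: each object is split into k data fragments plus m parity
    fragments, and any k of the k+m fragments can reconstruct it."""
    return (k + m) / k

print(replication_overhead(3))  # 3.0 -- triple replication
print(ec_overhead(10, 4))       # 1.4 -- far less raw space per byte stored
```
The lower overhead is what pays for the extra CPU and network cost at read/write and reconstruction time.<br />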
<br />
==== Composite tokens ====<br />
Composite tokens allow other OpenStack services to store data in Swift on behalf of a client so that neither the client nor the service can update the data without both parties' consent.<br />
<br />
For example, a user requests that Nova save a snapshot of a VM. Nova passes the request to Glance, and Glance writes the image to a Swift container as a set of objects. In this case, the user cannot modify the snapshot without also having a valid token from the service, nor can the service update the data without a valid token from the user. But the data is still stored in the user's account in Swift, which makes accounting simpler.<br />
<br />
Full docs are at http://swift.openstack.org/overview_backing_store.html<br />
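The dual-consent property can be sketched with a toy model (the helper names are hypothetical; the real mechanism is built on Keystone service tokens, not raw HMACs):<br />

```python
import hashlib
import hmac

def sign(key, data):
    """HMAC endorsement of the stored bytes (toy stand-in for a token)."""
    return hmac.new(key, data, hashlib.sha256).hexdigest()

def put_object(data, user_key, service_key):
    """Store data together with endorsements from both parties."""
    return {"data": data,
            "user_sig": sign(user_key, data),
            "service_sig": sign(service_key, data)}

def verify(record, user_key, service_key):
    """A record is trusted only if BOTH endorsements check out, so data
    rewritten with only one party's key fails verification."""
    ok_user = hmac.compare_digest(
        record["user_sig"], sign(user_key, record["data"]))
    ok_service = hmac.compare_digest(
        record["service_sig"], sign(service_key, record["data"]))
    return ok_user and ok_service
```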
<br />
==== Data placement updates for smaller, unbalanceable clusters ====<br />
Swift's data placement now accounts for device weight. This allows operators to gradually add new zones and regions without immediately causing a large amount of data to be moved. Also, if a cluster is unbalanced (e.g., a two-zone cluster where one zone has twice the capacity of the other), Swift will more efficiently use the available space and warn when replicas are placed without enough dispersion in the cluster.<br />
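A minimal sketch of weight-proportional placement (a toy model of the ring builder's weight handling, not Swift's actual algorithm):<br />

```python
def target_shares(device_weights, num_partitions):
    """Ideal number of partitions per device, proportional to its weight.
    A device with twice the weight should hold twice the partitions."""
    total = sum(device_weights.values())
    return {dev: num_partitions * w / total
            for dev, w in device_weights.items()}

# Two-zone cluster where zone 2's device has twice the capacity:
shares = target_shares({"z1-d1": 100, "z2-d1": 200}, 300)
print(shares)  # {'z1-d1': 100.0, 'z2-d1': 200.0}
```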
<br />
==== Global cluster replication improvements ====<br />
Replication between regions will now only move one replica per replication run. This gives the remote region a chance to replicate internally and thus avoids moving more data over the WAN.<br />
<br />
<br />
=== Known Issues ===<br />
* As a beta release, EC support is nearly feature-complete, but it lacks support for some features (such as multi-range reads) and has not had a full performance characterization. This feature relies on ssync for durability. Deployers are urged to do extensive testing and not deploy production data using an erasure code storage policy.<br />
<br />
=== Upgrade Notes ===<br />
As always, you can upgrade to this version of Swift with no end-user downtime.<br />
<br />
* In order to support erasure codes, Swift has a new dependency on PyECLib (and liberasurecode, transitively). Also, the minimum required version of eventlet has been raised.<br />
<br />
<br />
<br />
<br />
<br />
== OpenStack Compute (Nova) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== API v2.1 ====<br />
<br />
* We have the first release of the next generation of the Nova API, v2.1. v2.1 is designed to be backwards compatible with v2.0, with the addition of strong API validation. All changes to the API are discoverable via the advertised microversion. For more details see: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/api-microversions.html<br />
<br />
* For Kilo, by default we are still using the v2.0 API code to serve v2.0 API requests. It is hoped that in Liberty, v2.1 will be used to serve requests for both v2.0 and v2.1.<br />
<br />
* For Liberty, v2.0 is now frozen, and all new features will be added to the v2.1 API using the microversions mechanism. Microversion increments released with Kilo are:<br />
** Extending the keypair API to support for x509 certificates, to be used with Windows WinRM, is one of the first API features added as a microversion in the v2.1 API.<br />
** Exposing additional attributes in os-extended-server-attributes<br />
<br />
* python-novaclient does not yet have support for the v2.1 API<br />
<br />
* Policy enforcement in the Nova v2.1 API has been improved.<br />
** Policy is now enforced only at the entry point of each API.<br />
** There are no longer duplicated rules for a single API.<br />
** All v2.1 API policy rules use the 'os_compute_api' prefix, which distinguishes them from v2 API rules.<br />
** Previously, parts of the Nova API were not configurable by policy because of hard-coded permission checks at the database layer, which always required an admin user. Some of these hard-coded permission checks have been removed from the v2.1 API, making its policy configurable. The remaining hard-coded checks will be removed in Liberty.<br />
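The microversion negotiation described above works via the X-OpenStack-Nova-API-Version request header; the server-side acceptance check can be sketched as follows (the min/max bounds used here are illustrative):<br />

```python
def parse(version):
    """Turn '2.10' into (2, 10) so comparisons are numeric, not lexical
    (string comparison would wrongly order '2.10' before '2.9')."""
    major, minor = version.split(".")
    return int(major), int(minor)

def accepts(requested, server_min="2.1", server_max="2.3"):
    """A request is served only if the header value falls within the
    range the server advertises; bounds here are illustrative."""
    return parse(server_min) <= parse(requested) <= parse(server_max)

# e.g. a client sends:
#   GET /v2.1/servers
#   X-OpenStack-Nova-API-Version: 2.2
print(accepts("2.2"))   # True
print(accepts("2.10"))  # False -- above the illustrative maximum
```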
<br />
==== Upgrade Support ====<br />
<br />
* We have reduced the data migrations that happen in the DB migration scripts; these now happen in a "lazy" way inside the DB objects code. There are nova-manage commands to help force migration of the data. For more details see: http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/flavor-from-sysmeta-to-blob.html<br />
<br />
* Change https://review.openstack.org/#/c/97946/ adds database migration 267, which scans for null instances.uuid records and will fail if any are found, since the migration ultimately needs to make instances.uuid non-nullable and adds a UniqueConstraint on that column. A helper script is provided to search for null instances.uuid records before running the database migrations. Before running 'nova-manage db sync', run the helper script 'nova-manage db null_instance_uuid_scan', which, by default, will just search and dump results without changing anything. Pass the --delete option to the null_instance_uuid_scan command to automatically remove any records where instances.uuid is null.<br />
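The scan-then-delete behaviour described above can be sketched against a throwaway SQLite table (a toy schema, not Nova's real one):<br />

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE instances (id INTEGER PRIMARY KEY, uuid TEXT)")
conn.executemany("INSERT INTO instances (uuid) VALUES (?)",
                 [("aaa",), (None,), ("bbb",), (None,)])

# Default pass: just report offending rows, change nothing.
nulls = conn.execute(
    "SELECT COUNT(*) FROM instances WHERE uuid IS NULL").fetchone()[0]
print("null uuid rows:", nulls)  # null uuid rows: 2

# Equivalent of passing --delete: purge the rows so the NOT NULL +
# unique-constraint migration can then apply cleanly.
conn.execute("DELETE FROM instances WHERE uuid IS NULL")
remaining = conn.execute("SELECT COUNT(*) FROM instances").fetchone()[0]
print("rows remaining:", remaining)  # rows remaining: 2
```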
<br />
==== Scheduler ====<br />
<br />
* A selection of performance optimisations<br />
* We are in the process of making structural changes to the scheduler that will help improve our ability to evolve and improve scheduling. This should not be visible from an end user perspective.<br />
<br />
==== Cells v2 ====<br />
<br />
* Some initial parts of Cells v2 support have been added, but this feature is not yet ready to use.<br />
* New 'nova-manage api_db sync' and 'nova-manage api_db version' commands have been added for working with the new API database for cells. Nothing is using this database yet, so it is not necessary to set it up.<br />
<br />
==== Compute Drivers ====<br />
<br />
===== Hyper-V =====<br />
<br />
* Support for generation 2 VMs: https://blueprints.launchpad.net/nova/+spec/hyper-v-generation-2-vms<br />
* Support for SMB based volumes, to sit along side existing iSCSI volume support: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/hyper-v-smbfs-volume-support.html<br />
* Support for x509 certificate based key pairs: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/keypair-x509-certificates.html<br />
* Host power actions now work with Hyper-V: https://blueprints.launchpad.net/nova/+spec/hyper-v-host-power-actions<br />
<br />
===== Libvirt (KVM) =====<br />
<br />
* NFV related features:<br />
** NUMA based scheduling: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/input-output-based-numa-scheduling.html<br />
** Pinning guest vCPUs: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/virt-driver-cpu-pinning.html<br />
** Large page support: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/virt-driver-large-pages.html<br />
* vhostuser VIF driver: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/libvirt_vif_vhostuser.html<br />
* Support for running KVM on IBM System z: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/libvirt-kvm-systemz.html<br />
* Support for parallels: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/pcs-support.html<br />
* Support for SMB based volumes: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/libvirt-smbfs-volume-support.html<br />
* Quiesce using QEMU guest agent: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/quiesced-image-snapshots-with-qemu-guest-agent.html<br />
* Quobyte volume support: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/quobyte-nova-driver.html<br />
* Support for QEMU iSCSI initiator: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/qemu-built-in-iscsi-initiator.html<br />
<br />
===== VMware =====<br />
<br />
* Support for Ephemeral disks: http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/vmware-ephemeral-disk-support.html<br />
* Support for vSAN: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/vmware-vsan-support.html<br />
* Support for OVA images: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/vmware-driver-ova-support.html<br />
* Support for SPBM based storage policies: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/vmware-spbm-support.html<br />
<br />
===== Ironic =====<br />
<br />
* Support to pass flavor capabilities to ironic: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/pass-flavor-capabilities-to-ironic-virt-driver.html<br />
<br />
=== Known Issues ===<br />
<br />
* Evacuate recovery code has the potential to destroy data. On nova-compute startup, instances reported by the hypervisor are examined to see if they have moved (i.e. been evacuated) from the current host during the outage. If the determination is made that they were, then they are destroyed locally. This has the potential to choose incorrectly and destroy instances unexpectedly. On libvirt-like nodes, this can be triggered by changing the system hostname. On vmware-like nodes, this can be triggered by attempting to manage a single vcenter deployment from two different hosts (with different hostnames). This will be fixed properly in Liberty, but for now deployments that wish to disable this behavior as a preventive measure can set workarounds.destroy_after_evacuate=False. NOTE: This is not a regression and has been a flaw in the design of the evacuate feature since its introduction. There is no easy fix for this, hence this workaround to limit the potential for damage. The proposed fix in liberty is here: https://review.openstack.org/#/c/161444/.<br />
<br />
* The generated example configuration files may be missing some oslo-related configuration options.<br />
<br />
=== Upgrade Notes ===<br />
<br />
Below are changes you should be aware of when upgrading. Where possible, the git commit hash is provided so you can find more information:<br />
<br />
* Neutron ports are no longer deleted after your server is deleted, if you created them outside of Nova: 1153a46738fc3ffff98a1df9d94b5a55fdd58777<br />
* EC2 API support has been deprecated, and is likely to be removed in a future release, see f098398a836e3671c49bb884b4a1a1988053f4b2<br />
* Websocket proxies need to be upgraded in lockstep with the API nodes, as older API nodes will not be sending the access_url when authorizing console access, and newer proxy services (this commit and onward) would fail to authorize such requests 9621ccaf05900009d67cdadeb1aac27368114a61<br />
* After fully upgrading to kilo (i.e. all nodes are running kilo code), you should start a background migration of flavor information from its old home to its new home. Kilo conductor nodes will do this on the fly when necessary, but the rest of the idle data needs to be migrated in the background. This is critical to complete before the Liberty release, where support for the old location will be dropped. Use "nova-manage migrate-flavor-data" to perform this transition.<br />
* Due to the improvements in Nova v2.1 API policy enforcement, many changes were made to the v2.1 API policy. Because the v2.1 API was not released before Kilo, these changes are not backwards compatible; it is better to start from the new sample policy configuration rather than the old one.<br />
* VMware rescue VM behaviour no longer creates a new VM and instead happens in place: cd1765459a24e52e1b933c8e05517fed75ac9d41<br />
* force_config_drive = always has been deprecated, and force_config_drive = True should be used instead: c12a78b35dc910fa97df888960ef2b9a64557254<br />
* If you are running Hyper-V and deployed code that was past this commit: b4d57ab65836460d0d9cb8889ec2e6c3986c0a9b but before this commit: c8e9f8e71de64273f10498c5ad959634bfe79975, you may have problems that need to be resolved manually; see: c8e9f8e71de64273f10498c5ad959634bfe79975<br />
* Changed the default value of: multi_instance_display_name_template see: 609b2df339785bff9e30a9d67d5c853562ae3344<br />
* Please use "nova-manage db null_instance_uuid_scan" to ensure the DB migrations will apply cleanly, see: c0ea53ce353684b48303fc59393930c3fa5ade58<br />
<br />
== OpenStack Image Service (Glance) ==<br />
=== Key New Features ===<br />
* TBD<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Dashboard (Horizon) ==<br />
<br />
=== Key New Features ===<br />
* Support for Federated authentication via Web Single-Sign-On -- When configured in keystone, the user will be able to choose the authentication mechanism to use from those supported by the deployment. This feature must be enabled by changes to local_settings.py to be utilized. The related settings to enable and configure can be found [http://docs.openstack.org/developer/horizon/topics/settings.html#websso-enabled here].<br />
<br />
* Support for Theming -- A simpler mechanism to specify a custom theme for Horizon has been included, allowing the use of CSS values for Bootstrap and Horizon variables, as well as the inclusion of custom CSS. More details are available [http://docs.openstack.org/developer/horizon/topics/settings.html#custom-theme-path here].<br />
<br />
* Sahara UX Improvements -- Dramatic improvements to the Sahara user experience have been made with the addition of guided cluster creation and guided job creation pages.<br />
<br />
* Launch Instance Wizard (beta) -- A full replacement for the launch instance workflow has been implemented in AngularJS to address usability issues in the existing launch instance workflow. Due to the late inclusion date and limited testing, this feature is marked as beta for Kilo and not enabled by default. To use the new workflow, the following change to local_settings.py is required: <code>LAUNCH_INSTANCE_NG_ENABLED = True</code>. Additionally, you can disable the default launch instance wizard with the following: <code>LAUNCH_INSTANCE_LEGACY_ENABLED = False</code>. This new work is a view into future development in Horizon. <br />
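Putting the two settings together, the relevant local_settings.py fragment would look like:<br />

```python
# local_settings.py -- opt into the AngularJS launch-instance wizard (beta)
LAUNCH_INSTANCE_NG_ENABLED = True
# Optionally hide the legacy workflow once the new one is enabled.
LAUNCH_INSTANCE_LEGACY_ENABLED = False
```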
<br />
* Nova<br />
** Allow service disable/enable on Hypervisor<br />
** Migrate all instances from host<br />
** Expose serial console<br />
<br />
* Cinder<br />
** Cinder v2 by default<br />
** Managed/Unmanaged volume support -- allows admin to manage existing volumes not managed by cinder, as well as unmanage volumes.<br />
** Volume transfer support between projects<br />
** Volume encryption metadata support<br />
<br />
* Glance<br />
** View added to allow administrators to view/add/update Glance Metadata definitions<br />
<br />
* Heat<br />
** Stack Template view<br />
** Orchestration Resources Panel<br />
** Suspend/Resume actions for Stacks<br />
** Preview Stack view allows users to preview stacks specified in templates before creating them.<br />
<br />
* Trove<br />
** Resizing of Trove instances -- changing instance flavor<br />
<br />
* Ceilometer<br />
** Display IPMI meters values from Ceilometer<br />
<br />
* New Reusable AngularJS widgets in Horizon:<br />
** AngularJS table implementation<br />
*** Table drawers -- expandable table content<br />
*** improved client/server search<br />
** Transfer table widget <br />
<br />
* Configurable web root for Horizon beyond just '/'<br />
<br />
=== Known Issues ===<br />
* Volumes created from snapshots are empty - https://bugs.launchpad.net/horizon/+bug/1447288<br />
* Django 1.8 is not fully supported yet.<br />
<br />
=== Upgrade Notes ===<br />
* Django 1.7 is now supported.<br />
<br />
== OpenStack Identity (Keystone) ==<br />
<br />
=== Key New Features ===<br />
<br />
* The "assignment" backend has been split into a "resource" backend (containing domains, projects, and roles) and an "assignments" backend, containing the authorization mapping model.<br />
* Added support for trust redelegation. If allowed when the trust is initially created, a trustee can redelegate the roles from the trust via another trust.<br />
* Added support for explicitly requesting an unscoped token from Keystone, even if the user has a <code>default_project_id</code> attribute set.<br />
* Deployers may now opt into disallowing the re-scoping of scoped tokens by setting <code>[token] allow_rescope_scoped_token = false</code> in <code>keystone.conf</code>.<br />
<br />
==== Hierarchical multitenancy ====<br />
<br />
[http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#projects-v3-projects Projects] can be nested under other projects by setting the <code>parent_id</code> attribute to an existing project when [http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#create-project creating a new project]. You can also [http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#get-project discover] the parent-child hierarchy through the existing <code>/v3/projects</code> API.<br />
<br />
Role assignments can now be assigned to both [https://github.com/openstack/keystone-specs/blob/master/api/v3/identity-api-v3-os-inherit-ext.rst#assign-role-to-user-on-projects-in-a-subtree users] and [https://github.com/openstack/keystone-specs/blob/master/api/v3/identity-api-v3-os-inherit-ext.rst#assign-role-to-group-on-projects-in-a-subtree groups] on subtrees in the project hierarchy.<br />
<br />
This feature will require corresponding support across other OpenStack services (such as hierarchical quotas) in order to become broadly useful.<br />
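Creating a nested project is an ordinary create-project request body with <code>parent_id</code> pointing at an existing project (the name and ID values below are placeholders, not real identifiers):<br />

```json
{
    "project": {
        "name": "child-project",
        "domain_id": "default",
        "parent_id": "84c6e5..."
    }
}
```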
<br />
==== Fernet tokens ====<br />
<br />
Unlike UUID tokens, which must be persisted to a database, Fernet tokens are entirely non-persistent. Deployers can enable the Fernet token provider by setting <code>[token] provider = keystone.token.providers.fernet.Provider</code> in <code>keystone.conf</code>.<br />
<br />
Fernet tokens require symmetric encryption keys which can be established using <code>keystone-manage fernet_setup</code> and periodically rotated using <code>keystone-manage fernet_rotate</code>. These keys must be shared by all keystone nodes in a multi-node (or multi-region) deployment, such that tokens generated by one node can be immediately validated against another.<br />
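The key-sharing and rotation model can be sketched with a toy token (stdlib HMAC standing in for Fernet's real authenticated-encryption format; the helper names are hypothetical):<br />

```python
import base64
import hashlib
import hmac
import json

def issue(payload, key):
    """Issue a self-validating token: payload + MAC, no database row."""
    body = base64.urlsafe_b64encode(json.dumps(payload).encode())
    mac = hmac.new(key, body, hashlib.sha256).hexdigest().encode()
    return body + b"." + mac

def validate(token, keys):
    """Any node holding one of the shared keys can validate the token.
    Accepting a list of keys models 'keystone-manage fernet_rotate':
    new tokens are signed with the newest key, while older keys remain
    valid for validation until they are rotated out."""
    body, mac = token.rsplit(b".", 1)
    for key in keys:
        good = hmac.new(key, body, hashlib.sha256).hexdigest().encode()
        if hmac.compare_digest(mac, good):
            return json.loads(base64.urlsafe_b64decode(body))
    raise ValueError("invalid token")
```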
<br />
==== Identity federation ====<br />
<br />
* Keystone can now act as a [http://docs.openstack.org/developer/keystone/configure_federation.html#keystone-as-an-identity-provider-idp federated identity provider (IdP)] for another instance of keystone by issuing SAML assertions for local users, which may be ECP-wrapped.<br />
* Added support for [http://docs.openstack.org/developer/keystone/extensions/openidc.html OpenID Connect] as a federated identity authentication mechanism.<br />
* Added the ability to associate many "Remote IDs" to a single identity provider in keystone. This will help in a case where many identity providers use a common mapping.<br />
* Added the ability for a user to authenticate via a web browser with an existing IdP, through a Single Sign-On page.<br />
* Federated tokens now use the <code>token</code> authentication method, although both <code>mapped</code> and <code>saml2</code> remain available.<br />
<br />
==== LDAP ====<br />
<br />
* Filter parameters specified by API users are now processed by LDAP itself, instead of by keystone.<br />
* ''Experimental'' support was added to store domain-specific identity backend [http://docs.openstack.org/developer/keystone/configuration.html#domain-specific-drivers configuration in SQL] using the HTTP API. The primary use case for this is to create a new domain via the HTTP API, and then immediately configure a domain-specific LDAP driver for it without restarting keystone.<br />
<br />
=== Known Issues ===<br />
<br />
* TBD<br />
<br />
=== Upgrade Notes ===<br />
<br />
* XML support in Keystone has been removed as of Kilo. When upgrading from Juno to Kilo, it is recommended that references to XML and XmlBodyMiddleware be removed from the [https://github.com/openstack/keystone/blob/master/etc/keystone-paste.ini Keystone Paste configuration]. This includes removing the XML middleware filters and their references from the public_api, admin_api, api_v3, public_version_api, admin_version_api and any other pipelines that may contain the XML filters.<br />
* All previous extensions (OS-FEDERATION, OS-OAUTH1, OS-ENDPOINT-POLICY and OS-EP-FILTER) are now enabled by default, and are [http://docs.openstack.org/developer/keystone/extensions.html correspondingly marked] as either "experimental" or "stable".<br />
* [http://specs.openstack.org/openstack/openstack-specs/specs/no-downward-sql-migration.html SQL Schema Downgrades are no longer supported]. This change is the result of evaluation that downward SQL migrations are not well tested and become increasingly difficult to support with the volume of data-change that occurs in many of the migrations.<br />
* The following python libraries are now required: [https://pypi.python.org/pypi/cryptography cryptography], [https://pypi.python.org/pypi/msgpack-python msgpack-python], [https://pypi.python.org/pypi/pysaml2 pysaml2] and [https://pypi.python.org/pypi/oauthlib oauthlib]<br />
* <code>keystone.middleware.RequestBodySizeLimiter</code> is now deprecated in favor of <code>oslo_middleware.sizelimit.RequestBodySizeLimiter</code> and will be removed in Liberty.<br />
* Eventlet-specific configuration options such as <code>public_bind_host</code>, <code>bind_host</code>, <code>admin_bind_host</code>, <code>admin_port</code>, <code>public_port</code>, <code>public_workers</code>, <code>admin_workers</code>, <code>tcp_keepalive</code>, <code>tcp_keepidle</code> have been moved from the <code>[DEFAULT]</code> configuration section to a new configuration section called <code>[eventlet_server]</code>. Similarly, Eventlet-specific SSL configuration options such as <code>enable</code>, <code>certfile</code>, <code>keyfile</code>, <code>ca_certs</code>, <code>cert_required</code> have been moved from the <code>[ssl]</code> configuration section to a new configuration section called <code>[eventlet_server_ssl]</code>.<br />
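For example, a keystone.conf that previously carried these options under [DEFAULT] and [ssl] would move them as follows (all values and file paths here are illustrative):<br />

```ini
[eventlet_server]
public_bind_host = 0.0.0.0
public_port = 5000
admin_bind_host = 0.0.0.0
admin_port = 35357

[eventlet_server_ssl]
enable = True
certfile = /etc/keystone/ssl/certs/keystone.pem
keyfile = /etc/keystone/ssl/private/keystone.key
```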
<br />
== OpenStack Network Service (Neutron) ==<br />
=== Key New Features ===<br />
* DVR now supports VLANs in addition to VXLAN/GRE<br />
* ML2 Hierarchical Port Binding<br />
* New LBaaS Version 2 API<br />
* Port security support for the OVS ML2 Driver<br />
<br />
* New Plugins supported in Kilo include the following:<br />
** A10 Networks LBaaS V2 Driver<br />
** Brocade LBaaS V2 Driver<br />
** Brocade ML2 driver for MLX and ICX switches<br />
** Brocade L3 routing plugin for MLX switch<br />
** Brocade Vyatta vRouter L3 Plugin<br />
** Brocade Vyatta vRouter Firewall Driver<br />
** Brocade Vyatta vRouter VPN Driver<br />
** Cisco CSR VPNaaS Driver<br />
** Dragonflow SDN based Distributed Virtual Router L3 Plugin<br />
** Freescale FWaaS Driver<br />
** Intel Mcafee NGFW FWaaS Driver<br />
** IPSEC Strongswan VPNaaS Driver<br />
<br />
=== Known Issues ===<br />
* The Firewall-as-a-Service project is still marked as experimental for the Kilo release.<br />
* Bug [https://bugs.launchpad.net/neutron/+bug/1438819 1438819]<br />
** When a new subnet is created on an external network, all existing routers with gateways on the network will get a new address allocated from it. For IPv4 networks, this could consume the entire subnet for router gateway ports.<br />
<br />
=== Upgrade Notes ===<br />
<br />
Since Havana, Neutron has not supported an explicit lease database (https://bugs.launchpad.net/bugs/1202392). This left dead code, including an unused environment variable. In order to remove the dead code (https://review.openstack.org/#/c/152398/), a change to the dhcp.filter is required, so that the line:<br />
<br />
'''dnsmasq: EnvFilter, dnsmasq, root, NEUTRON_NETWORK_ID='''<br />
<br />
is replaced by:<br />
<br />
'''dnsmasq: CommandFilter, dnsmasq, root'''<br />
<br />
After advanced services were split into separate packages and received their own service configuration files (specifically, etc/neutron/neutron_lbaas.conf, etc/neutron/neutron_fwaas.conf and etc/neutron/neutron_vpnaas.conf), the active service provider configuration can be different after upgrade (specifically, the default load balancer (haproxy) and vpn (openswan) providers can be enabled for you even though you previously disabled them in neutron.conf). Please make sure you review the configuration after upgrade so that it reflects the desired state of service providers.<br />
<br />
Note: this will have no effect if the related service plugin is not loaded in neutron.conf.<br />
<br />
<br />
* The default value of api_workers is now equal to the number of CPUs in the host. If you currently use the default, ensure you set api_workers to a reasonable number for your installation. (https://review.openstack.org/#/c/140493/)<br />
* The neutron.allow_duplicate_networks config option is deprecated in Kilo and will be removed in Liberty where the default behavior will be to just allow multiple ports attached to an instance on the same network in Neutron. (https://review.openstack.org/163581)<br />
* The linuxbridge agent now enables VXLAN by default (https://review.openstack.org/160826)<br />
* neutron-ns-metadata-proxy can now be run as non-root (https://review.openstack.org/147437)<br />
<br />
=== Other Notes (Deprecation/EOL etc) ===<br />
<br />
* Deprecation<br />
** Brocade Classic plugin for Brocade's VDX/VCS series of hardware switches will be deprecated in the L-Release. The functionality provided by this plugin is now addressed by the ML2 Driver available for the VDX series of hardware. The plugin is slated for removal after this release cycle.<br />
<br />
<br />
== OpenStack Block Storage (Cinder) ==<br />
=== Key New Features ===<br />
* From this point forward, any new database schema upgrades will not require restarting Cinder services right away. The services are now independent of schema upgrades. This is part one of Cinder supporting rolling upgrades!<br />
* Ability to add/remove volumes from an existing consistency group. [http://docs.openstack.org/admin-guide-cloud/content/consistency-groups.html Read docs for more info].<br />
* Ability to create a consistency group from an existing consistency group snapshot. [http://docs.openstack.org/admin-guide-cloud/content/consistency-groups.html Read docs for more info].<br />
* Create more fine tuned filters/weighers to set how the scheduler will choose a volume backend. [http://docs.openstack.org/admin-guide-cloud/content/driver_filter_weighing.html Read the docs for more info].<br />
* Encrypted volumes can now be backed up using the Cinder backup service. [http://docs.openstack.org/admin-guide-cloud/content/volume-backup-restore.html Read the docs for more info].<br />
* Ability to create private volume types. This is perfect when you want to make volume types available to only a specific tenant, or to test a type before making it available to your cloud. To do so, use ''cinder type-create <name> --is-public false''.<br />
* Oversubscription with thin provision is configurable. [http://docs.openstack.org/admin-guide-cloud/content/over_subscription.html Read docs for more info].<br />
* Ability to add descriptions to volume types. To do so, use ''cinder type-create <name> <description>''.<br />
* Cinder now can return multiple iSCSI paths information so that the connector can attach volumes even when the primary path is down ([https://review.openstack.org/#/c/134681/ when connector's multipath feature is enabled] or [https://review.openstack.org/#/c/140877/ not enabled]).<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
* The 'host' config option for multiple-storage backends in cinder.conf is renamed to 'backend_host' in order to avoid a naming conflict with the 'host' to locate redis. If you use this option, please ensure your configuration files are updated.<br />
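A before/after sketch of the rename in a multi-backend cinder.conf (the backend section name and hostname are illustrative):<br />

```ini
# Before the upgrade:
#   [lvm-backend-1]
#   host = block1.example.com
#
# After the upgrade:
[lvm-backend-1]
backend_host = block1.example.com
```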
<br />
== OpenStack Telemetry (Ceilometer) ==<br />
=== Key New Features ===<br />
* Support for adding jitter to polling cycles, to ensure pollsters are not all querying a service's API at the same time<br />
* Ceilometer API RBAC support<br />
* Improved Event support:<br />
** Multi-pipeline support to enable unique processing and publishing of events<br />
** Enabled ability to capture raw notification messages for auditing and postmortem analysis<br />
** Support for persisting events into ElasticSearch<br />
** Publishing support for database, http, file, kafka and oslo.messaging-supported message queues<br />
** Option to split off the events persistence into a separate database<br />
** Telemetry now supports collecting and storing all event-type meters as events. A new option, ''disable_non_metric_meters'', was added to the configuration to make it possible to turn off storing these events as samples. For further information please see the [http://docs.openstack.org/trunk/config-reference/content/ch_configuring-openstack-telemetry.html Telemetry Configuration Reference]<br />
** The Administrator Guide in OpenStack Manuals was updated with a new [http://docs.openstack.org/admin-guide-cloud/content/section_telemetry-events.html Events section], where you can find further information about this functionality.<br />
* Improved pipeline publishing support:<br />
** Support to publish events and samples to Kafka or HTTP targets<br />
** Publish data to multiple queues<br />
* Additional meters<br />
** memory and disk meters for Hyper-V<br />
** disk meters for LibVirt<br />
** power and thermal related IPMI meters, more meters from NodeManager<br />
** ability to meter Ceph<br />
* IPv6 support enabled in Ceilometer udp publisher and collector<br />
* [http://launchpad.net/gnocchi Gnocchi] dispatch support for ceilometer-collector<br />
* Self-disabled pollster mechanism<br />
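The jitter support listed above can be illustrated with a minimal sketch (this is not Ceilometer's actual code; the function name and the 10% bound are invented for illustration):<br />

```python
import random

# Illustrative sketch of polling-cycle jitter: many pollsters with
# the same interval should not all hit a service API at the same
# instant, so each cycle is delayed by a small random offset.

def jittered_delay(interval, max_jitter_fraction=0.1, rng=random):
    """Base interval plus a random offset of up to a fraction
    (here 10%) of the interval."""
    return interval + rng.uniform(0, interval * max_jitter_fraction)

delay = jittered_delay(600)  # a 600s polling cycle starts 0-60s late
print(600 <= delay <= 660)   # True
```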
<br />
=== Known Issues ===<br />
* TBD<br />
<br />
=== Upgrade Notes ===<br />
* Deprecated meters:<br />
** The instance:&lt;flavor&gt; meter is deprecated in the Kilo release. To retrieve samples or statistics based on flavor, you can use the following queries:<br />
Statistics:<br />
''ceilometer statistics -m instance -g resource_metadata.instance_type''<br />
<br />
Samples:<br />
''ceilometer sample-list -m instance -q metadata.instance_type=&lt;value&gt;''<br />
* The middleware used to meter Swift was previously packaged in Ceilometer and is now deprecated there. It has been separated into its own library: ceilometermiddleware.<br />
** Juno configuration: http://docs.openstack.org/juno/install-guide/install/apt/content/ceilometer-swift.html<br />
** Kilo configuration: http://docs.openstack.org/kilo/install-guide/install/apt/content/ceilometer-swift.html<br />
<br />
== OpenStack Orchestration (Heat) ==<br />
<br />
=== Key New Features ===<br />
<br />
* Improved scaling using nested stacks<br />
** Heat now performs actions over RPC on any resource that is based on a template. This should help to spread the load when dealing with large, complex stacks.<br />
* oslo versioned objects<br />
** The database layer now uses oslo versioned objects to aid in future upgrades. This will allow a newly upgraded heat-engine to use a database with an older schema. Note that this will not help with upgrading to Kilo itself.<br />
* New template functions<br />
** There is a new HOT template version "20150430" which includes two new functions "digest" and "repeat". <br />
* Multiregion stacks<br />
** http://docs.openstack.org/hot-reference/content/OS__Heat__Stack.html<br />
* Software config signalling using swift<br />
** MORE DETAIL HERE<br />
* Triggering new software deployments from heatclient<br />
** MORE DETAIL HERE<br />
* stack snapshots<br />
** MORE DETAIL HERE<br />
* Access to Heat services<br />
** The admin now has similar access to services as other projects. This is in the form of "heat-manage service-list" and via horizon. This feature reports the active heat-engines.<br />
* Improved validation for nova and neutron properties.<br />
* stack hooks<br />
** MORE DETAIL HERE http://specs.openstack.org/openstack/heat-specs/specs/juno/stack-breakpoint.html<br />
* New contributed resources<br />
** Mistral resources<br />
** Keystone resources (Project, Role, User and Group), supported with a Keystone v3 server<br />
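The new "digest" and "repeat" functions in HOT version 2015-04-30 can be sketched with a minimal, hypothetical template (the parameter names and the security-group resource are invented for illustration):<br />

```yaml
heat_template_version: 2015-04-30

parameters:
  password:
    type: string
    hidden: true
  ports:
    type: comma_delimited_list
    default: "80,443"

resources:
  # "repeat" expands one template snippet per list element.
  web_security_group:
    type: OS::Neutron::SecurityGroup
    properties:
      rules:
        repeat:
          for_each:
            <%port%>: {get_param: ports}
          template:
            protocol: tcp
            port_range_min: <%port%>
            port_range_max: <%port%>

outputs:
  # "digest" hashes a value with the named algorithm.
  hashed_password:
    value: {digest: ['sha512', {get_param: password}]}
```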
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
* The default of the configuration option "num_engine_workers" has changed from 1 to a number based on the number of CPUs. This matches the way other projects set the number of workers.<br />
* The default for the configuration option "max_nested_stack_depth" has been increased to 5.<br />
* There is a new configuration option, "convergence", which is off by default. This feature is not yet complete and the option should remain off.<br />
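Operators who want to keep the previous behaviour after upgrading can pin the options above explicitly; a heat.conf sketch (values are only an illustration of the options discussed, adjust for your deployment):<br />

```ini
[DEFAULT]
# The Kilo default is now based on the number of CPUs; set to 1 to
# keep the old Juno behaviour.
num_engine_workers = 1

# The Kilo default was raised to 5.
max_nested_stack_depth = 5
```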
<br />
=== Other Notes (Deprecation/EOL etc) ===<br />
==== Deprecation ====<br />
* The following resources are deprecated: OS::Heat::HARestarter and OS::Heat::CWLiteAlarm<br />
* The CloudWatch API (heat-api-cw)<br />
<br />
== OpenStack Database service (Trove) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
None yet<br />
<br />
== OpenStack Data Processing service (Sahara) ==<br />
=== Key New Features ===<br />
* New plugins, their features and versions:<br />
** MAPR<br />
** Apache Storm<br />
** Apache Hadoop 2.6.0 was added, Apache Hadoop 2.4.1 deprecated<br />
** New services added to the CDH plugin, including HDFS, YARN, Spark, Oozie, HBase, ZooKeeper and others<br />
* Added indirect VM access for better utilization of floating IPs<br />
* Added event log support to have detailed info about provisioning progress<br />
* Optional default node group and cluster templates per plugin<br />
* Horizon updates:<br />
** Guided cluster creation and job execution<br />
** Filtering on search for objects<br />
* Editing of Node Group templates and Cluster templates implemented<br />
* Added Shell Job Type for clusters running Oozie<br />
* New Job Types endpoint to query list of the supported Job Types<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
Details: http://docs.openstack.org/developer/sahara/userdoc/upgrade.guide.html#juno-kilo<br />
<br />
* Sahara now requires policy.json configuration file.<br />
<br />
== OpenStack Bare Metal service (Ironic) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== State Machine ====<br />
Ironic now uses a formal model for the logical state of each node it manages.<ref name="states">[http://specs.openstack.org/openstack/ironic-specs/specs/kilo/new-ironic-state-machine.html#proposed-change]New Ironic State Machine</ref> This has enabled the addition of two new processes: '''cleaning''' and '''inspection'''.<br />
* Automatic disk erasure between tenants is now enabled by default. This may be extended to perform additional '''cleaning''' steps, such as re-applying firmware, resetting BIOS settings, etc.<ref name="cleaning">[http://docs.openstack.org/developer/ironic/deploy/cleaning.html]Node Cleaning</ref><br />
* Both in-band and out-of-band methods are available to '''inspect''' hardware. These methods may be used to update Node properties automatically.<ref name="inspect">[http://docs.openstack.org/developer/ironic/deploy/install-guide.html#hardware-inspection]Hardware Inspection</ref><br />
<br />
==== Version Headers ====<br />
The Ironic REST API expects a new ''X-OpenStack-Ironic-API-Version'' header to be passed with each HTTP[S] request. This header allows the client and server to negotiate a mutually supported interface.<ref name="api-version">[http://specs.openstack.org/openstack/ironic-specs/specs/kilo/api-microversions.html]REST API "micro" versions </ref> In the absence of this header, the REST service will default to a compatibility mode and yield responses compatible with Juno clients. This mode, however, prevents access to most features introduced in Kilo.<br />
<br />
==== Hardware Driver Changes ====<br />
The following new drivers were added:<br />
* [http://docs.openstack.org/developer/ironic/drivers/amt.html AMT]<br />
* [http://docs.openstack.org/developer/ironic/deploy/drivers.html#irmc iRMC]<br />
* [http://docs.openstack.org/developer/ironic/drivers/vbox.html VirtualBox (testing driver only)]<br />
<br />
<br />
The following enhancements were made to existing drivers:<br />
* [http://docs.openstack.org/developer/ironic/deploy/install-guide.html#enabling-the-configuration-drive-configdrive Configdrives] may be used with the "agent" drivers in lieu of a metadata service, if desired.<br />
* SeaMicro driver supports serial console<br />
* [http://docs.openstack.org/developer/ironic/drivers/ilo.html#uefi-secure-boot-support iLO driver supports UEFI secure boot]<br />
* [http://docs.openstack.org/developer/ironic/drivers/ilo.html#hardware-inspection iLO driver supports out-of-band node inspection]<br />
* [http://docs.openstack.org/developer/ironic/drivers/ilo.html#ilo-node-cleaning iLO driver supports resetting ilo and bios during cleaning]<br />
<br />
<br />
Support for third-party and out-of-tree drivers is enhanced by the following changes:<br />
* Drivers may store their own "internal" information about Nodes.<br />
* Drivers may register their own periodic tasks to be run by the Conductor.<br />
* ''vendor_passthru'' methods now support additional HTTP methods (e.g. PUT and POST).<br />
* ''vendor_passthru'' methods are now discoverable in the REST API. See [http://docs.openstack.org/developer/ironic/dev/drivers.html#node-vendor-passthru node vendor passthru] and [http://docs.openstack.org/developer/ironic/dev/drivers.html#driver-vendor-passthru driver vendor passthru]<br />
<br />
==== Other Changes ====<br />
* [http://docs.openstack.org/developer/ironic/deploy/install-guide.html#logical-names Logical names] may be used to address Nodes, in addition to their canonical UUID. <br />
* For servers with varied local disks, [http://docs.openstack.org/developer/ironic/deploy/install-guide.html#specifying-the-disk-for-deployment ''hints''] may be supplied that affect which disk device the OS is provisioned to.<br />
* Support for fetching kernel, ramdisk, and instance images from HTTP[S] sources directly has been added to remove the dependency on Glance. [http://docs.openstack.org/developer/ironic/deploy/install-guide.html#using-ironic-as-a-standalone-service Using ironic as a standalone service]<br />
* Nodes may be placed into ''[http://docs.openstack.org/developer/ironic/deploy/install-guide.html#maintenance-mode maintenance mode]'' via REST API calls. An optional ''maintenance reason'' may be specified when doing so.<br />
<br />
=== Known Issues ===<br />
* '''Running more than one nova-compute process is not officially supported.'''<br />
** While Ironic does include a ClusteredComputeManager, which allows running more than one nova-compute process with Ironic, it should be considered experimental and has many known problems.<br />
* Drivers using the "agent" deploy mechanism do not support "rebuild --preserve-ephemeral"<br />
<br />
=== Upgrade Notes ===<br />
* IPMI Passwords are now obfuscated in REST API responses. This may be disabled by changing API policy settings.<br />
* The "agent" class of drivers now supports both whole-disk and partition based images.<br />
* The driver_info parameters of "pxe_deploy_kernel" and "pxe_deploy_ramdisk" are deprecated in favour of "deploy_kernel" and "deploy_ramdisk". <br />
* Drivers implementing their own version of the vendor_passthru() method are deprecated in favour of the new @passthru decorator.<br />
<br />
==== Juno to Kilo ====<br />
The recommended upgrade process is documented here:<br />
* http://docs.openstack.org/developer/ironic/deploy/upgrade-guide.html#upgrading-from-juno-to-kilo<br />
<br />
==== Upgrading from Icehouse "nova-baremetal" ====<br />
<br />
An upgrade from an Icehouse Nova installation using the "baremetal" driver directly to Kilo Ironic is untested and unsupported. Instead, please follow the following upgrade path:<br />
# Icehouse Nova "baremetal" -> Juno Nova "baremetal"<br />
# Juno Nova "baremetal" -> Juno Ironic<br />
# Juno Ironic -> Kilo Ironic<br />
<br />
Documentation for steps 1 and 2 is available at: https://wiki.openstack.org/wiki/Ironic/NovaBaremetalIronicMigration<br />
<br />
== OpenStack Documentation ==<br />
<br />
* New [http://docs.openstack.org docs.openstack.org] landing page and new web design for [http://docs.openstack.org/user-guide/ End User Guide] and [http://docs.openstack.org/user-guide-admin/ Admin User Guide]<br />
* First release of the [http://docs.openstack.org/networking-guide/ Networking Guide]<br />
* Migration to RST for [http://docs.openstack.org/user-guide/ End User Guide] and [http://docs.openstack.org/user-guide-admin/ Admin User Guide]<br />
* New specialty teams:<br />
** Install Guides<br />
** Networking Guide<br />
** High Availability Guide<br />
** User Guides (Admin and End User)<br />
* First App Tutorial sprint<br />
* Driver documentation clarification and connections</div>
Asalkeld
https://wiki.openstack.org/w/index.php?title=ReleaseNotes/Kilo&diff=78598
ReleaseNotes/Kilo
2015-04-29T22:31:20Z<p>Asalkeld: /* Key New Features */</p>
<hr />
<div>= OpenStack 2015.1.0 (Kilo) Release Notes =<br />
<br />
<br />
{| style="color:#000000; border:solid 3px #A8A8A8; padding:2em; margin:2em 0; background-color:#FFFFFF; vertical-align:middle;"<br />
| style="padding:1em" | The Kilo release of OpenStack is dedicated to the loving memory of '''Chris Yeoh''', who left his family and us way too soon.<br />
|}<br />
<br />
<br />
<div style="column-count:3;-moz-column-count:3;-webkit-column-count:3"><br />
__TOC__<br />
</div><br />
<br />
<br />
== General Upgrade Notes ==<br />
<br />
* TBD<br />
<br />
== OpenStack Object Storage (Swift) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== Erasure Code (beta) ====<br />
Swift now supports an erasure-code (EC) storage policy type. This allows deployers to achieve very high durability with less raw capacity than is used in replicated storage. However, EC requires more CPU and network resources, so it is not a good fit for every use case. EC is great for storing large, infrequently accessed data in a single region.<br />
<br />
Swift's implementation of erasure codes is meant to be transparent to end users. There is no API difference between replicated storage and EC storage.<br />
<br />
To support erasure codes, Swift now depends on PyECLib and liberasurecode. liberasurecode is a pluggable library that allows for the actual EC algorithm to be implemented in a library of your choosing.<br />
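The capacity/durability trade-off can be illustrated with a toy sketch. This is NOT PyECLib's API or Swift's actual scheme, just the underlying idea: a lost fragment is rebuilt from the surviving ones, using less raw space than full replication:<br />

```python
# Toy erasure-coding illustration: split data into two fragments plus
# an XOR parity fragment. Any single lost fragment can be rebuilt
# from the other two, at 1.5x raw capacity instead of the 3x needed
# for three full replicas.

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data):
    half = len(data) // 2
    d1, d2 = data[:half], data[half:half * 2]
    return d1, d2, xor_bytes(d1, d2)

data = b"hello swift!"          # even length, for simplicity
d1, d2, parity = encode(data)

# Simulate losing the second data fragment and rebuilding it.
rebuilt_d2 = xor_bytes(d1, parity)
print(d1 + rebuilt_d2 == data)  # True
```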
<br />
Full docs are at http://swift.openstack.org/overview_erasure_code.html<br />
<br />
==== Composite tokens ====<br />
Composite tokens allow other OpenStack services to store data in Swift on behalf of a client so that neither the client nor the service can update the data without both parties' consent.<br />
<br />
An example of this is that a user requests that Nova save a snapshot of a VM. Nova passes the request to Glance, Glance writes the image to a Swift container as a set of objects. In this case, the user cannot modify the snapshot without also having a valid token from the service. Nor can the service update the data without a valid token from the user. But the data is still stored in the user's account in Swift, which makes accounting simpler.<br />
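The access rule in the snapshot example can be sketched as follows (the token values and function name are invented; this is a conceptual illustration, not Swift's authorization code):<br />

```python
# Hypothetical sketch of the composite-token rule: an update is
# accepted only when BOTH a valid user token and a valid service
# token are presented, so neither party can act alone.

VALID_USER_TOKENS = {"user-tok-1"}
VALID_SERVICE_TOKENS = {"svc-tok-glance"}

def may_update_object(user_token, service_token):
    return (user_token in VALID_USER_TOKENS
            and service_token in VALID_SERVICE_TOKENS)

print(may_update_object("user-tok-1", "svc-tok-glance"))  # True
print(may_update_object("user-tok-1", None))              # False: no service token
print(may_update_object(None, "svc-tok-glance"))          # False: no user token
```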
<br />
Full docs are at http://swift.openstack.org/overview_backing_store.html<br />
<br />
==== Data placement updates for smaller, unbalanceable clusters ====<br />
Swift's data placement now accounts for device weight. This allows operators to gradually add new zones and regions without immediately causing a large amount of data to be moved. Also, if a cluster is unbalanced (e.g. a two-zone cluster where one zone has twice the capacity of the other), Swift will more efficiently use the available space and warn when replicas are placed without enough dispersion in the cluster.<br />
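Weight-proportional placement can be sketched with a small allocation loop (this is not Swift's ring-builder code; the device names and the highest-averages rule here are only an illustration of the idea):<br />

```python
# Illustrative sketch: allocate partitions to devices in proportion
# to device weight using a highest-averages rule, so a half-weight
# device ends up holding half as much data.

def place(partition_count, weights):
    counts = {dev: 0 for dev in weights}
    for _ in range(partition_count):
        # Pick the device whose weight is least "used up" so far.
        dev = max(counts, key=lambda d: weights[d] / (counts[d] + 1))
        counts[dev] += 1
    return counts

counts = place(300, {"z1-dev": 2.0, "z2-dev": 1.0})
print(counts)  # {'z1-dev': 200, 'z2-dev': 100}
```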
<br />
==== Global cluster replication improvements ====<br />
Replication between regions will now only move one replica per replication run. This gives the remote region a chance to replicate internally and thus avoid more data moving over the WAN.<br />
<br />
<br />
=== Known Issues ===<br />
* As a beta release, EC support is nearly fully feature complete, but it is lacking support for some features (like multi-range reads) and has not had a full performance characterization. This feature relies on ssync for durability. Deployers are urged to do extensive testing and not deploy production data using an erasure code storage policy.<br />
<br />
=== Upgrade Notes ===<br />
As always, you can upgrade to this version of Swift with no end-user downtime.<br />
<br />
* In order to support erasure codes, Swift has a new dependency on PyECLib (and liberasurecode, transitively). Also, the minimum required version of eventlet has been raised.<br />
<br />
<br />
<br />
<br />
<br />
== OpenStack Compute (Nova) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== API v2.1 ====<br />
<br />
* This is the first release of the next generation of the Nova API, v2.1. v2.1 is designed to be backwards compatible with v2.0, with the addition of strong API validation. All changes to the API are discoverable via the advertised microversion. For more details see: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/api-microversions.html<br />
<br />
* For Kilo, by default we are still using the v2.0 API code to serve v2.0 API requests. It is hoped that in Liberty v2.1 will be used to serve requests for both v2.0 and v2.1.<br />
<br />
* Going into Liberty, v2.0 is now frozen, and all new features will be added to the v2.1 API using the microversions mechanism. Microversion increments released with Kilo are:<br />
** Extending the keypair API to support x509 certificates, to be used with Windows WinRM; this is one of the first API features added as a microversion in the v2.1 API.<br />
** Exposing additional attributes in os-extended-server-attributes<br />
<br />
* python-novaclient does not yet have support for the v2.1 API<br />
<br />
* Policy enforcement in the Nova v2.1 API has been improved.<br />
** Policy is now enforced only at the entry point of each API.<br />
** There are no longer duplicated rules for a single API.<br />
** All v2.1 API policy rules use the 'os_compute_api' prefix, which distinguishes them from v2 API rules.<br />
** Due to hard-coded permission checks at the db layer, part of the Nova API was previously not configurable by policy and always required an admin user. Some of these hard-coded permission checks have been removed in the v2.1 API, making its policy configurable. The rest of the hard-coded permission checks will be removed in Liberty.<br />
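The microversion negotiation described above can be sketched in a few lines (this is not Nova's implementation; the version bounds are hypothetical, and in the real API the version is carried in the ''X-OpenStack-Nova-API-Version'' request header):<br />

```python
# Minimal sketch of microversion negotiation: a missing header means
# the oldest (backwards-compatible) behaviour; a requested version is
# honoured only if the server supports it.

MIN_VERSION = (2, 1)
MAX_VERSION = (2, 3)   # hypothetical server maximum

def negotiate(header_value):
    """Return the microversion to serve for an incoming request."""
    if header_value is None:
        return MIN_VERSION            # backwards-compatible default
    major, minor = (int(p) for p in header_value.split("."))
    requested = (major, minor)
    if not (MIN_VERSION <= requested <= MAX_VERSION):
        raise ValueError("406 Not Acceptable: unsupported version")
    return requested

print(negotiate(None))   # (2, 1)
print(negotiate("2.3"))  # (2, 3)
```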
<br />
==== Upgrade Support ====<br />
<br />
* We have reduced the data migrations that happen in the DB migration scripts, this now happens in a "lazy" way inside the DB objects code. There are nova-manage commands to help force migration of the data. For more details see: http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/flavor-from-sysmeta-to-blob.html<br />
<br />
* Change https://review.openstack.org/#/c/97946/ adds database migration 267, which scans for null instances.uuid records and will fail if any are found, since the migration ultimately needs to make instances.uuid non-nullable and adds a UniqueConstraint on that column. A helper script is provided to search for null instances.uuid records before running the database migrations. Before running 'nova-manage db sync', run the helper script 'nova-manage db null_instance_uuid_scan', which, by default, will just search and dump results; it does not change anything. Pass the --delete option to the null_instance_uuid_scan command to automatically remove any records where instances.uuid is null.<br />
<br />
==== Scheduler ====<br />
<br />
* A selection of performance optimisations<br />
* We are in the process of making structural changes to the scheduler that will help improve our ability to evolve and improve scheduling. This should not be visible from an end user perspective.<br />
<br />
==== Cells v2 ====<br />
<br />
* Some initial parts of cells v2 support have been added, but this feature is not yet ready to use.<br />
* New 'nova-manage api_db sync' and 'nova-manage api_db version' commands have been added for working with the new API database for cells; nothing uses this database yet, so it is not necessary to set it up.<br />
<br />
==== Compute Drivers ====<br />
<br />
===== Hyper-V =====<br />
<br />
* Support for generation 2 VMs: https://blueprints.launchpad.net/nova/+spec/hyper-v-generation-2-vms<br />
* Support for SMB based volumes, to sit along side existing iSCSI volume support: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/hyper-v-smbfs-volume-support.html<br />
* Support for x509 certificate based key pairs: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/keypair-x509-certificates.html<br />
* Host power actions now work with Hyper-V: https://blueprints.launchpad.net/nova/+spec/hyper-v-host-power-actions<br />
<br />
===== Libvirt (KVM) =====<br />
<br />
* NFV related features:<br />
** NUMA based scheduling: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/input-output-based-numa-scheduling.html<br />
** Pinning guest vCPUs: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/virt-driver-cpu-pinning.html<br />
** Large page support: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/virt-driver-large-pages.html<br />
* vhostuser VIF driver: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/libvirt_vif_vhostuser.html<br />
* Support for running KVM on IBM System z: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/libvirt-kvm-systemz.html<br />
* Support for parallels: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/pcs-support.html<br />
* Support for SMB based volumes: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/libvirt-smbfs-volume-support.html<br />
* Quiesce using QEMU guest agent: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/quiesced-image-snapshots-with-qemu-guest-agent.html<br />
* Quobyte volume support: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/quobyte-nova-driver.html<br />
* Support for QEMU iSCSI initiator: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/qemu-built-in-iscsi-initiator.html<br />
<br />
===== VMware =====<br />
<br />
* Support for Ephemeral disks: http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/vmware-ephemeral-disk-support.html<br />
* Support for vSAN: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/vmware-vsan-support.html<br />
* Support for OVA-based images: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/vmware-driver-ova-support.html<br />
* Support for SPBM based storage policies: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/vmware-spbm-support.html<br />
<br />
===== Ironic =====<br />
<br />
* Support to pass flavor capabilities to ironic: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/pass-flavor-capabilities-to-ironic-virt-driver.html<br />
<br />
=== Known Issues ===<br />
<br />
* Evacuate recovery code has the potential to destroy data. On nova-compute startup, instances reported by the hypervisor are examined to see if they have moved (i.e. been evacuated) from the current host during the outage. If the determination is made that they were, then they are destroyed locally. This has the potential to choose incorrectly and destroy instances unexpectedly. On libvirt-like nodes, this can be triggered by changing the system hostname. On vmware-like nodes, this can be triggered by attempting to manage a single vcenter deployment from two different hosts (with different hostnames). This will be fixed properly in Liberty, but for now deployments that wish to disable this behavior as a preventive measure can set workarounds.destroy_after_evacuate=False. NOTE: This is not a regression and has been a flaw in the design of the evacuate feature since its introduction. There is no easy fix for this, hence this workaround to limit the potential for damage. The proposed fix in liberty is here: https://review.openstack.org/#/c/161444/.<br />
<br />
* The generated example configuration files may be missing some oslo-related configuration options<br />
<br />
=== Upgrade Notes ===<br />
<br />
Below are changes you should be aware of when upgrading. Where possible, the git commit hash is provided so you can find more information:<br />
<br />
* Neutron ports are no longer deleted after your server is deleted, if you created them outside of Nova: 1153a46738fc3ffff98a1df9d94b5a55fdd58777<br />
* EC2 API support has been deprecated, and is likely to be removed in kilo, see f098398a836e3671c49bb884b4a1a1988053f4b2<br />
* Websocket proxies need to be upgraded in a lockstep with the API nodes, as older API nodes will not be sending the access_url when authorizing console access, and newer proxy services (this commit and onward) would fail to authorize such requests 9621ccaf05900009d67cdadeb1aac27368114a61<br />
* After fully upgrading to Kilo (i.e. all nodes are running Kilo code), you should start a background migration of flavor information from its old home to its new home. Kilo conductor nodes will do this on the fly when necessary, but the rest of the idle data needs to be migrated in the background. This is critical to complete before the Liberty release, where support for the old location will be dropped. Use "nova-manage migrate-flavor-data" to perform this transition.<br />
* Due to the improvements in Nova v2.1 API policy enforcement, many changes have been made to the v2.1 API policy. Because the v2.1 API was not released previously, these changes are not backwards compatible. It is better to start from the new policy sample configuration instead of the old one.<br />
* VMware rescue VM behaviour no longer creates a new VM and instead happens in place: cd1765459a24e52e1b933c8e05517fed75ac9d41<br />
* force_config_drive = always has been deprecated, and force_config_drive = True should be used instead: c12a78b35dc910fa97df888960ef2b9a64557254<br />
* If you are running Hyper-V and deployed code that was past this commit: b4d57ab65836460d0d9cb8889ec2e6c3986c0a9b but before this commit: c8e9f8e71de64273f10498c5ad959634bfe79975, you may have problems that need to be resolved manually; see: c8e9f8e71de64273f10498c5ad959634bfe79975<br />
* Changed the default value of: multi_instance_display_name_template see: 609b2df339785bff9e30a9d67d5c853562ae3344<br />
* Please use "nova-manage db null_instance_uuid_scan" to ensure the DB migrations will apply cleanly, see: c0ea53ce353684b48303fc59393930c3fa5ade58<br />
<br />
== OpenStack Image Service (Glance) ==<br />
=== Key New Features ===<br />
* TBD<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Dashboard (Horizon) ==<br />
<br />
=== Key New Features ===<br />
* TBD<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Identity (Keystone) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== Hierarchical multitenancy ====<br />
<br />
[http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#projects-v3-projects Projects] can be nested under other projects by setting the <code>parent_id</code> attribute to an existing project when [http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#create-project creating a new project]. You can also [http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#get-project discover] the parent-child hierarchy through the existing <code>/v3/projects</code> API.<br />
<br />
Role assignments can now be assigned to both [https://github.com/openstack/keystone-specs/blob/master/api/v3/identity-api-v3-os-inherit-ext.rst#assign-role-to-user-on-projects-in-a-subtree users] and [https://github.com/openstack/keystone-specs/blob/master/api/v3/identity-api-v3-os-inherit-ext.rst#assign-role-to-group-on-projects-in-a-subtree groups] on subtrees in the project hierarchy.<br />
<br />
This feature will require corresponding support across other OpenStack services (such as hierarchical quotas) in order to become broadly useful.<br />
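The <code>parent_id</code> relationship and the subtree queries that inherited role assignments apply to can be sketched as follows (the project names are invented; this is a conceptual illustration, not Keystone's code):<br />

```python
# Sketch of hierarchical multitenancy: projects nested via parent_id,
# with a subtree query like the one inherited role assignments use.

projects = {
    "acme":   {"parent_id": None},
    "dev":    {"parent_id": "acme"},
    "dev-ci": {"parent_id": "dev"},
    "sales":  {"parent_id": "acme"},
}

def subtree(root):
    """All projects below (not including) `root`."""
    children = [p for p, v in projects.items() if v["parent_id"] == root]
    result = []
    for child in children:
        result.append(child)
        result.extend(subtree(child))
    return result

print(sorted(subtree("acme")))  # ['dev', 'dev-ci', 'sales']
```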
<br />
==== Fernet tokens ====<br />
<br />
Unlike UUID tokens, which must be persisted to a database, Fernet tokens are entirely non-persistent. Deployers can enable the Fernet provider by setting <code>[token] provider = keystone.token.providers.fernet.Provider</code> in <code>keystone.conf</code>.<br />
<br />
Fernet tokens require symmetric encryption keys which can be established using <code>keystone-manage fernet_setup</code> and periodically rotated using <code>keystone-manage fernet_rotate</code>. These keys must be shared by all keystone nodes in a multi-node (or multi-region) deployment, such that tokens generated by one node can be immediately validated against another.<br />
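The rotation scheme can be illustrated conceptually (this is NOT the real Fernet cryptography, just the key-repository behaviour: the primary key issues new tokens, while every key in the repository is tried on validation, so tokens issued before a rotation remain valid until their key is retired):<br />

```python
# Conceptual sketch of Fernet-style key rotation. Real Fernet keys
# are symmetric encryption keys; here "signing" is simulated with a
# plain prefix so the repository mechanics are visible.

key_repository = ["key-2", "key-1"]   # index 0 is the primary key

def issue(payload):
    return f"{key_repository[0]}:{payload}"

def validate(token):
    key, _, payload = token.partition(":")
    if key not in key_repository:
        raise ValueError("token signed with a retired key")
    return payload

old_token = "key-1:alice"      # issued before the last rotation
print(validate(issue("bob")))  # 'bob'   (primary key)
print(validate(old_token))     # 'alice' (older key still accepted)
```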
<br />
==== Identity Federation ====<br />
<br />
Added support for [http://docs.openstack.org/developer/keystone/extensions/openidc.html OpenID Connect] as a federated identity authentication mechanism.<br />
<br />
If using a trusted identity provider for federating users, Keystone now provides the ability for a user to authenticate with Horizon using the credentials of an existing IdP, through a Single Sign-On page.<br />
<br />
* [http://docs.openstack.org/developer/keystone/configure_federation.html#keystone-as-an-identity-provider-idp Keystone-to-Keystone Identity Federation] is now considered stable.<br />
** IDP Remote ID Registration - Keystone added the ability to associate many `Remote IDs` to a single Identity-Provider. This will help in a case where there are many identity providers that use a common mapping.<br />
** ECP Wrapped Assertions - In addition to generating a SAML assertion about a user, Keystone has also added the ability to create an ECP wrapped SAML assertion. This will create a more seamless integration for Keystone to Keystone clients.<br />
<br />
==== Miscellaneous ====<br />
<br />
* Pluggable Assignment Model - TBD - More Info<br />
* Assignment / Resource Split - TBD - More Info<br />
* Extensions to Core functionality - The Keystone team is moving away from the concept of "extensions", all previous extensions are now enabled by default. These now core functions will be marked as Experimental or Stable (as assessed by the Keystone team). The previous extensions that are now enabled by default are: OS-FEDERATION, OS-OAUTH1, OS-ENDPOINT-POLICY and OS-EP-FILTER<br />
* (Experimental) Per-Domain Identity Backend Configuration can be stored in SQL - TBD More Info<br />
* Trusts now support redelegation. If allowed when the trust is created, a trustee can redelegate the roles from the trust via another trust.<br />
* It is now possible to request an explicitly unscoped token from Keystone.<br />
<br />
=== Known Issues ===<br />
<br />
* TBD<br />
<br />
=== Upgrade Notes ===<br />
<br />
* XML support in Keystone has been removed as of Kilo. When upgrading from Juno to Kilo, it is recommended that references to XML and XmlBodyMiddleware be removed from the [https://github.com/openstack/keystone/blob/master/etc/keystone-paste.ini Keystone Paste configuration]. This includes removing the XML middleware filters and there references from the public_api, admin_api, api_v3, public_version_api, admin_version_api and any other pipelines that may contain the XML filters.<br />
* [http://specs.openstack.org/openstack/openstack-specs/specs/no-downward-sql-migration.html SQL Schema Downgrades are no longer supported]. This change is the result of evaluation that downward SQL migrations are not well tested and become increasingly difficult to support with the volume of data-change that occurs in many of the migrations.<br />
* The following python libraries are now required: [https://pypi.python.org/pypi/cryptography cryptography], [https://pypi.python.org/pypi/msgpack-python msgpack-python], [https://pypi.python.org/pypi/pysaml2 pysaml2] and [https://pypi.python.org/pypi/oauthlib oauthlib]<br />
* <code>keystone.middleware.RequestBodySizeLimiter</code> is now deprecated in favor of <code>oslo_middleware.sizelimit.RequestBodySizeLimiter</code> and will be removed in Liberty.<br />
<br />
== OpenStack Network Service (Neutron) ==<br />
=== Key New Features ===<br />
* DVR now supports VLANs in addition to VXLAN/GRE<br />
* ML2 Hierarchical Port Binding<br />
* New LBaaS Version 2 API<br />
* Port security support for the OVS ML2 driver<br />
<br />
* New Plugins supported in Kilo include the following:<br />
** A10 Networks LBaaS V2 Driver<br />
** Brocade LBaaS V2 Driver<br />
** Brocade ML2 driver for MLX and ICX switches<br />
** Brocade L3 routing plugin for MLX switch<br />
** Brocade Vyatta vRouter L3 Plugin<br />
** Brocade Vyatta vRouter Firewall Driver<br />
** Brocade Vyatta vRouter VPN Driver<br />
** Cisco CSR VPNaaS Driver<br />
** Dragonflow SDN based Distributed Virtual Router L3 Plugin<br />
** Freescale FWaaS Driver<br />
** Intel Mcafee NGFW FWaaS Driver<br />
** IPSEC Strongswan VPNaaS Driver<br />
<br />
=== Known Issues ===<br />
* The Firewall-as-a-Service project is still marked as experimental for the Kilo release.<br />
* Bug [https://bugs.launchpad.net/neutron/+bug/1438819 1438819]<br />
** When a new subnet is created on an external network, all existing routers with gateways on the network will get a new address allocated from it. For IPv4 networks, this could consume the entire subnet for router gateway ports.<br />
<br />
=== Upgrade Notes ===<br />
<br />
Since Havana, Neutron has not supported an explicit lease database (https://bugs.launchpad.net/bugs/1202392). This left dead code, including an unused environment variable. In order to remove the dead code (https://review.openstack.org/#/c/152398/), a change to the dhcp filter is required, so that the line:<br />
<br />
'''dnsmasq: EnvFilter, dnsmasq, root, NEUTRON_NETWORK_ID='''<br />
<br />
is replaced by:<br />
<br />
'''dnsmasq: CommandFilter, dnsmasq, root'''<br />
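For reference, a sketch of how the surrounding [Filters] section of the rootwrap dhcp filters file would then look; the file path is typical but may vary by distribution:<br />

```ini
# /etc/neutron/rootwrap.d/dhcp.filters (illustrative excerpt)
[Filters]
# Old entry, to be removed:
#   dnsmasq: EnvFilter, dnsmasq, root, NEUTRON_NETWORK_ID=
# New entry:
dnsmasq: CommandFilter, dnsmasq, root
```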
<br />
After advanced services were split into separate packages with their own service configuration files (specifically, etc/neutron/neutron_lbaas.conf, etc/neutron/neutron_fwaas.conf and etc/neutron/neutron_vpnaas.conf), the active service provider configuration may differ after upgrade (specifically, the default load balancer (haproxy) and VPN (openswan) providers may be enabled for you even though you previously disabled them in neutron.conf). Please review your configuration after upgrade so that it reflects the desired state of service providers.<br />
<br />
Note: this will have no effect if the related service plugin is not loaded in neutron.conf.<br />
<br />
<br />
* The default value of api_workers is now equal to the number of CPUs on the host. If you previously relied on the default, ensure api_workers is set to a number suitable for your installation. (https://review.openstack.org/#/c/140493/)<br />
* The neutron.allow_duplicate_networks config option is deprecated in Kilo and will be removed in Liberty where the default behavior will be to just allow multiple ports attached to an instance on the same network in Neutron. (https://review.openstack.org/163581)<br />
* The linuxbridge agent now enables VXLAN by default (https://review.openstack.org/160826)<br />
* neutron-ns-metadata-proxy can now be run as non-root (https://review.openstack.org/147437)<br />
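If the new per-CPU default is too high for a deployment, the worker count can be pinned explicitly in neutron.conf; this is a minimal sketch and the value shown is only an illustration:<br />

```ini
[DEFAULT]
# Kilo default is the number of CPUs on the host; pin explicitly if
# you relied on the previous default (value below is illustrative).
api_workers = 4
```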
<br />
=== Other Notes (Deprecation/EOL etc) ===<br />
<br />
* Deprecation<br />
** Brocade Classic plugin for Brocade's VDX/VCS series of hardware switches will be deprecated in the L-Release. The functionality provided by this plugin is now addressed by the ML2 Driver available for the VDX series of hardware. The plugin is slated for removal after this release cycle.<br />
<br />
<br />
== OpenStack Block Storage (Cinder) ==<br />
=== Key New Features ===<br />
* From this point forward, new database schema upgrades will not require restarting Cinder services right away; the services are now independent of schema upgrades. This is the first step toward Cinder supporting rolling upgrades!<br />
* Ability to add/remove volumes from an existing consistency group. [http://docs.openstack.org/admin-guide-cloud/content/consistency-groups.html Read docs for more info].<br />
* Ability to create a consistency group from an existing consistency group snapshot. [http://docs.openstack.org/admin-guide-cloud/content/consistency-groups.html Read docs for more info].<br />
* Create more fine tuned filters/weighers to set how the scheduler will choose a volume backend. [http://docs.openstack.org/admin-guide-cloud/content/driver_filter_weighing.html Read the docs for more info].<br />
* Encrypted volumes can now be backed up using the Cinder backup service. [http://docs.openstack.org/admin-guide-cloud/content/volume-backup-restore.html Read the docs for more info].<br />
* Ability to create private volume types. This is perfect when you want to make a volume type available to only a specific tenant, or to test it before making it available to your cloud. To do so use ''cinder type-create <name> --is-public false''.<br />
* Oversubscription with thin provision is configurable. [http://docs.openstack.org/admin-guide-cloud/content/over_subscription.html Read docs for more info].<br />
* Ability to add descriptions to volume types. To do so use ''cinder type-create <name> <description>''.<br />
* Cinder now can return multiple iSCSI paths information so that the connector can attach volumes even when the primary path is down ([https://review.openstack.org/#/c/134681/ when connector's multipath feature is enabled] or [https://review.openstack.org/#/c/140877/ not enabled]).<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
* The 'host' config option for multiple-storage backends in cinder.conf has been renamed to 'backend_host' in order to avoid a naming conflict with the 'host' option used to locate redis. If you use this option, please ensure your configuration files are updated.<br />
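A minimal multi-backend sketch of the rename in cinder.conf; the backend name and driver path below are illustrative, not prescriptive:<br />

```ini
[DEFAULT]
enabled_backends = lvm-1

[lvm-1]
# Driver path is illustrative; use the driver for your storage backend.
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
# Pre-Kilo this option was named 'host'; it is now 'backend_host'
# to avoid clashing with the option used to locate redis.
backend_host = cinder-backend-1
```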
<br />
== OpenStack Telemetry (Ceilometer) ==<br />
=== Key New Features ===<br />
* Support for adding jitter to polling cycles, to ensure pollsters are not all querying a service's API at the same time<br />
* Ceilometer API RBAC support<br />
* Improved Event support:<br />
** Multi-pipeline support to enable unique processing and publishing of events<br />
** Enabled ability to capture raw notification messages for auditing and postmortem analysis<br />
** Support for persisting events into ElasticSearch<br />
** Publishing support to database, http, file, kafka and oslo.messaging supported message queues<br />
** Option to split off the events persistence into a separate database<br />
** Telemetry now supports collecting and storing all event type meters as events. A new option, ''disable_non_metric_meters'', was added to the configuration to make it possible to turn off storing these events as samples. For further information please see the [http://docs.openstack.org/trunk/config-reference/content/ch_configuring-openstack-telemetry.html Telemetry Configuration Reference]<br />
** The Administrator Guide in OpenStack Manuals was updated with a new [http://docs.openstack.org/admin-guide-cloud/content/section_telemetry-events.html Events section], where you can find further information about this functionality.<br />
* Improved pipeline publishing support:<br />
** Support to publish events and samples to Kafka or HTTP targets<br />
** Publish data to multiple queues<br />
* Additional meters<br />
** memory and disk meters for Hyper-V<br />
** disk meters for LibVirt<br />
** power and thermal related IPMI meters, more meters from NodeManager<br />
** ability to meter Ceph<br />
* IPv6 support enabled in Ceilometer udp publisher and collector<br />
* [http://launchpad.net/gnocchi Gnocchi] dispatch support for ceilometer-collector<br />
* Self-disabled pollster mechanism<br />
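As a hedged sketch of the ''disable_non_metric_meters'' option mentioned above; the section placement shown is an assumption, so verify it against the Telemetry Configuration Reference for your release:<br />

```ini
# Illustrative ceilometer.conf excerpt; confirm the section name
# against the Telemetry Configuration Reference.
[notification]
# Do not additionally store event-type meters as samples
# (they are captured as events).
disable_non_metric_meters = true
```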
<br />
=== Known Issues ===<br />
* TBD<br />
<br />
=== Upgrade Notes ===<br />
* Deprecated meters:<br />
** The instance:<flavor> meter is deprecated in the Kilo release. In order to retrieve samples or statistics based on flavor you can use the following queries:<br />
**: statistics:<br />
**:: <code>ceilometer statistics -m instance -g resource_metadata.instance_type</code><br />
**: samples:<br />
**:: <code>ceilometer sample-list -m instance -q metadata.instance_type=<value></code><br />
* Middleware used to meter Swift was previously packaged in Ceilometer and is now deprecated there. It has been separated into its own library: ceilometermiddleware.<br />
** Juno configuration: http://docs.openstack.org/juno/install-guide/install/apt/content/ceilometer-swift.html<br />
** Kilo configuration: http://docs.openstack.org/kilo/install-guide/install/apt/content/ceilometer-swift.html<br />
<br />
== OpenStack Orchestration (Heat) ==<br />
<br />
=== Key New Features ===<br />
<br />
* Improved scaling using nested stacks<br />
** Heat now dispatches actions over RPC for any resource that is based on a template. This should help to spread the load when dealing with large, complex stacks.<br />
* oslo versioned objects<br />
** The database layer now uses oslo versioned objects to aid in future upgrades. This will allow a newly upgraded heat-engine to use a database with an older schema. Note that this will not help with upgrading to kilo.<br />
* New template functions<br />
** There is a new HOT template version "20150430" which includes two new functions "digest" and "repeat". <br />
* Multiregion stacks<br />
** MORE DETAIL HERE<br />
* Software config signalling using swift<br />
** MORE DETAIL HERE<br />
* Triggering new software deployments from heatclient<br />
** MORE DETAIL HERE<br />
* stack snapshots<br />
** MORE DETAIL HERE<br />
* Access to Heat services<br />
** Administrators now have access to service status, similar to other projects, via "heat-manage service-list" and via Horizon. This feature reports the active heat-engine processes.<br />
* Improved validation for nova and neutron properties.<br />
* stack hooks<br />
** MORE DETAIL HERE http://specs.openstack.org/openstack/heat-specs/specs/juno/stack-breakpoint.html<br />
* New contributed resources<br />
** Mistral resources<br />
** Keystone resources, supported with a Keystone v3 server, for Project, Role, User and Group<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
* The default of the configuration option "num_engine_workers" has changed from 1 to a number based on the number of CPUs. This is now the same as the way other projects set the number of workers.<br />
* The default for the configuration option "max_nested_stack_depth" has been increased to 5.<br />
* There is a new configuration option, "convergence", which is off by default. This feature is not yet complete and the option should remain off.<br />
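The upgrade notes above can be summarized as a heat.conf sketch; the worker count is illustrative, and the option names are taken as given in these notes:<br />

```ini
[DEFAULT]
# Kilo default is now based on CPU count; pin explicitly if you need
# the previous single-worker behaviour (value is illustrative).
num_engine_workers = 4
# New default in Kilo (was lower previously):
max_nested_stack_depth = 5
# Incomplete feature; leave disabled:
convergence = false
```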
<br />
=== Other Notes (Deprecation/EOL etc) ===<br />
==== Deprecation ====<br />
* The following resources are deprecated: OS::Heat::HARestarter and OS::Heat::CWLiteAlarm<br />
* The CloudWatch API (heat-api-cw)<br />
<br />
== OpenStack Database service (Trove) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
None yet<br />
<br />
== OpenStack Data Processing service (Sahara) ==<br />
=== Key New Features ===<br />
* New plugins, their features and versions:<br />
** MAPR<br />
** Apache Storm<br />
** Apache Hadoop 2.6.0 was added, Apache Hadoop 2.4.1 deprecated<br />
** New services added to the CDH plugin, including HDFS, YARN, Spark, Oozie, HBase, ZooKeeper and others<br />
* Added indirect VM access for better utilization of floating IPs<br />
* Added event log support to have detailed info about provisioning progress<br />
* Optional default node group and cluster templates per plugin<br />
* Horizon updates:<br />
** Guided cluster creation and job execution<br />
** Filtering on search for objects<br />
* Editing of Node Group templates and Cluster templates implemented<br />
* Added Shell Job Type for clusters running Oozie<br />
* New Job Types endpoint to query list of the supported Job Types<br />
<br />
=== Known Issues ===<br />
-<br />
<br />
=== Upgrade Notes ===<br />
Details: http://docs.openstack.org/developer/sahara/userdoc/upgrade.guide.html#juno-kilo<br />
<br />
* Sahara now requires a policy.json configuration file.<br />
<br />
== OpenStack Bare Metal service (Ironic) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== State Machine ====<br />
Ironic now uses a formal model for the logical state of each node it manages.<ref name="states">[http://specs.openstack.org/openstack/ironic-specs/specs/kilo/new-ironic-state-machine.html#proposed-change]New Ironic State Machine</ref> This has enabled the addition of two new processes: '''cleaning''' and '''inspection'''.<br />
* Automatic disk erasure between tenants is now enabled by default. This may be extended to perform additional '''cleaning''' steps, such as re-applying firmware, resetting BIOS settings, etc.<ref name="cleaning">[http://docs.openstack.org/developer/ironic/deploy/cleaning.html]Node Cleaning</ref><br />
* Both in-band and out-of-band methods are available to '''inspect''' hardware. These methods may be used to update Node properties automatically.<ref name="inspect">[http://docs.openstack.org/developer/ironic/deploy/install-guide.html#hardware-inspection]Hardware Inspection</ref><br />
<br />
==== Version Headers ====<br />
The Ironic REST API expects a new ''X-OpenStack-Ironic-API-Version'' header to be passed with each HTTP[S] request. This header allows client and server to negotiate a mutually supported interface.<ref name="api-version">[http://specs.openstack.org/openstack/ironic-specs/specs/kilo/api-microversions.html]REST API "micro" versions </ref> In the absence of this header, the REST service will default to a compatibility mode and yield responses compatible with Juno clients. This mode, however, prevents access to most features introduced in Kilo.<br />
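As a minimal sketch of what a client sends, the helper below builds the request headers for a version-negotiated Ironic call; the version value, token and function name are illustrative assumptions, not part of any official client:<br />

```python
# Build headers for an Ironic REST request that opts into a specific
# API microversion. Values here are illustrative; without the version
# header, the server falls back to Juno-compatible behaviour.
def ironic_headers(token, api_version="1.6"):
    return {
        "X-Auth-Token": token,
        "X-OpenStack-Ironic-API-Version": api_version,
        "Accept": "application/json",
    }

headers = ironic_headers("example-token")
print(headers["X-OpenStack-Ironic-API-Version"])
```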
<br />
==== Hardware Driver Changes ====<br />
The following new drivers were added:<br />
* [http://docs.openstack.org/developer/ironic/drivers/amt.html AMT]<br />
* [http://docs.openstack.org/developer/ironic/deploy/drivers.html#irmc iRMC]<br />
* [http://docs.openstack.org/developer/ironic/drivers/vbox.html VirtualBox (testing driver only)]<br />
<br />
<br />
The following enhancements were made to existing drivers:<br />
* [http://docs.openstack.org/developer/ironic/deploy/install-guide.html#enabling-the-configuration-drive-configdrive Configdrives] may be used with the "agent" drivers in lieu of a metadata service, if desired.<br />
* SeaMicro driver supports serial console<br />
* [http://docs.openstack.org/developer/ironic/drivers/ilo.html#uefi-secure-boot-support iLO driver supports UEFI secure boot]<br />
* [http://docs.openstack.org/developer/ironic/drivers/ilo.html#hardware-inspection iLO driver supports out-of-band node inspection]<br />
* [http://docs.openstack.org/developer/ironic/drivers/ilo.html#ilo-node-cleaning iLO driver supports resetting ilo and bios during cleaning]<br />
<br />
<br />
Support for third-party and out-of-tree drivers is enhanced by the following changes:<br />
* Drivers may store their own "internal" information about Nodes.<br />
* Drivers may register their own periodic tasks to be run by the Conductor.<br />
* ''vendor_passthru'' methods now support additional HTTP methods (e.g., PUT and POST).<br />
* ''vendor_passthru'' methods are now discoverable in the REST API. See [http://docs.openstack.org/developer/ironic/dev/drivers.html#node-vendor-passthru node vendor passthru] and [http://docs.openstack.org/developer/ironic/dev/drivers.html#driver-vendor-passthru driver vendor passthru]<br />
<br />
==== Other Changes ====<br />
* [http://docs.openstack.org/developer/ironic/deploy/install-guide.html#logical-names Logical names] may be used to address Nodes, in addition to their canonical UUID. <br />
* For servers with varied local disks, [http://docs.openstack.org/developer/ironic/deploy/install-guide.html#specifying-the-disk-for-deployment ''hints''] may be supplied that affect which disk device the OS is provisioned to.<br />
* Support for fetching kernel, ramdisk, and instance images from HTTP[S] sources directly has been added to remove the dependency on Glance. [http://docs.openstack.org/developer/ironic/deploy/install-guide.html#using-ironic-as-a-standalone-service Using ironic as a standalone service]<br />
* Nodes may be placed into ''[http://docs.openstack.org/developer/ironic/deploy/install-guide.html#maintenance-mode maintenance mode]'' via REST API calls. An optional ''maintenance reason'' may be specified when doing so.<br />
<br />
=== Known Issues ===<br />
* '''Running more than one nova-compute process is not officially supported.'''<br />
** While Ironic does include a ClusteredComputeManager, which allows running more than one nova-compute process with Ironic, it should be considered experimental and has many known problems.<br />
* Drivers using the "agent" deploy mechanism do not support "rebuild --preserve-ephemeral"<br />
<br />
=== Upgrade Notes ===<br />
* IPMI Passwords are now obfuscated in REST API responses. This may be disabled by changing API policy settings.<br />
* The "agent" class of drivers now supports both whole-disk and partition-based images.<br />
* The driver_info parameters of "pxe_deploy_kernel" and "pxe_deploy_ramdisk" are deprecated in favour of "deploy_kernel" and "deploy_ramdisk". <br />
* Implementing a custom version of the vendor_passthru() method in a driver has been deprecated in favour of the new @passthru decorator.<br />
<br />
==== Juno to Kilo ====<br />
The recommended upgrade process is documented here:<br />
* http://docs.openstack.org/developer/ironic/deploy/upgrade-guide.html#upgrading-from-juno-to-kilo<br />
<br />
==== Upgrading from Icehouse "nova-baremetal" ====<br />
<br />
An upgrade from an Icehouse Nova installation using the "baremetal" driver directly to Kilo Ironic is untested and unsupported. Instead, please follow this upgrade path:<br />
# Icehouse Nova "baremetal" -> Juno Nova "baremetal"<br />
# Juno Nova "baremetal" -> Juno Ironic<br />
# Juno Ironic -> Kilo Ironic<br />
<br />
Documentation for steps 1 and 2 is available at: https://wiki.openstack.org/wiki/Ironic/NovaBaremetalIronicMigration<br />
<br />
== OpenStack Documentation ==<br />
<br />
* New [http://docs.openstack.org docs.openstack.org] landing page and new web design for [http://docs.openstack.org/user-guide/ End User Guide] and [http://docs.openstack.org/user-guide-admin/ Admin User Guide]<br />
* First release of the [http://docs.openstack.org/networking-guide/ Networking Guide]<br />
* Migration to RST for [http://docs.openstack.org/user-guide/ End User Guide] and [http://docs.openstack.org/user-guide-admin/ Admin User Guide]<br />
* New specialty teams:<br />
** Install Guides<br />
** Networking Guide<br />
** High Availability Guide<br />
** User Guides (Admin and End User)<br />
* First App Tutorial sprint<br />
* Driver documentation clarification and connections</div>
<hr />
<div>= OpenStack 2015.1.0 (Kilo) Release Notes =<br />
<br />
<br />
{| style="color:#000000; border:solid 3px #A8A8A8; padding:2em; margin:2em 0; background-color:#FFFFFF; vertical-align:middle;"<br />
| style="padding:1em" | The Kilo release of OpenStack is dedicated to the loving memory of '''Chris Yeoh''', who left his family and us way too soon.<br />
|}<br />
<br />
<br />
<div style="column-count:3;-moz-column-count:3;-webkit-column-count:3"><br />
__TOC__<br />
</div><br />
<br />
<br />
== General Upgrade Notes ==<br />
<br />
* TBD<br />
<br />
== OpenStack Object Storage (Swift) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== Erasure Code (beta) ====<br />
Swift now supports an erasure-code (EC) storage policy type. This allows deployers to achieve very high durability with less raw capacity than is used in replicated storage. However, EC requires more CPU and network resources, so it is not a good fit for every use case. EC is great for storing large, infrequently accessed data in a single region.<br />
<br />
Swift's implementation of erasure codes is meant to be transparent to end users. There is no API difference between replicated storage and EC storage.<br />
<br />
To support erasure codes, Swift now depends on PyECLib and liberasurecode. liberasurecode is a pluggable library that allows for the actual EC algorithm to be implemented in a library of your choosing.<br />
<br />
Full docs are at http://swift.openstack.org/overview_erasure_code.html<br />
<br />
==== Composite tokens ====<br />
Composite tokens allow other OpenStack services to store data in Swift on behalf of a client so that neither the client nor the service can update the data without both parties' consent.<br />
<br />
An example of this is a user requesting that Nova save a snapshot of a VM. Nova passes the request to Glance, and Glance writes the image to a Swift container as a set of objects. In this case, the user cannot modify the snapshot without also having a valid token from the service. Nor can the service update the data without a valid token from the user. But the data is still stored in the user's account in Swift, which makes accounting simpler.<br />
<br />
Full docs are at http://swift.openstack.org/overview_backing_store.html<br />
<br />
==== Data placement updates for smaller, unbalanceable clusters ====<br />
Swift's data placement now accounts for device weight. This allows operators to gradually add new zones and regions without immediately causing a large amount of data to be moved. Also, if a cluster is unbalanced (e.g. a two-zone cluster where one zone has twice the capacity of the other), Swift will more efficiently use the available space and warn when replicas are placed without enough dispersion in the cluster.<br />
<br />
==== Global cluster replication improvements ====<br />
Replication between regions will now only move one replica per replication run. This gives the remote region a chance to replicate internally and thus avoid moving more data over the WAN.<br />
<br />
<br />
=== Known Issues ===<br />
* As a beta release, EC support is nearly fully feature complete, but it is lacking support for some features (like multi-range reads) and has not had a full performance characterization. This feature relies on ssync for durability. Deployers are urged to do extensive testing and not deploy production data using an erasure code storage policy.<br />
<br />
=== Upgrade Notes ===<br />
As always, you can upgrade to this version of Swift with no end-user downtime<br />
<br />
* In order to support erasure codes, Swift has a new dependency on PyECLib (and liberasurecode, transitively). Also, the minimum required version of eventlet has been raised.<br />
<br />
<br />
<br />
<br />
<br />
== OpenStack Compute (Nova) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== API v2.1 ====<br />
<br />
* We have the first release of the next generation of the Nova API, v2.1. The v2.1 API is designed to be backwards compatible with v2.0, with the addition of strong API validation. All changes to the API are discoverable via the advertised microversion. For more details see: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/api-microversions.html<br />
<br />
* For Kilo, by default we are still using v2.0 API code to serve v2.0 API requests. It is hoped that in Liberty, v2.1 will be used to serve requests for both v2.0 and v2.1.<br />
<br />
* For Liberty, the v2.0 API is frozen; all new features will be added to the v2.1 API using the microversions mechanism. Microversion increments released with Kilo are:<br />
** Extending the keypair API to support x509 certificates, to be used with Windows WinRM; this is one of the first API features added as a microversion in the v2.1 API.<br />
** Exposing additional attributes in os-extended-server-attributes<br />
<br />
* python-novaclient does not yet have support for the v2.1 API<br />
<br />
* Policy enforcement in the Nova v2.1 API has been improved.<br />
** Policy is now enforced only at the API entry point.<br />
** There are no longer duplicated rules for a single API.<br />
** All v2.1 API policy rules use the 'os_compute_api' prefix, distinguishing them from v2 API rules.<br />
** Previously, hard-coded permission checks at the database layer meant parts of the Nova API were not configurable by policy and always required an admin user. Some of these hard-coded checks have been removed in the v2.1 API, making its policy configurable. The remaining hard-coded checks will be removed in Liberty.<br />
<br />
==== Upgrade Support ====<br />
<br />
* We have reduced the data migrations that happen in the DB migration scripts, this now happens in a "lazy" way inside the DB objects code. There are nova-manage commands to help force migration of the data. For more details see: http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/flavor-from-sysmeta-to-blob.html<br />
<br />
* Change https://review.openstack.org/#/c/97946/ adds database migration 267, which scans for null instances.uuid records and will fail if any are found, since the migration ultimately needs to make instances.uuid non-nullable and adds a UniqueConstraint on that column. A helper script is provided to search for null instances.uuid records before running the database migrations. Before running 'nova-manage db sync', run the helper script 'nova-manage db null_instance_uuid_scan' which, by default, will just search and dump results; it does not change anything. Pass the --delete option to the null_instance_uuid_scan command to automatically remove any records where instances.uuid is null.<br />
<br />
==== Scheduler ====<br />
<br />
* A selection of performance optimisations<br />
* We are in the process of making structural changes to the scheduler that will help improve our ability to evolve and improve scheduling. This should not be visible from an end user perspective.<br />
<br />
==== Cells v2 ====<br />
<br />
* Some initial parts of cells v2 support have been added, but this feature is not yet ready for use.<br />
* New 'nova-manage api_db sync' and 'nova-manage api_db version' commands for working with the new API database for cells; nothing uses this database yet, so it is not necessary to set it up.<br />
<br />
==== Compute Drivers ====<br />
<br />
===== Hyper-V =====<br />
<br />
* Support for generation 2 VMs: https://blueprints.launchpad.net/nova/+spec/hyper-v-generation-2-vms<br />
* Support for SMB based volumes, to sit along side existing iSCSI volume support: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/hyper-v-smbfs-volume-support.html<br />
* Support for x509 certificate based key pairs: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/keypair-x509-certificates.html<br />
* Host power actions now work with Hyper-V: https://blueprints.launchpad.net/nova/+spec/hyper-v-host-power-actions<br />
<br />
===== Libvirt (KVM) =====<br />
<br />
* NFV related features:<br />
** NUMA based scheduling: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/input-output-based-numa-scheduling.html<br />
** Pinning guest vCPUs: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/virt-driver-cpu-pinning.html<br />
** Large page support: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/virt-driver-large-pages.html<br />
* vhostuser VIF driver: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/libvirt_vif_vhostuser.html<br />
* Support for running KVM on IBM System z: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/libvirt-kvm-systemz.html<br />
* Support for parallels: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/pcs-support.html<br />
* Support for SMB based volumes: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/libvirt-smbfs-volume-support.html<br />
* Quiesce using QEMU guest agent: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/quiesced-image-snapshots-with-qemu-guest-agent.html<br />
* Quobyte volume support: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/quobyte-nova-driver.html<br />
* Support for QEMU iSCSI initiator: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/qemu-built-in-iscsi-initiator.html<br />
<br />
===== VMware =====<br />
<br />
* Support for Ephemeral disks: http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/vmware-ephemeral-disk-support.html<br />
* Support for vSAN: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/vmware-vsan-support.html<br />
* Support for OVA images: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/vmware-driver-ova-support.html<br />
* Support for SPBM based storage policies: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/vmware-spbm-support.html<br />
<br />
===== Ironic =====<br />
<br />
* Support to pass flavor capabilities to ironic: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/pass-flavor-capabilities-to-ironic-virt-driver.html<br />
<br />
=== Known Issues ===<br />
<br />
* Evacuate recovery code has the potential to destroy data. On nova-compute startup, instances reported by the hypervisor are examined to see if they have moved (i.e. been evacuated) from the current host during the outage. If the determination is made that they were, then they are destroyed locally. This has the potential to choose incorrectly and destroy instances unexpectedly. On libvirt-like nodes, this can be triggered by changing the system hostname. On vmware-like nodes, this can be triggered by attempting to manage a single vcenter deployment from two different hosts (with different hostnames). This will be fixed properly in Liberty, but for now deployments that wish to disable this behavior as a preventive measure can set workarounds.destroy_after_evacuate=False. NOTE: This is not a regression and has been a flaw in the design of the evacuate feature since its introduction. There is no easy fix for this, hence this workaround to limit the potential for damage. The proposed fix in liberty is here: https://review.openstack.org/#/c/161444/.<br />
<br />
* The generated config examples may be missing some oslo-related configuration options<br />
<br />
=== Upgrade Notes ===<br />
<br />
Below are changes you should be aware of when upgrading. Where possible, the git commit hash is provided so you can find more information:<br />
<br />
* Neutron ports are no longer deleted after your server is deleted, if you created them outside of Nova: 1153a46738fc3ffff98a1df9d94b5a55fdd58777<br />
* EC2 API support has been deprecated, and is likely to be removed in kilo, see f098398a836e3671c49bb884b4a1a1988053f4b2<br />
* Websocket proxies need to be upgraded in a lockstep with the API nodes, as older API nodes will not be sending the access_url when authorizing console access, and newer proxy services (this commit and onward) would fail to authorize such requests 9621ccaf05900009d67cdadeb1aac27368114a61<br />
* After fully upgrading to Kilo (i.e. all nodes are running Kilo code), you should start a background migration of flavor information from its old home to its new home. Kilo conductor nodes will do this on the fly when necessary, but the rest of the idle data needs to be migrated in the background. This is critical to complete before the Liberty release, where support for the old location will be dropped. Use "nova-manage migrate-flavor-data" to perform this transition.<br />
* Policy enforcement for the Nova v2.1 API has been improved, which resulted in many changes to the v2.1 API policy rules. Because the v2.1 API had not been released previously, these changes are not backwards compatible; it is better to start from the new sample policy configuration than from the old one.<br />
* VMware rescue VM behaviour no longer creates a new VM and instead happens in place: cd1765459a24e52e1b933c8e05517fed75ac9d41<br />
* force_config_drive = always has been deprecated, and force_config_drive = True should be used instead: c12a78b35dc910fa97df888960ef2b9a64557254<br />
* If you are running Hyper-V and deployed code after commit b4d57ab65836460d0d9cb8889ec2e6c3986c0a9b but before commit c8e9f8e71de64273f10498c5ad959634bfe79975, you may have problems that need to be resolved manually; see: c8e9f8e71de64273f10498c5ad959634bfe79975<br />
* The default value of multi_instance_display_name_template has changed; see: 609b2df339785bff9e30a9d67d5c853562ae3344<br />
* Please use "nova-manage db null_instance_uuid_scan" to ensure the DB migrations will apply cleanly, see: c0ea53ce353684b48303fc59393930c3fa5ade58<br />
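Taken together, the notes above suggest roughly the following Juno-to-Kilo database upgrade sequence (a sketch only; run it against your own deployment, and back up the database first):<br />
<br />
```shell
# 1. Check for instance records with a NULL uuid (read-only by default).
nova-manage db null_instance_uuid_scan
# Optionally clean them up:
#   nova-manage db null_instance_uuid_scan --delete

# 2. Apply the kilo schema migrations.
nova-manage db sync

# 3. Once all nodes are running kilo code, migrate flavor data
#    in the background. This must complete before Liberty.
nova-manage migrate-flavor-data
```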
<br />
== OpenStack Image Service (Glance) ==<br />
=== Key New Features ===<br />
* TBD<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Dashboard (Horizon) ==<br />
<br />
=== Key New Features ===<br />
* TBD<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Identity (Keystone) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== Hierarchical multitenancy ====<br />
<br />
[http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#projects-v3-projects Projects] can be nested under other projects by setting the <code>parent_id</code> attribute to an existing project when [http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#create-project creating a new project]. You can also [http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#get-project discover] the parent-child hierarchy through the existing <code>/v3/projects</code> API.<br />
<br />
Role assignments can now be assigned to both [https://github.com/openstack/keystone-specs/blob/master/api/v3/identity-api-v3-os-inherit-ext.rst#assign-role-to-user-on-projects-in-a-subtree users] and [https://github.com/openstack/keystone-specs/blob/master/api/v3/identity-api-v3-os-inherit-ext.rst#assign-role-to-group-on-projects-in-a-subtree groups] on subtrees in the project hierarchy.<br />
<br />
This feature will require corresponding support across other OpenStack services (such as hierarchical quotas) in order to become broadly useful.<br />
<br />
==== Fernet tokens ====<br />
<br />
Unlike UUID tokens, which must be persisted to a database, Fernet tokens are entirely non-persistent. Deployers can enable them by setting <code>[token] provider = keystone.token.providers.fernet.Provider</code> in <code>keystone.conf</code>.<br />
<br />
Fernet tokens require symmetric encryption keys which can be established using <code>keystone-manage fernet_setup</code> and periodically rotated using <code>keystone-manage fernet_rotate</code>. These keys must be shared by all keystone nodes in a multi-node (or multi-region) deployment, such that tokens generated by one node can be immediately validated against another.<br />
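Keystone's Fernet provider builds on the Fernet primitive from the <code>cryptography</code> library (now a required dependency, see the upgrade notes). The sketch below illustrates the key-sharing and rotation idea, not keystone's actual key-repository layout; key names and payloads are illustrative:<br />
<br />
```python
# Illustrative sketch of symmetric-key encryption and rotation,
# using cryptography's Fernet/MultiFernet primitives.
from cryptography.fernet import Fernet, MultiFernet

old_key = Fernet.generate_key()   # previously the primary key
new_key = Fernet.generate_key()   # promoted by a "rotation"

# A token minted while old_key was primary.
token = Fernet(old_key).encrypt(b"payload issued before rotation")

# After rotation, encrypt with the new primary key but still accept
# tokens minted under older keys -- which is why every keystone node
# must share the same set of keys.
f = MultiFernet([Fernet(new_key), Fernet(old_key)])
assert f.decrypt(token) == b"payload issued before rotation"

new_token = f.encrypt(b"payload issued after rotation")
assert f.decrypt(new_token) == b"payload issued after rotation"
```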
<br />
==== Identity Federation ====<br />
<br />
Added support for [http://docs.openstack.org/developer/keystone/extensions/openidc.html OpenID Connect] as a federated identity authentication mechanism.<br />
<br />
If using a trusted identity provider for federating users, Keystone now provides the ability for a user to authenticate with Horizon using the credentials of an existing IdP, through a Single Sign-On page.<br />
<br />
==== Miscellaneous ====<br />
<br />
* Pluggable Assignment Model - TBD - More Info<br />
* Assignment / Resource Split - TBD - More Info<br />
* Extensions to Core functionality - The Keystone team is moving away from the concept of "extensions"; all previous extensions are now enabled by default. These now-core functions will be marked as Experimental or Stable (as assessed by the Keystone team). The previous extensions that are now enabled by default are: OS-FEDERATION, OS-OAUTH1, OS-ENDPOINT-POLICY and OS-EP-FILTER<br />
* (Experimental) Per-Domain Identity Backend Configuration can be stored in SQL - TBD More Info<br />
* [http://docs.openstack.org/developer/keystone/configure_federation.html#keystone-as-an-identity-provider-idp Keystone-to-Keystone Identity Federation] is now considered stable.<br />
** IDP Remote ID Registration - Keystone added the ability to associate many `Remote IDs` to a single Identity-Provider. This will help in a case where there are many identity providers that use a common mapping.<br />
** ECP Wrapped Assertions - In addition to generating a SAML assertion about a user, Keystone has also added the ability to create an ECP wrapped SAML assertion. This will create a more seamless integration for Keystone to Keystone clients.<br />
* Keystone now supports [http://docs.openstack.org/developer/keystone/extensions/openidc.html OpenID Connect] as a federated identity authentication mechanism.<br />
* Trusts now support redelegation. If allowed when the trust is created, a trustee can redelegate the roles from the trust via another trust.<br />
* It is now possible to request an explicitly unscoped token from Keystone.<br />
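For example, an explicitly unscoped token can be requested by setting <code>"scope": "unscoped"</code> in the v3 <code>POST /v3/auth/tokens</code> request body (illustrative request; user names and credentials are placeholders):<br />
<br />
```json
{
    "auth": {
        "identity": {
            "methods": ["password"],
            "password": {
                "user": {
                    "name": "demo",
                    "domain": {"id": "default"},
                    "password": "secret"
                }
            }
        },
        "scope": "unscoped"
    }
}
```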
<br />
=== Known Issues ===<br />
<br />
* TBD<br />
<br />
=== Upgrade Notes ===<br />
<br />
* XML support in Keystone has been removed as of Kilo. When upgrading from Juno to Kilo, it is recommended that references to XML and XmlBodyMiddleware be removed from the [https://github.com/openstack/keystone/blob/master/etc/keystone-paste.ini Keystone Paste configuration]. This includes removing the XML middleware filters and their references from the public_api, admin_api, api_v3, public_version_api, admin_version_api and any other pipelines that may contain the XML filters.<br />
* [http://specs.openstack.org/openstack/openstack-specs/specs/no-downward-sql-migration.html SQL Schema Downgrades are no longer supported]. This change is the result of an evaluation showing that downward SQL migrations are not well tested and become increasingly difficult to support with the volume of data change that occurs in many of the migrations.<br />
* The following python libraries are now required: [https://pypi.python.org/pypi/cryptography cryptography], [https://pypi.python.org/pypi/msgpack-python msgpack-python], [https://pypi.python.org/pypi/pysaml2 pysaml2] and [https://pypi.python.org/pypi/oauthlib oauthlib]<br />
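As a sketch of the Paste change described above, the XML filter section is deleted and the filter name removed from each pipeline that references it. The filter factory path and the other filter names shown here are illustrative; edit the pipelines your deployment actually has:<br />
<br />
```ini
# keystone-paste.ini -- Juno-era fragment to remove:
#   [filter:xml_body]
#   paste.filter_factory = keystone.middleware:XmlBodyMiddleware.factory
#
#   [pipeline:public_api]
#   pipeline = sizelimit url_normalize xml_body json_body public_service

# Kilo: the same pipeline without the XML filter
[pipeline:public_api]
pipeline = sizelimit url_normalize json_body public_service
```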
<br />
== OpenStack Network Service (Neutron) ==<br />
=== Key New Features ===<br />
* DVR now supports VLANs in addition to VXLAN/GRE<br />
* ML2 Hierarchical Port Binding<br />
* New LBaaS Version 2 API<br />
* Portsecurity support for the OVS ML2 Driver<br />
<br />
* New Plugins supported in Kilo include the following:<br />
** A10 Networks LBaaS V2 Driver<br />
** Brocade LBaaS V2 Driver<br />
** Brocade ML2 driver for MLX and ICX switches<br />
** Brocade L3 routing plugin for MLX switch<br />
** Brocade Vyatta vRouter L3 Plugin<br />
** Brocade Vyatta vRouter Firewall Driver<br />
** Brocade Vyatta vRouter VPN Driver<br />
** Cisco CSR VPNaaS Driver<br />
** Dragonflow SDN based Distributed Virtual Router L3 Plugin<br />
** Freescale FWaaS Driver<br />
** Intel Mcafee NGFW FWaaS Driver<br />
** IPSEC Strongswan VPNaaS Driver<br />
<br />
=== Known Issues ===<br />
* The Firewall-as-a-Service project is still marked as experimental for the Kilo release.<br />
* Bug [https://bugs.launchpad.net/neutron/+bug/1438819 1438819]<br />
** When a new subnet is created on an external network, all existing routers with gateways on the network will get a new address allocated from it. For IPv4 networks, this could consume the entire subnet for router gateway ports.<br />
<br />
=== Upgrade Notes ===<br />
<br />
Since Havana, Neutron no longer supports an explicit lease database (https://bugs.launchpad.net/bugs/1202392). This left behind dead code, including an unused environment variable. In order to remove the dead code (https://review.openstack.org/#/c/152398/), a change to the dhcp filter is required, so that the line:<br />
<br />
'''dnsmasq: EnvFilter, dnsmasq, root, NEUTRON_NETWORK_ID='''<br />
<br />
is replaced by:<br />
<br />
'''dnsmasq: CommandFilter, dnsmasq, root'''<br />
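In context, the change lands in the <code>[Filters]</code> section of the dhcp rootwrap filters file (the path and surrounding entries below are illustrative; check your packaging):<br />
<br />
```ini
# e.g. /etc/neutron/rootwrap.d/dhcp.filters
[Filters]
# Old entry, remove:
#   dnsmasq: EnvFilter, dnsmasq, root, NEUTRON_NETWORK_ID=
# New entry:
dnsmasq: CommandFilter, dnsmasq, root
```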
<br />
After the advanced services were split into separate packages with their own service configuration files (specifically, etc/neutron/neutron_lbaas.conf, etc/neutron/neutron_fwaas.conf and etc/neutron/neutron_vpnaas.conf), the active service provider configuration may differ after upgrade. In particular, the default load balancer (haproxy) and VPN (openswan) providers can be enabled for you even though you previously disabled them in neutron.conf. Please review your configuration after upgrading to make sure it reflects the desired set of service providers.<br />
<br />
Note: this will have no effect if the related service plugin is not loaded in neutron.conf.<br />
<br />
<br />
* The default value of api_workers is now equal to the number of CPUs in the host. If you currently use the default, ensure you set api_workers to a reasonable number for your installation. (https://review.openstack.org/#/c/140493/)<br />
* The neutron.allow_duplicate_networks config option is deprecated in Kilo and will be removed in Liberty where the default behavior will be to just allow multiple ports attached to an instance on the same network in Neutron. (https://review.openstack.org/163581)<br />
* The linuxbridge agent now enables VXLAN by default (https://review.openstack.org/160826)<br />
* neutron-ns-metadata-proxy can now be run as non-root (https://review.openstack.org/147437)<br />
<br />
=== Other Notes (Deprecation/EOL etc) ===<br />
<br />
* Deprecation<br />
** Brocade Classic plugin for Brocade's VDX/VCS series of hardware switches will be deprecated in the L-Release. The functionality provided by this plugin is now addressed by the ML2 Driver available for the VDX series of hardware. The plugin is slated for removal after this release cycle.<br />
<br />
<br />
== OpenStack Block Storage (Cinder) ==<br />
=== Key New Features ===<br />
* From this point forward, new database schema upgrades will not require restarting Cinder services right away; the services are now independent of schema upgrades. This is the first step toward Cinder supporting rolling upgrades!<br />
* Ability to add/remove volumes from an existing consistency group. [http://docs.openstack.org/admin-guide-cloud/content/consistency-groups.html Read docs for more info].<br />
* Ability to create a consistency group from an existing consistency group snapshot. [http://docs.openstack.org/admin-guide-cloud/content/consistency-groups.html Read docs for more info].<br />
* Create more finely tuned filters/weighers to control how the scheduler chooses a volume backend. [http://docs.openstack.org/admin-guide-cloud/content/driver_filter_weighing.html Read the docs for more info].<br />
* Encrypted volumes can now be backed up using the Cinder backup service. [http://docs.openstack.org/admin-guide-cloud/content/volume-backup-restore.html Read the docs for more info].<br />
* Ability to create private volume types. This is useful when you want to make a volume type available to only a specific tenant, or to test a type before making it available to your whole cloud. To do so, use ''cinder type-create <name> --is-public''.<br />
* Oversubscription with thin provision is configurable. [http://docs.openstack.org/admin-guide-cloud/content/over_subscription.html Read docs for more info].<br />
* Ability to add descriptions to volume types. To do so, use ''cinder type-create <name> <description>''.<br />
* Cinder can now return information about multiple iSCSI paths so that the connector can attach volumes even when the primary path is down ([https://review.openstack.org/#/c/134681/ when the connector's multipath feature is enabled] or [https://review.openstack.org/#/c/140877/ not enabled]).<br />
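A sketch of the volume-type features listed above, as CLI commands (type names and the description are placeholders; flag syntax may vary with your python-cinderclient version):<br />
<br />
```shell
# Create a volume type visible only to tenants granted access.
cinder type-create my-private-type --is-public false

# Create a volume type with a description.
cinder type-create fast-ssd "High-performance SSD-backed volumes"
```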
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
* The 'host' config option for multiple storage backends in cinder.conf has been renamed to 'backend_host' in order to avoid a naming conflict with the 'host' option used to locate Redis. If you use this option, please ensure your configuration files are updated.<br />
<br />
== OpenStack Telemetry (Ceilometer) ==<br />
=== Key New Features ===<br />
* Support for adding jitter to polling cycles to ensure pollsters are not all querying a service's API at the same time<br />
* Ceilometer API RBAC support<br />
* Improved Event support:<br />
** Multi-pipeline support to enable unique processing and publishing of events<br />
** Enabled ability to capture raw notification messages for auditing and postmortem analysis<br />
** Support for persisting events into ElasticSearch<br />
** Publishing support for database, http, file, kafka and oslo.messaging-supported message queues<br />
** Option to split off the events persistence into a separate database<br />
** Telemetry now supports collecting and storing all event-type meters as events. A new option, ''disable_non_metric_meters'', was added to the configuration to allow turning off storing these events as samples. For further information please see the [http://docs.openstack.org/trunk/config-reference/content/ch_configuring-openstack-telemetry.html Telemetry Configuration Reference]<br />
** The Administrator Guide in OpenStack Manuals was updated with a new [http://docs.openstack.org/admin-guide-cloud/content/section_telemetry-events.html Events section], where you can find further information about this functionality.<br />
* Improved pipeline publishing support:<br />
** Support to publish events and samples to Kafka or HTTP targets<br />
** Publish data to multiple queues<br />
* Additional meters<br />
** memory and disk meters for Hyper-V<br />
** disk meters for LibVirt<br />
** power and thermal related IPMI meters, more meters from NodeManager<br />
** ability to meter Ceph<br />
* IPv6 support enabled in Ceilometer udp publisher and collector<br />
* [http://launchpad.net/gnocchi Gnocchi] dispatch support for ceilometer-collector<br />
* Self-disabled pollster mechanism<br />
<br />
=== Known Issues ===<br />
* TBD<br />
<br />
=== Upgrade Notes ===<br />
* Deprecated meters:<br />
** The instance:<flavor> meter is deprecated in the Kilo release. In order to retrieve samples or statistics based on flavor, you can use the following queries:<br />
*** For statistics: <code>ceilometer statistics -m instance -g resource_metadata.instance_type</code><br />
*** For samples: <code>ceilometer sample-list -m instance -q metadata.instance_type=<value></code><br />
* The middleware used to meter Swift was previously packaged in Ceilometer and is now deprecated there. It has been separated into its own library: ceilometermiddleware.<br />
** Juno configuration: http://docs.openstack.org/juno/install-guide/install/apt/content/ceilometer-swift.html<br />
** Kilo configuration: http://docs.openstack.org/kilo/install-guide/install/apt/content/ceilometer-swift.html<br />
<br />
== OpenStack Orchestration (Heat) ==<br />
<br />
=== Key New Features ===<br />
<br />
* Improved scaling using nested stacks<br />
** Heat now makes RPC calls for actions on any resource that is based on a template. This should help spread the load when dealing with large, complex stacks.<br />
* oslo versioned objects<br />
** The database layer now uses oslo versioned objects to aid in future upgrades. This will allow a newly upgraded heat-engine to use a database with an older schema. Note that this will not help with upgrading to kilo.<br />
* New template functions<br />
** There is a new HOT template version "20150430" which includes two new functions "digest" and "repeat". <br />
* Multiregion stacks<br />
** MORE DETAIL HERE<br />
* Software config signalling using swift<br />
** MORE DETAIL HERE<br />
* Triggering new software deployments from heatclient<br />
** MORE DETAIL HERE<br />
* stack snapshots<br />
** MORE DETAIL HERE<br />
* Access to Heat services<br />
** The admin now has similar access to services as other projects. This is in the form of "heat-manage service-list" and via horizon. This feature reports the active heat-engines.<br />
* nova and neutron property constraints<br />
** MORE DETAIL HERE<br />
* stack hooks<br />
** MORE DETAIL HERE<br />
* New contributed resources<br />
** Mistral resources<br />
** keystone resources, supported with a Keystone v3 server, for Project, Role, User and Group<br />
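The two new HOT functions mentioned above can be sketched in a template as follows (resource and parameter names are illustrative; the template version key is written in its dated form):<br />
<br />
```yaml
heat_template_version: 2015-04-30

parameters:
  db_password:
    type: string
    hidden: true

resources:
  web_security_group:
    type: OS::Neutron::SecurityGroup
    properties:
      rules:
        # "repeat" expands one rule per listed port.
        repeat:
          for_each:
            <%port%>: [22, 80, 443]
          template:
            protocol: tcp
            port_range_min: <%port%>
            port_range_max: <%port%>

outputs:
  hashed_password:
    # "digest" applies the named hash algorithm to a value.
    value: {digest: [sha512, {get_param: db_password}]}
```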
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
* The default of the configuration option "num_engine_workers" has changed from 1 to a number based on the number of CPUs. This is now the same as the way other projects set the number of workers.<br />
* The default for the configuration option "max_nested_stack_depth" has been increased to 5.<br />
* There is a new configuration option "convergence", which is off by default. This feature is not yet complete and the option should remain off.<br />
<br />
=== Other Notes (Deprecation/EOL etc) ===<br />
==== Deprecation ====<br />
* The following resources are deprecated: OS::Heat::HARestarter and OS::Heat::CWLiteAlarm<br />
* The CloudWatch API (heat-api-cw)<br />
<br />
== OpenStack Database service (Trove) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
None yet<br />
<br />
== OpenStack Data Processing service (Sahara) ==<br />
=== Key New Features ===<br />
* New plugins, their features and versions:<br />
** MAPR<br />
** Apache Storm<br />
** Apache Hadoop 2.6.0 was added, Apache Hadoop 2.4.1 deprecated<br />
** New services added to the CDH plugin, including HDFS, YARN, Spark, Oozie, HBase, ZooKeeper and others<br />
* Added indirect VM access for better utilization of floating IPs<br />
* Added event log support to have detailed info about provisioning progress<br />
* Optional default node group and cluster templates per plugin<br />
* Horizon updates:<br />
** Guided cluster creation and job execution<br />
** Filtering on search for objects<br />
* Editing of Node Group templates and Cluster templates implemented<br />
* Added Shell Job Type for clusters running Oozie<br />
* New Job Types endpoint to query list of the supported Job Types<br />
<br />
=== Known Issues ===<br />
-<br />
<br />
=== Upgrade Notes ===<br />
Details: http://docs.openstack.org/developer/sahara/userdoc/upgrade.guide.html#juno-kilo<br />
<br />
* Sahara now requires a policy.json configuration file.<br />
<br />
== OpenStack Bare Metal service (Ironic) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== State Machine ====<br />
Ironic now uses a formal model for the logical state of each node it manages.<ref name="states">[http://specs.openstack.org/openstack/ironic-specs/specs/kilo/new-ironic-state-machine.html#proposed-change]New Ironic State Machine</ref> This has enabled the addition of two new processes: '''cleaning''' and '''inspection'''.<br />
* Automatic disk erasure between tenants is now enabled by default. This may be extended to perform additional '''cleaning''' steps, such as re-applying firmware, resetting BIOS settings, etc.<ref name="cleaning">[http://docs.openstack.org/developer/ironic/deploy/cleaning.html]Node Cleaning</ref><br />
* Both in-band and out-of-band methods are available to '''inspect''' hardware. These methods may be used to update Node properties automatically.<ref name="inspect">[http://docs.openstack.org/developer/ironic/deploy/install-guide.html#hardware-inspection]Hardware Inspection</ref><br />
<br />
==== Version Headers ====<br />
The Ironic REST API expects a new ''X-OpenStack-Ironic-API-Version'' header to be passed with each HTTP[S] request. This header allows client and server to negotiate a mutually supported interface.<ref name="api-version">[http://specs.openstack.org/openstack/ironic-specs/specs/kilo/api-microversions.html]REST API "micro" versions </ref> In the absence of this header, the REST service will default to a compatibility mode and yield responses compatible with Juno clients. This mode, however, prevents access to most features introduced in Kilo.<br />
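For example, a client opting into Kilo-era features sends the header on every request (the version value and host shown are illustrative; request a version the server actually advertises):<br />
<br />
```
GET /v1/nodes HTTP/1.1
Host: ironic.example.com
X-Auth-Token: <token>
X-OpenStack-Ironic-API-Version: 1.6
```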
<br />
==== Hardware Driver Changes ====<br />
The following new drivers were added:<br />
* [http://docs.openstack.org/developer/ironic/drivers/amt.html AMT]<br />
* [http://docs.openstack.org/developer/ironic/deploy/drivers.html#irmc iRMC]<br />
* [http://docs.openstack.org/developer/ironic/drivers/vbox.html VirtualBox (testing driver only)]<br />
<br />
<br />
The following enhancements were made to existing drivers:<br />
* [http://docs.openstack.org/developer/ironic/deploy/install-guide.html#enabling-the-configuration-drive-configdrive Configdrives] may be used with the "agent" drivers in lieu of a metadata service, if desired.<br />
* SeaMicro driver supports serial console<br />
* [http://docs.openstack.org/developer/ironic/drivers/ilo.html#uefi-secure-boot-support iLO driver supports UEFI secure boot]<br />
* [http://docs.openstack.org/developer/ironic/drivers/ilo.html#hardware-inspection iLO driver supports out-of-band node inspection]<br />
* [http://docs.openstack.org/developer/ironic/drivers/ilo.html#ilo-node-cleaning iLO driver supports resetting ilo and bios during cleaning]<br />
<br />
<br />
Support for third-party and out-of-tree drivers is enhanced by the following changes:<br />
* Drivers may store their own "internal" information about Nodes.<br />
* Drivers may register their own periodic tasks to be run by the Conductor.<br />
* ''vendor_passthru'' methods now support additional HTTP methods (e.g., PUT and POST).<br />
* ''vendor_passthru'' methods are now discoverable in the REST API. See [http://docs.openstack.org/developer/ironic/dev/drivers.html#node-vendor-passthru node vendor passthru] and [http://docs.openstack.org/developer/ironic/dev/drivers.html#driver-vendor-passthru driver vendor passthru]<br />
<br />
==== Other Changes ====<br />
* [http://docs.openstack.org/developer/ironic/deploy/install-guide.html#logical-names Logical names] may be used to address Nodes, in addition to their canonical UUID. <br />
* For servers with varied local disks, [http://docs.openstack.org/developer/ironic/deploy/install-guide.html#specifying-the-disk-for-deployment ''hints''] may be supplied that affect which disk device the OS is provisioned to.<br />
* Support for fetching kernel, ramdisk, and instance images from HTTP[S] sources directly has been added to remove the dependency on Glance. [http://docs.openstack.org/developer/ironic/deploy/install-guide.html#using-ironic-as-a-standalone-service Using ironic as a standalone service]<br />
* Nodes may be placed into ''[http://docs.openstack.org/developer/ironic/deploy/install-guide.html#maintenance-mode maintenance mode]'' via REST API calls. An optional ''maintenance reason'' may be specified when doing so.<br />
<br />
=== Known Issues ===<br />
* '''Running more than one nova-compute process is not officially supported.'''<br />
** While Ironic does include a ClusteredComputeManager, which allows running more than one nova-compute process with Ironic, it should be considered experimental and has many known problems.<br />
* Drivers using the "agent" deploy mechanism do not support "rebuild --preserve-ephemeral"<br />
<br />
=== Upgrade Notes ===<br />
* IPMI Passwords are now obfuscated in REST API responses. This may be disabled by changing API policy settings.<br />
* The "agent" class of drivers now supports both whole-disk and partition-based images.<br />
* The driver_info parameters of "pxe_deploy_kernel" and "pxe_deploy_ramdisk" are deprecated in favour of "deploy_kernel" and "deploy_ramdisk". <br />
* Implementing a custom version of the vendor_passthru() method in a driver has been deprecated in favour of the new @passthru decorator.<br />
<br />
==== Juno to Kilo ====<br />
The recommended upgrade process is documented here:<br />
* http://docs.openstack.org/developer/ironic/deploy/upgrade-guide.html#upgrading-from-juno-to-kilo<br />
<br />
==== Upgrading from Icehouse "nova-baremetal" ====<br />
<br />
An upgrade from an Icehouse Nova installation using the "baremetal" driver directly to Kilo Ironic is untested and unsupported. Instead, please follow this upgrade path:<br />
# Icehouse Nova "baremetal" -> Juno Nova "baremetal"<br />
# Juno Nova "baremetal" -> Juno Ironic<br />
# Juno Ironic -> Kilo Ironic<br />
<br />
Documentation for steps 1 and 2 is available at: https://wiki.openstack.org/wiki/Ironic/NovaBaremetalIronicMigration<br />
<br />
== OpenStack Documentation ==<br />
<br />
* New [http://docs.openstack.org docs.openstack.org] landing page and new web design for [http://docs.openstack.org/user-guide/ End User Guide] and [http://docs.openstack.org/user-guide-admin/ Admin User Guide]<br />
* First release of the [http://docs.openstack.org/networking-guide/ Networking Guide]<br />
* Migration to RST for [http://docs.openstack.org/user-guide/ End User Guide] and [http://docs.openstack.org/user-guide-admin/ Admin User Guide]<br />
* New specialty teams:<br />
** Install Guides<br />
** Networking Guide<br />
** High Availability Guide<br />
** User Guides (Admin and End User)<br />
* First App Tutorial sprint<br />
* Driver documentation clarification and connections</div>Asalkeldhttps://wiki.openstack.org/w/index.php?title=ReleaseNotes/Kilo&diff=78585ReleaseNotes/Kilo2015-04-29T21:47:30Z<p>Asalkeld: /* OpenStack Orchestration (Heat) */</p>
<hr />
<div>= OpenStack 2015.1.0 (Kilo) Release Notes =<br />
<br />
<br />
{| style="color:#000000; border:solid 3px #A8A8A8; padding:2em; margin:2em 0; background-color:#FFFFFF; vertical-align:middle;"<br />
| style="padding:1em" | The Kilo release of OpenStack is dedicated to the loving memory of '''Chris Yeoh''', who left his family and us way too soon.<br />
|}<br />
<br />
<br />
<div style="column-count:3;-moz-column-count:3;-webkit-column-count:3"><br />
__TOC__<br />
</div><br />
<br />
<br />
== General Upgrade Notes ==<br />
<br />
* TBD<br />
<br />
== OpenStack Object Storage (Swift) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== Erasure Code (beta) ====<br />
Swift now supports an erasure-code (EC) storage policy type. This allows deployers to achieve very high durability with less raw capacity than is used in replicated storage. However, EC requires more CPU and network resources, so it is not a good fit for every use case. EC is great for storing large, infrequently accessed data in a single region.<br />
<br />
Swift's implementation of erasure codes is meant to be transparent to end users. There is no API difference between replicated storage and EC storage.<br />
<br />
To support erasure codes, Swift now depends on PyECLib and liberasurecode. liberasurecode is a pluggable library that allows for the actual EC algorithm to be implemented in a library of your choosing.<br />
<br />
Full docs are at http://swift.openstack.org/overview_erasure_code.html<br />
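An EC policy is defined in <code>swift.conf</code> alongside any replicated policies. The fragment below is a sketch; policy names, the EC backend, and fragment counts are illustrative choices, not defaults:<br />
<br />
```ini
# swift.conf
[storage-policy:0]
name = replicated-3x
policy_type = replication
default = yes

[storage-policy:1]
name = ec-10-4
policy_type = erasure_coding
ec_type = jerasure_rs_vand           # backend plugged in via liberasurecode
ec_num_data_fragments = 10
ec_num_parity_fragments = 4
ec_object_segment_size = 1048576
```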
<br />
==== Composite tokens ====<br />
Composite tokens allow other OpenStack services to store data in Swift on behalf of a client so that neither the client nor the service can update the data without both parties' consent.<br />
<br />
An example of this is that a user requests that Nova save a snapshot of a VM. Nova passes the request to Glance, Glance writes the image to a Swift container as a set of objects. In this case, the user cannot modify the snapshot without also having a valid token from the service. Nor can the service update the data without a valid token from the user. But the data is still stored in the user's account in Swift, which makes accounting simpler.<br />
<br />
Full docs are at http://swift.openstack.org/overview_backing_store.html<br />
<br />
==== Data placement updates for smaller, unbalanceable clusters ====<br />
Swift's data placement now accounts for device weight. This allows operators to gradually add new zones and regions without immediately causing a large amount of data to be moved. Also, if a cluster is unbalanced (e.g. a two-zone cluster where one zone has twice the capacity of the other), Swift will more efficiently use the available space and warn when replicas are placed without enough dispersion in the cluster.<br />
<br />
==== Global cluster replication improvements ====<br />
Replication between regions will now only move one replica per replication run. This gives the remote region a chance to replicate internally and thus avoid moving more data over the WAN.<br />
<br />
<br />
=== Known Issues ===<br />
* As a beta release, EC support is nearly feature complete, but it lacks support for some features (like multi-range reads) and has not had a full performance characterization. This feature relies on ssync for durability. Deployers are urged to do extensive testing and not deploy production data using an erasure code storage policy.<br />
<br />
=== Upgrade Notes ===<br />
As always, you can upgrade to this version of Swift with no end-user downtime.<br />
<br />
* In order to support erasure codes, Swift has a new dependency on PyECLib (and liberasurecode, transitively). Also, the minimum required version of eventlet has been raised.<br />
<br />
<br />
<br />
<br />
<br />
== OpenStack Compute (Nova) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== API v2.1 ====<br />
<br />
* We have the first release of the next generation of the Nova API, v2.1. The v2.1 API is designed to be backwards compatible with v2.0, with the addition of strong API validation. All changes to the API are discoverable via the advertised microversion. For more details see: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/api-microversions.html<br />
<br />
* For kilo, by default we are still using the v2.0 API code to serve v2.0 API requests. It is hoped that in Liberty, v2.1 will be used to serve requests for both v2.0 and v2.1.<br />
<br />
* Going into Liberty, the v2.0 API is frozen, and all new features will be added to the v2.1 API using the microversions mechanism. Microversion increments released with kilo are:<br />
** Extending the keypair API to support x509 certificates (to be used with Windows WinRM) is one of the first API features added as a microversion in the v2.1 API.<br />
** Exposing additional attributes in os-extended-server-attributes<br />
<br />
* python-novaclient does not yet have support for the v2.1 API<br />
<br />
* Policy enforcement for the Nova v2.1 API has been improved:<br />
** Policy is now enforced only at the entry point of each API.<br />
** There are no longer duplicated rules for a single API.<br />
** All v2.1 API policy rules use the 'os_compute_api' prefix, distinguishing them from v2 API rules.<br />
** Due to hard-coded permission checks at the db layer, parts of the Nova API were previously not configurable by policy and always required an admin user. Some of these hard-coded permission checks have been removed from the v2.1 API, making its policy configurable. The remaining hard-coded checks will be removed in Liberty.<br />
<br />
==== Upgrade Support ====<br />
<br />
* We have reduced the data migrations that happen in the DB migration scripts, this now happens in a "lazy" way inside the DB objects code. There are nova-manage commands to help force migration of the data. For more details see: http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/flavor-from-sysmeta-to-blob.html<br />
<br />
* Change https://review.openstack.org/#/c/97946/ adds database migration 267, which scans for null instances.uuid records and will fail if any are found, since the migration ultimately needs to make instances.uuid non-nullable and adds a UniqueConstraint on that column. A helper script is provided to search for null instances.uuid records before running the database migrations. Before running 'nova-manage db sync', run the helper script 'nova-manage db null_instance_uuid_scan', which, by default, will just search and dump results; it does not change anything. Pass the --delete option to the null_instance_uuid_scan command to automatically remove any records where instances.uuid is null.<br />
<br />
==== Scheduler ====<br />
<br />
* A selection of performance optimisations<br />
* We are in the process of making structural changes to the scheduler that will help improve our ability to evolve and improve scheduling. This should not be visible from an end user perspective.<br />
<br />
==== Cells v2 ====<br />
<br />
* Some initial parts of cells v2 support have been added, but this feature is not yet ready to use.<br />
* New 'nova-manage api_db sync' and 'nova-manage api_db version' commands have been added for working with the new API database for cells. Nothing uses this database yet, so it is not necessary to set it up.<br />
<br />
==== Compute Drivers ====<br />
<br />
===== Hyper-V =====<br />
<br />
* Support for generation 2 VMs: https://blueprints.launchpad.net/nova/+spec/hyper-v-generation-2-vms<br />
* Support for SMB based volumes, to sit along side existing iSCSI volume support: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/hyper-v-smbfs-volume-support.html<br />
* Support for x509 certificate based key pairs: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/keypair-x509-certificates.html<br />
* Host power actions now work with Hyper-V: https://blueprints.launchpad.net/nova/+spec/hyper-v-host-power-actions<br />
<br />
===== Libvirt (KVM) =====<br />
<br />
* NFV related features:<br />
** NUMA based scheduling: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/input-output-based-numa-scheduling.html<br />
** Pinning guest vCPUs: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/virt-driver-cpu-pinning.html<br />
** Large page support: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/virt-driver-large-pages.html<br />
* vhostuser VIF driver: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/libvirt_vif_vhostuser.html<br />
* Support for running KVM on IBM System z: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/libvirt-kvm-systemz.html<br />
* Support for Parallels: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/pcs-support.html<br />
* Support for SMB based volumes: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/libvirt-smbfs-volume-support.html<br />
* Quiesce using QEMU guest agent: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/quiesced-image-snapshots-with-qemu-guest-agent.html<br />
* Quobyte volume support: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/quobyte-nova-driver.html<br />
* Support for QEMU iSCSI initiator: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/qemu-built-in-iscsi-initiator.html<br />
<br />
===== VMware =====<br />
<br />
* Support for Ephemeral disks: http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/vmware-ephemeral-disk-support.html<br />
* Support for vSAN: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/vmware-vsan-support.html<br />
* Support for OVA images: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/vmware-driver-ova-support.html<br />
* Support for SPBM based storage policies: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/vmware-spbm-support.html<br />
<br />
===== Ironic =====<br />
<br />
* Support to pass flavor capabilities to ironic: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/pass-flavor-capabilities-to-ironic-virt-driver.html<br />
<br />
=== Known Issues ===<br />
<br />
* Evacuate recovery code has the potential to destroy data. On nova-compute startup, instances reported by the hypervisor are examined to see if they have moved (i.e. been evacuated) from the current host during the outage. If the determination is made that they were, then they are destroyed locally. This has the potential to choose incorrectly and destroy instances unexpectedly. On libvirt-like nodes, this can be triggered by changing the system hostname. On vmware-like nodes, this can be triggered by attempting to manage a single vcenter deployment from two different hosts (with different hostnames). This will be fixed properly in Liberty, but for now deployments that wish to disable this behavior as a preventive measure can set workarounds.destroy_after_evacuate=False. NOTE: This is not a regression and has been a flaw in the design of the evacuate feature since its introduction. There is no easy fix for this, hence this workaround to limit the potential for damage. The proposed fix in liberty is here: https://review.openstack.org/#/c/161444/.<br />
<br />
* The generated example configuration files may be missing some Oslo-related configuration options.<br />
<br />
=== Upgrade Notes ===<br />
<br />
Below are changes you should be aware of when upgrading. Where possible, the git commit hash is provided so you can find more information:<br />
<br />
* Neutron ports are no longer deleted after your server is deleted, if you created them outside of Nova: 1153a46738fc3ffff98a1df9d94b5a55fdd58777<br />
* EC2 API support has been deprecated and is likely to be removed in a future release; see f098398a836e3671c49bb884b4a1a1988053f4b2<br />
* Websocket proxies need to be upgraded in lockstep with the API nodes, as older API nodes will not send the access_url when authorizing console access, and newer proxy services (from this commit onward) will fail to authorize such requests: 9621ccaf05900009d67cdadeb1aac27368114a61<br />
* After fully upgrading to kilo (i.e. all nodes are running kilo code), you should start a background migration of flavor information from its old home to its new home. Kilo conductor nodes will do this on the fly when necessary, but the rest of the idle data needs to be migrated in the background. This is critical to complete before the Liberty release, where support for the old location will be dropped. Use "nova-manage migrate-flavor-data" to perform this transition.<br />
* Policy enforcement in the Nova v2.1 API has been improved, resulting in many changes to the v2.1 API policy rules. Because the v2.1 API was not released previously, these changes are not backwards compatible; it is better to start from the new sample policy configuration than from your old one.<br />
* VMware rescue VM behaviour no longer creates a new VM and instead happens in place: cd1765459a24e52e1b933c8e05517fed75ac9d41<br />
* force_config_drive = always has been deprecated, and force_config_drive = True should be used instead: c12a78b35dc910fa97df888960ef2b9a64557254<br />
* If you are running Hyper-V and deployed code that was past this commit: b4d57ab65836460d0d9cb8889ec2e6c3986c0a9b but before this commit: c8e9f8e71de64273f10498c5ad959634bfe79975, you may have problems to resolve manually; see: c8e9f8e71de64273f10498c5ad959634bfe79975<br />
* Changed the default value of: multi_instance_display_name_template see: 609b2df339785bff9e30a9d67d5c853562ae3344<br />
* Please use "nova-manage db null_instance_uuid_scan" to ensure the DB migrations will apply cleanly, see: c0ea53ce353684b48303fc59393930c3fa5ade58<br />
<br />
== OpenStack Image Service (Glance) ==<br />
=== Key New Features ===<br />
* TBD<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Dashboard (Horizon) ==<br />
<br />
=== Key New Features ===<br />
* TBD<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Identity (Keystone) ==<br />
<br />
=== Key New Features ===<br />
<br />
* Keystone now supports Hierarchical Multitenancy. Projects can be nested under other projects via the <code>parent_id</code> optional attribute when creating a project.<br />
** TBD - More info.<br />
* Fernet tokens - Keystone provides another token format that allows tokens to be non-persistent (no longer stored in a database). The new token format promises improved scalability and performance of Keystone.<br />
* WebSSO - If using a trusted identity provider for federating users, Keystone now provides the ability for a user to authenticate with Horizon using the credentials of an existing IdP, through a Single Sign-On page.<br />
* Pluggable Assignment Model - TBD - More Info<br />
* Assignment / Resource Split - TBD - More Info<br />
* Extensions to Core functionality - The Keystone team is moving away from the concept of "extensions"; all previous extensions are now enabled by default. These now-core functions will be marked as Experimental or Stable (as assessed by the Keystone team). The previous extensions that are now enabled by default are: OS-FEDERATION, OS-OAUTH1, OS-ENDPOINT-POLICY and OS-EP-FILTER<br />
* (Experimental) Per-Domain Identity Backend Configuration can be stored in SQL - TBD More Info<br />
* [http://docs.openstack.org/developer/keystone/configure_federation.html#keystone-as-an-identity-provider-idp Keystone-to-Keystone Identity Federation] is now considered stable.<br />
** IDP Remote ID Registration - Keystone added the ability to associate many `Remote IDs` to a single Identity-Provider. This will help in a case where there are many identity providers that use a common mapping.<br />
** ECP Wrapped Assertions - In addition to generating a SAML assertion about a user, Keystone has also added the ability to create an ECP wrapped SAML assertion. This will create a more seamless integration for Keystone to Keystone clients.<br />
* Keystone now supports [http://docs.openstack.org/developer/keystone/extensions/openidc.html OpenID Connect] as a federated identity authentication mechanism.<br />
* Trusts now support redelegation. If allowed when the trust is created, a trustee can redelegate the roles from the trust via another trust.<br />
* It is now possible to request an explicitly unscoped token from Keystone.<br />
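As a rough illustration of the non-persistent idea behind Fernet tokens, here is a stdlib-only sketch in which token validity is derived entirely from server-held key material rather than a database row. This is an assumption-laden simplification: real Fernet tokens (from the ''cryptography'' library) are also encrypted and timestamped, and Keystone's payloads differ.<br />

```python
# Conceptual sketch: all token state lives in a signed payload, so no
# database record is needed to validate it. Real Fernet additionally
# encrypts the payload; this toy version only signs it.
import base64
import hashlib
import hmac
import json

SECRET = b"server-side-signing-key"  # stand-in for a key repository

def issue(payload: dict) -> str:
    body = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig

def validate(token: str) -> dict:
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    return json.loads(base64.urlsafe_b64decode(body))
```

Because nothing is persisted, any Keystone node holding the key can validate a token issued by any other node, which is where the scalability benefit comes from.<br />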
<br />
=== Known Issues ===<br />
<br />
* TBD<br />
<br />
=== Upgrade Notes ===<br />
<br />
* XML support in Keystone has been removed as of Kilo. When upgrading from Juno to Kilo, it is recommended that references to XML and XmlBodyMiddleware be removed from the [https://github.com/openstack/keystone/blob/master/etc/keystone-paste.ini Keystone Paste configuration]. This includes removing the XML middleware filters and their references from the public_api, admin_api, api_v3, public_version_api, admin_version_api and any other pipelines that may contain the XML filters.<br />
* [http://specs.openstack.org/openstack/openstack-specs/specs/no-downward-sql-migration.html SQL Schema Downgrades are no longer supported]. This change is the result of evaluation that downward SQL migrations are not well tested and become increasingly difficult to support with the volume of data-change that occurs in many of the migrations.<br />
* The following python libraries are now required: [https://pypi.python.org/pypi/cryptography cryptography], [https://pypi.python.org/pypi/msgpack-python msgpack-python], [https://pypi.python.org/pypi/pysaml2 pysaml2] and [https://pypi.python.org/pypi/oauthlib oauthlib]<br />
<br />
== OpenStack Network Service (Neutron) ==<br />
=== Key New Features ===<br />
* DVR now supports VLANs in addition to VXLAN/GRE<br />
* ML2 Hierarchical Port Binding<br />
* New LBaaS Version 2 API<br />
* Portsecurity support for the OVS ML2 Driver<br />
<br />
* New Plugins supported in Kilo include the following:<br />
** A10 Networks LBaaS V2 Driver<br />
** Brocade LBaaS V2 Driver<br />
** Brocade ML2 driver for MLX and ICX switches<br />
** Brocade L3 routing plugin for MLX switch<br />
** Brocade Vyatta vRouter L3 Plugin<br />
** Brocade Vyatta vRouter Firewall Driver<br />
** Brocade Vyatta vRouter VPN Driver<br />
** Cisco CSR VPNaaS Driver<br />
** Dragonflow SDN based Distributed Virtual Router L3 Plugin<br />
** Freescale FWaaS Driver<br />
** Intel Mcafee NGFW FWaaS Driver<br />
** IPSEC Strongswan VPNaaS Driver<br />
<br />
=== Known Issues ===<br />
* The Firewall-as-a-Service project is still marked as experimental for the Kilo release.<br />
* Bug [https://bugs.launchpad.net/neutron/+bug/1438819 1438819]<br />
** When a new subnet is created on an external network, all existing routers with gateways on the network will get a new address allocated from it. For IPv4 networks, this could consume the entire subnet for router gateway ports.<br />
<br />
=== Upgrade Notes ===<br />
<br />
As of Havana, Neutron no longer supports an explicit lease database (https://bugs.launchpad.net/bugs/1202392). This left dead code, including an unused environment variable. In order to remove the dead code (https://review.openstack.org/#/c/152398/), a change to dhcp.filter is required, so that the line:<br />
<br />
'''dnsmasq: EnvFilter, dnsmasq, root, NEUTRON_NETWORK_ID='''<br />
<br />
is replaced by:<br />
<br />
'''dnsmasq: CommandFilter, dnsmasq, root'''<br />
<br />
After advanced services were split into separate packages and received their own service configuration files (specifically, etc/neutron/neutron_lbaas.conf, etc/neutron/neutron_fwaas.conf and etc/neutron/neutron_vpnaas.conf), active service provider configuration can be different after upgrade (specifically, default load balancer (haproxy) and vpn (openswan) providers can be enabled for you even though you previously disabled them in neutron.conf). Please make sure you review configuration after upgrade so that it reflects the desired state of service providers.<br />
<br />
Note: this will have no effect if the related service plugin is not loaded in neutron.conf.<br />
<br />
<br />
* The default value of api_workers is now equal to the number of CPUs in the host. If you currently use the default, ensure you set api_workers to a reasonable number for your installation. (https://review.openstack.org/#/c/140493/)<br />
* The neutron.allow_duplicate_networks config option is deprecated in Kilo and will be removed in Liberty where the default behavior will be to just allow multiple ports attached to an instance on the same network in Neutron. (https://review.openstack.org/163581)<br />
* The linuxbridge agent now enables VXLAN by default (https://review.openstack.org/160826)<br />
* neutron-ns-metadata-proxy can now be run as non-root (https://review.openstack.org/147437)<br />
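For deployments that do not want the worker count to track the host's CPU count, the value can be pinned explicitly; a hypothetical neutron.conf fragment (choose a value appropriate to your installation):<br />

```ini
[DEFAULT]
# Pin the API worker count instead of relying on the new
# CPU-count-based default introduced in Kilo.
api_workers = 4
```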
<br />
=== Other Notes (Deprecation/EOL etc) ===<br />
<br />
* Deprecation<br />
** Brocade Classic plugin for Brocade's VDX/VCS series of hardware switches will be deprecated in the L-Release. The functionality provided by this plugin is now addressed by the ML2 Driver available for the VDX series of hardware. The plugin is slated for removal after this release cycle.<br />
<br />
<br />
== OpenStack Block Storage (Cinder) ==<br />
=== Key New Features ===<br />
* From this point forward, new database schema upgrades will not require restarting Cinder services right away; the services are now independent of schema upgrades. This is part one of Cinder supporting rolling upgrades!<br />
* Ability to add/remove volumes from an existing consistency group. [http://docs.openstack.org/admin-guide-cloud/content/consistency-groups.html Read docs for more info].<br />
* Ability to create a consistency group from an existing consistency group snapshot. [http://docs.openstack.org/admin-guide-cloud/content/consistency-groups.html Read docs for more info].<br />
* Create more fine tuned filters/weighers to set how the scheduler will choose a volume backend. [http://docs.openstack.org/admin-guide-cloud/content/driver_filter_weighing.html Read the docs for more info].<br />
* Encrypted volumes can now be backed up using the Cinder backup service. [http://docs.openstack.org/admin-guide-cloud/content/volume-backup-restore.html Read the docs for more info].<br />
* Ability to create private volume types. This is perfect when you want to make a volume type available only to a specific tenant, or to test it before making it available to your cloud. To do so, use ''cinder type-create <name> --is-public''.<br />
* Oversubscription with thin provision is configurable. [http://docs.openstack.org/admin-guide-cloud/content/over_subscription.html Read docs for more info].<br />
* Ability to add descriptions to volume types. To do so use ''cinder type-create <name> <description>''<br />
* Cinder now can return multiple iSCSI paths information so that the connector can attach volumes even when the primary path is down ([https://review.openstack.org/#/c/134681/ when connector's multipath feature is enabled] or [https://review.openstack.org/#/c/140877/ not enabled]).<br />
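As an illustration of the new filter and weigher expressions, a backend section in cinder.conf might look like the following. This is a hypothetical sketch modelled on the driver filter and weighing documentation; the option values are examples only:<br />

```ini
[lvm-gold]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = lvm-gold
# Only consider this backend for volumes smaller than 500 GB.
filter_function = "volume.size < 500"
# Prefer the backend with the largest fraction of free space.
goodness_function = "(capabilities.free_capacity_gb / capabilities.total_capacity_gb) * 100"
```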
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
* The 'host' config option for multiple-storage backends in cinder.conf is renamed to 'backend_host' in order to avoid a naming conflict with the 'host' to locate redis. If you use this option, please ensure your configuration files are updated.<br />
<br />
== OpenStack Telemetry (Ceilometer) ==<br />
=== Key New Features ===<br />
* Support for adding jitter to polling cycles, to ensure pollsters are not all querying a service's API at the same time<br />
* Ceilometer API RBAC support<br />
* Improved Event support:<br />
** Multi-pipeline support to enable unique processing and publishing of events<br />
** Enabled ability to capture raw notification messages for auditing and postmortem analysis<br />
** Support for persisting events into ElasticSearch<br />
** Publishing support to database, http, file, Kafka and oslo.messaging supported message queues<br />
** Option to split off the events persistence into a separate database<br />
** Telemetry now supports collecting and storing all event-type meters as events. A new option, ''disable_non_metric_meters'', was added to the configuration to make it possible to turn off storing these events as samples. For further information please see the [http://docs.openstack.org/trunk/config-reference/content/ch_configuring-openstack-telemetry.html Telemetry Configuration Reference]<br />
** The Administrator Guide in OpenStack Manuals was updated with a new [http://docs.openstack.org/admin-guide-cloud/content/section_telemetry-events.html Events section], where you can find further information about this functionality.<br />
* Improved pipeline publishing support:<br />
** Support to publish events and samples to Kafka or HTTP targets<br />
** Publish data to multiple queues<br />
* Additional meters<br />
** memory and disk meters for Hyper-V<br />
** disk meters for LibVirt<br />
** power and thermal related IPMI meters, more meters from NodeManager<br />
** ability to meter Ceph<br />
* IPv6 support enabled in Ceilometer udp publisher and collector<br />
* [http://launchpad.net/gnocchi Gnocchi] dispatch support for ceilometer-collector<br />
* Self-disabled pollster mechanism<br />
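The jitter idea above can be sketched in a few lines; the function name and jitter fraction are illustrative choices, not Ceilometer's actual implementation:<br />

```python
# Sketch of jittered polling: add a small random offset to each
# pollster's interval so agents do not all hit a service API at the
# same instant. (Illustrative only.)
import random

def next_poll_delay(interval, jitter_fraction=0.1, rng=random.random):
    # Base interval plus up to jitter_fraction extra, spread uniformly.
    return interval + interval * jitter_fraction * rng()
```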
<br />
=== Known Issues ===<br />
* TBD<br />
<br />
=== Upgrade Notes ===<br />
* Deprecated meters:<br />
** The instance:<flavor> meter is deprecated in the Kilo release. In order to retrieve samples or statistics based on flavor, you can use the following queries:<br />
For statistics:<br />
 ceilometer statistics -m instance -g resource_metadata.instance_type<br />
<br />
For samples:<br />
 ceilometer sample-list -m instance -q metadata.instance_type=<value><br />
* Middleware used to meter Swift was previously packaged in Ceilometer and is now deprecated. It has been separated into its own library: ceilometermiddleware.<br />
** Juno configuration: http://docs.openstack.org/juno/install-guide/install/apt/content/ceilometer-swift.html<br />
** Kilo configuration: http://docs.openstack.org/kilo/install-guide/install/apt/content/ceilometer-swift.html<br />
<br />
== OpenStack Orchestration (Heat) ==<br />
<br />
=== Key New Features ===<br />
<br />
* Improved scaling using nested stacks<br />
** Heat now dispatches actions on any template-based resource over RPC. This should help to spread the load when dealing with large, complex stacks.<br />
* oslo versioned objects<br />
** The database layer now uses oslo versioned objects to aid in future upgrades. This will allow a newly upgraded heat-engine to use a database with an older schema. Note that this will not help with upgrading to kilo.<br />
* New template functions<br />
** There is a new HOT template version, "2015-04-30", which includes two new functions: "digest" and "repeat".<br />
* Multiregion stacks<br />
** MORE DETAIL HERE<br />
* Software config signalling using swift<br />
** MORE DETAIL HERE<br />
* Triggering new software deployments from heatclient<br />
** MORE DETAIL HERE<br />
* stack snapshots<br />
** MORE DETAIL HERE<br />
* Access to Heat services<br />
** Admins now have access to service status, similar to other projects, in the form of "heat-manage service-list" and via Horizon. This feature reports the active heat-engines.<br />
* nova and neutron property constraints<br />
** MORE DETAIL HERE<br />
* stack hooks<br />
** MORE DETAIL HERE<br />
* New contributed resources<br />
** Mistral resources<br />
** Keystone resources, supported with a Keystone v3 server, for Project, Role, User and Group<br />
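As an example of the new "repeat" function, a template can expand a list parameter into a list of property values. The fragment below is a hypothetical sketch modelled on the security-group example in the HOT specification:<br />

```yaml
heat_template_version: 2015-04-30

parameters:
  ports:
    type: comma_delimited_list
    default: 80,443

resources:
  web_secgroup:
    type: OS::Neutron::SecurityGroup
    properties:
      rules:
        repeat:
          for_each:
            <%port%>: {get_param: ports}
          template:
            protocol: tcp
            port_range_min: <%port%>
            port_range_max: <%port%>
```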
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
None yet<br />
<br />
=== Other Notes (Deprecation/EOL etc) ===<br />
==== Deprecation ====<br />
* The following resources are deprecated: OS::Heat::HARestarter and OS::Heat::CWLiteAlarm<br />
* The CloudWatch API (heat-api-cw)<br />
<br />
== OpenStack Database service (Trove) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
None yet<br />
<br />
== OpenStack Data Processing service (Sahara) ==<br />
=== Key New Features ===<br />
* New plugins, their features and versions:<br />
** MAPR<br />
** Apache Storm<br />
** Apache Hadoop 2.6.0 was added, Apache Hadoop 2.4.1 deprecated<br />
** New services were added to the CDH plugin, including HDFS, YARN, Spark, Oozie, HBase, ZooKeeper and others<br />
* Added indirect VM access for better utilization of floating IPs<br />
* Added event log support to have detailed info about provisioning progress<br />
* Optional default node group and cluster templates per plugin<br />
* Horizon updates:<br />
** Guided cluster creation and job execution<br />
** Filtering on search for objects<br />
* Editing of Node Group templates and Cluster templates implemented<br />
* Added Shell Job Type for clusters running Oozie<br />
* New Job Types endpoint to query list of the supported Job Types<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
Details: http://docs.openstack.org/developer/sahara/userdoc/upgrade.guide.html#juno-kilo<br />
<br />
* Sahara now requires a policy.json configuration file.<br />
<br />
== OpenStack Bare Metal service (Ironic) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== State Machine ====<br />
Ironic now uses a formal model for the logical state of each node it manages.<ref name="states">[http://specs.openstack.org/openstack/ironic-specs/specs/kilo/new-ironic-state-machine.html#proposed-change]New Ironic State Machine</ref> This has enabled the addition of two new processes: '''cleaning''' and '''inspection'''.<br />
* Automatic disk erasure between tenants is now enabled by default. This may be extended to perform additional '''cleaning''' steps, such as re-applying firmware, resetting BIOS settings, etc.<ref name="cleaning">[http://docs.openstack.org/developer/ironic/deploy/cleaning.html]Node Cleaning</ref><br />
* Both in-band and out-of-band methods are available to '''inspect''' hardware. These methods may be used to update Node properties automatically.<ref name="inspect">[http://docs.openstack.org/developer/ironic/deploy/install-guide.html#hardware-inspection]Hardware Inspection</ref><br />
<br />
==== Version Headers ====<br />
The Ironic REST API expects a new ''X-OpenStack-Ironic-API-Version'' header to be passed with each HTTP[S] request. This header allows the client and server to negotiate a mutually supported interface.<ref name="api-version">[http://specs.openstack.org/openstack/ironic-specs/specs/kilo/api-microversions.html]REST API "micro" versions </ref> In the absence of this header, the REST service will default to a compatibility mode and yield responses compatible with Juno clients. This mode, however, prevents access to most features introduced in Kilo.<br />
<br />
==== Hardware Driver Changes ====<br />
The following new drivers were added:<br />
* [http://docs.openstack.org/developer/ironic/drivers/amt.html AMT]<br />
* [http://docs.openstack.org/developer/ironic/deploy/drivers.html#irmc iRMC]<br />
* [http://docs.openstack.org/developer/ironic/drivers/vbox.html VirtualBox (testing driver only)]<br />
<br />
<br />
The following enhancements were made to existing drivers:<br />
* [http://docs.openstack.org/developer/ironic/deploy/install-guide.html#enabling-the-configuration-drive-configdrive Configdrives] may be used with the "agent" drivers in lieu of a metadata service, if desired.<br />
* SeaMicro driver supports serial console<br />
* [http://docs.openstack.org/developer/ironic/drivers/ilo.html#uefi-secure-boot-support iLO driver supports UEFI secure boot]<br />
* [http://docs.openstack.org/developer/ironic/drivers/ilo.html#hardware-inspection iLO driver supports out-of-band node inspection]<br />
* [http://docs.openstack.org/developer/ironic/drivers/ilo.html#ilo-node-cleaning iLO driver supports resetting iLO and BIOS during cleaning]<br />
<br />
<br />
Support for third-party and out-of-tree drivers is enhanced by the following changes:<br />
* Drivers may store their own "internal" information about Nodes.<br />
* Drivers may register their own periodic tasks to be run by the Conductor.<br />
* ''vendor_passthru'' methods now support additional HTTP methods (e.g. PUT and POST).<br />
* ''vendor_passthru'' methods are now discoverable in the REST API. See [http://docs.openstack.org/developer/ironic/dev/drivers.html#node-vendor-passthru node vendor passthru] and [http://docs.openstack.org/developer/ironic/dev/drivers.html#driver-vendor-passthru driver vendor passthru]<br />
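The decorator-based registration that enables this discoverability can be sketched as follows. This is a hypothetical simplification: the decorator name matches the @passthru decorator mentioned in the upgrade notes, but the signature, driver class, and discovery helper here are invented for illustration:<br />

```python
# Sketch of decorator-based vendor passthru: a driver tags methods with
# the HTTP verbs they accept, and the API layer discovers them by
# inspection rather than routing everything through one dispatch method.

def passthru(http_methods):
    def wrap(func):
        func._passthru_methods = http_methods
        return func
    return wrap

class MyVendorDriver:
    @passthru(["POST"])
    def bios_reset(self, node):
        # A driver-specific action exposed via the passthru API.
        return "resetting %s" % node

def discover(driver):
    # Build the method table the REST layer would advertise.
    return {name: m._passthru_methods
            for name, m in vars(type(driver)).items()
            if callable(m) and hasattr(m, "_passthru_methods")}
```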
<br />
==== Other Changes ====<br />
* [http://docs.openstack.org/developer/ironic/deploy/install-guide.html#logical-names Logical names] may be used to address Nodes, in addition to their canonical UUID. <br />
* For servers with varied local disks, [http://docs.openstack.org/developer/ironic/deploy/install-guide.html#specifying-the-disk-for-deployment ''hints''] may be supplied that affect which disk device the OS is provisioned to.<br />
* Support for fetching kernel, ramdisk, and instance images from HTTP[S] sources directly has been added to remove the dependency on Glance. [http://docs.openstack.org/developer/ironic/deploy/install-guide.html#using-ironic-as-a-standalone-service Using ironic as a standalone service]<br />
* Nodes may be placed into ''[http://docs.openstack.org/developer/ironic/deploy/install-guide.html#maintenance-mode maintenance mode]'' via REST API calls. An optional ''maintenance reason'' may be specified when doing so.<br />
<br />
=== Known Issues ===<br />
* '''Running more than one nova-compute process is not officially supported.'''<br />
** While Ironic does include a ClusteredComputeManager, which allows running more than one nova-compute process with Ironic, it should be considered experimental and has many known problems.<br />
* Drivers using the "agent" deploy mechanism do not support "rebuild --preserve-ephemeral"<br />
<br />
=== Upgrade Notes ===<br />
* IPMI Passwords are now obfuscated in REST API responses. This may be disabled by changing API policy settings.<br />
* The "agent" class of drivers now support both whole-disk and partition based images.<br />
* The driver_info parameters of "pxe_deploy_kernel" and "pxe_deploy_ramdisk" are deprecated in favour of "deploy_kernel" and "deploy_ramdisk". <br />
* Implementing your own version of the vendor_passthru() method in a driver has been deprecated in favour of the new @passthru decorator.<br />
<br />
==== Juno to Kilo ====<br />
The recommended upgrade process is documented here:<br />
* http://docs.openstack.org/developer/ironic/deploy/upgrade-guide.html#upgrading-from-juno-to-kilo<br />
<br />
==== Upgrading from Icehouse "nova-baremetal" ====<br />
<br />
An upgrade from an Icehouse Nova installation using the "baremetal" driver directly to Kilo Ironic is untested and unsupported. Instead, please follow this upgrade path:<br />
# Icehouse Nova "baremetal" -> Juno Nova "baremetal"<br />
# Juno Nova "baremetal" -> Juno Ironic<br />
# Juno Ironic -> Kilo Ironic<br />
<br />
Documentation for steps 1 and 2 is available at: https://wiki.openstack.org/wiki/Ironic/NovaBaremetalIronicMigration<br />
<br />
== OpenStack Documentation ==<br />
<br />
* New [http://docs.openstack.org docs.openstack.org] landing page and new web design for [http://docs.openstack.org/user-guide/ End User Guide] and [http://docs.openstack.org/user-guide-admin/ Admin User Guide]<br />
* First release of the [http://docs.openstack.org/networking-guide/ Networking Guide]<br />
* Migration to RST for [http://docs.openstack.org/user-guide/ End User Guide] and [http://docs.openstack.org/user-guide-admin/ Admin User Guide]<br />
* New specialty teams:<br />
** Install Guides<br />
** Networking Guide<br />
** High Availability Guide<br />
** User Guides (Admin and End User)<br />
* First App Tutorial sprint<br />
* Driver documentation clarification and connections</div>Asalkeldhttps://wiki.openstack.org/w/index.php?title=ReleaseNotes/Kilo&diff=78584ReleaseNotes/Kilo2015-04-29T21:41:49Z<p>Asalkeld: /* Key New Features */</p>
<hr />
<div>= OpenStack 2015.1.0 (Kilo) Release Notes =<br />
<br />
<br />
{| style="color:#000000; border:solid 3px #A8A8A8; padding:2em; margin:2em 0; background-color:#FFFFFF; vertical-align:middle;"<br />
| style="padding:1em" | The Kilo release of OpenStack is dedicated to the loving memory of '''Chris Yeoh''', who left his family and us way too soon.<br />
|}<br />
<br />
<br />
<div style="column-count:3;-moz-column-count:3;-webkit-column-count:3"><br />
__TOC__<br />
</div><br />
<br />
<br />
== General Upgrade Notes ==<br />
<br />
* TBD<br />
<br />
== OpenStack Object Storage (Swift) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== Erasure Code (beta) ====<br />
Swift now supports an erasure-code (EC) storage policy type. This allows deployers to achieve very high durability with less raw capacity than is used in replicated storage. However, EC requires more CPU and network resources, so it is not a good fit for every use case. EC is great for storing large, infrequently accessed data in a single region.<br />
<br />
Swift's implementation of erasure codes is meant to be transparent to end users. There is no API difference between replicated storage and EC storage.<br />
<br />
To support erasure codes, Swift now depends on PyECLib and liberasurecode. liberasurecode is a pluggable library that allows for the actual EC algorithm to be implemented in a library of your choosing.<br />
<br />
Full docs are at http://swift.openstack.org/overview_erasure_code.html<br />
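The capacity trade-off can be illustrated with a toy single-parity scheme: store k data fragments plus one XOR parity fragment instead of full replicas, and rebuild any one lost fragment from the rest. This is only a conceptual sketch; Swift's real EC policies use PyECLib/liberasurecode with configurable data and parity counts:<br />

```python
# Toy erasure-coding illustration: k data fragments plus one XOR parity
# fragment. Any ONE lost fragment can be rebuilt by XOR-ing the others.
# (Conceptual only; not Swift's actual EC implementation.)

def encode(data: bytes, k: int):
    frag_len = -(-len(data) // k)  # ceiling division
    frags = [data[i * frag_len:(i + 1) * frag_len].ljust(frag_len, b"\0")
             for i in range(k)]
    parity = frags[0]
    for f in frags[1:]:
        parity = bytes(a ^ b for a, b in zip(parity, f))
    return frags, parity

def rebuild(frags, parity, lost: int):
    # XOR the parity with every surviving fragment to recover the lost one.
    out = parity
    for i, f in enumerate(frags):
        if i != lost:
            out = bytes(a ^ b for a, b in zip(out, f))
    return out
```

With k=3 this stores roughly 4/3 of the original size while surviving one fragment loss, versus 3x for triple replication, which is the durability-for-capacity trade EC makes.<br />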
<br />
==== Composite tokens ====<br />
Composite tokens allow other OpenStack services to store data in Swift on behalf of a client so that neither the client nor the service can update the data without both parties' consent.<br />
<br />
An example of this is that a user requests that Nova save a snapshot of a VM. Nova passes the request to Glance, and Glance writes the image to a Swift container as a set of objects. In this case, the user cannot modify the snapshot without also having a valid token from the service, nor can the service update the data without a valid token from the user. But the data is still stored in the user's account in Swift, which makes accounting simpler.<br />
<br />
Full docs are at http://swift.openstack.org/overview_backing_store.html<br />
<br />
==== Data placement updates for smaller, unbalanceable clusters ====<br />
Swift's data placement now accounts for device weight. This allows operators to gradually add new zones and regions without immediately causing a large amount of data to be moved. Also, if a cluster is unbalanced (e.g. a two-zone cluster where one zone has twice the capacity of the other), Swift will use the available space more efficiently and warn when replicas are placed without enough dispersion in the cluster.<br />
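Weight-aware placement can be sketched as proportional assignment of partitions to devices. The toy function below is not Swift's actual ring-builder algorithm; it only illustrates why a newly added, low-weight zone initially receives little data:<br />

```python
# Minimal sketch of weight-aware placement: partitions are assigned to
# devices in proportion to device weight, so a device added at a low
# weight initially receives little data and can be ramped up gradually.
# Illustration only, not Swift's real ring builder.

def assign_partitions(devices, num_parts):
    """devices maps device name -> weight; returns name -> partition count."""
    total = sum(devices.values())
    shares = {d: int(num_parts * w / total) for d, w in devices.items()}
    # hand any rounding remainder to the heaviest devices
    leftover = num_parts - sum(shares.values())
    for d in sorted(devices, key=devices.get, reverse=True)[:leftover]:
        shares[d] += 1
    return shares

# gradually adding a new zone: start it at a small weight
shares = assign_partitions({"z1-d1": 100, "z1-d2": 100, "z2-d1": 10}, 1024)
assert shares["z2-d1"] < shares["z1-d1"]   # new zone gets little data at first
```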
<br />
==== Global cluster replication improvements ====<br />
Replication between regions will now only move one replica per replication run. This gives the remote region a chance to replicate internally and thus avoid moving more data over the WAN.<br />
<br />
<br />
=== Known Issues ===<br />
* As a beta release, EC support is nearly feature-complete, but it lacks support for some features (such as multi-range reads) and has not had a full performance characterization. This feature relies on ssync for durability. Deployers are urged to do extensive testing and not to deploy production data using an erasure code storage policy.<br />
<br />
=== Upgrade Notes ===<br />
As always, you can upgrade to this version of Swift with no end-user downtime.<br />
<br />
* In order to support erasure codes, Swift has a new dependency on PyECLib (and liberasurecode, transitively). Also, the minimum required version of eventlet has been raised.<br />
<br />
<br />
<br />
<br />
<br />
== OpenStack Compute (Nova) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== API v2.1 ====<br />
<br />
* We have the first release of the next generation of the Nova API, v2.1. The v2.1 API is designed to be backwards compatible with v2.0, with the addition of strong API validation. All changes to the API are discoverable via the advertised microversion. For more details see: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/api-microversions.html<br />
<br />
* For Kilo, by default we are still using the v2.0 API code to serve v2.0 API requests. It is hoped that in Liberty v2.1 will be used to serve requests for both v2.0 and v2.1.<br />
<br />
* For Liberty, the v2.0 API is now frozen, and all new features will be added to the v2.1 API using the microversions mechanism. Microversion increments released with Kilo are:<br />
** Extending the keypair API to support x509 certificates (to be used with Windows WinRM) is one of the first API features added as a microversion in the v2.1 API.<br />
** Exposing additional attributes in os-extended-server-attributes<br />
<br />
* python-novaclient does not yet have support for the v2.1 API<br />
<br />
* Policy enforcement in the Nova v2.1 API has been improved.<br />
** Policy is now enforced only at the entry point of each API.<br />
** There are no longer duplicated rules for a single API.<br />
** All v2.1 API policy rules use the 'os_compute_api' prefix, which distinguishes them from v2 API rules.<br />
** Previously, parts of the Nova API were not configurable by policy because of hard-coded permission checks at the database layer, which always required an admin user. Some of these hard-coded checks have been removed in the v2.1 API, making the corresponding API policies configurable. The remaining hard-coded checks will be removed in Liberty.<br />
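The microversion mechanism described above can be sketched as a simple range negotiation. The version bounds below are illustrative rather than Nova's actual supported range, and the real server reads the requested version from the X-OpenStack-Nova-API-Version request header:<br />

```python
# Minimal sketch of microversion negotiation: the client requests a
# version; the server uses it if it falls within the supported range
# and rejects it otherwise. With no version requested, the server
# falls back to its minimum (stable default behaviour). The bounds
# here are hypothetical, not Nova's real supported range.

MIN_VERSION = (2, 1)
MAX_VERSION = (2, 3)

def negotiate(requested):
    """Return the microversion the server will use for a request."""
    if requested is None:
        return MIN_VERSION                 # no header sent: default behaviour
    version = tuple(int(p) for p in requested.split("."))
    if not (MIN_VERSION <= version <= MAX_VERSION):
        raise ValueError("406 Not Acceptable: unsupported microversion")
    return version

assert negotiate(None) == (2, 1)
assert negotiate("2.3") == (2, 3)
```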
<br />
==== Upgrade Support ====<br />
<br />
* We have reduced the data migrations that happen in the DB migration scripts; data is now migrated in a "lazy" way inside the DB objects code. There are nova-manage commands to help force migration of the data. For more details see: http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/flavor-from-sysmeta-to-blob.html<br />
<br />
* Change https://review.openstack.org/#/c/97946/ adds database migration 267, which scans for null instances.uuid records and will fail if any are found, since the migration ultimately needs to make instances.uuid non-nullable and add a UniqueConstraint on that column. A helper script is provided to search for null instances.uuid records before running the database migrations. Before running 'nova-manage db sync', run the helper script 'nova-manage db null_instance_uuid_scan', which by default just searches and dumps results without changing anything. Pass the --delete option to the null_instance_uuid_scan command to automatically remove any records where instances.uuid is null.<br />
<br />
==== Scheduler ====<br />
<br />
* A selection of performance optimisations<br />
* We are in the process of making structural changes to the scheduler that will help improve our ability to evolve and improve scheduling. This should not be visible from an end user perspective.<br />
<br />
==== Cells v2 ====<br />
<br />
* Some initial parts of Cells v2 support have been added, but this feature is not yet ready to use.<br />
* New 'nova-manage api_db sync' and 'nova-manage api_db version' commands have been added for working with the new API database for cells; nothing uses this database yet, so it is not necessary to set it up.<br />
<br />
==== Compute Drivers ====<br />
<br />
===== Hyper-V =====<br />
<br />
* Support for generation 2 VMs: https://blueprints.launchpad.net/nova/+spec/hyper-v-generation-2-vms<br />
* Support for SMB based volumes, to sit alongside the existing iSCSI volume support: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/hyper-v-smbfs-volume-support.html<br />
* Support for x509 certificate based key pairs: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/keypair-x509-certificates.html<br />
* Host power actions now work with Hyper-V: https://blueprints.launchpad.net/nova/+spec/hyper-v-host-power-actions<br />
<br />
===== Libvirt (KVM) =====<br />
<br />
* NFV related features:<br />
** NUMA based scheduling: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/input-output-based-numa-scheduling.html<br />
** Pinning guest vCPUs: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/virt-driver-cpu-pinning.html<br />
** Large page support: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/virt-driver-large-pages.html<br />
* vhostuser VIF driver: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/libvirt_vif_vhostuser.html<br />
* Support for running KVM on IBM System z: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/libvirt-kvm-systemz.html<br />
* Support for parallels: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/pcs-support.html<br />
* Support for SMB based volumes: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/libvirt-smbfs-volume-support.html<br />
* Quiesce using QEMU guest agent: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/quiesced-image-snapshots-with-qemu-guest-agent.html<br />
* Quobyte volume support: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/quobyte-nova-driver.html<br />
* Support for QEMU iSCSI initiator: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/qemu-built-in-iscsi-initiator.html<br />
<br />
===== VMware =====<br />
<br />
* Support for Ephemeral disks: http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/vmware-ephemeral-disk-support.html<br />
* Support for vSAN: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/vmware-vsan-support.html<br />
* Support for OVA images: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/vmware-driver-ova-support.html<br />
* Support for SPBM based storage policies: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/vmware-spbm-support.html<br />
<br />
===== Ironic =====<br />
<br />
* Support for passing flavor capabilities to Ironic: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/pass-flavor-capabilities-to-ironic-virt-driver.html<br />
<br />
=== Known Issues ===<br />
<br />
* Evacuate recovery code has the potential to destroy data. On nova-compute startup, instances reported by the hypervisor are examined to see if they have moved (i.e. been evacuated) from the current host during the outage. If the determination is made that they were, then they are destroyed locally. This has the potential to choose incorrectly and destroy instances unexpectedly. On libvirt-like nodes, this can be triggered by changing the system hostname. On vmware-like nodes, this can be triggered by attempting to manage a single vcenter deployment from two different hosts (with different hostnames). This will be fixed properly in Liberty, but for now deployments that wish to disable this behavior as a preventive measure can set workarounds.destroy_after_evacuate=False. NOTE: This is not a regression and has been a flaw in the design of the evacuate feature since its introduction. There is no easy fix for this, hence this workaround to limit the potential for damage. The proposed fix in liberty is here: https://review.openstack.org/#/c/161444/.<br />
<br />
* The generated example configuration files may be missing some oslo-related configuration options<br />
<br />
=== Upgrade Notes ===<br />
<br />
Below are changes you should be aware of when upgrading. Where possible, the git commit hash is provided so you can find more information:<br />
<br />
* Neutron ports are no longer deleted after your server is deleted, if you created them outside of Nova: 1153a46738fc3ffff98a1df9d94b5a55fdd58777<br />
* EC2 API support has been deprecated, and is likely to be removed in kilo, see f098398a836e3671c49bb884b4a1a1988053f4b2<br />
* Websocket proxies need to be upgraded in a lockstep with the API nodes, as older API nodes will not be sending the access_url when authorizing console access, and newer proxy services (this commit and onward) would fail to authorize such requests 9621ccaf05900009d67cdadeb1aac27368114a61<br />
* After fully upgrading to kilo (i.e. all nodes are running kilo code), you should start a background migration of flavor information from its old home to its new home. Kilo conductor nodes will do this on the fly when necessary, but the rest of the idle data needs to be migrated in the background. This is critical to complete before the Liberty release, where support for the old location will be dropped. Use "nova-manage migrate-flavor-data" to perform this transition.<br />
* Due to the improvements in Nova v2.1 API policy enforcement, many changes were made to the v2.1 API policies. Because the v2.1 API had not been released before, these changes are not backwards compatible. It is better to start from the new sample policy configuration than from the old one.<br />
* VMware rescue VM behaviour no longer creates a new VM and instead happens in place: cd1765459a24e52e1b933c8e05517fed75ac9d41<br />
* force_config_drive = always has been deprecated, and force_config_drive = True should be used instead: c12a78b35dc910fa97df888960ef2b9a64557254<br />
* If you are running Hyper-V and deployed code that was past this commit: b4d57ab65836460d0d9cb8889ec2e6c3986c0a9b but before this commit: c8e9f8e71de64273f10498c5ad959634bfe79975, you may have problems that need to be resolved manually; see: c8e9f8e71de64273f10498c5ad959634bfe79975<br />
* Changed the default value of: multi_instance_display_name_template see: 609b2df339785bff9e30a9d67d5c853562ae3344<br />
* Please use "nova-manage db null_instance_uuid_scan" to ensure the DB migrations will apply cleanly, see: c0ea53ce353684b48303fc59393930c3fa5ade58<br />
<br />
== OpenStack Image Service (Glance) ==<br />
=== Key New Features ===<br />
* TBD<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Dashboard (Horizon) ==<br />
<br />
=== Key New Features ===<br />
* TBD<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Identity (Keystone) ==<br />
<br />
=== Key New Features ===<br />
<br />
* Keystone now supports Hierarchical Multitenancy. Projects can be nested under other projects via the <code>parent_id</code> optional attribute when creating a project.<br />
** TBD - More info.<br />
* Fernet tokens - Keystone provides another token format that allows tokens to be non-persistent (no longer stored in a database). The new token format promises improved scalability and performance for Keystone.<br />
* WebSSO - If using a trusted identity provider for federating users, Keystone now provides the ability for a user to authenticate with Horizon using the credentials of an existing IdP, through a Single Sign-On page.<br />
* Pluggable Assignment Model - TBD - More Info<br />
* Assignment / Resource Split - TBD - More Info<br />
* Extensions to Core functionality - The Keystone team is moving away from the concept of "extensions"; all previous extensions are now enabled by default. These now-core functions will be marked as Experimental or Stable (as assessed by the Keystone team). The previous extensions that are now enabled by default are: OS-FEDERATION, OS-OAUTH1, OS-ENDPOINT-POLICY and OS-EP-FILTER<br />
* (Experimental) Per-Domain Identity Backend Configuration can be stored in SQL - TBD More Info<br />
* [http://docs.openstack.org/developer/keystone/configure_federation.html#keystone-as-an-identity-provider-idp Keystone-to-Keystone Identity Federation] is now considered stable.<br />
** IdP Remote ID Registration - Keystone added the ability to associate many `Remote IDs` with a single Identity Provider. This helps in cases where many identity providers use a common mapping.<br />
** ECP Wrapped Assertions - In addition to generating a SAML assertion about a user, Keystone has also added the ability to create an ECP wrapped SAML assertion. This will create a more seamless integration for Keystone to Keystone clients.<br />
* Keystone now supports [http://docs.openstack.org/developer/keystone/extensions/openidc.html OpenID Connect] as a federated identity authentication mechanism.<br />
* Trusts now support redelegation. If allowed when the trust is created, a trustee can redelegate the roles from the trust via another trust.<br />
* It is now possible to request an explicitly unscoped token from Keystone.<br />
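Hierarchical Multitenancy can be pictured as a tree of projects linked by parent_id. The sketch below is a toy in-memory model, not the Keystone API; real projects are created through the v3 API with an optional parent_id attribute, and the project names here are invented:<br />

```python
# Toy model of hierarchical multitenancy: each project may carry an
# optional parent_id, forming a tree. Walking parent_id links yields
# a project's full ancestry. Illustration only, not Keystone's API.

projects = {
    "acme":        {"parent_id": None},        # root project
    "acme-dev":    {"parent_id": "acme"},      # nested under acme
    "acme-dev-ci": {"parent_id": "acme-dev"},  # nested two levels deep
}

def ancestry(project_id):
    """Return the chain of projects from this one up to the root."""
    chain = []
    while project_id is not None:
        chain.append(project_id)
        project_id = projects[project_id]["parent_id"]
    return chain

assert ancestry("acme-dev-ci") == ["acme-dev-ci", "acme-dev", "acme"]
```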
<br />
=== Known Issues ===<br />
<br />
* TBD<br />
<br />
=== Upgrade Notes ===<br />
<br />
* XML support in Keystone has been removed as of Kilo. When upgrading from Juno to Kilo, it is recommended that references to XML and XmlBodyMiddleware be removed from the [https://github.com/openstack/keystone/blob/master/etc/keystone-paste.ini Keystone Paste configuration]. This includes removing the XML middleware filters and their references from the public_api, admin_api, api_v3, public_version_api, admin_version_api and any other pipelines that may contain the XML filters.<br />
* [http://specs.openstack.org/openstack/openstack-specs/specs/no-downward-sql-migration.html SQL Schema Downgrades are no longer supported]. This change is the result of an evaluation which found that downward SQL migrations are not well tested and become increasingly difficult to support given the volume of data changes that occur in many of the migrations.<br />
* The following python libraries are now required: [https://pypi.python.org/pypi/cryptography cryptography], [https://pypi.python.org/pypi/msgpack-python msgpack-python], [https://pypi.python.org/pypi/pysaml2 pysaml2] and [https://pypi.python.org/pypi/oauthlib oauthlib]<br />
<br />
== OpenStack Network Service (Neutron) ==<br />
=== Key New Features ===<br />
* DVR now supports VLANs in addition to VXLAN/GRE<br />
* ML2 Hierarchical Port Binding<br />
* New LBaaS Version 2 API<br />
* Portsecurity support for the OVS ML2 Driver<br />
<br />
* New Plugins supported in Kilo include the following:<br />
** A10 Networks LBaaS V2 Driver<br />
** Brocade LBaaS V2 Driver<br />
** Brocade ML2 driver for MLX and ICX switches<br />
** Brocade L3 routing plugin for MLX switch<br />
** Brocade Vyatta vRouter L3 Plugin<br />
** Brocade Vyatta vRouter Firewall Driver<br />
** Brocade Vyatta vRouter VPN Driver<br />
** Cisco CSR VPNaaS Driver<br />
** Dragonflow SDN based Distributed Virtual Router L3 Plugin<br />
** Freescale FWaaS Driver<br />
** Intel McAfee NGFW FWaaS Driver<br />
** IPSEC Strongswan VPNaaS Driver<br />
<br />
=== Known Issues ===<br />
* The Firewall-as-a-Service project is still marked as experimental for the Kilo release.<br />
* Bug [https://bugs.launchpad.net/neutron/+bug/1438819 1438819]<br />
** When a new subnet is created on an external network, all existing routers with gateways on the network will get a new address allocated from it. For IPv4 networks, this could consume the entire subnet for router gateway ports.<br />
<br />
=== Upgrade Notes ===<br />
<br />
Since Havana, Neutron has no longer supported an explicit lease database (https://bugs.launchpad.net/bugs/1202392). This left dead code, including an unused environment variable. In order to remove the dead code (https://review.openstack.org/#/c/152398/), a change to the dhcp filter is required, so that the line:<br />
<br />
'''dnsmasq: EnvFilter, dnsmasq, root, NEUTRON_NETWORK_ID='''<br />
<br />
is replaced by:<br />
<br />
'''dnsmasq: CommandFilter, dnsmasq, root'''<br />
<br />
After advanced services were split into separate packages and received their own service configuration files (specifically, etc/neutron/neutron_lbaas.conf, etc/neutron/neutron_fwaas.conf and etc/neutron/neutron_vpnaas.conf), the active service provider configuration can differ after an upgrade (specifically, the default load balancer (haproxy) and VPN (openswan) providers can be enabled for you even though you previously disabled them in neutron.conf). Please make sure you review the configuration after upgrading so that it reflects the desired state of service providers.<br />
<br />
Note: this will have no effect if the related service plugin is not loaded in neutron.conf.<br />
<br />
<br />
* The default value of api_workers is now equal to the number of CPUs in the host. If you currently use the default, ensure you set api_workers to a reasonable number for your installation. (https://review.openstack.org/#/c/140493/)<br />
* The neutron.allow_duplicate_networks config option is deprecated in Kilo and will be removed in Liberty where the default behavior will be to just allow multiple ports attached to an instance on the same network in Neutron. (https://review.openstack.org/163581)<br />
* The linuxbridge agent now enables VXLAN by default (https://review.openstack.org/160826)<br />
* neutron-ns-metadata-proxy can now be run as non-root (https://review.openstack.org/147437)<br />
<br />
=== Other Notes (Deprecation/EOL etc) ===<br />
<br />
* Deprecation<br />
** Brocade Classic plugin for Brocade's VDX/VCS series of hardware switches will be deprecated in the L-Release. The functionality provided by this plugin is now addressed by the ML2 Driver available for the VDX series of hardware. The plugin is slated for removal after this release cycle.<br />
<br />
<br />
== OpenStack Block Storage (Cinder) ==<br />
=== Key New Features ===<br />
* From this point forward, new database schema upgrades will not require restarting Cinder services right away; the services are now independent of schema upgrades. This is part one of Cinder's support for rolling upgrades!<br />
* Ability to add/remove volumes from an existing consistency group. [http://docs.openstack.org/admin-guide-cloud/content/consistency-groups.html Read docs for more info].<br />
* Ability to create a consistency group from an existing consistency group snapshot. [http://docs.openstack.org/admin-guide-cloud/content/consistency-groups.html Read docs for more info].<br />
* Create more fine tuned filters/weighers to set how the scheduler will choose a volume backend. [http://docs.openstack.org/admin-guide-cloud/content/driver_filter_weighing.html Read the docs for more info].<br />
* Encrypted volumes can now be backed up using the Cinder backup service. [http://docs.openstack.org/admin-guide-cloud/content/volume-backup-restore.html Read the docs for more info].<br />
* Ability to create private volume types. This is ideal when you want to make a volume type available to only a specific tenant, or to test it before making it available to your cloud. To do so, use the ''cinder type-create <name> --is-public'' command.<br />
* Oversubscription with thin provision is configurable. [http://docs.openstack.org/admin-guide-cloud/content/over_subscription.html Read docs for more info].<br />
* Ability to add descriptions to volume types. To do so, use ''cinder type-create <name> <description>''<br />
* Cinder now can return multiple iSCSI paths information so that the connector can attach volumes even when the primary path is down ([https://review.openstack.org/#/c/134681/ when connector's multipath feature is enabled] or [https://review.openstack.org/#/c/140877/ not enabled]).<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
* The 'host' config option for multiple-storage backends in cinder.conf is renamed to 'backend_host' in order to avoid a naming conflict with the 'host' to locate redis. If you use this option, please ensure your configuration files are updated.<br />
<br />
== OpenStack Telemetry (Ceilometer) ==<br />
=== Key New Features ===<br />
* Support for adding jitter to polling cycles to ensure pollsters do not all query a service's API at the same time<br />
* Ceilometer API RBAC support<br />
* Improved Event support:<br />
** Multi-pipeline support to enable unique processing and publishing of events<br />
** Enabled ability to capture raw notification messages for auditing and postmortem analysis<br />
** Support for persisting events into ElasticSearch<br />
** Publishing support to database, HTTP, file, Kafka and oslo.messaging-supported message queues<br />
** Option to split off the events persistence into a separate database<br />
** Telemetry now supports collecting and storing all event-type meters as events. A new option, ''disable_non_metric_meters'', was added to the configuration to make it possible to turn off storing these events as samples. For further information please see the [http://docs.openstack.org/trunk/config-reference/content/ch_configuring-openstack-telemetry.html Telemetry Configuration Reference]<br />
** The Administrator Guide in OpenStack Manuals was updated with a new [http://docs.openstack.org/admin-guide-cloud/content/section_telemetry-events.html Events section], where you can find further information about this functionality.<br />
* Improved pipeline publishing support:<br />
** Support to publish events and samples to Kafka or HTTP targets<br />
** Publish data to multiple queues<br />
* Additional meters<br />
** memory and disk meters for Hyper-V<br />
** disk meters for LibVirt<br />
** power and thermal related IPMI meters, more meters from NodeManager<br />
** ability to meter Ceph<br />
* IPv6 support enabled in the Ceilometer UDP publisher and collector<br />
* [http://launchpad.net/gnocchi Gnocchi] dispatch support for ceilometer-collector<br />
* Self-disabled pollster mechanism<br />
<br />
=== Known Issues ===<br />
* TBD<br />
<br />
=== Upgrade Notes ===<br />
* Deprecated meters:<br />
** The instance:<flavor> meter is deprecated in the Kilo release. In order to retrieve samples or statistics based on flavor you can use the following queries:<br />
*** statistics: ''ceilometer statistics -m instance -g resource_metadata.instance_type''<br />
*** samples: ''ceilometer sample-list -m instance -q metadata.instance_type=<value>''<br />
* Middleware used to meter Swift was previously packaged in Ceilometer and is now deprecated. It has been separated into its own library: ceilometermiddleware.<br />
** Juno configuration: http://docs.openstack.org/juno/install-guide/install/apt/content/ceilometer-swift.html<br />
** Kilo configuration: http://docs.openstack.org/kilo/install-guide/install/apt/content/ceilometer-swift.html<br />
<br />
== OpenStack Orchestration (Heat) ==<br />
<br />
=== Key New Features ===<br />
<br />
* Improved scaling using nested stacks<br />
** Heat will now perform actions over RPC on any resource that is based on a template. This should help spread the load when dealing with large, complex stacks.<br />
* oslo versioned objects<br />
** The database layer now uses oslo versioned objects to aid in future upgrades. This will allow a newly upgraded heat-engine to use a database with an older schema. Note that this will not help with upgrading to kilo.<br />
* New template functions<br />
** There is a new HOT template version "20150430" which includes two new functions "digest" and "repeat". <br />
* Multiregion stacks<br />
** MORE DETAIL HERE<br />
* Software config signalling using swift<br />
** MORE DETAIL HERE<br />
* Triggering new software deployments from heatclient<br />
** MORE DETAIL HERE<br />
* stack snapshots<br />
** MORE DETAIL HERE<br />
* Access to Heat services<br />
** Administrators now have access to service status similar to what other projects provide, in the form of "heat-manage service-list" and via Horizon. This feature reports the active heat-engines.<br />
* nova and neutron property constraints<br />
** MORE DETAIL HERE<br />
* stack hooks<br />
** MORE DETAIL HERE<br />
* New contributed resources<br />
** Mistral resources<br />
** Keystone resources (supported with a Keystone v3 server) for Project, Role, User and Group<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
None yet<br />
<br />
== OpenStack Database service (Trove) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
None yet<br />
<br />
== OpenStack Data Processing service (Sahara) ==<br />
=== Key New Features ===<br />
* New plugins, their features and versions:<br />
** MAPR<br />
** Apache Storm<br />
** Apache Hadoop 2.6.0 was added and Apache Hadoop 2.4.1 was deprecated<br />
** New services were added to the CDH plugin, including HDFS, YARN, Spark, Oozie, HBase, ZooKeeper and others<br />
* Added indirect VM access for better utilization of floating IPs<br />
* Added event log support to have detailed info about provisioning progress<br />
* Optional default node group and cluster templates per plugin<br />
* Horizon updates:<br />
** Guided cluster creation and job execution<br />
** Filtering on search for objects<br />
* Editing of Node Group templates and Cluster templates implemented<br />
* Added Shell Job Type for clusters running Oozie<br />
* New Job Types endpoint to query list of the supported Job Types<br />
<br />
=== Known Issues ===<br />
-<br />
<br />
=== Upgrade Notes ===<br />
Details: http://docs.openstack.org/developer/sahara/userdoc/upgrade.guide.html#juno-kilo<br />
<br />
* Sahara now requires a policy.json configuration file.<br />
<br />
== OpenStack Bare Metal service (Ironic) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== State Machine ====<br />
Ironic now uses a formal model for the logical state of each node it manages.<ref name="states">[http://specs.openstack.org/openstack/ironic-specs/specs/kilo/new-ironic-state-machine.html#proposed-change]New Ironic State Machine</ref> This has enabled the addition of two new processes: '''cleaning''' and '''inspection'''.<br />
* Automatic disk erasure between tenants is now enabled by default. This may be extended to perform additional '''cleaning''' steps, such as re-applying firmware, resetting BIOS settings, etc.<ref name="cleaning">[http://docs.openstack.org/developer/ironic/deploy/cleaning.html]Node Cleaning</ref><br />
* Both in-band and out-of-band methods are available to '''inspect''' hardware. These methods may be used to update Node properties automatically.<ref name="inspect">[http://docs.openstack.org/developer/ironic/deploy/install-guide.html#hardware-inspection]Hardware Inspection</ref><br />
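The formal state model can be pictured as a transition table keyed by (state, event). The states and edges below are a simplified, hypothetical subset for illustration, not Ironic's complete state diagram:<br />

```python
# Toy transition table inspired by Ironic's formal node state machine,
# including states for the new cleaning and inspection processes. The
# states and events here are a simplified illustration only.

TRANSITIONS = {
    ("available", "deploy"):   "deploying",
    ("deploying", "done"):     "active",
    ("active", "delete"):      "cleaning",    # disks erased between tenants
    ("cleaning", "done"):      "available",
    ("manageable", "inspect"): "inspecting",  # in-band or out-of-band
    ("inspecting", "done"):    "manageable",
}

def advance(state, event):
    """Apply an event to a node state, rejecting undefined transitions."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"invalid transition: {state} on {event}")

assert advance("active", "delete") == "cleaning"
assert advance("cleaning", "done") == "available"
```

Modelling the lifecycle as an explicit table is what makes it possible to slot in new processes like cleaning without touching unrelated states.<br />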
<br />
==== Version Headers ====<br />
The Ironic REST API expects a new ''X-OpenStack-Ironic-API-Version'' header to be passed with each HTTP[S] request. This header allows the client and server to negotiate a mutually supported interface.<ref name="api-version">[http://specs.openstack.org/openstack/ironic-specs/specs/kilo/api-microversions.html]REST API "micro" versions </ref> In the absence of this header, the REST service defaults to a compatibility mode and yields responses compatible with Juno clients. This mode, however, prevents access to most features introduced in Kilo.<br />
<br />
==== Hardware Driver Changes ====<br />
The following new drivers were added:<br />
* [http://docs.openstack.org/developer/ironic/drivers/amt.html AMT]<br />
* [http://docs.openstack.org/developer/ironic/deploy/drivers.html#irmc iRMC]<br />
* [http://docs.openstack.org/developer/ironic/drivers/vbox.html VirtualBox (testing driver only)]<br />
<br />
<br />
The following enhancements were made to existing drivers:<br />
* [http://docs.openstack.org/developer/ironic/deploy/install-guide.html#enabling-the-configuration-drive-configdrive Configdrives] may be used with the "agent" drivers in lieu of a metadata service, if desired.<br />
* SeaMicro driver supports serial console<br />
* [http://docs.openstack.org/developer/ironic/drivers/ilo.html#uefi-secure-boot-support iLO driver supports UEFI secure boot]<br />
* [http://docs.openstack.org/developer/ironic/drivers/ilo.html#hardware-inspection iLO driver supports out-of-band node inspection]<br />
* [http://docs.openstack.org/developer/ironic/drivers/ilo.html#ilo-node-cleaning iLO driver supports resetting ilo and bios during cleaning]<br />
<br />
<br />
Support for third-party and out-of-tree drivers is enhanced by the following changes:<br />
* Drivers may store their own "internal" information about Nodes.<br />
* Drivers may register their own periodic tasks to be run by the Conductor.<br />
* ''vendor_passthru'' methods now support additional HTTP methods (e.g., PUT and POST).<br />
* ''vendor_passthru'' methods are now discoverable in the REST API. See [http://docs.openstack.org/developer/ironic/dev/drivers.html#node-vendor-passthru node vendor passthru] and [http://docs.openstack.org/developer/ironic/dev/drivers.html#driver-vendor-passthru driver vendor passthru]<br />
<br />
==== Other Changes ====<br />
* [http://docs.openstack.org/developer/ironic/deploy/install-guide.html#logical-names Logical names] may be used to address Nodes, in addition to their canonical UUID. <br />
* For servers with varied local disks, [http://docs.openstack.org/developer/ironic/deploy/install-guide.html#specifying-the-disk-for-deployment ''hints''] may be supplied that affect which disk device the OS is provisioned to.<br />
* Support for fetching kernel, ramdisk, and instance images from HTTP[S] sources directly has been added to remove the dependency on Glance. [http://docs.openstack.org/developer/ironic/deploy/install-guide.html#using-ironic-as-a-standalone-service Using ironic as a standalone service]<br />
* Nodes may be placed into ''[http://docs.openstack.org/developer/ironic/deploy/install-guide.html#maintenance-mode maintenance mode]'' via REST API calls. An optional ''maintenance reason'' may be specified when doing so.<br />
<br />
=== Known Issues ===<br />
* '''Running more than one nova-compute process is not officially supported.'''<br />
** While Ironic does include a ClusteredComputeManager, which allows running more than one nova-compute process with Ironic, it should be considered experimental and has many known problems.<br />
* Drivers using the "agent" deploy mechanism do not support "rebuild --preserve-ephemeral"<br />
<br />
=== Upgrade Notes ===<br />
* IPMI Passwords are now obfuscated in REST API responses. This may be disabled by changing API policy settings.<br />
* The "agent" class of drivers now support both whole-disk and partition based images.<br />
* The driver_info parameters of "pxe_deploy_kernel" and "pxe_deploy_ramdisk" are deprecated in favour of "deploy_kernel" and "deploy_ramdisk". <br />
* Drivers implementing their own version of the vendor_passthru() method are deprecated in favour of the new @passthru decorator.<br />
<br />
==== Juno to Kilo ====<br />
The recommended upgrade process is documented here:<br />
* http://docs.openstack.org/developer/ironic/deploy/upgrade-guide.html#upgrading-from-juno-to-kilo<br />
<br />
==== Upgrading from Icehouse "nova-baremetal" ====<br />
<br />
An upgrade from an Icehouse Nova installation using the "baremetal" driver directly to Kilo Ironic is untested and unsupported. Instead, please follow this upgrade path:<br />
# Icehouse Nova "baremetal" -> Juno Nova "baremetal"<br />
# Juno Nova "baremetal" -> Juno Ironic<br />
# Juno Ironic -> Kilo Ironic<br />
<br />
Documentation for steps 1 and 2 is available at: https://wiki.openstack.org/wiki/Ironic/NovaBaremetalIronicMigration<br />
<br />
== OpenStack Documentation ==<br />
<br />
* New [http://docs.openstack.org docs.openstack.org] landing page and new web design for [http://docs.openstack.org/user-guide/ End User Guide] and [http://docs.openstack.org/user-guide-admin/ Admin User Guide]<br />
* First release of the [http://docs.openstack.org/networking-guide/ Networking Guide]<br />
* Migration to RST for [http://docs.openstack.org/user-guide/ End User Guide] and [http://docs.openstack.org/user-guide-admin/ Admin User Guide]<br />
* New specialty teams:<br />
** Install Guides<br />
** Networking Guide<br />
** High Availability Guide<br />
** User Guides (Admin and End User)<br />
* First App Tutorial sprint<br />
* Driver documentation clarification and connections</div>
ReleaseNotes/Kilo 2015-04-29 <p>Asalkeld: /* OpenStack Orchestration (Heat) */</p>
<hr />
<div>= OpenStack 2015.1.0 (Kilo) Release Notes =<br />
<br />
<br />
{| style="color:#000000; border:solid 3px #A8A8A8; padding:2em; margin:2em 0; background-color:#FFFFFF; vertical-align:middle;"<br />
| style="padding:1em" | The Kilo release of OpenStack is dedicated to the loving memory of '''Chris Yeoh''', who left his family and us way too soon.<br />
|}<br />
<br />
<br />
<div style="column-count:3;-moz-column-count:3;-webkit-column-count:3"><br />
__TOC__<br />
</div><br />
<br />
<br />
== General Upgrade Notes ==<br />
<br />
* TBD<br />
<br />
== OpenStack Object Storage (Swift) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== Erasure Code (beta) ====<br />
Swift now supports an erasure-code (EC) storage policy type. This allows deployers to achieve very high durability with less raw capacity than is used in replicated storage. However, EC requires more CPU and network resources, so it is not a good fit for every use case. EC is well suited to storing large, infrequently accessed data in a single region.<br />
<br />
Swift's implementation of erasure codes is meant to be transparent to end users. There is no API difference between replicated storage and EC storage.<br />
<br />
To support erasure codes, Swift now depends on PyECLib and liberasurecode. liberasurecode is a pluggable library that allows for the actual EC algorithm to be implemented in a library of your choosing.<br />
<br />
Full docs are at http://swift.openstack.org/overview_erasure_code.html<br />
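As an illustrative sketch only (the policy index, name, fragment counts, and ec_type value below are placeholders; consult the linked docs for the options supported by your PyECLib/liberasurecode build), an EC policy is declared in swift.conf alongside the existing replicated policies:

```ini
# swift.conf -- example erasure-code storage policy (values are placeholders)
[storage-policy:1]
name = ec-example
policy_type = erasure_coding
ec_type = jerasure_rs_vand
ec_num_data_fragments = 10
ec_num_parity_fragments = 4
ec_object_segment_size = 1048576
```

A matching object ring must then be built with one device assignment per fragment (data + parity); see the linked overview for the full procedure.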
<br />
==== Composite tokens ====<br />
Composite tokens allow other OpenStack services to store data in Swift on behalf of a client so that neither the client nor the service can update the data without both parties' consent.<br />
<br />
For example, a user requests that Nova save a snapshot of a VM. Nova passes the request to Glance, and Glance writes the image to a Swift container as a set of objects. In this case, the user cannot modify the snapshot without also having a valid token from the service, nor can the service update the data without a valid token from the user. But the data is still stored in the user's account in Swift, which makes accounting simpler.<br />
<br />
Full docs are at http://swift.openstack.org/overview_backing_store.html<br />
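Concretely, the service presents two tokens so that both must validate. A minimal sketch, assuming the ''X-Service-Token'' header convention used by keystonemiddleware (the header name and token values here are illustrative, not taken from these notes):

```python
def composite_auth_headers(user_token, service_token):
    """Headers for a Swift request made by a service on a user's behalf.

    The user's token proves the user consented to the operation; the
    service token proves the request really came from the service.
    Swift can be configured to require both before allowing access.
    """
    return {
        "X-Auth-Token": user_token,        # the end user's token
        "X-Service-Token": service_token,  # the service's own token
    }

headers = composite_auth_headers("user-tok", "svc-tok")
```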
<br />
==== Data placement updates for smaller, unbalanceable clusters ====<br />
Swift's data placement now accounts for device weight. This allows operators to gradually add new zones and regions without immediately causing a large amount of data to be moved. Also, if a cluster is unbalanced (e.g. a two-zone cluster where one zone has twice the capacity of the other), Swift will more efficiently use the available space and warn when replicas are placed without enough dispersion in the cluster.<br />
<br />
==== Global cluster replication improvements ====<br />
Replication between regions will now only move one replica per replication run. This gives the remote region a chance to replicate internally and thus avoid more data moving over the WAN.<br />
<br />
<br />
=== Known Issues ===<br />
* As a beta release, EC support is nearly feature complete, but it lacks support for some features (such as multi-range reads) and has not had a full performance characterization. This feature relies on ssync for durability. Deployers are urged to do extensive testing and not to store production data using an erasure code storage policy.<br />
<br />
=== Upgrade Notes ===<br />
As always, you can upgrade to this version of Swift with no end-user downtime.<br />
<br />
* In order to support erasure codes, Swift has a new dependency on PyECLib (and liberasurecode, transitively). Also, the minimum required version of eventlet has been raised.<br />
<br />
<br />
<br />
<br />
<br />
== OpenStack Compute (Nova) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== API v2.1 ====<br />
<br />
* We have the first release of the next generation of the Nova API, v2.1. v2.1 is designed to be backwards compatible with v2.0, with the addition of strong API validation. All changes to the API are discoverable via the advertised microversion. For more details see: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/api-microversions.html<br />
<br />
* For Kilo, by default we are still using the v2.0 API code to serve v2.0 API requests. It is hoped that in Liberty, v2.1 will be used to serve requests for both v2.0 and v2.1.<br />
<br />
* The v2.0 API is now frozen, and all new features will be added to the v2.1 API using the microversions mechanism. Microversion increments released with Kilo are:<br />
** Extending the keypair API to support for x509 certificates, to be used with Windows WinRM, is one of the first API features added as a microversion in the v2.1 API.<br />
** Exposing additional attributes in os-extended-server-attributes<br />
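A client opts in to a microversion with a request header; without it, the server behaves as the base 2.1 version. A minimal sketch of building such a request (the token and version values are placeholders, and the header name follows the v2.1 microversion convention):

```python
def nova_request_headers(token, microversion=None):
    """Build headers for a Nova v2.1 API request.

    Omitting the microversion header makes the server treat the
    request as the base 2.1 version.
    """
    headers = {
        "X-Auth-Token": token,
        "Accept": "application/json",
    }
    if microversion is not None:
        # Opt in to a specific microversion for this request only.
        headers["X-OpenStack-Nova-API-Version"] = microversion
    return headers

headers = nova_request_headers("my-token", "2.3")
```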
<br />
* python-novaclient does not yet have support for the v2.1 API<br />
<br />
* Policy enforcement in the Nova v2.1 API has been improved.<br />
** Policy is now enforced only at the entry point of each API.<br />
** There are no longer duplicated rules for a single API.<br />
** All v2.1 API policy rules use the 'os_compute_api' prefix, which distinguishes them from v2 API rules.<br />
** Previously, parts of the Nova API were not configurable by policy due to hard-coded permission checks at the db layer, which always required an admin user. Some of these hard-coded checks have been removed from the v2.1 API, making its policy configurable. The rest of the hard-coded checks will be removed in Liberty.<br />
<br />
==== Upgrade Support ====<br />
<br />
* We have reduced the data migrations that happen in the DB migration scripts, this now happens in a "lazy" way inside the DB objects code. There are nova-manage commands to help force migration of the data. For more details see: http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/flavor-from-sysmeta-to-blob.html<br />
<br />
* Change https://review.openstack.org/#/c/97946/ adds database migration 267, which scans for null instances.uuid records and will fail if any are found, since the migration ultimately needs to make instances.uuid non-nullable and adds a UniqueConstraint on that column. A helper script is provided to search for null instances.uuid records before running the database migrations. Before running 'nova-manage db sync', run the helper script 'nova-manage db null_instance_uuid_scan', which, by default, will just search and dump results without changing anything. Pass the --delete option to the null_instance_uuid_scan command to automatically remove any records where instances.uuid is null.<br />
<br />
==== Scheduler ====<br />
<br />
* A selection of performance optimisations<br />
* We are in the process of making structural changes to the scheduler that will help improve our ability to evolve and improve scheduling. This should not be visible from an end user perspective.<br />
<br />
==== Cells v2 ====<br />
<br />
* Some initial parts of cells v2 support have been added, but this feature is not yet ready to use.<br />
* New 'nova-manage api_db sync' and 'nova-manage api_db version' commands have been added for working with the new API database for cells, but nothing uses this database yet, so it is not necessary to set it up.<br />
<br />
==== Compute Drivers ====<br />
<br />
===== Hyper-V =====<br />
<br />
* Support for generation 2 VMs: https://blueprints.launchpad.net/nova/+spec/hyper-v-generation-2-vms<br />
* Support for SMB based volumes, to sit along side existing iSCSI volume support: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/hyper-v-smbfs-volume-support.html<br />
* Support for x509 certificate based key pairs: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/keypair-x509-certificates.html<br />
* Host power actions now work with Hyper-V: https://blueprints.launchpad.net/nova/+spec/hyper-v-host-power-actions<br />
<br />
===== Libvirt (KVM) =====<br />
<br />
* NFV related features:<br />
** NUMA based scheduling: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/input-output-based-numa-scheduling.html<br />
** Pinning guest vCPUs: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/virt-driver-cpu-pinning.html<br />
** Large page support: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/virt-driver-large-pages.html<br />
* vhostuser VIF driver: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/libvirt_vif_vhostuser.html<br />
* Support for running KVM on IBM System z: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/libvirt-kvm-systemz.html<br />
* Support for parallels: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/pcs-support.html<br />
* Support for SMB based volumes: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/libvirt-smbfs-volume-support.html<br />
* Quiesce using QEMU guest agent: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/quiesced-image-snapshots-with-qemu-guest-agent.html<br />
* Quobyte volume support: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/quobyte-nova-driver.html<br />
* Support for QEMU iSCSI initiator: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/qemu-built-in-iscsi-initiator.html<br />
<br />
===== VMware =====<br />
<br />
* Support for Ephemeral disks: http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/vmware-ephemeral-disk-support.html<br />
* Support for vSAN: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/vmware-vsan-support.html<br />
* Support for OVA images: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/vmware-driver-ova-support.html<br />
* Support for SPBM based storage policies: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/vmware-spbm-support.html<br />
<br />
===== Ironic =====<br />
<br />
* Support to pass flavor capabilities to ironic: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/pass-flavor-capabilities-to-ironic-virt-driver.html<br />
<br />
=== Known Issues ===<br />
<br />
* Evacuate recovery code has the potential to destroy data. On nova-compute startup, instances reported by the hypervisor are examined to see if they have moved (i.e. been evacuated) from the current host during the outage. If the determination is made that they were, then they are destroyed locally. This has the potential to choose incorrectly and destroy instances unexpectedly. On libvirt-like nodes, this can be triggered by changing the system hostname. On vmware-like nodes, this can be triggered by attempting to manage a single vcenter deployment from two different hosts (with different hostnames). This will be fixed properly in Liberty, but for now deployments that wish to disable this behavior as a preventive measure can set workarounds.destroy_after_evacuate=False. NOTE: This is not a regression and has been a flaw in the design of the evacuate feature since its introduction. There is no easy fix for this, hence this workaround to limit the potential for damage. The proposed fix in liberty is here: https://review.openstack.org/#/c/161444/.<br />
<br />
* The generated example configuration files may be missing some oslo-related configuration options.<br />
<br />
=== Upgrade Notes ===<br />
<br />
Below are changes you should be aware of when upgrading. Where possible, the git commit hash is provided so you can find more information:<br />
<br />
* Neutron ports are no longer deleted after your server is deleted, if you created them outside of Nova: 1153a46738fc3ffff98a1df9d94b5a55fdd58777<br />
* EC2 API support has been deprecated, and is likely to be removed in a future release; see f098398a836e3671c49bb884b4a1a1988053f4b2<br />
* Websocket proxies need to be upgraded in lockstep with the API nodes, as older API nodes will not send the access_url when authorizing console access, and newer proxy services (this commit and onward) will fail to authorize such requests: 9621ccaf05900009d67cdadeb1aac27368114a61<br />
* After fully upgrading to Kilo (i.e. all nodes are running Kilo code), you should start a background migration of flavor information from its old home to its new home. Kilo conductor nodes will do this on the fly when necessary, but the rest of the idle data needs to be migrated in the background. This is critical to complete before the Liberty release, where support for the old location will be dropped. Use "nova-manage migrate-flavor-data" to perform this transition.<br />
* Due to the improvements in Nova v2.1 API policy enforcement, many changes were made to the v2.1 API policy. Because the v2.1 API had not been released before, these changes are not backwards compatible; it is better to use the new policy sample configuration than the old one.<br />
* VMware rescue VM behaviour no longer creates a new VM and instead happens in place: cd1765459a24e52e1b933c8e05517fed75ac9d41<br />
* force_config_drive = always has been deprecated, and force_config_drive = True should be used instead: c12a78b35dc910fa97df888960ef2b9a64557254<br />
* If you are running hyper-v and deployed code that was past this commit: b4d57ab65836460d0d9cb8889ec2e6c3986c0a9b but before this commit: c8e9f8e71de64273f10498c5ad959634bfe79975, you may have problems that need to be resolved manually; see: c8e9f8e71de64273f10498c5ad959634bfe79975<br />
* The default value of multi_instance_display_name_template has changed; see: 609b2df339785bff9e30a9d67d5c853562ae3344<br />
* Please use "nova-manage db null_instance_uuid_scan" to ensure the DB migrations will apply cleanly, see: c0ea53ce353684b48303fc59393930c3fa5ade58<br />
<br />
== OpenStack Image Service (Glance) ==<br />
=== Key New Features ===<br />
* TBD<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Dashboard (Horizon) ==<br />
<br />
=== Key New Features ===<br />
* TBD<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Identity (Keystone) ==<br />
<br />
=== Key New Features ===<br />
<br />
* Keystone now supports Hierarchical Multitenancy. Projects can be nested under other projects via the <code>parent_id</code> optional attribute when creating a project.<br />
** TBD - More info.<br />
* Fernet tokens - Keystone now provides another token format that allows tokens to be non-persistent (no longer stored in a database). The new token format promises improved scalability and performance for Keystone.<br />
* WebSSO - If using a trusted identity provider for federating users, Keystone now provides the ability for a user to authenticate with Horizon using the credentials of an existing IdP, through a Single Sign-On page.<br />
* Pluggable Assignment Model - TBD - More Info<br />
* Assignment / Resource Split - TBD - More Info<br />
* Extensions to Core functionality - The Keystone team is moving away from the concept of "extensions"; all previous extensions are now enabled by default. These now-core functions will be marked as Experimental or Stable (as assessed by the Keystone team). The previous extensions that are now enabled by default are: OS-FEDERATION, OS-OAUTH1, OS-ENDPOINT-POLICY and OS-EP-FILTER<br />
* (Experimental) Per-Domain Identity Backend Configuration can be stored in SQL - TBD More Info<br />
* [http://docs.openstack.org/developer/keystone/configure_federation.html#keystone-as-an-identity-provider-idp Keystone-to-Keystone Identity Federation] is now considered stable.<br />
** IDP Remote ID Registration - Keystone added the ability to associate many `Remote IDs` to a single Identity-Provider. This will help in a case where there are many identity providers that use a common mapping.<br />
** ECP Wrapped Assertions - In addition to generating a SAML assertion about a user, Keystone has also added the ability to create an ECP wrapped SAML assertion. This will create a more seamless integration for Keystone to Keystone clients.<br />
* Keystone now supports [http://docs.openstack.org/developer/keystone/extensions/openidc.html OpenID Connect] as a federated identity authentication mechanism.<br />
* Trusts now support redelegation. If allowed when the trust is created, a trustee can redelegate the roles from the trust via another trust.<br />
* It is now possible to request an explicitly unscoped token from Keystone.<br />
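To make the hierarchical-multitenancy item concrete, here is a minimal sketch of the request body a client might send when nesting a project under another via the ''parent_id'' attribute (the project names and IDs are placeholders; only the ''parent_id'' attribute itself comes from these notes):

```python
import json

def create_project_body(name, domain_id, parent_id=None):
    """JSON body for creating a Keystone v3 project.

    Passing parent_id nests the new project under an existing one,
    which is how hierarchical multitenancy is expressed.
    """
    project = {"name": name, "domain_id": domain_id}
    if parent_id is not None:
        project["parent_id"] = parent_id
    return json.dumps({"project": project})

body = create_project_body("dev-team", "default", parent_id="a1b2c3")
```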
<br />
=== Known Issues ===<br />
<br />
* TBD<br />
<br />
=== Upgrade Notes ===<br />
<br />
* XML support in Keystone has been removed as of Kilo. When upgrading from Juno to Kilo, it is recommended that references to XML and XmlBodyMiddleware be removed from the [https://github.com/openstack/keystone/blob/master/etc/keystone-paste.ini Keystone Paste configuration]. This includes removing the XML middleware filters and their references from the public_api, admin_api, api_v3, public_version_api, admin_version_api and any other pipelines that may contain the XML filters.<br />
* [http://specs.openstack.org/openstack/openstack-specs/specs/no-downward-sql-migration.html SQL Schema Downgrades are no longer supported]. This change is the result of the determination that downward SQL migrations are not well tested and become increasingly difficult to support given the volume of data change that occurs in many of the migrations.<br />
* The following python libraries are now required: [https://pypi.python.org/pypi/cryptography cryptography], [https://pypi.python.org/pypi/msgpack-python msgpack-python], [https://pypi.python.org/pypi/pysaml2 pysaml2] and [https://pypi.python.org/pypi/oauthlib oauthlib]<br />
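The XML-removal step above amounts to two edits in keystone-paste.ini. An illustrative sketch only (the exact filter section in your file may differ; XmlBodyMiddleware is the class named in these notes, the rest is shown schematically):

```ini
# 1. Remove the XML filter section, e.g.:
#    [filter:xml_body]
#    paste.filter_factory = keystone.middleware:XmlBodyMiddleware.factory
#
# 2. Delete the filter's name from each pipeline that lists it
#    (public_api, admin_api, api_v3, ...), e.g.:
#    pipeline = ... xml_body json_body ... public_service   ; before
#    pipeline = ... json_body ... public_service            ; after
```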
<br />
== OpenStack Network Service (Neutron) ==<br />
=== Key New Features ===<br />
* DVR now supports VLANs in addition to VXLAN/GRE<br />
* ML2 Hierarchical Port Binding<br />
* New LBaaS Version 2 API<br />
* Portsecurity support for the OVS ML2 Driver<br />
<br />
* New Plugins supported in Kilo include the following:<br />
** A10 Networks LBaaS V2 Driver<br />
** Brocade LBaaS V2 Driver<br />
** Brocade ML2 driver for MLX and ICX switches<br />
** Brocade L3 routing plugin for MLX switch<br />
** Brocade Vyatta vRouter L3 Plugin<br />
** Brocade Vyatta vRouter Firewall Driver<br />
** Brocade Vyatta vRouter VPN Driver<br />
** Cisco CSR VPNaaS Driver<br />
** Dragonflow SDN based Distributed Virtual Router L3 Plugin<br />
** Freescale FWaaS Driver<br />
** Intel Mcafee NGFW FWaaS Driver<br />
** IPSEC Strongswan VPNaaS Driver<br />
<br />
=== Known Issues ===<br />
* The Firewall-as-a-Service project is still marked as experimental for the Kilo release.<br />
* Bug [https://bugs.launchpad.net/neutron/+bug/1438819 1438819]<br />
** When a new subnet is created on an external network, all existing routers with gateways on the network will get a new address allocated from it. For IPv4 networks, this could consume the entire subnet for router gateway ports.<br />
<br />
=== Upgrade Notes ===<br />
<br />
Since Havana, Neutron has not supported an explicit lease database (https://bugs.launchpad.net/bugs/1202392). This left dead code, including an unused environment variable. In order to remove the dead code (https://review.openstack.org/#/c/152398/), a change to the dhcp.filter file is required, so that the line:<br />
<br />
'''dnsmasq: EnvFilter, dnsmasq, root, NEUTRON_NETWORK_ID='''<br />
<br />
is replaced by:<br />
<br />
'''dnsmasq: CommandFilter, dnsmasq, root'''<br />
<br />
After advanced services were split into separate packages and received their own service configuration files (specifically, etc/neutron/neutron_lbaas.conf, etc/neutron/neutron_fwaas.conf and etc/neutron/neutron_vpnaas.conf), active service provider configuration can be different after upgrade (specifically, default load balancer (haproxy) and vpn (openswan) providers can be enabled for you even though you previously disabled them in neutron.conf). Please make sure you review configuration after upgrade so that it reflects the desired state of service providers.<br />
<br />
Note: this will have no effect if the related service plugin is not loaded in neutron.conf.<br />
<br />
<br />
* The default value of api_workers is now equal to the number of CPUs in the host. If you currently use the default, ensure you set api_workers to a reasonable number for your installation. (https://review.openstack.org/#/c/140493/)<br />
* The neutron.allow_duplicate_networks config option is deprecated in Kilo and will be removed in Liberty where the default behavior will be to just allow multiple ports attached to an instance on the same network in Neutron. (https://review.openstack.org/163581)<br />
* The linuxbridge agent now enables VXLAN by default (https://review.openstack.org/160826)<br />
* neutron-ns-metadata-proxy can now be run as non-root (https://review.openstack.org/147437)<br />
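Given the api_workers default change above, deployments that previously relied on the old default may want to pin the value explicitly. A sketch of the relevant neutron.conf fragment (the value 4 is only an example):

```ini
[DEFAULT]
# The default is now the number of CPUs on the host; pin it explicitly
# if that is too many (or too few) for your deployment.
api_workers = 4
```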
<br />
=== Other Notes (Deprecation/EOL etc) ===<br />
<br />
* Deprecation<br />
** Brocade Classic plugin for Brocade's VDX/VCS series of hardware switches will be deprecated in the L-Release. The functionality provided by this plugin is now addressed by the ML2 Driver available for the VDX series of hardware. The plugin is slated for removal after this release cycle.<br />
<br />
<br />
== OpenStack Block Storage (Cinder) ==<br />
=== Key New Features ===<br />
* From this point forward, new database schema upgrades will not require restarting Cinder services right away; the services are now independent of schema upgrades. This is part one of Cinder's support for rolling upgrades!<br />
* Ability to add/remove volumes from an existing consistency group. [http://docs.openstack.org/admin-guide-cloud/content/consistency-groups.html Read docs for more info].<br />
* Ability to create a consistency group from an existing consistency group snapshot. [http://docs.openstack.org/admin-guide-cloud/content/consistency-groups.html Read docs for more info].<br />
* Ability to create more finely tuned filters/weighers to control how the scheduler chooses a volume backend. [http://docs.openstack.org/admin-guide-cloud/content/driver_filter_weighing.html Read the docs for more info].<br />
* Encrypted volumes can now be backed up using the Cinder backup service. [http://docs.openstack.org/admin-guide-cloud/content/volume-backup-restore.html Read the docs for more info].<br />
* Ability to create private volume types. This is ideal when you want to make a volume type available only to a specific tenant, or to test it before making it available to your cloud. To do so, use ''cinder type-create <name> --is-public false''.<br />
* Oversubscription with thin provision is configurable. [http://docs.openstack.org/admin-guide-cloud/content/over_subscription.html Read docs for more info].<br />
* Ability to add descriptions to volume types. To do so, use ''cinder type-create <name> <description>''.<br />
* Cinder now can return multiple iSCSI paths information so that the connector can attach volumes even when the primary path is down ([https://review.openstack.org/#/c/134681/ when connector's multipath feature is enabled] or [https://review.openstack.org/#/c/140877/ not enabled]).<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
* The 'host' config option for multiple-storage backends in cinder.conf is renamed to 'backend_host' in order to avoid a naming conflict with the 'host' to locate redis. If you use this option, please ensure your configuration files are updated.<br />
<br />
== OpenStack Telemetry (Ceilometer) ==<br />
=== Key New Features ===<br />
* Support for adding jitter to polling cycles, to ensure pollsters are not all querying a service's API at the same time<br />
* Ceilometer API RBAC support<br />
* Improved Event support:<br />
** Multi-pipeline support to enable unique processing and publishing of events<br />
** Enabled ability to capture raw notification messages for auditing and postmortem analysis<br />
** Support for persisting events into ElasticSearch<br />
** Publishing support to database, http, file, kafka and oslo.messaging supported message queues<br />
** Option to split off the events persistence into a separate database<br />
** Telemetry now supports collecting and storing all the event-type meters as events. A new option, ''disable_non_metric_meters'', was added to the configuration to allow turning off the storage of these events as samples. For further information please see the [http://docs.openstack.org/trunk/config-reference/content/ch_configuring-openstack-telemetry.html Telemetry Configuration Reference]<br />
** The Administrator Guide in OpenStack Manuals was updated with a new [http://docs.openstack.org/admin-guide-cloud/content/section_telemetry-events.html Events section], where you can find further information about this functionality.<br />
* Improved pipeline publishing support:<br />
** Support to publish events and samples to Kafka or HTTP targets<br />
** Publish data to multiple queues<br />
* Additional meters<br />
** memory and disk meters for Hyper-V<br />
** disk meters for LibVirt<br />
** power and thermal related IPMI meters, more meters from NodeManager<br />
** ability to meter Ceph<br />
* IPv6 support enabled in Ceilometer udp publisher and collector<br />
* [http://launchpad.net/gnocchi Gnocchi] dispatch support for ceilometer-collector<br />
* Self-disabled pollster mechanism<br />
<br />
=== Known Issues ===<br />
* TBD<br />
<br />
=== Upgrade Notes ===<br />
* Deprecated meters:<br />
** The instance:<flavor> meter is deprecated in the Kilo release. To retrieve samples or statistics based on flavor, you can use the following queries:<br />
*** statistics: ''ceilometer statistics -m instance -g resource_metadata.instance_type''<br />
*** samples: ''ceilometer sample-list -m instance -q metadata.instance_type=<value>''<br />
* Middleware used to meter Swift was previously packaged in Ceilometer and is now deprecated. It has been separated into its own library: ceilometermiddleware.<br />
** Juno configuration: http://docs.openstack.org/juno/install-guide/install/apt/content/ceilometer-swift.html<br />
** Kilo configuration: http://docs.openstack.org/kilo/install-guide/install/apt/content/ceilometer-swift.html<br />
<br />
== OpenStack Orchestration (Heat) ==<br />
<br />
=== Key New Features ===<br />
<br />
* Improved scaling using nested stacks<br />
** Heat now performs RPC calls for actions on any resource that is based on a template. This should help spread the load when dealing with large, complex stacks.<br />
* oslo versioned objects<br />
** The database layer now uses oslo versioned objects to aid in future upgrades. This will allow a newly upgraded heat-engine to use a database with an older schema. Note that this will not help with upgrading to Kilo itself.<br />
* New template functions<br />
** There is a new HOT template version, "2015-04-30", which includes two new functions: "digest" and "repeat".<br />
* Multiregion stacks<br />
** MORE DETAIL HERE<br />
* Software config signalling using swift<br />
** MORE DETAIL HERE<br />
* Triggering new software deployments from heatclient<br />
** MORE DETAIL HERE<br />
* stack snapshots<br />
** MORE DETAIL HERE<br />
* Access to Heat services<br />
** The admin now has similar access to service status as in other projects. This comes in the form of "heat-manage service-list" and is also available via Horizon. This feature reports the active heat-engines.<br />
* nova and neutron property constraints<br />
** MORE DETAIL HERE<br />
* stack hooks<br />
** MORE DETAIL HERE<br />
* New contributed resources<br />
** Mistral resources<br />
** Keystone resources, supported with a Keystone v3 server, for Project, Role, User and Group<br />
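The new "digest" and "repeat" template functions listed above can be sketched in a HOT template as follows. This is an illustrative example only: the parameter names, resource name, and hash algorithm are hypothetical, while the function forms follow the HOT specification for the 2015-04-30 template version.

```yaml
heat_template_version: 2015-04-30

parameters:
  ports:
    type: comma_delimited_list
    default: "80,443"
  secret:
    type: string

resources:
  web_security_group:
    type: OS::Neutron::SecurityGroup
    properties:
      rules:
        # "repeat" expands its template once per element of the list.
        repeat:
          for_each:
            <%port%>: { get_param: ports }
          template:
            protocol: tcp
            port_range_min: <%port%>
            port_range_max: <%port%>

outputs:
  checksum:
    # "digest" hashes a value with the named algorithm.
    value: { digest: [ sha512, { get_param: secret } ] }
```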
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
None yet<br />
<br />
== OpenStack Database service (Trove) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
None yet<br />
<br />
== OpenStack Data Processing service (Sahara) ==<br />
=== Key New Features ===<br />
* New plugins, their features and versions:<br />
** MAPR<br />
** Apache Storm<br />
** Apache Hadoop 2.6.0 was added; Apache Hadoop 2.4.1 was deprecated<br />
** New services were added to the CDH plugin, including HDFS, YARN, Spark, Oozie, HBase, ZooKeeper and others<br />
* Added indirect VM access for better utilization of floating IPs<br />
* Added event log support to have detailed info about provisioning progress<br />
* Optional default node group and cluster templates per plugin<br />
* Horizon updates:<br />
** Guided cluster creation and job execution<br />
** Filtering on search for objects<br />
* Editing of Node Group templates and Cluster templates implemented<br />
* Added Shell Job Type for clusters running Oozie<br />
* New Job Types endpoint to query list of the supported Job Types<br />
<br />
=== Known Issues ===<br />
-<br />
<br />
=== Upgrade Notes ===<br />
Details: http://docs.openstack.org/developer/sahara/userdoc/upgrade.guide.html#juno-kilo<br />
<br />
* Sahara now requires a policy.json configuration file.<br />
<br />
== OpenStack Bare Metal service (Ironic) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== State Machine ====<br />
Ironic now uses a formal model for the logical state of each node it manages.<ref name="states">[http://specs.openstack.org/openstack/ironic-specs/specs/kilo/new-ironic-state-machine.html#proposed-change]New Ironic State Machine</ref> This has enabled the addition of two new processes: '''cleaning''' and '''inspection'''.<br />
* Automatic disk erasure between tenants is now enabled by default. This may be extended to perform additional '''cleaning''' steps, such as re-applying firmware, resetting BIOS settings, etc.<ref name="cleaning">[http://docs.openstack.org/developer/ironic/deploy/cleaning.html]Node Cleaning</ref><br />
* Both in-band and out-of-band methods are available to '''inspect''' hardware. These methods may be used to update Node properties automatically.<ref name="inspect">[http://docs.openstack.org/developer/ironic/deploy/install-guide.html#hardware-inspection]Hardware Inspection</ref><br />
<br />
==== Version Headers ====<br />
The Ironic REST API expects a new ''X-OpenStack-Ironic-API-Version'' header to be passed with each HTTP[S] request. This header allows client and server to negotiate a mutually supported interface.<ref name="api-version">[http://specs.openstack.org/openstack/ironic-specs/specs/kilo/api-microversions.html]REST API "micro" versions </ref> In the absence of this header, the REST service will default to a compatibility mode and yield responses compatible with Juno clients. This mode, however, prevents access to most features introduced in Kilo.<br />
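The negotiation described above can be sketched as follows. This is an illustrative sketch only (the helper names are hypothetical, not python-ironicclient code): given the minimum and maximum microversions a server advertises, a client picks the highest version both sides support, or omits the header and accepts Juno-compatible behaviour.<br />

```python
# Illustrative sketch only -- these helpers are hypothetical, not
# python-ironicclient code. Given the minimum and maximum microversions
# a server advertises, pick the highest version the client also
# supports; with no overlap, send no header (Juno-compatible mode).

def parse_version(version):
    """Turn a string like '1.6' into a comparable tuple (1, 6)."""
    major, minor = version.split(".")
    return (int(major), int(minor))

def negotiate(client_max, server_min, server_max):
    """Return the value to send as X-OpenStack-Ironic-API-Version,
    or None if client and server share no supported version."""
    chosen = min(parse_version(client_max), parse_version(server_max))
    if chosen < parse_version(server_min):
        return None  # fall back: send no header, get Juno behaviour
    return "%d.%d" % chosen

# Client supports up to 1.4; server supports 1.1 through 1.6.
headers = {}
version = negotiate("1.4", "1.1", "1.6")
if version is not None:
    headers["X-OpenStack-Ironic-API-Version"] = version
print(headers)
```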
<br />
==== Hardware Driver Changes ====<br />
The following new drivers were added:<br />
* [http://docs.openstack.org/developer/ironic/drivers/amt.html AMT]<br />
* [http://docs.openstack.org/developer/ironic/deploy/drivers.html#irmc iRMC]<br />
* [http://docs.openstack.org/developer/ironic/drivers/vbox.html VirtualBox (testing driver only)]<br />
<br />
<br />
The following enhancements were made to existing drivers:<br />
* [http://docs.openstack.org/developer/ironic/deploy/install-guide.html#enabling-the-configuration-drive-configdrive Configdrives] may be used with the "agent" drivers in lieu of a metadata service, if desired.<br />
* SeaMicro driver supports serial console<br />
* [http://docs.openstack.org/developer/ironic/drivers/ilo.html#uefi-secure-boot-support iLO driver supports UEFI secure boot]<br />
* [http://docs.openstack.org/developer/ironic/drivers/ilo.html#hardware-inspection iLO driver supports out-of-band node inspection]<br />
* [http://docs.openstack.org/developer/ironic/drivers/ilo.html#ilo-node-cleaning iLO driver supports resetting iLO and BIOS during cleaning]<br />
<br />
<br />
Support for third-party and out-of-tree drivers is enhanced by the following changes:<br />
* Drivers may store their own "internal" information about Nodes.<br />
* Drivers may register their own periodic tasks to be run by the Conductor.<br />
* ''vendor_passthru'' methods now support additional HTTP methods (e.g., PUT and POST).<br />
* ''vendor_passthru'' methods are now discoverable in the REST API. See [http://docs.openstack.org/developer/ironic/dev/drivers.html#node-vendor-passthru node vendor passthru] and [http://docs.openstack.org/developer/ironic/dev/drivers.html#driver-vendor-passthru driver vendor passthru]<br />
<br />
==== Other Changes ====<br />
* [http://docs.openstack.org/developer/ironic/deploy/install-guide.html#logical-names Logical names] may be used to address Nodes, in addition to their canonical UUID. <br />
* For servers with varied local disks, [http://docs.openstack.org/developer/ironic/deploy/install-guide.html#specifying-the-disk-for-deployment ''hints''] may be supplied that affect which disk device the OS is provisioned to.<br />
* Support for fetching kernel, ramdisk, and instance images from HTTP[S] sources directly has been added to remove the dependency on Glance. [http://docs.openstack.org/developer/ironic/deploy/install-guide.html#using-ironic-as-a-standalone-service Using ironic as a standalone service]<br />
* Nodes may be placed into ''[http://docs.openstack.org/developer/ironic/deploy/install-guide.html#maintenance-mode maintenance mode]'' via REST API calls. An optional ''maintenance reason'' may be specified when doing so.<br />
<br />
=== Known Issues ===<br />
* '''Running more than one nova-compute process is not officially supported.'''<br />
** While Ironic does include a ClusteredComputeManager, which allows running more than one nova-compute process with Ironic, it should be considered experimental and has many known problems.<br />
* Drivers using the "agent" deploy mechanism do not support "rebuild --preserve-ephemeral"<br />
<br />
=== Upgrade Notes ===<br />
* IPMI Passwords are now obfuscated in REST API responses. This may be disabled by changing API policy settings.<br />
* The "agent" class of drivers now supports both whole-disk and partition-based images.<br />
* The driver_info parameters of "pxe_deploy_kernel" and "pxe_deploy_ramdisk" are deprecated in favour of "deploy_kernel" and "deploy_ramdisk". <br />
* Implementing a custom version of the vendor_passthru() method in drivers has been deprecated in favour of the new @passthru decorator.<br />
<br />
==== Juno to Kilo ====<br />
The recommended upgrade process is documented here:<br />
* http://docs.openstack.org/developer/ironic/deploy/upgrade-guide.html#upgrading-from-juno-to-kilo<br />
<br />
==== Upgrading from Icehouse "nova-baremetal" ====<br />
<br />
An upgrade from an Icehouse Nova installation using the "baremetal" driver directly to Kilo Ironic is untested and unsupported. Instead, please follow the following upgrade path:<br />
# Icehouse Nova "baremetal" -> Juno Nova "baremetal"<br />
# Juno Nova "baremetal" -> Juno Ironic<br />
# Juno Ironic -> Kilo Ironic<br />
<br />
Documentation for steps 1 and 2 is available at: https://wiki.openstack.org/wiki/Ironic/NovaBaremetalIronicMigration<br />
<br />
== OpenStack Documentation ==<br />
<br />
* New [http://docs.openstack.org docs.openstack.org] landing page and new web design for [http://docs.openstack.org/user-guide/ End User Guide] and [http://docs.openstack.org/user-guide-admin/ Admin User Guide]<br />
* First release of the [http://docs.openstack.org/networking-guide/ Networking Guide]<br />
* Migration to RST for [http://docs.openstack.org/user-guide/ End User Guide] and [http://docs.openstack.org/user-guide-admin/ Admin User Guide]<br />
* New specialty teams:<br />
** Install Guides<br />
** Networking Guide<br />
** High Availability Guide<br />
** User Guides (Admin and End User)<br />
* First App Tutorial sprint<br />
* Driver documentation clarification and connections</div>
Asalkeld
https://wiki.openstack.org/w/index.php?title=ReleaseNotes/Kilo&diff=78582
ReleaseNotes/Kilo
2015-04-29T21:18:06Z
<p>Asalkeld: /* OpenStack Orchestration (Heat) */</p>
<hr />
<div>= OpenStack 2015.1.0 (Kilo) Release Notes =<br />
<br />
<br />
{| style="color:#000000; border:solid 3px #A8A8A8; padding:2em; margin:2em 0; background-color:#FFFFFF; vertical-align:middle;"<br />
| style="padding:1em" | The Kilo release of OpenStack is dedicated to the loving memory of '''Chris Yeoh''', who left his family and us way too soon.<br />
|}<br />
<br />
<br />
<div style="column-count:3;-moz-column-count:3;-webkit-column-count:3"><br />
__TOC__<br />
</div><br />
<br />
<br />
== General Upgrade Notes ==<br />
<br />
* TBD<br />
<br />
== OpenStack Object Storage (Swift) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== Erasure Code (beta) ====<br />
Swift now supports an erasure-code (EC) storage policy type. This allows deployers to achieve very high durability with less raw capacity than is used in replicated storage. However, EC requires more CPU and network resources, so it is not a good fit for every use case. EC is great for storing large, infrequently accessed data in a single region.<br />
<br />
Swift's implementation of erasure codes is meant to be transparent to end users. There is no API difference between replicated storage and EC storage.<br />
<br />
To support erasure codes, Swift now depends on PyECLib and liberasurecode. liberasurecode is a pluggable library that allows for the actual EC algorithm to be implemented in a library of your choosing.<br />
<br />
Full docs are at http://swift.openstack.org/overview_erasure_code.html<br />
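As a toy illustration of the erasure-coding idea only (this is not PyECLib's API; real EC schemes such as Reed-Solomon can tolerate multiple lost fragments), a single XOR parity fragment lets any one lost data fragment be rebuilt from the survivors:<br />

```python
from functools import reduce

# Toy illustration of erasure coding. Swift's real EC support uses
# PyECLib/liberasurecode with pluggable algorithms; XOR parity here
# tolerates exactly one lost fragment.

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def make_fragments(data: bytes, k: int):
    """Split data into k equal data fragments plus one XOR parity."""
    assert len(data) % k == 0, "pad data to a multiple of k first"
    size = len(data) // k
    frags = [data[i * size:(i + 1) * size] for i in range(k)]
    return frags, reduce(xor, frags)

def rebuild(frags, parity, lost_index):
    """Recover a single lost data fragment from survivors + parity."""
    survivors = [f for i, f in enumerate(frags) if i != lost_index]
    return reduce(xor, survivors + [parity])

frags, parity = make_fragments(b"object-data!", k=4)
assert rebuild(frags, parity, lost_index=2) == frags[2]
```

The storage overhead is k+1 fragments for k fragments' worth of data, versus 3x for triple replication; this is the capacity saving the note above refers to.<br />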
<br />
==== Composite tokens ====<br />
Composite tokens allow other OpenStack services to store data in Swift on behalf of a client so that neither the client nor the service can update the data without both parties' consent.<br />
<br />
An example of this is a user requesting that Nova save a snapshot of a VM. Nova passes the request to Glance, and Glance writes the image to a Swift container as a set of objects. In this case, the user cannot modify the snapshot without also having a valid token from the service. Nor can the service update the data without a valid token from the user. But the data is still stored in the user's account in Swift, which makes accounting simpler.<br />
<br />
Full docs are at http://swift.openstack.org/overview_backing_store.html<br />
<br />
==== Data placement updates for smaller, unbalanceable clusters ====<br />
Swift's data placement now accounts for device weight. This allows operators to gradually add new zones and regions without immediately causing a large amount of data to be moved. Also, if a cluster is unbalanced (e.g., a two-zone cluster where one zone has twice the capacity of the other), Swift will more efficiently use the available space and warn when replicas are placed without enough dispersion in the cluster.<br />
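A sketch of weight-aware placement (an illustration only, not Swift's actual ring-builder algorithm): partitions are assigned to devices in proportion to device weight, so a zone added with a small initial weight receives only a small share of the data.<br />

```python
# Illustration of weight-proportional placement; Swift's ring builder
# is considerably more sophisticated (it also considers dispersion
# across regions, zones, and servers).

def assign_partitions(weights, num_parts):
    """Return {device: partition_count} proportional to device weight."""
    total = sum(weights.values())
    counts = {dev: int(num_parts * w / total) for dev, w in weights.items()}
    # Hand out any remainder (from rounding down) to the heaviest devices.
    leftover = num_parts - sum(counts.values())
    for dev in sorted(weights, key=weights.get, reverse=True)[:leftover]:
        counts[dev] += 1
    return counts

# Gradually adding a new zone: give it a small weight at first, so only
# a small fraction of partitions (and hence data) moves to it.
counts = assign_partitions({"z1-d1": 100, "z2-d1": 100, "z3-d1": 20}, 256)
print(counts)
```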
<br />
==== Global cluster replication improvements ====<br />
Replication between regions will now only move one replica per replication run. This gives the remote region a chance to replicate internally and thus avoids more data moving over the WAN.<br />
<br />
<br />
=== Known Issues ===<br />
* As a beta release, EC support is nearly feature-complete, but it lacks support for some features (such as multi-range reads) and has not had a full performance characterization. This feature relies on ssync for durability. Deployers are urged to do extensive testing and not deploy production data using an erasure code storage policy.<br />
<br />
=== Upgrade Notes ===<br />
As always, you can upgrade to this version of Swift with no end-user downtime.<br />
<br />
* In order to support erasure codes, Swift has a new dependency on PyECLib (and liberasurecode, transitively). Also, the minimum required version of eventlet has been raised.<br />
<br />
<br />
<br />
<br />
<br />
== OpenStack Compute (Nova) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== API v2.1 ====<br />
<br />
* We have the first release of the next generation of the Nova API, v2.1. The v2.1 API is designed to be backwards compatible with v2.0 with the addition of strong API validation. All changes to the API are discoverable via the advertised microversion. For more details see: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/api-microversions.html<br />
<br />
* For kilo, by default we are still using v2.0 API code to serve v2.0 API requests. It is hoped that in Liberty v2.1 will be used to serve requests for both v2.0 and v2.1.<br />
<br />
* The v2.0 API is now frozen; all new features will be added to the v2.1 API using the microversions mechanism. Microversion increments released in Kilo are:<br />
** Extending the keypair API to support for x509 certificates, to be used with Windows WinRM, is one of the first API features added as a microversion in the v2.1 API.<br />
** Exposing additional attributes in os-extended-server-attributes<br />
<br />
* python-novaclient does not yet have support for the v2.1 API<br />
<br />
* Policy enforcement for the Nova v2.1 API has been improved.<br />
** Policy is now enforced only at the API entry point.<br />
** Duplicated rules for a single API are no longer needed.<br />
** All v2.1 API policy rules use the 'os_compute_api' prefix, which distinguishes them from v2 API rules.<br />
** Previously, hard-coded permission checks at the database layer made parts of the Nova API non-configurable by policy; they always required an admin user. Some of these hard-coded checks have been removed from the v2.1 API, making its policy configurable. The remaining hard-coded checks will be removed in Liberty.<br />
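An illustrative policy.json fragment showing the new prefix (the specific rules and their default values below are examples, not the shipped Kilo policy file):<br />

```json
{
    "context_is_admin": "role:admin",
    "admin_or_owner": "is_admin:True or project_id:%(project_id)s",

    "os_compute_api:servers:index": "rule:admin_or_owner",
    "os_compute_api:os-admin-actions:reset_state": "rule:context_is_admin"
}
```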
<br />
==== Upgrade Support ====<br />
<br />
* We have reduced the data migrations that happen in the DB migration scripts; this now happens in a "lazy" way inside the DB objects code. There are nova-manage commands to help force migration of the data. For more details see: http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/flavor-from-sysmeta-to-blob.html<br />
<br />
* Change https://review.openstack.org/#/c/97946/ adds database migration 267, which scans for null instances.uuid records and will fail if any are found, since the migration ultimately needs to make instances.uuid non-nullable and adds a UniqueConstraint on that column. A helper script is provided to search for null instances.uuid records before running the database migrations. Before running 'nova-manage db sync', run the helper script 'nova-manage db null_instance_uuid_scan', which, by default, will just search and dump results without changing anything. Pass the --delete option to the null_instance_uuid_scan command to automatically remove any records where instances.uuid is null.<br />
<br />
==== Scheduler ====<br />
<br />
* A selection of performance optimisations<br />
* We are in the process of making structural changes to the scheduler that will help improve our ability to evolve and improve scheduling. This should not be visible from an end user perspective.<br />
<br />
==== Cells v2 ====<br />
<br />
* Some initial parts of Cells v2 support have been added, but this feature is not yet ready to use.<br />
* New 'nova-manage api_db sync' and 'nova-manage api_db version' commands for working with the new API database for cells; nothing is using this database yet, so it is not necessary to set it up.<br />
<br />
==== Compute Drivers ====<br />
<br />
===== Hyper-V =====<br />
<br />
* Support for generation 2 VMs: https://blueprints.launchpad.net/nova/+spec/hyper-v-generation-2-vms<br />
* Support for SMB based volumes, to sit along side existing iSCSI volume support: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/hyper-v-smbfs-volume-support.html<br />
* Support for x509 certificate based key pairs: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/keypair-x509-certificates.html<br />
* Host power actions now work with Hyper-V: https://blueprints.launchpad.net/nova/+spec/hyper-v-host-power-actions<br />
<br />
===== Libvirt (KVM) =====<br />
<br />
* NFV related features:<br />
** NUMA based scheduling: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/input-output-based-numa-scheduling.html<br />
** Pinning guest vCPUs: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/virt-driver-cpu-pinning.html<br />
** Large page support: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/virt-driver-large-pages.html<br />
* vhostuser VIF driver: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/libvirt_vif_vhostuser.html<br />
* Support for running KVM on IBM System z: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/libvirt-kvm-systemz.html<br />
* Support for parallels: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/pcs-support.html<br />
* Support for SMB based volumes: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/libvirt-smbfs-volume-support.html<br />
* Quiesce using QEMU guest agent: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/quiesced-image-snapshots-with-qemu-guest-agent.html<br />
* Quobyte volume support: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/quobyte-nova-driver.html<br />
* Support for QEMU iSCSI initiator: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/qemu-built-in-iscsi-initiator.html<br />
<br />
===== VMware =====<br />
<br />
* Support for Ephemeral disks: http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/vmware-ephemeral-disk-support.html<br />
* Support for vSAN: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/vmware-vsan-support.html<br />
* Support for OVA images: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/vmware-driver-ova-support.html<br />
* Support for SPBM based storage policies: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/vmware-spbm-support.html<br />
<br />
===== Ironic =====<br />
<br />
* Support to pass flavor capabilities to ironic: http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/pass-flavor-capabilities-to-ironic-virt-driver.html<br />
<br />
=== Known Issues ===<br />
<br />
* Evacuate recovery code has the potential to destroy data. On nova-compute startup, instances reported by the hypervisor are examined to see if they have moved (i.e. been evacuated) from the current host during the outage. If the determination is made that they were, then they are destroyed locally. This has the potential to choose incorrectly and destroy instances unexpectedly. On libvirt-like nodes, this can be triggered by changing the system hostname. On vmware-like nodes, this can be triggered by attempting to manage a single vcenter deployment from two different hosts (with different hostnames). This will be fixed properly in Liberty, but for now deployments that wish to disable this behavior as a preventive measure can set workarounds.destroy_after_evacuate=False. NOTE: This is not a regression and has been a flaw in the design of the evacuate feature since its introduction. There is no easy fix for this, hence this workaround to limit the potential for damage. The proposed fix in liberty is here: https://review.openstack.org/#/c/161444/.<br />
<br />
* The generated config examples may be missing some oslo-related configuration options<br />
<br />
=== Upgrade Notes ===<br />
<br />
Below are changes you should be aware of when upgrading. Where possible, the git commit hash is provided for you to find more information:<br />
<br />
* Neutron ports are no longer deleted after your server is deleted, if you created them outside of Nova: 1153a46738fc3ffff98a1df9d94b5a55fdd58777<br />
* EC2 API support has been deprecated, and is likely to be removed in a future release, see f098398a836e3671c49bb884b4a1a1988053f4b2<br />
* Websocket proxies need to be upgraded in a lockstep with the API nodes, as older API nodes will not be sending the access_url when authorizing console access, and newer proxy services (this commit and onward) would fail to authorize such requests 9621ccaf05900009d67cdadeb1aac27368114a61<br />
* After fully upgrading to kilo (i.e. all nodes are running kilo code), you should start a background migration of flavor information from its old home to its new home. Kilo conductor nodes will do this on the fly when necessary, but the rest of the idle data needs to be migrated in the background. This is critical to complete before the Liberty release, where support for the old location will be dropped. Use "nova-manage migrate-flavor-data" to perform this transition.<br />
* Due to the improvements in Nova v2.1 API policy enforcement, many changes were made to the v2.1 API policy. Because the v2.1 API was not released previously, these changes are not kept backwards-compatible. It is better to use the new policy sample configuration instead of the old one.<br />
* VMware rescue VM behaviour no longer creates a new VM and instead happens in place: cd1765459a24e52e1b933c8e05517fed75ac9d41<br />
* force_config_drive = always has been deprecated, and force_config_drive = True should be used instead: c12a78b35dc910fa97df888960ef2b9a64557254<br />
* If you are running Hyper-V and deployed code that was past this commit: b4d57ab65836460d0d9cb8889ec2e6c3986c0a9b but before this commit: c8e9f8e71de64273f10498c5ad959634bfe79975, you may have problems that need manual resolution; see: c8e9f8e71de64273f10498c5ad959634bfe79975<br />
* Changed the default value of: multi_instance_display_name_template see: 609b2df339785bff9e30a9d67d5c853562ae3344<br />
* Please use "nova-manage db null_instance_uuid_scan" to ensure the DB migrations will apply cleanly, see: c0ea53ce353684b48303fc59393930c3fa5ade58<br />
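Taken together, the upgrade notes above suggest an operator workflow along these lines (a sketch of the command sequence only; run against your own deployment and configuration):<br />

```
# Before running the Kilo schema migrations: scan for null
# instances.uuid records (read-only by default; add --delete
# to remove them automatically).
nova-manage db null_instance_uuid_scan

# Apply the Kilo database migrations.
nova-manage db sync

# After all nodes are running Kilo code: migrate flavor data in the
# background (must complete before upgrading to Liberty).
nova-manage migrate-flavor-data
```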
<br />
== OpenStack Image Service (Glance) ==<br />
=== Key New Features ===<br />
* TBD<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Dashboard (Horizon) ==<br />
<br />
=== Key New Features ===<br />
* TBD<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Identity (Keystone) ==<br />
<br />
=== Key New Features ===<br />
<br />
* Keystone now supports Hierarchical Multitenancy. Projects can be nested under other projects via the <code>parent_id</code> optional attribute when creating a project.<br />
** TBD - More info.<br />
* Fernet tokens - Keystone now provides another token format that allows tokens to be non-persistent (no longer stored in a database). The new token format promises improved scalability and performance of Keystone.<br />
* WebSSO - If using a trusted identity provider for federating users, Keystone now provides the ability for a user to authenticate with Horizon using the credentials of an existing IdP, through a Single Sign-On page.<br />
* Pluggable Assignment Model - TBD - More Info<br />
* Assignment / Resource Split - TBD - More Info<br />
* Extensions to Core functionality - The Keystone team is moving away from the concept of "extensions"; all previous extensions are now enabled by default. These now-core functions will be marked as Experimental or Stable (as assessed by the Keystone team). The previous extensions that are now enabled by default are: OS-FEDERATION, OS-OAUTH1, OS-ENDPOINT-POLICY and OS-EP-FILTER<br />
* (Experimental) Per-Domain Identity Backend Configuration can be stored in SQL - TBD More Info<br />
* [http://docs.openstack.org/developer/keystone/configure_federation.html#keystone-as-an-identity-provider-idp Keystone-to-Keystone Identity Federation] is now considered stable.<br />
** IDP Remote ID Registration - Keystone added the ability to associate many `Remote IDs` with a single Identity Provider. This helps in cases where many identity providers use a common mapping.<br />
** ECP Wrapped Assertions - In addition to generating a SAML assertion about a user, Keystone has also added the ability to create an ECP wrapped SAML assertion. This will create a more seamless integration for Keystone to Keystone clients.<br />
* Keystone now supports [http://docs.openstack.org/developer/keystone/extensions/openidc.html OpenID Connect] as a federated identity authentication mechanism.<br />
* Trusts now support redelegation. If allowed when the trust is created, a trustee can redelegate the roles from the trust via another trust.<br />
* It is now possible to request an explicitly unscoped token from Keystone.<br />
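For example, enabling the Fernet token provider described above is a keystone.conf change plus a one-time key setup. This is an illustrative fragment; verify the option value and command against the Kilo Keystone documentation for your deployment:<br />

```ini
# keystone.conf -- illustrative fragment. Fernet signing keys must
# first be created on each node with:
#   keystone-manage fernet_setup
[token]
provider = keystone.token.providers.fernet.Provider
```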
<br />
=== Known Issues ===<br />
<br />
* TBD<br />
<br />
=== Upgrade Notes ===<br />
<br />
* XML support in Keystone has been removed as of Kilo. When upgrading from Juno to Kilo, it is recommended that references to XML and XmlBodyMiddleware be removed from the [https://github.com/openstack/keystone/blob/master/etc/keystone-paste.ini Keystone Paste configuration]. This includes removing the XML middleware filters and their references from the public_api, admin_api, api_v3, public_version_api, admin_version_api and any other pipelines that may contain the XML filters.<br />
* [http://specs.openstack.org/openstack/openstack-specs/specs/no-downward-sql-migration.html SQL Schema Downgrades are no longer supported]. This change is the result of evaluation that downward SQL migrations are not well tested and become increasingly difficult to support with the volume of data-change that occurs in many of the migrations.<br />
* The following python libraries are now required: [https://pypi.python.org/pypi/cryptography cryptography], [https://pypi.python.org/pypi/msgpack-python msgpack-python], [https://pypi.python.org/pypi/pysaml2 pysaml2] and [https://pypi.python.org/pypi/oauthlib oauthlib]<br />
<br />
== OpenStack Network Service (Neutron) ==<br />
=== Key New Features ===<br />
* DVR now supports VLANs in addition to VXLAN/GRE<br />
* ML2 Hierarchical Port Binding<br />
* New LBaaS Version 2 API<br />
* Portsecurity support for the OVS ML2 Driver<br />
<br />
* New Plugins supported in Kilo include the following:<br />
** A10 Networks LBaaS V2 Driver<br />
** Brocade LBaaS V2 Driver<br />
** Brocade ML2 driver for MLX and ICX switches<br />
** Brocade L3 routing plugin for MLX switch<br />
** Brocade Vyatta vRouter L3 Plugin<br />
** Brocade Vyatta vRouter Firewall Driver<br />
** Brocade Vyatta vRouter VPN Driver<br />
** Cisco CSR VPNaaS Driver<br />
** Dragonflow SDN based Distributed Virtual Router L3 Plugin<br />
** Freescale FWaaS Driver<br />
** Intel Mcafee NGFW FWaaS Driver<br />
** IPSEC Strongswan VPNaaS Driver<br />
<br />
=== Known Issues ===<br />
* The Firewall-as-a-Service project is still marked as experimental for the Kilo release.<br />
* Bug [https://bugs.launchpad.net/neutron/+bug/1438819 1438819]<br />
** When a new subnet is created on an external network, all existing routers with gateways on the network will get a new address allocated from it. For IPv4 networks, this could consume the entire subnet for router gateway ports.<br />
<br />
=== Upgrade Notes ===<br />
<br />
Since Havana, Neutron no longer supports an explicit lease database (https://bugs.launchpad.net/bugs/1202392). This left dead code, including an unused environment variable. In order to remove the dead code (https://review.openstack.org/#/c/152398/), a change to the dhcp.filter is required, so that the line:<br />
<br />
'''dnsmasq: EnvFilter, dnsmasq, root, NEUTRON_NETWORK_ID='''<br />
<br />
is replaced by:<br />
<br />
'''dnsmasq: CommandFilter, dnsmasq, root'''<br />
<br />
After advanced services were split into separate packages and received their own service configuration files (specifically, etc/neutron/neutron_lbaas.conf, etc/neutron/neutron_fwaas.conf and etc/neutron/neutron_vpnaas.conf), active service provider configuration can be different after upgrade (specifically, default load balancer (haproxy) and vpn (openswan) providers can be enabled for you even though you previously disabled them in neutron.conf). Please make sure you review configuration after upgrade so that it reflects the desired state of service providers.<br />
<br />
Note: this will have no effect if the related service plugin is not loaded in neutron.conf.<br />
<br />
<br />
* The default value of api_workers is now equal to the number of CPUs in the host. If you currently use the default, ensure you set api_workers to a reasonable number for your installation. (https://review.openstack.org/#/c/140493/)<br />
* The neutron.allow_duplicate_networks config option is deprecated in Kilo and will be removed in Liberty where the default behavior will be to just allow multiple ports attached to an instance on the same network in Neutron. (https://review.openstack.org/163581)<br />
* The linuxbridge agent now enables VXLAN by default (https://review.openstack.org/160826)<br />
* neutron-ns-metadata-proxy can now be run as non-root (https://review.openstack.org/147437)<br />
<br />
=== Other Notes (Deprecation/EOL etc) ===<br />
<br />
* Deprecation<br />
** Brocade Classic plugin for Brocade's VDX/VCS series of hardware switches will be deprecated in the L-Release. The functionality provided by this plugin is now addressed by the ML2 Driver available for the VDX series of hardware. The plugin is slated for removal after this release cycle.<br />
<br />
<br />
== OpenStack Block Storage (Cinder) ==<br />
=== Key New Features ===<br />
* From this point forward any new database schema upgrades will not require restarting Cinder services right away. The services are now independent of schema upgrades. This is part one of Cinder supporting rolling upgrades!<br />
* Ability to add/remove volumes from an existing consistency group. [http://docs.openstack.org/admin-guide-cloud/content/consistency-groups.html Read docs for more info].<br />
* Ability to create a consistency group from an existing consistency group snapshot. [http://docs.openstack.org/admin-guide-cloud/content/consistency-groups.html Read docs for more info].<br />
* Create more fine tuned filters/weighers to set how the scheduler will choose a volume backend. [http://docs.openstack.org/admin-guide-cloud/content/driver_filter_weighing.html Read the docs for more info].<br />
* Encrypted volumes can now be backed up using the Cinder backup service. [http://docs.openstack.org/admin-guide-cloud/content/volume-backup-restore.html Read the docs for more info].<br />
* Ability to create private volume types. This is useful when you want to make a volume type available only to a specific tenant, or to test a type before making it available to your cloud. To do so, use ''cinder type-create <name> --is-public''.<br />
* Oversubscription with thin provision is configurable. [http://docs.openstack.org/admin-guide-cloud/content/over_subscription.html Read docs for more info].<br />
* Ability to add descriptions to volume types. To do so use ''cinder type-create <name> <description>''<br />
* Cinder now can return multiple iSCSI paths information so that the connector can attach volumes even when the primary path is down ([https://review.openstack.org/#/c/134681/ when connector's multipath feature is enabled] or [https://review.openstack.org/#/c/140877/ not enabled]).<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
* The 'host' config option for multiple-storage backends in cinder.conf is renamed to 'backend_host' in order to avoid a naming conflict with the 'host' to locate redis. If you use this option, please ensure your configuration files are updated.<br />
<br />
== OpenStack Telemetry (Ceilometer) ==<br />
=== Key New Features ===<br />
* Support for adding jitter to polling cycles to ensure pollsters are not querying a service's API at the same time<br />
* Ceilometer API RBAC support<br />
* Improved Event support:<br />
** Multi-pipeline support to enable unique processing and publishing of events<br />
** Enabled ability to capture raw notification messages for auditing and postmortem analysis<br />
** Support for persisting events into ElasticSearch<br />
** Publishing support to database, http, file, kafka and oslo.messaging supported message queues<br />
** Option to split off the events persistence into a separate database<br />
** Telemetry now supports collecting and storing all the event type meters as events. A new option, ''disable_non_metric_meters'', was added to the configuration in order to provide the possibility to turn off storing these events as samples. For further information please see the [http://docs.openstack.org/trunk/config-reference/content/ch_configuring-openstack-telemetry.html Telemetry Configuration Reference]<br />
** The Administrator Guide in OpenStack Manuals was updated with a new [http://docs.openstack.org/admin-guide-cloud/content/section_telemetry-events.html Events section], where you can find further information about this functionality.<br />
* Improved pipeline publishing support:<br />
** Support to publish events and samples to Kafka or HTTP targets<br />
** Publish data to multiple queues<br />
* Additional meters<br />
** memory and disk meters for Hyper-V<br />
** disk meters for LibVirt<br />
** power and thermal related IPMI meters, more meters from NodeManager<br />
** ability to meter Ceph<br />
* IPv6 support enabled in Ceilometer udp publisher and collector<br />
* [http://launchpad.net/gnocchi Gnocchi] dispatch support for ceilometer-collector<br />
* Self-disabled pollster mechanism<br />
<br />
=== Known Issues ===<br />
* TBD<br />
<br />
=== Upgrade Notes ===<br />
* Deprecated meters:<br />
** The instance:<flavor> meter is deprecated in the Kilo release. In order to retrieve samples or statistics based on flavor, you can use the following queries:<br />
*** statistics: ''ceilometer statistics -m instance -g resource_metadata.instance_type''<br />
*** samples: ''ceilometer sample-list -m instance -q metadata.instance_type=<value>''<br />
* Middleware used to meter Swift was previously packaged in Ceilometer and is now deprecated. It is now separated into its own library: ceilometermiddleware.<br />
** Juno configuration: http://docs.openstack.org/juno/install-guide/install/apt/content/ceilometer-swift.html<br />
** Kilo configuration: http://docs.openstack.org/kilo/install-guide/install/apt/content/ceilometer-swift.html<br />
<br />
== OpenStack Orchestration (Heat) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== Improved scaling using nested stacks ====<br />
* Heat now makes RPC calls for actions on any resource that is based on a template. This should help to spread the load when dealing with large, complex stacks.<br />
<br />
==== oslo versioned objects ====<br />
* The database layer now uses oslo versioned objects to aid future upgrades. This will allow a newly upgraded heat-engine to use a database with an older schema. Note that this does not help with the upgrade to Kilo itself.<br />
<br />
==== new template functions ====<br />
* There is a new HOT template version "2015-04-30" which includes two new functions: "digest" and "repeat".<br />
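As an illustrative sketch only (the resource type, parameter names, and the <%port%> placeholder below are assumptions drawn from typical HOT usage, not from these notes), "repeat" and "digest" might be used like this:

```yaml
heat_template_version: 2015-04-30

parameters:
  ports:
    type: comma_delimited_list
    default: "80,443"
  secret:
    type: string
    default: example

resources:
  web_secgroup:
    # hypothetical security group whose rules are built with "repeat"
    type: OS::Neutron::SecurityGroup
    properties:
      rules:
        repeat:
          # expand the template once per element of the "ports" list
          for_each:
            <%port%>: { get_param: ports }
          template:
            protocol: tcp
            port_range_min: <%port%>
            port_range_max: <%port%>

outputs:
  secret_hash:
    # "digest" computes a checksum of a value using the named algorithm
    value: { digest: [ "sha512", { get_param: secret } ] }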
<br />
==== multiregion stacks ====<br />
* TBD<br />
<br />
==== software config signalling using swift ====<br />
* TBD<br />
==== triggering new software deployments from heatclient ====<br />
* TBD<br />
==== stack snapshots ====<br />
* TBD<br />
<br />
==== Access to Heat services ====<br />
* Administrators now have access to service status similar to other projects, in the form of "heat-manage service-list" and via Horizon. This feature reports the active heat-engine processes.<br />
<br />
==== nova and neutron property constraints ====<br />
==== stack hooks ====<br />
<br />
==== New contributed resources ====<br />
* Mistral resources<br />
* Keystone resources (Project, Role, User and Group), supported with a Keystone v3 server<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
None yet<br />
<br />
== OpenStack Database service (Trove) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
None yet<br />
<br />
== OpenStack Data Processing service (Sahara) ==<br />
=== Key New Features ===<br />
* New plugins, their features and versions:<br />
** MAPR<br />
** Apache Storm<br />
** Apache Hadoop 2.6.0 was added; Apache Hadoop 2.4.1 is deprecated<br />
** New services added to the CDH plugin, including HDFS, YARN, Spark, Oozie, HBase, ZooKeeper and others<br />
* Added indirect VM access for better utilization of floating IPs<br />
* Added event log support to have detailed info about provisioning progress<br />
* Optional default node group and cluster templates per plugin<br />
* Horizon updates:<br />
** Guided cluster creation and job execution<br />
** Filtering on search for objects<br />
* Editing of Node Group templates and Cluster templates implemented<br />
* Added Shell Job Type for clusters running Oozie<br />
* New Job Types endpoint to query the list of supported job types<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
Details: http://docs.openstack.org/developer/sahara/userdoc/upgrade.guide.html#juno-kilo<br />
<br />
* Sahara now requires a policy.json configuration file.<br />
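For illustration only — the single rule below is an assumption (an empty oslo.policy rule means "allow everyone"), not the file Sahara ships; start from the policy.json included in the Sahara distribution:

```json
{
    "default": ""
}
```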
<br />
== OpenStack Bare Metal service (Ironic) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== State Machine ====<br />
Ironic now uses a formal model for the logical state of each node it manages.<ref name="states">[http://specs.openstack.org/openstack/ironic-specs/specs/kilo/new-ironic-state-machine.html#proposed-change]New Ironic State Machine</ref> This has enabled the addition of two new processes: '''cleaning''' and '''inspection'''.<br />
* Automatic disk erasure between tenants is now enabled by default. This may be extended to perform additional '''cleaning''' steps, such as re-applying firmware, resetting BIOS settings, etc.<ref name="cleaning">[http://docs.openstack.org/developer/ironic/deploy/cleaning.html]Node Cleaning</ref><br />
* Both in-band and out-of-band methods are available to '''inspect''' hardware. These methods may be used to update Node properties automatically.<ref name="inspect">[http://docs.openstack.org/developer/ironic/deploy/install-guide.html#hardware-inspection]Hardware Inspection</ref><br />
<br />
==== Version Headers ====<br />
The Ironic REST API expects a new ''X-OpenStack-Ironic-API-Version'' header to be passed with each HTTP[S] request. This header allows the client and server to negotiate a mutually supported interface.<ref name="api-version">[http://specs.openstack.org/openstack/ironic-specs/specs/kilo/api-microversions.html]REST API "micro" versions </ref> In the absence of this header, the REST service defaults to a compatibility mode and yields responses compatible with Juno clients. This mode, however, prevents access to most features introduced in Kilo.<br />
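As a sketch, a request carrying the header might look like the following (the host, port, token placeholder, and version value "1.6" are all illustrative assumptions, not values from these notes):

```http
GET /v1/nodes HTTP/1.1
Host: ironic.example.com:6385
Accept: application/json
X-Auth-Token: <token>
X-OpenStack-Ironic-API-Version: 1.6
```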
<br />
==== Hardware Driver Changes ====<br />
The following new drivers were added:<br />
* [http://docs.openstack.org/developer/ironic/drivers/amt.html AMT]<br />
* [http://docs.openstack.org/developer/ironic/deploy/drivers.html#irmc iRMC]<br />
* [http://docs.openstack.org/developer/ironic/drivers/vbox.html VirtualBox (testing driver only)]<br />
<br />
<br />
The following enhancements were made to existing drivers:<br />
* [http://docs.openstack.org/developer/ironic/deploy/install-guide.html#enabling-the-configuration-drive-configdrive Configdrives] may be used with the "agent" drivers in lieu of a metadata service, if desired.<br />
* SeaMicro driver supports serial console<br />
* [http://docs.openstack.org/developer/ironic/drivers/ilo.html#uefi-secure-boot-support iLO driver supports UEFI secure boot]<br />
* [http://docs.openstack.org/developer/ironic/drivers/ilo.html#hardware-inspection iLO driver supports out-of-band node inspection]<br />
* [http://docs.openstack.org/developer/ironic/drivers/ilo.html#ilo-node-cleaning iLO driver supports resetting ilo and bios during cleaning]<br />
<br />
<br />
Support for third-party and out-of-tree drivers is enhanced by the following changes:<br />
* Drivers may store their own "internal" information about Nodes.<br />
* Drivers may register their own periodic tasks to be run by the Conductor.<br />
* ''vendor_passthru'' methods now support additional HTTP methods (e.g., PUT and POST).<br />
* ''vendor_passthru'' methods are now discoverable in the REST API. See [http://docs.openstack.org/developer/ironic/dev/drivers.html#node-vendor-passthru node vendor passthru] and [http://docs.openstack.org/developer/ironic/dev/drivers.html#driver-vendor-passthru driver vendor passthru]<br />
<br />
==== Other Changes ====<br />
* [http://docs.openstack.org/developer/ironic/deploy/install-guide.html#logical-names Logical names] may be used to address Nodes, in addition to their canonical UUID. <br />
* For servers with varied local disks, [http://docs.openstack.org/developer/ironic/deploy/install-guide.html#specifying-the-disk-for-deployment ''hints''] may be supplied that affect which disk device the OS is provisioned to.<br />
* Support for fetching kernel, ramdisk, and instance images from HTTP[S] sources directly has been added to remove the dependency on Glance. [http://docs.openstack.org/developer/ironic/deploy/install-guide.html#using-ironic-as-a-standalone-service Using ironic as a standalone service]<br />
* Nodes may be placed into ''[http://docs.openstack.org/developer/ironic/deploy/install-guide.html#maintenance-mode maintenance mode]'' via REST API calls. An optional ''maintenance reason'' may be specified when doing so.<br />
<br />
=== Known Issues ===<br />
* '''Running more than one nova-compute process is not officially supported.'''<br />
** While Ironic does include a ClusteredComputeManager, which allows running more than one nova-compute process with Ironic, it should be considered experimental and has many known problems.<br />
* Drivers using the "agent" deploy mechanism do not support "rebuild --preserve-ephemeral"<br />
<br />
=== Upgrade Notes ===<br />
* IPMI Passwords are now obfuscated in REST API responses. This may be disabled by changing API policy settings.<br />
* The "agent" class of drivers now support both whole-disk and partition based images.<br />
* The driver_info parameters of "pxe_deploy_kernel" and "pxe_deploy_ramdisk" are deprecated in favour of "deploy_kernel" and "deploy_ramdisk". <br />
* Implementing a custom vendor_passthru() method in a driver is now deprecated in favour of the new @passthru decorator.<br />
<br />
==== Juno to Kilo ====<br />
The recommended upgrade process is documented here:<br />
* http://docs.openstack.org/developer/ironic/deploy/upgrade-guide.html#upgrading-from-juno-to-kilo<br />
<br />
==== Upgrading from Icehouse "nova-baremetal" ====<br />
<br />
An upgrade from an Icehouse Nova installation using the "baremetal" driver directly to Kilo Ironic is untested and unsupported. Instead, please follow this upgrade path:<br />
# Icehouse Nova "baremetal" -> Juno Nova "baremetal"<br />
# Juno Nova "baremetal" -> Juno Ironic<br />
# Juno Ironic -> Kilo Ironic<br />
<br />
Documentation for steps 1 and 2 is available at: https://wiki.openstack.org/wiki/Ironic/NovaBaremetalIronicMigration<br />
<br />
== OpenStack Documentation ==<br />
<br />
* New [http://docs.openstack.org docs.openstack.org] landing page and new web design for [http://docs.openstack.org/user-guide/ End User Guide] and [http://docs.openstack.org/user-guide-admin/ Admin User Guide]<br />
* First release of the [http://docs.openstack.org/networking-guide/ Networking Guide]<br />
* Migration to RST for [http://docs.openstack.org/user-guide/ End User Guide] and [http://docs.openstack.org/user-guide-admin/ Admin User Guide]<br />
* New specialty teams:<br />
** Install Guides<br />
** Networking Guide<br />
** High Availability Guide<br />
** User Guides (Admin and End User)<br />
* First App Tutorial sprint<br />
* Driver documentation clarification and connections</div>Asalkeldhttps://wiki.openstack.org/w/index.php?title=Meetings/HeatAgenda&diff=78476Meetings/HeatAgenda2015-04-29T06:18:29Z<p>Asalkeld: </p>
<hr />
<div><br />
= Weekly Heat (Orchestration) meeting =<br />
The [https://launchpad.net/heat heat] Orchestration project (see also [https://wiki.openstack.org/wiki/Heat wiki]) team holds a meeting in <code><nowiki>#openstack-meeting</nowiki></code> at alternating times:<br />
* Wednesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=20&min=0&sec=0 2000 UTC]<br />
* Wednesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=12&min=0&sec=0 1200 UTC]<br />
<br />
Everyone is welcome, feel free to add topics before or at the beginning of meetings.<br />
<br />
=== Agenda (2015-04-29 1200 UTC) ===<br />
* Adding items to the agenda<br />
* Any critical bugs to stop rc2 becoming the release?<br />
* open discussion<br />
<br />
=== Agenda (2015-04-22 2000 UTC) ===<br />
* Adding items to the agenda<br />
* Any critical bugs (rc2)<br />
* gate status<br />
* open discussion<br />
<br />
=== Agenda (2015-04-15 1200 UTC) ===<br />
* Adding items to the agenda<br />
* Any critical bugs (rc2 potential)<br />
* open discussion<br />
<br />
=== Agenda (2015-04-08 2000 UTC) ===<br />
* Adding items to the agenda<br />
* Any critical bugs (rc2 potential)<br />
* open discussion<br />
<br />
=== Agenda (2015-04-01 1200 UTC) ===<br />
* Adding items to the agenda<br />
* rc1 status https://launchpad.net/heat/+milestone/kilo-rc1<br />
* https://etherpad.openstack.org/p/liberty-heat-sessions<br />
* open discussion<br />
<br />
=== Agenda (2015-03-25 2000 UTC) ===<br />
* Adding items to the agenda<br />
* rc1 status https://launchpad.net/heat/+milestone/kilo-rc1<br />
* make heat_integration tests more independent from Heat tree (pas-ha)<br />
* Critical bugs<br />
<br />
== Meeting minutes ==<br />
* [http://eavesdrop.openstack.org/meetings/heat/2015/ 2015 Heat meeting archive]<br />
* [http://eavesdrop.openstack.org/meetings/heat/2014/ 2014 Heat meeting archive]</div>Asalkeldhttps://wiki.openstack.org/w/index.php?title=Meetings/HeatAgenda&diff=76692Meetings/HeatAgenda2015-04-01T09:13:24Z<p>Asalkeld: </p>
<hr />
<div><br />
= Weekly Heat (Orchestration) meeting =<br />
The [https://launchpad.net/heat heat] Orchestration project (see also [https://wiki.openstack.org/wiki/Heat wiki]) team holds a meeting in <code><nowiki>#openstack-meeting</nowiki></code> at alternating times:<br />
* Wednesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=20&min=0&sec=0 2000 UTC]<br />
* Wednesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=12&min=0&sec=0 1200 UTC]<br />
<br />
Everyone is welcome, feel free to add topics before or at the beginning of meetings.<br />
<br />
=== Agenda (2015-04-01 1200 UTC) ===<br />
* Adding items to the agenda<br />
* rc1 status https://launchpad.net/heat/+milestone/kilo-rc1<br />
* https://etherpad.openstack.org/p/liberty-heat-sessions<br />
* open discussion<br />
<br />
=== Agenda (2015-03-25 2000 UTC) ===<br />
* Adding items to the agenda<br />
* rc1 status https://launchpad.net/heat/+milestone/kilo-rc1<br />
* make heat_integration tests more independent from Heat tree (pas-ha)<br />
* Critical bugs<br />
<br />
=== Agenda (2015-03-18 1200 UTC) ===<br />
* Adding items to the agenda<br />
* https://etherpad.openstack.org/p/heat-kilo-releasenotes<br />
* Last day before FF (highlight what needs to be done)<br />
* Any urgent bugs needed before rc1<br />
<br />
=== Agenda (2015-03-11 2000 UTC) ===<br />
* Adding items to the agenda<br />
* https://wiki.openstack.org/wiki/Kilo_Release_Schedule (March 19 cut off for features), run though what we can get in.<br />
* Thoughts on spec for balancing scaling groups across AZs https://review.openstack.org/#/c/105907/<br />
* Documentation options for stack lifecycle scheduler hints https://review.openstack.org/#/c/130294/<br />
* Work to get WSGI services runnable inside Apache/nginx (skraynev)<br />
<br />
=== Agenda (2015-03-04 1200 UTC) ===<br />
* Adding items to the agenda<br />
* update from cross project meeting (asyncio/threads/goless)<br />
* update from the release meeting (blueprint status, Kilo-3)<br />
<br />
=== Agenda (2015-02-25 2000 UTC) ===<br />
* Adding items to the agenda<br />
* Vancouver Design Summit space needs (provisional are: 5 fishbowl/ 10 work/ 1 friday): https://docs.google.com/spreadsheets/d/14pryalH3rVVQGHdyE3QeeZ3eDrs_1KfqfykxSTvrfyI/edit?usp=sharing<br />
* Heat Mission statement: https://review.openstack.org/#/c/154049/<br />
* FWI: https://wiki.openstack.org/wiki/Release_Cycle_Management/Liberty_Tracking<br />
* Short question about increasing nested depth default value for using it in Sahara.<br />
* Mistral resources in Heat.<br />
<br />
=== Agenda (2015-02-18 1200 UTC) ===<br />
* Adding items to the agenda<br />
* Critical bug review<br />
* blueprint reviews (let's try to prioritize)<br />
<br />
== Meeting minutes ==<br />
* [http://eavesdrop.openstack.org/meetings/heat/2015/ 2015 Heat meeting archive]<br />
* [http://eavesdrop.openstack.org/meetings/heat/2014/ 2014 Heat meeting archive]</div>Asalkeldhttps://wiki.openstack.org/w/index.php?title=Design_Summit/Planning&diff=76668Design Summit/Planning2015-03-31T21:18:13Z<p>Asalkeld: /* Topic proposal, discussion and selection */</p>
<hr />
<div>== Topic proposal, discussion and selection ==<br />
<br />
* QA - https://etherpad.openstack.org/p/liberty-qa-summit-topics<br />
* Neutron - https://etherpad.openstack.org/p/liberty-neutron-summit-topics<br />
* Nova - https://etherpad.openstack.org/p/liberty-nova-summit-ideas<br />
* Keystone - https://etherpad.openstack.org/p/Keystone-liberty-summit-brainstorm (yep we didn't follow the naming convention)<br />
* Heat - https://etherpad.openstack.org/p/liberty-heat-sessions</div>Asalkeldhttps://wiki.openstack.org/w/index.php?title=Kolla/PTL_Elections_March_2015&diff=75802Kolla/PTL Elections March 20152015-03-18T00:05:27Z<p>Asalkeld: /* PTL */</p>
<hr />
<div>=== Official ===<br />
* Angus Salkeld (asalkeld)<br />
<br />
=== Election system ===<br />
Elections will be held using CIVS and a Condorcet algorithm (Schulze/Beatpath/CSSD variant). Any tie will be broken using [[Governance/TieBreaking]]. If there is only one candidate, no poll will be held and the election will conclude March 17, 2015 05:59 UTC.<br />
<br />
=== Timeline ===<br />
* till 05:59 UTC March 17, 2015: Open candidacy to PTL positions<br />
* March 17, 2015 - 1300 UTC March 24, 2015: PTL elections<br />
<br />
=== Elected position ===<br />
The Kolla project must elect a PTL. PTL will be elected for the Liberty cycle.<br />
<br />
=== Electorate ===<br />
<br />
In order to be an eligible candidate (and be allowed to vote) in the Kolla PTL election, you need to have contributed an accepted patch to the Kolla repo during the Juno-Kilo timeframe.<br />
<br />
Only the kolla project on stackforge is counted.<br />
<br />
=== Candidates ===<br />
<br />
Any member of an election electorate can propose his/her candidacy for the same election. No nomination is required. They can do so by sending an email to the openstack-dev@lists.openstack.org mailing-list, which the subject: "[Kolla] PTL candidacy". The email can include a description of the candidate platform. The candidacy is then confirmed by one of the election officials, after verification of the electorate status of the candidate.<br />
<br />
Confirmed candidates for Kolla Liberty PTL Elections (alphabetically by last name):<br />
<br />
* [http://osdir.com/ml/openstack-dev/2015-03/msg00908.html Steven Dake]<br />
<br />
=== PTL ===<br />
<br />
* Steven Dake<br />
<br />
=== Links to Results ===<br />
<br />
* <br />
<br />
=== Useful links ===<br />
<br />
[[Election_Officiating_Guidelines]]</div>Asalkeldhttps://wiki.openstack.org/w/index.php?title=ReleaseNotes/Kilo&diff=75723ReleaseNotes/Kilo2015-03-17T10:53:14Z<p>Asalkeld: /* OpenStack Orchestration (Heat) */</p>
<hr />
<div>{| style="color:#000000; border:solid 1px #A8A8A8; padding:0.5em; margin:0.5em 0; background-color:#FFFFFF;font-size:95%; vertical-align:middle;"<br />
| style="padding:1em;width: 40px" | [[Image:Warning.svg|40px]]<br />
| '''Release Under Development'''<br />
This release of OpenStack is under development and has yet to be completed. It will be released on April 30, 2015<br />
<br />
The information on this page may not accurately reflect the state of release at the current point in time.<br />
|}<br />
<br />
= OpenStack 2015.1 (Kilo) Release Notes =<br />
<br />
<div style="column-count:3;-moz-column-count:3;-webkit-column-count:3"><br />
__TOC__<br />
</div><br />
<br />
== General Upgrade Notes ==<br />
<br />
* TBD<br />
<br />
== OpenStack Object Storage (Swift) ==<br />
<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Compute (Nova) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== Upgrade Support ====<br />
<br />
* TBD<br />
<br />
==== Cells v2 ====<br />
* TBD<br />
* New 'nova-manage api_db sync' and 'nova-manage api_db version' commands for working with the new API database for cells.<br />
<br />
==== Compute Drivers ====<br />
<br />
===== Hyper-V =====<br />
<br />
* TBD<br />
<br />
===== Libvirt (KVM) =====<br />
<br />
* TBD<br />
<br />
===== VMware =====<br />
<br />
* TBD<br />
<br />
===== XenServer =====<br />
<br />
* TBD<br />
<br />
===== Ironic =====<br />
<br />
* Add config drive support ( https://review.openstack.org/#/c/144792/ )<br />
<br />
==== API ====<br />
<br />
* TBD<br />
<br />
==== Scheduler ====<br />
<br />
* TBD<br />
<br />
==== Other Features ====<br />
<br />
* TBD<br />
<br />
=== Known Issues ===<br />
<br />
* Evacuate recovery code has the potential to destroy data. On nova-compute startup, instances reported by the hypervisor are examined to see whether they moved (i.e. were evacuated) away from the current host during the outage. If the code determines that they were, they are destroyed locally. This determination can be made incorrectly, destroying instances unexpectedly. On libvirt-based nodes this can be triggered by changing the system hostname; on VMware-based nodes, by attempting to manage a single vCenter deployment from two different hosts (with different hostnames). This will be fixed properly in Liberty, but for now deployments that wish to disable this behavior as a preventive measure can set workarounds.destroy_after_evacuate=False.<br />
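As a sketch, the preventive measure would be set in nova.conf like this (option group and name as given above; the rest is illustrative):<br />

```ini
[workarounds]
# Disable the evacuate-recovery cleanup that destroys instances it
# believes were evacuated away from this host.
destroy_after_evacuate = False
```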
<br />
=== Upgrade Notes ===<br />
<br />
* TBD<br />
<br />
== OpenStack Image Service (Glance) ==<br />
=== Key New Features ===<br />
* TBD<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Dashboard (Horizon) ==<br />
<br />
=== Key New Features ===<br />
* TBD<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Identity (Keystone) ==<br />
<br />
=== Key New Features ===<br />
<br />
* TBD<br />
<br />
=== Known Issues ===<br />
<br />
* TBD<br />
<br />
=== Upgrade Notes ===<br />
<br />
* XML support in Keystone has been removed as of Kilo. When upgrading from Juno to Kilo, it is recommended that references to XML and XmlBodyMiddleware be removed from the [https://github.com/openstack/keystone/blob/master/etc/keystone-paste.ini Keystone Paste configuration]. This includes removing the XML middleware filters and their references from the public_api, admin_api, api_v3, public_version_api, admin_version_api and any other pipelines that may contain the XML filters.<br />
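For illustration, the entries to delete in keystone-paste.ini look like the following (the filter name follows the Juno sample file; check your own configuration for the exact names and pipelines):<br />

```ini
# Filter sections like this one are removed entirely:
[filter:xml_body]
paste.filter_factory = keystone.middleware:XmlBodyMiddleware.factory

# ...and the filter name is dropped from each pipeline that lists it,
# e.g. (Juno):
#   pipeline = sizelimit url_normalize ... xml_body json_body ... public_service
# becomes (Kilo):
#   pipeline = sizelimit url_normalize ... json_body ... public_service
```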
<br />
== OpenStack Network Service (Neutron) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet.<br />
<br />
=== Upgrade Notes ===<br />
<br />
Since Havana, Neutron has not supported an explicit lease database (https://bugs.launchpad.net/bugs/1202392). This left dead code, including an unused environment variable. Removing the dead code (https://review.openstack.org/#/c/152398/) requires a change to the dhcp filter, so that the line:<br />
<br />
'''dnsmasq: EnvFilter, dnsmasq, root, NEUTRON_NETWORK_ID='''<br />
<br />
is replaced by:<br />
<br />
'''dnsmasq: CommandFilter, dnsmasq, root'''<br />
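In context, the change lands in the [Filters] section of the rootwrap dhcp filter file (commonly etc/neutron/rootwrap.d/dhcp.filters; the path may vary by distribution):<br />

```ini
[Filters]
# Old entry (Juno and earlier) - remove:
#dnsmasq: EnvFilter, dnsmasq, root, NEUTRON_NETWORK_ID=
# New entry (Kilo):
dnsmasq: CommandFilter, dnsmasq, root
```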
<br />
After the advanced services were split into separate packages with their own service configuration files (specifically etc/neutron/neutron_lbaas.conf, etc/neutron/neutron_fwaas.conf and etc/neutron/neutron_vpnaas.conf), the active service provider configuration may differ after an upgrade (specifically, the default load balancer (haproxy) and VPN (openswan) providers can be enabled even though you previously disabled them in neutron.conf). Please review your configuration after upgrading to ensure it reflects the desired set of service providers.<br />
<br />
Note: this will have no effect if the related service plugin is not loaded in neutron.conf.<br />
<br />
<br />
* The default value of api_workers is now equal to the number of CPUs in the host. If you currently use the default, ensure you set api_workers to a reasonable number for your installation. (https://review.openstack.org/#/c/140493/)<br />
* The neutron.allow_duplicate_networks config option is deprecated in Kilo and will be removed in Liberty where the default behavior will be to just allow multiple ports attached to an instance on the same network in Neutron. (https://review.openstack.org/163581)<br />
* The linuxbridge agent now enables VXLAN by default (https://review.openstack.org/160826)<br />
* neutron-ns-metadata-proxy can now be run as non-root (https://review.openstack.org/147437)<br />
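To pin the API worker count rather than inherit the new CPU-count default, set it explicitly in neutron.conf (the value below is purely illustrative):<br />

```ini
[DEFAULT]
# Explicit worker count; without this, Kilo defaults to the number of
# CPUs on the host.
api_workers = 4
```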
<br />
== OpenStack Block Storage (Cinder) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
* The 'host' config option for multiple storage backends in cinder.conf has been renamed to 'backend_host' in order to avoid a naming conflict with the 'host' option used to locate Redis. If you use this option, please ensure your configuration files are updated.<br />
* The openstack.common.middleware code has graduated into the oslo.middleware library. When upgrading from Juno to Kilo, references to openstack.common.middleware.RequestIdMiddleware in the Cinder Paste configuration must be changed to oslo.middleware.RequestId. This includes updating the middleware filter and its references.<br />
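A sketch of the rename in a multi-backend cinder.conf (the backend section name and hostname are made up):<br />

```ini
[lvmdriver-1]
# Juno:
#   host = block1.example.com
# Kilo:
backend_host = block1.example.com
```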
<br />
== OpenStack Telemetry (Ceilometer) ==<br />
=== Key New Features ===<br />
* TBD<br />
<br />
=== Known Issues ===<br />
* TBD<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Orchestration (Heat) ==<br />
<br />
Work in progress (will be moved here): https://etherpad.openstack.org/p/heat-kilo-releasenotes<br />
<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
None yet<br />
<br />
== OpenStack Database service (Trove) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
None yet<br />
<br />
== OpenStack Documentation ==<br />
<br />
=== Key New Features ===<br />
<br />
* TBD<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
None yet</div>Asalkeldhttps://wiki.openstack.org/w/index.php?title=Kolla/PTL_Elections_March_2015&diff=75535Kolla/PTL Elections March 20152015-03-13T13:23:58Z<p>Asalkeld: /* Candidates */</p>
<hr />
<div>=== Official ===<br />
* Angus Salkeld (asalkeld)<br />
<br />
=== Election system ===<br />
Elections will be held using CIVS and a Condorcet algorithm (Schulze/Beatpath/CSSD variant). Any tie will be broken using [[Governance/TieBreaking]]. If there is only one candidate, no poll will be held and the election will conclude March 17, 2015 05:59 UTC.<br />
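For illustration, the beatpath (Schulze) winner can be computed from ranked ballots with a short Floyd-Warshall-style sketch. The candidate names and ballots below are made up, and this is a simplified sketch, not the CIVS implementation:<br />

```python
def schulze_winner(ballots):
    """Return the Schulze (beatpath) winner, or None on a tie.

    ballots: list of (count, ranking) pairs; each ranking lists
    candidate names from most to least preferred.
    """
    cands = sorted({c for _, ranking in ballots for c in ranking})
    # d[x][y]: number of voters ranking x above y
    d = {x: {y: 0 for y in cands} for x in cands}
    for count, ranking in ballots:
        for i, x in enumerate(ranking):
            for y in ranking[i + 1:]:
                d[x][y] += count
    # p[x][y]: strength of the strongest beatpath from x to y
    p = {x: {y: d[x][y] if d[x][y] > d[y][x] else 0 for y in cands}
         for x in cands}
    for k in cands:  # Floyd-Warshall over path strengths (max of min)
        for i in cands:
            for j in cands:
                if i != j != k != i:
                    p[i][j] = max(p[i][j], min(p[i][k], p[k][j]))
    for x in cands:
        if all(p[x][y] >= p[y][x] for y in cands if y != x):
            return x
    return None

# Hypothetical ballots: 5 voters rank A>B>C, 4 rank B>C>A, 2 rank C>A>B.
ballots = [(5, ['A', 'B', 'C']), (4, ['B', 'C', 'A']), (2, ['C', 'A', 'B'])]
```

Here A beats B pairwise 7-4 and reaches C through B with strength 7, so A is the beatpath winner even though C beats A head-to-head 6-5.<br />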
<br />
=== Timeline ===<br />
* till 05:59 UTC March 17, 2015: Open candidacy to PTL positions<br />
* March 17, 2015 - 1300 UTC March 24, 2015: PTL elections<br />
<br />
=== Elected position ===<br />
The Kolla project must elect a PTL, who will serve for the Liberty cycle.<br />
<br />
=== Electorate ===<br />
<br />
In order to be an eligible candidate (and be allowed to vote) in the Kolla PTL election, you need to have contributed an accepted patch to the Kolla repo during the Juno-Kilo timeframe.<br />
<br />
Only the kolla project on stackforge is counted.<br />
<br />
=== Candidates ===<br />
<br />
Any member of an election electorate can propose his/her candidacy for the same election. No nomination is required. They can do so by sending an email to the openstack-dev@lists.openstack.org mailing list with the subject "[Kolla] PTL candidacy". The email can include a description of the candidate's platform. The candidacy is then confirmed by one of the election officials, after verification of the candidate's electorate status.<br />
<br />
Confirmed candidates for Kolla Liberty PTL Elections (alphabetically by last name):<br />
<br />
* [http://osdir.com/ml/openstack-dev/2015-03/msg00908.html Steven Dake]<br />
<br />
=== PTL ===<br />
<br />
* <br />
<br />
=== Links to Results ===<br />
<br />
* <br />
<br />
=== Useful links ===<br />
<br />
[[Election_Officiating_Guidelines]]</div>Asalkeldhttps://wiki.openstack.org/w/index.php?title=Meetings/HeatAgenda&diff=75378Meetings/HeatAgenda2015-03-11T11:41:24Z<p>Asalkeld: </p>
<hr />
<div><br />
= Weekly Heat (Orchestration) meeting =<br />
The [https://launchpad.net/heat heat] Orchestration project (see also [https://wiki.openstack.org/wiki/Heat wiki]) team holds a meeting in <code><nowiki>#openstack-meeting</nowiki></code> at alternating times:<br />
* Wednesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=20&min=0&sec=0 2000 UTC]<br />
* Wednesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=12&min=0&sec=0 1200 UTC]<br />
<br />
Everyone is welcome, feel free to add topics before or at the beginning of meetings.<br />
<br />
=== Agenda (2015-03-11 1200 UTC) ===<br />
* Adding items to the agenda<br />
* https://wiki.openstack.org/wiki/Kilo_Release_Schedule (March 19 cut-off for features); run through what we can get in.<br />
<br />
=== Agenda (2015-03-11 2000 UTC) ===<br />
* Adding items to the agenda<br />
* Work to get WSGI services runnable inside Apache/nginx (skraynev)<br />
<br />
=== Agenda (2015-03-04 1200 UTC) ===<br />
* Adding items to the agenda<br />
* update from cross project meeting (asyncio/threads/goless)<br />
* update from the release meeting (blueprint status, Kilo-3)<br />
<br />
=== Agenda (2015-02-25 2000 UTC) ===<br />
* Adding items to the agenda<br />
* Vancouver Design Summit space needs (provisional are: 5 fishbowl/ 10 work/ 1 friday): https://docs.google.com/spreadsheets/d/14pryalH3rVVQGHdyE3QeeZ3eDrs_1KfqfykxSTvrfyI/edit?usp=sharing<br />
* Heat Mission statement: https://review.openstack.org/#/c/154049/<br />
* FYI: https://wiki.openstack.org/wiki/Release_Cycle_Management/Liberty_Tracking<br />
* Short question about increasing the default nested-depth value for use in Sahara.<br />
* Mistral resources in Heat.<br />
<br />
=== Agenda (2015-02-18 1200 UTC) ===<br />
* Adding items to the agenda<br />
* Critical bug review<br />
* blueprint reviews (let's try to prioritize)<br />
<br />
== Meeting minutes ==<br />
* [http://eavesdrop.openstack.org/meetings/heat/2015/ 2015 Heat meeting archive]<br />
* [http://eavesdrop.openstack.org/meetings/heat/2014/ 2014 Heat meeting archive]</div>Asalkeldhttps://wiki.openstack.org/w/index.php?title=Meetings/HeatAgenda&diff=74988Meetings/HeatAgenda2015-03-04T05:44:09Z<p>Asalkeld: </p>
<hr />
<div><br />
= Weekly Heat (Orchestration) meeting =<br />
The [https://launchpad.net/heat heat] Orchestration project (see also [https://wiki.openstack.org/wiki/Heat wiki]) team holds a meeting in <code><nowiki>#openstack-meeting</nowiki></code> at alternating times:<br />
* Wednesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=20&min=0&sec=0 2000 UTC]<br />
* Wednesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=12&min=0&sec=0 1200 UTC]<br />
<br />
Everyone is welcome, feel free to add topics before or at the beginning of meetings.<br />
<br />
=== Agenda (2015-03-04 1200 UTC) ===<br />
* Adding items to the agenda<br />
* update from cross project meeting (asyncio/threads/goless)<br />
* update from the release meeting (blueprint status, Kilo-3)<br />
<br />
=== Agenda (2015-02-25 2000 UTC) ===<br />
* Adding items to the agenda<br />
* Vancouver Design Summit space needs (provisional are: 5 fishbowl/ 10 work/ 1 friday): https://docs.google.com/spreadsheets/d/14pryalH3rVVQGHdyE3QeeZ3eDrs_1KfqfykxSTvrfyI/edit?usp=sharing<br />
* Heat Mission statement: https://review.openstack.org/#/c/154049/<br />
* FYI: https://wiki.openstack.org/wiki/Release_Cycle_Management/Liberty_Tracking<br />
* Short question about increasing the default nested-depth value for use in Sahara.<br />
* Mistral resources in Heat.<br />
<br />
=== Agenda (2015-02-18 1200 UTC) ===<br />
* Adding items to the agenda<br />
* Critical bug review<br />
* blueprint reviews (let's try to prioritize)<br />
<br />
== Meeting minutes ==<br />
* [http://eavesdrop.openstack.org/meetings/heat/2015/ 2015 Heat meeting archive]<br />
* [http://eavesdrop.openstack.org/meetings/heat/2014/ 2014 Heat meeting archive]</div>Asalkeldhttps://wiki.openstack.org/w/index.php?title=Meetings/HeatAgenda&diff=74269Meetings/HeatAgenda2015-02-24T09:31:23Z<p>Asalkeld: </p>
<hr />
<div><br />
= Weekly Heat (Orchestration) meeting =<br />
The [https://launchpad.net/heat heat] Orchestration project (see also [https://wiki.openstack.org/wiki/Heat wiki]) team holds a meeting in <code><nowiki>#openstack-meeting</nowiki></code> at alternating times:<br />
* Wednesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=20&min=0&sec=0 2000 UTC]<br />
* Wednesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=12&min=0&sec=0 1200 UTC]<br />
<br />
Everyone is welcome, feel free to add topics before or at the beginning of meetings.<br />
<br />
=== Agenda (2015-02-25 2000 UTC) ===<br />
* Adding items to the agenda<br />
* Vancouver Design Summit space needs (provisional are: 5 fishbowl/ 10 work/ 1 friday): https://docs.google.com/spreadsheets/d/14pryalH3rVVQGHdyE3QeeZ3eDrs_1KfqfykxSTvrfyI/edit?usp=sharing<br />
* Heat Mission statement: https://review.openstack.org/#/c/154049/<br />
* FYI: https://wiki.openstack.org/wiki/Release_Cycle_Management/Liberty_Tracking<br />
<br />
=== Agenda (2015-02-18 1200 UTC) ===<br />
* Adding items to the agenda<br />
* Critical bug review<br />
* blueprint reviews (let's try to prioritize)<br />
<br />
=== Agenda (2015-02-11 2000 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* Bug 1414674 (orphan queues after heat-engine restart)<br />
<br />
=== Agenda (2015-02-04 1200 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* kilo-2 https://launchpad.net/heat/+milestone/kilo-2<br />
<br />
== Meeting minutes ==<br />
* [http://eavesdrop.openstack.org/meetings/heat/2015/ 2015 Heat meeting archive]<br />
* [http://eavesdrop.openstack.org/meetings/heat/2014/ 2014 Heat meeting archive]</div>Asalkeldhttps://wiki.openstack.org/w/index.php?title=Meetings/HeatAgenda&diff=73896Meetings/HeatAgenda2015-02-18T09:18:21Z<p>Asalkeld: </p>
<hr />
<div><br />
= Weekly Heat (Orchestration) meeting =<br />
The [https://launchpad.net/heat heat] Orchestration project (see also [https://wiki.openstack.org/wiki/Heat wiki]) team holds a meeting in <code><nowiki>#openstack-meeting</nowiki></code> at alternating times:<br />
* Wednesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=20&min=0&sec=0 2000 UTC]<br />
* Wednesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=12&min=0&sec=0 1200 UTC]<br />
<br />
Everyone is welcome, feel free to add topics before or at the beginning of meetings.<br />
<br />
=== Agenda (2015-02-18 1200 UTC) ===<br />
* Adding items to the agenda<br />
* Critical bug review<br />
* blueprint reviews (let's try to prioritize)<br />
<br />
=== Agenda (2015-02-11 2000 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* Bug 1414674 (orphan queues after heat-engine restart)<br />
<br />
=== Agenda (2015-02-04 1200 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* kilo-2 https://launchpad.net/heat/+milestone/kilo-2<br />
<br />
== Meeting minutes ==<br />
* [http://eavesdrop.openstack.org/meetings/heat/2015/ 2015 Heat meeting archive]<br />
* [http://eavesdrop.openstack.org/meetings/heat/2014/ 2014 Heat meeting archive]</div>Asalkeldhttps://wiki.openstack.org/w/index.php?title=Meetings/HeatAgenda&diff=73443Meetings/HeatAgenda2015-02-11T04:17:40Z<p>Asalkeld: </p>
<hr />
<div><br />
= Weekly Heat (Orchestration) meeting =<br />
The [https://launchpad.net/heat heat] Orchestration project (see also [https://wiki.openstack.org/wiki/Heat wiki]) team holds a meeting in <code><nowiki>#openstack-meeting</nowiki></code> at alternating times:<br />
* Wednesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=20&min=0&sec=0 2000 UTC]<br />
* Wednesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=12&min=0&sec=0 1200 UTC]<br />
<br />
Everyone is welcome, feel free to add topics before or at the beginning of meetings.<br />
<br />
=== Agenda (2015-02-12 2000 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
<br />
=== Agenda (2015-02-04 1200 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* kilo-2 https://launchpad.net/heat/+milestone/kilo-2<br />
<br />
== Meeting minutes ==<br />
* [http://eavesdrop.openstack.org/meetings/heat/2015/ 2015 Heat meeting archive]<br />
* [http://eavesdrop.openstack.org/meetings/heat/2014/ 2014 Heat meeting archive]</div>Asalkeldhttps://wiki.openstack.org/w/index.php?title=Meetings/HeatAgenda&diff=73442Meetings/HeatAgenda2015-02-11T04:17:24Z<p>Asalkeld: </p>
<hr />
<div><br />
= Weekly Heat (Orchestration) meeting =<br />
The [https://launchpad.net/heat heat] Orchestration project (see also [https://wiki.openstack.org/wiki/Heat wiki]) team holds a meeting in <code><nowiki>#openstack-meeting</nowiki></code> at alternating times:<br />
* Wednesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=20&min=0&sec=0 2000 UTC]<br />
* Wednesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=12&min=0&sec=0 1200 UTC]<br />
<br />
Everyone is welcome, feel free to add topics before or at the beginning of meetings.<br />
<br />
=== Agenda (2015-02-12 2000 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
<br />
=== Agenda (2015-02-04 1200 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* kilo-2 https://launchpad.net/heat/+milestone/kilo-2<br />
<br />
== Meeting minutes ==<br />
* [http://eavesdrop.openstack.org/meetings/heat/2015/ 2015 Heat meeting archive]<br />
* [http://eavesdrop.openstack.org/meetings/heat/2014/ 2014 Heat meeting archive]</div>Asalkeldhttps://wiki.openstack.org/w/index.php?title=Meetings/HeatAgenda&diff=72958Meetings/HeatAgenda2015-02-04T10:05:56Z<p>Asalkeld: </p>
<hr />
<div><br />
= Weekly Heat (Orchestration) meeting =<br />
The [https://launchpad.net/heat heat] Orchestration project (see also [https://wiki.openstack.org/wiki/Heat wiki]) team holds a meeting in <code><nowiki>#openstack-meeting</nowiki></code> at alternating times:<br />
* Wednesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=20&min=0&sec=0 2000 UTC]<br />
* Wednesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=12&min=0&sec=0 1200 UTC]<br />
<br />
Our first 1200 UTC meeting is Wednesday 28th May, 2014.<br />
<br />
Everyone is welcome.<br />
<br />
The blueprints that are used as a basis for [https://launchpad.net/heat the heat project] can be found at https://blueprints.launchpad.net/heat<br />
<br />
=== Agenda (2015-02-04 1200 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* kilo-2 https://launchpad.net/heat/+milestone/kilo-2<br />
<br />
=== Agenda (2015-01-28 2000 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* kilo-2 https://launchpad.net/heat/+milestone/kilo-2<br />
* convergence tasks/blueprints (target to kilo-3?)<br />
<br />
== Meeting minutes ==<br />
* [http://eavesdrop.openstack.org/meetings/heat/2015/ 2015 Heat meeting archive]<br />
* [http://eavesdrop.openstack.org/meetings/heat/2014/ 2014 Heat meeting archive]<br />
<br />
== Meeting organizers ==<br />
* Publish the agenda 24h in advance<br />
* Mail the agenda to the list and invite participants<br />
* Ask each person responsible for an action item from the previous meeting to prepare a line of the form: #info nickname, description of the action, link to the diff / mailing-list thread etc. describing the implementation<br />
* Use http://meetbot.debian.net/Manual.html to get an automatic summary<br />
* Mail the automatic summary as a reply to the invitation</div>Asalkeldhttps://wiki.openstack.org/w/index.php?title=Meetings/HeatAgenda&diff=72957Meetings/HeatAgenda2015-02-04T10:05:16Z<p>Asalkeld: </p>
<hr />
<div><br />
= Weekly Heat (Orchestration) meeting =<br />
The [https://launchpad.net/heat heat] Orchestration project (see also [https://wiki.openstack.org/wiki/Heat wiki]) team holds a meeting in <code><nowiki>#openstack-meeting</nowiki></code> at alternating times:<br />
* Wednesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=20&min=0&sec=0 2000 UTC]<br />
* Wednesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=12&min=0&sec=0 1200 UTC]<br />
<br />
Our first 1200 UTC meeting is Wednesday 28th May, 2014.<br />
<br />
Everyone is welcome.<br />
<br />
The blueprints that are used as a basis for [https://launchpad.net/heat the heat project] can be found at https://blueprints.launchpad.net/heat<br />
<br />
=== Agenda (2015-02-04 1200 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* kilo-2 https://launchpad.net/heat/+milestone/kilo-2<br />
<br />
=== Agenda (2015-01-28 2000 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* kilo-2 https://launchpad.net/heat/+milestone/kilo-2<br />
* convergence tasks/blueprints (target to kilo-3?)<br />
<br />
=== Agenda (2015-01-21 1200 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* Remove deprecation properties<br />
* kilo-2 https://launchpad.net/heat/+milestone/kilo-2<br />
<br />
=== Agenda (2015-01-14 2000 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* kilo-2 https://launchpad.net/heat/+milestone/kilo-2<br />
<br />
=== Agenda (2015-01-07 1200 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* kilo-2 https://launchpad.net/heat/+milestone/kilo-2<br />
* Planning the midcycle meetup online (brought forward from last time)<br />
* what's up with convergence?<br />
<br />
=== Agenda (2014-12-18 2000 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* kilo-1 https://launchpad.net/heat/+milestone/kilo-1<br />
* Planning the midcycle meetup online - bring your ideas!<br />
<br />
== Meeting minutes ==<br />
* [http://eavesdrop.openstack.org/meetings/heat/2014/ 2014 Heat meeting archive]<br />
* [http://eavesdrop.openstack.org/meetings/heat/2013/ 2013 Heat meeting archive]<br />
* [http://eavesdrop.openstack.org/meetings/heat/2012/ 2012 Heat meeting archive]<br />
<br />
== Meeting organizers ==<br />
* Publish the agenda 24h in advance<br />
* Mail the agenda to the list and invite participants<br />
* Ask each person responsible for an action item from the previous meeting to prepare a line of the form: #info nickname, description of the action, link to the diff / mailing-list thread etc. describing the implementation<br />
* Use http://meetbot.debian.net/Manual.html to get an automatic summary<br />
* Mail the automatic summary as a reply to the invitation</div>Asalkeldhttps://wiki.openstack.org/w/index.php?title=Meetings/HeatAgenda&diff=72523Meetings/HeatAgenda2015-01-28T11:07:48Z<p>Asalkeld: </p>
<hr />
<div><br />
= Weekly Heat (Orchestration) meeting =<br />
The [https://launchpad.net/heat heat] Orchestration project (see also [https://wiki.openstack.org/wiki/Heat wiki]) team holds a meeting in <code><nowiki>#openstack-meeting</nowiki></code> at alternating times:<br />
* Wednesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=20&min=0&sec=0 2000 UTC]<br />
* Wednesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=12&min=0&sec=0 1200 UTC]<br />
<br />
Our first 1200 UTC meeting is Wednesday 28th May, 2014.<br />
<br />
Everyone is welcome.<br />
<br />
The blueprints that are used as a basis for [https://launchpad.net/heat the heat project] can be found at https://blueprints.launchpad.net/heat<br />
<br />
=== Agenda (2015-01-28 2000 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* kilo-2 https://launchpad.net/heat/+milestone/kilo-2<br />
* convergence tasks/blueprints (target to kilo-3?)<br />
<br />
=== Agenda (2015-01-21 1200 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* Remove deprecation properties<br />
* kilo-2 https://launchpad.net/heat/+milestone/kilo-2<br />
<br />
=== Agenda (2015-01-14 2000 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* kilo-2 https://launchpad.net/heat/+milestone/kilo-2<br />
<br />
=== Agenda (2015-01-07 1200 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* kilo-2 https://launchpad.net/heat/+milestone/kilo-2<br />
* Planning the midcycle meetup online (brought forward from last time)<br />
* what's up with convergence?<br />
<br />
=== Agenda (2014-12-18 2000 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* kilo-1 https://launchpad.net/heat/+milestone/kilo-1<br />
* Planing the midcycle meetup online - bring your ideas!<br />
<br />
== Meeting minutes ==<br />
* [http://eavesdrop.openstack.org/meetings/heat/2014/ 2014 Heat meeting archive]<br />
* [http://eavesdrop.openstack.org/meetings/heat/2013/ 2013 Heat meeting archive]<br />
* [http://eavesdrop.openstack.org/meetings/heat/2012/ 2012 Heat meeting archive]<br />
<br />
== Meeting organizers ==<br />
* Publish the agenda 24h in advance<br />
* Mail the agenda to the list and invite participants<br />
* Ask each person responsible for an action item from the previous meeting to prepare a line of the form: #info nickname, description of the action, link to the diff / mailing-list thread etc. describing the implementation<br />
* Use http://meetbot.debian.net/Manual.html to get an automatic summary<br />
* Mail the automatic summary as a reply to the invitation</div>Asalkeldhttps://wiki.openstack.org/w/index.php?title=Meetings/HeatAgenda&diff=71808Meetings/HeatAgenda2015-01-14T10:07:27Z<p>Asalkeld: </p>
<hr />
<div><br />
= Weekly Heat (Orchestration) meeting =<br />
The [https://launchpad.net/heat heat] Orchestration project (see also [https://wiki.openstack.org/wiki/Heat wiki]) team holds a meeting in <code><nowiki>#openstack-meeting</nowiki></code> at alternating times:<br />
* Wednesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=20&min=0&sec=0 2000 UTC]<br />
* Wednesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=12&min=0&sec=0 1200 UTC]<br />
<br />
Our first 1200 UTC meeting is Wednesday 28th May, 2014.<br />
<br />
Everyone is welcome.<br />
<br />
The blueprints that are used as a basis for [https://launchpad.net/heat the heat project] can be found at https://blueprints.launchpad.net/heat<br />
<br />
=== Agenda (2015-01-14 2000 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* kilo-2 https://launchpad.net/heat/+milestone/kilo-2<br />
<br />
=== Agenda (2015-01-07 1200 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* kilo-2 https://launchpad.net/heat/+milestone/kilo-2<br />
* Planning the midcycle meetup online (brought forward from last time)<br />
* what's up with convergence?<br />
<br />
=== Agenda (2014-12-18 2000 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* kilo-1 https://launchpad.net/heat/+milestone/kilo-1<br />
* Planning the midcycle meetup online - bring your ideas!<br />
<br />
== Meeting minutes ==<br />
* [http://eavesdrop.openstack.org/meetings/heat/2014/ 2014 Heat meeting archive]<br />
* [http://eavesdrop.openstack.org/meetings/heat/2013/ 2013 Heat meeting archive]<br />
* [http://eavesdrop.openstack.org/meetings/heat/2012/ 2012 Heat meeting archive]<br />
<br />
== Meeting organizers ==<br />
* Publish the agenda 24h in advance<br />
* Mail the agenda to the list and invite participants<br />
* Ask each person responsible for an action item from the previous meeting to prepare a line of the form: #info nickname, description of the action, link to the diff / mailing-list thread etc. describing the implementation<br />
* Use http://meetbot.debian.net/Manual.html to get an automatic summary<br />
* Mail the automatic summary as a reply to the invitation</div>Asalkeldhttps://wiki.openstack.org/w/index.php?title=Meetings/HeatAgenda&diff=71299Meetings/HeatAgenda2015-01-07T11:09:23Z<p>Asalkeld: </p>
<hr />
<div><br />
= Weekly Heat (Orchestration) meeting =<br />
The [https://launchpad.net/heat heat] Orchestration project (see also [https://wiki.openstack.org/wiki/Heat wiki]) team holds a meeting in <code><nowiki>#openstack-meeting</nowiki></code> at alternating times:<br />
* Wednesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=20&min=0&sec=0 2000 UTC]<br />
* Wednesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=12&min=0&sec=0 1200 UTC]<br />
<br />
Our first 1200 UTC meeting is Wednesday 28th May, 2014.<br />
<br />
Everyone is welcome.<br />
<br />
The blueprints that are used as a basis for [https://launchpad.net/heat the heat project] can be found at https://blueprints.launchpad.net/heat<br />
<br />
=== Agenda (2015-01-07 1200 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* kilo-2 https://launchpad.net/heat/+milestone/kilo-2<br />
* Planning the midcycle meetup online (brought forward from last time)<br />
* what's up with convergence?<br />
<br />
=== Agenda (2014-12-18 2000 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* kilo-1 https://launchpad.net/heat/+milestone/kilo-1<br />
* Planning the midcycle meetup online - bring your ideas!<br />
<br />
== Meeting minutes ==<br />
* [http://eavesdrop.openstack.org/meetings/heat/2014/ 2014 Heat meeting archive]<br />
* [http://eavesdrop.openstack.org/meetings/heat/2013/ 2013 Heat meeting archive]<br />
* [http://eavesdrop.openstack.org/meetings/heat/2012/ 2012 Heat meeting archive]<br />
<br />
== Meeting organizers ==<br />
* Publish the agenda 24h in advance<br />
* Mail the agenda to the list and invite participants<br />
* Ask each person responsible for an action item from the previous meeting to prepare a line of the form: #info nickname, description of the action, link to the diff / mailing-list thread etc. describing the implementation<br />
* Use http://meetbot.debian.net/Manual.html to get an automatic summary<br />
* Mail the automatic summary as a reply to the invitation</div>Asalkeldhttps://wiki.openstack.org/w/index.php?title=Meetings/HeatAgenda&diff=70681Meetings/HeatAgenda2014-12-17T19:57:51Z<p>Asalkeld: /* Agenda (2014-12-18 2000 UTC) */</p>
<hr />
<div><br />
= Weekly Heat (Orchestration) meeting =<br />
The [https://launchpad.net/heat heat] Orchestration project (see also [https://wiki.openstack.org/wiki/Heat wiki]) team holds a meeting in <code><nowiki>#openstack-meeting</nowiki></code> at alternating times:<br />
* Wednesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=20&min=0&sec=0 2000 UTC]<br />
* Wednesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=12&min=0&sec=0 1200 UTC]<br />
<br />
Our first 1200 UTC meeting is Wednesday 28th May, 2014.<br />
<br />
Everyone is welcome.<br />
<br />
The blueprints that are used as a basis for [https://launchpad.net/heat the heat project] can be found at https://blueprints.launchpad.net/heat<br />
<br />
=== Agenda (2014-12-18 2000 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* kilo-1 https://launchpad.net/heat/+milestone/kilo-1<br />
* Planning the midcycle meetup online - bring your ideas!<br />
<br />
=== Agenda (2014-12-11 1200 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* kilo-1 status: https://launchpad.net/heat/+milestone/kilo-1<br />
<br />
=== Agenda (2014-12-04 2000 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
<br />
=== Agenda (2014-11-26 1200 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* Cross Project Liaisons revisited (shardy)<br />
<br />
=== Agenda (2014-11-19 2000 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* Critical issues sync<br />
* Organize a bug cleanup day (therve)<br />
* https://wiki.openstack.org/wiki/CrossProjectLiaisons including (Release manager)<br />
* CPLs encouraged to go to project meeting<br />
* Mid cycle meetup planning<br />
<br />
== Meeting minutes ==<br />
* [http://eavesdrop.openstack.org/meetings/heat/2014/ 2014 Heat meeting archive]<br />
* [http://eavesdrop.openstack.org/meetings/heat/2013/ 2013 Heat meeting archive]<br />
* [http://eavesdrop.openstack.org/meetings/heat/2012/ 2012 Heat meeting archive]<br />
<br />
== Meeting organizers ==<br />
* Publish the agenda 24h in advance<br />
* Mail the agenda to the list and invite participants<br />
* Ask each person responsible for an action from the previous meeting to prepare, for each action item, a line of the form: #info nickname, description of the action, link to the diff / mailing list thread etc. describing the implementation of the action<br />
* Use http://meetbot.debian.net/Manual.html to get an automatic summary<br />
* Mail the automatic summary as a reply to the invitation</div>Asalkeldhttps://wiki.openstack.org/w/index.php?title=Meetings/HeatAgenda&diff=70210Meetings/HeatAgenda2014-12-12T02:40:59Z<p>Asalkeld: </p>
<hr />
<div><br />
= Weekly Heat (Orchestration) meeting =<br />
The [https://launchpad.net/heat heat] Orchestration project (see also [https://wiki.openstack.org/wiki/Heat wiki]) team holds a meeting in <code><nowiki>#openstack-meeting</nowiki></code> at alternating times:<br />
* Wednesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=20&min=0&sec=0 2000 UTC]<br />
* Wednesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=12&min=0&sec=0 1200 UTC]<br />
<br />
Our first 1200 UTC meeting is Wednesday 28th May, 2014.<br />
<br />
Everyone is welcome.<br />
<br />
The blueprints that are used as a basis for [https://launchpad.net/heat the heat project] can be found at https://blueprints.launchpad.net/heat<br />
<br />
=== Agenda (2014-12-18 2000 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* Planning the midcycle meetup online - bring your ideas!<br />
<br />
=== Agenda (2014-12-11 1200 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* kilo-1 status: https://launchpad.net/heat/+milestone/kilo-1<br />
<br />
=== Agenda (2014-12-04 2000 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
<br />
=== Agenda (2014-11-26 1200 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* Cross Project Liaisons revisited (shardy)<br />
<br />
=== Agenda (2014-11-19 2000 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* Critical issues sync<br />
* Organize a bug cleanup day (therve)<br />
* https://wiki.openstack.org/wiki/CrossProjectLiaisons including (Release manager)<br />
* CPLs encouraged to go to project meeting<br />
* Mid cycle meetup planning<br />
<br />
== Meeting minutes ==<br />
* [http://eavesdrop.openstack.org/meetings/heat/2014/ 2014 Heat meeting archive]<br />
* [http://eavesdrop.openstack.org/meetings/heat/2013/ 2013 Heat meeting archive]<br />
* [http://eavesdrop.openstack.org/meetings/heat/2012/ 2012 Heat meeting archive]<br />
<br />
== Meeting organizers ==<br />
* Publish the agenda 24h in advance<br />
* Mail the agenda to the list and invite participants<br />
* Ask each person responsible for an action from the previous meeting to prepare, for each action item, a line of the form: #info nickname, description of the action, link to the diff / mailing list thread etc. describing the implementation of the action<br />
* Use http://meetbot.debian.net/Manual.html to get an automatic summary<br />
* Mail the automatic summary as a reply to the invitation</div>Asalkeldhttps://wiki.openstack.org/w/index.php?title=Meetings/HeatAgenda&diff=70075Meetings/HeatAgenda2014-12-10T11:26:23Z<p>Asalkeld: </p>
<hr />
<div><br />
= Weekly Heat (Orchestration) meeting =<br />
The [https://launchpad.net/heat heat] Orchestration project (see also [https://wiki.openstack.org/wiki/Heat wiki]) team holds a meeting in <code><nowiki>#openstack-meeting</nowiki></code> at alternating times:<br />
* Wednesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=20&min=0&sec=0 2000 UTC]<br />
* Wednesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=12&min=0&sec=0 1200 UTC]<br />
<br />
Our first 1200 UTC meeting is Wednesday 28th May, 2014.<br />
<br />
Everyone is welcome.<br />
<br />
The blueprints that are used as a basis for [https://launchpad.net/heat the heat project] can be found at https://blueprints.launchpad.net/heat<br />
<br />
=== Agenda (2014-12-11 1200 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* kilo-1 status: https://launchpad.net/heat/+milestone/kilo-1<br />
<br />
=== Agenda (2014-12-04 2000 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
<br />
=== Agenda (2014-11-26 1200 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* Cross Project Liaisons revisited (shardy)<br />
<br />
=== Agenda (2014-11-19 2000 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* Critical issues sync<br />
* Organize a bug cleanup day (therve)<br />
* https://wiki.openstack.org/wiki/CrossProjectLiaisons including (Release manager)<br />
* CPLs encouraged to go to project meeting<br />
* Mid cycle meetup planning<br />
<br />
== Meeting minutes ==<br />
* [http://eavesdrop.openstack.org/meetings/heat/2014/ 2014 Heat meeting archive]<br />
* [http://eavesdrop.openstack.org/meetings/heat/2013/ 2013 Heat meeting archive]<br />
* [http://eavesdrop.openstack.org/meetings/heat/2012/ 2012 Heat meeting archive]<br />
<br />
== Meeting organizers ==<br />
* Publish the agenda 24h in advance<br />
* Mail the agenda to the list and invite participants<br />
* Ask each person responsible for an action from the previous meeting to prepare, for each action item, a line of the form: #info nickname, description of the action, link to the diff / mailing list thread etc. describing the implementation of the action<br />
* Use http://meetbot.debian.net/Manual.html to get an automatic summary<br />
* Mail the automatic summary as a reply to the invitation</div>Asalkeldhttps://wiki.openstack.org/w/index.php?title=Meetings/HeatAgenda&diff=69519Meetings/HeatAgenda2014-12-03T19:56:59Z<p>Asalkeld: </p>
<hr />
<div><br />
= Weekly Heat (Orchestration) meeting =<br />
The [https://launchpad.net/heat heat] Orchestration project (see also [https://wiki.openstack.org/wiki/Heat wiki]) team holds a meeting in <code><nowiki>#openstack-meeting</nowiki></code> at alternating times:<br />
* Wednesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=20&min=0&sec=0 2000 UTC]<br />
* Wednesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=12&min=0&sec=0 1200 UTC]<br />
<br />
Our first 1200 UTC meeting is Wednesday 28th May, 2014.<br />
<br />
Everyone is welcome.<br />
<br />
The blueprints that are used as a basis for [https://launchpad.net/heat the heat project] can be found at https://blueprints.launchpad.net/heat<br />
<br />
=== Agenda (2014-12-04 2000 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
<br />
=== Agenda (2014-11-26 1200 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* Cross Project Liaisons revisited (shardy)<br />
<br />
=== Agenda (2014-11-19 2000 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* Critical issues sync<br />
* Organize a bug cleanup day (therve)<br />
* https://wiki.openstack.org/wiki/CrossProjectLiaisons including (Release manager)<br />
* CPLs encouraged to go to project meeting<br />
* Mid cycle meetup planning<br />
<br />
== Meeting minutes ==<br />
* [http://eavesdrop.openstack.org/meetings/heat/2014/ 2014 Heat meeting archive]<br />
* [http://eavesdrop.openstack.org/meetings/heat/2013/ 2013 Heat meeting archive]<br />
* [http://eavesdrop.openstack.org/meetings/heat/2012/ 2012 Heat meeting archive]<br />
<br />
== Meeting organizers ==<br />
* Publish the agenda 24h in advance<br />
* Mail the agenda to the list and invite participants<br />
* Ask each person responsible for an action from the previous meeting to prepare, for each action item, a line of the form: #info nickname, description of the action, link to the diff / mailing list thread etc. describing the implementation of the action<br />
* Use http://meetbot.debian.net/Manual.html to get an automatic summary<br />
* Mail the automatic summary as a reply to the invitation</div>Asalkeldhttps://wiki.openstack.org/w/index.php?title=Meetings/HeatAgenda&diff=68835Meetings/HeatAgenda2014-11-25T12:12:01Z<p>Asalkeld: </p>
<hr />
<div><br />
= Weekly Heat (Orchestration) meeting =<br />
The [https://launchpad.net/heat heat] Orchestration project (see also [https://wiki.openstack.org/wiki/Heat wiki]) team holds a meeting in <code><nowiki>#openstack-meeting</nowiki></code> at alternating times:<br />
* Wednesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=20&min=0&sec=0 2000 UTC]<br />
* Wednesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=12&min=0&sec=0 1200 UTC]<br />
<br />
Our first 1200 UTC meeting is Wednesday 28th May, 2014.<br />
<br />
Everyone is welcome.<br />
<br />
The blueprints that are used as a basis for [https://launchpad.net/heat the heat project] can be found at https://blueprints.launchpad.net/heat<br />
<br />
=== Agenda (2014-11-26 1200 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
<br />
<br />
=== Agenda (2014-11-19 2000 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* Critical issues sync<br />
* Organize a bug cleanup day (therve)<br />
* https://wiki.openstack.org/wiki/CrossProjectLiaisons including (Release manager)<br />
* CPLs encouraged to go to project meeting<br />
* Mid cycle meetup planning<br />
<br />
=== Agenda (2014-10-29 1200 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* Critical issues sync<br />
<br />
=== Agenda (2014-10-22 2000 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* Review and prioritize [https://etherpad.openstack.org/p/kilo-heat-summit-topics Summit sessions] <br />
<br />
=== Agenda (2014-10-15 1200 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* Critical issues sync<br />
* Reminder to update [https://etherpad.openstack.org/p/kilo-heat-midcycle-meetup midcycle etherpad]<br />
* Reminder to update [https://etherpad.openstack.org/p/kilo-heat-summit-topics summit session etherpad]<br />
* [[Heat/ConvergenceDesign|Convergence]]: Persisting graph and resource versioning<br />
<br />
=== Agenda (2014-10-08 2000 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* Critical issues sync<br />
* Request for python-heatclient project to adopt heat-translator<br />
<br />
=== Agenda (2014-10-01 1200 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* HARestarter transition plan<br />
* Critical issues sync<br />
<br />
== Meeting minutes ==<br />
* [http://eavesdrop.openstack.org/meetings/heat/2014/ 2014 Heat meeting archive]<br />
* [http://eavesdrop.openstack.org/meetings/heat/2013/ 2013 Heat meeting archive]<br />
* [http://eavesdrop.openstack.org/meetings/heat/2012/ 2012 Heat meeting archive]<br />
<br />
== Meeting organizers ==<br />
* Publish the agenda 24h in advance<br />
* Mail the agenda to the list and invite participants<br />
* Ask each person responsible for an action from the previous meeting to prepare, for each action item, a line of the form: #info nickname, description of the action, link to the diff / mailing list thread etc. describing the implementation of the action<br />
* Use http://meetbot.debian.net/Manual.html to get an automatic summary<br />
* Mail the automatic summary as a reply to the invitation</div>Asalkeldhttps://wiki.openstack.org/w/index.php?title=Meetings/HeatAgenda&diff=68351Meetings/HeatAgenda2014-11-18T23:29:28Z<p>Asalkeld: /* Agenda (2014-11-19 2000 UTC) */</p>
<hr />
<div><br />
= Weekly Heat (Orchestration) meeting =<br />
The [https://launchpad.net/heat heat] Orchestration project (see also [https://wiki.openstack.org/wiki/Heat wiki]) team holds a meeting in <code><nowiki>#openstack-meeting</nowiki></code> at alternating times:<br />
* Wednesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=20&min=0&sec=0 2000 UTC]<br />
* Wednesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=12&min=0&sec=0 1200 UTC]<br />
<br />
Our first 1200 UTC meeting is Wednesday 28th May, 2014.<br />
<br />
Everyone is welcome.<br />
<br />
The blueprints that are used as a basis for [https://launchpad.net/heat the heat project] can be found at https://blueprints.launchpad.net/heat<br />
<br />
=== Agenda (2014-11-19 2000 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* Critical issues sync<br />
* Organize a bug cleanup day (therve)<br />
* https://wiki.openstack.org/wiki/CrossProjectLiaisons including (Release manager)<br />
* CPLs encouraged to go to project meeting<br />
* Mid cycle meetup planning<br />
<br />
=== Agenda (2014-10-29 1200 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* Critical issues sync<br />
<br />
=== Agenda (2014-10-22 2000 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* Review and prioritize [https://etherpad.openstack.org/p/kilo-heat-summit-topics Summit sessions] <br />
<br />
=== Agenda (2014-10-15 1200 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* Critical issues sync<br />
* Reminder to update [https://etherpad.openstack.org/p/kilo-heat-midcycle-meetup midcycle etherpad]<br />
* Reminder to update [https://etherpad.openstack.org/p/kilo-heat-summit-topics summit session etherpad]<br />
* [[Heat/ConvergenceDesign|Convergence]]: Persisting graph and resource versioning<br />
<br />
=== Agenda (2014-10-08 2000 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* Critical issues sync<br />
* Request for python-heatclient project to adopt heat-translator<br />
<br />
=== Agenda (2014-10-01 1200 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* HARestarter transition plan<br />
* Critical issues sync<br />
<br />
== Meeting minutes ==<br />
* [http://eavesdrop.openstack.org/meetings/heat/2014/ 2014 Heat meeting archive]<br />
* [http://eavesdrop.openstack.org/meetings/heat/2013/ 2013 Heat meeting archive]<br />
* [http://eavesdrop.openstack.org/meetings/heat/2012/ 2012 Heat meeting archive]<br />
<br />
== Meeting organizers ==<br />
* Publish the agenda 24h in advance<br />
* Mail the agenda to the list and invite participants<br />
* Ask each person responsible for an action from the previous meeting to prepare, for each action item, a line of the form: #info nickname, description of the action, link to the diff / mailing list thread etc. describing the implementation of the action<br />
* Use http://meetbot.debian.net/Manual.html to get an automatic summary<br />
* Mail the automatic summary as a reply to the invitation</div>Asalkeldhttps://wiki.openstack.org/w/index.php?title=Meetings/HeatAgenda&diff=67789Meetings/HeatAgenda2014-11-13T05:23:22Z<p>Asalkeld: /* Agenda (2014-11-12 1200 UTC) */</p>
<hr />
<div><br />
= Weekly Heat (Orchestration) meeting =<br />
The [https://launchpad.net/heat heat] Orchestration project (see also [https://wiki.openstack.org/wiki/Heat wiki]) team holds a meeting in <code><nowiki>#openstack-meeting</nowiki></code> at alternating times:<br />
* Wednesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=20&min=0&sec=0 2000 UTC]<br />
* Wednesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=12&min=0&sec=0 1200 UTC]<br />
<br />
Our first 1200 UTC meeting is Wednesday 28th May, 2014.<br />
<br />
Everyone is welcome.<br />
<br />
The blueprints that are used as a basis for [https://launchpad.net/heat the heat project] can be found at https://blueprints.launchpad.net/heat<br />
<br />
=== Agenda (2014-11-19 2000 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* Critical issues sync<br />
* Organize a bug cleanup day<br />
* https://wiki.openstack.org/wiki/CrossProjectLiaisons<br />
<br />
=== Agenda (2014-10-29 1200 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* Critical issues sync<br />
<br />
=== Agenda (2014-10-22 2000 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* Review and prioritize [https://etherpad.openstack.org/p/kilo-heat-summit-topics Summit sessions] <br />
<br />
=== Agenda (2014-10-15 1200 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* Critical issues sync<br />
* Reminder to update [https://etherpad.openstack.org/p/kilo-heat-midcycle-meetup midcycle etherpad]<br />
* Reminder to update [https://etherpad.openstack.org/p/kilo-heat-summit-topics summit session etherpad]<br />
* [[Heat/ConvergenceDesign|Convergence]]: Persisting graph and resource versioning<br />
<br />
=== Agenda (2014-10-08 2000 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* Critical issues sync<br />
* Request for python-heatclient project to adopt heat-translator<br />
<br />
=== Agenda (2014-10-01 1200 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* HARestarter transition plan<br />
* Critical issues sync<br />
<br />
== Meeting minutes ==<br />
* [http://eavesdrop.openstack.org/meetings/heat/2014/ 2014 Heat meeting archive]<br />
* [http://eavesdrop.openstack.org/meetings/heat/2013/ 2013 Heat meeting archive]<br />
* [http://eavesdrop.openstack.org/meetings/heat/2012/ 2012 Heat meeting archive]<br />
<br />
== Meeting organizers ==<br />
* Publish the agenda 24h in advance<br />
* Mail the agenda to the list and invite participants<br />
* Ask each person responsible for an action from the previous meeting to prepare, for each action item, a line of the form: #info nickname, description of the action, link to the diff / mailing list thread etc. describing the implementation of the action<br />
* Use http://meetbot.debian.net/Manual.html to get an automatic summary<br />
* Mail the automatic summary as a reply to the invitation</div>Asalkeldhttps://wiki.openstack.org/w/index.php?title=Meetings/HeatAgenda&diff=67503Meetings/HeatAgenda2014-11-07T15:36:50Z<p>Asalkeld: </p>
<hr />
<div><br />
= Weekly Heat (Orchestration) meeting =<br />
The [https://launchpad.net/heat heat] Orchestration project (see also [https://wiki.openstack.org/wiki/Heat wiki]) team holds a meeting in <code><nowiki>#openstack-meeting</nowiki></code> at alternating times:<br />
* Wednesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=20&min=0&sec=0 2000 UTC]<br />
* Wednesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=12&min=0&sec=0 1200 UTC]<br />
<br />
Our first 1200 UTC meeting is Wednesday 28th May, 2014.<br />
<br />
Everyone is welcome.<br />
<br />
The blueprints that are used as a basis for [https://launchpad.net/heat the heat project] can be found at https://blueprints.launchpad.net/heat<br />
<br />
=== Agenda (2014-11-12 1200 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* Critical issues sync<br />
* Organize a bug cleanup day<br />
<br />
=== Agenda (2014-10-29 1200 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* Critical issues sync<br />
<br />
=== Agenda (2014-10-22 2000 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* Review and prioritize [https://etherpad.openstack.org/p/kilo-heat-summit-topics Summit sessions] <br />
<br />
=== Agenda (2014-10-15 1200 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* Critical issues sync<br />
* Reminder to update [https://etherpad.openstack.org/p/kilo-heat-midcycle-meetup midcycle etherpad]<br />
* Reminder to update [https://etherpad.openstack.org/p/kilo-heat-summit-topics summit session etherpad]<br />
* [[Heat/ConvergenceDesign|Convergence]]: Persisting graph and resource versioning<br />
<br />
=== Agenda (2014-10-08 2000 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* Critical issues sync<br />
* Request for python-heatclient project to adopt heat-translator<br />
<br />
=== Agenda (2014-10-01 1200 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* HARestarter transition plan<br />
* Critical issues sync<br />
<br />
== Meeting minutes ==<br />
* [http://eavesdrop.openstack.org/meetings/heat/2014/ 2014 Heat meeting archive]<br />
* [http://eavesdrop.openstack.org/meetings/heat/2013/ 2013 Heat meeting archive]<br />
* [http://eavesdrop.openstack.org/meetings/heat/2012/ 2012 Heat meeting archive]<br />
<br />
== Meeting organizers ==<br />
* Publish the agenda 24h in advance<br />
* Mail the agenda to the list and invite participants<br />
* Ask each person responsible for an action from the previous meeting to prepare, for each action item, a line of the form: #info nickname, description of the action, link to the diff / mailing list thread etc. describing the implementation of the action<br />
* Use http://meetbot.debian.net/Manual.html to get an automatic summary<br />
* Mail the automatic summary as a reply to the invitation</div>Asalkeldhttps://wiki.openstack.org/w/index.php?title=Meetings/HeatAgenda&diff=66950Meetings/HeatAgenda2014-10-29T11:04:13Z<p>Asalkeld: </p>
<hr />
<div><br />
= Weekly Heat (Orchestration) meeting =<br />
The [https://launchpad.net/heat heat] Orchestration project (see also [https://wiki.openstack.org/wiki/Heat wiki]) team holds a meeting in <code><nowiki>#openstack-meeting</nowiki></code> at alternating times:<br />
* Wednesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=20&min=0&sec=0 2000 UTC]<br />
* Wednesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=12&min=0&sec=0 1200 UTC]<br />
<br />
Our first 1200 UTC meeting is Wednesday 28th May, 2014.<br />
<br />
Everyone is welcome.<br />
<br />
The blueprints that are used as a basis for [https://launchpad.net/heat the heat project] can be found at https://blueprints.launchpad.net/heat<br />
<br />
=== Agenda (2014-10-29 1200 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* Critical issues sync<br />
<br />
=== Agenda (2014-10-22 2000 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* Review and prioritize [https://etherpad.openstack.org/p/kilo-heat-summit-topics Summit sessions] <br />
<br />
=== Agenda (2014-10-15 1200 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* Critical issues sync<br />
* Reminder to update [https://etherpad.openstack.org/p/kilo-heat-midcycle-meetup midcycle etherpad]<br />
* Reminder to update [https://etherpad.openstack.org/p/kilo-heat-summit-topics summit session etherpad]<br />
* [[Heat/ConvergenceDesign|Convergence]]: Persisting graph and resource versioning<br />
<br />
=== Agenda (2014-10-08 2000 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* Critical issues sync<br />
* Request for python-heatclient project to adopt heat-translator<br />
<br />
=== Agenda (2014-10-01 1200 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* HARestarter transition plan<br />
* Critical issues sync<br />
<br />
== Meeting minutes ==<br />
* [http://eavesdrop.openstack.org/meetings/heat/2014/ 2014 Heat meeting archive]<br />
* [http://eavesdrop.openstack.org/meetings/heat/2013/ 2013 Heat meeting archive]<br />
* [http://eavesdrop.openstack.org/meetings/heat/2012/ 2012 Heat meeting archive]<br />
<br />
== Meeting organizers ==<br />
* Publish the agenda 24h in advance<br />
* Mail the agenda to the list and invite participants<br />
* Ask each person responsible for an action from the previous meeting to prepare, for each action item, a line of the form: #info nickname, description of the action, link to the diff / mailing list thread etc. describing the implementation of the action<br />
* Use http://meetbot.debian.net/Manual.html to get an automatic summary<br />
* Mail the automatic summary as a reply to the invitation</div>Asalkeldhttps://wiki.openstack.org/w/index.php?title=Meetings/HeatAgenda&diff=66151Meetings/HeatAgenda2014-10-20T02:41:50Z<p>Asalkeld: </p>
<hr />
<div><br />
= Weekly Heat (Orchestration) meeting =<br />
The [https://launchpad.net/heat heat] Orchestration project (see also [https://wiki.openstack.org/wiki/Heat wiki]) team holds a meeting in <code><nowiki>#openstack-meeting</nowiki></code> at alternating times:<br />
* Wednesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=20&min=0&sec=0 2000 UTC]<br />
* Wednesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=12&min=0&sec=0 1200 UTC]<br />
<br />
Our first 1200 UTC meeting is Wednesday 28th May, 2014.<br />
<br />
Everyone is welcome.<br />
<br />
The blueprints that are used as a basis for [https://launchpad.net/heat the heat project] can be found at https://blueprints.launchpad.net/heat<br />
<br />
=== Agenda (2014-10-22 2000 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* Review and prioritize [https://etherpad.openstack.org/p/kilo-heat-summit-topics Summit sessions] <br />
<br />
=== Agenda (2014-10-15 1200 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* Critical issues sync<br />
* Reminder to update [https://etherpad.openstack.org/p/kilo-heat-midcycle-meetup midcycle etherpad]<br />
* Reminder to update [https://etherpad.openstack.org/p/kilo-heat-summit-topics summit session etherpad]<br />
* [[Heat/ConvergenceDesign|Convergence]]: Persisting graph and resource versioning<br />
<br />
=== Agenda (2014-10-08 2000 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* Critical issues sync<br />
* Request for python-heatclient project to adopt heat-translator<br />
<br />
=== Agenda (2014-10-01 1200 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* HARestarter transition plan<br />
* Critical issues sync<br />
<br />
== Meeting minutes ==<br />
* [http://eavesdrop.openstack.org/meetings/heat/2014/ 2014 Heat meeting archive]<br />
* [http://eavesdrop.openstack.org/meetings/heat/2013/ 2013 Heat meeting archive]<br />
* [http://eavesdrop.openstack.org/meetings/heat/2012/ 2012 Heat meeting archive]<br />
<br />
== Meeting organizers ==<br />
* Publish the agenda 24h in advance<br />
* Mail the agenda to the list and invite participants<br />
* Ask each person responsible for an action from the previous meeting to prepare, for each action item, a line of the form: #info nickname, description of the action, link to the diff / mailing list thread etc. describing the implementation of the action<br />
* Use http://meetbot.debian.net/Manual.html to get an automatic summary<br />
* Mail the automatic summary as a reply to the invitation</div>Asalkeldhttps://wiki.openstack.org/w/index.php?title=Meetings/HeatAgenda&diff=65024Meetings/HeatAgenda2014-10-15T07:12:00Z<p>Asalkeld: /* Agenda (2014-10-15 1200 UTC) */</p>
<hr />
<div><br />
= Weekly Heat (Orchestration) meeting =<br />
The [https://launchpad.net/heat heat] Orchestration project (see also [https://wiki.openstack.org/wiki/Heat wiki]) team holds a meeting in <code><nowiki>#openstack-meeting</nowiki></code> at alternating times:<br />
* Wednesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=20&min=0&sec=0 2000 UTC]<br />
* Wednesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=12&min=0&sec=0 1200 UTC]<br />
<br />
Our first 1200 UTC meeting is Wednesday 28th May, 2014.<br />
<br />
Everyone is welcome.<br />
<br />
The blueprints that are used as a basis for [https://launchpad.net/heat the heat project] can be found at https://blueprints.launchpad.net/heat<br />
<br />
=== Agenda (2014-10-15 1200 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* Critical issues sync<br />
* Reminder to update [https://etherpad.openstack.org/p/kilo-heat-midcycle-meetup midcycle etherpad]<br />
</div>Asalkeldhttps://wiki.openstack.org/w/index.php?title=Meetings/HeatAgenda&diff=65023Meetings/HeatAgenda2014-10-15T07:10:11Z<p>Asalkeld: /* Agenda (2014-10-15 1200 UTC) */</p>
<hr />
<div><br />
= Weekly Heat (Orchestration) meeting =<br />
The [https://launchpad.net/heat Heat] (Orchestration) project team (see also the [https://wiki.openstack.org/wiki/Heat wiki]) holds a meeting in <code><nowiki>#openstack-meeting</nowiki></code> at alternating times:<br />
* Wednesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=20&min=0&sec=0 2000 UTC]<br />
* Wednesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=12&min=0&sec=0 1200 UTC]<br />
<br />
Our first 1200 UTC meeting is Wednesday 28th May, 2014.<br />
<br />
Everyone is welcome.<br />
<br />
The blueprints for [https://launchpad.net/heat the Heat project] can be found at https://blueprints.launchpad.net/heat<br />
<br />
=== Agenda (2014-10-15 1200 UTC) ===<br />
* [[Heat/ConvergenceDesign|Convergence]]: Persisting graph and resource versioning<br />
* Reminder to update [https://etherpad.openstack.org/p/kilo-heat-midcycle-meetup midcycle etherpad]<br />
* Reminder to update [https://etherpad.openstack.org/p/kilo-heat-summit-topics summit session etherpad]<br />
<br />
=== Agenda (2014-10-08 2000 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* Critical issues sync<br />
* Request for python-heatclient project to adopt heat-translator<br />
<br />
=== Agenda (2014-10-01 1200 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* HARestarter transition plan<br />
* Critical issues sync<br />
<br />
=== Agenda (2014-09-24 2000 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* Release cadence (http://inaugust.com/post/108 #10)<br />
* Critical issues sync<br />
* Deprecating CWLiteAlarms<br />
<br />
=== Agenda (2014-09-17 1200 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* Review priorities & release status<br />
* Critical issues sync<br />
<br />
=== Agenda (2014-09-10 2000 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* Critical issues sync<br />
* FFE status<br />
* Integration tests<br />
<br />
=== Agenda (2014-09-03 1200 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* Can we have a grenade testing update (it would be good to know where we stand before releasing)?<br />
* <other stuff><br />
<br />
=== Agenda (2014-08-27 2000 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* [https://launchpad.net/heat/+milestone/juno-3 Juno blueprint status]<br />
* juno-3 milestone release management<br />
* [https://review.openstack.org/#/c/116703/ Mission Statement]<br />
* Critical issues sync<br />
<br />
=== No Meeting 2014-08-20 ===<br />
This meeting is cancelled because many Heat contributors will be at the mid-cycle meetup.<br />
<br />
=== Agenda (2014-08-13 2000 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* Mid-cycle meet-up<br />
* Feature Proposal Freeze<br />
* Update replace with dependencies<br />
* Critical issues sync<br />
<br />
=== Agenda (2014-08-06 1200 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* Keeping up to date on the Heat mid-cycle meet-up<br />
* Critical issues sync<br />
<br />
=== Agenda (2014-07-30 2000 UTC) ===<br />
* Review action items from last meeting<br />
* Adding items to the agenda<br />
* Spec approval criteria (suggestion: a spec *can* be approved after three +2 votes)<br />
* [[Governance/TechnicalCommittee/Heat_Gap_Coverage|Heat Gap Coverage]] plan<br />
* Scaling group member health maintenance<br />
* Using Rally for Heat benchmarking, and the quotas patches<br />
* Critical issues sync<br />
<br />
== Meeting minutes ==<br />
* [http://eavesdrop.openstack.org/meetings/heat/2014/ 2014 Heat meeting archive]<br />
* [http://eavesdrop.openstack.org/meetings/heat/2013/ 2013 Heat meeting archive]<br />
* [http://eavesdrop.openstack.org/meetings/heat/2012/ 2012 Heat meeting archive]<br />
<br />
== Meeting organizers ==<br />
* Publish the agenda 24h in advance<br />
* Mail the agenda to the list and invite participants<br />
* Ask each person responsible for an action item from the previous meeting to prepare a line of the form <code><nowiki>#info nickname <description of the action> <link to the diff / mailing-list thread etc. describing the implementation></nowiki></code><br />
* Use http://meetbot.debian.net/Manual.html to get an automatic summary<br />
* Mail the automatic summary as a reply to the invitation</div>Asalkeldhttps://wiki.openstack.org/w/index.php?title=Design_Summit/Planning&diff=62544Design Summit/Planning2014-09-13T12:32:28Z<p>Asalkeld: /* Topic proposal, discussion and selection */</p>
<hr />
<div>== Topic proposal, discussion and selection ==<br />
<br />
* [https://etherpad.openstack.org/p/kilo-crossproject-summit-topics Cross-project workshops]<br />
* [https://etherpad.openstack.org/p/kilo-nova-summit-topics Nova]<br />
* Swift<br />
* [https://etherpad.openstack.org/p/kilo-neutron-summit-topics Neutron]<br />
* Keystone<br />
* Glance<br />
* Horizon<br />
* [https://etherpad.openstack.org/p/kilo-cinder-summit-topics Cinder]<br />
* [https://etherpad.openstack.org/p/kilo-heat-summit-topics Heat]<br />
* Ceilometer<br />
* Trove<br />
* [https://etherpad.openstack.org/p/kilo-sahara-summit-topics Sahara]<br />
* Release management<br />
* Infrastructure<br />
* [https://etherpad.openstack.org/p/kilo-qa-summit-topics QA]<br />
* [https://etherpad.openstack.org/p/kilo-oslo-summit-topics Oslo]<br />
* Documentation<br />
* TripleO<br />
* Ironic<br />
* [https://etherpad.openstack.org/p/kilo-zaqar-summit-topics Zaqar]<br />
* Barbican<br />
* Designate<br />
* Manila</div>Asalkeld