https://wiki.openstack.org/w/api.php?action=feedcontributions&user=Travis+Tripp&feedformat=atomOpenStack - User contributions [en]2024-03-19T03:52:08ZUser contributionsMediaWiki 1.28.2https://wiki.openstack.org/w/index.php?title=Graffiti&diff=177286Graffiti2021-01-08T20:26:22Z<p>Travis Tripp: /* Current Status */</p>
<hr />
<div><br />
== What's in my cloud? ==<br />
<br />
I've got a lot of resources in my cloud.<br />
<br />
* How do I find what I need?<br />
* How do I describe what I have?<br />
<br />
At its core, Graffiti's intent is to enable better metadata collaboration across services and projects so that OpenStack users can take advantage of Enhanced Platform Awareness.<br />
<br />
== Current Status ==<br />
<br />
The Graffiti project was proposed at the Atlanta (Juno) OpenStack Summit. Since then, its concepts have been adopted and implemented across multiple OpenStack projects.<br />
<br />
* Glance Metadata Definition Catalog<br />
** http://docs.openstack.org/developer/glance/metadefs-concepts.html<br />
** https://github.com/openstack/glance/tree/master/etc/metadefs<br />
** https://youtu.be/zJpHXdBOoeM<br />
* Searchlight<br />
** http://launchpad.net/searchlight<br />
** https://wiki.openstack.org/wiki/Searchlight<br />
* Nova features<br />
** Scheduling filters like Numa topology<br />
* Horizon features<br />
** An admin UI for managing the catalog<br />
*** (Admin -> Metadata Definitions) (Kilo)<br />
** A widget for associating metadata to different resources<br />
*** (Update Metadata action on each row item below)<br />
*** admin -> images (Juno)<br />
*** admin -> flavors (Kilo)<br />
*** admin -> Host Aggregates (Kilo)<br />
*** project -> images (Liberty)<br />
*** project -> instances (Mitaka)<br />
** The ability to add metadata at launch time<br />
*** project -> Launch Instance (ng launch instance enabled) (Mitaka)<br />
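<br />
The Glance metadata definitions catalog listed above stores each namespace as a JSON document (see the etc/metadefs directory linked above). The sketch below mirrors that structure as a Python dict; the namespace name and property are illustrative, not a shipped definition.<br />

```python
# A minimal metadata-definition namespace in the style of Glance's
# etc/metadefs JSON files. All names and values here are illustrative.
namespace = {
    "namespace": "OS::Example::BigData",   # hypothetical namespace id
    "display_name": "Big Data Tools",
    "description": "Properties describing big-data software on a resource.",
    "visibility": "public",
    "protected": False,
    # Resource types these definitions may be associated with.
    "resource_type_associations": [
        {"name": "OS::Glance::Image"},
        {"name": "OS::Cinder::Volume"},
    ],
    # Each property is a named key with a JSON-Schema-like definition.
    "properties": {
        "sw_bigdata_hadoop_version": {
            "title": "Hadoop Version",
            "type": "string",
        },
    },
}

# Which resource types can this namespace's properties be applied to?
applicable = sorted(a["name"] for a in namespace["resource_type_associations"])
```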
<br />
The following sections provide background on where these concepts originated.<br />
<br />
== Overview ==<br />
<br />
A challenge we've experienced with using OpenStack is discovering, sharing, and correlating metadata across services and different types of resources. We believe this affects both end users and administrators. <br />
<br />
For end users, we feel that basic tasks like launching an instance are too technical and require too much pre-existing knowledge of OpenStack concepts. For example, you should be able to just specify categories like "Big Data" or an "OS Family" and then let the system find the boot source for you, whether that is an image, snapshot, or volume. The system should also allow finer-grained filtering, such as filtering on specific versions of software that you want.<br />
<br />
For administrators, we’d like there to be an easier way to meaningfully collaborate on properties across host aggregates, flavors, images, volumes, or other cloud resources. <br />
<br />
Various OpenStack services provide techniques to abstract low level resource selection to one level higher, such as flavors, volume types, or artifact types. These resource abstractions often allow "metadata" in terms of key-value pair properties to further specialize and describe instances of each resource type. However, collaborating on those properties can be a disconnected and difficult process. This often involves searching wikis and opening the source code. In addition, the metadata properties often need to be correlated across several different services. It becomes more difficult as a cloud's scale grows and the number of resources being managed increases.<br />
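<br />
To make the correlation problem concrete: the same logical setting is often spelled differently per service (for example, the Nova flavor extra spec "hw:cpu_policy" and the Glance image property "hw_cpu_policy"). A small translation table, sketched below in Python, is the kind of mapping an operator otherwise keeps in their head or on a wiki; the table and helper are illustrative, not an actual OpenStack API.<br />

```python
# Map a service-neutral concept name to the per-service property key.
# "hw:cpu_policy" / "hw_cpu_policy" are real examples of the naming skew.
CORRELATED_KEYS = {
    "cpu_policy": {"flavor": "hw:cpu_policy", "image": "hw_cpu_policy"},
}

def service_key(concept, service):
    """Return the property key a given service uses for this concept."""
    return CORRELATED_KEYS[concept][service]
```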
<br />
We, HP and Intel, believe that both of the above problems come back to needing a better way for users to collaborate on metadata across services and resource types. We started a project called Graffiti to explore ideas and concepts for how to make this easier and more approachable for end users. Please join us to help move forward together as a community!<br />
<br />
We believe that we can make some immediate improvements in Horizon, but that they can't be achieved through Horizon alone and that the benefits should extend to the API and CLI interactions as well. Better cross service collaboration and consistency on metadata should provide benefits that can be leveraged by other projects such as scheduling, reservation, orchestration, and policy enforcement.<br />
<br />
=== Terminology Note ===<br />
<br />
We think the term "metadata" is somewhat unapproachable, so we have been exploring the concept of a "capability". A capability can simply be thought of as a named "tag" which may or may not have properties. The idea is that a user can simply "tag" a capability onto various cloud resources such as images, volumes, host aggregates, and flavors. The exact mechanism for how the data is stored is handled for the end user.<br />
<br />
=== Screencasts ===<br />
<br />
To help explain the ideas of the project, we have quick screencasts demonstrating the concepts running under POC code. Please take a look!<br />
<br />
* [https://www.youtube.com/watch?v=f0SZtPgcxk4 Concept Overview]<br />
* [https://youtu.be/zJpHXdBOoeM Availability as of the mitaka release in Horizon and Glance]<br />
<br />
=== Usage Concepts ===<br />
<br />
# Load your metadata definitions (sometimes called properties, tags, or capabilities)<br />
## Into the central metadata catalog <br />
# Update the resources in the cloud with your tags and capabilities<br />
# Let users find the resources with your desired tags and capabilities<br />
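<br />
The three steps above can be sketched with a toy in-memory catalog (all names are illustrative):<br />

```python
# Step 1: load definitions into a catalog; step 2: tag resources;
# step 3: find resources by tag. A deliberately tiny, in-memory model.
catalog = set()     # known metadata definitions (tags/capabilities)
resources = {}      # resource name -> set of applied tags

def load_definition(tag):
    catalog.add(tag)

def tag_resource(name, tag):
    if tag not in catalog:
        raise ValueError("unknown tag: %s" % tag)
    resources.setdefault(name, set()).add(tag)

def find_resources(tag):
    return sorted(name for name, tags in resources.items() if tag in tags)

load_definition("BigData")                      # step 1
tag_resource("ubuntu-hadoop-image", "BigData")  # step 2
matches = find_resources("BigData")             # step 3
```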
<br />
== Design Concepts ==<br />
<br />
Additional architecture concepts on the [[Graffiti/Architecture|Architecture]] page.<br />
<br />
=== Juno Summit Design Session ===<br />
<br />
POC demo review:<br />
* https://www.youtube.com/watch?v=Dhrthnq1bnw<br />
<br />
* Session: http://sched.co/1m7wghx<br />
* Etherpad: https://etherpad.openstack.org/p/juno-summit-graffiti<br />
<br />
=== IRC ===<br />
<br />
The various features are maintained by teams in the following IRC channels on [http://freenode.net/ Freenode].<br />
<br />
* #openstack-searchlight<br />
* #openstack-horizon<br />
* #openstack-glance<br />
<br />
=== Development ===<br />
* Open source under Apache 2.0<br />
* [https://github.com/stackforge/graffiti Graffiti POC API Service Source Repository] - No Longer Maintained (See Glance, Horizon, Searchlight)</div>
<hr />
<div><br />
== Graffiti Architecture Concepts ==<br />
<br />
At its core, Graffiti's intent is to enable better metadata collaboration across services and projects so that OpenStack users can take advantage of Enhanced Platform Awareness.<br />
<br />
== Status ==<br />
<br />
The Graffiti project was proposed at the Atlanta (Juno) OpenStack Summit. It is a virtual project whose concepts have all been adopted and implemented across multiple OpenStack projects. The primary work involved Glance, Searchlight, and Horizon, but also included improvements to Nova scheduling and filters.<br />
<br />
The following provides legacy overview information to help explain how the various components come together. For more info, see https://wiki.openstack.org/wiki/Graffiti#Current_Status<br />
<br />
==== Workflow and Components ====<br />
<br />
# Load your custom metadata definitions (called property types or capability types)<br />
## Into the Graffiti central dictionary <br />
## Or configure Graffiti plugins to include / proxy existing definitions provided by the various services<br />
# "Tag" the resources in the cloud with your properties and capabilities<br />
# Let users find the resources with your desired properties and capabilities<br />
<br />
* Repeat across multiple cloud installations for capability portability between clouds.<br />
<br />
[[File:Graffiti-Architecture-ConceptOverlay-Simple.png]]<br />
<br />
=== Base Concepts ===<br />
<br />
* Various OpenStack services provide techniques to abstract low level resource selection to one level higher, such as flavors, volume types, or artifact types. These resource abstractions often allow "metadata" in terms of key-value pair properties to further specialize and describe instances of each resource type. However, collaborating on those properties is largely a disconnected and difficult process. This often involves searching wikis and opening the source code. It becomes more difficult as a cloud's scale grows. In addition, many times the properties can apply to resources from several different services. Graffiti makes this easier by creating the following concepts: <br />
** ''[[Graffiti/Dictionary#Capability_Types|Capabilities and Requirements]]'': The Graffiti concepts embrace the idea that cloud resources may be described using the notion of capabilities, a concept influenced by parts of OpenStack today as well as by industry specifications like OASIS TOSCA. (Please note, Graffiti is NOT an orchestration engine; it only assists in describing and locating existing resources in the cloud.)<br />
** ''[[Graffiti/Dictionary|Dictionary]]'': A common API for services, admins, and users to discover and share their metadata vocabulary. This is the basis for creating an agreement on how to describe the various capabilities the cloud provides. It allows for a consistent UI and CLI experience for describing and finding resources. <br />
** ''[[Graffiti/Directory|Resource Directory]]'': A common API to "tag" and search across existing and new services for cloud content based on the dictionary (metadata definitions). <br />
** ''Resource Capability Registry'': A persistent shared repository for services to publish information about cloud resources. This can optionally be used by services instead of or in addition to having their own local native storage to describe resources.<br />
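<br />
The split between the Dictionary and the Resource Directory can be sketched as two small classes (a hypothetical shape, not the POC's actual API): the dictionary defines capability types and their allowed properties, while the directory records taggings and answers searches.<br />

```python
# Hypothetical sketch of the Dictionary / Resource Directory concepts.
class Dictionary:
    """Shared vocabulary: capability types and their allowed properties."""
    def __init__(self):
        self.capability_types = {}

    def define(self, name, properties=()):
        self.capability_types[name] = set(properties)

class ResourceDirectory:
    """Records which resources carry which capabilities; answers searches."""
    def __init__(self, dictionary):
        self.dictionary = dictionary
        self.taggings = []   # (resource_id, capability, {prop: value})

    def tag(self, resource_id, capability, **props):
        # Only properties defined in the dictionary may be used.
        allowed = self.dictionary.capability_types[capability]
        unknown = set(props) - allowed
        if unknown:
            raise ValueError("undefined properties: %s" % sorted(unknown))
        self.taggings.append((resource_id, capability, props))

    def search(self, capability):
        return sorted(r for r, cap, _ in self.taggings if cap == capability)
```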
<br />
== Use Case Example: Compute Capabilities ==<br />
In summary, the Graffiti concepts provide cross-service and cross-environment:<br />
* metadata definition aggregation and administration<br />
* resource metadata "tagging" aggregation<br />
* resource metadata search aggregation<br />
<br />
<br />
[[File:Graffiti-ComputeCapability-Flow-Overview.png]]<br />
<br />
== Additional Details ==<br />
<br />
The following provides an overview of the metadata aggregation, resource search optimization, and local resource registry concepts.<br />
<br />
<br />
[[File:Graffiti-Architecture-ConceptOverlay.png]]<br />
<br />
=== Graffiti API Benefits ===<br />
<br />
When we first looked at a UI-only solution, we found that it can be done to a certain extent [[Graffiti/Architecture#Limits_of_a_Horizon_Only_Solution|with limitations]]. However, with a new service integrated into or built for the ecosystem, the following additional benefits become available:<br />
* Command line and REST API for cross service searching<br />
* Ability to import / export definitions across deployments<br />
* Common persistence DB for definitions in multi-node / HA deployments<br />
* Private tag / metadata libraries. Users / projects will be able to have their own vocabulary for "tagging" resources<br />
* Authoring - We will provide an authoring and administration UI for creating and managing namespaces, capability types, etc<br />
* Resource search performance optimizations. We would like to introduce a high-performance indexing mechanism that crosses service boundaries.<br />
<br />
==== Resource Search Optimization ====<br />
<br />
Ideas:<br />
* Lazy loading. A simple pre-fetch mechanism: either on a call to initiate a session or on the first request for a resource type, data is pulled into memory and held for a limited time. Subsequent searches are all done in memory. RBAC is handled via token pass-through.<br />
* Eager loading. The base idea is that a cache provider plugin can be added under the API. Resources that are indexable (those whose service owner supports notifications) would then be indexed via a combination of startup seeding and service resource event notifications. For example, Glance supports sending notifications on certain image changes. The index itself could be based on Elasticsearch, and the plugin would translate queries in and out of Elasticsearch. (Note: this portion of the concept has largely been implemented by Project Searchlight [https://wiki.openstack.org/wiki/Searchlight].)<br />
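<br />
The lazy-loading idea above can be sketched as a small TTL cache (the fetch callable, TTL, and data shape are illustrative; real RBAC would pass the caller's token through to the backing service):<br />

```python
import time

# Sketch of lazy loading: on the first request for a resource type, pull the
# data into memory and hold it for a limited time; later searches are served
# from memory. The fetch callable stands in for a real service API call.
class LazyCache:
    def __init__(self, fetch, ttl_seconds=60.0):
        self._fetch = fetch    # callable(resource_type, token) -> list of dicts
        self._ttl = ttl_seconds
        self._store = {}       # resource_type -> (expires_at, data)

    def search(self, resource_type, token, predicate):
        entry = self._store.get(resource_type)
        if entry is None or entry[0] < time.monotonic():
            # Token pass-through: the fetch runs with the caller's identity.
            data = self._fetch(resource_type, token)
            self._store[resource_type] = (time.monotonic() + self._ttl, data)
        return [r for r in self._store[resource_type][1] if predicate(r)]
```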
<br />
== Originally Proposed Horizon Concepts ==<br />
<br />
These have been implemented in Horizon:<br />
<br />
* Horizon features<br />
** An admin UI for managing the catalog<br />
*** (Admin -> Metadata Definitions) (Kilo)<br />
** A widget for associating metadata to different resources<br />
*** (Update Metadata action on each row item below)<br />
*** admin -> images (Juno)<br />
*** admin -> flavors (Kilo)<br />
*** admin -> Host Aggregates (Kilo)<br />
*** project -> images (Liberty)<br />
*** project -> instances (Mitaka)<br />
** The ability to add metadata at launch time<br />
*** project -> Launch Instance (ng launch instance enabled) (Mitaka)<br />
<br />
Legacy Info:<br />
<br />
We believe that the [[Graffiti]] concepts can be fulfilled with reusable widgets that we can plug into Horizon, as well as changes to screens like the launch instance wizard. The widgets will provide the ability to "tag" capabilities and (TBD) requirements on various resources. They will also be able to generate filter queries based on resource capabilities and properties.<br />
<br />
==== Terminology Note ====<br />
<br />
We think the term "metadata" is somewhat unapproachable, so we have been exploring the concept of a "capability". A capability can simply be thought of as a named "tag" which may or may not have properties. The idea is that a user can simply "tag" a capability onto various cloud resources such as images, volumes, host aggregates, and flavors. The exact mechanism for how the data is stored is handled for the end user. Some resource types may not support capabilities / tags that have properties.<br />
<br />
==== Concept Screencasts ====<br />
<br />
To explore and explain the ideas, HP and Intel have created screencasts showing the concepts running under POC code. The styling reflects the point in time when the demo was recorded and has since changed.<br />
<br />
* [https://www.youtube.com/watch?v=f0SZtPgcxk4 Screencast - Concept Overview]<br />
<br />
==== Concept Flow Mockup ====<br />
<br />
The basic proposed flow is that we will be able to add a widget on any resource management screen where we want to be able to "tag" capabilities. For example, the images, volumes, flavors, and host aggregate screens are all good candidates. The goal is that the only customization required is for the code using the widget to send in information about the resource / resource type being tagged. The resource type is sent to the API, which then returns the capabilities applicable to that type of resource.<br />
<br />
==== Launch Instance Example ==== <br />
<br />
Note: Tagging other resource types and searching for them could work similarly.<br />
<br />
[[File:Graffit-Tag-Then-Use-Concept.png|center|Widget Screenshots from screencasts]]<br />
<br />
==== Style Mockups ====<br />
<br />
We have been playing with various style mockups, but aren't sure what makes sense or would be acceptable. The traditional look and feel in Horizon can be achieved, but we also aren't sure that Horizon today has a good example for handling tree browsing. The following are some of the mockups we've created.<br />
<br />
[[File:Graffiti-capabilities-widget-mockups.png|thumbnail|center|Graffiti Concept Mockups]]<br />
<br />
<br />
<br />
=== Proposed Horizon Component Architecture ===<br />
<br />
We would like there to be a common way in Horizon to support "tagging" simple named tags and key-value pairs that also supports the overall [[Graffiti]] concepts. In the proposed architecture, Horizon gains the value of the Graffiti concepts through a thin API plugin layer directly in Horizon, without the full "Dictionary" and "Resource Directory" APIs in the deployed environment. This provides benefits to Horizon now, without requiring a new Graffiti service to either be incubated or be adopted into other projects (a topic on which we are actively seeking input and advice). The widgets will be built to work with a common, simple "resource syntax" that the external service API would provide.<br />
<br />
The entire concept can be run in a lightweight way through a thin filesystem provider on the Horizon server that reads dictionary definition files directly from the filesystem, or from services that already provide schemas or tags. This would suffice for single-node deployments, or for deployments managed through a configuration management provider that ensures consistency of the definitions across Horizon nodes.<br />
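<br />
A filesystem provider like the one described could be as simple as scanning a directory of JSON definition files (the directory layout and the "namespace" key are illustrative assumptions, not the POC's actual format):<br />

```python
import json
from pathlib import Path

# Sketch of a thin filesystem provider: read dictionary definition files
# straight from a directory on the Horizon server, keyed by namespace.
def load_definitions(directory):
    definitions = {}
    for path in sorted(Path(directory).glob("*.json")):
        with path.open() as handle:
            data = json.load(handle)
        definitions[data["namespace"]] = data
    return definitions
```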
<br />
If a full "Dictionary" / "Resource Directory" service API were available, the widgets wouldn't have to change even as new resource types and metadata definitions are added to the system. They would still go to the Horizon Graffiti component, which would add the plugin to talk to the appropriate central "Dictionary" / "Resource Directory" service endpoint(s), providing the [[Graffiti/Architecture#Graffiti_API_Benefits|full benefits]].<br />
<br />
===== Limits of a Horizon Only Solution =====<br />
<br />
The widgets and concepts can be partially built in Horizon as stated above and diagrammed below without changes to existing services. However, there are a number of limitations that require some external service work as well.<br />
<br />
# Horizon is a stateless server by design at this point. The only place any persistent data can exist is if you choose to store session information on the server in a database. The default setup for Horizon now uses signed cookies to maintain session data and avoids a DB requirement.<br />
# There is no privileged account running on the Horizon server and thus no way to build a persistent datastore that only the admin can access. A persistent privileged session such as this creates many security issues.<br />
# Horizon can be set up in an HA manner, which would require either a duplicated DB on multiple Horizon servers or another server dedicated to the DB backend for Horizon.<br />
# The original scope discussed is only part of the picture; once the scope grows beyond the launch use case, it grows beyond usefulness for just Horizon. Isolating the work in Horizon is limiting.<br />
<br />
[[File:Graffiti-Widgets.png]]</div>
<hr />
<div><br />
= Mission Statement =<br />
<br />
To provide advanced and scalable indexing and search across multi-tenant cloud resources.<br />
<br />
== Project Links ==<br />
<br />
{| border="1" cellpadding="2"<br />
| Developer Documentation<br />
| http://docs.openstack.org/developer/searchlight/<br />
|-<br />
| API Reference<br />
| http://developer.openstack.org/api-ref/search/<br />
|-<br />
| Source code - API and Listener Services<br />
| https://github.com/openstack/searchlight<br />
|-<br />
| Source code - Horizon UI Plugin<br />
| https://github.com/openstack/searchlight-ui<br />
|-<br />
| Source code - Python Client<br />
| https://github.com/openstack/python-searchlightclient<br />
|-<br />
| Gerrit Reviews<br />
| https://review.openstack.org/#/q/project:%255E.*searchlight.*+status:open,n,z<br />
|-<br />
| Bug tracker<br />
| https://bugs.launchpad.net/searchlight<br />
|-<br />
| Feature tracker<br />
| https://blueprints.launchpad.net/searchlight<br />
|-<br />
| IRC<br />
| #openstack-searchlight<br />
|-<br />
| Meeting Times<br />
| http://eavesdrop.openstack.org/#Searchlight_Team_Meeting<br />
|-<br />
| Meeting Agenda<br />
| https://etherpad.openstack.org/p/search-team-meeting-agenda<br />
|-<br />
| Meeting Logs<br />
| http://eavesdrop.openstack.org/meetings/openstack_search/<br />
|}<br />
<br />
= Overview =<br />
<br />
Searchlight dramatically improves the user-focused search capabilities and performance on behalf of various OpenStack cloud services.<br />
<br />
It accomplishes this by offloading user search queries from existing API servers and indexing their data into ElasticSearch. ElasticSearch is a search server based on Lucene. It provides a distributed, scalable, near real-time, faceted, multitenant-capable full-text search engine with a RESTful web interface and schema-free JSON documents. ElasticSearch is developed and released as open source under the terms of the Apache License. Notable users of ElasticSearch include Wikimedia, StumbleUpon, Mozilla, Quora, Foursquare, Etsy, SoundCloud, GitHub, FDA, CERN, and Stack Exchange. (Source: http://en.wikipedia.org/wiki/Elasticsearch). The elastic-recheck project also uses Elasticsearch (and kibana) to classify and track OpenStack gate failures. (Source: http://status.openstack.org/elastic-recheck)<br />
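<br />
The kind of query Searchlight offloads to Elasticsearch can be sketched with the standard query DSL: a full-text match combined with a tenant filter for multitenant isolation. Field names below are illustrative, not Searchlight's actual mappings.<br />

```python
# Build an Elasticsearch bool query: full-text match on a field, plus a
# term filter restricting results to the calling tenant's documents.
def build_query(text, tenant_id):
    return {
        "query": {
            "bool": {
                "must": [{"match": {"name": text}}],
                # Tenant isolation: only documents owned by the caller.
                "filter": [{"term": {"tenant_id": tenant_id}}],
            }
        }
    }
```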
<br />
=== Screencasts ===<br />
<br />
* (Mitaka) Integration with Horizon demo: https://www.youtube.com/watch?v=2feC1njvZe0<br />
* (Mitaka) End of Cycle Presentation on Horizon, CLI, and Searchlight: https://www.youtube.com/watch?v=ExzULavwvNQ<br />
* (Mitaka) Summit Presentation: https://www.youtube.com/watch?v=0jYXsK4j26s<br />
* (Liberty) PTL Overview: https://www.youtube.com/watch?v=yU5CrAOAlkA<br />
* (Kilo summit) Concept Demo: https://youtu.be/eGnGr48E5_4<br />
<br />
=== Design ===<br />
<br />
* http://docs.openstack.org/developer/searchlight/<br />
* http://docs.openstack.org/developer/searchlight/architecture.html<br />
<br />
==== Concept Overview ====<br />
<br />
[[File:Searchlight-Concept-1.png]]<br />
<br />
==== Concept Internals Overview ====<br />
<br />
* [https://wiki.openstack.org/w/images/5/55/Searchlight-Concept-2.png Concept Internals]<br />
* [https://wiki.openstack.org/w/images/e/ef/Searchlight-use-when-there.png Usage switching]<br />
* [https://wiki.openstack.org/w/images/1/10/Searchlight-Concept-Horizon-Layers.png Horizon Concept Layers]<br />
* [https://wiki.openstack.org/w/images/6/6b/Searchlight-WebSocket-Concept.png Horizon Web Socket Concept]<br />
<br />
== Get Involved ==<br />
<br />
Please join us to help move forward together as a community!<br />
<br />
Searchlight is an open project and we encourage contribution from everybody.<br />
<br />
We support both developers and non-developers who want to provide input, requests for features, and bug fixes. We want to be able to move quickly without getting too bogged down in process, but still provide a rich mechanism for feature reviews as needed.<br />
<br />
* http://docs.openstack.org/developer/searchlight/feature-requests-bugs.html<br />
<br />
== History ==<br />
<br />
Searchlight was originally developed and released in the Kilo release of Glance as the Catalog Index Service [1]. At the Liberty Summit we decided to broaden the scope to provide advanced and scalable search across multi-tenant cloud resources.<br />
<br />
[1] http://specs.openstack.org/openstack/glance-specs/specs/kilo/catalog-index-service.html</div>
<hr />
<div>[[Category:Summit]]<br />
[[Category:Ocata]]<br />
[[Category:Etherpad]]<br />
<br />
The grand list of all the Ocata Design Summit sessions. Please include Date, Time, and links to etherpads when adding new content.<br />
<br />
<div style="column-count:3;-moz-column-count:3;-webkit-column-count:3"><br />
__TOC__<br />
</div><br />
<br />
== Event intro/closure ==<br />
* Tue Oct 25 11:25am - Design Summit 101 - https://etherpad.openstack.org/p/ocata-design-summit-101<br />
* Fri Oct 28 12:30pm - Barcelona feedback session - https://etherpad.openstack.org/p/BCN-summit-feedback<br />
<br />
<br />
==Architecture Working Group==<br />
<br />
'''Wednesday, October 26'''<br />
* 11:25am-12:05pm - Cross Project workshops: Architecture Working Group Fishbowl - https://etherpad.openstack.org/p/ocata-summit-arch-wg<br />
<br />
==Barbican==<br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Barbican<br />
<br />
'''Thursday, October 27'''<br />
* 11:00am-11:40am - (128) Barbican: User and Operator Feedback Fishbowl - https://etherpad.openstack.org/p/barbican-ocata-summit-roadmap<br />
* 11:50am-12:30pm - (Montjuic) Barbican: Work Session (Roadmap) - https://etherpad.openstack.org/p/barbican-ocata-design-summit<br />
* 11:50pm-02:30pm - (130) Barbican: Work Session (Cross Project)- https://etherpad.openstack.org/p/barbican-ocata-design-summit<br />
<br />
'''Friday, October 28'''<br />
* 09:00am-09:40am - (129) Barbican: Work Session (Security) - https://etherpad.openstack.org/p/barbican-ocata-design-summit<br />
* 09:50am-10:30am - (129) Barbican: Work Session (TBD) - https://etherpad.openstack.org/p/barbican-ocata-design-summit<br />
* 11:00am-11:40am - (129) Barbican: Work Session (Resources) - https://etherpad.openstack.org/p/barbican-ocata-design-summit<br />
* 11:50am-12:30pm - (129) Barbican: Work Session (Planning) - https://etherpad.openstack.org/p/barbican-ocata-design-summit<br />
<br />
==Cinder==<br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Cinder<br />
<br />
'''Wednesday October 26'''<br />
* 3:05pm-3:45pm - Cinder Test Working Group progress and status - https://etherpad.openstack.org/p/Cinder-testing<br />
* 3:55pm-4:35pm - Driver bug fixes for unsupported OpenStack releases - https://etherpad.openstack.org/p/ocata-cinder-summit-stabledriverfixes<br />
* 5:05pm-5:45pm - Stand alone Cinder service - https://etherpad.openstack.org/p/ocata-cinder-summit-standalonecinder<br />
* 5:55pm-6:35pm - Pike (and beyond) planning - https://etherpad.openstack.org/p/ocata-cinder-summit-pikeplanning<br />
'''Thursday October 27'''<br />
* 9:00am-9:40am - Replication - https://etherpad.openstack.org/p/ocata-cinder-summit-replication<br />
* 9:50am-10:30am - Cinder-Nova API changes - https://etherpad.openstack.org/p/ocata-cinder-summit-attachdetach<br />
'''Friday October 28'''<br />
* 9:00am-9:40am - Nova/Cinder cross-project session - https://etherpad.openstack.org/p/ocata-nova-summit-cinder-session<br />
* 11:00am-11:40am - NFS snapshots - https://etherpad.openstack.org/p/ocata-cinder-summit-nfssnapshots<br />
* 11:50am-12:30pm - Cinder backup improvements - https://etherpad.openstack.org/p/ocata-cinder-summit-backupimprovements<br />
* '''Lunch'''<br />
* 2:00pm-6:00pm - Contributors meetup - https://etherpad.openstack.org/p/ocata-cinder-summit-meetup<br />
<br />
==Cross Project Sessions==<br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Cross+Project<br />
<br />
'''Tuesday October 25'''<br />
<br />
* 3:55 PM - 4:35 PM -- Experiences with Project Decomposition, Scaling Review Teams and Subsystem Maintainers (Part 1) -- https://etherpad.openstack.org/p/ocata-summit-xp-scaling-review-teams<br />
* 5:05 PM - 5:45 PM -- Discuss Community-Wide Release Goals -- https://etherpad.openstack.org/p/community-goals<br />
* 5:55 PM - 6:35 PM -- Python 3 Integration Testing -- https://etherpad.openstack.org/p/ocata-python-3<br />
<br />
'''Wednesday October 26'''<br />
<br />
* 11:25 AM - 12:05 PM -- Ocata goal: Remove Incubated Oslo Code -- https://etherpad.openstack.org/p/ocata-goal-oslo<br />
* 2:15 PM - 2:55 PM -- Experiences with project decomposition, scaling review teams and subsystem maintainers (part 2) -- https://etherpad.openstack.org/p/ocata-summit-xp-scaling-review-teams<br />
<br />
==Documentation==<br />
<br />
See these and more documentation sessions in schedule: https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Documentation<br />
<br />
'''Wednesday, October 26'''<br />
* 5:05pm-5:45pm - User Guides Working Group - https://etherpad.openstack.org/p/BCN-Docs-UserGuidesWG<br />
'''Thursday October 27'''<br />
* 2:40pm-3:20pm - Newton Retrospective - https://etherpad.openstack.org/p/BCN-Docs-NewtonRetro <br />
* 3:30pm-4:10pm - Social Things - https://etherpad.openstack.org/p/BCN-Docs-Social <br />
* 4:40pm-5:20pm - Training Labs - https://etherpad.openstack.org/p/BCN-Docs-Training <br />
* 5:30pm-6:10pm - Toolchain - https://etherpad.openstack.org/p/BCN-Docs-Toolchain <br />
'''Friday October 28'''<br />
* 11:00am-11:40am - API Working Group - https://etherpad.openstack.org/p/BCN-Docs-APIWG <br />
* 11:50am-12:30pm - Ocata Planning Working Group - https://etherpad.openstack.org/p/BCN-Docs-OcataPlanningWG <br />
* 2:00pm-6:00pm - Contributors Meetup - no etherpad<br />
<br />
== Gluon ==<br />
<br />
View online: https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Gluon%3A<br />
<br />
Fri 28, 9:50am-10:30am: Gluon Work Session https://etherpad.openstack.org/p/ocata-gluon-work-plan<br />
<br />
==Heat==<br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Heat<br />
<br />
'''Thursday October 27'''<br />
<br />
* 11:00am-11:40am - Convergence Phase 1 - What worked, What didn't - https://etherpad.openstack.org/p/heat-ocata-convergence-phase-1<br />
* 11:50am-12:30pm - Performance Scalability Improvements - I (Issues with very large stacks) - https://etherpad.openstack.org/p/heat-ocata-performance-scalability-1<br />
* 2:40pm-3:20pm - Performance Scalability Improvements - II - https://etherpad.openstack.org/p/heat-ocata-performance-scalability-2<br />
* 3:30pm-4:10pm - Convergence Phase 2 - https://etherpad.openstack.org/p/heat-ocata-convergence-phase-2<br />
* 4:40pm-5:20pm - Validation Improvements - https://etherpad.openstack.org/p/heat-ocata-validation-improvements<br />
<br />
'''Friday October 28'''<br />
<br />
* 9:00am-9:40am - RPC versioning and hitless upgrades - https://etherpad.openstack.org/p/heat-ocata-hitless-upgrades<br />
* 9:50am-10:30am - API Microversions - https://etherpad.openstack.org/p/heat-ocata-api-microversions<br />
* 11:00am-11:40am - Heat Integration tests, Tempest and test candidates for DefCore Interop Testing - https://etherpad.openstack.org/p/heat-ocata-test-coverage<br />
* 11:50am-12:30pm - Improve maturity of heat - https://etherpad.openstack.org/p/heat-ocata-improve-maturity<br />
* 2:00pm-6:00pm - Contributors meetup - https://etherpad.openstack.org/p/ocata-heat-contributor-meetup<br />
<br />
<br />
==Horizon==<br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Horizon%3A<br />
<br />
'''Wednesday October 26'''<br />
<br />
* 16:55-17:35 - Cross-project meeting with Horizon and Keystone - https://etherpad.openstack.org/p/ocata-keystone-horizon<br />
<br />
'''Thursday October 27'''<br />
<br />
* 09:00-09:40 - Operator/ Plugin feedback - https://etherpad.openstack.org/p/horizon-ocata-feedback<br />
* 09:50-10:30 - Newton retrospective, Ocata timeline, Dependencies, Testing!! and Selenium :-( - https://etherpad.openstack.org/p/horizon-ocata-planning<br />
* 16:40-17:20 - Cross-project topics: Glance, Identity, K2K Federation, Quotas - https://etherpad.openstack.org/p/horizon-ocata-cross-project<br />
* 17:30-18:10 - AngularJS state of play (where we're going, status of panels, what CORS means, do we want a thin service proxy, deprecations, etc.) - https://etherpad.openstack.org/p/horizon-ocata-angularjs<br />
<br />
'''Friday October 28'''<br />
<br />
* 11:50-12:30 - Priority setting (and TODO review if we have time) - https://etherpad.openstack.org/p/horizon-ocata-priorities<br />
* 14:00-18:00 - General project discussion (Newton retrospective, how to improve our organisation and use of tooling)<br />
<br />
== I18n ==<br />
<br />
View online: https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=I18n%3A<br />
<br />
'''Friday October 28'''<br />
<br />
* 2:00pm-6:00pm - Contributors meetup - https://etherpad.openstack.org/p/barcelona-i18n-meetup<br />
<br />
==Infrastructure==<br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Infrastructure%3A<br />
<br />
'''Wednesday October 26'''<br />
<br />
* 3:05pm-3:45pm: ''Work Session: Firehose'' in AC Hotel - P3 - Montjuic<br />
** https://etherpad.openstack.org/p/ocata-infra-firehose<br />
<br />
'''Thursday October 27'''<br />
<br />
* 2:40pm-3:20pm: ''Fishbowl: Status update and plans for task tracking'' in AC Hotel - P1 - Salon Barcelona<br />
** https://etherpad.openstack.org/p/ocata-infra-community-task-tracking<br />
<br />
'''Friday October 28'''<br />
<br />
All the sessions on Friday are taking place at CCIB - Centre de Convencions Internacional de Barcelona - P1<br />
<br />
* 9:00am-9:40am: ''Work Session: Next steps for infra-cloud'' in Room 115<br />
** https://etherpad.openstack.org/p/ocata-infra-infra-cloud<br />
* 9:50am-10:30am: ''Work Session: Interactive infra-cloud debugging'' in Room 115<br />
** https://etherpad.openstack.org/p/ocata-infra-infra-cloud-debugging<br />
* 11:00am-11:40am: ''Work Session: Test environment expectations'' in Room 115<br />
** https://etherpad.openstack.org/p/ocata-infra-test-env-expectations<br />
* 11:50am-12:30pm: ''Work Session: Xenial jobs transition for stable/newton'' in Room 115<br />
** https://etherpad.openstack.org/p/ocata-infra-xenial-stable-newton<br />
* 2:00pm-6:00pm: ''Contributors Meetup'' in Room 121<br />
** https://etherpad.openstack.org/p/ocata-infra-contributors-meetup<br />
<br />
==Ironic==<br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Ironic:<br />
<br />
'''Wednesday October 26'''<br />
* 5:05pm-5:45pm - API Evolution - https://etherpad.openstack.org/p/ironic-ocata-summit-api-evolution<br />
* 5:55pm-6:35pm - Deploy-time RAID and Advanced Partitioning (w/ Nova) - https://etherpad.openstack.org/p/ironic-ocata-summit-deploy-time-raid<br />
'''Thursday October 27'''<br />
* 9:00am-9:40am - Task Framework - https://etherpad.openstack.org/p/ironic-ocata-summit-task-framework<br />
* 9:50am-10:30am - QA/CI - https://etherpad.openstack.org/p/ironic-ocata-summit-qa<br />
* 1:50pm-2:30pm - Synchronizing Events with Neutron - https://etherpad.openstack.org/p/ironic-ocata-summit-neutron-events<br />
* 2:40pm-3:20pm - Ocata Priorities - https://etherpad.openstack.org/p/ironic-ocata-summit-priorities<br />
'''Friday October 28'''<br />
* 11:00am-11:40am - VNC Console - https://etherpad.openstack.org/p/ironic-ocata-summit-vnc-console<br />
* 11:50am-12:30pm - Unblocking Priority Features - https://etherpad.openstack.org/p/ironic-ocata-summit-unblock-priorities<br />
* 2:00pm-6:00pm - Contributors Meetup - https://etherpad.openstack.org/p/ironic-ocata-summit-contributor-meetup<br />
<br />
== Keystone ==<br />
<br />
View online: https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Keystone%3A<br />
<br />
Wed 26, 4:05pm-4:45pm<br />
Keystone: Newton retrospective (Fishbowl)<br />
https://etherpad.openstack.org/p/keystone-newton-retrospective<br />
<br />
Wed 26, 4:55pm-5:35pm<br />
Keystone: keystone/horizon integration<br />
https://etherpad.openstack.org/p/ocata-keystone-horizon<br />
<br />
Thu 27, 12:00pm-12:40pm<br />
Keystone: Unconference (Fishbowl)<br />
https://etherpad.openstack.org/p/ocata-keystone-unconference<br />
<br />
Thu 27, 12:50pm-1:30pm<br />
Keystone: Ocata priorities (Fishbowl)<br />
https://etherpad.openstack.org/p/ocata-keystone-priorities<br />
<br />
Thu 27, 2:50pm-3:30pm<br />
Keystone: Work session (Federation)<br />
https://etherpad.openstack.org/p/ocata-keystone-federation<br />
<br />
Thu 27, 3:40pm-4:20pm<br />
Keystone: Work session (Testing)<br />
https://etherpad.openstack.org/p/ocata-keystone-testing<br />
<br />
Thu 27, 4:30pm-5:10pm<br />
Keystone: Work session (Documentation)<br />
https://etherpad.openstack.org/p/ocata-keystone-documentation<br />
<br />
Fri 28, 10:00am-10:40am<br />
Keystone: Work session (Authorization)<br />
https://etherpad.openstack.org/p/ocata-keystone-authorization<br />
<br />
Fri 28, 10:50am-11:30am<br />
Keystone: Work session (Authentication)<br />
https://etherpad.openstack.org/p/ocata-keystone-authentication<br />
<br />
Fri 28, 12:00pm-12:40pm<br />
Keystone: Work session (Scaling and Performance)<br />
https://etherpad.openstack.org/p/ocata-keystone-scaling<br />
<br />
Fri 28, 12:50pm-1:30pm<br />
Keystone: Work session (Integration)<br />
https://etherpad.openstack.org/p/ocata-keystone-integration<br />
<br />
Fri 28, 3:00pm-7:00pm<br />
Keystone: Contributors meetup<br />
(No etherpad)<br />
<br />
== Kolla ==<br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Kolla%3A<br />
<br />
Kolla Ocata Summit Master Etherpad - https://etherpad.openstack.org/p/kolla-o-summit-schedule<br />
<br />
'''Wed October 26'''<br />
<br />
* 3:55pm - 4:35pm - Operator experiences - https://etherpad.openstack.org/p/kolla-o-summit-op-experiences<br />
* 5:05pm - 5:45pm - Community roadmap planning for O - https://etherpad.openstack.org/p/kolla-o-summit-community-planning<br />
* 5:55pm - 6:35pm - Goals for Ocata - https://etherpad.openstack.org/p/kolla-o-summit-roadmap<br />
<br />
'''Thu October 27'''<br />
<br />
* 9:00am - 9:40am - Kolla-Kubernetes Architecture - https://etherpad.openstack.org/p/kolla-ocata-summit-kolla-k8s-architecture<br />
* 9:50am - 10:30am - High availability - https://etherpad.openstack.org/p/kolla-o-summit-high-availability<br />
* 1:50pm - 2:30pm - 3rd Party Plugins - https://etherpad.openstack.org/p/kolla-o-summit-3rd-party-plugins<br />
* 2:40pm - 3:20pm - Improving the CI system - https://etherpad.openstack.org/p/kolla-o-summit-improving-ci<br />
* 3:30pm - 4:10pm - Distro requirements, deprecation, levels of support - https://etherpad.openstack.org/p/kolla-o-summit-support-and-deprecation<br />
<br />
'''Fri October 28'''<br />
<br />
* 9:00am - 9:40am - Documentation - https://etherpad.openstack.org/p/kolla-o-summit-documentation<br />
* 9:50am - 10:30am - OSIC review - https://etherpad.openstack.org/p/kolla-o-summit-OSIC-review<br />
* 11:00am - 11:40am - Kolla-Kubernetes Roadmap - https://etherpad.openstack.org/p/kolla-ocata-summit-kolla-k8s-road-map<br />
* 11:50am - 12:30pm - Security VMT threat analysis - https://etherpad.openstack.org/p/kolla-ocata-summit-threat-analysis<br />
* 2:00pm - 6:00pm - Afternoon Contributor Meetup - https://etherpad.openstack.org/p/kolla-ocata-summit-contrib-meetup<br />
<br />
==Manila==<br />
<br />
'''Thu October 27'''<br />
<br />
* 11:00 - 11:40 - Race Conditions (FB) - https://etherpad.openstack.org/p/ocata-manila-race-conditions<br />
* 11:50 - 12:30 - Data Service Jobs Table (FB) - https://etherpad.openstack.org/p/ocata-manila-data-service-jobs-table<br />
* 14:40 - 15:20 - Tempest Direction (WS) - https://etherpad.openstack.org/p/ocata-manila-tempest-direction<br />
<br />
'''Fri October 28'''<br />
<br />
* 11:00 - 11:40 - Access Rules (WS) - https://etherpad.openstack.org/p/ocata-manila-access-rules<br />
* 11:50 - 12:30 - High Availability (WS) - https://etherpad.openstack.org/p/ocata-manila-high-availability<br />
* 14:00 - 18:00 - Contributor Meetup (CM) - https://etherpad.openstack.org/p/ocata-manila-contributor-meetup<br />
<br />
==Neutron==<br />
<br />
'''Wednesday October 26'''<br />
<br />
* 17:05 - 17:45 - Nova/Neutron cross-project session - https://etherpad.openstack.org/p/ocata-nova-neutron-session<br />
* 17:55 - 18:35 - LBaaS retrospective and next steps - https://etherpad.openstack.org/p/ocata-neutron-octavia-lbaas-session<br />
<br />
'''Thursday October 27'''<br />
<br />
* 09:00 - 09:40 - Completing the Newton backlog - https://etherpad.openstack.org/p/ocata-neutron-core-newton-backlog<br />
* 09:50 - 10:30 - Upstream and downstream CI and testing efforts - https://etherpad.openstack.org/p/ocata-neutron-testing<br />
* 11:00 - 11:40 - End user and operator feedback - https://etherpad.openstack.org/p/ocata-neutron-end-user-operator-feedback<br />
* 11:50 - 12:30 - Neutronclient retrospective and next steps - https://etherpad.openstack.org/p/ocata-neutron-client<br />
* 17:30 - 18:10 - Nova/Neutron cross-project session - https://etherpad.openstack.org/p/ocata-nova-neutron-session<br />
<br />
'''Friday October 28'''<br />
<br />
* 09:00 - 09:40 (Sagrada Familia) Fishbowl Neutron: Neutron-lib retrospective and next steps - https://etherpad.openstack.org/p/ocata-neutron-lib-next-steps<br />
* 09:50 - 10:30 (Sagrada Familia) Fishbowl Neutron: Neutron server: retrospective and next steps - https://etherpad.openstack.org/p/ocata-neutron-server-next<br />
* 11:00 - 11:40 (Sagrada Familia) Fishbowl Neutron: Neutron agents: retrospective and next steps - https://etherpad.openstack.org/p/ocata-neutron-agents<br />
* 11:50 - 12:30 (Sagrada Familia) Fishbowl Neutron: Stadium update - https://etherpad.openstack.org/p/ocata-nova-neutron-stadium<br />
* 14:00 - 18:00 (Room 114) Meetup Neutron: Contributors meetup - https://etherpad.openstack.org/p/ocata-neutron-contributor-meetup<br />
<br />
==Nomad==<br />
<br />
https://etherpad.openstack.org/p/nomad-ocata-design-session<br />
<br />
== Nova ==<br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Nova%3A<br />
<br />
'''Wednesday October 26'''<br />
* 5:05pm-5:45pm - Nova/Neutron cross-project session - https://etherpad.openstack.org/p/ocata-nova-neutron-session<br />
'''Thursday October 27'''<br />
* 9:00am-9:40am - Newton placement service retrospective - https://etherpad.openstack.org/p/ocata-nova-summit-placement-retrospective<br />
* 9:50am-10:30am - Scheduler / resource providers (quantitative) - https://etherpad.openstack.org/p/ocata-nova-summit-resource-providers-quantitative<br />
* '''Break'''<br />
* 11:00am-11:40am - Scheduler / resource provider traits (qualitative) - https://etherpad.openstack.org/p/ocata-nova-summit-resource-providers-qualitative<br />
* 11:50am-12:30pm - Organizing API work for Ocata - https://etherpad.openstack.org/p/ocata-nova-summit-api<br />
* '''Lunch'''<br />
* 1:50pm-2:30pm - Unconference - https://etherpad.openstack.org/p/ocata-nova-summit-unconference<br />
* 2:40pm-3:20pm - Cells v2 (scheduler, searchlight, multi-cell support) - https://etherpad.openstack.org/p/ocata-nova-summit-cellsv2-scheduler<br />
* 3:30pm-4:10pm - Cells v2 (quotas) - https://etherpad.openstack.org/p/ocata-nova-summit-cellsv2-quotas<br />
* '''Break'''<br />
* 4:40pm-5:20pm - Completing vendordata v2 - https://etherpad.openstack.org/p/ocata-nova-summit-vendoradatav2<br />
* 5:30pm-6:10pm - Nova/Neutron cross-project session - https://etherpad.openstack.org/p/ocata-nova-neutron-session<br />
'''Friday October 28'''<br />
* 9:00am-9:40am - Nova/Cinder cross-project session - https://etherpad.openstack.org/p/ocata-nova-summit-cinder-session<br />
* 9:50am-10:30am - Security specs and testing - https://etherpad.openstack.org/p/ocata-nova-summit-security<br />
* '''Break'''<br />
* 11:00am-11:40am - Planning the libvirt imagebackend refactor work - https://etherpad.openstack.org/p/ocata-nova-summit-libvirt-imagebackend<br />
* 11:50am-12:30pm - Ocata priorities and schedule - https://etherpad.openstack.org/p/ocata-nova-summit-priorities<br />
* '''Lunch'''<br />
* 2:00pm-6:00pm - Contributors meetup - https://etherpad.openstack.org/p/ocata-nova-summit-meetup<br />
<br />
== Release Management ==<br />
<br />
'''Wednesday October 26'''<br />
<br />
* 5:55 PM - 6:35 PM -- Work session -- https://etherpad.openstack.org/p/ocata-relmgt-plan<br />
<br />
'''Thursday October 27'''<br />
<br />
* 1:50 PM - 2:30 PM -- Newton Retrospective & Ocata Schedule -- https://etherpad.openstack.org/p/ocata-release-fishbowl<br />
<br />
'''Friday October 28'''<br />
<br />
* 2:00 PM - 6:00 PM -- Contributors Meetup -- https://etherpad.openstack.org/p/ocata-relmgt-plan<br />
<br />
== Searchlight ==<br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Searchlight<br />
<br />
'''Thursday October 27'''<br />
* 9:50 - 10:30 - Fishbowl - https://etherpad.openstack.org/p/ocata-searchlight-summit-plugins-fishbowl<br />
* 11:00 - 11:40 - Working room<br />
* 11:50 - 12:30 - Working room<br />
* 2:40pm-3:20pm - Cells v2 (scheduler, searchlight, multi-cell support) - https://etherpad.openstack.org/p/ocata-nova-summit-cellsv2-scheduler<br />
<br />
== Senlin ==<br />
<br />
'''Friday October 28'''<br />
* 9:00am-9:40am - Senlin work session: policy/profile versioning - https://etherpad.openstack.org/p/ocata-summit-senlin-profile-policy-versioning<br />
* 9:50am-10:30am - Senlin work session: versioned everything - https://etherpad.openstack.org/p/ocata-summit-senlin-versioned-everything<br />
* '''Break'''<br />
* 11:00am-11:40am - Senlin work session: container cluster - https://etherpad.openstack.org/p/ocata-summit-senlin-container-cluster<br />
* 11:50am-12:30pm - Senlin work session: HA - https://etherpad.openstack.org/p/ocata-summit-senlin-HA<br />
<br />
== Stewardship Working Group ==<br />
<br />
'''Wed October 26'''<br />
<br />
*12:15pm - 12:55pm - Cross Project workshops: "Re-inventing the TC", the Stewardship Working Group discussion - https://etherpad.openstack.org/p/Barcelona-SWG-cp<br />
<br />
== Tricircle ==<br />
<br />
Venue: https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=tricircle%3A<br />
<br />
ideas: https://etherpad.openstack.org/p/ocata-tricircle-sessions-planning<br />
<br />
'''Thu October 27'''<br />
<br />
* 5:30pm - 6:10pm - Cross Neutron networking automation: feature review and what's to do in Ocata: https://etherpad.openstack.org/p/ocata-tricircle-feature-review-priorities-roadmap<br />
<br />
'''Fri October 28'''<br />
<br />
* 9:00am - 9:40am - Ocata work session: https://etherpad.openstack.org/p/ocata-tricircle-work-session<br />
* 9:40am - 12:00pm - Tricircle contributors meetup<br />
<br />
== TripleO ==<br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=tripleo%3A<br />
<br />
https://etherpad.openstack.org/p/ocata-tripleo<br />
<br />
===== TripleO: Containers - Current Status and Roadmap =====<br />
Wed 26 3:55pm-4:35pm<br />
https://etherpad.openstack.org/p/ocata-tripleo-containers<br />
<br />
=====TripleO: Work Session - Growing the team=====<br />
Wed 26 5:05pm-5:45pm -<br />
https://etherpad.openstack.org/p/ocata-tripleo-team-growing<br />
<br />
===== TripleO: Work Session - CI - current status and roadmap=====<br />
Wed 26 5:55pm-6:35pm -<br />
https://etherpad.openstack.org/p/ocata-tripleo-ci<br />
<br />
===== TripleO: Upgrades - current status and roadmap=====<br />
Thu 27 1:50pm-2:30pm -<br />
https://etherpad.openstack.org/p/ocata-tripleo-upgrades<br />
<br />
===== TripleO: Work Session - Composable Undercloud deployment with Heat=====<br />
Fri 28 9:00am-9:20am -<br />
https://etherpad.openstack.org/p/tripleo-composable-undercloud<br />
<br />
===== TripleO: Work Session - GUI, CLI, Validations current status, roadmap, requirements=====<br />
Fri 28 9:20am-9:40am -<br />
https://etherpad.openstack.org/p/gui-ocata<br />
<br />
===== TripleO: Work Session - Multiple topics=====<br />
Fri 28 9:50am-10:30am -<br />
Blueprints, specs, tools and Ocata summary.<br />
See bottom of https://etherpad.openstack.org/p/ocata-tripleo<br />
<br />
== Trove ==<br />
<br />
https://etherpad.openstack.org/p/trove-barcelona-sessions <br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Trove<br />
<br />
==Watcher==<br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Watcher<br />
<br />
'''Wed October 26'''<br />
<br />
* 5.55pm - 6.35pm - [https://etherpad.openstack.org/p/watcher-ocata-design-session Existing & new infrastructure optimization strategies]<br />
<br />
'''Thu October 27'''<br />
<br />
* 9.50am - 10.30am - [https://etherpad.openstack.org/p/watcher-ocata-design-session Watcher Newton retrospective]<br />
<br />
'''Fri October 28'''<br />
<br />
* 11am - 12.30pm - [https://etherpad.openstack.org/p/watcher-ocata-design-session Ocata priorities & roadmap]<br />
* 2pm - 6pm - [https://etherpad.openstack.org/p/watcher-ocata-design-session Contributors meetup]<br />
<br />
==Zaqar==<br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Zaqar<br />
<br />
'''Thursday, October 27'''<br />
<br />
9:50am-10:30am [https://etherpad.openstack.org/p/zaqar-ocata-performance Zaqar's profile and performance gate]<br />
<br />
4:40pm-5:00pm [https://etherpad.openstack.org/p/zaqar-ocata-notification-delivery-policy Notification delivery policy]<br />
<br />
5:00pm-5:20pm [https://etherpad.openstack.org/p/zaqar-ocata-purge-queue Purge queue]<br />
<br />
5:30pm-6:10pm [https://etherpad.openstack.org/p/zaqar-ocata-subscription-confirmation-email Subscription Confirmation - Email]<br />
...</div>Travis Tripphttps://wiki.openstack.org/w/index.php?title=Design_Summit/Ocata/Etherpads&diff=135138Design Summit/Ocata/Etherpads2016-10-19T04:16:09Z<p>Travis Tripp: /* Searchlight */</p>
<hr />
<div>[[Category:Summit]]<br />
[[Category:Ocata]]<br />
[[Category:Etherpad]]<br />
<br />
<br />
'''Wednesday October 26'''<br />
* 5:05pm-5:45pm - API Evolution - https://etherpad.openstack.org/p/ironic-ocata-summit-api-evolution<br />
* 5:55pm-6:35pm - Deploy-time RAID and Advanced Partitioning (w/ Nova) - https://etherpad.openstack.org/p/ironic-ocata-summit-deploy-time-raid<br />
'''Thursday October 27'''<br />
* 9:00am-9:40am - Task Framework - https://etherpad.openstack.org/p/ironic-ocata-summit-task-framework<br />
* 9:50am-10:30am - QA/CI - https://etherpad.openstack.org/p/ironic-ocata-summit-qa<br />
* 1:50pm-2:30pm - Synchronizing Events with Neutron - https://etherpad.openstack.org/p/ironic-ocata-summit-neutron-events<br />
* 2:40pm-3:20pm - Ocata Priorities - https://etherpad.openstack.org/p/ironic-ocata-summit-priorities<br />
'''Friday October 28'''<br />
* 11:00am-11:40am - VNC Console - https://etherpad.openstack.org/p/ironic-ocata-summit-vnc-console<br />
* 11:50am-12:30pm - Unblocking Priority Features - https://etherpad.openstack.org/p/ironic-ocata-summit-unblock-priorities<br />
* 2:00pm-6:00pm - Contributors Meetup - https://etherpad.openstack.org/p/ironic-ocata-summit-contributor-meetup<br />
<br />
== Keystone ==<br />
<br />
View online: https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Keystone%3A<br />
<br />
Wed 26, 4:05pm-4:45pm<br />
Keystone: Newton retrospective (Fishbowl)<br />
https://etherpad.openstack.org/p/keystone-newton-retrospective<br />
<br />
Wed 26, 4:55pm-5:35pm<br />
Keystone: keystone/horizon integration<br />
https://etherpad.openstack.org/p/ocata-keystone-horizon<br />
<br />
Thu 27, 12:00pm-12:40pm<br />
Keystone: Unconference (Fishbowl)<br />
https://etherpad.openstack.org/p/ocata-keystone-unconference<br />
<br />
Thu 27, 12:50pm-1:30pm<br />
Keystone: Ocata priorities (Fishbowl)<br />
https://etherpad.openstack.org/p/ocata-keystone-priorities<br />
<br />
Thu 27, 2:50pm-3:30pm<br />
Keystone: Work session (Federation)<br />
https://etherpad.openstack.org/p/ocata-keystone-federation<br />
<br />
Thu 27, 3:40pm-4:20pm<br />
Keystone: Work session (Testing)<br />
https://etherpad.openstack.org/p/ocata-keystone-testing<br />
<br />
Thu 27, 4:30pm-5:10pm<br />
Keystone: Work session (Documentation)<br />
https://etherpad.openstack.org/p/ocata-keystone-documentation<br />
<br />
Fri 28, 10:00am-10:40am<br />
Keystone: Work session (Authorization)<br />
https://etherpad.openstack.org/p/ocata-keystone-authorization<br />
<br />
Fri 28, 10:50am-11:30am<br />
Keystone: Work session (Authentication)<br />
https://etherpad.openstack.org/p/ocata-keystone-authentication<br />
<br />
Fri 28, 12:00pm-12:40pm<br />
Keystone: Work session (Scaling and Performance)<br />
https://etherpad.openstack.org/p/ocata-keystone-scaling<br />
<br />
Fri 28, 12:50pm-1:30pm<br />
Keystone: Work session (Integration)<br />
https://etherpad.openstack.org/p/ocata-keystone-integration<br />
<br />
Fri 28, 3:00pm-7:00pm<br />
Keystone: Contributors meetup<br />
(No etherpad)<br />
<br />
== Kolla ==<br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Kolla%3A<br />
<br />
Kolla Ocata Summit Master Etherpad - https://etherpad.openstack.org/p/kolla-o-summit-schedule<br />
<br />
'''Wed October 26'''<br />
<br />
* 3:55pm - 4:35pm - Operator experiences - https://etherpad.openstack.org/p/kolla-o-summit-op-experiences<br />
* 5:05pm - 5:45pm - Community roadmap planning for O - https://etherpad.openstack.org/p/kolla-o-summit-community-planning<br />
* 5:55pm - 6:35pm - Goals for Ocata - https://etherpad.openstack.org/p/kolla-o-summit-roadmap<br />
<br />
'''Thu October 27'''<br />
<br />
* 9:00am - 9:40am - Kolla-Kubernetes Architecture - https://etherpad.openstack.org/p/kolla-ocata-summit-kolla-k8s-architecture<br />
* 9:50am - 10:30am - High availability - https://etherpad.openstack.org/p/kolla-o-summit-high-availability<br />
* 1:50pm - 2:30pm - 3rd Party Plugins - https://etherpad.openstack.org/p/kolla-o-summit-3rd-party-plugins<br />
* 2:40pm - 3:20pm - Improving the CI system - https://etherpad.openstack.org/p/kolla-o-summit-improving-ci<br />
* 3:30pm - 4:10pm - Distro requirements, deprecation, levels of support - https://etherpad.openstack.org/p/kolla-o-summit-support-and-deprecation<br />
<br />
'''Fri October 28'''<br />
<br />
* 9:00am - 9:40am - Documentation - https://etherpad.openstack.org/p/kolla-o-summit-documentation<br />
* 9:50am - 10:30am - OSIC review - https://etherpad.openstack.org/p/kolla-o-summit-OSIC-review<br />
* 11:00am - 11:40am - Kolla-Kubernetes Roadmap - https://etherpad.openstack.org/p/kolla-ocata-summit-kolla-k8s-road-map<br />
* 11:50am - 12:30pm - Security VMT threat - https://etherpad.openstack.org/p/kolla-ocata-summit-threat-analysis<br />
* 2:00pm - 6:00pm - Afternoon Contributor Meetup - https://etherpad.openstack.org/p/kolla-ocata-summit-contrib-meetup<br />
<br />
==Manila==<br />
<br />
'''Thu October 27'''<br />
<br />
* 11:00 - 11:40 - Race Conditions (FB) - https://etherpad.openstack.org/p/ocata-manila-race-conditions<br />
* 11:50 - 12:30 - Data Service Jobs Table (FB) - https://etherpad.openstack.org/p/ocata-manila-data-service-jobs-table<br />
* 14:40 - 15:20 - Tempest Direction (WS) - https://etherpad.openstack.org/p/ocata-manila-tempest-direction<br />
<br />
'''Fri October 28'''<br />
<br />
* 11:00 - 11:40 - Access Rules (WS) - https://etherpad.openstack.org/p/ocata-manila-access-rules<br />
* 11:50 - 12:30 - High Availability (WS) - https://etherpad.openstack.org/p/ocata-manila-high-availability<br />
* 14:00 - 18:00 - Contributor Meetup (CM) - https://etherpad.openstack.org/p/ocata-manila-contributor-meetup<br />
<br />
==Neutron==<br />
<br />
'''Wednesday October 26'''<br />
<br />
* 17:05 - 17:45 - Nova/Neutron cross-project session Nova - https://etherpad.openstack.org/p/ocata-nova-neutron-session<br />
* 17:55 - 18:35 - LBaaS retrospective and next steps - https://etherpad.openstack.org/p/ocata-neutron-octavia-lbaas-session<br />
<br />
'''Thursday October 27'''<br />
<br />
* 09:00 - 09:40 - Completing the Newton backlog - https://etherpad.openstack.org/p/ocata-neutron-core-newton-backlog<br />
* 09:50 - 10:30 - Upstream and downstream CI and testing efforts - https://etherpad.openstack.org/p/ocata-neutron-testing<br />
* 11:00 - 11:40 - End user and operator feedback - https://etherpad.openstack.org/p/ocata-neutron-end-user-operator-feedback<br />
* 11:50 - 12:30 - Neutronclient retrospective and next steps - https://etherpad.openstack.org/p/ocata-neutron-client<br />
* 17:30 - 18:10 - Nova/Neutron cross-project session - https://etherpad.openstack.org/p/ocata-nova-neutron-session<br />
<br />
'''Friday October 28'''<br />
<br />
* 09:00 - 09:40 (Sagrada Familia) Fishbowl Neutron: Neutron-lib retrospective and next steps - https://etherpad.openstack.org/p/ocata-neutron-lib-next-steps<br />
* 09:50 - 10:30 (Sagrada Familia) Fishbowl Neutron: Neutron server: retrospective and next steps - https://etherpad.openstack.org/p/ocata-neutron-server-next<br />
* 11:00 - 11:40 (Sagrada Familia) Fishbowl Neutron: Neutron agents: retrospective and next steps - https://etherpad.openstack.org/p/ocata-neutron-agents<br />
* 11:50 - 12:30 (Sagrada Familia) Fishbowl Neutron: Stadium update - https://etherpad.openstack.org/p/ocata-nova-neutron-stadium<br />
* 14:00 - 18:00 (Room 114) Meetup Neutron: Contributors meetup - https://etherpad.openstack.org/p/ocata-neutron-contributor-meetup<br />
<br />
==Nomad==<br />
<br />
https://etherpad.openstack.org/p/nomad-ocata-design-session<br />
<br />
== Nova ==<br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Nova%3A<br />
<br />
'''Wednesday October 26'''<br />
* 5:05pm-5:45pm - Nova/Neutron cross-project session - https://etherpad.openstack.org/p/ocata-nova-neutron-session<br />
'''Thursday October 27'''<br />
* 9:00am-9:40am - Newton placement service retrospective - https://etherpad.openstack.org/p/ocata-nova-summit-placement-retrospective<br />
* 9:50am-10:30am - Scheduler / resource providers (quantitative) - https://etherpad.openstack.org/p/ocata-nova-summit-resource-providers-quantitative<br />
* '''Break'''<br />
* 11:00am-11:40am - Scheduler / resource provider traits (qualitative) - https://etherpad.openstack.org/p/ocata-nova-summit-resource-providers-qualitative<br />
* 11:50am-12:30pm - Organizing API work for Ocata - https://etherpad.openstack.org/p/ocata-nova-summit-api<br />
* '''Lunch'''<br />
* 1:50pm-2:30pm - Unconference - https://etherpad.openstack.org/p/ocata-nova-summit-unconference<br />
* 2:40pm-3:20pm - Cells v2 (scheduler, searchlight, multi-cell support) - https://etherpad.openstack.org/p/ocata-nova-summit-cellsv2-scheduler<br />
* 3:30pm-4:10pm - Cells v2 (quotas) - https://etherpad.openstack.org/p/ocata-nova-summit-cellsv2-quotas<br />
* '''Break'''<br />
* 4:40pm-5:20pm - Completing vendordata v2 - https://etherpad.openstack.org/p/ocata-nova-summit-vendoradatav2<br />
* 5:30pm-6:10pm - Nova/Neutron cross-project session - https://etherpad.openstack.org/p/ocata-nova-neutron-session<br />
'''Friday October 28'''<br />
* 9:00am-9:40am - Nova/Cinder cross-project session - https://etherpad.openstack.org/p/ocata-nova-summit-cinder-session<br />
* 9:50am-10:30am - Security specs and testing - https://etherpad.openstack.org/p/ocata-nova-summit-security<br />
* '''Break'''<br />
* 11:00am-11:40am - Planning the libvirt imagebackend refactor work - https://etherpad.openstack.org/p/ocata-nova-summit-libvirt-imagebackend<br />
* 11:50am-12:30pm - Ocata priorities and schedule - https://etherpad.openstack.org/p/ocata-nova-summit-priorities<br />
* '''Lunch'''<br />
* 2:00pm-6:00pm - Contributors meetup - https://etherpad.openstack.org/p/ocata-nova-summit-meetup<br />
<br />
== Release Management ==<br />
<br />
'''Wednesday October 26'''<br />
<br />
* 5:55 PM - 6:35 PM -- Work session -- https://etherpad.openstack.org/p/ocata-relmgt-plan<br />
<br />
'''Thursday October 27'''<br />
<br />
* 1:50 PM - 2:30 PM -- Newton Retrospective & Ocata Schedule -- https://etherpad.openstack.org/p/ocata-release-fishbowl<br />
<br />
'''Friday October 28'''<br />
<br />
* 2:00 PM - 6:00 PM -- Contributors Meetup -- https://etherpad.openstack.org/p/ocata-relmgt-plan<br />
<br />
== Searchlight ==<br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Searchlight<br />
<br />
'''Thursday October 27'''<br />
9:50 - 10:30 - Fishbowl - https://etherpad.openstack.org/p/ocata-searchlight-summit-plugins-fishbowl<br />
11:00 - 11:40 - Working room<br />
11:50 - 12:30 - Working room<br />
2:40pm-3:20pm - Cells v2 (scheduler, searchlight, multi-cell support) - https://etherpad.openstack.org/p/ocata-nova-summit-cellsv2-scheduler<br />
<br />
== Senlin ==<br />
<br />
'''Friday October 28'''<br />
* 9:00am-9:40am - Senlin work session: policy/profile versioning - https://etherpad.openstack.org/p/ocata-summit-senlin-profile-policy-versioning<br />
* 9:50am-10:30am - Senlin work session: versioned everything - https://etherpad.openstack.org/p/ocata-summit-senlin-versioned-everything<br />
* '''Break'''<br />
* 11:00am-11:40am - Senlin work session: container cluster - https://etherpad.openstack.org/p/ocata-summit-senlin-container-cluster<br />
* 11:50am-12:30pm - Senlin work session: HA - https://etherpad.openstack.org/p/ocata-summit-senlin-HA<br />
<br />
== Stewardship Working Group ==<br />
<br />
'''Wed October 26'''<br />
<br />
*12:15pm - 12:55pm - Cross Project workshops: "Re-inventing the TC", the Stewardship Working Group discussion - https://etherpad.openstack.org/p/Barcelona-SWG-cp<br />
<br />
== Tricircle ==<br />
<br />
Venue: https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=tricircle%3A<br />
<br />
ideas: https://etherpad.openstack.org/p/ocata-tricircle-sessions-planning<br />
<br />
'''Thu October 27'''<br />
<br />
* 5:30pm - 6:10pm - Cross Neutron networking automation: feature review and what's to do in Ocata: https://etherpad.openstack.org/p/ocata-tricircle-feature-review-priorities-roadmap<br />
<br />
'''Fri October 28'''<br />
<br />
* 9:00am - 9:40am - Ocata work session: https://etherpad.openstack.org/p/ocata-tricircle-work-session<br />
* 9:40am - 12:00pm - Tricircle contributors meetup<br />
<br />
== TripleO ==<br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=tripleo%3A<br />
<br />
https://etherpad.openstack.org/p/ocata-tripleo<br />
<br />
===== TripleO: Containers - Current Status and Roadmap =====<br />
Wed 26 3:55pm-4:35pm<br />
https://etherpad.openstack.org/p/ocata-tripleo-containers<br />
<br />
=====TripleO: Work Session - Growing the team=====<br />
Wed 26 5:05pm-5:45pm -<br />
https://etherpad.openstack.org/p/ocata-tripleo-team-growing<br />
<br />
===== TripleO: Work Session - CI - current status and roadmap=====<br />
Wed 26 5:55pm-6:35pm -<br />
https://etherpad.openstack.org/p/ocata-tripleo-ci<br />
<br />
===== TripleO: Upgrades - current status and roadmap=====<br />
Thu 27 1:50pm-2:30pm -<br />
https://etherpad.openstack.org/p/ocata-tripleo-upgrades<br />
<br />
===== TripleO: Work Session - Composable Undercloud deployment with Heat=====<br />
Fri 28 9:00am-9:20am -<br />
https://etherpad.openstack.org/p/tripleo-composable-undercloud<br />
<br />
===== TripleO: Work Session - GUI, CLI, Validations current status, roadmap, requirements=====<br />
Fri 28 9:20am-9:40am -<br />
https://etherpad.openstack.org/p/gui-ocata<br />
<br />
===== TripleO: Work Session - Multiple topics=====<br />
Fri 28 9:50am-10:30am -<br />
Blueprints, specs, tools and Ocata summary.<br />
See bottom of https://etherpad.openstack.org/p/ocata-tripleo<br />
<br />
== Trove ==<br />
<br />
https://etherpad.openstack.org/p/trove-barcelona-sessions <br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Trove<br />
<br />
==Watcher==<br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Watcher<br />
<br />
'''Wed October 26'''<br />
<br />
* 5.55pm - 6.35pm - [https://etherpad.openstack.org/p/watcher-ocata-design-session Existing & new infrastructure optimization strategies]<br />
<br />
'''Thu October 27'''<br />
<br />
* 9.50am - 10.30am - [https://etherpad.openstack.org/p/watcher-ocata-design-session Watcher Newton retrospective]<br />
<br />
'''Fri October 28'''<br />
<br />
* 11am - 12.30pm - [https://etherpad.openstack.org/p/watcher-ocata-design-session Ocata priorities & roadmap]<br />
* 2pm - 6pm - [https://etherpad.openstack.org/p/watcher-ocata-design-session Contributors meetup]<br />
<br />
==Zaqar==<br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Zaqar<br />
<br />
'''Thursday, October 27'''<br />
<br />
9:50am-10:30am [https://etherpad.openstack.org/p/zaqar-ocata-performance Zaqar's profile and performance gate]<br />
<br />
4:40pm-5:00pm [https://etherpad.openstack.org/p/zaqar-ocata-notification-delivery-policy Notification delivery policy]<br />
<br />
5:00pm-5:20pm [https://etherpad.openstack.org/p/zaqar-ocata-purge-queue Purge queue]<br />
<br />
5:30pm-6:10pm [https://etherpad.openstack.org/p/zaqar-ocata-subscription-confirmation-email Subscription Confirmation - Email]<br />
...</div>
Travis Tripp
https://wiki.openstack.org/w/index.php?title=Design_Summit/Ocata/Etherpads&diff=135137
Design Summit/Ocata/Etherpads
2016-10-19T04:10:31Z
<p>Travis Tripp: /* Searchlight */</p>
<hr />
<div>[[Category:Summit]]<br />
[[Category:Ocata]]<br />
[[Category:Etherpad]]<br />
<br />
The grand list of all the Ocata Design Summit sessions. Please include Date, Time, and links to etherpads when adding new content.<br />
<br />
<div style="column-count:3;-moz-column-count:3;-webkit-column-count:3"><br />
__TOC__<br />
</div><br />
<br />
== Event intro/closure ==<br />
* Tue Oct 25 11:25am - Design Summit 101 - https://etherpad.openstack.org/p/ocata-design-summit-101<br />
* Fri Oct 28 12:30pm - Barcelona feedback session - https://etherpad.openstack.org/p/BCN-summit-feedback<br />
<br />
<br />
==Architecture Working Group==<br />
<br />
'''Wednesday, October 26'''<br />
* 11:25am-12:05pm - Cross Project workshops: Architecture Working Group Fishbowl - https://etherpad.openstack.org/p/ocata-summit-arch-wg<br />
<br />
==Barbican==<br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Barbican<br />
<br />
'''Thursday, October 27'''<br />
* 11:00am-11:40am - (128) Barbican: User and Operator Feedback Fishbowl - https://etherpad.openstack.org/p/barbican-ocata-summit-roadmap<br />
* 11:50am-12:30pm - (Montjuic) Barbican: Work Session (Roadmap) - https://etherpad.openstack.org/p/barbican-ocata-design-summit<br />
* 01:50pm-02:30pm - (130) Barbican: Work Session (Cross Project) - https://etherpad.openstack.org/p/barbican-ocata-design-summit<br />
<br />
'''Friday, October 28'''<br />
* 09:00am-09:40am - (129) Barbican: Work Session (Security) - https://etherpad.openstack.org/p/barbican-ocata-design-summit<br />
* 09:50am-10:30am - (129) Barbican: Work Session (TBD) - https://etherpad.openstack.org/p/barbican-ocata-design-summit<br />
* 11:00am-11:40am - (129) Barbican: Work Session (Resources) - https://etherpad.openstack.org/p/barbican-ocata-design-summit<br />
* 11:50am-12:30pm - (129) Barbican: Work Session (Planning) - https://etherpad.openstack.org/p/barbican-ocata-design-summit<br />
<br />
==Cinder==<br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Cinder<br />
<br />
'''Wednesday October 26'''<br />
* 3:05pm-3:45pm - Cinder Test Working Group progress and status - https://etherpad.openstack.org/p/Cinder-testing<br />
* 3:55-4:35 - Driver bug fixes for unsupported OpenStack releases - https://etherpad.openstack.org/p/ocata-cinder-summit-stabledriverfixes<br />
* 5:05-5:45 - Stand alone Cinder service - https://etherpad.openstack.org/p/ocata-cinder-summit-standalonecinder<br />
* 5:55-6:35 - Pike (and beyond) planning - https://etherpad.openstack.org/p/ocata-cinder-summit-pikeplanning<br />
'''Thursday October 27'''<br />
* 9:00-9:40 - Replication - https://etherpad.openstack.org/p/ocata-cinder-summit-replication<br />
* 9:50-10:30 - Cinder-Nova API changes - https://etherpad.openstack.org/p/ocata-cinder-summit-attachdetach<br />
'''Friday October 28'''<br />
* 9:00am-9:40am - Nova/Cinder cross-project session - https://etherpad.openstack.org/p/ocata-nova-summit-cinder-session<br />
* 11:00am-11:40am - NFS snapshots - https://etherpad.openstack.org/p/ocata-cinder-summit-nfssnapshots<br />
* 11:50am-12:30pm - Cinder backup improvements - https://etherpad.openstack.org/p/ocata-cinder-summit-backupimprovements<br />
* '''Lunch'''<br />
* 2:00pm-6:00pm - Contributors meetup - https://etherpad.openstack.org/p/ocata-cinder-summit-meetup<br />
<br />
==Cross Project Sessions==<br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Cross+Project<br />
<br />
'''Tuesday October 25'''<br />
<br />
* 3:55 PM - 4:35 PM -- Experiences with Project Decomposition, Scaling Review Teams and Subsystem Maintainers (Part 1) -- https://etherpad.openstack.org/p/ocata-summit-xp-scaling-review-teams<br />
* 5:05 PM - 5:45 PM -- Discuss Community-Wide Release Goals -- https://etherpad.openstack.org/p/community-goals<br />
* 5:55 PM - 6:35 PM -- Python 3 Integration Testing -- https://etherpad.openstack.org/p/ocata-python-3<br />
<br />
'''Wednesday October 26'''<br />
<br />
* 11:25 AM - 12:05 PM -- Ocata goal: Remove Incubated Oslo Code -- https://etherpad.openstack.org/p/ocata-goal-oslo<br />
* 2:15 PM - 2:55 PM -- Experiences with project decomposition, scaling review teams and subsystem maintainers (part 2) -- https://etherpad.openstack.org/p/ocata-summit-xp-scaling-review-teams<br />
<br />
==Documentation==<br />
<br />
See these and more documentation sessions in schedule: https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Documentation<br />
<br />
'''Wednesday, October 26'''<br />
* 5:05pm-5:45pm - User Guides Working Group - https://etherpad.openstack.org/p/BCN-Docs-UserGuidesWG<br />
'''Thursday October 27'''<br />
* 2:40pm-3:20pm - Newton Retrospective - https://etherpad.openstack.org/p/BCN-Docs-NewtonRetro <br />
* 3:30pm-4:10pm - Social Things - https://etherpad.openstack.org/p/BCN-Docs-Social <br />
* 4:40pm-5:20pm - Training Labs - https://etherpad.openstack.org/p/BCN-Docs-Training <br />
* 5:30pm-6:10pm - Toolchain - https://etherpad.openstack.org/p/BCN-Docs-Toolchain <br />
'''Friday October 28'''<br />
* 11:00am-11:40am - API Working Group - https://etherpad.openstack.org/p/BCN-Docs-APIWG <br />
* 11:50am-12:30pm - Ocata Planning Working Group - https://etherpad.openstack.org/p/BCN-Docs-OcataPlanningWG <br />
* 2:00pm-6:00pm - Contributors Meetup - no etherpad<br />
<br />
== Gluon ==<br />
<br />
View online: https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Gluon%3A<br />
<br />
Fri 28, 9:50am-10:30am: Gluon Work Session https://etherpad.openstack.org/p/ocata-gluon-work-plan<br />
<br />
==Heat==<br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Heat<br />
<br />
'''Thursday October 27'''<br />
<br />
* 11:00am-11:40am - Convergence Phase 1 - What worked, What didn't - https://etherpad.openstack.org/p/heat-ocata-convergence-phase-1<br />
* 11:50am-12:30pm - Performance Scalability Improvements - I (Issues with very large stacks) - https://etherpad.openstack.org/p/heat-ocata-performance-scalability-1<br />
* 2:40pm-3:20pm - Performance Scalability Improvements - II - https://etherpad.openstack.org/p/heat-ocata-performance-scalability-2<br />
* 3:30pm-4:10pm - Convergence Phase 2 - https://etherpad.openstack.org/p/heat-ocata-convergence-phase-2<br />
* 4:40pm-5:20pm - Validation Improvements - https://etherpad.openstack.org/p/heat-ocata-validation-improvements<br />
<br />
'''Friday October 28'''<br />
<br />
* 9:00am-9:40am - RPC versioning and hitless upgrades - https://etherpad.openstack.org/p/heat-ocata-hitless-upgrades<br />
* 9:50am-10:30am - API Microversions - https://etherpad.openstack.org/p/heat-ocata-api-microversions<br />
* 11:00am-11:40am - Heat Integration tests, Tempest and test candidates for DefCore Interop Testing - https://etherpad.openstack.org/p/heat-ocata-test-coverage<br />
* 11:50am-12:30pm - Improve maturity of heat - https://etherpad.openstack.org/p/heat-ocata-improve-maturity<br />
* 2:00pm-6:00pm - Contributors meetup - https://etherpad.openstack.org/p/ocata-heat-contributor-meetup<br />
<br />
<br />
==Horizon==<br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Horizon%3A<br />
<br />
'''Wednesday October 26'''<br />
<br />
* 16:55-17:35 - Cross-project meeting with Horizon and Keystone - https://etherpad.openstack.org/p/ocata-keystone-horizon<br />
<br />
'''Thursday October 27'''<br />
<br />
* 09:00-09:40 - Operator/ Plugin feedback - https://etherpad.openstack.org/p/horizon-ocata-feedback<br />
* 09:50-10:30 - Newton retrospective, Ocata timeline, Dependencies, Testing!! and Selenium :-( - https://etherpad.openstack.org/p/horizon-ocata-planning<br />
* 16:40-17:20 - Cross-project topics; Glance, Identity, K2K Federation, Quotas - https://etherpad.openstack.org/p/horizon-ocata-cross-project<br />
* 17:30-18:10 - AngularJS state of play (where we're going, status of panels, what CORS means, do we want a thin service proxy, deprecations, etc.) -https://etherpad.openstack.org/p/horizon-ocata-angularjs<br />
<br />
'''Friday October 28'''<br />
<br />
* 11:50-12:30 - Priority setting (and TODO review if we have time) - https://etherpad.openstack.org/p/horizon-ocata-priorities<br />
* 14:00-18:00 - General project discussion (Newton retrospective, how to improve our organisation and use of tooling)<br />
<br />
== I18n ==<br />
<br />
View online: https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=I18n%3A<br />
<br />
'''Friday October 28'''<br />
<br />
* 2:00pm-6:00pm - Contributors meetup - https://etherpad.openstack.org/p/barcelona-i18n-meetup<br />
<br />
==Infrastructure==<br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Infrastructure%3A<br />
<br />
'''Wednesday October 26'''<br />
<br />
* 3:05pm-3:45pm: ''Work Session: Firehose'' in AC Hotel - P3 - Montjuic<br />
** https://etherpad.openstack.org/p/ocata-infra-firehose<br />
<br />
'''Thursday October 27'''<br />
<br />
* 2:40pm-3:20pm: ''Fishbowl: Status update and plans for task tracking'' in AC Hotel - P1 - Salon Barcelona<br />
** https://etherpad.openstack.org/p/ocata-infra-community-task-tracking<br />
<br />
'''Friday October 28'''<br />
<br />
All the sessions on Friday are taking place at CCIB - Centre de Convencions Internacional de Barcelona - P1<br />
<br />
* 9:00am-9:40am: ''Work Session: Next steps for infra-cloud'' in Room 115<br />
** https://etherpad.openstack.org/p/ocata-infra-infra-cloud<br />
* 9:50am-10:30am: ''Work Session: Interactive infra-cloud debugging'' in Room 115<br />
** https://etherpad.openstack.org/p/ocata-infra-infra-cloud-debugging<br />
* 11:00am-11:40am: ''Work Session: Test environment expectations'' in Room 115<br />
** https://etherpad.openstack.org/p/ocata-infra-test-env-expectations<br />
* 11:50am-12:30pm: ''Work Session: Xenial jobs transition for stable/newton'' in Room 115<br />
** https://etherpad.openstack.org/p/ocata-infra-xenial-stable-newton<br />
* 2:00pm-6:00pm: ''Contributors Meetup'' in Room 121<br />
** https://etherpad.openstack.org/p/ocata-infra-contributors-meetup<br />
<br />
==Ironic==<br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Ironic:<br />
<br />
'''Wednesday October 26'''<br />
* 5ː05pm-5ː45pm - API Evolution - https://etherpad.openstack.org/p/ironic-ocata-summit-api-evolution<br />
* 5:55pm-6:35pm - Deploy-time RAID and Advanced Partitioning (w/ Nova) - https://etherpad.openstack.org/p/ironic-ocata-summit-deploy-time-raid<br />
'''Thursday October 27'''<br />
* 9:00am-9:40am - Task Framework - https://etherpad.openstack.org/p/ironic-ocata-summit-task-framework<br />
* 9:50am-10:30am - QA/CI - https://etherpad.openstack.org/p/ironic-ocata-summit-qa<br />
* 1:50pm-2:30pm - Synchronizing Events with Neutron - https://etherpad.openstack.org/p/ironic-ocata-summit-neutron-events<br />
* 2:40pm-3:20pm - Ocata Priorities - https://etherpad.openstack.org/p/ironic-ocata-summit-priorities<br />
'''Friday October 28'''<br />
* 11:00am-11:40am - VNC Console - https://etherpad.openstack.org/p/ironic-ocata-summit-vnc-console<br />
* 11:50am-12:30pm - Unblocking Priority Features - https://etherpad.openstack.org/p/ironic-ocata-summit-unblock-priorities<br />
* 2:00pm-6:00pm - Contributors Meetup - https://etherpad.openstack.org/p/ironic-ocata-summit-contributor-meetup<br />
<br />
== Keystone ==<br />
<br />
View online: https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Keystone%3A<br />
<br />
Wed 26, 4:05pm-4:45pm<br />
Keystone: Newton retrospective (Fishbowl)<br />
https://etherpad.openstack.org/p/keystone-newton-retrospective<br />
<br />
Wed 26, 4:55pm-5:35pm<br />
Keystone: keystone/horizon integration<br />
https://etherpad.openstack.org/p/ocata-keystone-horizon<br />
<br />
Thu 27, 12:00pm-12:40pm<br />
Keystone: Unconference (Fishbowl)<br />
https://etherpad.openstack.org/p/ocata-keystone-unconference<br />
<br />
Thu 27, 12:50pm-1:30pm<br />
Keystone: Ocata priorities (Fishbowl)<br />
https://etherpad.openstack.org/p/ocata-keystone-priorities<br />
<br />
Thu 27, 2:50pm-3:30pm<br />
Keystone: Work session (Federation)<br />
https://etherpad.openstack.org/p/ocata-keystone-federation<br />
<br />
Thu 27, 3:40pm-4:20pm<br />
Keystone: Work session (Testing)<br />
https://etherpad.openstack.org/p/ocata-keystone-testing<br />
<br />
Thu 27, 4:30pm-5:10pm<br />
Keystone: Work session (Documentation)<br />
https://etherpad.openstack.org/p/ocata-keystone-documentation<br />
<br />
Fri 28, 10:00am-10:40am<br />
Keystone: Work session (Authorization)<br />
https://etherpad.openstack.org/p/ocata-keystone-authorization<br />
<br />
Fri 28, 10:50am-11:30am<br />
Keystone: Work session (Authentication)<br />
https://etherpad.openstack.org/p/ocata-keystone-authentication<br />
<br />
Fri 28, 12:00pm-12:40pm<br />
Keystone: Work session (Scaling and Performance)<br />
https://etherpad.openstack.org/p/ocata-keystone-scaling<br />
<br />
Fri 28, 12:50pm-1:30pm<br />
Keystone: Work session (Integration)<br />
https://etherpad.openstack.org/p/ocata-keystone-integration<br />
<br />
Fri 28, 3:00pm-7:00pm<br />
Keystone: Contributors meetup<br />
(No etherpad)<br />
<br />
== Kolla ==<br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Kolla%3A<br />
<br />
Kolla Ocata Summit Master Etherpad - https://etherpad.openstack.org/p/kolla-o-summit-schedule<br />
<br />
'''Wed October 26'''<br />
<br />
* 3:55pm - 4:35pm - Operator experiences - https://etherpad.openstack.org/p/kolla-o-summit-op-experiences<br />
* 5:05pm - 5:45pm - Community roadmap planning for O - https://etherpad.openstack.org/p/kolla-o-summit-community-planning<br />
* 5:55pm - 6:35pm - Goals for Ocata - https://etherpad.openstack.org/p/kolla-o-summit-roadmap<br />
<br />
'''Thu October 27'''<br />
<br />
* 9:00am - 9:40am - Kolla-Kubernetes Architecture - https://etherpad.openstack.org/p/kolla-ocata-summit-kolla-k8s-architecture<br />
* 9:50am - 10:30am - High availability - https://etherpad.openstack.org/p/kolla-o-summit-high-availability<br />
* 1:50pm - 2:30pm - 3rd Party Plugins - https://etherpad.openstack.org/p/kolla-o-summit-3rd-party-plugins<br />
* 2:40pm - 3:20pm - Improving the CI system - https://etherpad.openstack.org/p/kolla-o-summit-improving-ci<br />
* 3:30pm - 4:10pm - Distro requirements, deprecation, levels of support - https://etherpad.openstack.org/p/kolla-o-summit-support-and-deprecation<br />
<br />
'''Fri October 28'''<br />
<br />
* 9:00am - 9:40am - Documentation - https://etherpad.openstack.org/p/kolla-o-summit-documentation<br />
* 9:50am - 10:30am - OSIC review - https://etherpad.openstack.org/p/kolla-o-summit-OSIC-review<br />
* 11:00am - 11:40am - Kolla-Kubernetes Roadmap - https://etherpad.openstack.org/p/kolla-ocata-summit-kolla-k8s-road-map<br />
* 11:50am - 12:30pm - Security VMT threat analysis - https://etherpad.openstack.org/p/kolla-ocata-summit-threat-analysis<br />
* 2:00pm - 6:00pm - Afternoon Contributor Meetup - https://etherpad.openstack.org/p/kolla-ocata-summit-contrib-meetup<br />
<br />
==Manila==<br />
<br />
'''Thu October 27'''<br />
<br />
* 11:00 - 11:40 - Race Conditions (FB) - https://etherpad.openstack.org/p/ocata-manila-race-conditions<br />
* 11:50 - 12:30 - Data Service Jobs Table (FB) - https://etherpad.openstack.org/p/ocata-manila-data-service-jobs-table<br />
* 14:40 - 15:20 - Tempest Direction (WS) - https://etherpad.openstack.org/p/ocata-manila-tempest-direction<br />
<br />
'''Fri October 28'''<br />
<br />
* 11:00 - 11:40 - Access Rules (WS) - https://etherpad.openstack.org/p/ocata-manila-access-rules<br />
* 11:50 - 12:30 - High Availability (WS) - https://etherpad.openstack.org/p/ocata-manila-high-availability<br />
* 14:00 - 18:00 - Contributor Meetup (CM) - https://etherpad.openstack.org/p/ocata-manila-contributor-meetup<br />
<br />
==Neutron==<br />
<br />
'''Wednesday October 26'''<br />
<br />
* 17:05 - 17:45 - Nova/Neutron cross-project session Nova - https://etherpad.openstack.org/p/ocata-nova-neutron-session<br />
* 17:55 - 18:35 - LBaaS retrospective and next steps - https://etherpad.openstack.org/p/ocata-neutron-octavia-lbaas-session<br />
<br />
'''Thursday October 27'''<br />
<br />
* 09:00 - 09:40 - Completing the Newton backlog - https://etherpad.openstack.org/p/ocata-neutron-core-newton-backlog<br />
* 09:50 - 10:30 - Upstream and downstream CI and testing efforts - https://etherpad.openstack.org/p/ocata-neutron-testing<br />
* 11:00 - 11:40 - End user and operator feedback - https://etherpad.openstack.org/p/ocata-neutron-end-user-operator-feedback<br />
* 11:50 - 12:30 - Neutronclient retrospective and next steps - https://etherpad.openstack.org/p/ocata-neutron-client<br />
* 17:30 - 18:10 - Nova/Neutron cross-project session - https://etherpad.openstack.org/p/ocata-nova-neutron-session<br />
<br />
'''Friday October 28'''<br />
<br />
* 09:00 - 09:40 (Sagrada Familia) Fishbowl Neutron: Neutron-lib retrospective and next steps - https://etherpad.openstack.org/p/ocata-neutron-lib-next-steps<br />
* 09:50 - 10:30 (Sagrada Familia) Fishbowl Neutron: Neutron server: retrospective and next steps - https://etherpad.openstack.org/p/ocata-neutron-server-next<br />
* 11:00 - 11:40 (Sagrada Familia) Fishbowl Neutron: Neutron agents: retrospective and next steps - https://etherpad.openstack.org/p/ocata-neutron-agents<br />
* 11:50 - 12:30 (Sagrada Familia) Fishbowl Neutron: Stadium update - https://etherpad.openstack.org/p/ocata-nova-neutron-stadium<br />
* 14:00 - 18:00 (Room 114) Meetup Neutron: Contributors meetup - https://etherpad.openstack.org/p/ocata-neutron-contributor-meetup<br />
<br />
==Nomad==<br />
<br />
https://etherpad.openstack.org/p/nomad-ocata-design-session<br />
<br />
== Nova ==<br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Nova%3A<br />
<br />
'''Wednesday October 26'''<br />
* 5:05pm-5:45pm - Nova/Neutron cross-project session - https://etherpad.openstack.org/p/ocata-nova-neutron-session<br />
'''Thursday October 27'''<br />
* 9:00am-9:40am - Newton placement service retrospective - https://etherpad.openstack.org/p/ocata-nova-summit-placement-retrospective<br />
* 9:50am-10:30am - Scheduler / resource providers (quantitative) - https://etherpad.openstack.org/p/ocata-nova-summit-resource-providers-quantitative<br />
* '''Break'''<br />
* 11:00am-11:40am - Scheduler / resource provider traits (qualitative) - https://etherpad.openstack.org/p/ocata-nova-summit-resource-providers-qualitative<br />
* 11:50am-12:30pm - Organizing API work for Ocata - https://etherpad.openstack.org/p/ocata-nova-summit-api<br />
* '''Lunch'''<br />
* 1:50pm-2:30pm - Unconference - https://etherpad.openstack.org/p/ocata-nova-summit-unconference<br />
* 2:40pm-3:20pm - Cells v2 (scheduler, searchlight, multi-cell support) - https://etherpad.openstack.org/p/ocata-nova-summit-cellsv2-scheduler<br />
* 3:30pm-4:10pm - Cells v2 (quotas) - https://etherpad.openstack.org/p/ocata-nova-summit-cellsv2-quotas<br />
* '''Break'''<br />
* 4:40pm-5:20pm - Completing vendordata v2 - https://etherpad.openstack.org/p/ocata-nova-summit-vendoradatav2<br />
* 5:30pm-6:10pm - Nova/Neutron cross-project session - https://etherpad.openstack.org/p/ocata-nova-neutron-session<br />
'''Friday October 28'''<br />
* 9:00am-9:40am - Nova/Cinder cross-project session - https://etherpad.openstack.org/p/ocata-nova-summit-cinder-session<br />
* 9:50am-10:30am - Security specs and testing - https://etherpad.openstack.org/p/ocata-nova-summit-security<br />
* '''Break'''<br />
* 11:00am-11:40am - Planning the libvirt imagebackend refactor work - https://etherpad.openstack.org/p/ocata-nova-summit-libvirt-imagebackend<br />
* 11:50am-12:30pm - Ocata priorities and schedule - https://etherpad.openstack.org/p/ocata-nova-summit-priorities<br />
* '''Lunch'''<br />
* 2:00pm-6:00pm - Contributors meetup - https://etherpad.openstack.org/p/ocata-nova-summit-meetup<br />
<br />
== Release Management ==<br />
<br />
'''Wednesday October 26'''<br />
<br />
* 5:55 PM - 6:35 PM -- Work session -- https://etherpad.openstack.org/p/ocata-relmgt-plan<br />
<br />
'''Thursday October 27'''<br />
<br />
* 1:50 PM - 2:30 PM -- Newton Retrospective & Ocata Schedule -- https://etherpad.openstack.org/p/ocata-release-fishbowl<br />
<br />
'''Friday October 28'''<br />
<br />
* 2:00 PM - 6:00 PM -- Contributors Meetup -- https://etherpad.openstack.org/p/ocata-relmgt-plan<br />
<br />
== Searchlight ==<br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Searchlight<br />
<br />
'''Thursday October 27'''<br />
9:50 - 10:30 - Fishbowl - https://etherpad.openstack.org/p/ocata-searchlight-summit-plugins-fishbowl<br />
11:00 - 11:40 - Working room<br />
11:50 - 12:30 - Working room<br />
2:40pm-3:20pm - Cells v2 (scheduler, searchlight, multi-cell support) - https://etherpad.openstack.org/p/ocata-nova-summit-cellsv2-scheduler<br />
<br />
== Senlin ==<br />
<br />
'''Friday October 28'''<br />
* 9:00am-9:40am - Senlin work session: policy/profile versioning - https://etherpad.openstack.org/p/ocata-summit-senlin-profile-policy-versioning<br />
* 9:50am-10:30am - Senlin work session: versioned everything - https://etherpad.openstack.org/p/ocata-summit-senlin-versioned-everything<br />
* '''Break'''<br />
* 11:00am-11:40am - Senlin work session: container cluster - https://etherpad.openstack.org/p/ocata-summit-senlin-container-cluster<br />
* 11:50am-12:30pm - Senlin work session: HA - https://etherpad.openstack.org/p/ocata-summit-senlin-HA<br />
<br />
== Stewardship Working Group ==<br />
<br />
'''Wed October 26'''<br />
<br />
*12:15pm - 12:55pm - Cross Project workshops: "Re-inventing the TC", the Stewardship Working Group discussion - https://etherpad.openstack.org/p/Barcelona-SWG-cp<br />
<br />
== Tricircle ==<br />
<br />
Venue: https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=tricircle%3A<br />
<br />
ideas: https://etherpad.openstack.org/p/ocata-tricircle-sessions-planning<br />
<br />
'''Thu October 27'''<br />
<br />
* 5:30pm - 6:10pm - Cross Neutron networking automation: feature review and what's to do in Ocata : https://etherpad.openstack.org/p/ocata-tricircle-feature-review-priorities-roadmap<br />
<br />
'''Fri October 28'''<br />
<br />
* 9:00am - 9:40am - Ocata work session: https://etherpad.openstack.org/p/ocata-tricircle-work-session<br />
* 9:40am - 12:00pm - Tricircle contributors meetup<br />
<br />
== TripleO ==<br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=tripleo%3A<br />
<br />
https://etherpad.openstack.org/p/ocata-tripleo<br />
<br />
===== TripleO: Containers - Current Status and Roadmap =====<br />
Wed 26 3:55pm-4:35pm<br />
https://etherpad.openstack.org/p/ocata-tripleo-containers<br />
<br />
===== TripleO: Work Session - Growing the team =====<br />
Wed 26 5:05pm-5:45pm -<br />
https://etherpad.openstack.org/p/ocata-tripleo-team-growing<br />
<br />
===== TripleO: Work Session - CI - current status and roadmap =====<br />
Wed 26 5:55pm-6:35pm -<br />
https://etherpad.openstack.org/p/ocata-tripleo-ci<br />
<br />
===== TripleO: Upgrades - current status and roadmap =====<br />
Thu 27 1:50pm-2:30pm -<br />
https://etherpad.openstack.org/p/ocata-tripleo-upgrades<br />
<br />
===== TripleO: Work Session - Composable Undercloud deployment with Heat =====<br />
Fri 28 9:00am-9:20am -<br />
https://etherpad.openstack.org/p/tripleo-composable-undercloud<br />
<br />
===== TripleO: Work Session - GUI, CLI, Validations current status, roadmap, requirements =====<br />
Fri 28 9:20am-9:40am -<br />
https://etherpad.openstack.org/p/gui-ocata<br />
<br />
===== TripleO: Work Session - Multiple topics =====<br />
Fri 28 9:50am-10:30am -<br />
Blueprints, specs, tools and Ocata summary.<br />
See bottom of https://etherpad.openstack.org/p/ocata-tripleo<br />
<br />
== Trove ==<br />
<br />
https://etherpad.openstack.org/p/trove-barcelona-sessions <br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Trove<br />
<br />
==Watcher==<br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Watcher<br />
<br />
'''Wed October 26'''<br />
<br />
* 5:55pm - 6:35pm - [https://etherpad.openstack.org/p/watcher-ocata-design-session Existing & new infrastructure optimization strategies]<br />
<br />
'''Thu October 27'''<br />
<br />
* 9:50am - 10:30am - [https://etherpad.openstack.org/p/watcher-ocata-design-session Watcher Newton retrospective]<br />
<br />
'''Fri October 28'''<br />
<br />
* 11:00am - 12:30pm - [https://etherpad.openstack.org/p/watcher-ocata-design-session Ocata priorities & roadmap]<br />
* 2:00pm - 6:00pm - [https://etherpad.openstack.org/p/watcher-ocata-design-session Contributors meetup]<br />
<br />
==Zaqar==<br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Zaqar<br />
<br />
'''Thursday, October 27'''<br />
<br />
9:50am-10:30am [https://etherpad.openstack.org/p/zaqar-ocata-performance Zaqar's profile and performance gate]<br />
<br />
4:40pm-5:00pm [https://etherpad.openstack.org/p/zaqar-ocata-notification-delivery-policy Notification delivery policy]<br />
<br />
5:00pm-5:20pm [https://etherpad.openstack.org/p/zaqar-ocata-purge-queue Purge queue]<br />
<br />
5:30pm-6:10pm [https://etherpad.openstack.org/p/zaqar-ocata-subscription-confirmation-email Subscription Confirmation - Email]<br />
...</div>
Travis Tripp
https://wiki.openstack.org/w/index.php?title=Design_Summit/Ocata/Etherpads&diff=135136
Design Summit/Ocata/Etherpads
2016-10-19T04:04:10Z
<p>Travis Tripp: /* Searchlight */</p>
<hr />
<div>[[Category:Summit]]<br />
[[Category:Ocata]]<br />
[[Category:Etherpad]]<br />
<br />
The grand list of all the Ocata Design Summit sessions. Please include Date, Time, and links to etherpads when adding new content.<br />
<br />
<div style="column-count:3;-moz-column-count:3;-webkit-column-count:3"><br />
__TOC__<br />
</div><br />
<br />
== Event intro/closure ==<br />
* Tue Oct 25 11:25am - Design Summit 101 - https://etherpad.openstack.org/p/ocata-design-summit-101<br />
* Fri Oct 28 12:30pm - Barcelona feedback session - https://etherpad.openstack.org/p/BCN-summit-feedback<br />
<br />
<br />
==Architecture Working Group==<br />
<br />
'''Wednesday, October 26'''<br />
* 11:25am-12:05pm - Cross Project workshops: Architecture Working Group Fishbowl - https://etherpad.openstack.org/p/ocata-summit-arch-wg<br />
<br />
==Barbican==<br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Barbican<br />
<br />
'''Thursday, October 27'''<br />
* 11:00am-11:40am - (128) Barbican: User and Operator Feedback Fishbowl - https://etherpad.openstack.org/p/barbican-ocata-summit-roadmap<br />
* 11:50am-12:30pm - (Montjuic) Barbican: Work Session (Roadmap) - https://etherpad.openstack.org/p/barbican-ocata-design-summit<br />
* 1:50pm-2:30pm - (130) Barbican: Work Session (Cross Project) - https://etherpad.openstack.org/p/barbican-ocata-design-summit<br />
<br />
'''Friday, October 28'''<br />
* 09:00am-09:40am - (129) Barbican: Work Session (Security) - https://etherpad.openstack.org/p/barbican-ocata-design-summit<br />
* 09:50am-10:30am - (129) Barbican: Work Session (TBD) - https://etherpad.openstack.org/p/barbican-ocata-design-summit<br />
* 11:00am-11:40am - (129) Barbican: Work Session (Resources) - https://etherpad.openstack.org/p/barbican-ocata-design-summit<br />
* 11:50am-12:30pm - (129) Barbican: Work Session (Planning) - https://etherpad.openstack.org/p/barbican-ocata-design-summit<br />
<br />
==Cinder==<br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Cinder<br />
<br />
'''Wednesday October 26'''<br />
* 3:05pm-3:45pm - Cinder Test Working Group progress and status - https://etherpad.openstack.org/p/Cinder-testing<br />
* 3:55pm-4:35pm - Driver bug fixes for unsupported OpenStack releases - https://etherpad.openstack.org/p/ocata-cinder-summit-stabledriverfixes<br />
* 5:05pm-5:45pm - Stand alone Cinder service - https://etherpad.openstack.org/p/ocata-cinder-summit-standalonecinder<br />
* 5:55pm-6:35pm - Pike (and beyond) planning - https://etherpad.openstack.org/p/ocata-cinder-summit-pikeplanning<br />
'''Thursday October 27'''<br />
* 9:00am-9:40am - Replication - https://etherpad.openstack.org/p/ocata-cinder-summit-replication<br />
* 9:50am-10:30am - Cinder-Nova API changes - https://etherpad.openstack.org/p/ocata-cinder-summit-attachdetach<br />
'''Friday October 28'''<br />
* 9:00am-9:40am - Nova/Cinder cross-project session - https://etherpad.openstack.org/p/ocata-nova-summit-cinder-session<br />
* 11:00am-11:40am - NFS snapshots - https://etherpad.openstack.org/p/ocata-cinder-summit-nfssnapshots<br />
* 11:50am-12:30pm - Cinder backup improvements - https://etherpad.openstack.org/p/ocata-cinder-summit-backupimprovements<br />
* '''Lunch'''<br />
* 2:00pm-6:00pm - Contributors meetup - https://etherpad.openstack.org/p/ocata-cinder-summit-meetup<br />
<br />
==Cross Project Sessions==<br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Cross+Project<br />
<br />
'''Tuesday October 25'''<br />
<br />
* 3:55 PM - 4:35 PM -- Experiences with Project Decomposition, Scaling Review Teams and Subsystem Maintainers (Part 1) -- https://etherpad.openstack.org/p/ocata-summit-xp-scaling-review-teams<br />
* 5:05 PM - 5:45 PM -- Discuss Community-Wide Release Goals -- https://etherpad.openstack.org/p/community-goals<br />
* 5:55 PM - 6:35 PM -- Python 3 Integration Testing -- https://etherpad.openstack.org/p/ocata-python-3<br />
<br />
'''Wednesday October 26'''<br />
<br />
* 11:25 AM - 12:05 PM -- Ocata goal: Remove Incubated Oslo Code -- https://etherpad.openstack.org/p/ocata-goal-oslo<br />
* 2:15 PM - 2:55 PM -- Experiences with Project Decomposition, Scaling Review Teams and Subsystem Maintainers (Part 2) -- https://etherpad.openstack.org/p/ocata-summit-xp-scaling-review-teams<br />
<br />
==Documentation==<br />
<br />
See these and more documentation sessions in schedule: https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Documentation<br />
<br />
'''Wednesday, October 26'''<br />
* 5:05pm-5:45pm - User Guides Working Group - https://etherpad.openstack.org/p/BCN-Docs-UserGuidesWG<br />
'''Thursday October 27'''<br />
* 2:40pm-3:20pm - Newton Retrospective - https://etherpad.openstack.org/p/BCN-Docs-NewtonRetro <br />
* 3:30pm-4:10pm - Social Things - https://etherpad.openstack.org/p/BCN-Docs-Social <br />
* 4:40pm-5:20pm - Training Labs - https://etherpad.openstack.org/p/BCN-Docs-Training <br />
* 5:30pm-6:10pm - Toolchain - https://etherpad.openstack.org/p/BCN-Docs-Toolchain <br />
'''Friday October 28'''<br />
* 11:00am-11:40am - API Working Group - https://etherpad.openstack.org/p/BCN-Docs-APIWG <br />
* 11:50am-12:30pm - Ocata Planning Working Group - https://etherpad.openstack.org/p/BCN-Docs-OcataPlanningWG <br />
* 2:00pm-6:00pm - Contributors Meetup - no etherpad<br />
<br />
== Gluon ==<br />
<br />
View online: https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Gluon%3A<br />
<br />
Fri 28, 9:50am-10:30am: Gluon Work Session https://etherpad.openstack.org/p/ocata-gluon-work-plan<br />
<br />
==Heat==<br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Heat<br />
<br />
'''Thursday October 27'''<br />
<br />
* 11:00am-11:40am - Convergence Phase 1 - What worked, What didn't - https://etherpad.openstack.org/p/heat-ocata-convergence-phase-1<br />
* 11:50am-12:30pm - Performance Scalability Improvements - I (Issues with very large stacks) - https://etherpad.openstack.org/p/heat-ocata-performance-scalability-1<br />
* 2:40pm-3:20pm - Performance Scalability Improvements - II - https://etherpad.openstack.org/p/heat-ocata-performance-scalability-2<br />
* 3:30pm-4:10pm - Convergence Phase 2 - https://etherpad.openstack.org/p/heat-ocata-convergence-phase-2<br />
* 4:40pm-5:20pm - Validation Improvements - https://etherpad.openstack.org/p/heat-ocata-validation-improvements<br />
<br />
'''Friday October 28'''<br />
<br />
* 9:00am-9:40am - RPC versioning and hitless upgrades - https://etherpad.openstack.org/p/heat-ocata-hitless-upgrades<br />
* 9:50am-10:30am - API Microversions - https://etherpad.openstack.org/p/heat-ocata-api-microversions<br />
* 11:00am-11:40am - Heat Integration tests, Tempest and test candidates for DefCore Interop Testing - https://etherpad.openstack.org/p/heat-ocata-test-coverage<br />
* 11:50am-12:30pm - Improve maturity of heat - https://etherpad.openstack.org/p/heat-ocata-improve-maturity<br />
* 2:00pm-6:00pm - Contributors meetup - https://etherpad.openstack.org/p/ocata-heat-contributor-meetup<br />
<br />
<br />
==Horizon==<br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Horizon%3A<br />
<br />
'''Wednesday October 26'''<br />
<br />
* 16:55-17:35 - Cross-project meeting with Horizon and Keystone - https://etherpad.openstack.org/p/ocata-keystone-horizon<br />
<br />
'''Thursday October 27'''<br />
<br />
* 09:00-09:40 - Operator/ Plugin feedback - https://etherpad.openstack.org/p/horizon-ocata-feedback<br />
* 09:50-10:30 - Newton retrospective, Ocata timeline, Dependencies, Testing!! and Selenium :-( - https://etherpad.openstack.org/p/horizon-ocata-planning<br />
* 16:40-17:20 - Cross-project topics; Glance, Identity, K2K Federation, Quotas - https://etherpad.openstack.org/p/horizon-ocata-cross-project<br />
* 17:30-18:10 - AngularJS state of play (where we're going, status of panels, what CORS means, do we want a thin service proxy, deprecations, etc.) - https://etherpad.openstack.org/p/horizon-ocata-angularjs<br />
<br />
'''Friday October 28'''<br />
<br />
* 11:50-12:30 - Priority setting (and TODO review if we have time) - https://etherpad.openstack.org/p/horizon-ocata-priorities<br />
* 14:00-18:00 - General project discussion (Newton retrospective, how to improve our organisation and use of tooling)<br />
<br />
== I18n ==<br />
<br />
View online: https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=I18n%3A<br />
<br />
'''Friday October 28'''<br />
<br />
* 2:00pm-6:00pm - Contributors meetup - https://etherpad.openstack.org/p/barcelona-i18n-meetup<br />
<br />
==Infrastructure==<br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Infrastructure%3A<br />
<br />
'''Wednesday October 26'''<br />
<br />
* 3:05pm-3:45pm: ''Work Session: Firehose'' in AC Hotel - P3 - Montjuic<br />
** https://etherpad.openstack.org/p/ocata-infra-firehose<br />
<br />
'''Thursday October 27'''<br />
<br />
* 2:40pm-3:20pm: ''Fishbowl: Status update and plans for task tracking'' in AC Hotel - P1 - Salon Barcelona<br />
** https://etherpad.openstack.org/p/ocata-infra-community-task-tracking<br />
<br />
'''Friday October 28'''<br />
<br />
All the sessions on Friday are taking place at CCIB - Centre de Convencions Internacional de Barcelona - P1<br />
<br />
* 9:00am-9:40am: ''Work Session: Next steps for infra-cloud'' in Room 115<br />
** https://etherpad.openstack.org/p/ocata-infra-infra-cloud<br />
* 9:50am-10:30am: ''Work Session: Interactive infra-cloud debugging'' in Room 115<br />
** https://etherpad.openstack.org/p/ocata-infra-infra-cloud-debugging<br />
* 11:00am-11:40am: ''Work Session: Test environment expectations'' in Room 115<br />
** https://etherpad.openstack.org/p/ocata-infra-test-env-expectations<br />
* 11:50am-12:30pm: ''Work Session: Xenial jobs transition for stable/newton'' in Room 115<br />
** https://etherpad.openstack.org/p/ocata-infra-xenial-stable-newton<br />
* 2:00pm-6:00pm: ''Contributors Meetup'' in Room 121<br />
** https://etherpad.openstack.org/p/ocata-infra-contributors-meetup<br />
<br />
==Ironic==<br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Ironic:<br />
<br />
'''Wednesday October 26'''<br />
* 5:05pm-5:45pm - API Evolution - https://etherpad.openstack.org/p/ironic-ocata-summit-api-evolution<br />
* 5:55pm-6:35pm - Deploy-time RAID and Advanced Partitioning (w/ Nova) - https://etherpad.openstack.org/p/ironic-ocata-summit-deploy-time-raid<br />
'''Thursday October 27'''<br />
* 9:00am-9:40am - Task Framework - https://etherpad.openstack.org/p/ironic-ocata-summit-task-framework<br />
* 9:50am-10:30am - QA/CI - https://etherpad.openstack.org/p/ironic-ocata-summit-qa<br />
* 1:50pm-2:30pm - Synchronizing Events with Neutron - https://etherpad.openstack.org/p/ironic-ocata-summit-neutron-events<br />
* 2:40pm-3:20pm - Ocata Priorities - https://etherpad.openstack.org/p/ironic-ocata-summit-priorities<br />
'''Friday October 28'''<br />
* 11:00am-11:40am - VNC Console - https://etherpad.openstack.org/p/ironic-ocata-summit-vnc-console<br />
* 11:50am-12:30pm - Unblocking Priority Features - https://etherpad.openstack.org/p/ironic-ocata-summit-unblock-priorities<br />
* 2:00pm-6:00pm - Contributors Meetup - https://etherpad.openstack.org/p/ironic-ocata-summit-contributor-meetup<br />
<br />
== Keystone ==<br />
<br />
View online: https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Keystone%3A<br />
<br />
Wed 26, 4:05pm-4:45pm<br />
Keystone: Newton retrospective (Fishbowl)<br />
https://etherpad.openstack.org/p/keystone-newton-retrospective<br />
<br />
Wed 26, 4:55pm-5:35pm<br />
Keystone: keystone/horizon integration<br />
https://etherpad.openstack.org/p/ocata-keystone-horizon<br />
<br />
Thu 27, 12:00pm-12:40pm<br />
Keystone: Unconference (Fishbowl)<br />
https://etherpad.openstack.org/p/ocata-keystone-unconference<br />
<br />
Thu 27, 12:50pm-1:30pm<br />
Keystone: Ocata priorities (Fishbowl)<br />
https://etherpad.openstack.org/p/ocata-keystone-priorities<br />
<br />
Thu 27, 2:50pm-3:30pm<br />
Keystone: Work session (Federation)<br />
https://etherpad.openstack.org/p/ocata-keystone-federation<br />
<br />
Thu 27, 3:40pm-4:20pm<br />
Keystone: Work session (Testing)<br />
https://etherpad.openstack.org/p/ocata-keystone-testing<br />
<br />
Thu 27, 4:30pm-5:10pm<br />
Keystone: Work session (Documentation)<br />
https://etherpad.openstack.org/p/ocata-keystone-documentation<br />
<br />
Fri 28, 10:00am-10:40am<br />
Keystone: Work session (Authorization)<br />
https://etherpad.openstack.org/p/ocata-keystone-authorization<br />
<br />
Fri 28, 10:50am-11:30am<br />
Keystone: Work session (Authentication)<br />
https://etherpad.openstack.org/p/ocata-keystone-authentication<br />
<br />
Fri 28, 12:00pm-12:40pm<br />
Keystone: Work session (Scaling and Performance)<br />
https://etherpad.openstack.org/p/ocata-keystone-scaling<br />
<br />
Fri 28, 12:50pm-1:30pm<br />
Keystone: Work session (Integration)<br />
https://etherpad.openstack.org/p/ocata-keystone-integration<br />
<br />
Fri 28, 3:00pm-7:00pm<br />
Keystone: Contributors meetup<br />
(No etherpad)<br />
<br />
== Kolla ==<br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Kolla%3A<br />
<br />
Kolla Ocata Summit Master Etherpad - https://etherpad.openstack.org/p/kolla-o-summit-schedule<br />
<br />
'''Wed October 26'''<br />
<br />
* 3:55pm - 4:35pm - Operator experiences - https://etherpad.openstack.org/p/kolla-o-summit-op-experiences<br />
* 5:05pm - 5:45pm - Community roadmap planning for O - https://etherpad.openstack.org/p/kolla-o-summit-community-planning<br />
* 5:55pm - 6:35pm - Goals for Ocata - https://etherpad.openstack.org/p/kolla-o-summit-roadmap<br />
<br />
'''Thu October 27'''<br />
<br />
* 9:00am - 9:40am - Kolla-Kubernetes Architecture - https://etherpad.openstack.org/p/kolla-ocata-summit-kolla-k8s-architecture<br />
* 9:50am - 10:30am - High availability - https://etherpad.openstack.org/p/kolla-o-summit-high-availability<br />
* 1:50pm - 2:30pm - 3rd Party Plugins - https://etherpad.openstack.org/p/kolla-o-summit-3rd-party-plugins<br />
* 2:40pm - 3:20pm - Improving the CI system - https://etherpad.openstack.org/p/kolla-o-summit-improving-ci<br />
* 3:30pm - 4:10pm - Distro requirements, deprecation, levels of support - https://etherpad.openstack.org/p/kolla-o-summit-support-and-deprecation<br />
<br />
'''Fri April 28'''<br />
<br />
* 9:00am - 9:40am - Documentation - https://etherpad.openstack.org/p/kolla-o-summit-documentation<br />
* 9:50am - 10:30am - OSIC review - https://etherpad.openstack.org/p/kolla-o-summit-OSIC-review<br />
* 11:00am - 11:40am - Kolla-Kubernetes Roadmap - https://etherpad.openstack.org/p/kolla-ocata-summit-kolla-k8s-road-map<br />
* 11:50am - 12:30pm - Security VMT threat - https://etherpad.openstack.org/p/kolla-ocata-summit-threat-analysis<br />
* 2:00pm - 6:00pm - Afternoon Contributor Meetup - https://etherpad.openstack.org/p/kolla-ocata-summit-contrib-meetup<br />
<br />
==Manila==<br />
<br />
'''Thu October 27'''<br />
<br />
* 11:00 - 11:40 - Race Conditions (FB) - https://etherpad.openstack.org/p/ocata-manila-race-conditions<br />
* 11:50 - 12:30 - Data Service Jobs Table (FB) - https://etherpad.openstack.org/p/ocata-manila-data-service-jobs-table<br />
* 14:40 - 15:20 - Tempest Direction (WS) - https://etherpad.openstack.org/p/ocata-manila-tempest-direction<br />
<br />
'''Fri April 28'''<br />
<br />
* 11:00 - 11:40 - Access Rules (WS) - https://etherpad.openstack.org/p/ocata-manila-access-rules<br />
* 11:50 - 12:30 - High Availability (WS) - https://etherpad.openstack.org/p/ocata-manila-high-availability<br />
* 14:00 - 18:00 - Contributor Meetup (CM) - https://etherpad.openstack.org/p/ocata-manila-contributor-meetup<br />
<br />
==Neutron==<br />
<br />
'''Wednesday October 26'''<br />
<br />
* 17:05 - 17:45 - Nova/Neutron cross-project session Nova - https://etherpad.openstack.org/p/ocata-nova-neutron-session<br />
* 17:55 - 18:35 - LBaaS retrospective and next steps - https://etherpad.openstack.org/p/ocata-neutron-octavia-lbaas-session<br />
<br />
'''Thursday October 27'''<br />
<br />
* 09:00 - 09:40 - Completing the Newton backlog - https://etherpad.openstack.org/p/ocata-neutron-core-newton-backlog<br />
* 09:50 - 10:30 - Upstream and dowstream CI and testing efforts - https://etherpad.openstack.org/p/ocata-neutron-testing<br />
* 11:00 - 11:40 - End user and operator feedback - https://etherpad.openstack.org/p/ocata-neutron-end-user-operator-feedback<br />
* 11:50 - 12:30 - Neutronclient retrospective and next steps - https://etherpad.openstack.org/p/ocata-neutron-client<br />
* 17:30 - 18:10 - Nova/Neutron cross-project session - https://etherpad.openstack.org/p/ocata-nova-neutron-session<br />
<br />
'''Friday October 28'''<br />
<br />
* 09:00 - 09:40 (Sagrada Familia) Fishbowl Neutron: Neutron-lib retrospective and next steps - https://etherpad.openstack.org/p/ocata-neutron-lib-next-steps<br />
* 09:50 - 10:30 (Sagrada Familia) Fishbowl Neutron: Neutron server: retrospective and next steps - https://etherpad.openstack.org/p/ocata-neutron-server-next<br />
* 11:00 - 11:40 (Sagrada Familia) Fishbowl Neutron: Neutron agents: retrospective and next steps - https://etherpad.openstack.org/p/ocata-neutron-agents<br />
* 11:50 - 12:30 (Sagrada Familia) Fishbowl Neutron: Stadium update - https://etherpad.openstack.org/p/ocata-nova-neutron-stadium<br />
* 14:00 - 18:00 (Room 114) Meetup Neutron: Contributors meetup - https://etherpad.openstack.org/p/ocata-neutron-contributor-meetup<br />
<br />
==Nomad==<br />
<br />
https://etherpad.openstack.org/p/nomad-ocata-design-session<br />
<br />
== Nova ==<br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Nova%3A<br />
<br />
'''Wednesday October 26'''<br />
* 5:05pm-5:45pm - Nova/Neutron cross-project session - https://etherpad.openstack.org/p/ocata-nova-neutron-session<br />
'''Thursday October 27'''<br />
* 9:00am-9:40am - Newton placement service retrospective - https://etherpad.openstack.org/p/ocata-nova-summit-placement-retrospective<br />
* 9:50am-10:30am - Scheduler / resource providers (quantitative) - https://etherpad.openstack.org/p/ocata-nova-summit-resource-providers-quantitative<br />
* '''Break'''<br />
* 11:00am-11:40am - Scheduler / resource provider traits (qualitative) - https://etherpad.openstack.org/p/ocata-nova-summit-resource-providers-qualitative<br />
* 11:50am-12:30pm - Organizing API work for Ocata - https://etherpad.openstack.org/p/ocata-nova-summit-api<br />
* '''Lunch'''<br />
* 1:50pm-2:30pm - Unconference - https://etherpad.openstack.org/p/ocata-nova-summit-unconference<br />
* 2:40pm-3:20pm - Cells v2 (scheduler, searchlight, multi-cell support) - https://etherpad.openstack.org/p/ocata-nova-summit-cellsv2-scheduler<br />
* 3:30pm-4:10pm - Cells v2 (quotas) - https://etherpad.openstack.org/p/ocata-nova-summit-cellsv2-quotas<br />
* '''Break'''<br />
* 4:40pm-5:20pm - Completing vendordata v2 - https://etherpad.openstack.org/p/ocata-nova-summit-vendoradatav2<br />
* 5:30pm-6:10pm - Nova/Neutron cross-project session - https://etherpad.openstack.org/p/ocata-nova-neutron-session<br />
'''Friday October 28'''<br />
* 9:00am-9:40am - Nova/Cinder cross-project session - https://etherpad.openstack.org/p/ocata-nova-summit-cinder-session<br />
* 9:50am-10:30am - Security specs and testing - https://etherpad.openstack.org/p/ocata-nova-summit-security<br />
* '''Break'''<br />
* 11:00am-11:40am - Planning the libvirt imagebackend refactor work - https://etherpad.openstack.org/p/ocata-nova-summit-libvirt-imagebackend<br />
* 11:50am-12:30pm - Ocata priorities and schedule - https://etherpad.openstack.org/p/ocata-nova-summit-priorities<br />
* '''Lunch'''<br />
* 2:00pm-6:00pm - Contributors meetup - https://etherpad.openstack.org/p/ocata-nova-summit-meetup<br />
<br />
== Release Management ==<br />
<br />
'''Wednesday October 26'''<br />
<br />
* 5:55 PM - 6:35 PM -- Work session -- https://etherpad.openstack.org/p/ocata-relmgt-plan<br />
<br />
'''Thursday October 27'''<br />
<br />
* 1:50 PM - 2:30 PM -- Newton Retrospective & Ocata Schedule -- https://etherpad.openstack.org/p/ocata-release-fishbowl<br />
<br />
'''Friday October 28'''<br />
<br />
* 2:00 PM - 6:00 PM -- Contributors Meetup -- https://etherpad.openstack.org/p/ocata-relmgt-plan<br />
<br />
== Searchlight ==<br />
<br />
'''Wednesday October 26'''<br />
2:40pm-3:20pm - Cells v2 (scheduler, searchlight, multi-cell support) - https://etherpad.openstack.org/p/ocata-nova-summit-cellsv2-scheduler<br />
<br />
'''Thursday October 27'''<br />
9:50 - 10:30 - Fishbowl - https://etherpad.openstack.org/p/ocata-searchlight-summit-plugins-fishbowl<br />
11:00 - 11:40 - Working room<br />
11:50 - 12:30 - Working room<br />
<br />
== Senlin ==<br />
<br />
'''Friday October 28'''<br />
* 9:00am-9:40am - Senlin work session: policy/profile versioning - https://etherpad.openstack.org/p/ocata-summit-senlin-profile-policy-versioning<br />
* 9:50am-10:30am - Senlin work session: versioned everything - https://etherpad.openstack.org/p/ocata-summit-senlin-versioned-everything<br />
* '''Break'''<br />
* 11:00am-11:40am - Senlin work session: container cluster - https://etherpad.openstack.org/p/ocata-summit-senlin-container-cluster<br />
* 11:50am-12:30pm - Senlin work session: HA - https://etherpad.openstack.org/p/ocata-summit-senlin-HA<br />
<br />
== Stewardship Working Group ==<br />
<br />
'''Wed October 26'''<br />
<br />
*12:15pm - 12:55pm - Cross Project workshops: "Re-inventing the TC", the Stewardship Working Group discussion - https://etherpad.openstack.org/p/Barcelona-SWG-cp<br />
<br />
== Tricircle ==<br />
<br />
Venue: https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=tricircle%3A<br />
<br />
ideas: https://etherpad.openstack.org/p/ocata-tricircle-sessions-planning<br />
<br />
'''Thu October 27'''<br />
<br />
* 5:30pm - 6:10pm - Cross Neutron networking automation: feature review and what's to do in Ocata : https://etherpad.openstack.org/p/ocata-tricircle-feature-review-priorities-roadmap<br />
<br />
'''Fri October 28'''<br />
<br />
* 9:00am - 9:40am - Ocata work session: https://etherpad.openstack.org/p/ocata-tricircle-work-session<br />
* 9:40am - 12:00pm - Tricircle contributors meetup<br />
<br />
== TripleO ==<br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=tripleo%3A<br />
<br />
https://etherpad.openstack.org/p/ocata-tripleo<br />
<br />
===== TripleO: Containers - Current Status and Roadmap =====<br />
Wed 26 3:55pm-4:35pm<br />
https://etherpad.openstack.org/p/ocata-tripleo-containers<br />
<br />
=====TripleO: Work Session - Growing the team=====<br />
Wed 26 5:05pm-5:45pm -<br />
https://etherpad.openstack.org/p/ocata-tripleo-team-growing<br />
<br />
===== TripleO: Work Session - CI - current status and roadmap=====<br />
Wed 26 5:55pm-6:35pm -<br />
https://etherpad.openstack.org/p/ocata-tripleo-ci<br />
<br />
===== TripleO: Upgrades - current status and roadmap=====<br />
Thu 27 1:50pm-2:30pm -<br />
https://etherpad.openstack.org/p/ocata-tripleo-upgrades<br />
<br />
===== TripleO: Work Session - Composable Undercloud deployment with Heat=====<br />
Fri 28 9:00am-9:20am -<br />
https://etherpad.openstack.org/p/tripleo-composable-undercloud<br />
<br />
===== TripleO: Work Session - GUI, CLI, Validations current status, roadmap, requirements=====<br />
Fri 28 9:20am-9:40am -<br />
https://etherpad.openstack.org/p/gui-ocata<br />
<br />
===== TripleO: Work Session - Multiple topics=====<br />
Fri 28 9:50am-10:30am -<br />
Blueprints, specs, tools and Ocata summary.<br />
See bottom of https://etherpad.openstack.org/p/ocata-tripleo<br />
<br />
== Trove ==<br />
<br />
https://etherpad.openstack.org/p/trove-barcelona-sessions <br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Trove<br />
<br />
==Watcher==<br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Watcher<br />
<br />
'''Wed October 26'''<br />
<br />
* 5.55pm - 6.35pm - [https://etherpad.openstack.org/p/watcher-ocata-design-session Existing & new infrastructure optimization strategies]<br />
<br />
'''Thu October 27'''<br />
<br />
* 9.50am - 10.30am - [https://etherpad.openstack.org/p/watcher-ocata-design-session Watcher Newton retrospective]<br />
<br />
'''Fri October 28'''<br />
<br />
* 11am - 12.30pm - [https://etherpad.openstack.org/p/watcher-ocata-design-session Ocata priorities & roadmap]<br />
* 2pm - 6pm - [https://etherpad.openstack.org/p/watcher-ocata-design-session Contributors meetup]<br />
<br />
==Zaqar==<br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Zaqar<br />
<br />
'''Thursday, October 27'''<br />
<br />
9:50am-10:30am [https://etherpad.openstack.org/p/zaqar-ocata-performance Zaqar's profile and performance gate]<br />
<br />
4:40pm-5:00pm [https://etherpad.openstack.org/p/zaqar-ocata-notification-delivery-policy Notification delivery policy]<br />
<br />
5:00pm-5:20pm [https://etherpad.openstack.org/p/zaqar-ocata-purge-queue Purge queue]<br />
<br />
5:30pm-6:10pm [https://etherpad.openstack.org/p/zaqar-ocata-subscription-confirmation-email Subscription Confirmation - Email]<br />
...</div>Travis Tripphttps://wiki.openstack.org/w/index.php?title=Design_Summit/Ocata/Etherpads&diff=135135Design Summit/Ocata/Etherpads2016-10-19T04:01:10Z<p>Travis Tripp: </p>
<hr />
<div>[[Category:Summit]]<br />
[[Category:Ocata]]<br />
[[Category:Etherpad]]<br />
<br />
The grand list of all the Ocata Design Summit sessions. Please include Date, Time, and links to etherpads when adding new content.<br />
<br />
<div style="column-count:3;-moz-column-count:3;-webkit-column-count:3"><br />
__TOC__<br />
</div><br />
<br />
== Event intro/closure ==<br />
* Tue Oct 25 11:25am - Design Summit 101 - https://etherpad.openstack.org/p/ocata-design-summit-101<br />
* Fri Oct 28 12:30pm - Barcelona feedback session - https://etherpad.openstack.org/p/BCN-summit-feedback<br />
<br />
<br />
==Architecture Working Group==<br />
<br />
'''Wednesday, October 26'''<br />
* 11:25am-12:05pm - Cross Project workshops: Architecture Working Group Fishbowl - https://etherpad.openstack.org/p/ocata-summit-arch-wg<br />
<br />
==Barbican==<br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Barbican<br />
<br />
'''Thursday, October 27'''<br />
* 11:00am-11:40am - (128) Barbican: User and Operator Feedback Fishbowl - https://etherpad.openstack.org/p/barbican-ocata-summit-roadmap<br />
* 11:50am-12:30pm - (Montjuic) Barbican: Work Session (Roadmap) - https://etherpad.openstack.org/p/barbican-ocata-design-summit<br />
* 01:50pm-02:30pm - (130) Barbican: Work Session (Cross Project) - https://etherpad.openstack.org/p/barbican-ocata-design-summit<br />
<br />
'''Friday, October 28'''<br />
* 09:00am-09:40am - (129) Barbican: Work Session (Security) - https://etherpad.openstack.org/p/barbican-ocata-design-summit<br />
* 09:50am-10:30am - (129) Barbican: Work Session (TBD) - https://etherpad.openstack.org/p/barbican-ocata-design-summit<br />
* 11:00am-11:40am - (129) Barbican: Work Session (Resources) - https://etherpad.openstack.org/p/barbican-ocata-design-summit<br />
* 11:50am-12:30pm - (129) Barbican: Work Session (Planning) - https://etherpad.openstack.org/p/barbican-ocata-design-summit<br />
<br />
==Cinder==<br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Cinder<br />
<br />
'''Wednesday October 26'''<br />
* 3:05pm-3:45pm - Cinder Test Working Group progress and status - https://etherpad.openstack.org/p/Cinder-testing<br />
* 3:55-4:35 - Driver bug fixes for unsupported OpenStack releases - https://etherpad.openstack.org/p/ocata-cinder-summit-stabledriverfixes<br />
* 5:05-5:45 - Stand alone Cinder service - https://etherpad.openstack.org/p/ocata-cinder-summit-standalonecinder<br />
* 5:55-6:35 - Pike (and beyond) planning - https://etherpad.openstack.org/p/ocata-cinder-summit-pikeplanning<br />
'''Thursday October 27'''<br />
* 9:00-9:40 - Replication - https://etherpad.openstack.org/p/ocata-cinder-summit-replication<br />
* 9:50-10:30 - Cinder-Nova API changes - https://etherpad.openstack.org/p/ocata-cinder-summit-attachdetach<br />
'''Friday October 28'''<br />
* 9:00am-9:40am - Nova/Cinder cross-project session - https://etherpad.openstack.org/p/ocata-nova-summit-cinder-session<br />
* 11:00am-11:40am - NFS snapshots - https://etherpad.openstack.org/p/ocata-cinder-summit-nfssnapshots<br />
* 11:50am-12:30pm - Cinder backup improvements - https://etherpad.openstack.org/p/ocata-cinder-summit-backupimprovements<br />
* '''Lunch'''<br />
* 2:00pm-6:00pm - Contributors meetup - https://etherpad.openstack.org/p/ocata-cinder-summit-meetup<br />
<br />
==Cross Project Sessions==<br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Cross+Project<br />
<br />
'''Tuesday October 25'''<br />
<br />
* 3:55 PM - 4:35 PM -- Experiences with Project Decomposition, Scaling Review Teams and Subsystem Maintainers (Part 1) -- https://etherpad.openstack.org/p/ocata-summit-xp-scaling-review-teams<br />
* 5:05 PM - 5:45 PM -- Discuss Community-Wide Release Goals -- https://etherpad.openstack.org/p/community-goals<br />
* 5:55 PM - 6:35 PM -- Python 3 Integration Testing -- https://etherpad.openstack.org/p/ocata-python-3<br />
<br />
'''Wednesday October 26'''<br />
<br />
* 11:25 AM - 12:05 PM -- Ocata goal: Remove Incubated Oslo Code -- https://etherpad.openstack.org/p/ocata-goal-oslo<br />
* 2:15 PM - 2:55 PM -- Experiences with project decomposition, scaling review teams and subsystem maintainers (part 2) -- https://etherpad.openstack.org/p/ocata-summit-xp-scaling-review-teams<br />
<br />
==Documentation==<br />
<br />
See these and more documentation sessions in schedule: https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Documentation<br />
<br />
'''Wednesday, October 26'''<br />
* 5:05pm-5:45pm - User Guides Working Group - https://etherpad.openstack.org/p/BCN-Docs-UserGuidesWG<br />
'''Thursday October 27'''<br />
* 2:40pm-3:20pm - Newton Retrospective - https://etherpad.openstack.org/p/BCN-Docs-NewtonRetro <br />
* 3:30pm-4:10pm - Social Things - https://etherpad.openstack.org/p/BCN-Docs-Social <br />
* 4:40pm-5:20pm - Training Labs - https://etherpad.openstack.org/p/BCN-Docs-Training <br />
* 5:30pm-6:10pm - Toolchain - https://etherpad.openstack.org/p/BCN-Docs-Toolchain <br />
'''Friday October 28'''<br />
* 11:00am-11:40am - API Working Group - https://etherpad.openstack.org/p/BCN-Docs-APIWG <br />
* 11:50am-12:30pm - Ocata Planning Working Group - https://etherpad.openstack.org/p/BCN-Docs-OcataPlanningWG <br />
* 2:00pm-6:00pm - Contributors Meetup - no etherpad<br />
<br />
== Gluon ==<br />
<br />
View online: https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Gluon%3A<br />
<br />
Fri 28, 9:50am-10:30am: Gluon Work Session https://etherpad.openstack.org/p/ocata-gluon-work-plan<br />
<br />
==Heat==<br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Heat<br />
<br />
'''Thursday October 27'''<br />
<br />
* 11:00am-11:40am - Convergence Phase 1 - What worked, What didn't - https://etherpad.openstack.org/p/heat-ocata-convergence-phase-1<br />
* 11:50am-12:30pm - Performance Scalability Improvements - I (Issues with very large stacks) - https://etherpad.openstack.org/p/heat-ocata-performance-scalability-1<br />
* 2:40pm-3:20pm - Performance Scalability Improvements - II - https://etherpad.openstack.org/p/heat-ocata-performance-scalability-2<br />
* 3:30pm-4:10pm - Convergence Phase 2 - https://etherpad.openstack.org/p/heat-ocata-convergence-phase-2<br />
* 4:40pm-5:20pm - Validation Improvements - https://etherpad.openstack.org/p/heat-ocata-validation-improvements<br />
<br />
'''Friday October 28'''<br />
<br />
* 9:00am-9:40am - RPC versioning and hitless upgrades - https://etherpad.openstack.org/p/heat-ocata-hitless-upgrades<br />
* 9:50am-10:30am - API Microversions - https://etherpad.openstack.org/p/heat-ocata-api-microversions<br />
* 11:00am-11:40am - Heat Integration tests, Tempest and test candidates for DefCore Interop Testing - https://etherpad.openstack.org/p/heat-ocata-test-coverage<br />
* 11:50am-12:30pm - Improve maturity of heat - https://etherpad.openstack.org/p/heat-ocata-improve-maturity<br />
* 2:00pm-6:00pm - Contributors meetup - https://etherpad.openstack.org/p/ocata-heat-contributor-meetup<br />
<br />
<br />
==Horizon==<br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Horizon%3A<br />
<br />
'''Wednesday October 26'''<br />
<br />
* 16:55-17:35 - Cross-project meeting with Horizon and Keystone - https://etherpad.openstack.org/p/ocata-keystone-horizon<br />
<br />
'''Thursday October 27'''<br />
<br />
* 09:00-09:40 - Operator/ Plugin feedback - https://etherpad.openstack.org/p/horizon-ocata-feedback<br />
* 09:50-10:30 - Newton retrospective, Ocata timeline, Dependencies, Testing!! and Selenium :-( - https://etherpad.openstack.org/p/horizon-ocata-planning<br />
* 16:40-17:20 - Cross-project topics; Glance, Identity, K2K Federation, Quotas - https://etherpad.openstack.org/p/horizon-ocata-cross-project<br />
* 17:30-18:10 - AngularJS state of play (where we're going, status of panels, what CORS means, do we want a thin service proxy, deprecations, etc.) - https://etherpad.openstack.org/p/horizon-ocata-angularjs<br />
<br />
'''Friday October 28'''<br />
<br />
* 11:50-12:30 - Priority setting (and TODO review if we have time) - https://etherpad.openstack.org/p/horizon-ocata-priorities<br />
* 14:00-18:00 - General project discussion (Newton retrospective, how to improve our organisation and use of tooling)<br />
<br />
== I18n ==<br />
<br />
View online: https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=I18n%3A<br />
<br />
'''Friday October 28'''<br />
<br />
* 2:00pm-6:00pm - Contributors meetup - https://etherpad.openstack.org/p/barcelona-i18n-meetup<br />
<br />
==Infrastructure==<br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Infrastructure%3A<br />
<br />
'''Wednesday October 26'''<br />
<br />
* 3:05pm-3:45pm: ''Work Session: Firehose'' in AC Hotel - P3 - Montjuic<br />
** https://etherpad.openstack.org/p/ocata-infra-firehose<br />
<br />
'''Thursday October 27'''<br />
<br />
* 2:40pm-3:20pm: ''Fishbowl: Status update and plans for task tracking'' in AC Hotel - P1 - Salon Barcelona<br />
** https://etherpad.openstack.org/p/ocata-infra-community-task-tracking<br />
<br />
'''Friday October 28'''<br />
<br />
All the sessions on Friday are taking place at CCIB - Centre de Convencions Internacional de Barcelona - P1<br />
<br />
* 9:00am-9:40am: ''Work Session: Next steps for infra-cloud'' in Room 115<br />
** https://etherpad.openstack.org/p/ocata-infra-infra-cloud<br />
* 9:50am-10:30am: ''Work Session: Interactive infra-cloud debugging'' in Room 115<br />
** https://etherpad.openstack.org/p/ocata-infra-infra-cloud-debugging<br />
* 11:00am-11:40am: ''Work Session: Test environment expectations'' in Room 115<br />
** https://etherpad.openstack.org/p/ocata-infra-test-env-expectations<br />
* 11:50am-12:30pm: ''Work Session: Xenial jobs transition for stable/newton'' in Room 115<br />
** https://etherpad.openstack.org/p/ocata-infra-xenial-stable-newton<br />
* 2:00pm-6:00pm: ''Contributors Meetup'' in Room 121<br />
** https://etherpad.openstack.org/p/ocata-infra-contributors-meetup<br />
<br />
==Ironic==<br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Ironic:<br />
<br />
'''Wednesday October 26'''<br />
* 5:05pm-5:45pm - API Evolution - https://etherpad.openstack.org/p/ironic-ocata-summit-api-evolution<br />
* 5:55pm-6:35pm - Deploy-time RAID and Advanced Partitioning (w/ Nova) - https://etherpad.openstack.org/p/ironic-ocata-summit-deploy-time-raid<br />
'''Thursday October 27'''<br />
* 9:00am-9:40am - Task Framework - https://etherpad.openstack.org/p/ironic-ocata-summit-task-framework<br />
* 9:50am-10:30am - QA/CI - https://etherpad.openstack.org/p/ironic-ocata-summit-qa<br />
* 1:50pm-2:30pm - Synchronizing Events with Neutron - https://etherpad.openstack.org/p/ironic-ocata-summit-neutron-events<br />
* 2:40pm-3:20pm - Ocata Priorities - https://etherpad.openstack.org/p/ironic-ocata-summit-priorities<br />
'''Friday October 28'''<br />
* 11:00am-11:40am - VNC Console - https://etherpad.openstack.org/p/ironic-ocata-summit-vnc-console<br />
* 11:50am-12:30pm - Unblocking Priority Features - https://etherpad.openstack.org/p/ironic-ocata-summit-unblock-priorities<br />
* 2:00pm-6:00pm - Contributors Meetup - https://etherpad.openstack.org/p/ironic-ocata-summit-contributor-meetup<br />
<br />
== Keystone ==<br />
<br />
View online: https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Keystone%3A<br />
<br />
Wed 26, 4:05pm-4:45pm<br />
Keystone: Newton retrospective (Fishbowl)<br />
https://etherpad.openstack.org/p/keystone-newton-retrospective<br />
<br />
Wed 26, 4:55pm-5:35pm<br />
Keystone: keystone/horizon integration<br />
https://etherpad.openstack.org/p/ocata-keystone-horizon<br />
<br />
Thu 27, 12:00pm-12:40pm<br />
Keystone: Unconference (Fishbowl)<br />
https://etherpad.openstack.org/p/ocata-keystone-unconference<br />
<br />
Thu 27, 12:50pm-1:30pm<br />
Keystone: Ocata priorities (Fishbowl)<br />
https://etherpad.openstack.org/p/ocata-keystone-priorities<br />
<br />
Thu 27, 2:50pm-3:30pm<br />
Keystone: Work session (Federation)<br />
https://etherpad.openstack.org/p/ocata-keystone-federation<br />
<br />
Thu 27, 3:40pm-4:20pm<br />
Keystone: Work session (Testing)<br />
https://etherpad.openstack.org/p/ocata-keystone-testing<br />
<br />
Thu 27, 4:30pm-5:10pm<br />
Keystone: Work session (Documentation)<br />
https://etherpad.openstack.org/p/ocata-keystone-documentation<br />
<br />
Fri 28, 10:00am-10:40am<br />
Keystone: Work session (Authorization)<br />
https://etherpad.openstack.org/p/ocata-keystone-authorization<br />
<br />
Fri 28, 10:50am-11:30am<br />
Keystone: Work session (Authentication)<br />
https://etherpad.openstack.org/p/ocata-keystone-authentication<br />
<br />
Fri 28, 12:00pm-12:40pm<br />
Keystone: Work session (Scaling and Performance)<br />
https://etherpad.openstack.org/p/ocata-keystone-scaling<br />
<br />
Fri 28, 12:50pm-1:30pm<br />
Keystone: Work session (Integration)<br />
https://etherpad.openstack.org/p/ocata-keystone-integration<br />
<br />
Fri 28, 3:00pm-7:00pm<br />
Keystone: Contributors meetup<br />
(No etherpad)<br />
<br />
== Kolla ==<br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Kolla%3A<br />
<br />
Kolla Ocata Summit Master Etherpad - https://etherpad.openstack.org/p/kolla-o-summit-schedule<br />
<br />
'''Wed October 26'''<br />
<br />
* 3:55pm - 4:35pm - Operator experiences - https://etherpad.openstack.org/p/kolla-o-summit-op-experiences<br />
* 5:05pm - 5:45pm - Community roadmap planning for O - https://etherpad.openstack.org/p/kolla-o-summit-community-planning<br />
* 5:55pm - 6:35pm - Goals for Ocata - https://etherpad.openstack.org/p/kolla-o-summit-roadmap<br />
<br />
'''Thu October 27'''<br />
<br />
* 9:00am - 9:40am - Kolla-Kubernetes Architecture - https://etherpad.openstack.org/p/kolla-ocata-summit-kolla-k8s-architecture<br />
* 9:50am - 10:30am - High availability - https://etherpad.openstack.org/p/kolla-o-summit-high-availability<br />
* 1:50pm - 2:30pm - 3rd Party Plugins - https://etherpad.openstack.org/p/kolla-o-summit-3rd-party-plugins<br />
* 2:40pm - 3:20pm - Improving the CI system - https://etherpad.openstack.org/p/kolla-o-summit-improving-ci<br />
* 3:30pm - 4:10pm - Distro requirements, deprecation, levels of support - https://etherpad.openstack.org/p/kolla-o-summit-support-and-deprecation<br />
<br />
'''Fri October 28'''<br />
<br />
* 9:00am - 9:40am - Documentation - https://etherpad.openstack.org/p/kolla-o-summit-documentation<br />
* 9:50am - 10:30am - OSIC review - https://etherpad.openstack.org/p/kolla-o-summit-OSIC-review<br />
* 11:00am - 11:40am - Kolla-Kubernetes Roadmap - https://etherpad.openstack.org/p/kolla-ocata-summit-kolla-k8s-road-map<br />
* 11:50am - 12:30pm - Security VMT threat - https://etherpad.openstack.org/p/kolla-ocata-summit-threat-analysis<br />
* 2:00pm - 6:00pm - Afternoon Contributor Meetup - https://etherpad.openstack.org/p/kolla-ocata-summit-contrib-meetup<br />
<br />
==Manila==<br />
<br />
'''Thu October 27'''<br />
<br />
* 11:00 - 11:40 - Race Conditions (FB) - https://etherpad.openstack.org/p/ocata-manila-race-conditions<br />
* 11:50 - 12:30 - Data Service Jobs Table (FB) - https://etherpad.openstack.org/p/ocata-manila-data-service-jobs-table<br />
* 14:40 - 15:20 - Tempest Direction (WS) - https://etherpad.openstack.org/p/ocata-manila-tempest-direction<br />
<br />
'''Fri October 28'''<br />
<br />
* 11:00 - 11:40 - Access Rules (WS) - https://etherpad.openstack.org/p/ocata-manila-access-rules<br />
* 11:50 - 12:30 - High Availability (WS) - https://etherpad.openstack.org/p/ocata-manila-high-availability<br />
* 14:00 - 18:00 - Contributor Meetup (CM) - https://etherpad.openstack.org/p/ocata-manila-contributor-meetup<br />
<br />
==Neutron==<br />
<br />
'''Wednesday October 26'''<br />
<br />
* 17:05 - 17:45 - Nova/Neutron cross-project session Nova - https://etherpad.openstack.org/p/ocata-nova-neutron-session<br />
* 17:55 - 18:35 - LBaaS retrospective and next steps - https://etherpad.openstack.org/p/ocata-neutron-octavia-lbaas-session<br />
<br />
'''Thursday October 27'''<br />
<br />
* 09:00 - 09:40 - Completing the Newton backlog - https://etherpad.openstack.org/p/ocata-neutron-core-newton-backlog<br />
* 09:50 - 10:30 - Upstream and downstream CI and testing efforts - https://etherpad.openstack.org/p/ocata-neutron-testing<br />
* 11:00 - 11:40 - End user and operator feedback - https://etherpad.openstack.org/p/ocata-neutron-end-user-operator-feedback<br />
* 11:50 - 12:30 - Neutronclient retrospective and next steps - https://etherpad.openstack.org/p/ocata-neutron-client<br />
* 17:30 - 18:10 - Nova/Neutron cross-project session - https://etherpad.openstack.org/p/ocata-nova-neutron-session<br />
<br />
'''Friday October 28'''<br />
<br />
* 09:00 - 09:40 (Sagrada Familia) Fishbowl Neutron: Neutron-lib retrospective and next steps - https://etherpad.openstack.org/p/ocata-neutron-lib-next-steps<br />
* 09:50 - 10:30 (Sagrada Familia) Fishbowl Neutron: Neutron server: retrospective and next steps - https://etherpad.openstack.org/p/ocata-neutron-server-next<br />
* 11:00 - 11:40 (Sagrada Familia) Fishbowl Neutron: Neutron agents: retrospective and next steps - https://etherpad.openstack.org/p/ocata-neutron-agents<br />
* 11:50 - 12:30 (Sagrada Familia) Fishbowl Neutron: Stadium update - https://etherpad.openstack.org/p/ocata-nova-neutron-stadium<br />
* 14:00 - 18:00 (Room 114) Meetup Neutron: Contributors meetup - https://etherpad.openstack.org/p/ocata-neutron-contributor-meetup<br />
<br />
==Nomad==<br />
<br />
https://etherpad.openstack.org/p/nomad-ocata-design-session<br />
<br />
== Nova ==<br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Nova%3A<br />
<br />
'''Wednesday October 26'''<br />
* 5:05pm-5:45pm - Nova/Neutron cross-project session - https://etherpad.openstack.org/p/ocata-nova-neutron-session<br />
'''Thursday October 27'''<br />
* 9:00am-9:40am - Newton placement service retrospective - https://etherpad.openstack.org/p/ocata-nova-summit-placement-retrospective<br />
* 9:50am-10:30am - Scheduler / resource providers (quantitative) - https://etherpad.openstack.org/p/ocata-nova-summit-resource-providers-quantitative<br />
* '''Break'''<br />
* 11:00am-11:40am - Scheduler / resource provider traits (qualitative) - https://etherpad.openstack.org/p/ocata-nova-summit-resource-providers-qualitative<br />
* 11:50am-12:30pm - Organizing API work for Ocata - https://etherpad.openstack.org/p/ocata-nova-summit-api<br />
* '''Lunch'''<br />
* 1:50pm-2:30pm - Unconference - https://etherpad.openstack.org/p/ocata-nova-summit-unconference<br />
* 2:40pm-3:20pm - Cells v2 (scheduler, searchlight, multi-cell support) - https://etherpad.openstack.org/p/ocata-nova-summit-cellsv2-scheduler<br />
* 3:30pm-4:10pm - Cells v2 (quotas) - https://etherpad.openstack.org/p/ocata-nova-summit-cellsv2-quotas<br />
* '''Break'''<br />
* 4:40pm-5:20pm - Completing vendordata v2 - https://etherpad.openstack.org/p/ocata-nova-summit-vendoradatav2<br />
* 5:30pm-6:10pm - Nova/Neutron cross-project session - https://etherpad.openstack.org/p/ocata-nova-neutron-session<br />
'''Friday October 28'''<br />
* 9:00am-9:40am - Nova/Cinder cross-project session - https://etherpad.openstack.org/p/ocata-nova-summit-cinder-session<br />
* 9:50am-10:30am - Security specs and testing - https://etherpad.openstack.org/p/ocata-nova-summit-security<br />
* '''Break'''<br />
* 11:00am-11:40am - Planning the libvirt imagebackend refactor work - https://etherpad.openstack.org/p/ocata-nova-summit-libvirt-imagebackend<br />
* 11:50am-12:30pm - Ocata priorities and schedule - https://etherpad.openstack.org/p/ocata-nova-summit-priorities<br />
* '''Lunch'''<br />
* 2:00pm-6:00pm - Contributors meetup - https://etherpad.openstack.org/p/ocata-nova-summit-meetup<br />
<br />
== Release Management ==<br />
<br />
'''Wednesday October 26'''<br />
<br />
* 5:55 PM - 6:35 PM -- Work session -- https://etherpad.openstack.org/p/ocata-relmgt-plan<br />
<br />
'''Thursday October 27'''<br />
<br />
* 1:50 PM - 2:30 PM -- Newton Retrospective & Ocata Schedule -- https://etherpad.openstack.org/p/ocata-release-fishbowl<br />
<br />
'''Friday October 28'''<br />
<br />
* 2:00 PM - 6:00 PM -- Contributors Meetup -- https://etherpad.openstack.org/p/ocata-relmgt-plan<br />
<br />
== Searchlight ==<br />
<br />
'''Wednesday October 26'''<br />
2:40pm-3:20pm - Cells v2 (scheduler, searchlight, multi-cell support) - https://etherpad.openstack.org/p/ocata-nova-summit-cellsv2-scheduler<br />
<br />
'''Thursday October 27'''<br />
9:50 - 10:30 - Fishbowl<br />
11:00 - 11:40 - Working room<br />
11:50 - 12:30 - Working room 2<br />
<br />
== Senlin ==<br />
<br />
'''Friday October 28'''<br />
* 9:00am-9:40am - Senlin work session: policy/profile versioning - https://etherpad.openstack.org/p/ocata-summit-senlin-profile-policy-versioning<br />
* 9:50am-10:30am - Senlin work session: versioned everything - https://etherpad.openstack.org/p/ocata-summit-senlin-versioned-everything<br />
* '''Break'''<br />
* 11:00am-11:40am - Senlin work session: container cluster - https://etherpad.openstack.org/p/ocata-summit-senlin-container-cluster<br />
* 11:50am-12:30pm - Senlin work session: HA - https://etherpad.openstack.org/p/ocata-summit-senlin-HA<br />
<br />
== Stewardship Working Group ==<br />
<br />
'''Wed October 26'''<br />
<br />
*12:15pm - 12:55pm - Cross Project workshops: "Re-inventing the TC", the Stewardship Working Group discussion - https://etherpad.openstack.org/p/Barcelona-SWG-cp<br />
<br />
== Tricircle ==<br />
<br />
Venue: https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=tricircle%3A<br />
<br />
ideas: https://etherpad.openstack.org/p/ocata-tricircle-sessions-planning<br />
<br />
'''Thu October 27'''<br />
<br />
* 5:30pm - 6:10pm - Cross Neutron networking automation: feature review and what's to do in Ocata : https://etherpad.openstack.org/p/ocata-tricircle-feature-review-priorities-roadmap<br />
<br />
'''Fri October 28'''<br />
<br />
* 9:00am - 9:40am - Ocata work session: https://etherpad.openstack.org/p/ocata-tricircle-work-session<br />
* 9:40am - 12:00pm - Tricircle contributors meetup<br />
<br />
== TripleO ==<br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=tripleo%3A<br />
<br />
https://etherpad.openstack.org/p/ocata-tripleo<br />
<br />
===== TripleO: Containers - Current Status and Roadmap =====<br />
Wed 26 3:55pm-4:35pm<br />
https://etherpad.openstack.org/p/ocata-tripleo-containers<br />
<br />
=====TripleO: Work Session - Growing the team=====<br />
Wed 26 5:05pm-5:45pm -<br />
https://etherpad.openstack.org/p/ocata-tripleo-team-growing<br />
<br />
===== TripleO: Work Session - CI - current status and roadmap=====<br />
Wed 26 5:55pm-6:35pm -<br />
https://etherpad.openstack.org/p/ocata-tripleo-ci<br />
<br />
===== TripleO: Upgrades - current status and roadmap=====<br />
Thu 27 1:50pm-2:30pm -<br />
https://etherpad.openstack.org/p/ocata-tripleo-upgrades<br />
<br />
===== TripleO: Work Session - Composable Undercloud deployment with Heat=====<br />
Fri 28 9:00am-9:20am -<br />
https://etherpad.openstack.org/p/tripleo-composable-undercloud<br />
<br />
===== TripleO: Work Session - GUI, CLI, Validations current status, roadmap, requirements=====<br />
Fri 28 9:20am-9:40am -<br />
https://etherpad.openstack.org/p/gui-ocata<br />
<br />
===== TripleO: Work Session - Multiple topics=====<br />
Fri 28 9:50am-10:30am -<br />
Blueprints, specs, tools and Ocata summary.<br />
See bottom of https://etherpad.openstack.org/p/ocata-tripleo<br />
<br />
== Trove ==<br />
<br />
https://etherpad.openstack.org/p/trove-barcelona-sessions <br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Trove<br />
<br />
==Watcher==<br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Watcher<br />
<br />
'''Wed October 26'''<br />
<br />
* 5.55pm - 6.35pm - [https://etherpad.openstack.org/p/watcher-ocata-design-session Existing & new infrastructure optimization strategies]<br />
<br />
'''Thu October 27'''<br />
<br />
* 9.50am - 10.30am - [https://etherpad.openstack.org/p/watcher-ocata-design-session Watcher Newton retrospective]<br />
<br />
'''Fri October 28'''<br />
<br />
* 11am - 12.30pm - [https://etherpad.openstack.org/p/watcher-ocata-design-session Ocata priorities & roadmap]<br />
* 2pm - 6pm - [https://etherpad.openstack.org/p/watcher-ocata-design-session Contributors meetup]<br />
<br />
==Zaqar==<br />
<br />
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Zaqar<br />
<br />
'''Thursday, October 27'''<br />
<br />
9:50am-10:30am [https://etherpad.openstack.org/p/zaqar-ocata-performance Zaqar's profile and performance gate]<br />
<br />
4:40pm-5:00pm [https://etherpad.openstack.org/p/zaqar-ocata-notification-delivery-policy Notification delivery policy]<br />
<br />
5:00pm-5:20pm [https://etherpad.openstack.org/p/zaqar-ocata-purge-queue Purge queue]<br />
<br />
5:30pm-6:10pm [https://etherpad.openstack.org/p/zaqar-ocata-subscription-confirmation-email Subscription Confirmation - Email]<br />
...</div>Travis Tripphttps://wiki.openstack.org/w/index.php?title=Graffiti/Architecture&diff=133100Graffiti/Architecture2016-09-17T03:16:43Z<p>Travis Tripp: /* Workflow and Components */</p>
<hr />
<div><br />
== Graffiti Architecture Concepts ==<br />
<br />
At its core, Graffiti is intended to enable better metadata collaboration across services and projects for OpenStack users. Its initial focus is to provide cross-service metadata "tagging" and search aggregation for cloud resources.<br />
<br />
== Status ==<br />
<br />
The Graffiti project was proposed at the Atlanta (Juno) OpenStack summit. Since then, many of its concepts have been adopted and implemented as part of several OpenStack projects (Glance, Searchlight, Horizon). The information below is a legacy overview to help understand how the various components come together. For more info, see https://wiki.openstack.org/wiki/Graffiti#Current_Status<br />
<br />
==== Workflow and Components ====<br />
<br />
# Load your custom metadata definitions (called property types or capability types)<br />
## Into the Graffiti central dictionary <br />
## Or configure Graffiti plugins to include / proxy existing definitions provided by the various services<br />
# "Tag" the resources in the cloud with your properties and capabilities<br />
# Let users find the resources with your desired properties and capabilities<br />
<br />
* Repeat across multiple cloud installations for capability portability.<br />
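The three workflow steps above can be sketched in plain Python. Everything here is hypothetical: the data structures and the find helper are illustrative stand-ins, not a real Graffiti API.<br />
<br />
```python
# Step 1: load metadata definitions (capability types) into a central dictionary.
# The capability names and property values below are invented for illustration.
dictionary = {
    "CPU::Pinning": {"properties": {"policy": ["dedicated", "shared"]}},
    "Storage::SSD": {"properties": {}},
}

# Step 2: "tag" cloud resources with capabilities and property values.
resources = [
    {"id": "flavor-1", "capabilities": {"CPU::Pinning": {"policy": "dedicated"}}},
    {"id": "flavor-2", "capabilities": {"Storage::SSD": {}}},
]

# Step 3: let users find resources by capability and property values.
def find(resources, capability, **props):
    """Return ids of resources tagged with `capability` matching all given properties."""
    matches = []
    for res in resources:
        cap = res["capabilities"].get(capability)
        if cap is not None and all(cap.get(k) == v for k, v in props.items()):
            matches.append(res["id"])
    return matches

print(find(resources, "CPU::Pinning", policy="dedicated"))  # ['flavor-1']
```
<br />
The same shape repeats across services: the dictionary is the shared agreement on definitions, resources carry the tags, and search filters on them.<br />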
<br />
[[File:Graffiti-Architecture-ConceptOverlay-Simple.png]]<br />
<br />
=== Base Concepts ===<br />
<br />
* Various OpenStack services provide techniques to abstract low-level resource selection to one level higher, such as flavors, volume types, or artifact types. These resource abstractions often allow "metadata" in the form of key-value pair properties to further specialize and describe instances of each resource type. However, collaborating on those properties is largely a disconnected and difficult process, often involving searching wikis and opening the source code, and it becomes more difficult as a cloud's scale grows. In addition, the properties can often apply to resources from several different services. Graffiti makes this easier by creating the following concepts: <br />
** ''[[Graffiti/Dictionary#Capability_Types|Capabilities and Requirements]]'': The Graffiti concepts embrace the idea that cloud resources may be described using the notion of capabilities, a concept influenced by parts of OpenStack today as well as by industry specifications like OASIS TOSCA. (Please note: Graffiti is NOT an orchestration engine; it only assists in describing and locating existing resources in the cloud.)<br />
** ''[[Graffiti/Dictionary|Dictionary]]'': A common API for services, admins, and users to discover and share their metadata vocabulary. This is the basis for creating an agreement on how to describe the various capabilities the cloud provides. It allows for a consistent UI and CLI experience for describing and finding resources. <br />
** ''[[Graffiti/Directory|Resource Directory]]'': A common API to "tag" and search across existing and new services for cloud content based on the dictionary (metadata definitions). <br />
** ''Resource Capability Registry'': A persistent shared repository for services to publish information about cloud resources. This can optionally be used by services instead of or in addition to having their own local native storage to describe resources.<br />
<br />
== Use Case Example: Compute Capabilities ==<br />
In summary, the Graffiti concepts provide the following cross-service and cross-environment capabilities:<br />
* metadata definition aggregation and administration<br />
* resource metadata "tagging" aggregation<br />
* resource metadata search aggregation<br />
<br />
<br />
[[File:Graffiti-ComputeCapability-Flow-Overview.png]]<br />
<br />
== Additional Details ==<br />
<br />
The following provides an overview of the metadata aggregation, resource search optimization, and local resource registry concepts.<br />
<br />
<br />
[[File:Graffiti-Architecture-ConceptOverlay.png]]<br />
<br />
=== Graffiti API Benefits ===<br />
<br />
When we first looked at a UI-only solution, we found that it can be done to a certain extent [[Graffiti/Architecture#Limits_of_a_Horizon_Only_Solution|with limitations]]. However, a new service integrated into or built for the ecosystem would provide the following additional benefits:<br />
* Command line and REST API for cross service searching<br />
* Ability to import / export definitions across deployments<br />
* Common persistence DB for definitions in multi-node / HA deployments<br />
* Private tag / metadata libraries. Users / projects will be able to have their own vocabulary for "tagging" resources<br />
* Authoring - We will provide an authoring and administration UI for creating and managing namespaces, capability types, etc.<br />
* Resource search performance optimizations. We would like to introduce a high-performance indexing mechanism that crosses service boundaries.<br />
<br />
==== Resource Search Optimization ====<br />
<br />
Ideas:<br />
* Lazy loading. A simple pre-fetch mechanism: a call to initiate a session, or the first request for a resource type, pulls data into memory, where it is held for a limited time. Subsequent searches are all done in memory. RBAC is handled via token pass-through.<br />
* Eager loading. The base idea is that a cache provider plugin can be added under the API. Resources that are indexable (those whose service owner supports notifications) would then be indexed via a combination of startup seeding and service resource event notifications. For example, Glance supports sending notifications on certain image changes. The index itself could be based on elasticsearch, and the plugin would translate queries in and out of elasticsearch. (Note: this portion of the concept has been mostly implemented by Project Searchlight [https://wiki.openstack.org/wiki/Searchlight]).<br />
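As a rough illustration of the eager-loading idea, the sketch below keeps a toy in-memory index that is seeded at startup and then kept current by service notifications; a real deployment would use elasticsearch, as Searchlight does. The event names and payload shapes here are assumptions for illustration only.<br />
<br />
```python
class ResourceIndex:
    """Toy stand-in for an eager-loading index (real systems use elasticsearch)."""

    def __init__(self):
        self._docs = {}  # resource id -> indexed document

    def seed(self, resources):
        """Startup seeding: bulk-load the current state of a service."""
        for res in resources:
            self._docs[res["id"]] = res

    def handle_notification(self, event, payload):
        """Incremental updates driven by service events (e.g. Glance image changes)."""
        if event.endswith(".delete"):
            self._docs.pop(payload["id"], None)
        else:  # treat anything else as create / update
            self._docs[payload["id"]] = payload

    def search(self, **filters):
        """Return ids of indexed documents matching all key-value filters."""
        return [doc["id"] for doc in self._docs.values()
                if all(doc.get(k) == v for k, v in filters.items())]

index = ResourceIndex()
index.seed([{"id": "img-1", "os_distro": "ubuntu"}])
index.handle_notification("image.update", {"id": "img-1", "os_distro": "fedora"})
print(index.search(os_distro="fedora"))  # ['img-1']
```
<br />
The point of the design is that searches never touch the owning service; only seeding and notifications do, which is what makes cross-service queries fast.<br />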
<br />
== Originally Proposed Horizon Concepts ==<br />
<br />
These have been implemented in Horizon:<br />
<br />
* Horizon features<br />
** An admin UI for managing the catalog<br />
*** (Admin -> Metadata Definitions) (Kilo)<br />
** A widget for associating metadata to different resources<br />
*** (Update Metadata action on each row item below)<br />
*** admin -> images (Juno)<br />
*** admin -> flavors (Kilo)<br />
*** admin -> Host Aggregates (Kilo)<br />
*** project -> images (Liberty)<br />
*** project -> instances (Mitaka)<br />
** The ability to add metadata at launch time<br />
*** project -> Launch Instance (ng launch instance enabled) (Mitaka)<br />
<br />
Legacy Info:<br />
<br />
We believe that the [[Graffiti]] concepts can be fulfilled in Horizon with reusable widgets that we can plug into Horizon, along with changes to screens like the launch instance wizard. The widgets will provide the ability to "tag" capabilities and (TBD) requirements on various resources. They will also be able to generate filter queries based on resource capabilities and properties.<br />
<br />
==== Terminology Note ====<br />
<br />
We think "metadata" is a somewhat unapproachable term, so we have been exploring the concept of a "capability". A capability can simply be thought of as a named "tag" which may or may not have properties. The idea is that a user can simply "tag" a capability onto various cloud resources such as images, volumes, host aggregates, flavors, and so on. The exact mechanism for how the data is stored is handled for the end user. Some resource types may not support capabilities / tags that have properties.<br />
<br />
==== Concept Screencasts ====<br />
<br />
To explore and explain the ideas, HP and Intel have created a screencast showing the concepts running on POC code. The styling reflects the point in time when the demo was recorded and has since changed.<br />
<br />
* [https://www.youtube.com/watch?v=f0SZtPgcxk4 Screencast - Concept Overview]<br />
<br />
==== Concept Flow Mockup ====<br />
<br />
The basic proposed flow is that we will be able to add a widget to any resource management screen where we want to be able to "tag" capabilities. For example, the images, volumes, flavors, and host aggregate screens are all good candidates. The goal is that the only customization required will be for the code using the widget to send in information about the resource / resource type being tagged. The resource type is sent to the API, which returns the capabilities applicable to that type of resource.<br />
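A minimal sketch of that request/response, with hypothetical capability-type data: the capability names and the applies_to field below are illustrative inventions, not a real Graffiti endpoint or schema.<br />
<br />
```python
# Hypothetical capability-type records, each declaring the resource types it
# can be applied to (the names and fields are made up for this example).
CAPABILITY_TYPES = [
    {"name": "CPU::Pinning", "applies_to": ["OS::Nova::Flavor", "OS::Glance::Image"]},
    {"name": "Storage::SSD", "applies_to": ["OS::Cinder::Volume"]},
]

def capabilities_for(resource_type):
    """What the API would return to the widget for a given resource type."""
    return [c["name"] for c in CAPABILITY_TYPES if resource_type in c["applies_to"]]

print(capabilities_for("OS::Nova::Flavor"))  # ['CPU::Pinning']
```
<br />
Because the filtering lives behind the API, the widget itself never needs to know which capability types exist; it only needs to know the resource type it is embedded on.<br />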
<br />
==== Launch Instance Example ==== <br />
<br />
Note: Tagging other resource types and searching for them could work similarly.<br />
<br />
[[File:Graffit-Tag-Then-Use-Concept.png|center|Widget Screenshots from screencasts]]<br />
<br />
==== Style Mockups ====<br />
<br />
We have been experimenting with various style mockups, but aren't sure what makes sense or would be acceptable. The traditional Horizon look and feel can be achieved, but we also aren't sure that Horizon today has a good example of handling tree browsing. The following are some of the mockups we've created.<br />
<br />
[[File:Graffiti-capabilities-widget-mockups.png|thumbnail|center|Graffiti Concept Mockups]]<br />
<br />
<br />
<br />
=== Proposed Horizon Component Architecture ===<br />
<br />
We would like there to be a common way in Horizon to support "tagging" resources with simple named tags and key-value pairs that also supports the overall [[Graffiti]] concepts. In the proposed architecture, Horizon gains the value of the Graffiti concepts through a thin API plugin layer directly in Horizon, without requiring the full "Dictionary" and "Resource Directory" APIs in the deployed environment. This provides benefits to Horizon now, without requiring a new Graffiti service to either be incubated or be adopted into other projects (a direction on which we are actively seeking input and advice). The widgets will be built to work with a common, simple "resource syntax" that the external service API would provide.<br />
<br />
The entire concept can be run in a lightweight way through a thin filesystem provider on the Horizon server that reads dictionary definition files directly from the filesystem or from services that already provide schemas or tags. This would suffice for single-node deployments, or for deployments managed through a configuration management provider that ensures consistency of the definitions across Horizon nodes.<br />
<br />
If a full "Dictionary" / "Resource Directory" service API were available, the widgets wouldn't have to change even as new resource types and metadata definitions are added to the system. They would still go to the Horizon Graffiti component, which would use a plugin to talk to the appropriate central "Dictionary" / "Resource Directory" service endpoint(s), providing the [[Graffiti/Architecture#Graffiti_API_Benefits|full benefits]].<br />
<br />
===== Limits of a Horizon Only Solution =====<br />
<br />
The widgets and concepts can be partially built in Horizon as stated above and diagrammed below without changes to existing services. However, there are a number of limitations that require some external service work as well.<br />
<br />
# Horizon is a stateless server by design at this point. The only place any persistent data can exist is if you choose to store session information on the server in a database. The default setup for Horizon now uses signed cookies to maintain session data and avoids a DB requirement.<br />
# There is no privileged account running on the Horizon server, and thus no way to build a persistent datastore that only the admin can access. A persistent privileged session such as this would create many security issues.<br />
# Horizon can be set up in an HA manner, which would require either a duplicated DB on multiple Horizon servers or another server dedicated to the DB backend for Horizon.<br />
# The original scope discussed is only part of the picture; once the scope grows beyond the launch use case, it grows beyond usefulness for just Horizon. Isolating it in Horizon is limiting.<br />
<br />
[[File:Graffiti-Widgets.png]]</div>Travis Tripp
<hr />
<div><br />
== Graffiti Architecture Concepts ==<br />
<br />
At its most basic concept, Graffiti's intent is to enable better metadata collaboration across services and projects for OpenStack users. Graffiti has the initial intent of providing cross service metadata “tagging" and search aggregation for cloud resources.<br />
<br />
== Status ==<br />
<br />
The Graffiti project was proposed at the Atlanta (Juno) OpenStack summit. Since then quite a bit of the concepts have been adopted and implemented as part of multiple different OpenStack Projects (Glance, Searchlight, Horizon). The below provides legacy overview information to help understand how the various components come together. For more info, see https://wiki.openstack.org/wiki/Graffiti#Current_Status<br />
<br />
==== Workflow and Components ====<br />
<br />
# Load your custom metadata definitions (called property types or capability types)<br />
## Into the Graffiti central dictionary <br />
## Or configure Graffiti plugins to include / proxy existing definitions provided by the various services<br />
# "Tag" the resources in the cloud with your properties and capabilities<br />
# Let users find the resources with your desired properties and capabilities<br />
<br />
Finally, repeat the same across multiple clouds.<br />
<br />
[[File:Graffiti-Architecture-ConceptOverlay-Simple.png]]<br />
<br />
=== Base Concepts ===<br />
<br />
* Various OpenStack services provide techniques to abstract low level resource selection to one level higher, such as flavors, volume types, or artifact types. These resource abstractions often allow "metadata" in terms of key-value pair properties to further specialize and describe instances of each resource type. However, collaborating on those properties is largely a disconnected and difficult process. This often involves searching wikis and opening the source code. It becomes more difficult as a cloud's scale grows. In addition, many times the properties can apply to resources from several different services. Graffiti makes this easier by creating the following concepts: <br />
** ''[[Graffiti/Dictionary#Capability_Types|Capabilities and Requirements]]'': The Graffiti concepts have embraced the idea that cloud resources may be described using the notion of capabilities, a concept influenced by some parts of OpenStack today as well as by industry specifications like OASIS TOSCA (Please note, Graffiti is NOT an orchestration engine, it only assists in describing and locating existing resources in the cloud.).<br />
** ''[[Graffiti/Dictionary|Dictionary]]'': A common API for services, admins, and users to discover and share their metadata vocabulary. This is the basis for creating an agreement on how to describe the various capabilities the cloud provides. It allows for a consistent UI and CLI experience for describing and finding resources. <br />
** ''[[Graffiti/Directory|Resource Directory]]'': A common API to "tag" and search across existing and new services for cloud content based on the dictionary (metadata definitions). <br />
** ''Resource Capability Registry'': A persistent shared repository for services to publish information about cloud resources. This can optionally be used by services instead of or in addition to having their own local native storage to describe resources.<br />
<br />
== Use Case Exampleː Compute Capabilities ==<br />
In Summary: <br />
The Graffiti concepts provide cross service and cross environment:<br />
* metadata definition aggregation and administration<br />
* resource metadata "tagging" aggregation<br />
* resource metadata search aggregation<br />
<br />
<br />
[[File:Graffiti-ComputeCapability-Flow-Overview.png]]<br />
<br />
== Additional Details ==<br />
<br />
The below provides an overview of the metadata aggregation, resource search optimization, and local resource registry concepts.<br />
<br />
<br />
[[File:Graffiti-Architecture-ConceptOverlay.png]]<br />
<br />
=== Graffiti API Benefits ===<br />
<br />
When we first looked at a UI only solution, we found that it can be done to a certain extent [[Graffiti/Architecture#Limits_of_a_Horizon_Only_Solution|with limitations]]. However, if we propose the idea of a new service integrated or built into the ecosystem the following additional benefits will be available:<br />
* Command line and REST API for cross service searching<br />
* Ability to import / export definitions across deployments<br />
* Common persistence DB for definitions in multi-node / HA deployments<br />
* Private tag / metadata libraries. Users / projects will be able to have their own vocabulary for "tagging" resources<br />
* Authoring - We will provide an authoring and administration UI for creating and managing namespaces, capability types, etc<br />
* Resource search performance optimizations. We would like to introduce a high performance indexing mechanism based that crosses service boundaries.<br />
<br />
==== Resource Search Optimization ====<br />
<br />
Ideasː<br />
* Lazy loading. Simple pre-fetch mechanism. Make a call to initiate session or on first request for a resource type, data is pulled into memory and held for a limited time. Subsequent searches are all done in in memory. RBAC is handled via token pass through.<br />
* Eager loading. The base idea is that cache provider plugin can be added under the API. Resources that are indexable (those whose service owner supports notifications) would then be indexed via a combination of startup seeding and service resource event notifications. For example, Glance supports sending notifications on certain image changes. The index itself could be based on elasticsearch and the plugin would translate queries in and out of elasticsearch. (Noteː This portion of the concept has been mostly implemented by Project Searchlight [https://wiki.openstack.org/wiki/Searchlight]).<br />
<br />
== Originally Proposed Horizon Concepts ==<br />
<br />
These have been implemented in Horizonː<br />
<br />
* Horizon features<br />
** An admin UI for managing the catalog<br />
*** (Admin —> Metadata Definitions) (Kilo)<br />
** A widget for associating metadata to different resources<br />
*** (Update Metadata action on each row item below)<br />
*** admin -> images (Juno)<br />
*** admin -> flavors (Kilo)<br />
*** admin —> Host Aggregates (Kilo)<br />
*** project —> images (Liberty)<br />
*** project —> instances (Mitaka)<br />
** The ability to add metadata at launch time<br />
*** project —> Launch Instance (ng launch instance enabled) (Mitaka)<br />
<br />
Legacy Infoː<br />
<br />
We believe that the [[Graffiti]] concepts can be fulfilled in Horizon with reusable widgets that we can plug into Horizon as well as changes to screens like the launch instance wizard. The widgets will provide the ability to "tag" capabilities and TBDː requirements on various resources. They will also be able to generate filter queries based on resource capabilities and properties.<br />
<br />
==== Terminology Note ====<br />
<br />
We think the term "metadata" is a somewhat unapproachable term, so we have been exploring with the concept of a "capability". A capability can simply be thought of as a named "tag" which may or may not have properties. The idea is that a user can simply "tag" a capability onto various cloud resources such as images, volumes, host aggregates, flavors, and so on. To the end user, the exact mechanism for how the data is stored is handled for them. Some resource types may not support capabilities / tag that have properties.<br />
<br />
==== Concept Screencasts ====<br />
<br />
To explore and explain the ideas, HP and Intel have created a screencasts showing the concepts running under POC code. The styling is only representative of the point in time that the demo was recorded and has changed.<br />
<br />
* [https://www.youtube.com/watch?v=f0SZtPgcxk4| Screencast - Concept Overview]<br />
<br />
==== Concept Flow Mockup ====<br />
<br />
The basic proposed flow is that we will be able to add a widget on any resource management screen that we want to be able to "tag" capabilities. For example, the images, volumes, flavors, and host aggregate screens are all good candidates. The goal is that the only customization required will be for the code using the widget will send in information about the resource / resource type that is being tagged. The resource type is sent to the API which then returns back the capabilities applicable for that type of resource.<br />
<br />
==== Launch Instance Example ==== <br />
<br />
̈ - Noteː Tagging other resource types and searching for them could work similarly.<br />
<br />
[[File:Graffit-Tag-Then-Use-Concept.png|center|Widget Screenshots from screencasts]]<br />
<br />
==== Style Mockups ====<br />
<br />
We have been playing with various style mockups, but aren't sure what makes sense or would be acceptable. The traditional look and feel in Horizon can be achieved, but we also aren't sure that Horizon today has a good example for handling tree browsing. The following are some of the mockups we've created.<br />
<br />
[[File:Graffiti-capabilities-widget-mockups.png|thumbnail|center|Graffiti Concept Mockups]]<br />
<br />
<br />
<br />
=== Proposed Horizon Component Architecture ===<br />
<br />
We would like there to be a common way in Horizon to support "tagging" simple named tags and key-value pairs that also will support the overall [[Graffiti]] concepts. In the proposed architecture, we will support Horizon gaining the value of Graffiti concepts through a thin API plugin layer directly in Horizon without the full "Dictionary" and "Resource Directory" APIs in the deployed environment. This will provide benefits to Horizon now, without requiring a new Graffiti service to either be incubated or be adopted into other projects (which we are actively seeking input and advice). The widgets will be built to work with a common simple "resource syntax" that the external service API would provide.<br />
<br />
The entire concept can be run in a lightweight way through a thin filesystem provider on the Horizon server that allows reading dictionary definition files directly from the filesystem or from services that already provide schemas or tags. This would suffice for single node deployments or deployments that are managed through configuration management provider to ensure consistency of the definitions across Horizon nodes.<br />
<br />
If a fully "Dictionary" / "Resource Directory" service API was available, the widgets wouldn't have to change even as new resource types and metadata definitions are added to the system. They still go to the Horizon Graffiti component, which would add the plugin to talk to the appropriate central "Dictionary" / "Resource Directory" service endpoint(s), which would provide the [[Graffiti/Architecture#Graffiti_API_Benefits|full benefits]].<br />
<br />
===== Limits of a Horizon Only Solution =====<br />
<br />
The widgets and concepts can be partially built in Horizon as stated above and diagrammed below without changes to existing services. However, there are a number of limitations that require some external service work as well.<br />
<br />
# Horizon is a stateless server by design at this point. The only place any persistent data can exist is if you choose to store session information on the server in a database. The default setup for Horizon now uses signed cookies to maintain session data and avoids a DB requirement.<br />
# There is no privileged account running on the Horizon server and thus no way to build a persistent datastore only the admin can obtain. A persistent privileged session as this creates many security issues.<br />
# Horizon can be set up in an HA manner, which would require either duplicate DB on multiple Horizon servers or another server dedicated to the DB backend for Horizon.<br />
# The original scope discussed is only part of the picture, when the scope grows beyond the launch use case, the scope grows beyond usefulness for just Horizon. Isolating in Horizon is limiting.<br />
<br />
[[File:Graffiti-Widgets.png]]</div>Travis Tripphttps://wiki.openstack.org/w/index.php?title=Graffiti/Architecture&diff=133098Graffiti/Architecture2016-09-17T03:12:28Z<p>Travis Tripp: /* Proposed Horizon Concepts */</p>
<hr />
<div><br />
== Graffiti Architecture Concepts ==<br />
<br />
At its most basic concept, Graffiti's intent is to enable better metadata collaboration across services and projects for OpenStack users. Graffiti has the initial intent of providing cross service metadata “tagging" and search aggregation for cloud resources.<br />
<br />
== Status ==<br />
<br />
The Graffiti project was proposed at the Atlanta (Juno) OpenStack summit. Since then quite a bit of the concepts have been adopted and implemented as part of multiple different OpenStack Projects (Glance, Searchlight, Horizon). The below provides legacy overview information to help understand how the various components come together. For more info, see https://wiki.openstack.org/wiki/Graffiti#Current_Status<br />
<br />
==== Workflow and Components ====<br />
<br />
# Load your metadata definitions (called property types or capability types)<br />
## Into the Graffiti central dictionary <br />
## Or configure Graffiti plugins to include existing definitions provided by the various services<br />
# "Tag" the resources in the cloud with your properties and capabilities<br />
# Let users find the resources with your desired properties and capabilities<br />
<br />
<br />
[[File:Graffiti-Architecture-ConceptOverlay-Simple.png]]<br />
<br />
=== Base Concepts ===<br />
<br />
* Various OpenStack services provide techniques to abstract low level resource selection to one level higher, such as flavors, volume types, or artifact types. These resource abstractions often allow "metadata" in terms of key-value pair properties to further specialize and describe instances of each resource type. However, collaborating on those properties is largely a disconnected and difficult process. This often involves searching wikis and opening the source code. It becomes more difficult as a cloud's scale grows. In addition, many times the properties can apply to resources from several different services. Graffiti makes this easier by creating the following concepts: <br />
** ''[[Graffiti/Dictionary#Capability_Types|Capabilities and Requirements]]'': The Graffiti concepts have embraced the idea that cloud resources may be described using the notion of capabilities, a concept influenced by some parts of OpenStack today as well as by industry specifications like OASIS TOSCA (Please note, Graffiti is NOT an orchestration engine, it only assists in describing and locating existing resources in the cloud.).<br />
** ''[[Graffiti/Dictionary|Dictionary]]'': A common API for services, admins, and users to discover and share their metadata vocabulary. This is the basis for creating an agreement on how to describe the various capabilities the cloud provides. It allows for a consistent UI and CLI experience for describing and finding resources. <br />
** ''[[Graffiti/Directory|Resource Directory]]'': A common API to "tag" and search across existing and new services for cloud content based on the dictionary (metadata definitions). <br />
** ''Resource Capability Registry'': A persistent shared repository for services to publish information about cloud resources. This can optionally be used by services instead of or in addition to having their own local native storage to describe resources.<br />
<br />
== Use Case Exampleː Compute Capabilities ==<br />
In Summary: <br />
The Graffiti concepts provide cross service and cross environment:<br />
* metadata definition aggregation and administration<br />
* resource metadata "tagging" aggregation<br />
* resource metadata search aggregation<br />
<br />
<br />
[[File:Graffiti-ComputeCapability-Flow-Overview.png]]<br />
<br />
== Additional Details ==<br />
<br />
The below provides an overview of the metadata aggregation, resource search optimization, and local resource registry concepts.<br />
<br />
<br />
[[File:Graffiti-Architecture-ConceptOverlay.png]]<br />
<br />
=== Graffiti API Benefits ===<br />
<br />
When we first looked at a UI only solution, we found that it can be done to a certain extent [[Graffiti/Architecture#Limits_of_a_Horizon_Only_Solution|with limitations]]. However, if we propose the idea of a new service integrated or built into the ecosystem the following additional benefits will be available:<br />
* Command line and REST API for cross service searching<br />
* Ability to import / export definitions across deployments<br />
* Common persistence DB for definitions in multi-node / HA deployments<br />
* Private tag / metadata libraries. Users / projects will be able to have their own vocabulary for "tagging" resources<br />
* Authoring - We will provide an authoring and administration UI for creating and managing namespaces, capability types, etc<br />
* Resource search performance optimizations. We would like to introduce a high performance indexing mechanism based that crosses service boundaries.<br />
<br />
==== Resource Search Optimization ====<br />
<br />
Ideasː<br />
* Lazy loading. Simple pre-fetch mechanism. Make a call to initiate session or on first request for a resource type, data is pulled into memory and held for a limited time. Subsequent searches are all done in in memory. RBAC is handled via token pass through.<br />
* Eager loading. The base idea is that cache provider plugin can be added under the API. Resources that are indexable (those whose service owner supports notifications) would then be indexed via a combination of startup seeding and service resource event notifications. For example, Glance supports sending notifications on certain image changes. The index itself could be based on elasticsearch and the plugin would translate queries in and out of elasticsearch. (Noteː This portion of the concept has been mostly implemented by Project Searchlight [https://wiki.openstack.org/wiki/Searchlight]).<br />
<br />
== Originally Proposed Horizon Concepts ==<br />
<br />
These have been implemented in Horizonː<br />
<br />
* Horizon features<br />
** An admin UI for managing the catalog<br />
*** (Admin —> Metadata Definitions) (Kilo)<br />
** A widget for associating metadata to different resources<br />
*** (Update Metadata action on each row item below)<br />
*** admin -> images (Juno)<br />
*** admin -> flavors (Kilo)<br />
*** admin —> Host Aggregates (Kilo)<br />
*** project —> images (Liberty)<br />
*** project —> instances (Mitaka)<br />
** The ability to add metadata at launch time<br />
*** project —> Launch Instance (ng launch instance enabled) (Mitaka)<br />
<br />
Legacy Infoː<br />
<br />
We believe that the [[Graffiti]] concepts can be fulfilled in Horizon with reusable widgets that we can plug into Horizon as well as changes to screens like the launch instance wizard. The widgets will provide the ability to "tag" capabilities and TBDː requirements on various resources. They will also be able to generate filter queries based on resource capabilities and properties.<br />
<br />
==== Terminology Note ====<br />
<br />
We think the term "metadata" is a somewhat unapproachable term, so we have been exploring with the concept of a "capability". A capability can simply be thought of as a named "tag" which may or may not have properties. The idea is that a user can simply "tag" a capability onto various cloud resources such as images, volumes, host aggregates, flavors, and so on. To the end user, the exact mechanism for how the data is stored is handled for them. Some resource types may not support capabilities / tag that have properties.<br />
<br />
==== Concept Screencasts ====<br />
<br />
To explore and explain the ideas, HP and Intel have created a screencasts showing the concepts running under POC code. The styling is only representative of the point in time that the demo was recorded and has changed.<br />
<br />
* [https://www.youtube.com/watch?v=f0SZtPgcxk4| Screencast - Concept Overview]<br />
<br />
==== Concept Flow Mockup ====<br />
<br />
The basic proposed flow is that we will be able to add a widget on any resource management screen that we want to be able to "tag" capabilities. For example, the images, volumes, flavors, and host aggregate screens are all good candidates. The goal is that the only customization required will be for the code using the widget will send in information about the resource / resource type that is being tagged. The resource type is sent to the API which then returns back the capabilities applicable for that type of resource.<br />
<br />
==== Launch Instance Example ==== <br />
<br />
Note: Tagging other resource types and searching for them could work similarly.<br />
<br />
[[File:Graffit-Tag-Then-Use-Concept.png|center|Widget Screenshots from screencasts]]<br />
<br />
==== Style Mockups ====<br />
<br />
We have been playing with various style mockups, but aren't sure what makes sense or would be acceptable. The traditional look and feel in Horizon can be achieved, but we also aren't sure that Horizon today has a good example for handling tree browsing. The following are some of the mockups we've created.<br />
<br />
[[File:Graffiti-capabilities-widget-mockups.png|thumbnail|center|Graffiti Concept Mockups]]<br />
<br />
<br />
<br />
=== Proposed Horizon Component Architecture ===<br />
<br />
We would like there to be a common way in Horizon to support "tagging" simple named tags and key-value pairs that also supports the overall [[Graffiti]] concepts. In the proposed architecture, Horizon gains the value of the Graffiti concepts through a thin API plugin layer directly in Horizon, without the full "Dictionary" and "Resource Directory" APIs in the deployed environment. This provides benefits to Horizon now, without requiring a new Graffiti service to either be incubated or be adopted into other projects (a question on which we are actively seeking input and advice). The widgets will be built to work with a common, simple "resource syntax" that the external service API would provide.<br />
<br />
The entire concept can be run in a lightweight way through a thin filesystem provider on the Horizon server that reads dictionary definition files directly from the filesystem or from services that already provide schemas or tags. This would suffice for single-node deployments, or for deployments managed through a configuration management provider that ensures consistency of the definitions across Horizon nodes.<br />
<br />
If a full "Dictionary" / "Resource Directory" service API were available, the widgets wouldn't have to change even as new resource types and metadata definitions are added to the system. They would still go through the Horizon Graffiti component, which would add a plugin to talk to the appropriate central "Dictionary" / "Resource Directory" service endpoint(s), providing the [[Graffiti/Architecture#Graffiti_API_Benefits|full benefits]].<br />
<br />
===== Limits of a Horizon Only Solution =====<br />
<br />
The widgets and concepts can be partially built in Horizon as stated above and diagrammed below without changes to existing services. However, there are a number of limitations that require some external service work as well.<br />
<br />
# Horizon is a stateless server by design at this point. The only place any persistent data can exist is if you choose to store session information on the server in a database. The default setup for Horizon now uses signed cookies to maintain session data and avoids a DB requirement.<br />
# There is no privileged account running on the Horizon server and thus no way to build a persistent datastore that only the admin can access. A persistent privileged session such as this would create many security issues.<br />
# Horizon can be set up in an HA manner, which would require either duplicate DB on multiple Horizon servers or another server dedicated to the DB backend for Horizon.<br />
# The original scope discussed is only part of the picture; once the scope grows beyond the launch use case, it grows beyond usefulness for just Horizon. Isolating the solution in Horizon is limiting.<br />
<br />
[[File:Graffiti-Widgets.png]]</div>Travis Tripphttps://wiki.openstack.org/w/index.php?title=Graffiti/Architecture&diff=133097Graffiti/Architecture2016-09-17T03:09:24Z<p>Travis Tripp: /* Resource Search Optimization */</p>
<hr />
<div><br />
== Graffiti Architecture Concepts ==<br />
<br />
At its most basic, Graffiti's intent is to enable better metadata collaboration across services and projects for OpenStack users. Graffiti's initial aim is to provide cross-service metadata "tagging" and search aggregation for cloud resources.<br />
<br />
== Status ==<br />
<br />
The Graffiti project was proposed at the Atlanta (Juno) OpenStack summit. Since then, many of the concepts have been adopted and implemented as part of multiple different OpenStack projects (Glance, Searchlight, Horizon). The following provides legacy overview information to help understand how the various components come together. For more info, see https://wiki.openstack.org/wiki/Graffiti#Current_Status<br />
<br />
==== Workflow and Components ====<br />
<br />
# Load your metadata definitions (called property types or capability types)<br />
## Into the Graffiti central dictionary <br />
## Or configure Graffiti plugins to include existing definitions provided by the various services<br />
# "Tag" the resources in the cloud with your properties and capabilities<br />
# Let users find the resources with your desired properties and capabilities<br />
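The three-step workflow above can be illustrated with a toy in-memory model; all names here are hypothetical and stand in for the proposed Graffiti APIs, not the real thing:<br />

```python
# Illustrative sketch only: the load -> tag -> search workflow modeled
# with plain dicts and lists.
dictionary = {}   # step 1: metadata definitions (capability types)
resources = []    # cloud resources with their "tagged" capabilities

def load_definition(name, properties):
    """Load a capability type into the central dictionary."""
    dictionary[name] = properties

def tag(resource_id, capability, values=None):
    """'Tag' a cloud resource with a capability and optional properties."""
    resources.append({"id": resource_id, "capability": capability,
                      "values": values or {}})

def search(capability):
    """Find the IDs of resources tagged with a given capability."""
    return [r["id"] for r in resources if r["capability"] == capability]

load_definition("SSD::Storage", {"iops": "integer"})   # 1. load definitions
tag("image-1234", "SSD::Storage", {"iops": 20000})     # 2. tag a resource
found = search("SSD::Storage")                         # 3. find it again
```
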
<br />
<br />
[[File:Graffiti-Architecture-ConceptOverlay-Simple.png]]<br />
<br />
=== Base Concepts ===<br />
<br />
* Various OpenStack services provide techniques to abstract low level resource selection to one level higher, such as flavors, volume types, or artifact types. These resource abstractions often allow "metadata" in terms of key-value pair properties to further specialize and describe instances of each resource type. However, collaborating on those properties is largely a disconnected and difficult process. This often involves searching wikis and opening the source code. It becomes more difficult as a cloud's scale grows. In addition, many times the properties can apply to resources from several different services. Graffiti makes this easier by creating the following concepts: <br />
** ''[[Graffiti/Dictionary#Capability_Types|Capabilities and Requirements]]'': The Graffiti concepts have embraced the idea that cloud resources may be described using the notion of capabilities, a concept influenced by some parts of OpenStack today as well as by industry specifications like OASIS TOSCA (Please note, Graffiti is NOT an orchestration engine, it only assists in describing and locating existing resources in the cloud.).<br />
** ''[[Graffiti/Dictionary|Dictionary]]'': A common API for services, admins, and users to discover and share their metadata vocabulary. This is the basis for creating an agreement on how to describe the various capabilities the cloud provides. It allows for a consistent UI and CLI experience for describing and finding resources. <br />
** ''[[Graffiti/Directory|Resource Directory]]'': A common API to "tag" and search across existing and new services for cloud content based on the dictionary (metadata definitions). <br />
** ''Resource Capability Registry'': A persistent shared repository for services to publish information about cloud resources. This can optionally be used by services instead of or in addition to having their own local native storage to describe resources.<br />
<br />
== Use Case Exampleː Compute Capabilities ==<br />
In summary, the Graffiti concepts provide cross-service and cross-environment:<br />
* metadata definition aggregation and administration<br />
* resource metadata "tagging" aggregation<br />
* resource metadata search aggregation<br />
<br />
<br />
[[File:Graffiti-ComputeCapability-Flow-Overview.png]]<br />
<br />
== Additional Details ==<br />
<br />
The below provides an overview of the metadata aggregation, resource search optimization, and local resource registry concepts.<br />
<br />
<br />
[[File:Graffiti-Architecture-ConceptOverlay.png]]<br />
<br />
=== Graffiti API Benefits ===<br />
<br />
When we first looked at a UI-only solution, we found that it can be done to a certain extent [[Graffiti/Architecture#Limits_of_a_Horizon_Only_Solution|with limitations]]. However, if a new service is integrated or built into the ecosystem, the following additional benefits become available:<br />
* Command line and REST API for cross service searching<br />
* Ability to import / export definitions across deployments<br />
* Common persistence DB for definitions in multi-node / HA deployments<br />
* Private tag / metadata libraries. Users / projects will be able to have their own vocabulary for "tagging" resources<br />
* Authoring - We will provide an authoring and administration UI for creating and managing namespaces, capability types, etc<br />
* Resource search performance optimizations. We would like to introduce a high-performance indexing mechanism that crosses service boundaries.<br />
<br />
==== Resource Search Optimization ====<br />
<br />
Ideas:<br />
* Lazy loading. Simple pre-fetch mechanism. On session initiation, or on the first request for a resource type, data is pulled into memory and held for a limited time. Subsequent searches are all done in memory. RBAC is handled via token pass-through.<br />
* Eager loading. The base idea is that a cache provider plugin can be added under the API. Resources that are indexable (those whose service owner supports notifications) would then be indexed via a combination of startup seeding and service resource event notifications. For example, Glance supports sending notifications on certain image changes. The index itself could be based on Elasticsearch, and the plugin would translate queries in and out of Elasticsearch. (Note: this portion of the concept has been mostly implemented by Project Searchlight [https://wiki.openstack.org/wiki/Searchlight]).<br />
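As a rough illustration of the eager-loading idea, a cache plugin could consume service notifications and keep a searchable index current. A dict stands in for Elasticsearch here, and all names (event strings, fields) are hypothetical:<br />

```python
# Illustrative sketch: index documents keyed by resource id, kept in sync
# by create/update/delete event notifications.
index = {}

def on_notification(event, payload):
    """Handle a resource event notification (e.g. a Glance image event)."""
    if event.endswith(".delete"):
        index.pop(payload["id"], None)
    else:  # create / update events upsert the document
        index[payload["id"]] = payload

def query(**filters):
    """Return indexed documents matching all given field filters."""
    return [doc for doc in index.values()
            if all(doc.get(k) == v for k, v in filters.items())]

on_notification("image.create", {"id": "img-1", "hw_vif_model": "virtio"})
on_notification("image.update", {"id": "img-1", "hw_vif_model": "e1000"})
```
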
<br />
== Proposed Horizon Concepts ==<br />
<br />
We believe that the [[Graffiti]] concepts can be fulfilled in Horizon with reusable widgets that plug into Horizon, as well as changes to screens like the launch instance wizard. The widgets will provide the ability to "tag" capabilities and (TBD) requirements on various resources. They will also be able to generate filter queries based on resource capabilities and properties.<br />
<br />
==== Related blueprints: ====<br />
* https://blueprints.launchpad.net/horizon/+spec/instance-launch-using-capability-filtering<br />
* https://blueprints.launchpad.net/horizon/+spec/faceted-search<br />
* https://blueprints.launchpad.net/horizon/+spec/tagging<br />
* https://blueprints.launchpad.net/horizon/+spec/host-aggregate-update-metadata<br />
<br />
==== Terminology Note ====<br />
<br />
We think "metadata" is a somewhat unapproachable term, so we have been exploring the concept of a "capability". A capability can simply be thought of as a named "tag" which may or may not have properties. The idea is that a user can simply "tag" a capability onto various cloud resources such as images, volumes, host aggregates, flavors, and so on. The exact mechanism for how the data is stored is handled for the end user. Some resource types may not support capabilities / tags that have properties.<br />
<br />
==== Concept Screencasts ====<br />
<br />
To explore and explain the ideas, HP and Intel have created screencasts showing the concepts running as POC code. The styling reflects only the point in time when each demo was recorded and has since changed.<br />
<br />
* [https://www.youtube.com/watch?v=f0SZtPgcxk4| Screencast - Concept Overview]<br />
<br />
==== Concept Flow Mockup ====<br />
<br />
The basic proposed flow is that we will be able to add a widget on any resource management screen where we want to be able to "tag" capabilities. For example, the images, volumes, flavors, and host aggregate screens are all good candidates. The goal is that the only customization required will be for the code using the widget to send in information about the resource / resource type being tagged. The resource type is sent to the API, which then returns the capabilities applicable to that type of resource.<br />
<br />
==== Launch Instance Example ==== <br />
<br />
Note: Tagging other resource types and searching for them could work similarly.<br />
<br />
[[File:Graffit-Tag-Then-Use-Concept.png|center|Widget Screenshots from screencasts]]<br />
<br />
==== Style Mockups ====<br />
<br />
We have been playing with various style mockups, but aren't sure what makes sense or would be acceptable. The traditional look and feel in Horizon can be achieved, but we also aren't sure that Horizon today has a good example for handling tree browsing. The following are some of the mockups we've created.<br />
<br />
[[File:Graffiti-capabilities-widget-mockups.png|thumbnail|center|Graffiti Concept Mockups]]<br />
<br />
<br />
<br />
=== Proposed Horizon Component Architecture ===<br />
<br />
We would like there to be a common way in Horizon to support "tagging" simple named tags and key-value pairs that also supports the overall [[Graffiti]] concepts. In the proposed architecture, Horizon gains the value of the Graffiti concepts through a thin API plugin layer directly in Horizon, without the full "Dictionary" and "Resource Directory" APIs in the deployed environment. This provides benefits to Horizon now, without requiring a new Graffiti service to either be incubated or be adopted into other projects (a question on which we are actively seeking input and advice). The widgets will be built to work with a common, simple "resource syntax" that the external service API would provide.<br />
<br />
The entire concept can be run in a lightweight way through a thin filesystem provider on the Horizon server that reads dictionary definition files directly from the filesystem or from services that already provide schemas or tags. This would suffice for single-node deployments, or for deployments managed through a configuration management provider that ensures consistency of the definitions across Horizon nodes.<br />
<br />
If a full "Dictionary" / "Resource Directory" service API were available, the widgets wouldn't have to change even as new resource types and metadata definitions are added to the system. They would still go through the Horizon Graffiti component, which would add a plugin to talk to the appropriate central "Dictionary" / "Resource Directory" service endpoint(s), providing the [[Graffiti/Architecture#Graffiti_API_Benefits|full benefits]].<br />
<br />
===== Limits of a Horizon Only Solution =====<br />
<br />
The widgets and concepts can be partially built in Horizon as stated above and diagrammed below without changes to existing services. However, there are a number of limitations that require some external service work as well.<br />
<br />
# Horizon is a stateless server by design at this point. The only place any persistent data can exist is if you choose to store session information on the server in a database. The default setup for Horizon now uses signed cookies to maintain session data and avoids a DB requirement.<br />
# There is no privileged account running on the Horizon server and thus no way to build a persistent datastore that only the admin can access. A persistent privileged session such as this would create many security issues.<br />
# Horizon can be set up in an HA manner, which would require either duplicate DB on multiple Horizon servers or another server dedicated to the DB backend for Horizon.<br />
# The original scope discussed is only part of the picture; once the scope grows beyond the launch use case, it grows beyond usefulness for just Horizon. Isolating the solution in Horizon is limiting.<br />
<br />
[[File:Graffiti-Widgets.png]]</div>Travis Tripphttps://wiki.openstack.org/w/index.php?title=Searchlight&diff=132666Searchlight2016-09-07T15:28:58Z<p>Travis Tripp: /* Get Involved */</p>
<hr />
<div><br />
= Mission Statement =<br />
<br />
To provide advanced and scalable indexing and search across multi-tenant cloud resources.<br />
<br />
== Project Links ==<br />
<br />
{| border="1" cellpadding="2"<br />
| Developer Documentation<br />
| http://docs.openstack.org/developer/searchlight/<br />
|-<br />
| Source code - API and Listener Services<br />
| https://github.com/openstack/searchlight<br />
|-<br />
| Source code - Horizon UI Plugin<br />
| https://github.com/openstack/searchlight-ui<br />
|-<br />
| Source code - Python Client<br />
| https://github.com/openstack/python-searchlightclient<br />
|-<br />
| Gerrit Reviews<br />
| https://review.openstack.org/#/q/project:%255E.*searchlight.*+status:open,n,z<br />
|-<br />
| Bug tracker<br />
| https://bugs.launchpad.net/searchlight<br />
|-<br />
| Feature tracker<br />
| https://blueprints.launchpad.net/searchlight<br />
|-<br />
| IRC<br />
| #openstack-searchlight<br />
|-<br />
| Meeting Times<br />
| http://eavesdrop.openstack.org/#Searchlight_Team_Meeting<br />
|-<br />
| Meeting Agenda<br />
| https://etherpad.openstack.org/p/search-team-meeting-agenda<br />
|-<br />
| Meeting Logs<br />
| http://eavesdrop.openstack.org/meetings/openstack_search/<br />
|}<br />
<br />
= Overview =<br />
<br />
Searchlight dramatically improves user-focused search capabilities and performance on behalf of various OpenStack cloud services.<br />
<br />
It accomplishes this by offloading user search queries from existing API servers and indexing their data into ElasticSearch. ElasticSearch is a search server based on Lucene. It provides a distributed, scalable, near real-time, faceted, multitenant-capable full-text search engine with a RESTful web interface and schema-free JSON documents. ElasticSearch is developed and released as open source under the terms of the Apache License. Notable users of ElasticSearch include Wikimedia, StumbleUpon, Mozilla, Quora, Foursquare, Etsy, SoundCloud, GitHub, FDA, CERN, and Stack Exchange. (Source: http://en.wikipedia.org/wiki/Elasticsearch). The elastic-recheck project also uses Elasticsearch (and kibana) to classify and track OpenStack gate failures. (Source: http://status.openstack.org/elastic-recheck)<br />
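For illustration, an offloaded, multi-tenant search translates into an Elasticsearch-style query body along these lines. The field names and helper below are hypothetical, not Searchlight's actual mapping or client API:<br />

```python
# Illustrative sketch: build a bool query where tenant isolation is
# enforced as a server-side filter rather than left to the caller.
def build_query(tenant_id, text, status=None):
    must = [{"match": {"name": text}}]
    if status:
        must.append({"term": {"status": status}})
    return {
        "query": {
            "bool": {
                "must": must,
                # tenant isolation: always filter by the caller's tenant
                "filter": [{"term": {"tenant_id": tenant_id}}],
            }
        }
    }
```

The resulting dict is the JSON body an Elasticsearch search request would carry, so full-text matching and filtering happen in the index rather than in each service's API server.<br />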
<br />
=== Screencasts ===<br />
<br />
* (Mitaka) Integration with Horizon demoː https://www.youtube.com/watch?v=2feC1njvZe0<br />
* (Mitaka) End of Cycle Presentation on Horizon, CLI, and Searchlightː https://www.youtube.com/watch?v=ExzULavwvNQ<br />
* (Mitaka) Summit Presentationː https://www.youtube.com/watch?v=0jYXsK4j26s<br />
* (Liberty) PTL Overviewː https://www.youtube.com/watch?v=yU5CrAOAlkA<br />
* (Kilo summit) Concept Demoː https://youtu.be/eGnGr48E5_4<br />
<br />
=== Design ===<br />
<br />
* http://docs.openstack.org/developer/searchlight/<br />
* http://docs.openstack.org/developer/searchlight/architecture.html<br />
<br />
==== Concept Overview ====<br />
<br />
[[File:Searchlight-Concept-1.png]]<br />
<br />
==== Concept Internals Overview ====<br />
<br />
* [https://wiki.openstack.org/w/images/5/55/Searchlight-Concept-2.png Concept Internals]<br />
* [https://wiki.openstack.org/w/images/e/ef/Searchlight-use-when-there.png Usage switching]<br />
* [https://wiki.openstack.org/w/images/1/10/Searchlight-Concept-Horizon-Layers.png Horizon Concept Layers]<br />
* [https://wiki.openstack.org/w/images/6/6b/Searchlight-WebSocket-Concept.png Horizon Web Socket Concept]<br />
<br />
== Get Involved ==<br />
<br />
Please join us to help move forward together as a community!<br />
<br />
Searchlight is an open project and we encourage contribution from everybody.<br />
<br />
We support both developers and non-developers who want to provide input, requests for features, and bug fixes. We want to be able to move quickly without getting too bogged down in process, but still provide a rich mechanism for feature reviews as needed.<br />
<br />
* http://docs.openstack.org/developer/searchlight/feature-requests-bugs.html<br />
<br />
== History ==<br />
<br />
Searchlight was originally developed and released in the Kilo release of Glance as the Catalog Index Service [1]. At the Liberty Summit we decided to broaden the scope to provide advanced and scalable search across multi-tenant cloud resources.<br />
<br />
[1] http://specs.openstack.org/openstack/glance-specs/specs/kilo/catalog-index-service.html</div>Travis Tripphttps://wiki.openstack.org/w/index.php?title=Searchlight&diff=132665Searchlight2016-09-07T15:27:41Z<p>Travis Tripp: /* Design */</p>
<hr />
<div><br />
= Mission Statement =<br />
<br />
To provide advanced and scalable indexing and search across multi-tenant cloud resources.<br />
<br />
== Project Links ==<br />
<br />
{| border="1" cellpadding="2"<br />
| Developer Documentation<br />
| http://docs.openstack.org/developer/searchlight/<br />
|-<br />
| Source code - API and Listener Services<br />
| https://github.com/openstack/searchlight<br />
|-<br />
| Source code - Horizon UI Plugin<br />
| https://github.com/openstack/searchlight-ui<br />
|-<br />
| Source code - Python Client<br />
| https://github.com/openstack/python-searchlightclient<br />
|-<br />
| Gerrit Reviews<br />
| https://review.openstack.org/#/q/project:%255E.*searchlight.*+status:open,n,z<br />
|-<br />
| Bug tracker<br />
| https://bugs.launchpad.net/searchlight<br />
|-<br />
| Feature tracker<br />
| https://blueprints.launchpad.net/searchlight<br />
|-<br />
| IRC<br />
| #openstack-searchlight<br />
|-<br />
| Meeting Times<br />
| http://eavesdrop.openstack.org/#Searchlight_Team_Meeting<br />
|-<br />
| Meeting Agenda<br />
| https://etherpad.openstack.org/p/search-team-meeting-agenda<br />
|-<br />
| Meeting Logs<br />
| http://eavesdrop.openstack.org/meetings/openstack_search/<br />
|}<br />
<br />
= Overview =<br />
<br />
Searchlight dramatically improves user-focused search capabilities and performance on behalf of various OpenStack cloud services.<br />
<br />
It accomplishes this by offloading user search queries from existing API servers and indexing their data into ElasticSearch. ElasticSearch is a search server based on Lucene. It provides a distributed, scalable, near real-time, faceted, multitenant-capable full-text search engine with a RESTful web interface and schema-free JSON documents. ElasticSearch is developed and released as open source under the terms of the Apache License. Notable users of ElasticSearch include Wikimedia, StumbleUpon, Mozilla, Quora, Foursquare, Etsy, SoundCloud, GitHub, FDA, CERN, and Stack Exchange. (Source: http://en.wikipedia.org/wiki/Elasticsearch). The elastic-recheck project also uses Elasticsearch (and kibana) to classify and track OpenStack gate failures. (Source: http://status.openstack.org/elastic-recheck)<br />
<br />
=== Screencasts ===<br />
<br />
* (Mitaka) Integration with Horizon demoː https://www.youtube.com/watch?v=2feC1njvZe0<br />
* (Mitaka) End of Cycle Presentation on Horizon, CLI, and Searchlightː https://www.youtube.com/watch?v=ExzULavwvNQ<br />
* (Mitaka) Summit Presentationː https://www.youtube.com/watch?v=0jYXsK4j26s<br />
* (Liberty) PTL Overviewː https://www.youtube.com/watch?v=yU5CrAOAlkA<br />
* (Kilo summit) Concept Demoː https://youtu.be/eGnGr48E5_4<br />
<br />
=== Design ===<br />
<br />
* http://docs.openstack.org/developer/searchlight/<br />
* http://docs.openstack.org/developer/searchlight/architecture.html<br />
<br />
==== Concept Overview ====<br />
<br />
[[File:Searchlight-Concept-1.png]]<br />
<br />
==== Concept Internals Overview ====<br />
<br />
* [https://wiki.openstack.org/w/images/5/55/Searchlight-Concept-2.png Concept Internals]<br />
* [https://wiki.openstack.org/w/images/e/ef/Searchlight-use-when-there.png Usage switching]<br />
* [https://wiki.openstack.org/w/images/1/10/Searchlight-Concept-Horizon-Layers.png Horizon Concept Layers]<br />
* [https://wiki.openstack.org/w/images/6/6b/Searchlight-WebSocket-Concept.png Horizon Web Socket Concept]<br />
<br />
== Get Involved ==<br />
<br />
Please join us to help move forward together as a community! We are sure that the ideas and concepts can use refinement, and we'd like to identify where we can best fit into the ecosystem.<br />
<br />
== History ==<br />
<br />
Searchlight was originally developed and released in the Kilo release of Glance as the Catalog Index Service [1]. At the Liberty Summit we decided to broaden the scope to provide advanced and scalable search across multi-tenant cloud resources.<br />
<br />
[1] http://specs.openstack.org/openstack/glance-specs/specs/kilo/catalog-index-service.html</div>Travis Tripphttps://wiki.openstack.org/w/index.php?title=Meetings/Horizon&diff=131605Meetings/Horizon2016-08-24T00:02:11Z<p>Travis Tripp: /* Agenda for 2016-08-24 */</p>
<hr />
<div>The [[OpenStack]] [[Horizon]] Team holds public meetings in #openstack-meeting-3, weekly at 2000 UTC.<br />
<br />
See http://eavesdrop.openstack.org/#Horizon_Team_Meeting for upcoming meeting schedule<br />
<br />
Everyone is encouraged to attend!<br />
<br />
<br />
== Apologies for absence ==<br />
* amotoki (Dec 3) -- Every 2000 UTC meeting<br />
<br />
== Agenda for 2016-08-24 ==<br />
* https://review.openstack.org/357600 <code>TravT on behalf of tyr</code><br />
* Planning summit sessions <code>robcresswell</code><br />
<br />
== Agenda for 2016-08-03 ==<br />
* Looking to help with a blueprint <code>Marcellin_</code><br />
* UI autotests: proposition to migrate to new architecture https://github.com/sergeychipiga/horizon_autotests, based on steps.<br />
<br />
== Agenda for 2016-07-27 ==<br />
*<br />
<br />
<br />
== Agenda for 2016-07-20 ==<br />
*<br />
<br />
== Agenda for 2016-07-13 -- ''cancelled'' ==<br />
* This meeting is cancelled due to the midcycle (https://wiki.openstack.org/wiki/Sprints/HorizonNewtonSprint)<br />
<br />
== Agenda for 2016-07-06 2000 UTC ==<br />
* Glance v2 update. The issue we have: https://bugs.launchpad.net/glance/+bug/1595335 <code>bpokorny</code><br />
* UI Guidelines Spec https://review.openstack.org/#/c/337202/ <code>robcresswell</code> <code>asettle</code><br />
* 800UTC meeting has brexit'd http://lists.openstack.org/pipermail/openstack-dev/2016-July/098854.html<br />
<br />
== Agenda for 2016-06-15 800 UTC ==<br />
* Priorities update<br />
* Discuss need for early meeting<br />
* Discuss UI guidelines (http://docs.openstack.org/contributor-guide/ui-text-guidelines.html) and path forward. Volunteers to work with docs/UX folks please! <code>robcresswell</code><br />
<br />
== Agenda for 2016-06-15 800 UTC -- ''cancelled'' ==<br />
<br />
== Agenda for 2016-06-08 2000 UTC ==<br />
<br />
* Let's All Read: the doc team's guidelines on text content in UI! http://docs.openstack.org/contributor-guide/ui-text-guidelines.html [r1chardj0n3s]<br />
* https://blueprints.launchpad.net/horizon/+spec/add-policy-rules-to-workflow-actions [tsufiev, moved from lonely morning meeting]<br />
* https://blueprints.launchpad.net/horizon/+spec/support-extra-prop-for-project-and-user [kenji-i]<br />
<br />
== Agenda for 2016-06-01 800 UTC ==<br />
<br />
== Agenda for 2016-05-25 2000 UTC ==<br />
<br />
== Agenda for 2016-05-18 800 UTC ==<br />
* Notices<br />
** Midcycle: https://wiki.openstack.org/wiki/Sprints/HorizonNewtonSprint<br />
** Bug Report: https://wiki.openstack.org/wiki/Horizon/WeeklyBugReport<br />
* We should consider http://docs.openstack.org/contributor-guide/ui-text-guidelines.html<br />
* State of Glance v2 support in Horizon: plans / volunteers? [tsufiev]<br />
<br />
== Agenda for 2016-05-11 2000 UTC ==<br />
* Notices<br />
** Bug Report (https://wiki.openstack.org/wiki/Horizon/WeeklyBugReport)<br />
** Midcycle dates (http://doodle.com/poll/xvchsbbs4qz9tzr7)<br />
<br />
== Agenda for 2016-05-04 800 UTC ==<br />
* Notices<br />
** Midcycle<br />
** Priorities (and summit etherpads)<br />
* Discuss earlier feature freeze (and scope of that freeze) for plugins<br />
<br />
<br />
== Agenda for 2016-04-27 2000 UTC ==<br />
* Meeting cancelled due to OpenStack Summit<br />
<br />
== Agenda for 2016-04-20 800 UTC ==<br />
<br />
<br />
<br />
== Previous meetings ==<br />
http://eavesdrop.openstack.org/meetings/horizon/<br />
<br />
March 4, 2015 (bot broken) [[Meetings/Horizon/March4Log|log]]<br />
<br />
Feb 17, 2016 (bot broken) [[Meetings/Horizon/Feb17log|log]]</div>Travis Tripphttps://wiki.openstack.org/w/index.php?title=Graffiti/Architecture&diff=131255Graffiti/Architecture2016-08-16T22:07:43Z<p>Travis Tripp: just moved more architecture to the top and moved horizon stuff to the bottom.</p>
<hr />
<div><br />
== Graffiti Architecture Concepts ==<br />
<br />
At its most basic, Graffiti's intent is to enable better metadata collaboration across services and projects for OpenStack users. Graffiti's initial aim is to provide cross-service metadata "tagging" and search aggregation for cloud resources.<br />
<br />
== Status ==<br />
<br />
The Graffiti project was proposed at the Atlanta (Juno) OpenStack summit. Since then, many of the concepts have been adopted and implemented as part of multiple different OpenStack projects (Glance, Searchlight, Horizon). The following provides legacy overview information to help understand how the various components come together. For more info, see https://wiki.openstack.org/wiki/Graffiti#Current_Status<br />
<br />
==== Workflow and Components ====<br />
<br />
# Load your metadata definitions (called property types or capability types)<br />
## Into the Graffiti central dictionary <br />
## Or configure Graffiti plugins to include existing definitions provided by the various services<br />
# "Tag" the resources in the cloud with your properties and capabilities<br />
# Let users find the resources with your desired properties and capabilities<br />
<br />
<br />
[[File:Graffiti-Architecture-ConceptOverlay-Simple.png]]<br />
<br />
=== Base Concepts ===<br />
<br />
* Various OpenStack services provide techniques to abstract low level resource selection to one level higher, such as flavors, volume types, or artifact types. These resource abstractions often allow "metadata" in terms of key-value pair properties to further specialize and describe instances of each resource type. However, collaborating on those properties is largely a disconnected and difficult process. This often involves searching wikis and opening the source code. It becomes more difficult as a cloud's scale grows. In addition, many times the properties can apply to resources from several different services. Graffiti makes this easier by creating the following concepts: <br />
** ''[[Graffiti/Dictionary#Capability_Types|Capabilities and Requirements]]'': The Graffiti concepts have embraced the idea that cloud resources may be described using the notion of capabilities, a concept influenced by some parts of OpenStack today as well as by industry specifications like OASIS TOSCA (Please note, Graffiti is NOT an orchestration engine, it only assists in describing and locating existing resources in the cloud.).<br />
** ''[[Graffiti/Dictionary|Dictionary]]'': A common API for services, admins, and users to discover and share their metadata vocabulary. This is the basis for creating an agreement on how to describe the various capabilities the cloud provides. It allows for a consistent UI and CLI experience for describing and finding resources. <br />
** ''[[Graffiti/Directory|Resource Directory]]'': A common API to "tag" and search across existing and new services for cloud content based on the dictionary (metadata definitions). <br />
** ''Resource Capability Registry'': A persistent shared repository for services to publish information about cloud resources. This can optionally be used by services instead of or in addition to having their own local native storage to describe resources.<br />
<br />
== Use Case Exampleː Compute Capabilities ==<br />
In summary, the Graffiti concepts provide cross-service and cross-environment:<br />
* metadata definition aggregation and administration<br />
* resource metadata "tagging" aggregation<br />
* resource metadata search aggregation<br />
<br />
<br />
[[File:Graffiti-ComputeCapability-Flow-Overview.png]]<br />
<br />
== Additional Details ==<br />
<br />
The following provides an overview of the metadata aggregation, resource search optimization, and local resource registry concepts.<br />
<br />
<br />
[[File:Graffiti-Architecture-ConceptOverlay.png]]<br />
<br />
=== Graffiti API Benefits ===<br />
<br />
When we first looked at a UI-only solution, we found that it can be done to a certain extent [[Graffiti/Architecture#Limits_of_a_Horizon_Only_Solution|with limitations]]. However, a new service integrated into or built on the ecosystem would provide the following additional benefits:<br />
* Command line and REST API for cross service searching<br />
* Ability to import / export definitions across deployments<br />
* Common persistence DB for definitions in multi-node / HA deployments<br />
* Private tag / metadata libraries. Users / projects will be able to have their own vocabulary for "tagging" resources<br />
* Authoring - We will provide an authoring and administration UI for creating and managing namespaces, capability types, etc<br />
* Resource search performance optimizations. We would like to introduce a high-performance indexing mechanism that crosses service boundaries.<br />
<br />
==== Resource Search Optimization ====<br />
<br />
This has not been explored in depth, but we do have a few ideas:<br />
* Lazy loading. A simple pre-fetch mechanism. On session initiation or on the first request for a resource type, data is pulled into memory and held for a limited time. Subsequent searches are all done in memory. RBAC is handled via token pass-through.<br />
* Eager loading. The base idea is that a cache provider plugin can be added under the API. Resources that are indexable (those whose service owner supports notifications) would then be indexed via a combination of startup seeding and service resource event notifications. For example, Glance supports sending notifications on certain image changes. The index itself could be based on Elasticsearch, and the plugin would translate queries in and out of Elasticsearch. One issue with this approach today is that it may be limited to admin-only use due to limited RBAC visibility.<br />
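<br />
As a rough sketch of the lazy-loading idea (illustrative names only, not a proposed API):<br />
<br />
```python
# Rough sketch of lazy loading: on the first request for a resource type,
# pull the data into memory and serve later searches from that copy until
# a TTL expires. All names here are illustrative.
import time

class LazyResourceCache:
    def __init__(self, fetch, ttl_seconds=300):
        self._fetch = fetch      # callable that queries the real service API
        self._ttl = ttl_seconds
        self._store = {}         # resource_type -> (expires_at, resources)

    def search(self, resource_type, predicate):
        expires_at, resources = self._store.get(resource_type, (0.0, None))
        if time.time() >= expires_at:  # first request, or cache expired
            resources = self._fetch(resource_type)
            self._store[resource_type] = (time.time() + self._ttl, resources)
        return [r for r in resources if predicate(r)]

# Usage against a stand-in fetch function:
cache = LazyResourceCache(lambda rt: [{"name": "img1"}, {"name": "img2"}])
hits = cache.search("image", lambda r: r["name"] == "img1")
```
<br />
In a real deployment the fetch callable would pass the user's token through to the service, so RBAC stays enforced by the service itself.<br />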
<br />
== Proposed Horizon Concepts ==<br />
<br />
We believe that the [[Graffiti]] concepts can be fulfilled in Horizon with reusable widgets that we can plug into Horizon, as well as changes to screens like the launch instance wizard. The widgets will provide the ability to "tag" capabilities and (TBD) requirements on various resources. They will also be able to generate filter queries based on resource capabilities and properties.<br />
<br />
==== Related blueprints: ====<br />
* https://blueprints.launchpad.net/horizon/+spec/instance-launch-using-capability-filtering<br />
* https://blueprints.launchpad.net/horizon/+spec/faceted-search<br />
* https://blueprints.launchpad.net/horizon/+spec/tagging<br />
* https://blueprints.launchpad.net/horizon/+spec/host-aggregate-update-metadata<br />
<br />
==== Terminology Note ====<br />
<br />
We think the term "metadata" is a somewhat unapproachable term, so we have been exploring the concept of a "capability". A capability can simply be thought of as a named "tag" which may or may not have properties. The idea is that a user can simply "tag" a capability onto various cloud resources such as images, volumes, host aggregates, flavors, and so on. The exact mechanism for how the data is stored is handled for the end user. Some resource types may not support capabilities/tags that have properties.<br />
<br />
==== Concept Screencasts ====<br />
<br />
To explore and explain the ideas, HP and Intel have created screencasts showing the concepts running on POC code. The styling reflects the point in time at which each demo was recorded and has since changed.<br />
<br />
* [https://www.youtube.com/watch?v=f0SZtPgcxk4 Screencast - Concept Overview]<br />
<br />
==== Concept Flow Mockup ====<br />
<br />
The basic proposed flow is that we will be able to add a widget on any resource management screen where we want to be able to "tag" capabilities. For example, the images, volumes, flavors, and host aggregate screens are all good candidates. The goal is that the only customization required will be for the code using the widget to send in information about the resource / resource type being tagged. The resource type is sent to the API, which then returns the capabilities applicable to that type of resource.<br />
<br />
==== Launch Instance Example ==== <br />
<br />
Note: Tagging other resource types and searching for them could work similarly.<br />
<br />
[[File:Graffit-Tag-Then-Use-Concept.png|center|Widget Screenshots from screencasts]]<br />
<br />
==== Style Mockups ====<br />
<br />
We have been playing with various style mockups, but aren't sure what makes sense or would be acceptable. The traditional look and feel in Horizon can be achieved, but we also aren't sure that Horizon today has a good example for handling tree browsing. The following are some of the mockups we've created.<br />
<br />
[[File:Graffiti-capabilities-widget-mockups.png|thumbnail|center|Graffiti Concept Mockups]]<br />
<br />
<br />
<br />
=== Proposed Horizon Component Architecture ===<br />
<br />
We would like there to be a common way in Horizon to support "tagging" simple named tags and key-value pairs that also supports the overall [[Graffiti]] concepts. In the proposed architecture, Horizon gains the value of the Graffiti concepts through a thin API plugin layer directly in Horizon, without the full "Dictionary" and "Resource Directory" APIs in the deployed environment. This provides benefits to Horizon now, without requiring a new Graffiti service to either be incubated or be adopted into other projects (on which we are actively seeking input and advice). The widgets will be built to work with a common, simple "resource syntax" that the external service API would provide.<br />
<br />
The entire concept can be run in a lightweight way through a thin filesystem provider on the Horizon server that reads dictionary definition files directly from the filesystem or from services that already provide schemas or tags. This would suffice for single-node deployments, or for deployments managed through a configuration management provider that ensures consistency of the definitions across Horizon nodes.<br />
<br />
If a full "Dictionary" / "Resource Directory" service API were available, the widgets wouldn't have to change even as new resource types and metadata definitions are added to the system. They would still go through the Horizon Graffiti component, which would add a plugin to talk to the appropriate central "Dictionary" / "Resource Directory" service endpoint(s), providing the [[Graffiti/Architecture#Graffiti_API_Benefits|full benefits]].<br />
<br />
===== Limits of a Horizon Only Solution =====<br />
<br />
The widgets and concepts can be partially built in Horizon as stated above and diagrammed below without changes to existing services. However, there are a number of limitations that require some external service work as well.<br />
<br />
# Horizon is a stateless server by design at this point. The only place any persistent data can exist is if you choose to store session information on the server in a database. The default setup for Horizon now uses signed cookies to maintain session data and avoids a DB requirement.<br />
# There is no privileged account running on the Horizon server, and thus no way to build a persistent datastore that only the admin can access. A persistent privileged session like this creates many security issues.<br />
# Horizon can be set up in an HA manner, which would require either duplicate DB on multiple Horizon servers or another server dedicated to the DB backend for Horizon.<br />
# The original scope discussed is only part of the picture; once the scope grows beyond the launch use case, it grows beyond usefulness for just Horizon. Isolating it in Horizon is limiting.<br />
<br />
[[File:Graffiti-Widgets.png]]</div>Travis Tripphttps://wiki.openstack.org/w/index.php?title=Sprints/HorizonNewtonSprint&diff=126684Sprints/HorizonNewtonSprint2016-06-13T20:08:56Z<p>Travis Tripp: /* Registration */</p>
<hr />
<div>The Horizon team is having their Newton mid-cycle/sprint in San Jose, California.<br />
<br />
* Where: 3750 Cisco Way, Building-15, San Jose, CA 95134<br />
* When: July 12 - 14, 2016<br />
* Topics: https://etherpad.openstack.org/p/horizon-newton-midcycle<br />
<br />
=== Registration ===<br />
{| class="wikitable sortable"<br />
|-<br />
! # !! Name !! IRC Nick !! Email<br />
|-<br />
| 0 || <!-- name --> || <!-- irc --> || <!-- email --><br />
|-<br />
| 1 || Rob Cresswell || robcresswell || robert.cresswell AT outlook DOT com<br />
|-<br />
| 2 || Thai Tran || tqtran || tqtran AT us DOT ibm DOT com<br />
|-<br />
| 3 || Richard Jones || r1chardj0n3s || r1chardj0n3s AT gmail DOT com<br />
|-<br />
| 4 || Brad Pokorny || bpokorny || brad_pokorny AT symantec DOT com<br />
|-<br />
| 5 || David Lyle || david-lyle || dklyle0 AT gmail DOT com<br />
|-<br />
| 6 || Matt Borland || matt-borland || matt.borland AT moc.eph REVERSED<br />
|-<br />
| 7 || Diana Whitten || hurgleburgler || hurgleburgler AT gmail DOT com<br />
|-<br />
| 8 || Daniel Castellanos || lcastell || luis DOT daniel DOT castellanos AT intel DOT com<br />
|- <br />
| 9 || Travis Tripp || TravT || travis.tripp AT moc.eph REVERSED<br />
|}</div>Travis Tripphttps://wiki.openstack.org/w/index.php?title=Horizon/RESTAPI&diff=124124Horizon/RESTAPI2016-04-19T03:37:34Z<p>Travis Tripp: added new postman collection links</p>
<hr />
<div><br />
== Horizon REST API ==<br />
Starting in Kilo, Horizon has a REST API that allows client-side code to make API calls through the Horizon server without Django server-side rendering. This is to support the Angular work being done in Horizon. It takes care of issues like cross-origin resource sharing, as well as leveraging the authentication and authorization libraries already in existence.<br />
<br />
=== THESE APIs ARE FOR THE EXCLUSIVE USE OF HORIZON DEVELOPMENT AND ARE NOT INTENDED FOR EXTERNAL USE AT THIS TIME ===<br />
They are very early in development and intended to support in-tree development. Until further notice, there will be no deprecation period if they need to change.<br />
<br />
=== Status ===<br />
<br />
==== Liberty ====<br />
Server side: [https://github.com/openstack/horizon/tree/master/openstack_dashboard/api/rest]<br />
Client side: [https://github.com/openstack/horizon/tree/master/openstack_dashboard/static/openstack-service-api]<br />
<br />
==== Kilo ====<br />
Summit Talk: [https://www.openstack.org/summit/vancouver-2015/summit-videos/presentation/beyond-the-horizon-innovating-and-customizing-horizon-using-angularjs]<br />
Server side: [https://github.com/openstack/horizon/tree/master/openstack_dashboard/api/rest]<br />
Client side: [https://github.com/openstack/horizon/tree/stable/kilo/horizon/static/horizon/js/angular/services]<br />
<br />
=== Testing ===<br />
<br />
To test the Horizon REST APIs using POSTMAN (Chrome plugin), log in normally at the exact API address (or localhost) that you'll use from POSTMAN. Once you've logged in, your browser will have an access cookie that it'll pass on with all its requests.<br />
<br />
You will also want to install the POSTMAN Interceptor and log into Horizon from your browser.<br />
<br />
==== GET requests ====<br />
Input the URL. <br />
<br />
For example: http://127.0.0.1:8005/api/glance/images<br />
<br />
Set the following header:<br />
{| class="wikitable"<br />
|-<br />
! Header !! Value<br />
|-<br />
| X-Requested-With || XMLHttpRequest<br />
|}<br />
<br />
GET requests should include a csrftoken cookie (e.g. csrftoken=onXnBfMqIxuFGr437P91Uuxdl09t2ykQ). Copy this value if you need to perform any POST requests.<br />
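<br />
Outside POSTMAN, the same GET setup can be reproduced from a script. The sketch below only builds the request using the Python standard library; the URL and cookie value are placeholders, and nothing here is an official Horizon client API.<br />
<br />
```python
# Sketch: build (without sending) the GET request described above.
# The URL and the cookie value are placeholders.
import urllib.request

req = urllib.request.Request(
    "http://127.0.0.1:8005/api/glance/images",
    headers={
        # Horizon's REST layer only answers AJAX-style requests.
        "X-Requested-With": "XMLHttpRequest",
        # Session cookie obtained by logging into Horizon in a browser first.
        "Cookie": "sessionid=PASTE_SESSION_COOKIE_HERE",
    },
)
# urllib.request.urlopen(req)  # uncomment against a live Horizon
```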
<br />
==== POST requests ====<br />
Input the URL. <br />
<br />
For example: http://127.0.0.1:8005/api/nova/keypairs/<br />
<br />
Set the following headers:<br />
{| class="wikitable"<br />
|-<br />
! Header !! Value<br />
|-<br />
| X-Requested-With || XMLHttpRequest<br />
|-<br />
| Content-Type || application/json<br />
|-<br />
| X-CSRFToken || the value of the csrftoken cookie you've seen in any GET request to the same address. You can obtain the token from the cookie sent via your REQUEST headers of the above GET call.<br />
|}<br />
<br />
Set the raw content as JSON formatted.<br />
<br />
Example:<br />
<br />
{<br />
"name":"foo" <br />
}<br />
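<br />
The POST recipe above can likewise be sketched as a script; the token and body values are the placeholder examples from this page, not real credentials.<br />
<br />
```python
# Sketch: build (without sending) the POST described above, using only the
# standard library. URL, token, and body values are placeholders.
import json
import urllib.request

csrf_token = "onXnBfMqIxuFGr437P91Uuxdl09t2ykQ"  # copied from the GET's cookie

req = urllib.request.Request(
    "http://127.0.0.1:8005/api/nova/keypairs/",
    data=json.dumps({"name": "foo"}).encode("utf-8"),
    headers={
        "X-Requested-With": "XMLHttpRequest",
        "Content-Type": "application/json",
        # Django checks this header against the csrftoken cookie it issued.
        "X-CSRFToken": csrf_token,
        "Cookie": "csrftoken=" + csrf_token,
    },
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment against a live Horizon
```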
<br />
==== Postman Collections ====<br />
<br />
Some POSTMAN collections to help with testing can be found below. The date on which they were verified to work is listed, and the collections follow.<br />
<br />
===== 2016-04-18 =====<br />
Cinder Horizon: https://www.getpostman.com/collections/342a35ae1063767049ed<br />
Glance Direct: https://www.getpostman.com/collections/2ec9be630bc6b860e31d<br />
Glance Horizon: https://www.getpostman.com/collections/0fb61d16402b0b2bb640<br />
Keystone Direct: https://www.getpostman.com/collections/a8364572e67c0f067c2e<br />
Keystone Horizon: https://www.getpostman.com/collections/3fe2767e50acf34893fc<br />
Network Abstraction Horizon: https://www.getpostman.com/collections/165869c840477c8b22a7<br />
Neutron Direct: https://www.getpostman.com/collections/96ced18cd6483318ccf1<br />
Neutron Horizon: https://www.getpostman.com/collections/f372f0480ada8fdb61ed<br />
Nova Direct: https://www.getpostman.com/collections/127fc0afe0dacd4806d5<br />
Nova Horizon: https://www.getpostman.com/collections/b5d0ed70a0c7b5f82d03<br />
Horizon Policy: https://www.getpostman.com/collections/dd65a2cc201169bcab91<br />
Searchlight Direct: https://www.getpostman.com/collections/4165d893505350d5761a<br />
Searchlight Horizon: https://www.getpostman.com/collections/2c1fa8a695303122ad7a<br />
<br />
Work with the following Postman environment at the bottom of this page.<br />
<br />
===== 2015-04-06 =====<br />
Keystone: https://www.getpostman.com/collections/311080eb87c7bca1b7b1<br />
Neutron: https://www.getpostman.com/collections/401a40ae526887e378ef<br />
Network: https://www.getpostman.com/collections/be29e7af5243d09d8b99<br />
Nova Direct: https://www.getpostman.com/collections/0b6e8b0eb23687bfeec0<br />
Nova Horizon: https://www.getpostman.com/collections/1c24ae62ea46c8a56791<br />
Policy: https://www.getpostman.com/collections/acf134dde5c77ad81a1b<br />
Cinder: https://www.getpostman.com/collections/6c7391cd25603c2218fe<br />
Config: https://www.getpostman.com/collections/b64d279e663de38897c8<br />
Glance: https://www.getpostman.com/collections/2dd5e5e2e849bf3880fc<br />
<br />
===== POSTMAN ENVIRONMENT =====<br />
<br />
<code><br />
{<br />
"name": "OpenStack @ 192.168.200.200",<br />
"values": [<br />
{<br />
"key": "IP",<br />
"value": "127.0.0.1",<br />
"type": "text",<br />
"name": "IP",<br />
"enabled": true<br />
},<br />
{<br />
"key": "HORIZON_PORT",<br />
"value": "8005",<br />
"type": "text",<br />
"name": "HORIZON_PORT",<br />
"enabled": true<br />
},<br />
{<br />
"key": "TOKEN",<br />
"value": "REPLACE_ME",<br />
"type": "text",<br />
"name": "TOKEN",<br />
"enabled": true<br />
},<br />
{<br />
"key": "KEYSTONE_PORT",<br />
"value": "5000",<br />
"type": "text",<br />
"name": "KEYSTONE_PORT",<br />
"enabled": true<br />
},<br />
{<br />
"key": "OS_TENANT_NAME",<br />
"value": "demo",<br />
"type": "text",<br />
"name": "OS_TENANT_NAME",<br />
"enabled": true<br />
},<br />
{<br />
"key": "OS_TENANT_ID",<br />
"value": "REPLACE_ME",<br />
"type": "text",<br />
"name": "OS_TENANT_ID",<br />
"enabled": true<br />
},<br />
{<br />
"key": "OS_USERNAME",<br />
"value": "admin",<br />
"type": "text",<br />
"name": "OS_USERNAME",<br />
"enabled": true<br />
},<br />
{<br />
"key": "OS_PASSWORD",<br />
"value": "REPLACE_ME",<br />
"type": "text",<br />
"name": "OS_PASSWORD",<br />
"enabled": true<br />
},<br />
{<br />
"key": "NOVA_V2_PORT",<br />
"value": "8774",<br />
"type": "text",<br />
"name": "NOVA_V2_PORT",<br />
"enabled": true<br />
},<br />
{<br />
"key": "SEARCHLIGHT_V1_PORT",<br />
"value": "9393",<br />
"type": "text",<br />
"name": "SEARCHLIGHT_V1_PORT",<br />
"enabled": true<br />
},<br />
{<br />
"key": "OS_DOMAIN_NAME",<br />
"value": "admin",<br />
"type": "text",<br />
"name": "OS_DOMAIN_NAME",<br />
"enabled": true<br />
},<br />
{<br />
"key": "OS_DOMAIN_ID",<br />
"value": null,<br />
"type": "text",<br />
"name": "OS_DOMAIN_ID",<br />
"enabled": true<br />
},<br />
{<br />
"key": "GLANCE_V2_PORT",<br />
"value": "9292",<br />
"type": "text",<br />
"name": "GLANCE_V2_PORT",<br />
"enabled": true<br />
},<br />
{<br />
"key": "HORIZON_LOCAL_IP",<br />
"value": "127.0.0.1",<br />
"type": "text",<br />
"name": "HORIZON_LOCAL_IP",<br />
"enabled": true<br />
},<br />
{<br />
"key": "SEARCHLIGHT_LOCAL_IP",<br />
"value": "127.0.0.1",<br />
"type": "text",<br />
"enabled": true<br />
}<br />
],<br />
"team": null,<br />
"timestamp": 1461035969136,<br />
"synced": false,<br />
"syncedFilename": "",<br />
"isDeleted": false<br />
}<br />
</code></div>Travis Tripphttps://wiki.openstack.org/w/index.php?title=Searchlight&diff=123904Searchlight2016-04-13T16:58:27Z<p>Travis Tripp: /* Overview */</p>
<hr />
<div><br />
= Mission Statement =<br />
<br />
To provide advanced and scalable indexing and search across multi-tenant cloud resources.<br />
<br />
== Project Links ==<br />
<br />
{| border="1" cellpadding="2"<br />
| Developer Documentation<br />
| http://docs.openstack.org/developer/searchlight/<br />
|-<br />
| Source code - API and Listener Services<br />
| https://github.com/openstack/searchlight<br />
|-<br />
| Source code - Horizon UI Plugin<br />
| https://github.com/openstack/searchlight-ui<br />
|-<br />
| Source code - Python Client<br />
| https://github.com/openstack/python-searchlightclient<br />
|-<br />
| Gerrit Reviews<br />
| https://review.openstack.org/#/q/project:%255E.*searchlight.*+status:open,n,z<br />
|-<br />
| Bug tracker<br />
| https://bugs.launchpad.net/searchlight<br />
|-<br />
| Feature tracker<br />
| https://blueprints.launchpad.net/searchlight<br />
|-<br />
| IRC<br />
| #openstack-searchlight<br />
|-<br />
| Meeting Times<br />
| http://eavesdrop.openstack.org/#Searchlight_Team_Meeting<br />
|-<br />
| Meeting Agenda<br />
| https://etherpad.openstack.org/p/search-team-meeting-agenda<br />
|-<br />
| Meeting Logs<br />
| http://eavesdrop.openstack.org/meetings/openstack_search/<br />
|}<br />
<br />
= Overview =<br />
<br />
Searchlight dramatically improves the user-focused search capabilities and performance on behalf of various OpenStack cloud services.<br />
<br />
It accomplishes this by offloading user search queries from existing API servers and indexing their data into ElasticSearch. ElasticSearch is a search server based on Lucene. It provides a distributed, scalable, near real-time, faceted, multitenant-capable full-text search engine with a RESTful web interface and schema-free JSON documents. ElasticSearch is developed and released as open source under the terms of the Apache License. Notable users of ElasticSearch include Wikimedia, StumbleUpon, Mozilla, Quora, Foursquare, Etsy, SoundCloud, GitHub, FDA, CERN, and Stack Exchange. (Source: http://en.wikipedia.org/wiki/Elasticsearch). The elastic-recheck project also uses Elasticsearch (and kibana) to classify and track OpenStack gate failures. (Source: http://status.openstack.org/elastic-recheck)<br />
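<br />
To make the faceted-search description above concrete, the sketch below shows the general shape of a boolean free-text-plus-filter query body that Elasticsearch accepts. The field names are illustrative examples, not a fixed Searchlight schema.<br />
<br />
```python
# Illustrative Elasticsearch-style query body: full-text match on a name
# field combined with an exact-match filter. Field names are examples only.
import json

query_body = {
    "query": {
        "bool": {
            "must": [{"match": {"name": "ubuntu"}}],     # free-text, scored
            "filter": [{"term": {"status": "active"}}],  # exact, unscored
        }
    },
    "size": 10,  # return at most ten hits
}

payload = json.dumps(query_body)
```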
<br />
=== Screencasts ===<br />
<br />
* (Mitaka) Integration with Horizon demo: https://www.youtube.com/watch?v=2feC1njvZe0<br />
* (Mitaka) End of Cycle Presentation on Horizon, CLI, and Searchlight: https://www.youtube.com/watch?v=ExzULavwvNQ<br />
* (Mitaka) Summit Presentation: https://www.youtube.com/watch?v=0jYXsK4j26s<br />
* (Liberty) PTL Overview: https://www.youtube.com/watch?v=yU5CrAOAlkA<br />
* (Kilo summit) Concept Demo: https://youtu.be/eGnGr48E5_4<br />
<br />
=== Design ===<br />
<br />
* http://docs.openstack.org/developer/searchlight/<br />
<br />
==== Concept Overview ====<br />
<br />
[[File:Searchlight-Concept-1.png]]<br />
<br />
==== Concept Internals Overview ====<br />
<br />
* [https://wiki.openstack.org/w/images/5/55/Searchlight-Concept-2.png Concept Internals]<br />
* [https://wiki.openstack.org/w/images/e/ef/Searchlight-use-when-there.png Usage switching]<br />
* [https://wiki.openstack.org/w/images/1/10/Searchlight-Concept-Horizon-Layers.png Horizon Concept Layers]<br />
* [https://wiki.openstack.org/w/images/6/6b/Searchlight-WebSocket-Concept.png Horizon Web Socket Concept]<br />
<br />
== Get Involved ==<br />
<br />
Please join with us to help move forward together as a community! We are sure that the ideas and concepts can use refinement and we'd like to identify where we can best fit in to the ecosystem.<br />
<br />
== History ==<br />
<br />
Searchlight was originally developed and released in the Kilo release of Glance as the Catalog Index Service [1]. At the Liberty Summit we decided to broaden the scope to provide advanced and scalable search across multi-tenant cloud resources.<br />
<br />
[1] http://specs.openstack.org/openstack/glance-specs/specs/kilo/catalog-index-service.html</div>Travis Tripphttps://wiki.openstack.org/w/index.php?title=Searchlight&diff=123163Searchlight2016-03-29T20:30:18Z<p>Travis Tripp: /* Project Links */</p>
<hr />
<div><br />
= Mission Statement =<br />
<br />
To provide advanced and scalable indexing and search across multi-tenant cloud resources.<br />
<br />
== Project Links ==<br />
<br />
{| border="1" cellpadding="2"<br />
| Developer Documentation<br />
| http://docs.openstack.org/developer/searchlight/<br />
|-<br />
| Source code - API and Listener Services<br />
| https://github.com/openstack/searchlight<br />
|-<br />
| Source code - Horizon UI Plugin<br />
| https://github.com/openstack/searchlight-ui<br />
|-<br />
| Source code - Python Client<br />
| https://github.com/openstack/python-searchlightclient<br />
|-<br />
| Gerrit Reviews<br />
| https://review.openstack.org/#/q/project:%255E.*searchlight.*+status:open,n,z<br />
|-<br />
| Bug tracker<br />
| https://bugs.launchpad.net/searchlight<br />
|-<br />
| Feature tracker<br />
| https://blueprints.launchpad.net/searchlight<br />
|-<br />
| IRC<br />
| #openstack-searchlight<br />
|-<br />
| Meeting Times<br />
| http://eavesdrop.openstack.org/#Searchlight_Team_Meeting<br />
|-<br />
| Meeting Agenda<br />
| https://etherpad.openstack.org/p/search-team-meeting-agenda<br />
|-<br />
| Meeting Logs<br />
| http://eavesdrop.openstack.org/meetings/openstack_search/<br />
|}<br />
<br />
= Overview =<br />
<br />
Searchlight is intended to dramatically improve the user-focused search capabilities and performance on behalf of various OpenStack cloud services.<br />
<br />
It accomplishes this by offloading user search queries from existing API servers and indexing their data into ElasticSearch. ElasticSearch is a search server based on Lucene. It provides a distributed, scalable, near real-time, faceted, multitenant-capable full-text search engine with a RESTful web interface and schema-free JSON documents. ElasticSearch is developed and released as open source under the terms of the Apache License. Notable users of ElasticSearch include Wikimedia, StumbleUpon, Mozilla, Quora, Foursquare, Etsy, SoundCloud, GitHub, FDA, CERN, and Stack Exchange. (Source: http://en.wikipedia.org/wiki/Elasticsearch). The elastic-recheck project also uses Elasticsearch (and kibana) to classify and track OpenStack gate failures. (Source: http://status.openstack.org/elastic-recheck)<br />
<br />
=== Screencasts ===<br />
<br />
* (Mitaka) Integration with Horizon demo: https://www.youtube.com/watch?v=2feC1njvZe0<br />
* (Mitaka) End of Cycle Presentation on Horizon, CLI, and Searchlight: https://www.youtube.com/watch?v=ExzULavwvNQ<br />
* (Mitaka) Summit Presentation: https://www.youtube.com/watch?v=0jYXsK4j26s<br />
* (Liberty) PTL Overview: https://www.youtube.com/watch?v=yU5CrAOAlkA<br />
* (Kilo summit) Concept Demo: https://youtu.be/eGnGr48E5_4<br />
<br />
=== Design ===<br />
<br />
* http://docs.openstack.org/developer/searchlight/<br />
<br />
==== Concept Overview ====<br />
<br />
[[File:Searchlight-Concept-1.png]]<br />
<br />
==== Concept Internals Overview ====<br />
<br />
* [https://wiki.openstack.org/w/images/5/55/Searchlight-Concept-2.png Concept Internals]<br />
* [https://wiki.openstack.org/w/images/e/ef/Searchlight-use-when-there.png Usage switching]<br />
* [https://wiki.openstack.org/w/images/1/10/Searchlight-Concept-Horizon-Layers.png Horizon Concept Layers]<br />
* [https://wiki.openstack.org/w/images/6/6b/Searchlight-WebSocket-Concept.png Horizon Web Socket Concept]<br />
<br />
== Get Involved ==<br />
<br />
Please join with us to help move forward together as a community! We are sure that the ideas and concepts can use refinement and we'd like to identify where we can best fit in to the ecosystem.<br />
<br />
== History ==<br />
<br />
Searchlight was originally developed and released in the Kilo release of Glance as the Catalog Index Service [1]. At the Liberty Summit we decided to broaden the scope to provide advanced and scalable search across multi-tenant cloud resources.<br />
<br />
[1] http://specs.openstack.org/openstack/glance-specs/specs/kilo/catalog-index-service.html</div>Travis Tripphttps://wiki.openstack.org/w/index.php?title=Searchlight&diff=122973Searchlight2016-03-24T23:09:06Z<p>Travis Tripp: /* Screencasts */</p>
<hr />
<div><br />
= Mission Statement =<br />
<br />
To provide advanced and scalable indexing and search across multi-tenant cloud resources.<br />
<br />
== Project Links ==<br />
<br />
{| border="1" cellpadding="2"<br />
| Developer Documentation<br />
| http://docs.openstack.org/developer/searchlight/<br />
|-<br />
| Source code<br />
| https://github.com/openstack/searchlight<br />
|-<br />
| Gerrit Reviews<br />
| https://review.openstack.org/#/q/status:open+project:openstack/searchlight,n,z<br />
|-<br />
| Bug tracker<br />
| https://bugs.launchpad.net/searchlight<br />
|-<br />
| Feature tracker (see Glance for historical tracking)<br />
| https://blueprints.launchpad.net/searchlight<br />
|-<br />
| IRC<br />
| #openstack-searchlight<br />
|-<br />
| Meeting Times<br />
| http://eavesdrop.openstack.org/#Searchlight_Team_Meeting<br />
|-<br />
| Meeting Agenda<br />
| https://etherpad.openstack.org/p/search-team-meeting-agenda<br />
|-<br />
| Meeting Logs<br />
| http://eavesdrop.openstack.org/meetings/openstack_search/<br />
|}<br />
<br />
= Overview =<br />
<br />
Searchlight is intended to dramatically improve the user-focused search capabilities and performance on behalf of various OpenStack cloud services.<br />
<br />
It accomplishes this by offloading user search queries from existing API servers and indexing their data into ElasticSearch. ElasticSearch is a search server based on Lucene. It provides a distributed, scalable, near real-time, faceted, multitenant-capable full-text search engine with a RESTful web interface and schema-free JSON documents. ElasticSearch is developed and released as open source under the terms of the Apache License. Notable users of ElasticSearch include Wikimedia, StumbleUpon, Mozilla, Quora, Foursquare, Etsy, SoundCloud, GitHub, FDA, CERN, and Stack Exchange. (Source: http://en.wikipedia.org/wiki/Elasticsearch). The elastic-recheck project also uses Elasticsearch (and kibana) to classify and track OpenStack gate failures. (Source: http://status.openstack.org/elastic-recheck)<br />
<br />
=== Screencasts ===<br />
<br />
* (Mitaka) Integration with Horizon demo: https://www.youtube.com/watch?v=2feC1njvZe0<br />
* (Mitaka) End of Cycle Presentation on Horizon, CLI, and Searchlight: https://www.youtube.com/watch?v=ExzULavwvNQ<br />
* (Mitaka) Summit Presentation: https://www.youtube.com/watch?v=0jYXsK4j26s<br />
* (Liberty) PTL Overview: https://www.youtube.com/watch?v=yU5CrAOAlkA<br />
* (Kilo summit) Concept Demo: https://youtu.be/eGnGr48E5_4<br />
<br />
=== Design ===<br />
<br />
* http://docs.openstack.org/developer/searchlight/<br />
<br />
==== Concept Overview ====<br />
<br />
[[File:Searchlight-Concept-1.png]]<br />
<br />
==== Concept Internals Overview ====<br />
<br />
* [https://wiki.openstack.org/w/images/5/55/Searchlight-Concept-2.png Concept Internals]<br />
* [https://wiki.openstack.org/w/images/e/ef/Searchlight-use-when-there.png Usage switching]<br />
* [https://wiki.openstack.org/w/images/1/10/Searchlight-Concept-Horizon-Layers.png Horizon Concept Layers]<br />
* [https://wiki.openstack.org/w/images/6/6b/Searchlight-WebSocket-Concept.png Horizon Web Socket Concept]<br />
<br />
== Get Involved ==<br />
<br />
Please join with us to help move forward together as a community! We are sure that the ideas and concepts can use refinement and we'd like to identify where we can best fit in to the ecosystem.<br />
<br />
== History ==<br />
<br />
Searchlight was originally developed and released in the Kilo release of Glance as the Catalog Index Service [1]. At the Liberty Summit we decided to broaden the scope to provide advanced and scalable search across multi-tenant cloud resources.<br />
<br />
[1] http://specs.openstack.org/openstack/glance-specs/specs/kilo/catalog-index-service.html</div>Travis Tripphttps://wiki.openstack.org/w/index.php?title=Searchlight&diff=122972Searchlight2016-03-24T23:07:54Z<p>Travis Tripp: </p>
<hr />
<div><br />
= Mission Statement =<br />
<br />
To provide advanced and scalable indexing and search across multi-tenant cloud resources.<br />
<br />
== Project Links ==<br />
<br />
{| border="1" cellpadding="2"<br />
| Developer Documentation<br />
| http://docs.openstack.org/developer/searchlight/<br />
|-<br />
| Source code<br />
| https://github.com/openstack/searchlight<br />
|-<br />
| Gerrit Reviews<br />
| https://review.openstack.org/#/q/status:open+project:openstack/searchlight,n,z<br />
|-<br />
| Bug tracker<br />
| https://bugs.launchpad.net/searchlight<br />
|-<br />
| Feature tracker (see Glance for historical tracking)<br />
| https://blueprints.launchpad.net/searchlight<br />
|-<br />
| IRC<br />
| #openstack-searchlight<br />
|-<br />
| Meeting Times<br />
| http://eavesdrop.openstack.org/#Searchlight_Team_Meeting<br />
|-<br />
| Meeting Agenda<br />
| https://etherpad.openstack.org/p/search-team-meeting-agenda<br />
|-<br />
| Meeting Logs<br />
| http://eavesdrop.openstack.org/meetings/openstack_search/<br />
|}<br />
<br />
= Overview =<br />
<br />
Searchlight is intended to dramatically improve the user-focused search capabilities and performance on behalf of various OpenStack cloud services.<br />
<br />
It accomplishes this by offloading user search queries from existing API servers and indexing their data into ElasticSearch. ElasticSearch is a search server based on Lucene. It provides a distributed, scalable, near real-time, faceted, multitenant-capable full-text search engine with a RESTful web interface and schema-free JSON documents. ElasticSearch is developed and released as open source under the terms of the Apache License. Notable users of ElasticSearch include Wikimedia, StumbleUpon, Mozilla, Quora, Foursquare, Etsy, SoundCloud, GitHub, FDA, CERN, and Stack Exchange. (Source: http://en.wikipedia.org/wiki/Elasticsearch). The elastic-recheck project also uses Elasticsearch (and kibana) to classify and track OpenStack gate failures. (Source: http://status.openstack.org/elastic-recheck)<br />
<br />
=== Screencasts ===<br />
<br />
* (Mitaka) Integration with Horizon demo: https://www.youtube.com/watch?v=2feC1njvZe0<br />
* (Mitaka) Bugsmash Day Presentation on Horizon, CLI, and Searchlight: https://www.youtube.com/watch?v=ExzULavwvNQ<br />
* (Mitaka) Summit Presentation: https://www.youtube.com/watch?v=0jYXsK4j26s<br />
* (Liberty) PTL Overview: https://www.youtube.com/watch?v=yU5CrAOAlkA<br />
* (Kilo summit) Concept Demo: https://youtu.be/eGnGr48E5_4<br />
<br />
=== Design ===<br />
<br />
* http://docs.openstack.org/developer/searchlight/<br />
<br />
==== Concept Overview ====<br />
<br />
[[File:Searchlight-Concept-1.png]]<br />
<br />
==== Concept Internals Overview ====<br />
<br />
* [https://wiki.openstack.org/w/images/5/55/Searchlight-Concept-2.png Concept Internals]<br />
* [https://wiki.openstack.org/w/images/e/ef/Searchlight-use-when-there.png Usage switching]<br />
* [https://wiki.openstack.org/w/images/1/10/Searchlight-Concept-Horizon-Layers.png Horizon Concept Layers]<br />
* [https://wiki.openstack.org/w/images/6/6b/Searchlight-WebSocket-Concept.png Horizon Web Socket Concept]<br />
<br />
== Get Involved ==<br />
<br />
Please join us and help move forward together as a community! We are sure the ideas and concepts can use refinement, and we'd like to identify where we can best fit into the ecosystem.<br />
<br />
== History ==<br />
<br />
Searchlight was originally developed and released in the Kilo release of Glance as the Catalog Index Service [1]. At the Liberty Summit we decided to broaden the scope to provide advanced and scalable search across multi-tenant cloud resources.<br />
<br />
[1] http://specs.openstack.org/openstack/glance-specs/specs/kilo/catalog-index-service.html</div>Travis Tripphttps://wiki.openstack.org/w/index.php?title=Searchlight&diff=122969Searchlight2016-03-24T22:10:15Z<p>Travis Tripp: /* Screencasts */</p>
<hr />
<div><br />
= Mission Statement =<br />
<br />
To provide advanced and scalable indexing and search across multi-tenant cloud resources.<br />
<br />
== Project Links ==<br />
<br />
{| border="1" cellpadding="2"<br />
| Developer Documentation<br />
| http://docs.openstack.org/developer/searchlight/<br />
|-<br />
| Source code<br />
| https://github.com/openstack/searchlight<br />
|-<br />
| Gerrit Reviews<br />
| https://review.openstack.org/#/q/status:open+project:openstack/searchlight,n,z<br />
|-<br />
| Bug tracker<br />
| https://bugs.launchpad.net/searchlight<br />
|-<br />
| Feature tracker (see Glance for historical tracking)<br />
| https://blueprints.launchpad.net/searchlight<br />
|-<br />
| IRC<br />
| #openstack-searchlight<br />
|-<br />
| Meeting Times<br />
| http://eavesdrop.openstack.org/#Searchlight_Team_Meeting<br />
|-<br />
| Meeting Agenda<br />
| https://etherpad.openstack.org/p/search-team-meeting-agenda<br />
|-<br />
| Meeting Logs<br />
| http://eavesdrop.openstack.org/meetings/openstack_search/<br />
|}<br />
<br />
= Overview =<br />
<br />
Searchlight was originally developed and released in the Kilo release of Glance as the Catalog Index Service [1]. At the Liberty Summit we decided to broaden the scope to provide advanced and scalable search across multi-tenant cloud resources.<br />
<br />
[1] http://specs.openstack.org/openstack/glance-specs/specs/kilo/catalog-index-service.html<br />
<br />
This is intended to dramatically improve the search capabilities and performance of various OpenStack cloud services.<br />
<br />
It accomplishes this by offloading user search queries from existing API servers by indexing their data into Elasticsearch. Elasticsearch is a search server based on Lucene. It provides a distributed, scalable, near real-time, faceted, multitenant-capable full-text search engine with a RESTful web interface and schema-free JSON documents. Elasticsearch is developed and released as open source under the terms of the Apache License. Notable users of Elasticsearch include Wikimedia, StumbleUpon, Mozilla, Quora, Foursquare, Etsy, SoundCloud, GitHub, FDA, CERN, and Stack Exchange. (Source: http://en.wikipedia.org/wiki/Elasticsearch). The elastic-recheck project also uses Elasticsearch (and Kibana) to classify and track OpenStack gate failures. (Source: http://status.openstack.org/elastic-recheck)<br />
<br />
=== Screencasts ===<br />
<br />
* (Mitaka) Integration with Horizon demo: https://www.youtube.com/watch?v=2feC1njvZe0<br />
* (Mitaka) Bugsmash Day Presentation on Horizon, CLI, and Searchlight: https://www.youtube.com/watch?v=ExzULavwvNQ<br />
* (Mitaka) Summit Presentation: https://www.youtube.com/watch?v=0jYXsK4j26s<br />
* (Liberty) PTL Overview: https://www.youtube.com/watch?v=yU5CrAOAlkA<br />
* (Kilo summit) Concept Demo: https://youtu.be/eGnGr48E5_4<br />
<br />
=== Design ===<br />
<br />
The design is based on the Catalog Index Service in Glance. It will be refined moving forward as cross-project needs are discovered and defined.<br />
<br />
* Glance Specification: http://specs.openstack.org/openstack/glance-specs/specs/kilo/catalog-index-service.html<br />
<br />
==== Concept Overview ====<br />
<br />
[[File:Searchlight-Concept-1.png]]<br />
<br />
==== Concept Internals Overview ====<br />
<br />
* [https://wiki.openstack.org/w/images/5/55/Searchlight-Concept-2.png Concept Internals]<br />
* [https://wiki.openstack.org/w/images/e/ef/Searchlight-use-when-there.png Usage switching]<br />
* [https://wiki.openstack.org/w/images/1/10/Searchlight-Concept-Horizon-Layers.png Horizon Concept Layers]<br />
* [https://wiki.openstack.org/w/images/6/6b/Searchlight-WebSocket-Concept.png Horizon Web Socket Concept]<br />
<br />
== Get Involved ==<br />
<br />
Please join us and help move forward together as a community! We are sure the ideas and concepts can use refinement, and we'd like to identify where we can best fit into the ecosystem.</div>Travis Tripphttps://wiki.openstack.org/w/index.php?title=Searchlight&diff=122968Searchlight2016-03-24T22:09:04Z<p>Travis Tripp: /* Screencasts */</p>
<hr />
<div><br />
= Mission Statement =<br />
<br />
To provide advanced and scalable indexing and search across multi-tenant cloud resources.<br />
<br />
== Project Links ==<br />
<br />
{| border="1" cellpadding="2"<br />
| Developer Documentation<br />
| http://docs.openstack.org/developer/searchlight/<br />
|-<br />
| Source code<br />
| https://github.com/openstack/searchlight<br />
|-<br />
| Gerrit Reviews<br />
| https://review.openstack.org/#/q/status:open+project:openstack/searchlight,n,z<br />
|-<br />
| Bug tracker<br />
| https://bugs.launchpad.net/searchlight<br />
|-<br />
| Feature tracker (see Glance for historical tracking)<br />
| https://blueprints.launchpad.net/searchlight<br />
|-<br />
| IRC<br />
| #openstack-searchlight<br />
|-<br />
| Meeting Times<br />
| http://eavesdrop.openstack.org/#Searchlight_Team_Meeting<br />
|-<br />
| Meeting Agenda<br />
| https://etherpad.openstack.org/p/search-team-meeting-agenda<br />
|-<br />
| Meeting Logs<br />
| http://eavesdrop.openstack.org/meetings/openstack_search/<br />
|}<br />
<br />
= Overview =<br />
<br />
Searchlight was originally developed and released in the Kilo release of Glance as the Catalog Index Service [1]. At the Liberty Summit we decided to broaden the scope to provide advanced and scalable search across multi-tenant cloud resources.<br />
<br />
[1] http://specs.openstack.org/openstack/glance-specs/specs/kilo/catalog-index-service.html<br />
<br />
This is intended to dramatically improve the search capabilities and performance of various OpenStack cloud services.<br />
<br />
It accomplishes this by offloading user search queries from existing API servers by indexing their data into Elasticsearch. Elasticsearch is a search server based on Lucene. It provides a distributed, scalable, near real-time, faceted, multitenant-capable full-text search engine with a RESTful web interface and schema-free JSON documents. Elasticsearch is developed and released as open source under the terms of the Apache License. Notable users of Elasticsearch include Wikimedia, StumbleUpon, Mozilla, Quora, Foursquare, Etsy, SoundCloud, GitHub, FDA, CERN, and Stack Exchange. (Source: http://en.wikipedia.org/wiki/Elasticsearch). The elastic-recheck project also uses Elasticsearch (and Kibana) to classify and track OpenStack gate failures. (Source: http://status.openstack.org/elastic-recheck)<br />
<br />
=== Screencasts ===<br />
<br />
* Mitaka Integration with Horizon UI demo: https://www.youtube.com/watch?v=2feC1njvZe0<br />
* Mitaka Bugsmash Presentation on Horizon, CLI, and Searchlight: https://www.youtube.com/watch?v=ExzULavwvNQ<br />
* Mitaka Summit Presentation: https://www.youtube.com/watch?v=0jYXsK4j26s<br />
* Liberty PTL Overview: https://www.youtube.com/watch?v=yU5CrAOAlkA<br />
* Concept Demo (Kilo summit): https://youtu.be/eGnGr48E5_4<br />
<br />
=== Design ===<br />
<br />
The design is based on the Catalog Index Service in Glance. It will be refined moving forward as cross-project needs are discovered and defined.<br />
<br />
* Glance Specification: http://specs.openstack.org/openstack/glance-specs/specs/kilo/catalog-index-service.html<br />
<br />
==== Concept Overview ====<br />
<br />
[[File:Searchlight-Concept-1.png]]<br />
<br />
==== Concept Internals Overview ====<br />
<br />
* [https://wiki.openstack.org/w/images/5/55/Searchlight-Concept-2.png Concept Internals]<br />
* [https://wiki.openstack.org/w/images/e/ef/Searchlight-use-when-there.png Usage switching]<br />
* [https://wiki.openstack.org/w/images/1/10/Searchlight-Concept-Horizon-Layers.png Horizon Concept Layers]<br />
* [https://wiki.openstack.org/w/images/6/6b/Searchlight-WebSocket-Concept.png Horizon Web Socket Concept]<br />
<br />
== Get Involved ==<br />
<br />
Please join us and help move forward together as a community! We are sure the ideas and concepts can use refinement, and we'd like to identify where we can best fit into the ecosystem.</div>Travis Tripphttps://wiki.openstack.org/w/index.php?title=Internship_ideas&diff=104583Internship ideas2016-02-19T19:16:33Z<p>Travis Tripp: /* Coding */</p>
<hr />
<div><!-- ## page was renamed from [[GnomeOutreachWomen]]/Ideas --><br />
<br />
To submit new ideas, please consider creating a new page using the [[Template:InternshipIdea]] (instructions are provided on that page); you can see what a [[Test idea|sample idea page]] would look like. The pages created with this template are listed on [[:Category:Internship_idea]].<br />
<br />
= List of Ideas for Internships =<br />
<br />
The OpenStack Foundation has multiple sources for internships, from [[Outreachy]] to [[:Category:GSoC|Google Summer of Code]] and other opportunities. This page collects the ideas for candidate interns to work on. <br />
<br />
Applicants may never have worked on FLOSS before and will have different levels of competence. Since we have different programs, add ideas here that can be completed by inexperienced contributors, developers, or people in other fields (marketing, communication, graphic design, and anything else that may be useful for OpenStack and helps include new people in this community).<br />
<br />
== Coding ==<br />
<br />
{{InternshipIdea|<br />
TITLE=Murano - Murano package validation tool |<br />
DESCRIPTION=Murano cloud-ready applications are written in the MuranoPL language and are packaged into zip archives with certain obligatory prerequisites (manifest file, package structure, etc.). A Murano package verification tool would greatly speed up development and debugging of such apps. The package verification tool should verify that the package structure is correct and that all the class files mentioned in the manifest file are present. It should also include MuranoPL linting code, to speed up MuranoPL app development. Finally, once the tool is ready, we should implement checks at import level (when importing a package to murano-api) and jobs at commit time for murano-apps |<br />
DIFFICULTY=Medium |<br />
TOPICS=Murano |<br />
SKILLS=Python |<br />
EXTRA_SKILLS=YAML |<br />
MENTORS=kzaitsev |<br />
STATUS=Not Started |<br />
PROGRAM=Google Summer of Code 2016 / Outreach May-Aug 2016<br />
}}<br />
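The structural checks described above can be sketched in a few lines of Python. This is only an illustration under assumed packaging conventions (a manifest.yaml at the archive root, classes under Classes/), not the real tool:

```python
import io
import zipfile

# Minimal sketch of a Murano package structure check. The expected layout
# (manifest.yaml at the root, class files under Classes/) is an assumption
# made for illustration; the proposed tool would also lint MuranoPL code.

def validate_package(data):
    """Return a list of problems found in a zipped Murano package."""
    problems = []
    with zipfile.ZipFile(io.BytesIO(data)) as zf:
        names = zf.namelist()
        if "manifest.yaml" not in names:
            problems.append("missing manifest.yaml")
        if not any(n.startswith("Classes/") for n in names):
            problems.append("no Classes/ directory")
    return problems

# Build a sample in-memory package and validate it.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("manifest.yaml", "FullName: io.example.App")
    zf.writestr("Classes/app.yaml", "Name: App")
print(validate_package(buf.getvalue()))  # → []
```

A real tool would go further, parsing the manifest and cross-checking every class file it references against the archive contents.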
<br />
{{InternshipIdea|<br />
TITLE=Neutron - Metering agent add port statistics |<br />
DESCRIPTION=Neutron's metering agent collects statistics regarding bandwidth usage. Right now it only measures the bandwidth used by routers. The idea is to extend it to provide statistics for ports as well. The first implementation will support only openvswitch, since we will use openvswitch tools to get the port statistics. The first step will be getting familiar with the metering agent and with Neutron in general. Then you will approach the openvswitch tools and think about how to use them for this project. After that you can reach out to the community to collect and discuss ideas. Neutron folks are pretty active on the #openstack-neutron channel most of the time and would be willing to share their opinions on this or any other project. You'll submit your code upstream and address the comments you get until your patch gets merged. |<br />
DIFFICULTY=Medium-Advanced |<br />
TOPICS=Neutron |<br />
SKILLS=Python |<br />
EXTRA_SKILLS=Networking, OVS |<br />
MENTORS=rossella_s |<br />
STATUS=None |<br />
PROGRAM=Google Summer of Code 2016 / Outreach May-Aug 2016<br />
}}<br />
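As a rough illustration of using openvswitch tooling for this, per-port counters could come from parsing `ovs-ofctl dump-ports` output. The sample text and parsing below are a sketch; treat the exact output layout as an assumption, and note that a real agent would shell out to ovs-ofctl (or query OVSDB) rather than use a hard-coded sample:

```python
import re

# Sketch of extracting per-port byte counters from `ovs-ofctl dump-ports`
# output. SAMPLE mirrors the usual format, but the exact layout is an
# assumption for illustration purposes.
SAMPLE = """OFPST_PORT reply (xid=0x2): 2 ports
  port  1: rx pkts=10, bytes=1500, drop=0, errs=0, frame=0, over=0, crc=0
           tx pkts=20, bytes=3000, drop=0, errs=0, coll=0
  port  2: rx pkts=5, bytes=700, drop=0, errs=0, frame=0, over=0, crc=0
           tx pkts=6, bytes=900, drop=0, errs=0, coll=0
"""

def parse_port_bytes(text):
    """Map port number -> {'rx_bytes': n, 'tx_bytes': n}."""
    stats = {}
    port = None
    for line in text.splitlines():
        m = re.search(r"port\s+(\d+):", line)
        if m:
            port = int(m.group(1))
            stats[port] = {}
        b = re.search(r"(rx|tx) pkts=\d+, bytes=(\d+)", line)
        if b and port is not None:
            stats[port][b.group(1) + "_bytes"] = int(b.group(2))
    return stats

print(parse_port_bytes(SAMPLE)[1])  # → {'rx_bytes': 1500, 'tx_bytes': 3000}
```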
<br />
{{InternshipIdea|<br />
TITLE=Neutron - ovsdb client monitor for Windows |<br />
DESCRIPTION=The OVS agent monitors the ports that are added on the compute host so it can wire them correctly. In Linux it uses the InterfacePollingMinimizer class, which notifies the agent when a port is plugged or unplugged and passes the related events (port added or deleted). For Windows it uses the AlwaysPoll class, which doesn't notify any specific event; it always returns true. The OVS agent in Windows is therefore forced to rescan the devices currently on the machine to infer which were added. This is because the current Windows implementation of the interface polling manager doesn't use the ovsdb client monitor. The aim of this project is to use the ovsdb client monitor for Windows as well and make sure the events are passed correctly to the OVS agent. This will improve performance and enable some cleanup in the OVS agent code. The first step is getting familiar with the OVS agent and with Neutron in general. Then you will approach the openvswitch tools and investigate how to use the ovsdb monitor client on Windows. Neutron folks are pretty active on the #openstack-neutron channel most of the time and would be willing to share their opinions on this or any other project. You'll submit your code upstream and address the comments you get until your patch gets merged. |<br />
DIFFICULTY=Medium |<br />
TOPICS=Neutron |<br />
SKILLS=Python |<br />
EXTRA_SKILLS=Networking, OVS |<br />
MENTORS=rossella_s |<br />
STATUS=None |<br />
PROGRAM=Google Summer of Code 2016 / Outreach May-Aug 2016<br />
}}<br />
<br />
{{InternshipIdea|<br />
TITLE=Glance - Extended support for requests library |<br />
DESCRIPTION= You will learn about glance-replicator, glance_store drivers and, if time permits, other modules. You are then expected to add support for the requests library ( https://pypi.python.org/pypi/requests ), starting with glance-replicator. Although supporting more than glance-replicator is not expected, the more support you add, the better your internship will be. You will also help with bug triage and optimizations around that code base as you add more support. |<br />
DIFFICULTY=Medium |<br />
TOPICS=Glance |<br />
SKILLS=Python |<br />
EXTRA_SKILLS=Good communication skills |<br />
MENTORS=nikhil |<br />
STATUS=Open |<br />
PROGRAM=Google Summer of Code 2016 / Outreach May-Aug 2016<br />
}}<br />
<br />
{{InternshipIdea|<br />
TITLE=Glance - Develop a python based GLARE (GLance Artifacts REpository) client library and shell API |<br />
DESCRIPTION= You will learn how Python-based clients are developed in the OpenStack realm. You will be responsible for working closely with the Glare drivers to understand the requirements and API evolution, and for contributing ideas to the development of the Glare API. You should be able to set up the basic build structure, common interfaces, setup configs, and infra jobs for the glareclient. You will coordinate with the Glare drivers and infra team to set up repositories, documentation, and test jobs for releases of this client. Also, based on the outcome of and feedback from the Glare API discussions, you will be responsible for continuing to evolve the client library. |<br />
DIFFICULTY=Medium-Advanced |<br />
TOPICS=Glance |<br />
SKILLS=Python |<br />
EXTRA_SKILLS=Shell scripting & packaging, good communication skills |<br />
MENTORS=nikhil |<br />
STATUS=Open |<br />
PROGRAM=Google Summer of Code 2016 / Outreach May-Aug 2016<br />
}}<br />
<br />
{{InternshipIdea|<br />
TITLE=Searchlight - Extend automated functional testing for Searchlight plugins / Improve existing plugins |<br />
DESCRIPTION= Searchlight is intended to dramatically improve the search capabilities and performance of various OpenStack cloud services by indexing their data into ElasticSearch using plugins. You will learn the Searchlight fundamentals, including indexing, searching, faceting, security, etc. You will also learn the APIs and data models of the various other OpenStack services that are indexed into Searchlight. You will understand the plugin design and then explore all of the functional testing aspects mentioned above. You will be responsible for implementing complete functional test coverage for one large or two medium-sized plugins. You will improve the existing plugins for any bugs or improvements you discover. |<br />
DIFFICULTY=Medium |<br />
TOPICS=Searchlight |<br />
SKILLS=Python |<br />
EXTRA_SKILLS=Basic understanding of ElasticSearch, good communication skills |<br />
MENTORS=nikhil |<br />
STATUS=Open |<br />
PROGRAM=Google Summer of Code 2016 / Outreach May-Aug 2016<br />
}}<br />
<br />
{{InternshipIdea|<br />
TITLE=OSprofiler - Implement new storage drivers for OSprofiler / add OSprofiler support to other OpenStack projects |<br />
DESCRIPTION= OSprofiler is an Oslo library that allows tracing cross-project requests and identifying OpenStack performance bottlenecks by understanding how much time was spent on each request stage, how many requests were made, etc. There is plenty of development work here, including writing new storage drivers for OSprofiler and integrating it into other OpenStack projects. |<br />
DIFFICULTY=Medium |<br />
TOPICS=OSprofiler |<br />
SKILLS=Python|<br />
MENTORS=DinaBelova |<br />
STATUS=Open |<br />
PROGRAM=Google Summer of Code 2016 / Outreach May-Aug 2016<br />
}}<br />
<br />
{{InternshipIdea|<br />
TITLE=Magnum - Container Service for Magnum's Kubernetes Orchestration Engine |<br />
DESCRIPTION= Magnum's client has several actions (create/delete/exec/logs/pause/reboot/start/stop/unpause) for containers; currently these commands work only for the Docker Swarm COE. When an operator deploys a bay with either the Kubernetes COE or the Mesos COE, this command-line functionality is not available to the operator because there is no backend support for these operations. In this project, we will first add a concrete implementation of the Container Service that calls the Kubernetes API appropriately, then we will make sure that Magnum's client command lines work properly against it, just as they do when the operator deploys a bay using the Swarm COE. |<br />
DIFFICULTY=Medium |<br />
TOPICS=Magnum |<br />
SKILLS=Python|<br />
MENTORS=Dims |<br />
STATUS=Open |<br />
PROGRAM=Google Summer of Code 2016 / Outreach May-Aug 2016<br />
}}<br />
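A minimal sketch of the action-to-API mapping such a backend would need is below. The Kubernetes pod endpoints shown are the standard v1 API paths, but the mapping itself (and the helper name) is hypothetical illustration, not Magnum code:

```python
# Sketch of how a few Magnum container actions might map onto Kubernetes
# API calls. The mapping and helper name are illustrative assumptions; the
# real implementation would live in Magnum's Kubernetes COE backend.

def k8s_request(action, namespace, pod):
    """Return an (HTTP method, path) pair for a container action against
    the Kubernetes v1 API. Only a few read/delete actions are sketched."""
    base = "/api/v1/namespaces/%s/pods" % namespace
    mapping = {
        "show": ("GET", "%s/%s" % (base, pod)),
        "logs": ("GET", "%s/%s/log" % (base, pod)),
        "delete": ("DELETE", "%s/%s" % (base, pod)),
    }
    if action not in mapping:
        raise ValueError("unsupported action: %s" % action)
    return mapping[action]

print(k8s_request("logs", "default", "web-1"))
# → ('GET', '/api/v1/namespaces/default/pods/web-1/log')
```

Actions such as exec would additionally need a streaming connection, which is why a concrete Container Service implementation per COE is required.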
<br />
== Documentation ==<br />
<br />
{{InternshipIdea|<br />
TITLE=Performance-docs - Add missing sections to http://docs.openstack.org/developer/performance-docs/# and identify the documentation gaps |<br />
DESCRIPTION= Performance-docs is a fairly new initiative led and pushed by the OpenStack Performance Working Group - https://wiki.openstack.org/wiki/Performance_Team - and we really need your help to add test results, descriptions of topologies and environments, etc., to make this resource valuable for the whole community. |<br />
DIFFICULTY=Medium |<br />
TOPICS=Performance-docs |<br />
SKILLS=Good English and great communication skills to collect the information|<br />
MENTORS=DinaBelova |<br />
STATUS=Open |<br />
PROGRAM=Google Summer of Code 2016 / Outreach May-Aug 2016<br />
}}<br />
<br />
[[Past internship ideas]]<br />
<br />
[[Category: Internship]]</div>Travis Tripphttps://wiki.openstack.org/w/index.php?title=CrossProjectLiaisons&diff=101803CrossProjectLiaisons2016-01-21T15:56:56Z<p>Travis Tripp: /* Vulnerability management */</p>
<hr />
<div>Many of our cross-project teams need focused help for communicating with the other project teams. This page lists the people who have volunteered for that work.<br />
<br />
== Oslo ==<br />
<br />
There are now more projects consuming code from the Oslo incubator than we have Oslo contributors. That means we are going to need your help to make these migrations happen. We are asking for one person from each project to serve as a liaison between the project and Oslo, and to assist with integrating changes as we move code out of the incubator into libraries.<br />
<br />
* The liaison should be active in the project and familiar with the project-specific requirements for having patches accepted, but does not need to be a core reviewer or the PTL.<br />
* The liaison should be prepared to assist with writing and reviewing patches in their project as libraries are adopted, and with discussions of API changes to the libraries to make them easier to use within the project.<br />
* Liaisons should pay attention to [Oslo] tagged messages on the openstack-dev mailing list.<br />
* It is also useful for liaisons to be able to attend the Oslo team meeting ([[Meetings/Oslo]]) to participate in discussions and raise issues for real-time discussion.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Barbican || Douglas Mendizábal || redrobot<br />
|-<br />
| Ceilometer || Julien Danjou || jd__<br />
|-<br />
| Cinder || Jay Bryant || jungleboyj<br />
|-<br />
| Congress || Tim Hinrichs || thinrichs<br />
|-<br />
| Cue || Min Pae || sputnik13<br />
|-<br />
| Designate || || <br />
|-<br />
| Glance || Flavio Percoco || flaper87<br />
|-<br />
| Heat || Thomas Herve || therve<br />
|-<br />
| Horizon || Kirill Zaitsev || kzaitsev_<br />
|-<br />
| Ironic || Lin Tan || lintan<br />
|-<br />
| Keystone || Brant Knudson || bknudson<br />
|-<br />
| Manila || Thomas Bechtold || toabctl<br />
|-<br />
| Murano || Kirill Zaitsev || kzaitsev_<br />
|-<br />
| Neutron || Ihar Hrachyshka || ihrachyshka<br />
|-<br />
| Nova || ChangBo Guo <glongwave@gmail.com> || gcb<br />
|-<br />
| [[Octavia]] || Michael Johnson || johnsom<br />
|-<br />
| Sahara || Sergey Reshetnyak || sreshetnyak<br />
|-<br />
| Senlin || Yanyan Hu || Yanyanhu<br />
|-<br />
<br />
| Swift || || <br />
|-<br />
| TripleO || Ben Nemec || bnemec<br />
|-<br />
| Trove || Amrith Kumar || amrith<br />
|-<br />
| Zaqar || Flavio Percoco || flaper87<br />
|-<br />
|}<br />
<br />
== Release management ==<br />
<br />
The Release Management Liaison is responsible for communication with the Release Management team, attending the weekly 1:1 syncs in #openstack-relmgr-office, keeping milestone plans up to date, and signing off milestone and release tags. That task has been [[PTL_Guide#Interactions_with_the_Release_team|traditionally filled by the PTL]], but they may now delegate this task if they wish.<br />
<br />
* By default, the liaison will be the PTL.<br />
* The Release Management Liaison is considered a contributor to the Release Cycle Management Program and is therefore allowed to vote in the election of its PTL.<br />
* The liaison may further delegate work to other subject matter experts<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Barbican || Douglas Mendizábal || redrobot<br />
|-<br />
| Ceilometer || gordon chung || gordc<br />
|-<br />
| Cinder || Sean McGinnis || smcginnis<br />
|-<br />
| Congress || Tim Hinrichs || thinrichs<br />
|-<br />
| Designate || Graham Hayes || mugsie<br />
|-<br />
| Glance || Flavio Percoco || flaper87<br />
|-<br />
| Heat || Sergey Kraynev || skraynev<br />
|-<br />
| Horizon || David Lyle || david-lyle<br />
|-<br />
| Ironic || Jim Rollenhagen (Dmitry Tantsur for inspector deliverables) || jroll (dtantsur)<br />
|-<br />
| Keystone || Steve Martinelli || stevemar<br />
|-<br />
| Manila || Ben Swartzlander || bswartz<br />
|-<br />
| Mistral || Lingxian Kong || lane_kong<br />
|-<br />
| Murano ||Kirill Zaitsev || kzaitsev_<br />
|-<br />
| Neutron || Kyle Mestery || mestery<br />
|-<br />
| Nova || Sylvain Bauza || bauzas<br />
|-<br />
| Oslo || Davanum Srinivas || dims<br />
|-<br />
| Sahara || Sergey Lukjanov || SergeyLukjanov<br />
|-<br />
| Searchlight || Travis Tripp || TravT<br />
|-<br />
| Senlin || Qiming Teng || Qiming<br />
|-<br />
| Swift || John Dickinson || notmyname<br />
|-<br />
| Trove || Craig Vyvial or Nikhil Manchanda || cp16net or SlickNik<br />
|-<br />
| Zaqar || || <br />
|}<br />
<br />
== QA ==<br />
<br />
There are now more projects being tested by Tempest and Grenade, or deployable by Devstack, than we have QA contributors. That means we are going to need your help to keep on top of everything. We are asking for one person from each project to serve as a liaison between the project and QA, and to assist with integrating changes as we move forward.<br />
<br />
The liaison should be a core reviewer for the project, but does not need to be the PTL. The liaison should be prepared to assist with writing and reviewing patches that interact with their project, and with discussions of changes to the QA projects to make them easier to use within the project.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Barbican || Steve Heyman || hockeynut <br />
|-<br />
| Ceilometer || Chris Dent || cdent<br />
|-<br />
| Cinder || || <br />
|-<br />
| Glance || Nikhil Komawar || nikhil_k<br />
|-<br />
| Heat || Steve Baker || stevebaker<br />
|-<br />
| Horizon || || <br />
|-<br />
| Ironic || John Villalovos || jlvillal<br />
|-<br />
| Keystone || David Stanek || dstanek<br />
|-<br />
| Manila || Valeriy Ponomaryov || vponomaryov<br />
|-<br />
| Neutron || Salvatore Orlando || salv-orlando<br />
|-<br />
| Nova || Matt Riedemann || mriedem<br />
|-<br />
| Oslo || Davanum Srinivas || dims <br />
|-<br />
| Sahara || Luigi Toscano and Sergey Lukjanov || tosky and SergeyLukjanov<br />
|-<br />
| Senlin || Haiwei Xu || haiwei-xu<br />
|-<br />
| Swift || || <br />
|-<br />
| Trove || Craig Vyvial and Nirav Shah || cp16net and nshah<br />
|-<br />
| Zaqar || || <br />
|}<br />
<br />
== Documentation ==<br />
<br />
The OpenStack Documentation is centralized on docs.openstack.org, but there is often a need for specialty information when reviewing patches or triaging doc bugs. A doc liaison should be available to triage doc bugs when the docs team members don't know enough to triage accurately, and be added to doc reviews that affect your project. You'd be notified through email when you're added either to a doc bug or a doc review. We would also appreciate attendance at the [https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting weekly doc team meeting]; we meet in #openstack-meeting every Wednesday, at alternating times for different timezones:<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Barbican || Constanze Kratel || constanze <br />
|-<br />
| Ceilometer || Ildiko Vancsa || ildikov<br />
|-<br />
| Cinder || Mike Perez || thingee <br />
|-<br />
| Congress || || <br />
|-<br />
| Designate || || <br />
|-<br />
| Glance || Brian Rosmaita || rosmaita<br />
|-<br />
| Heat || Randall Burt || randallburt<br />
|-<br />
| Horizon || Rob Cresswell || robcresswell<br />
|-<br />
| Ironic || Mitsuhiro SHIGEMATSU || pshige<br />
|-<br />
| Keystone || Lance Bragstad || lbragstad<br />
|-<br />
| Magnum || || <br />
|-<br />
| Manila || || <br />
|-<br />
| Mistral || || <br />
|-<br />
| Murano || Ekaterina Chernova || katyafervent <br />
|-<br />
| Neutron || Edgar Magana || emagana <br />
|-<br />
| Nova || Joe Gordon or Michael Still || Jog0 or mikal<br />
|-<br />
| Oslo || Doug Hellmann || dhellmann<br />
|-<br />
| Rally || || <br />
|-<br />
| Sahara || Chad Roberts || crobertsrh<br />
|-<br />
| Senlin || Cindia Blue || lixinhui<br />
|-<br />
| Swift || || <br />
|-<br />
| Tripleo || || <br />
|-<br />
| Trove || Laurel Michaels || laurelm<br />
|-<br />
| Zaqar || || <br />
|}<br />
<br />
== Stable Branch ==<br />
<br />
The Stable Branch Liaison is responsible for making sure backports are proposed for critical issues in their project, and for making sure proposed backports<br />
are reviewed. They are also the contact point for stable branch release managers around point release times.<br />
<br />
* By default, the liaison will be the PTL.<br />
* The Stable Branch Liaison is considered a contributor to the Release Cycle Management Program and is therefore allowed to vote in the election of its PTL.<br />
* The liaison may further delegate work to other subject matter experts<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Ceilometer || Eoghan Glynn || eglynn<br />
|-<br />
| Cinder || Jay Bryant || jungleboyj<br />
|-<br />
| Glance || Erno Kuvaja || jokke_<br />
|-<br />
| Heat || Zane Bitter || zaneb<br />
|-<br />
| Horizon || Matthias Runge || mrunge <br />
|-<br />
| Ironic || || <br />
|-<br />
| Keystone || Dolph Mathews || dolphm<br />
|-<br />
| Murano || Kirill Zaitsev || kzaitsev_<br />
|-<br />
| Neutron || Ihar Hrachyshka || ihrachys<br />
|-<br />
| Nova || Matt Riedemann || mriedem <br />
|-<br />
| Sahara || Sergey Lukjanov || SergeyLukjanov<br />
|-<br />
| Senlin|| Qiming Teng || Qiming<br />
|-<br />
| Swift || || <br />
|-<br />
| Trove || Amrith Kumar || amrith<br />
|-<br />
|}<br />
<br />
== Vulnerability management ==<br />
<br />
The [[Vulnerability Management]] Team needs domain specialists to help assessing the impact of reported issues, coordinate the development of patches, review proposed patches and propose backports. The liaison should be familiar with the [[Vulnerability Management]] process and embargo rules, and have a good grasp of security issues in software design.<br />
<br />
* The liaison should be a core reviewer for the project, but does not need to be the PTL.<br />
* By default, the liaison will be the PTL.<br />
* The liaison is the first line of contact for the Vulnerability Management team members<br />
* The liaison is considered a contributor to the Release Cycle Management Program and is therefore allowed to vote in the election of its PTL<br />
* The liaison may further delegate work to other subject matter experts<br />
* The liaison maintains the members of the $PROJECT-coresec team in Launchpad (which can be given access to embargoed vulnerabilities)<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Barbican || Douglas Mendizábal or Charles Neill || redrobot / ccneill<br />
|-<br />
| Ceilometer || Lianhao Lu or Gordon Chung || llu/gordc <br />
|-<br />
| Cinder || || <br />
|-<br />
| Glance || Stuart McLaren or Nikhil Komawar || mclaren or nikhil_k <br />
|-<br />
| Heat || Steve Hardy || shardy<br />
|-<br />
| Horizon || Lin Hua Cheng || lhcheng <br />
|-<br />
| Ironic || Jim Rollenhagen || jroll<br />
|-<br />
| Keystone || Dolph Mathews || dolphm<br />
|-<br />
| Neutron || Salvatore Orlando || salv-orlando<br />
|-<br />
| Nova || Michael Still || mikal<br />
|-<br />
| Sahara || Michael McCune or Sergey Lukjanov || elmiko or SergeyLukjanov<br />
|-<br />
| Searchlight || Travis Tripp or Steve McLellan || TravT or sjmc7<br />
|-<br />
| Senlin || Qiming Teng || Qiming<br />
|-<br />
| Swift || || <br />
|-<br />
| Trove || Craig Vyvial or Nikhil Manchanda || cp16net or SlickNik <br />
|-<br />
|}<br />
<br />
== API Working Group ==<br />
<br />
The [[API_Working_Group|API Working Group]] seeks API subject matter experts for each project to communicate plans for API updates, review API guidelines with their project's view in mind, and review the API Working Group guidelines as they are drafted. The liaison should be familiar with the project's REST API design and future planning for changes to it.<br />
<br />
The members of the [http://specs.openstack.org/openstack/api-wg/liaisons.html API Working Group Cross-Project Liaisons] are maintained in our repo. If you want to read the entire list of CPLs or add/remove yourself from the list, you'll need to update the [http://git.openstack.org/cgit/openstack/api-wg/tree/doc/source/liaisons.json liaisons.json] file. If you don't want to make the update yourself, please ask in #openstack-sdks on IRC and someone can make the change for you.<br />
<br />
== Logging Working Group ==<br />
<br />
The [[LogWorkingGroup|Log Working Group]] seeks experts for each project to assist with making the logging in projects match the new [http://specs.openstack.org/openstack/openstack-specs/specs/log-guidelines.html Logging Guidelines]<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Glance || Erno Kuvaja || jokke_<br />
|-<br />
| Oslo || Doug Hellmann || dhellmann<br />
|-<br />
| Nova || John Garbutt || johnthetubaguy<br />
|-<br />
| Murano || Nikolay Starodubtsev || Nikolay_St<br />
|}<br />
<br />
== Infra ==<br />
<br />
These are the project-specific groups of people that Infra will look to for ACKs on changes to that project's test configuration. Changes to project-config and devstack-gate should be +1'd by these groups when they relate to their project. Note that in an emergency this may not always be possible and Infra will ask for forgiveness, but generally we should look for these +1s.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Glance || Flavio Percoco, Nikhil Komawar|| flaper87, nikhil_k<br />
|-<br />
| Neutron || Kyle Mestery, Armando Migliaccio, Doug Wiegley|| mestery, armax, dougwig<br />
|-<br />
| Documentation || Andreas Jaeger|| AJaeger<br />
|-<br />
| Trove || Nikhil Manchanda || SlickNik<br />
|}<br />
<br />
== Product Working Group ==<br />
The product working group consists of product managers, technologists, and operators from a diverse set of organizations. The group is working to aggregate user stories from the market-focused teams (Enterprise, Telco, etc.) and cross-project functional teams (e.g. logging, upgrades, etc.), partner with the development community on resourcing, and help gather data to generate a multi-release roadmap. Most of the user stories being tracked by this team consist of items that can span multiple releases and usually have cross-project dependencies.<br />
<br />
More information about the team can be found on the [[ProductTeam|Product WG wiki]].<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Ceilometer ||Krish Ragurham || <br />
|-<br />
| Cinder || Shamail Tahir || shamail<br />
|-<br />
| Glance || Nate Ziemann || nate_zman<br />
|-<br />
| Horizon || Carol Barrett || barrett1<br />
|-<br />
| Keystone || Sheena Gregson || <br />
|-<br />
| Kolla || Carol Barrett || barrett1<br />
|-<br />
| Magnum || Steve Gordon || sgordon<br />
|-<br />
| Manila || Pete Chadwick || <br />
|-<br />
| Neutron || Mike Cohen, Duane DeCapite || DuaneDeC7<br />
|-<br />
| Nova || Hugh Blemings || hughhalf <br />
|-<br />
| OSClient || Megan Rossetti || MeganR<br />
|-<br />
| Stable Release|| Rochelle Grober || rockyg<br />
|-<br />
| Senlin || Qiming Teng || Qiming<br />
|-<br />
| Swift || Phil Willains || philipw<br />
|-<br />
| Tempest || Arkady Kanevsky || arkady_kanevsky<br />
|-<br />
|}<br />
<br />
== Inter-project Liaisons ==<br />
<br />
In some cases, it is useful to have liaisons between projects. [http://lists.openstack.org/pipermail/openstack-dev/2015-April/062327.html For example, it is useful for the Nova and Neutron projects to have liaisons, because the projects have complex interactions and dependencies.] Ideally, a cross-project effort should have two members, one from each project, to facilitate communication and knowledge transfer.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Projects !! Name !! IRC Handle !! Role<br />
|-<br />
| Nova / Neutron || || ||<br />
|-<br />
| || Sean M. Collins || sc68cal || Neutron liaison for Nova<br />
|-<br />
| || Brent Eagles || beagles || Nova liaison for Neutron<br />
|-<br />
| Nova / Glance || || ||<br />
|-<br />
| || Flavio Percoco, Mike Fedosin || flaper87, mfedosin || Glance liaison for Nova<br />
|-<br />
| || Jay Pipes || jaypipes || Nova liaison for Glance<br />
|-<br />
| Nova / Cinder || || ||<br />
|-<br />
| || Scott DAngelo || scottda || Cinder liaison for Nova<br />
|-<br />
| || Matt Riedemann || mriedem || Nova liaison for Cinder<br />
|-<br />
| Nova / Ironic || || ||<br />
|-<br />
| || John Villalovos || jlvillal || Ironic liaison for Nova<br />
|-<br />
| || Michael Davies || mrda || Ironic liaison for Nova<br />
|-<br />
| Neutron / Ironic || || ||<br />
|-<br />
| || Sukhdev Kapur || sukhdev || Neutron liaison for Ironic<br />
|-<br />
| || Mitsuhiro SHIGEMATSU and Jim Rollenhagen || pshige and jroll || Ironic liaison for Neutron<br />
|-<br />
| Murano / Glance || || ||<br />
|-<br />
| || Alexander Tivelkov || ativelkov || Glance liaison for Murano, Murano liaison for Glance<br />
|-<br />
| Horizon / i18n || || ||<br />
|-<br />
| || Doug Fish || doug-fish || Horizon liaison for i18n<br />
|}<br />
<br />
=== Etherpads ===<br />
<br />
The following is a list of etherpads that are used for inter-project liaisons, and are continuously updated.<br />
<br />
Nova - Neutron: https://etherpad.openstack.org/p/nova-neutron<br />
<br />
== Cross-Project Spec Liaisons ==<br />
<br />
The OpenStack project relies on the cross-project spec liaisons from each participating project to help with coordination and cross-project spec related tasks. See the full set of [http://docs.openstack.org/project-team-guide/cross-project.html#cross-project-specification-liaisons responsibilities]. The liaison defaults to the PTL, but the PTL may delegate the responsibilities to someone else on the team by updating this table:<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Barbican || ||<br />
|-<br />
| Ceilometer || ||<br />
|-<br />
| Cinder || Kendall Nelson || diablo_rojo<br />
|-<br />
| Congress || ||<br />
|-<br />
| Designate || ||<br />
|-<br />
| Glance || Nikhil Komawar || nikhil<br />
|-<br />
| Heat || Rico Lin || ricolin<br />
|-<br />
| Horizon || ||<br />
|-<br />
| Infrastructure || Matthew Wagoner || olaph<br />
|-<br />
| Ironic || ||<br />
|-<br />
| Keystone || Samuel de Medeiros Queiroz || samueldmq<br />
|-<br />
| Manila || Ben Swartzlander || bswartz<br />
|-<br />
| Mistral || ||<br />
|-<br />
| Murano || ||<br />
|-<br />
| Neutron || Armando Migliaccio || armax<br />
|-<br />
| Nova || Chris Dent || cdent<br />
|-<br />
| Oslo || Davanum Srinivas || dims<br />
|-<br />
| Sahara || Michael McCune or Sergey Lukjanov || elmiko or SergeyLukjanov<br />
|-<br />
| Searchlight || Travis Tripp || TravT<br />
|-<br />
| Senlin || Qiming Teng || Qiming<br />
|-<br />
| Swift || ||<br />
|-<br />
| Trove || ||<br />
|-<br />
| Zaqar || || <br />
|}</div>Travis Tripphttps://wiki.openstack.org/w/index.php?title=CrossProjectLiaisons&diff=101802CrossProjectLiaisons2016-01-21T15:51:01Z<p>Travis Tripp: /* Cross-Project Spec Liaisons */</p>
<hr />
<div>Many of our cross-project teams need focused help communicating with the other project teams. This page lists the people who have volunteered for that work.<br />
<br />
== Oslo ==<br />
<br />
There are now more projects consuming code from the Oslo incubator than we have Oslo contributors. That means we are going to need your help to make these migrations happen. We are asking for one person from each project to serve as a liaison between the project and Oslo, and to assist with integrating changes as we move code out of the incubator into libraries.<br />
<br />
* The liaison should be active in the project and familiar with the project-specific requirements for having patches accepted, but does not need to be a core reviewer or the PTL.<br />
* The liaison should be prepared to assist with writing and reviewing patches in their project as libraries are adopted, and with discussions of API changes to the libraries to make them easier to use within the project.<br />
* Liaisons should pay attention to [Oslo] tagged messages on the openstack-dev mailing list.<br />
* It is also useful for liaisons to be able to attend the Oslo team meeting ([[Meetings/Oslo]]) to participate in discussions and raise issues for real-time discussion.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Barbican || Douglas Mendizábal || redrobot<br />
|-<br />
| Ceilometer || Julien Danjou || jd__<br />
|-<br />
| Cinder || Jay Bryant || jungleboyj<br />
|-<br />
| Congress || Tim Hinrichs || thinrichs<br />
|-<br />
| Cue || Min Pae || sputnik13<br />
|-<br />
| Designate || || <br />
|-<br />
| Glance || Flavio Percoco || flaper87<br />
|-<br />
| Heat || Thomas Herve || therve<br />
|-<br />
| Horizon || Kirill Zaitsev || kzaitsev_<br />
|-<br />
| Ironic || Lin Tan || lintan<br />
|-<br />
| Keystone || Brant Knudson || bknudson<br />
|-<br />
| Manila || Thomas Bechtold || toabctl<br />
|-<br />
| Murano || Kirill Zaitsev || kzaitsev_<br />
|-<br />
| Neutron || Ihar Hrachyshka || ihrachyshka<br />
|-<br />
| Nova || ChangBo Guo <glongwave@gmail.com> || gcb<br />
|-<br />
| [[Octavia]] || Michael Johnson || johnsom<br />
|-<br />
| Sahara || Sergey Reshetnyak || sreshetnyak<br />
|-<br />
| Senlin || Yanyan Hu || Yanyanhu<br />
|-<br />
| Swift || || <br />
|-<br />
| TripleO || Ben Nemec || bnemec<br />
|-<br />
| Trove || Amrith Kumar || amrith<br />
|-<br />
| Zaqar || Flavio Percoco || flaper87<br />
|-<br />
|}<br />
<br />
== Release management ==<br />
<br />
The Release Management Liaison is responsible for communication with the Release Management team, attending the weekly 1:1 syncs in #openstack-relmgr-office, keeping milestone plans up to date, and signing off on milestone and release tags. That task has been [[PTL_Guide#Interactions_with_the_Release_team|traditionally filled by the PTL]], but the PTL may now delegate it if they wish.<br />
<br />
* By default, the liaison will be the PTL.<br />
* The Release Management Liaison is considered a contributor to the Release Cycle Management Program and therefore is allowed to vote in the election of its PTL.<br />
* The liaison may further delegate work to other subject matter experts<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Barbican || Douglas Mendizábal || redrobot<br />
|-<br />
| Ceilometer || gordon chung || gordc<br />
|-<br />
| Cinder || Sean McGinnis || smcginnis<br />
|-<br />
| Congress || Tim Hinrichs || thinrichs<br />
|-<br />
| Designate || Graham Hayes || mugsie<br />
|-<br />
| Glance || Flavio Percoco || flaper87<br />
|-<br />
| Heat || Sergey Kraynev || skraynev<br />
|-<br />
| Horizon || David Lyle || david-lyle<br />
|-<br />
| Ironic || Jim Rollenhagen (Dmitry Tantsur for inspector deliverables) || jroll (dtantsur)<br />
|-<br />
| Keystone || Steve Martinelli || stevemar<br />
|-<br />
| Manila || Ben Swartzlander || bswartz<br />
|-<br />
| Mistral || Lingxian Kong || lane_kong<br />
|-<br />
| Murano ||Kirill Zaitsev || kzaitsev_<br />
|-<br />
| Neutron || Kyle Mestery || mestery<br />
|-<br />
| Nova || Sylvain Bauza || bauzas<br />
|-<br />
| Oslo || Davanum Srinivas || dims<br />
|-<br />
| Sahara || Sergey Lukjanov || SergeyLukjanov<br />
|-<br />
| Searchlight || Travis Tripp || TravT<br />
|-<br />
| Senlin || Qiming Teng || Qiming<br />
|-<br />
| Swift || John Dickinson || notmyname<br />
|-<br />
| Trove || Craig Vyvial or Nikhil Manchanda || cp16net or SlickNik<br />
|-<br />
| Zaqar || || <br />
|}<br />
<br />
== QA ==<br />
<br />
There are now more projects being tested by Tempest and Grenade, or deployable by Devstack, than we have QA contributors. That means we are going to need your help to keep on top of everything. We are asking for one person from each project to serve as a liaison between the project and QA, and to assist with integrating changes as we move forward.<br />
<br />
The liaison should be a core reviewer for the project, but does not need to be the PTL. The liaison should be prepared to assist with writing and reviewing patches that interact with their project, and with discussions of changes to the QA projects to make them easier to use within the project.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Barbican || Steve Heyman || hockeynut <br />
|-<br />
| Ceilometer || Chris Dent || cdent<br />
|-<br />
| Cinder || || <br />
|-<br />
| Glance || Nikhil Komawar || nikhil_k<br />
|-<br />
| Heat || Steve Baker || stevebaker<br />
|-<br />
| Horizon || || <br />
|-<br />
| Ironic || John Villalovos || jlvillal<br />
|-<br />
| Keystone || David Stanek || dstanek<br />
|-<br />
| Manila || Valeriy Ponomaryov || vponomaryov<br />
|-<br />
| Neutron || Salvatore Orlando || salv-orlando<br />
|-<br />
| Nova || Matt Riedemann || mriedem<br />
|-<br />
| Oslo || Davanum Srinivas || dims <br />
|-<br />
| Sahara || Luigi Toscano and Sergey Lukjanov || tosky and SergeyLukjanov<br />
|-<br />
| Senlin || Haiwei Xu || haiwei-xu<br />
|-<br />
| Swift || || <br />
|-<br />
| Trove || Craig Vyvial and Nirav Shah || cp16net and nshah<br />
|-<br />
| Zaqar || || <br />
|}<br />
<br />
== Documentation ==<br />
<br />
The OpenStack Documentation is centralized on docs.openstack.org, but there is often a need for specialty information when reviewing patches or triaging doc bugs. A doc liaison should be available to triage doc bugs when the docs team members don't know enough to triage accurately, and be added to doc reviews that affect your project. You'd be notified through email when you're added either to a doc bug or a doc review. We would also appreciate attendance at the [https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting weekly doc team meeting], held in #openstack-meeting every Wednesday at alternating times for different timezones.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Barbican || Constanze Kratel || constanze <br />
|-<br />
| Ceilometer || Ildiko Vancsa || ildikov<br />
|-<br />
| Cinder || Mike Perez || thingee <br />
|-<br />
| Congress || || <br />
|-<br />
| Designate || || <br />
|-<br />
| Glance || Brian Rosmaita || rosmaita<br />
|-<br />
| Heat || Randall Burt || randallburt<br />
|-<br />
| Horizon || Rob Cresswell || robcresswell<br />
|-<br />
| Ironic || Mitsuhiro SHIGEMATSU || pshige<br />
|-<br />
| Keystone || Lance Bragstad || lbragstad<br />
|-<br />
| Magnum || || <br />
|-<br />
| Manila || || <br />
|-<br />
| Mistral || || <br />
|-<br />
| Murano || Ekaterina Chernova || katyafervent <br />
|-<br />
| Neutron || Edgar Magana || emagana <br />
|-<br />
| Nova || Joe Gordon or Michael Still || Jog0 or mikal<br />
|-<br />
| Oslo || Doug Hellmann || dhellmann<br />
|-<br />
| Rally || || <br />
|-<br />
| Sahara || Chad Roberts || crobertsrh<br />
|-<br />
| Senlin || Cindia Blue || lixinhui<br />
|-<br />
| Swift || || <br />
|-<br />
| Tripleo || || <br />
|-<br />
| Trove || Laurel Michaels || laurelm<br />
|-<br />
| Zaqar || || <br />
|}<br />
<br />
== Stable Branch ==<br />
<br />
The Stable Branch Liaison is responsible for making sure backports are proposed for critical issues in their project, and for making sure proposed backports are reviewed. They are also the contact point for stable branch release managers around point release times.<br />
<br />
* By default, the liaison will be the PTL.<br />
* The Stable Branch Liaison is considered a contributor to the Release Cycle Management Program and therefore is allowed to vote in the election of its PTL.<br />
* The liaison may further delegate work to other subject matter experts<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Ceilometer || Eoghan Glynn || eglynn<br />
|-<br />
| Cinder || Jay Bryant || jungleboyj<br />
|-<br />
| Glance || Erno Kuvaja || jokke_<br />
|-<br />
| Heat || Zane Bitter || zaneb<br />
|-<br />
| Horizon || Matthias Runge || mrunge <br />
|-<br />
| Ironic || || <br />
|-<br />
| Keystone || Dolph Mathews || dolphm<br />
|-<br />
| Murano || Kirill Zaitsev || kzaitsev_<br />
|-<br />
| Neutron || Ihar Hrachyshka || ihrachys<br />
|-<br />
| Nova || Matt Riedemann || mriedem <br />
|-<br />
| Sahara || Sergey Lukjanov || SergeyLukjanov<br />
|-<br />
| Senlin|| Qiming Teng || Qiming<br />
|-<br />
| Swift || || <br />
|-<br />
| Trove || Amrith Kumar || amrith<br />
|-<br />
|}<br />
<br />
== Vulnerability management ==<br />
<br />
The [[Vulnerability Management]] Team needs domain specialists to help assess the impact of reported issues, coordinate the development of patches, review proposed patches, and propose backports. The liaison should be familiar with the [[Vulnerability Management]] process and embargo rules, and have a good grasp of security issues in software design.<br />
<br />
* The liaison should be a core reviewer for the project, but does not need to be the PTL.<br />
* By default, the liaison will be the PTL.<br />
* The liaison is the first line of contact for the Vulnerability Management team members<br />
* The liaison is considered a contributor to the Release Cycle Management Program and therefore is allowed to vote in the election of its PTL<br />
* The liaison may further delegate work to other subject matter experts<br />
* The liaison maintains the members of the $PROJECT-coresec team in Launchpad (which can be given access to embargoed vulnerabilities)<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Barbican || Douglas Mendizábal or Charles Neill || redrobot / ccneill<br />
|-<br />
| Ceilometer || Lianhao Lu or Gordon Chung || llu/gordc <br />
|-<br />
| Cinder || || <br />
|-<br />
| Glance || Stuart McLaren or Nikhil Komawar || mclaren or nikhil_k <br />
|-<br />
| Heat || Steve Hardy || shardy<br />
|-<br />
| Horizon || Lin Hua Cheng || lhcheng <br />
|-<br />
| Ironic || Jim Rollenhagen || jroll<br />
|-<br />
| Keystone || Dolph Mathews || dolphm<br />
|-<br />
| Neutron || Salvatore Orlando || salv-orlando<br />
|-<br />
| Nova || Michael Still || mikal<br />
|-<br />
| Sahara || Michael McCune or Sergey Lukjanov || elmiko or SergeyLukjanov<br />
|-<br />
| Senlin || Qiming Teng || Qiming<br />
|-<br />
| Swift || || <br />
|-<br />
| Trove || Craig Vyvial or Nikhil Manchanda || cp16net or SlickNik <br />
|-<br />
|}<br />
<br />
== API Working Group ==<br />
<br />
The [[API_Working_Group|API Working Group]] seeks API subject matter experts for each project to communicate plans for API updates, review API guidelines with their project's view in mind, and review the API Working Group guidelines as they are drafted. The liaison should be familiar with the project's REST API design and future planning for changes to it.<br />
<br />
The members of the [http://specs.openstack.org/openstack/api-wg/liaisons.html API Working Group Cross-Project Liaisons] are maintained in our repo. If you want to read the entire list of CPLs or add/remove yourself from the list, you'll need to update the [http://git.openstack.org/cgit/openstack/api-wg/tree/doc/source/liaisons.json liaisons.json] file. If you don't want to make the update yourself, please ask in #openstack-sdks on IRC and someone can make the change for you.<br />
<br />
== Logging Working Group ==<br />
<br />
The [[LogWorkingGroup|Log Working Group]] seeks experts for each project to assist with making the logging in projects match the new [http://specs.openstack.org/openstack/openstack-specs/specs/log-guidelines.html Logging Guidelines]<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Glance || Erno Kuvaja || jokke_<br />
|-<br />
| Oslo || Doug Hellmann || dhellmann<br />
|-<br />
| Nova || John Garbutt || johnthetubaguy<br />
|-<br />
| Murano || Nikolay Starodubtsev || Nikolay_St<br />
|}<br />
<br />
== Infra ==<br />
<br />
These are the project-specific groups of people that Infra will look to for ACKs on changes to that project's test configuration. Changes to project-config and devstack-gate should be +1'd by these groups when they relate to their project. Note that in an emergency this may not always be possible and Infra will ask for forgiveness, but generally we should look for these +1s.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Glance || Flavio Percoco, Nikhil Komawar|| flaper87, nikhil_k<br />
|-<br />
| Neutron || Kyle Mestery, Armando Migliaccio, Doug Wiegley|| mestery, armax, dougwig<br />
|-<br />
| Documentation || Andreas Jaeger|| AJaeger<br />
|-<br />
| Trove || Nikhil Manchanda || SlickNik<br />
|}<br />
<br />
== Product Working Group ==<br />
The product working group consists of product managers, technologists, and operators from a diverse set of organizations. The group is working to aggregate user stories from the market-focused teams (Enterprise, Telco, etc.) and cross-project functional teams (e.g. logging, upgrades, etc.), partner with the development community on resourcing, and help gather data to generate a multi-release roadmap. Most of the user stories being tracked by this team consist of items that can span multiple releases and usually have cross-project dependencies.<br />
<br />
More information about the team can be found on the [[ProductTeam|Product WG wiki]].<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Ceilometer ||Krish Ragurham || <br />
|-<br />
| Cinder || Shamail Tahir || shamail<br />
|-<br />
| Glance || Nate Ziemann || nate_zman<br />
|-<br />
| Horizon || Carol Barrett || barrett1<br />
|-<br />
| Keystone || Sheena Gregson || <br />
|-<br />
| Kolla || Carol Barrett || barrett1<br />
|-<br />
| Magnum || Steve Gordon || sgordon<br />
|-<br />
| Manila || Pete Chadwick || <br />
|-<br />
| Neutron || Mike Cohen, Duane DeCapite || DuaneDeC7<br />
|-<br />
| Nova || Hugh Blemings || hughhalf <br />
|-<br />
| OSClient || Megan Rossetti || MeganR<br />
|-<br />
| Stable Release|| Rochelle Grober || rockyg<br />
|-<br />
| Senlin || Qiming Teng || Qiming<br />
|-<br />
| Swift || Phil Willains || philipw<br />
|-<br />
| Tempest || Arkady Kanevsky || arkady_kanevsky<br />
|-<br />
|}<br />
<br />
== Inter-project Liaisons ==<br />
<br />
In some cases, it is useful to have liaisons between projects. [http://lists.openstack.org/pipermail/openstack-dev/2015-April/062327.html For example, it is useful for the Nova and Neutron projects to have liaisons, because the projects have complex interactions and dependencies.] Ideally, a cross-project effort should have two members, one from each project, to facilitate communication and knowledge transfer.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Projects !! Name !! IRC Handle !! Role<br />
|-<br />
| Nova / Neutron || || ||<br />
|-<br />
| || Sean M. Collins || sc68cal || Neutron liaison for Nova<br />
|-<br />
| || Brent Eagles || beagles || Nova liaison for Neutron<br />
|-<br />
| Nova / Glance || || ||<br />
|-<br />
| || Flavio Percoco, Mike Fedosin || flaper87, mfedosin || Glance liaison for Nova<br />
|-<br />
| || Jay Pipes || jaypipes || Nova liaison for Glance<br />
|-<br />
| Nova / Cinder || || ||<br />
|-<br />
| || Scott DAngelo || scottda || Cinder liaison for Nova<br />
|-<br />
| || Matt Riedemann || mriedem || Nova liaison for Cinder<br />
|-<br />
| Nova / Ironic || || ||<br />
|-<br />
| || John Villalovos || jlvillal || Ironic liaison for Nova<br />
|-<br />
| || Michael Davies || mrda || Ironic liaison for Nova<br />
|-<br />
| Neutron / Ironic || || ||<br />
|-<br />
| || Sukhdev Kapur || sukhdev || Neutron liaison for Ironic<br />
|-<br />
| || Mitsuhiro SHIGEMATSU and Jim Rollenhagen || pshige and jroll || Ironic liaison for Neutron<br />
|-<br />
| Murano / Glance || || ||<br />
|-<br />
| || Alexander Tivelkov || ativelkov || Glance liaison for Murano, Murano liaison for Glance<br />
|-<br />
| Horizon / i18n || || ||<br />
|-<br />
| || Doug Fish || doug-fish || Horizon liaison for i18n<br />
|}<br />
<br />
=== Etherpads ===<br />
<br />
The following is a list of etherpads that are used for inter-project liaisons, and are continuously updated.<br />
<br />
Nova - Neutron: https://etherpad.openstack.org/p/nova-neutron<br />
<br />
== Cross-Project Spec Liaisons ==<br />
<br />
The OpenStack project relies on the cross-project spec liaisons from each participating project to help with coordination and cross-project spec related tasks. See the full set of [http://docs.openstack.org/project-team-guide/cross-project.html#cross-project-specification-liaisons responsibilities]. The liaison defaults to the PTL, but the PTL may delegate the responsibilities to someone else on the team by updating this table:<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Barbican || ||<br />
|-<br />
| Ceilometer || ||<br />
|-<br />
| Cinder || Kendall Nelson || diablo_rojo<br />
|-<br />
| Congress || ||<br />
|-<br />
| Designate || ||<br />
|-<br />
| Glance || Nikhil Komawar || nikhil<br />
|-<br />
| Heat || Rico Lin || ricolin<br />
|-<br />
| Horizon || ||<br />
|-<br />
| Infrastructure || Matthew Wagoner || olaph<br />
|-<br />
| Ironic || ||<br />
|-<br />
| Keystone || Samuel de Medeiros Queiroz || samueldmq<br />
|-<br />
| Manila || Ben Swartzlander || bswartz<br />
|-<br />
| Mistral || ||<br />
|-<br />
| Murano || ||<br />
|-<br />
| Neutron || Armando Migliaccio || armax<br />
|-<br />
| Nova || Chris Dent || cdent<br />
|-<br />
| Oslo || Davanum Srinivas || dims<br />
|-<br />
| Sahara || Michael McCune or Sergey Lukjanov || elmiko or SergeyLukjanov<br />
|-<br />
| Searchlight || Travis Tripp || TravT<br />
|-<br />
| Senlin || Qiming Teng || Qiming<br />
|-<br />
| Swift || ||<br />
|-<br />
| Trove || ||<br />
|-<br />
| Zaqar || || <br />
|}</div>Travis Tripphttps://wiki.openstack.org/w/index.php?title=Graffiti&diff=101647Graffiti2016-01-19T19:38:55Z<p>Travis Tripp: /* Screencasts */</p>
<hr />
<div><br />
== What's in my cloud? ==<br />
<br />
I've got a lot of resources in my cloud.<br />
<br />
* How do I find what I need?<br />
* How do I describe what I have?<br />
<br />
At its most basic, Graffiti's intent is to enable better metadata collaboration across services and projects for OpenStack users. Its initial goal is to provide cross-service metadata "tagging" and search aggregation for cloud resources.<br />
<br />
== Current Status ==<br />
<br />
The Graffiti project was proposed at the Atlanta (Juno) OpenStack summit. Since then, many of the concepts have been adopted and implemented as part of multiple different OpenStack projects.<br />
<br />
* Glance Metadata Definition Catalog<br />
** http://docs.openstack.org/developer/glance/metadefs-concepts.html<br />
** https://github.com/openstack/glance/tree/master/etc/metadefs<br />
** https://youtu.be/zJpHXdBOoeM<br />
* Searchlight<br />
** http://launchpad.net/searchlight<br />
** https://wiki.openstack.org/wiki/Searchlight<br />
* Horizon features<br />
** An admin UI for managing the catalog<br />
*** (Admin —> Metadata Definitions) (Kilo)<br />
** A widget for associating metadata to different resources<br />
*** (Update Metadata action on each row item below)<br />
*** admin -> images (Juno)<br />
*** admin -> flavors (Kilo)<br />
*** admin —> Host Aggregates (Kilo)<br />
*** project —> images (Liberty)<br />
*** project —> instances (Mitaka)<br />
** The ability to add metadata at launch time<br />
*** project —> Launch Instance (ng launch instance enabled) (Mitaka)<br />
<br />
The following information provides much of the background information on where these concepts originated.<br />
<br />
== Overview ==<br />
<br />
A challenge we've experienced with using OpenStack is discovering, sharing, and correlating metadata across services and different types of resources. We believe this affects both end users and administrators. <br />
<br />
For end users, we feel that basic tasks like launching instances are too technical and require too much pre-existing knowledge of OpenStack concepts. For example, you should be able to just specify categories like "Big Data" or an "OS Family" and then let the system find the boot source for you, whether that is an image, snapshot, or volume. It should also allow finer-grained filtering, such as filtering on specific versions of software that you want.<br />
<br />
For administrators, we’d like there to be an easier way to meaningfully collaborate on properties across host aggregates, flavors, images, volumes, or other cloud resources. <br />
<br />
Various OpenStack services provide techniques to abstract low level resource selection to one level higher, such as flavors, volume types, or artifact types. These resource abstractions often allow "metadata" in terms of key-value pair properties to further specialize and describe instances of each resource type. However, collaborating on those properties can be a disconnected and difficult process. This often involves searching wikis and opening the source code. In addition, the metadata properties often need to be correlated across several different services. It becomes more difficult as a cloud's scale grows and the number of resources being managed increases.<br />
<br />
We, HP and Intel, believe that both of the above problems come back to needing a better way for users to collaborate on metadata across services and resource types. We started a project called Graffiti to explore ideas and concepts for how to make this easier and more approachable for end users. Please join us in moving forward together as a community!<br />
<br />
We believe that we can make some immediate improvements in Horizon, but that they can't be achieved through Horizon alone and that the benefits should extend to the API and CLI interactions as well. Better cross service collaboration and consistency on metadata should provide benefits that can be leveraged by other projects such as scheduling, reservation, orchestration, and policy enforcement.<br />
<br />
=== Terminology Note ===<br />
<br />
We think the term "metadata" is a somewhat unapproachable term, so we have been exploring the concept of a "capability". A capability can simply be thought of as a named "tag" which may or may not have properties. The idea is that a user can simply "tag" a capability onto various cloud resources such as images, volumes, host aggregates, flavors, and so on. The exact mechanism for how the data is stored is handled for the end user.<br />
<br />
=== Screencasts ===<br />
<br />
To help explain the ideas of the project, we have a quick screencast demonstrating the concepts running under POC code. Please take a look!<br />
<br />
* [https://www.youtube.com/watch?v=f0SZtPgcxk4 Concept Overview]<br />
* [https://youtu.be/zJpHXdBOoeM Availability as of the mitaka release in Horizon and Glance]<br />
<br />
=== Usage Concepts ===<br />
<br />
# Load your metadata definitions (sometimes called properties, tags, or capabilities)<br />
## Into the central metadata catalog <br />
# Update the resources in the cloud with your tags and capabilities<br />
# Let users find the resources with your desired tags and capabilities<br />
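The steps above can be sketched with plain dictionaries (hypothetical helper names; in practice the central catalog is the Glance metadata definitions API):<br />

```python
catalog = {}    # step 1 target: the central metadata catalog
resources = []  # the cloud resources users will search

def load_definition(key, allowed_values):
    """Step 1: load a metadata definition into the catalog."""
    catalog[key] = set(allowed_values)

def tag_resource(resource, key, value):
    """Step 2: apply a cataloged tag/capability to a resource."""
    if value not in catalog.get(key, set()):
        raise ValueError(f"{key}={value} is not a cataloged definition")
    resource.setdefault("metadata", {})[key] = value

def find_resources(key, value):
    """Step 3: let users find resources by tag/capability."""
    return [r for r in resources if r.get("metadata", {}).get(key) == value]

load_definition("os_family", ["linux", "windows"])
image = {"name": "ubuntu-20.04"}
resources.append(image)
tag_resource(image, "os_family", "linux")
assert find_resources("os_family", "linux") == [image]
```

Validating tags against the catalog at step 2 is what keeps the vocabulary consistent enough for step 3 to work.<br />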
<br />
== Design Concepts ==<br />
<br />
Additional architecture concepts can be found on the [[Graffiti/Architecture|Architecture]] page.<br />
<br />
=== Juno Summit Design Session ===<br />
<br />
POC Demo review:<br />
* https://www.youtube.com/watch?v=Dhrthnq1bnw<br />
<br />
* Session: http://sched.co/1m7wghx<br />
* Etherpad: https://etherpad.openstack.org/p/juno-summit-graffiti<br />
<br />
=== IRC ===<br />
<br />
The various features are maintained by teams in the following IRC channels on [http://freenode.net/ Freenode].<br />
<br />
#openstack-searchlight<br />
#openstack-horizon<br />
#openstack-glance<br />
<br />
=== Development ===<br />
* Open source under Apache 2.0<br />
* [https://github.com/stackforge/graffiti Graffiti POC API Service Source Repository] - No Longer Maintained (See Glance, Horizon, Searchlight)</div>Travis Tripphttps://wiki.openstack.org/w/index.php?title=CrossProjectLiaisons&diff=101262CrossProjectLiaisons2016-01-14T19:00:07Z<p>Travis Tripp: /* Release management */ added searchlight</p>
<hr />
<div>Many of our cross-project teams need focused help for communicating with the other project teams. This page lists the people who have volunteered for that work.<br />
<br />
== Oslo ==<br />
<br />
There are now more projects consuming code from the Oslo incubator than we have Oslo contributors. That means we are going to need your help to make these migrations happen. We are asking for one person from each project to serve as a liaison between the project and Oslo, and to assist with integrating changes as we move code out of the incubator into libraries.<br />
<br />
* The liaison should be active in the project and familiar with the project-specific requirements for having patches accepted, but does not need to be a core reviewer or the PTL.<br />
* The liaison should be prepared to assist with writing and reviewing patches in their project as libraries are adopted, and with discussions of API changes to the libraries to make them easier to use within the project.<br />
* Liaisons should pay attention to [Oslo] tagged messages on the openstack-dev mailing list.<br />
* It is also useful for liaisons to be able to attend the Oslo team meeting ([[Meetings/Oslo]]) to participate in discussions and raise issues for real-time discussion.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Barbican || Douglas Mendizábal || redrobot<br />
|-<br />
| Ceilometer || Julien Danjou || jd__<br />
|-<br />
| Cinder || Jay Bryant || jungleboyj<br />
|-<br />
| Congress || Tim Hinrichs || thinrichs<br />
|-<br />
| Cue || Min Pae || sputnik13<br />
|-<br />
| Designate || || <br />
|-<br />
| Glance || Flavio Percoco || flaper87<br />
|-<br />
| Heat || Thomas Herve || therve<br />
|-<br />
| Horizon || Kirill Zaitsev || kzaitsev_<br />
|-<br />
| Ironic || Lin Tan || lintan<br />
|-<br />
| Keystone || Brant Knudson || bknudson<br />
|-<br />
| Manila || Thomas Bechtold || toabctl<br />
|-<br />
| Murano || Kirill Zaitsev || kzaitsev_<br />
|-<br />
| Neutron || Ihar Hrachyshka || ihrachyshka<br />
|-<br />
| Nova || ChangBo Guo <glongwave@gmail.com> || gcb<br />
|-<br />
| [[Octavia]] || Michael Johnson || johnsom<br />
|-<br />
| Sahara || Sergey Reshetnyak || sreshetnyak<br />
|-<br />
| Senlin || Yanyan Hu || Yanyanhu<br />
|-<br />
| Swift || || <br />
|-<br />
| TripleO || Ben Nemec || bnemec<br />
|-<br />
| Trove || Amrith Kumar || amrith<br />
|-<br />
| Zaqar || Flavio Percoco || flaper87<br />
|-<br />
|}<br />
<br />
== Release management ==<br />
<br />
The Release Management Liaison is responsible for communication with the Release Management team, attending the weekly 1:1 syncs in #openstack-relmgr-office, keeping milestone plans up to date, and signing off milestone and release tags. That task has been [[PTL_Guide#Interactions_with_the_Release_team|traditionally filled by the PTL]], but they may now delegate this task if they wish.<br />
<br />
* By default, the liaison will be the PTL.<br />
* The Release Management Liaison is considered a contributor to the Release Cycle Management Program and is therefore allowed to vote in the election of its PTL.<br />
* The liaison may further delegate work to other subject matter experts.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Barbican || Douglas Mendizábal || redrobot<br />
|-<br />
| Ceilometer || gordon chung || gordc<br />
|-<br />
| Cinder || Sean McGinnis || smcginnis<br />
|-<br />
| Congress || Tim Hinrichs || thinrichs<br />
|-<br />
| Designate || Graham Hayes || mugsie<br />
|-<br />
| Glance || Flavio Percoco || flaper87<br />
|-<br />
| Heat || Sergey Kraynev || skraynev<br />
|-<br />
| Horizon || David Lyle || david-lyle<br />
|-<br />
| Ironic || Jim Rollenhagen (Dmitry Tantsur for inspector deliverables) || jroll (dtantsur)<br />
|-<br />
| Keystone || Steve Martinelli || stevemar<br />
|-<br />
| Manila || Ben Swartzlander || bswartz<br />
|-<br />
| Mistral || Lingxian Kong || lane_kong<br />
|-<br />
| Murano ||Kirill Zaitsev || kzaitsev_<br />
|-<br />
| Neutron || Kyle Mestery || mestery<br />
|-<br />
| Nova || Sylvain Bauza || bauzas<br />
|-<br />
| Oslo || Davanum Srinivas || dims<br />
|-<br />
| Sahara || Sergey Lukjanov || SergeyLukjanov<br />
|-<br />
| Searchlight || Travis Tripp || TravT<br />
|-<br />
| Senlin || Qiming Teng || Qiming<br />
|-<br />
| Swift || John Dickinson || notmyname<br />
|-<br />
| Trove || Craig Vyvial or Nikhil Manchanda || cp16net or SlickNik<br />
|-<br />
| Zaqar || || <br />
|}<br />
<br />
== QA ==<br />
<br />
There are now more projects being tested by Tempest and Grenade, or deployable by Devstack, than we have QA contributors. That means we are going to need your help to keep on top of everything. We are asking for one person from each project to serve as a liaison between the project and QA, and to assist with integrating changes as we move forward.<br />
<br />
The liaison should be a core reviewer for the project, but does not need to be the PTL. The liaison should be prepared to assist with writing and reviewing patches that interact with their project, and with discussions of changes to the QA projects to make them easier to use within the project.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Barbican || Steve Heyman || hockeynut <br />
|-<br />
| Ceilometer || Chris Dent || cdent<br />
|-<br />
| Cinder || || <br />
|-<br />
| Glance || Nikhil Komawar || nikhil_k<br />
|-<br />
| Heat || Steve Baker || stevebaker<br />
|-<br />
| Horizon || || <br />
|-<br />
| Ironic || John Villalovos || jlvillal<br />
|-<br />
| Keystone || David Stanek || dstanek<br />
|-<br />
| Manila || Valeriy Ponomaryov || vponomaryov<br />
|-<br />
| Neutron || Salvatore Orlando || salv-orlando<br />
|-<br />
| Nova || Matt Riedemann || mriedem<br />
|-<br />
| Oslo || Davanum Srinivas || dims <br />
|-<br />
| Sahara || Luigi Toscano and Sergey Lukjanov || tosky and SergeyLukjanov<br />
|-<br />
| Senlin || Haiwei Xu || haiwei-xu<br />
|-<br />
| Swift || || <br />
|-<br />
| Trove || Craig Vyvial and Nirav Shah || cp16net and nshah<br />
|-<br />
| Zaqar || || <br />
|}<br />
<br />
== Documentation ==<br />
<br />
The OpenStack Documentation is centralized on docs.openstack.org, but often there's a need for specialty information when reviewing patches or triaging doc bugs. A doc liaison should be available to triage doc bugs when the docs team members don't know enough to triage accurately, and be added to doc reviews that affect your project. You'd be notified through email when you're added either to a doc bug or a doc review. We also would appreciate attendance at the [https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting weekly doc team meeting]. We meet in #openstack-meeting every Wednesday, at alternating times for different timezones:<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Barbican || Constanze Kratel || constanze <br />
|-<br />
| Ceilometer || Ildiko Vancsa || ildikov<br />
|-<br />
| Cinder || Mike Perez || thingee <br />
|-<br />
| Congress || || <br />
|-<br />
| Designate || || <br />
|-<br />
| Glance || Brian Rosmaita || rosmaita<br />
|-<br />
| Heat || Randall Burt || randallburt<br />
|-<br />
| Horizon || Rob Cresswell || robcresswell<br />
|-<br />
| Ironic || Mitsuhiro SHIGEMATSU || pshige<br />
|-<br />
| Keystone || Lance Bragstad || lbragstad<br />
|-<br />
| Magnum || || <br />
|-<br />
| Manila || || <br />
|-<br />
| Mistral || || <br />
|-<br />
| Murano || Ekaterina Chernova || katyafervent <br />
|-<br />
| Neutron || Edgar Magana || emagana <br />
|-<br />
| Nova || Joe Gordon or Michael Still || Jog0 or mikal<br />
|-<br />
| Oslo || Doug Hellmann || dhellmann<br />
|-<br />
| Rally || || <br />
|-<br />
| Sahara || Chad Roberts || crobertsrh<br />
|-<br />
| Senlin || Cindia Blue || lixinhui<br />
|-<br />
| Swift || || <br />
|-<br />
| Tripleo || || <br />
|-<br />
| Trove || Laurel Michaels || laurelm<br />
|-<br />
| Zaqar || || <br />
|}<br />
<br />
== Stable Branch ==<br />
<br />
The Stable Branch Liaison is responsible for making sure backports are proposed for critical issues in their project, and for making sure proposed backports are reviewed. They are also the contact point for stable branch release managers around point release times.<br />
<br />
* By default, the liaison will be the PTL.<br />
* The Stable Branch Liaison is considered a contributor to the Release Cycle Management Program and is therefore allowed to vote in the election of its PTL.<br />
* The liaison may further delegate work to other subject matter experts.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Ceilometer || Eoghan Glynn || eglynn<br />
|-<br />
| Cinder || Jay Bryant || jungleboyj<br />
|-<br />
| Glance || Erno Kuvaja || jokke_<br />
|-<br />
| Heat || Zane Bitter || zaneb<br />
|-<br />
| Horizon || Matthias Runge || mrunge <br />
|-<br />
| Ironic || Adam Gandelman || adam_g<br />
|-<br />
| Keystone || Dolph Mathews || dolphm<br />
|-<br />
| Murano || Kirill Zaitsev || kzaitsev_<br />
|-<br />
| Neutron || Ihar Hrachyshka || ihrachys<br />
|-<br />
| Nova || Matt Riedemann || mriedem <br />
|-<br />
| Sahara || Sergey Lukjanov || SergeyLukjanov<br />
|-<br />
| Senlin|| Qiming Teng || Qiming<br />
|-<br />
| Swift || || <br />
|-<br />
| Trove || Amrith Kumar || amrith<br />
|-<br />
|}<br />
<br />
== Vulnerability management ==<br />
<br />
The [[Vulnerability Management]] Team needs domain specialists to help assess the impact of reported issues, coordinate the development of patches, review proposed patches, and propose backports. The liaison should be familiar with the [[Vulnerability Management]] process and embargo rules, and have a good grasp of security issues in software design.<br />
<br />
* The liaison should be a core reviewer for the project, but does not need to be the PTL.<br />
* By default, the liaison will be the PTL.<br />
* The liaison is the first line of contact for the Vulnerability Management team members<br />
* The liaison is considered a contributor to the Release Cycle Management Program and is therefore allowed to vote in the election of its PTL<br />
* The liaison may further delegate work to other subject matter experts<br />
* The liaison maintains the members of the $PROJECT-coresec team in Launchpad (which can be given access to embargoed vulnerabilities)<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Barbican || Douglas Mendizábal or Charles Neill || redrobot / ccneill<br />
|-<br />
| Ceilometer || Lianhao Lu or Gordon Chung || llu/gordc <br />
|-<br />
| Cinder || || <br />
|-<br />
| Glance || Stuart McLaren or Nikhil Komawar || mclaren or nikhil_k <br />
|-<br />
| Heat || Steve Hardy || shardy<br />
|-<br />
| Horizon || Lin Hua Cheng || lhcheng <br />
|-<br />
| Ironic || Jim Rollenhagen || jroll<br />
|-<br />
| Keystone || Dolph Mathews || dolphm<br />
|-<br />
| Neutron || Salvatore Orlando || salv-orlando<br />
|-<br />
| Nova || Michael Still || mikal<br />
|-<br />
| Sahara || Michael McCune or Sergey Lukjanov || elmiko or SergeyLukjanov<br />
|-<br />
| Senlin || Qiming Teng || Qiming<br />
|-<br />
| Swift || || <br />
|-<br />
| Trove || Craig Vyvial or Nikhil Manchanda || cp16net or SlickNik <br />
|-<br />
|}<br />
<br />
== API Working Group ==<br />
<br />
The [[API_Working_Group|API Working Group]] seeks API subject matter experts for each project to communicate plans for API updates, review API guidelines with their project's view in mind, and review the API Working Group guidelines as they are drafted. The liaison should be familiar with the project's REST API design and future planning for changes to it.<br />
<br />
The members of the [http://specs.openstack.org/openstack/api-wg/liaisons.html API Working Group Cross-Project Liaisons] are maintained in our repo. If you want to read the entire list of CPLs or add/remove yourself from the list, you'll need to update the [http://git.openstack.org/cgit/openstack/api-wg/tree/doc/source/liaisons.json liaisons.json] file. If you don't want to make the update yourself, please ask in #openstack-sdks on IRC and someone can make the change for you.<br />
<br />
== Logging Working Group ==<br />
<br />
The [[LogWorkingGroup|Log Working Group]] seeks experts for each project to assist with making the logging in projects match the new [http://specs.openstack.org/openstack/openstack-specs/specs/log-guidelines.html Logging Guidelines]<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Glance || Erno Kuvaja || jokke_<br />
|-<br />
| Oslo || Doug Hellmann || dhellmann<br />
|-<br />
| Nova || John Garbutt || johnthetubaguy<br />
|-<br />
| Murano || Nikolay Starodubtsev || Nikolay_St<br />
|}<br />
<br />
== Infra ==<br />
<br />
These are the project specific groups of people that Infra will look to ACK changes to that project's test configuration. Changes to project-config and devstack-gate should be +1'd by these groups when they are related to their project. Note that in an emergency this may not always be possible and Infra will ask for forgiveness but generally we should look for these +1s.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Glance || Flavio Percoco, Nikhil Komawar|| flaper87, nikhil_k<br />
|-<br />
| Neutron || Kyle Mestery, Armando Migliaccio, Doug Wiegley|| mestery, armax, dougwig<br />
|-<br />
| Documentation || Andreas Jaeger|| AJaeger<br />
|-<br />
| Trove || Nikhil Manchanda || SlickNik<br />
|}<br />
<br />
== Product Working Group ==<br />
The product working group consists of product managers, technologists, and operators from a diverse set of organizations. The group is working to aggregate user stories from the market-focused teams (Enterprise, Telco, etc.) and cross-project functional teams (e.g. logging and upgrades), partner with the development community on resourcing, and help gather data to generate a multi-release roadmap. Most of the user stories being tracked by this team consist of items that can span multiple releases and usually have cross-project dependencies. <br />
<br />
More information about the team can be found on the [[ProductTeam|Product WG wiki]].<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Ceilometer ||Krish Ragurham || <br />
|-<br />
| Cinder || Shamail Tahir || shamail<br />
|-<br />
| Glance || Nate Ziemann || nate_zman<br />
|-<br />
| Horizon || Carol Barrett || barrett1<br />
|-<br />
| Keystone || Sheena Gregson || <br />
|-<br />
| Kolla || Carol Barrett || barrett1<br />
|-<br />
| Magnum || Steve Gordon || sgordon<br />
|-<br />
| Manila || Pete Chadwick || <br />
|-<br />
| Neutron || Mike Cohen, Duane DeCapite || DuaneDeC7<br />
|-<br />
| Nova || Hugh Blemings || hughhalf <br />
|-<br />
| OSClient || Megan Rossetti || MeganR<br />
|-<br />
| Stable Release|| Rochelle Grober || rockyg<br />
|-<br />
| Senlin || Qiming Teng || Qiming<br />
|-<br />
| Swift || Phil Willains || philipw<br />
|-<br />
| Tempest || Arkady Kanevsky || arkady_kanevsky<br />
|-<br />
|}<br />
<br />
== Inter-project Liaisons ==<br />
<br />
In some cases, it is useful to have liaisons between projects. [http://lists.openstack.org/pipermail/openstack-dev/2015-April/062327.html For example, it is useful for the Nova and Neutron projects to have liaisons, because the projects have complex interactions and dependencies.] Ideally, a cross-project effort should have two members, one from each project, to facilitate communication and knowledge transfer.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Projects !! Name !! IRC Handle !! Role<br />
|-<br />
| Nova / Neutron || || ||<br />
|-<br />
| || Sean M. Collins || sc68cal || Neutron liaison for Nova<br />
|-<br />
| || Brent Eagles || beagles || Nova liaison for Neutron<br />
|-<br />
| Nova / Glance || || ||<br />
|-<br />
| || Flavio Percoco, Mike Fedosin || flaper87, mfedosin || Glance liaison for Nova<br />
|-<br />
| || Jay Pipes || jaypipes || Nova liaison for Glance<br />
|-<br />
| Nova / Cinder || || ||<br />
|-<br />
| || Scott DAngelo || scottda || Cinder liaison for Nova<br />
|-<br />
| || Matt Riedemann || mriedem || Nova liaison for Cinder<br />
|-<br />
| Nova / Ironic || || ||<br />
|-<br />
| || John Villalovos || jlvillal || Ironic liaison for Nova<br />
|-<br />
| || Michael Davies || mrda || Ironic liaison for Nova<br />
|-<br />
| Neutron / Ironic || || ||<br />
|-<br />
| || Sukhdev Kapur || sukhdev || Neutron liaison for Ironic<br />
|-<br />
| || Mitsuhiro SHIGEMATSU and Jim Rollenhagen || pshige and jroll || Ironic liaison for Neutron<br />
|-<br />
| Murano / Glance || || ||<br />
|-<br />
| || Alexander Tivelkov || ativelkov || Glance liaison for Murano, Murano liaison for Glance<br />
|-<br />
| Horizon / i18n || || ||<br />
|-<br />
| || Doug Fish || doug-fish || Horizon liaison for i18n<br />
|}<br />
<br />
=== Etherpads ===<br />
<br />
The following is a list of etherpads that are used for inter-project liaisons, and are continuously updated.<br />
<br />
Nova - Neutron: https://etherpad.openstack.org/p/nova-neutron</div>Travis Tripphttps://wiki.openstack.org/w/index.php?title=Graffiti/Architecture&diff=101107Graffiti/Architecture2016-01-13T17:36:02Z<p>Travis Tripp: /* Graffiti Architecture Concepts */</p>
<hr />
<div><br />
== Graffiti Architecture Concepts ==<br />
<br />
At its most basic concept, Graffiti's intent is to enable better metadata collaboration across services and projects for OpenStack users. Graffiti has the initial intent of providing cross service metadata "tagging" and search aggregation for cloud resources.<br />
<br />
== Status ==<br />
<br />
The Graffiti project was proposed at the Atlanta (Juno) OpenStack summit. Since then, many of the concepts have been adopted and implemented as part of multiple OpenStack projects (Glance, Searchlight, Horizon). The following provides legacy overview information to help understand how the various components come together. For more info, see https://wiki.openstack.org/wiki/Graffiti#Current_Status<br />
<br />
==== Workflow and Components ====<br />
<br />
# Load your metadata definitions (called property types or capability types)<br />
## Into the Graffiti central dictionary <br />
## Or configure Graffiti plugins to include existing definitions provided by the various services<br />
# "Tag" the resources in the cloud with your properties and capabilities<br />
# Let users find the resources with your desired properties and capabilities<br />
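The three-step workflow above can be sketched in miniature. All class and method names below are illustrative assumptions for this page, not a documented Graffiti API:<br />

```python
# Illustrative sketch of the workflow: load definitions, tag resources,
# then let users search by capability. Names are hypothetical.
class MetadataCatalog:
    """Minimal in-memory stand-in for the central dictionary and directory."""

    def __init__(self):
        self.capability_types = {}   # capability name -> property schema
        self.resource_tags = {}      # (resource_type, resource_id) -> {name: props}

    def load_definition(self, name, properties):
        """Step 1: load a metadata definition (a capability type)."""
        self.capability_types[name] = properties

    def tag_resource(self, resource_type, resource_id, capability, **props):
        """Step 2: 'tag' a cloud resource with a capability."""
        schema = self.capability_types[capability]
        unknown = set(props) - set(schema)
        if unknown:
            raise ValueError(f"properties not in definition: {unknown}")
        self.resource_tags.setdefault((resource_type, resource_id), {})[capability] = props

    def find_resources(self, resource_type, capability):
        """Step 3: find resources carrying a desired capability."""
        return [rid for (rtype, rid), caps in self.resource_tags.items()
                if rtype == resource_type and capability in caps]

catalog = MetadataCatalog()
catalog.load_definition("SSD", {"min_iops": "integer"})
catalog.tag_resource("OS::Cinder::Volume", "vol-1", "SSD", min_iops=20000)
print(catalog.find_resources("OS::Cinder::Volume", "SSD"))  # ['vol-1']
```

In a real deployment the dictionary and directory would be service APIs rather than one in-process object, but the interaction shape is the same.<br />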
<br />
<br />
[[File:Graffiti-Architecture-ConceptOverlay-Simple.png]]<br />
<br />
=== Base Concepts ===<br />
<br />
* Various OpenStack services provide techniques to abstract low level resource selection to one level higher, such as flavors, volume types, or artifact types. These resource abstractions often allow "metadata" in terms of key-value pair properties to further specialize and describe instances of each resource type. However, collaborating on those properties is largely a disconnected and difficult process. This often involves searching wikis and opening the source code. It becomes more difficult as a cloud's scale grows. In addition, many times the properties can apply to resources from several different services. Graffiti makes this easier by creating the following concepts: <br />
** ''[[Graffiti/Dictionary#Capability_Types|Capabilities and Requirements]]'': The Graffiti concepts have embraced the idea that cloud resources may be described using the notion of capabilities, a concept influenced by some parts of OpenStack today as well as by industry specifications like OASIS TOSCA. (Please note that Graffiti is NOT an orchestration engine; it only assists in describing and locating existing resources in the cloud.)<br />
** ''[[Graffiti/Dictionary|Dictionary]]'': A common API for services, admins, and users to discover and share their metadata vocabulary. This is the basis for creating an agreement on how to describe the various capabilities the cloud provides. It allows for a consistent UI and CLI experience for describing and finding resources. <br />
** ''[[Graffiti/Directory|Resource Directory]]'': A common API to "tag" and search across existing and new services for cloud content based on the dictionary (metadata definitions). <br />
** ''Resource Capability Registry'': A persistent shared repository for services to publish information about cloud resources. This can optionally be used by services instead of or in addition to having their own local native storage to describe resources.<br />
<br />
== Use Case Example: Compute Capabilities ==<br />
In Summary: <br />
The Graffiti concepts provide cross service and cross environment:<br />
* metadata definition aggregation and administration<br />
* resource metadata "tagging" aggregation<br />
* resource metadata search aggregation<br />
<br />
<br />
[[File:Graffiti-ComputeCapability-Flow-Overview.png]]<br />
<br />
== Proposed Horizon Concepts ==<br />
<br />
We believe that the [[Graffiti]] concepts can be fulfilled in Horizon with reusable widgets that we can plug into Horizon as well as changes to screens like the launch instance wizard. The widgets will provide the ability to "tag" capabilities and TBD: requirements on various resources. They will also be able to generate filter queries based on resource capabilities and properties.<br />
<br />
==== Related blueprints: ====<br />
* https://blueprints.launchpad.net/horizon/+spec/instance-launch-using-capability-filtering<br />
* https://blueprints.launchpad.net/horizon/+spec/faceted-search<br />
* https://blueprints.launchpad.net/horizon/+spec/tagging<br />
* https://blueprints.launchpad.net/horizon/+spec/host-aggregate-update-metadata<br />
<br />
==== Terminology Note ====<br />
<br />
We think the term "metadata" is a somewhat unapproachable term, so we have been exploring the concept of a "capability". A capability can simply be thought of as a named "tag" which may or may not have properties. The idea is that a user can simply "tag" a capability onto various cloud resources such as images, volumes, host aggregates, flavors, and so on. To the end user, the exact mechanism for how the data is stored is handled for them. Some resource types may not support capabilities / tags that have properties.<br />
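As a sketch of that idea, a capability can be modeled as a name plus an optional property bag; the field names here are assumptions for illustration only, not a fixed schema:<br />

```python
from dataclasses import dataclass, field

# A capability as described above: a named "tag" that may or may not
# carry properties. Field names are illustrative assumptions.
@dataclass
class Capability:
    name: str
    properties: dict = field(default_factory=dict)

# A bare tag and a property-carrying capability share one model:
big_data = Capability("Big Data")
mysql = Capability("MySQL", {"version": "5.6"})
print(mysql.properties["version"])  # 5.6
```

A resource type that does not support properties would simply ignore (or reject) the property bag.<br />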
<br />
==== Concept Screencasts ====<br />
<br />
To explore and explain the ideas, HP and Intel have created a screencast showing the concepts running under POC code. The styling is only representative of the point in time at which the demo was recorded and has since changed.<br />
<br />
* [https://www.youtube.com/watch?v=f0SZtPgcxk4 Screencast - Concept Overview]<br />
<br />
==== Concept Flow Mockup ====<br />
<br />
The basic proposed flow is that we will be able to add a widget on any resource management screen where we want to be able to "tag" capabilities. For example, the images, volumes, flavors, and host aggregate screens are all good candidates. The goal is that the only customization required will be for the code using the widget to send in information about the resource / resource type being tagged. The resource type is sent to the API, which then returns the capabilities applicable to that type of resource.<br />
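A minimal sketch of that lookup, with hypothetical definition data and resource type names:<br />

```python
# Hypothetical definition data; "applies_to" lists the resource types a
# capability is associated with. All names are illustrative only.
DEFINITIONS = [
    {"name": "CPU Pinning", "applies_to": ["OS::Nova::Flavor", "OS::Glance::Image"]},
    {"name": "Replication", "applies_to": ["OS::Cinder::Volume"]},
]

def capabilities_for(resource_type):
    """Return the capability names applicable to one resource type."""
    return [d["name"] for d in DEFINITIONS if resource_type in d["applies_to"]]

print(capabilities_for("OS::Nova::Flavor"))  # ['CPU Pinning']
```

The widget itself stays generic; only the `resource_type` it passes in changes per screen.<br />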
<br />
==== Launch Instance Example ==== <br />
<br />
Note: Tagging other resource types and searching for them could work similarly.<br />
<br />
[[File:Graffit-Tag-Then-Use-Concept.png|center|Widget Screenshots from screencasts]]<br />
<br />
==== Style Mockups ====<br />
<br />
We have been playing with various style mockups, but aren't sure what makes sense or would be acceptable. The traditional look and feel in Horizon can be achieved, but we also aren't sure that Horizon today has a good example for handling tree browsing. The following are some of the mockups we've created.<br />
<br />
[[File:Graffiti-capabilities-widget-mockups.png|thumbnail|center|Graffiti Concept Mockups]]<br />
<br />
<br />
<br />
=== Proposed Horizon Component Architecture ===<br />
<br />
We would like there to be a common way in Horizon to support "tagging" simple named tags and key-value pairs that also supports the overall [[Graffiti]] concepts. In the proposed architecture, Horizon gains the value of the Graffiti concepts through a thin API plugin layer directly in Horizon, without the full "Dictionary" and "Resource Directory" APIs in the deployed environment. This provides benefits to Horizon now, without requiring a new Graffiti service to either be incubated or be adopted into other projects (a point on which we are actively seeking input and advice). The widgets will be built to work with a common, simple "resource syntax" that the external service API would provide.<br />
<br />
The entire concept can be run in a lightweight way through a thin filesystem provider on the Horizon server that allows reading dictionary definition files directly from the filesystem or from services that already provide schemas or tags. This would suffice for single-node deployments or deployments managed through a configuration management provider to ensure consistency of the definitions across Horizon nodes.<br />
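A thin filesystem provider along those lines might look like the following sketch; the default path and one-namespace-per-file layout are assumptions for illustration, not the actual Horizon implementation.<br />

```python
import json
from pathlib import Path

# Sketch of the thin filesystem provider described above: read metadata
# definition files straight from a directory on the Horizon server.
# The default directory path and JSON file layout are assumptions.
def load_definitions(directory="/etc/horizon/metadefs"):
    definitions = {}
    for path in sorted(Path(directory).glob("*.json")):
        data = json.loads(path.read_text())
        definitions[data["namespace"]] = data
    return definitions
```

A configuration management tool could then keep that directory identical across Horizon nodes, as the paragraph above suggests.<br />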
<br />
If a full "Dictionary" / "Resource Directory" service API were available, the widgets wouldn't have to change even as new resource types and metadata definitions are added to the system. They would still go to the Horizon Graffiti component, which would add a plugin to talk to the appropriate central "Dictionary" / "Resource Directory" service endpoint(s), providing the [[Graffiti/Architecture#Graffiti_API_Benefits|full benefits]].<br />
<br />
===== Limits of a Horizon Only Solution =====<br />
<br />
The widgets and concepts can be partially built in Horizon as stated above and diagrammed below without changes to existing services. However, there are a number of limitations that require some external service work as well.<br />
<br />
# Horizon is a stateless server by design at this point. The only place any persistent data can exist is if you choose to store session information on the server in a database. The default setup for Horizon now uses signed cookies to maintain session data and avoids a DB requirement.<br />
# There is no privileged account running on the Horizon server, and thus no way to build a persistent datastore that only the admin can access. A persistent privileged session like this would create many security issues.<br />
# Horizon can be set up in an HA manner, which would require either duplicate DB on multiple Horizon servers or another server dedicated to the DB backend for Horizon.<br />
# The original scope discussed is only part of the picture; when the scope grows beyond the launch use case, it grows beyond usefulness for just Horizon. Isolating it in Horizon is limiting.<br />
<br />
[[File:Graffiti-Widgets.png]]<br />
<br />
== Additional Details ==<br />
<br />
The below provides an overview of the metadata aggregation, resource search optimization, and local resource registry concepts.<br />
<br />
<br />
[[File:Graffiti-Architecture-ConceptOverlay.png]]<br />
<br />
=== Graffiti API Benefits ===<br />
<br />
When we first looked at a UI-only solution, we found that it can be done to a certain extent, [[Graffiti/Architecture#Limits_of_a_Horizon_Only_Solution|with limitations]]. However, if a new service is integrated or built into the ecosystem, the following additional benefits become available:<br />
* Command line and REST API for cross service searching<br />
* Ability to import / export definitions across deployments<br />
* Common persistence DB for definitions in multi-node / HA deployments<br />
* Private tag / metadata libraries. Users / projects will be able to have their own vocabulary for "tagging" resources<br />
* Authoring - We will provide an authoring and administration UI for creating and managing namespaces, capability types, etc.<br />
* Resource search performance optimizations. We would like to introduce a high-performance indexing mechanism that crosses service boundaries.<br />
<br />
==== Resource Search Optimization ====<br />
<br />
This has not been explored in depth, but we do have a few ideas:<br />
* Lazy loading. A simple pre-fetch mechanism: on a call to initiate a session, or on the first request for a resource type, data is pulled into memory and held for a limited time. Subsequent searches are all done in memory. RBAC is handled via token pass-through.<br />
* Eager loading. The base idea is that a cache provider plugin can be added under the API. Resources that are indexable (those whose owning service supports notifications) would then be indexed via a combination of startup seeding and service resource event notifications. For example, Glance supports sending notifications on certain image changes. The index itself could be based on Elasticsearch, and the plugin would translate queries in and out of Elasticsearch. One issue with this approach today is that it may be limited to admin only due to limited RBAC visibility.</div>Travis Tripphttps://wiki.openstack.org/w/index.php?title=Graffiti&diff=101106Graffiti2016-01-13T17:31:59Z<p>Travis Tripp: /* Usage Concepts */</p>
<hr />
<div><br />
== What's in my cloud? ==<br />
<br />
I've got a lot of resources in my cloud.<br />
<br />
* How do I find what I need?<br />
* How do I describe what I have?<br />
<br />
At its most basic concept, Graffiti's intent is to enable better metadata collaboration across services and projects for OpenStack users. Graffiti has the initial intent of providing cross service metadata "tagging" and search aggregation for cloud resources.<br />
<br />
== Current Status ==<br />
<br />
The Graffiti project was proposed at the Atlanta (Juno) OpenStack summit. Since then, many of the concepts have been adopted and implemented as part of multiple OpenStack projects.<br />
<br />
* Glance Metadata Definition Catalog<br />
** http://docs.openstack.org/developer/glance/metadefs-concepts.html<br />
** https://github.com/openstack/glance/tree/master/etc/metadefs<br />
* Searchlight<br />
** http://launchpad.net/searchlight<br />
** https://wiki.openstack.org/wiki/Searchlight<br />
* Horizon features<br />
** An admin UI for managing the catalog<br />
*** (Admin —> Metadata Definitions) (Kilo)<br />
** A widget for associating metadata to different resources<br />
*** (Update Metadata action on each row item below)<br />
*** admin -> images (Juno)<br />
*** admin -> flavors (Kilo)<br />
*** admin —> Host Aggregates (Kilo)<br />
*** project —> images (Liberty)<br />
*** project —> instances (Mitaka)<br />
** The ability to add metadata at launch time<br />
*** project —> Launch Instance (ng launch instance enabled) (Mitaka)<br />
<br />
The following information provides much of the background information on where these concepts originated.<br />
<br />
== Overview ==<br />
<br />
A challenge we've experienced with using OpenStack is discovering, sharing, and correlating metadata across services and different types of resources. We believe this affects both end users and administrators. <br />
<br />
For end users, we feel that basic tasks like launching instances are too technical and require too much pre-existing knowledge of OpenStack concepts. For example, you should be able to just specify categories like "Big Data" or an "OS Family" and then let the system find the boot source for you, whether that is an image, snapshot, or volume. It should also allow finer-grained filtering, like filtering on specific versions of software that you want.<br />
<br />
For administrators, we’d like there to be an easier way to meaningfully collaborate on properties across host aggregates, flavors, images, volumes, or other cloud resources. <br />
<br />
Various OpenStack services provide techniques to abstract low level resource selection to one level higher, such as flavors, volume types, or artifact types. These resource abstractions often allow "metadata" in terms of key-value pair properties to further specialize and describe instances of each resource type. However, collaborating on those properties can be a disconnected and difficult process. This often involves searching wikis and opening the source code. In addition, the metadata properties often need to be correlated across several different services. It becomes more difficult as a cloud's scale grows and the number of resources being managed increases.<br />
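As an illustration, such key-value specialization could be described once and applied to resources owned by different services. The sketch below is loosely modeled on the Glance metadata definitions (metadefs) JSON format linked earlier on this page, abbreviated and with illustrative values; see the metadefs docs for the real schema.<br />

```python
# Loosely modeled on the Glance metadefs JSON format; abbreviated and
# illustrative only -- not the full real schema.
namespace = {
    "namespace": "OS::Compute::ExampleQuota",   # hypothetical namespace
    "display_name": "Example Quota",
    "resource_type_associations": [             # which services/resources it applies to
        {"name": "OS::Nova::Flavor"},
        {"name": "OS::Glance::Image"},
    ],
    "properties": {
        "quota:cpu_shares": {"type": "integer", "minimum": 0},
    },
}

# The same definition can then describe resources owned by different
# services, e.g. as a flavor extra spec or an image property:
flavor_extra_specs = {"quota:cpu_shares": 2048}
assert set(flavor_extra_specs) <= set(namespace["properties"])
```

One shared definition like this is what removes the need to search wikis and source code for the right property key.<br />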
<br />
We, HP and Intel, believe that both of the above problems come back to needing a better way for users to collaborate on metadata across services and resource types. We started a project called Graffiti to explore ideas and concepts for how to make this easier and more approachable for end users. Please join with us to help move forward together as a community!<br />
<br />
We believe that we can make some immediate improvements in Horizon, but that they can't be achieved through Horizon alone and that the benefits should extend to the API and CLI interactions as well. Better cross service collaboration and consistency on metadata should provide benefits that can be leveraged by other projects such as scheduling, reservation, orchestration, and policy enforcement.<br />
<br />
=== Terminology Note ===<br />
<br />
We think the term "metadata" is a somewhat unapproachable term, so we have been exploring the concept of a "capability". A capability can simply be thought of as a named "tag" which may or may not have properties. The idea is that a user can simply "tag" a capability onto various cloud resources such as images, volumes, host aggregates, flavors, and so on. To the end user, the exact mechanism for how the data is stored is handled for them.<br />
<br />
=== Screencasts ===<br />
<br />
To help explain the ideas of the project, we have a quick screencast demonstrating the concepts running under POC code. Please take a look!<br />
<br />
* [https://www.youtube.com/watch?v=f0SZtPgcxk4 Concept Overview]<br />
<br />
=== Usage Concepts ===<br />
<br />
# Load your metadata definitions (sometimes called properties, tags, or capabilities)<br />
## Into the central metadata catalog <br />
# Update the resources in the cloud with your tags and capabilities<br />
# Let users find the resources with your desired tags and capabilities<br />
<br />
== Design Concepts ==<br />
<br />
Additional architecture concepts on the [[Graffiti/Architecture|Architecture]] page.<br />
<br />
=== Juno Summit Design Session ===<br />
<br />
POC Demo review:<br />
* https://www.youtube.com/watch?v=Dhrthnq1bnw<br />
<br />
http://sched.co/1m7wghx<br />
* Etherpad: https://etherpad.openstack.org/p/juno-summit-graffiti<br />
<br />
=== IRC ===<br />
<br />
The various features are maintained by teams in the following IRC channels on [http://freenode.net/ Freenode].<br />
<br />
#openstack-searchlight<br />
#openstack-horizon<br />
#openstack-glance<br />
<br />
=== Development ===<br />
* Open source under Apache 2.0<br />
* [https://github.com/stackforge/graffiti Graffiti POC API Service Source Repository] - No Longer Maintained (See Glance, Horizon, Searchlight)</div>Travis Tripphttps://wiki.openstack.org/w/index.php?title=Graffiti&diff=101105Graffiti2016-01-13T17:30:48Z<p>Travis Tripp: </p>
<hr />
<div><br />
== What's in my cloud? ==<br />
<br />
I've got a lot of resources in my cloud.<br />
<br />
* How do I find what I need?<br />
* How do I describe what I have?<br />
<br />
At its most basic concept, Graffiti's intent is to enable better metadata collaboration across services and projects for OpenStack users. Graffiti has the initial intent of providing cross service metadata "tagging" and search aggregation for cloud resources.<br />
<br />
== Current Status ==<br />
<br />
The Graffiti project was proposed at the Atlanta (Juno) OpenStack summit. Since then, many of the concepts have been adopted and implemented as part of multiple OpenStack projects.<br />
<br />
* Glance Metadata Definition Catalog<br />
** http://docs.openstack.org/developer/glance/metadefs-concepts.html<br />
** https://github.com/openstack/glance/tree/master/etc/metadefs<br />
* Searchlight<br />
** http://launchpad.net/searchlight<br />
** https://wiki.openstack.org/wiki/Searchlight<br />
* Horizon features<br />
** An admin UI for managing the catalog<br />
*** (Admin —> Metadata Definitions) (Kilo)<br />
** A widget for associating metadata to different resources<br />
*** (Update Metadata action on each row item below)<br />
*** admin -> images (Juno)<br />
*** admin -> flavors (Kilo)<br />
*** admin —> Host Aggregates (Kilo)<br />
*** project —> images (Liberty)<br />
*** project —> instances (Mitaka)<br />
** The ability to add metadata at launch time<br />
*** project —> Launch Instance (ng launch instance enabled) (Mitaka)<br />
<br />
The following information provides much of the background information on where these concepts originated.<br />
<br />
== Overview ==<br />
<br />
A challenge we've experienced with using OpenStack is discovering, sharing, and correlating metadata across services and different types of resources. We believe this affects both end users and administrators. <br />
<br />
For end users, we feel that basic tasks like launching instances are too technical and require too much pre-existing knowledge of OpenStack concepts. For example, you should be able to simply specify categories like "Big Data" or an "OS Family" and let the system find the boot source for you, whether that is an image, snapshot, or volume. It should also allow finer-grained filtering, such as on the specific versions of software that you want.<br />
<br />
For administrators, we’d like there to be an easier way to meaningfully collaborate on properties across host aggregates, flavors, images, volumes, or other cloud resources. <br />
<br />
Various OpenStack services provide techniques to abstract low level resource selection to one level higher, such as flavors, volume types, or artifact types. These resource abstractions often allow "metadata" in terms of key-value pair properties to further specialize and describe instances of each resource type. However, collaborating on those properties can be a disconnected and difficult process. This often involves searching wikis and opening the source code. In addition, the metadata properties often need to be correlated across several different services. It becomes more difficult as a cloud's scale grows and the number of resources being managed increases.<br />
<br />
We, HP and Intel, believe that both of the above problems come back to needing a better way for users to collaborate on metadata across services and resource types. We started a project called Graffiti to explore ideas and concepts for how to make this easier and more approachable for end users. Please join with us to help move forward together as a community!<br />
<br />
We believe that we can make some immediate improvements in Horizon, but that they can't be achieved through Horizon alone and that the benefits should extend to the API and CLI interactions as well. Better cross service collaboration and consistency on metadata should provide benefits that can be leveraged by other projects such as scheduling, reservation, orchestration, and policy enforcement.<br />
<br />
=== Terminology Note ===<br />
<br />
We think "metadata" is a somewhat unapproachable term, so we have been exploring the concept of a "capability". A capability can be thought of as a named "tag" which may or may not have properties. The idea is that a user can simply "tag" a capability onto various cloud resources such as images, volumes, host aggregates, flavors, and so on; the exact storage mechanism is handled for the end user.<br />
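The capability idea above can be sketched as a simple data structure. The class and field names here are illustrative only, not part of any Graffiti API:

```python
from dataclasses import dataclass, field

@dataclass
class Capability:
    """A named tag, optionally carrying key-value properties."""
    name: str
    properties: dict = field(default_factory=dict)

# "Tagging" a resource is then just attaching capabilities to it:
image_capabilities = [
    Capability("Big Data"),
    Capability("MySQL", {"version": "5.7"}),
]
print(image_capabilities[1].name)
```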
<br />
=== Screencasts ===<br />
<br />
To help explain the ideas of the project, we have a quick screencast demonstrating the concepts running under POC code. Please take a look!<br />
<br />
* [https://www.youtube.com/watch?v=f0SZtPgcxk4 Concept Overview]<br />
<br />
=== Usage Concepts ===<br />
<br />
# Load your metadata definitions (called capabilities and tags)<br />
## Into the central metadata catalog <br />
# Update the resources in the cloud with your tags and capabilities<br />
# Let users find the resources with your desired tags and capabilities<br />
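The three steps above can be sketched against the Glance v2 REST APIs that ultimately implemented these concepts. Everything concrete in this sketch (the endpoint, token, namespace, and property names) is an assumption for illustration, not a value from this page:

```python
# A minimal sketch of the three usage steps against the Glance v2
# metadefs and images REST APIs. Endpoint, token, namespace name, and
# property key are illustrative assumptions.
import json

GLANCE = "http://controller:9292"                 # hypothetical endpoint
HEADERS = {"X-Auth-Token": "<keystone-token>",    # placeholder token
           "Content-Type": "application/json"}

# 1. Load a metadata definition (namespace) into the central catalog:
#    POST {GLANCE}/v2/metadefs/namespaces
namespace = {
    "namespace": "OS::Example::Capabilities",     # assumed namespace name
    "display_name": "Example Capabilities",
    "visibility": "public",
    "protected": False,
}

# 2. Update a resource with the capability, e.g. an image property
#    (Glance v2 image updates use JSON-Patch):
#    PATCH {GLANCE}/v2/images/{image_id}
patch = [{"op": "add", "path": "/hypervisor_type", "value": "qemu"}]

# 3. Let users find resources by filtering on that property:
#    GET {GLANCE}/v2/images?hypervisor_type=qemu
print(json.dumps(namespace), json.dumps(patch))
```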
<br />
== Design Concepts ==<br />
<br />
Additional architecture concepts on the [[Graffiti/Architecture|Architecture]] page.<br />
<br />
=== Juno Summit Design Session ===<br />
<br />
POC Demo review:<br />
* https://www.youtube.com/watch?v=Dhrthnq1bnw<br />
<br />
* Session: http://sched.co/1m7wghx<br />
* Etherpad: https://etherpad.openstack.org/p/juno-summit-graffiti<br />
<br />
=== IRC ===<br />
<br />
The various features are maintained by teams in the following IRC channels on [http://freenode.net/ Freenode].<br />
<br />
#openstack-searchlight<br />
#openstack-horizon<br />
#openstack-glance<br />
<br />
=== Development ===<br />
* Open source under Apache 2.0<br />
* [https://github.com/stackforge/graffiti Graffiti POC API Service Source Repository] - No Longer Maintained (See Glance, Horizon, Searchlight)</div>Travis Tripphttps://wiki.openstack.org/w/index.php?title=Graffiti&diff=101104Graffiti2016-01-13T17:26:35Z<p>Travis Tripp: /* Current Status */</p>
<hr />
<div><br />
== What's in my cloud? ==<br />
<br />
I've got a lot of resources in my cloud.<br />
<br />
* How do I find what I need?<br />
* How do I describe what I have?<br />
<br />
At its core, Graffiti's intent is to enable better metadata collaboration across services and projects for OpenStack users. Its initial goal is to provide cross-service metadata "tagging" and search aggregation for cloud resources.<br />
<br />
== Current Status ==<br />
<br />
The Graffiti project was proposed at the Atlanta (Juno) OpenStack summit. Since then, many of its concepts have been adopted and implemented across multiple OpenStack projects.<br />
<br />
* Glance Metadata Definition Catalog<br />
** http://docs.openstack.org/developer/glance/metadefs-concepts.html<br />
** https://github.com/openstack/glance/tree/master/etc/metadefs<br />
* Searchlight<br />
** http://launchpad.net/searchlight<br />
** https://wiki.openstack.org/wiki/Searchlight<br />
* Horizon features<br />
** An admin UI for managing the catalog<br />
*** (Admin —> Metadata Definitions) (Kilo)<br />
** A widget for associating metadata to different resources<br />
*** (Update Metadata action on each row item below)<br />
*** admin -> images (Juno)<br />
*** admin -> flavors (Kilo)<br />
*** admin —> Host Aggregates (Kilo)<br />
*** project —> images (Liberty)<br />
*** project —> instances (Mitaka)<br />
** The ability to add metadata at launch time<br />
*** project —> Launch Instance (ng launch instance enabled) (Mitaka)<br />
<br />
The following information provides much of the background information on where these concepts originated.<br />
<br />
== Overview ==<br />
<br />
A challenge we've experienced with using OpenStack is discovering, sharing, and correlating metadata across services and different types of resources. We believe this affects both end users and administrators. <br />
<br />
For end users, we feel that basic tasks like launching instances are too technical and require too much pre-existing knowledge of OpenStack concepts. For example, you should be able to simply specify categories like "Big Data" or an "OS Family" and let the system find the boot source for you, whether that is an image, snapshot, or volume. It should also allow finer-grained filtering, such as on the specific versions of software that you want.<br />
<br />
For administrators, we’d like there to be an easier way to meaningfully collaborate on properties across host aggregates, flavors, images, volumes, or other cloud resources. <br />
<br />
Various OpenStack services provide techniques to abstract low level resource selection to one level higher, such as flavors, volume types, or artifact types. These resource abstractions often allow "metadata" in terms of key-value pair properties to further specialize and describe instances of each resource type. However, collaborating on those properties can be a disconnected and difficult process. This often involves searching wikis and opening the source code. In addition, the metadata properties often need to be correlated across several different services. It becomes more difficult as a cloud's scale grows and the number of resources being managed increases.<br />
<br />
We, HP and Intel, believe that both of the above problems come back to needing a better way for users to collaborate on metadata across services and resource types. We started a project called Graffiti to explore ideas and concepts for how to make this easier and more approachable for end users. Please join with us to help move forward together as a community!<br />
<br />
We believe that we can make some immediate improvements in Horizon, but that they can't be achieved through Horizon alone and that the benefits should extend to the API and CLI interactions as well. Better cross service collaboration and consistency on metadata should provide benefits that can be leveraged by other projects such as scheduling, reservation, orchestration, and policy enforcement.<br />
<br />
=== Terminology Note ===<br />
<br />
We think "metadata" is a somewhat unapproachable term, so we have been exploring the concept of a "capability". A capability can be thought of as a named "tag" which may or may not have properties. The idea is that a user can simply "tag" a capability onto various cloud resources such as images, volumes, host aggregates, flavors, and so on; the exact storage mechanism is handled for the end user.<br />
<br />
=== Screencasts ===<br />
<br />
To help explain the ideas of the project, we have a quick screencast demonstrating the concepts running under POC code. Please take a look!<br />
<br />
* [https://www.youtube.com/watch?v=f0SZtPgcxk4 Concept Overview]<br />
<br />
=== Usage Concepts ===<br />
<br />
# Load your metadata definitions (called capabilities and tags)<br />
## Into the central metadata catalog <br />
# Update the resources in the cloud with your tags and capabilities<br />
# Let users find the resources with your desired tags and capabilities<br />
<br />
== LEGACY Status ==<br />
<br />
In the spirit of agile and iterative open development, we are taking the concepts of Graffiti and working with the community to build them directly into core OpenStack projects. Our first stop is to build out a new capability and tag catalog API in an existing core OpenStack project. These discussions are currently happening with the Glance team.<br />
<br />
We built and demonstrated a POC of the concepts at the Juno summit. The POC was done with Horizon, Nova, Glance, and Cinder. We are using the POC to help us better understand technical issues and will use that knowledge to help contribute to existing projects and to build out the Graffiti concepts appropriately.<br />
<br />
POC Demo Review:<br />
* https://www.youtube.com/watch?v=Dhrthnq1bnw<br />
<br />
== Design Concepts ==<br />
<br />
Additional architecture concepts on the [[Graffiti/Architecture|Architecture]] page.<br />
<br />
=== Juno Summit Design Session ===<br />
<br />
POC Demo review:<br />
* https://www.youtube.com/watch?v=Dhrthnq1bnw<br />
<br />
* Session: http://sched.co/1m7wghx<br />
* Etherpad: https://etherpad.openstack.org/p/juno-summit-graffiti<br />
<br />
=== IRC ===<br />
<br />
Please join with us to help move forward together as a community! The various features are maintained by teams in the following IRC channels on [http://freenode.net/ Freenode].<br />
<br />
#openstack-searchlight<br />
#openstack-horizon<br />
#openstack-glance<br />
<br />
=== Development ===<br />
* Open source under Apache 2.0<br />
* [https://github.com/stackforge/graffiti Graffiti POC API Service Source Repository]<br />
* [https://github.com/ttripp/horizon Temporary Horizon POC Fork Repository]<br />
* [https://bugs.launchpad.net/graffiti Bug tracker]<br />
* [https://blueprints.launchpad.net/graffiti Feature tracker]</div>Travis Tripphttps://wiki.openstack.org/w/index.php?title=Graffiti&diff=101103Graffiti2016-01-13T17:24:41Z<p>Travis Tripp: Updated with current status</p>
<hr />
<div><br />
== What's in my cloud? ==<br />
<br />
I've got a lot of resources in my cloud.<br />
<br />
* How do I find what I need?<br />
* How do I describe what I have?<br />
<br />
At its core, Graffiti's intent is to enable better metadata collaboration across services and projects for OpenStack users. Its initial goal is to provide cross-service metadata "tagging" and search aggregation for cloud resources.<br />
<br />
== Current Status ==<br />
<br />
The Graffiti project was proposed at the Atlanta (Juno) OpenStack summit. Since then, many of its concepts have been adopted and implemented across multiple OpenStack projects.<br />
<br />
* Glance Metadata Definition Catalog<br />
** http://docs.openstack.org/developer/glance/metadefs-concepts.html<br />
** https://github.com/openstack/glance/tree/master/etc/metadefs<br />
* Horizon features (see below)<br />
* Searchlight<br />
** http://launchpad.net/searchlight<br />
** https://wiki.openstack.org/wiki/Searchlight<br />
<br />
Horizon Features:<br />
<br />
* an admin UI for managing the catalog<br />
** (Admin —> Metadata Definitions) (Kilo)<br />
<br />
* a widget for associating metadata to different resources<br />
** (Update Metadata action on each row item below)<br />
** admin -> images (Juno)<br />
** admin -> flavors (Kilo)<br />
** admin —> Host Aggregates (Kilo)<br />
** project —> images (Liberty)<br />
** project —> instances (Mitaka)<br />
<br />
* The ability to add metadata at launch time<br />
** project —> Launch Instance (ng launch instance enabled) (Mitaka)<br />
<br />
The following information provides much of the background on where these concepts originated.<br />
<br />
== Overview ==<br />
<br />
A challenge we've experienced with using OpenStack is discovering, sharing, and correlating metadata across services and different types of resources. We believe this affects both end users and administrators. <br />
<br />
For end users, we feel that basic tasks like launching instances are too technical and require too much pre-existing knowledge of OpenStack concepts. For example, you should be able to simply specify categories like "Big Data" or an "OS Family" and let the system find the boot source for you, whether that is an image, snapshot, or volume. It should also allow finer-grained filtering, such as on the specific versions of software that you want.<br />
<br />
For administrators, we’d like there to be an easier way to meaningfully collaborate on properties across host aggregates, flavors, images, volumes, or other cloud resources. <br />
<br />
Various OpenStack services provide techniques to abstract low level resource selection to one level higher, such as flavors, volume types, or artifact types. These resource abstractions often allow "metadata" in terms of key-value pair properties to further specialize and describe instances of each resource type. However, collaborating on those properties can be a disconnected and difficult process. This often involves searching wikis and opening the source code. In addition, the metadata properties often need to be correlated across several different services. It becomes more difficult as a cloud's scale grows and the number of resources being managed increases.<br />
<br />
We, HP and Intel, believe that both of the above problems come back to needing a better way for users to collaborate on metadata across services and resource types. We started a project called Graffiti to explore ideas and concepts for how to make this easier and more approachable for end users. Please join with us to help move forward together as a community!<br />
<br />
We believe that we can make some immediate improvements in Horizon, but that they can't be achieved through Horizon alone and that the benefits should extend to the API and CLI interactions as well. Better cross service collaboration and consistency on metadata should provide benefits that can be leveraged by other projects such as scheduling, reservation, orchestration, and policy enforcement.<br />
<br />
=== Terminology Note ===<br />
<br />
We think "metadata" is a somewhat unapproachable term, so we have been exploring the concept of a "capability". A capability can be thought of as a named "tag" which may or may not have properties. The idea is that a user can simply "tag" a capability onto various cloud resources such as images, volumes, host aggregates, flavors, and so on; the exact storage mechanism is handled for the end user.<br />
<br />
=== Screencasts ===<br />
<br />
To help explain the ideas of the project, we have a quick screencast demonstrating the concepts running under POC code. Please take a look!<br />
<br />
* [https://www.youtube.com/watch?v=f0SZtPgcxk4 Concept Overview]<br />
<br />
=== Usage Concepts ===<br />
<br />
# Load your metadata definitions (called capabilities and tags)<br />
## Into the central metadata catalog <br />
# Update the resources in the cloud with your tags and capabilities<br />
# Let users find the resources with your desired tags and capabilities<br />
<br />
== LEGACY Status ==<br />
<br />
In the spirit of agile and iterative open development, we are taking the concepts of Graffiti and working with the community to build them directly into core OpenStack projects. Our first stop is to build out a new capability and tag catalog API in an existing core OpenStack project. These discussions are currently happening with the Glance team.<br />
<br />
We built and demonstrated a POC of the concepts at the Juno summit. The POC was done with Horizon, Nova, Glance, and Cinder. We are using the POC to help us better understand technical issues and will use that knowledge to help contribute to existing projects and to build out the Graffiti concepts appropriately.<br />
<br />
POC Demo Review:<br />
* https://www.youtube.com/watch?v=Dhrthnq1bnw<br />
<br />
== Design Concepts ==<br />
<br />
Additional architecture concepts on the [[Graffiti/Architecture|Architecture]] page.<br />
<br />
=== Juno Summit Design Session ===<br />
<br />
POC Demo review:<br />
* https://www.youtube.com/watch?v=Dhrthnq1bnw<br />
<br />
* Session: http://sched.co/1m7wghx<br />
* Etherpad: https://etherpad.openstack.org/p/juno-summit-graffiti<br />
<br />
=== IRC ===<br />
<br />
Please join with us to help move forward together as a community! The various features are maintained by teams in the following IRC channels on [http://freenode.net/ Freenode].<br />
<br />
#openstack-searchlight<br />
#openstack-horizon<br />
#openstack-glance<br />
<br />
=== Development ===<br />
* Open source under Apache 2.0<br />
* [https://github.com/stackforge/graffiti Graffiti POC API Service Source Repository]<br />
* [https://github.com/ttripp/horizon Temporary Horizon POC Fork Repository]<br />
* [https://bugs.launchpad.net/graffiti Bug tracker]<br />
* [https://blueprints.launchpad.net/graffiti Feature tracker]</div>Travis Tripphttps://wiki.openstack.org/w/index.php?title=Sprints/HorizonMitakaSprint&diff=100668Sprints/HorizonMitakaSprint2016-01-07T16:32:30Z<p>Travis Tripp: /* Registration */</p>
<hr />
<div>The Horizon team is having their Mitaka mid-cycle/sprint in Hillsboro, Oregon.<br />
<br />
* Where: Intel - Jones Farm 1, NE Griffin Oaks St, Hillsboro, OR 97124<br />
* When: February 23-25, 2016<br />
<br />
=== Travel ===<br />
* Fly into Portland International Airport (PDX)<br />
** Time to drive is approximately 40 minutes<br />
** Trains from the airport to Hillsboro are available, but take approximately 2 hours.<br />
<br />
=== Hotels ===<br />
Recommended hotels in Hillsboro are:<br />
* Holiday Inn Express Portland West/Hillsboro , 5900 NE Ray Cir, Hillsboro, OR 97124, Phone:(503) 844-9696<br />
http://www.ihg.com/holidayinnexpress/hotels/us/en/hillsboro/pdxhi/hoteldetail/directions<br />
* Larkspur Landing in Hillsboro, Oregon, 3133 NE Shute Rd, Hillsboro, OR 97124, Phone:(503) 681-2121<br />
http://www.larkspurhotels.com/hillsboro/amenities/?gclid=COeFrZG77ckCFYpffgodyugJvQ<br />
* More hotel options are available in Portland as well. Hillsboro is approximately 30 minutes west of downtown Portland.<br />
<br />
=== Registration ===<br />
{| class="wikitable sortable"<br />
|-<br />
! # !! Name !! IRC Nick !! Comment !! Email<br />
|-<br />
| 1 || David Lyle || david-lyle || || dklyle0 AT gmail DOT com<br />
|-<br />
| 2 || Richard Jones || r1chardj0n3s || || r1chardj0n3s AT gmail DOT com<br />
|-<br />
| 3 || Diana Whitten || hurgleburgler || || hurgleburgler AT gmail DOT com<br />
|-<br />
| 4 || Tyr Johanson || tyr || || tyr AT hpe DOT com<br />
|-<br />
| 5 || Doug Fish || doug-fish || || drfish AT ibm DOT com<br />
|-<br />
| 6 || Travis Tripp || TravT || || travis.tripp AT hpe DOT com<br />
|-<br />
| 7 || <!-- name --> || <!-- irc --> || <!-- comment --> || <!-- email --><br />
|}</div>Travis Tripphttps://wiki.openstack.org/w/index.php?title=Searchlight&diff=97800Searchlight2015-11-24T21:57:27Z<p>Travis Tripp: /* Screencasts */</p>
<hr />
<div><br />
= Mission Statement =<br />
<br />
To provide advanced and scalable indexing and search across multi-tenant cloud resources.<br />
<br />
== Project Links ==<br />
<br />
{| border="1" cellpadding="2"<br />
| Developer Documentation<br />
| http://docs.openstack.org/developer/searchlight/<br />
|-<br />
| Source code<br />
| https://github.com/openstack/searchlight<br />
|-<br />
| Gerrit Reviews<br />
| https://review.openstack.org/#/q/status:open+project:openstack/searchlight,n,z<br />
|-<br />
| Bug tracker<br />
| https://bugs.launchpad.net/searchlight<br />
|-<br />
| Feature tracker (see Glance for historical tracking)<br />
| https://blueprints.launchpad.net/searchlight<br />
|-<br />
| IRC<br />
| #openstack-searchlight<br />
|-<br />
| Meeting Times<br />
| http://eavesdrop.openstack.org/#Searchlight_Team_Meeting<br />
|-<br />
| Meeting Agenda<br />
| https://etherpad.openstack.org/p/search-team-meeting-agenda<br />
|-<br />
| Meeting Logs<br />
| http://eavesdrop.openstack.org/meetings/openstack_search/<br />
|}<br />
<br />
= Overview =<br />
<br />
Searchlight was originally developed and released in the Kilo release of Glance as the Catalog Index Service [1]. At the Liberty Summit we decided to broaden the scope to provide advanced and scalable search across multi-tenant cloud resources.<br />
<br />
[1] http://specs.openstack.org/openstack/glance-specs/specs/kilo/catalog-index-service.html<br />
<br />
This is intended to dramatically improve the search capabilities and performance of various OpenStack cloud services.<br />
<br />
It accomplishes this by indexing data from existing API servers into ElasticSearch and offloading user search queries from those servers. ElasticSearch is a search server based on Lucene. It provides a distributed, scalable, near real-time, faceted, multitenant-capable full-text search engine with a RESTful web interface and schema-free JSON documents. ElasticSearch is developed and released as open source under the terms of the Apache License. Notable users of ElasticSearch include Wikimedia, StumbleUpon, Mozilla, Quora, Foursquare, Etsy, SoundCloud, GitHub, FDA, CERN, and Stack Exchange. (Source: http://en.wikipedia.org/wiki/Elasticsearch). The elastic-recheck project also uses Elasticsearch (and kibana) to classify and track OpenStack gate failures. (Source: http://status.openstack.org/elastic-recheck)<br />
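As a hedged illustration of the kind of faceted query this design enables, the sketch below builds a request body in the standard Elasticsearch query DSL. The resource type name, field names, and API port are assumptions for illustration, not details taken from this page:

```python
# An illustrative faceted search request in the Elasticsearch query DSL,
# the kind of query Searchlight forwards to its backend. Type name,
# field names, and port are assumptions.
import json

search_request = {
    "type": ["OS::Glance::Image"],     # assumed indexed resource type
    "query": {
        "bool": {
            "must": [{"match": {"name": "ubuntu"}}],
            "filter": [{"term": {"status": "active"}}],
        }
    },
    "limit": 10,
}

# A deployment would POST this body to the Searchlight search endpoint,
# e.g. POST http://controller:9393/v1/search (port assumed).
print(json.dumps(search_request, indent=2))
```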
<br />
=== Screencasts ===<br />
<br />
* Mitaka Summit Presentationː https://www.youtube.com/watch?v=0jYXsK4j26s<br />
* Liberty PTL Overviewː https://www.youtube.com/watch?v=yU5CrAOAlkA<br />
* Concept Demo (Kilo summit)ː https://youtu.be/eGnGr48E5_4<br />
<br />
=== Design ===<br />
<br />
The design is based on the Catalog Index Service in Glance. It will be refined moving forward as cross-project needs are discovered and defined.<br />
<br />
* Glance Specificationː http://specs.openstack.org/openstack/glance-specs/specs/kilo/catalog-index-service.html<br />
<br />
==== Concept Overview ====<br />
<br />
[[File:Searchlight-Concept-1.png]]<br />
<br />
==== Concept Internals Overview ====<br />
<br />
* [https://wiki.openstack.org/w/images/5/55/Searchlight-Concept-2.png Concept Internals]<br />
* [https://wiki.openstack.org/w/images/e/ef/Searchlight-use-when-there.png Usage switching]<br />
* [https://wiki.openstack.org/w/images/1/10/Searchlight-Concept-Horizon-Layers.png Horizon Concept Layers]<br />
* [https://wiki.openstack.org/w/images/6/6b/Searchlight-WebSocket-Concept.png Horizon Web Socket Concept]<br />
<br />
== Get Involved ==<br />
<br />
Please join with us to help move forward together as a community! We are sure that the ideas and concepts can use refinement and we'd like to identify where we can best fit in to the ecosystem.</div>Travis Tripphttps://wiki.openstack.org/w/index.php?title=Searchlight&diff=97799Searchlight2015-11-24T21:56:21Z<p>Travis Tripp: /* Project Links */</p>
<hr />
<div><br />
= Mission Statement =<br />
<br />
To provide advanced and scalable indexing and search across multi-tenant cloud resources.<br />
<br />
== Project Links ==<br />
<br />
{| border="1" cellpadding="2"<br />
| Developer Documentation<br />
| http://docs.openstack.org/developer/searchlight/<br />
|-<br />
| Source code<br />
| https://github.com/openstack/searchlight<br />
|-<br />
| Gerrit Reviews<br />
| https://review.openstack.org/#/q/status:open+project:openstack/searchlight,n,z<br />
|-<br />
| Bug tracker<br />
| https://bugs.launchpad.net/searchlight<br />
|-<br />
| Feature tracker (see Glance for historical tracking)<br />
| https://blueprints.launchpad.net/searchlight<br />
|-<br />
| IRC<br />
| #openstack-searchlight<br />
|-<br />
| Meeting Times<br />
| http://eavesdrop.openstack.org/#Searchlight_Team_Meeting<br />
|-<br />
| Meeting Agenda<br />
| https://etherpad.openstack.org/p/search-team-meeting-agenda<br />
|-<br />
| Meeting Logs<br />
| http://eavesdrop.openstack.org/meetings/openstack_search/<br />
|}<br />
<br />
= Overview =<br />
<br />
Searchlight was originally developed and released in the Kilo release of Glance as the Catalog Index Service [1]. At the Liberty Summit we decided to broaden the scope to provide advanced and scalable search across multi-tenant cloud resources.<br />
<br />
[1] http://specs.openstack.org/openstack/glance-specs/specs/kilo/catalog-index-service.html<br />
<br />
This is intended to dramatically improve the search capabilities and performance of various OpenStack cloud services.<br />
<br />
It accomplishes this by indexing data from existing API servers into ElasticSearch and offloading user search queries from those servers. ElasticSearch is a search server based on Lucene. It provides a distributed, scalable, near real-time, faceted, multitenant-capable full-text search engine with a RESTful web interface and schema-free JSON documents. ElasticSearch is developed and released as open source under the terms of the Apache License. Notable users of ElasticSearch include Wikimedia, StumbleUpon, Mozilla, Quora, Foursquare, Etsy, SoundCloud, GitHub, FDA, CERN, and Stack Exchange. (Source: http://en.wikipedia.org/wiki/Elasticsearch). The elastic-recheck project also uses Elasticsearch (and kibana) to classify and track OpenStack gate failures. (Source: http://status.openstack.org/elastic-recheck)<br />
<br />
=== Screencasts ===<br />
<br />
To help explain the ideas of the project, we have a quick screencast demonstrating the concepts. This was done by taking the Glance Catalog Index Service, adding a plugin for Nova, and then modifying Horizon to use it. This is what was demonstrated at the Liberty Design Summit in both a Glance fishbowl session and a Horizon fishbowl session.<br />
<br />
* Mitaka Summit Presentationː https://www.youtube.com/watch?v=0jYXsK4j26s<br />
* Concept Demoː https://youtu.be/eGnGr48E5_4<br />
* Liberty PTL Overviewː https://www.youtube.com/watch?v=yU5CrAOAlkA<br />
<br />
=== Design ===<br />
<br />
The design is based on the Catalog Index Service in Glance. It will be refined moving forward as cross-project needs are discovered and defined.<br />
<br />
* Glance Specificationː http://specs.openstack.org/openstack/glance-specs/specs/kilo/catalog-index-service.html<br />
<br />
==== Concept Overview ====<br />
<br />
[[File:Searchlight-Concept-1.png]]<br />
<br />
==== Concept Internals Overview ====<br />
<br />
* [https://wiki.openstack.org/w/images/5/55/Searchlight-Concept-2.png Concept Internals]<br />
* [https://wiki.openstack.org/w/images/e/ef/Searchlight-use-when-there.png Usage switching]<br />
* [https://wiki.openstack.org/w/images/1/10/Searchlight-Concept-Horizon-Layers.png Horizon Concept Layers]<br />
* [https://wiki.openstack.org/w/images/6/6b/Searchlight-WebSocket-Concept.png Horizon Web Socket Concept]<br />
<br />
== Get Involved ==<br />
<br />
Please join with us to help move forward together as a community! We are sure that the ideas and concepts can use refinement and we'd like to identify where we can best fit in to the ecosystem.</div>Travis Tripphttps://wiki.openstack.org/w/index.php?title=IRC&diff=97205IRC2015-11-17T22:35:48Z<p>Travis Tripp: </p>
<hr />
<div>IRC, or Internet Relay Chat, is often used as a real-time communication capability with open source projects. We're pretty proud of the friendly vibe in the OpenStack channels and invite anyone wanting to ask questions or talk about all things OpenStack to the channels.<br />
<br />
IRC software can be found for all operating systems. The [https://en.wikipedia.org/wiki/Comparison_of_Internet_Relay_Chat_clients#Operating_system_support IRC clients comparison chart on Wikipedia] can help you pick one for your operating system.<br />
<br />
You don't have to have a complex setup to use IRC. You can use the web client for Freenode, which doesn't require any download or setup. Just pick a nickname and join #openstack: http://webchat.freenode.net/?channels=openstack,openstack-101.<br />
<br />
<br />
=== How to read messages exchanged when you're offline ===<br />
<br />
IRC, unlike some other chat systems, doesn't keep messages while you're offline. In order to be notified of relevant communications you can either look at the [http://eavesdrop.openstack.org/irclogs/ channel logs] or set up an IRC proxy. <br />
<br />
The most common IRC proxies are [http://wiki.znc.in/ZNC znc] and [https://bip.milkypond.org/ bip]. See the following guides to configure them:<br />
<br />
* [https://kashyapc.fedorapeople.org/notes-bip-IRC-proxy/README Installation notes for Fedora/RH-like] and [https://kashyapc.fedorapeople.org/notes-bip-IRC-proxy/bip.conf example bip.conf] contributed by Kashyap Chamarthy<br />
* ZNC [https://dague.net/2014/09/13/my-irc-proxy-setup/ configuration notes] contributed by Sean Dague<br />
* [https://weechat.org/ WeeChat] IRC client combines proxy and client, and allows you to run the client in a shell and access that client additionally from a web client or Android app.<br />
<br />
=== IRC meetings ===<br />
<br />
The OpenStack project holds its various public meetings on IRC. See [[Meetings]] for details.<br />
<br />
== OpenStack IRC channels (chat.freenode.net) ==<br />
<br />
If you want to start a new IRC channel, please consult with the InfrastructureTeam in #openstack-infra or at openstack-infra@lists.openstack.org to ensure it gets registered appropriately. <br />
<br />
'''Many IRC channels are logged and [http://eavesdrop.openstack.org/irclogs/ recordings are publicly accessible]'''. If you're concerned about privacy consider using a [https://freenode.net/faq.shtml#cloaks cloak], [https://freenode.net/irc_servers.shtml#tor tor], hide your real name and be mindful not to write sensitive data in these channels.<br />
<br />
{| class="wikitable sortable" border="1"<br />
|- <br />
! IRC Channel !! Description<br />
|-<br />
|'''#openstack''' || general discussion, support<br />
|-<br />
| '''#openstack-101''' || guidance for new contributors<br />
|-<br />
|'''#openstack-ansible''' || [http://docs.openstack.org/developer/openstack-ansible/ OpenStack-Ansible] discussions<br />
|-<br />
|'''#openstack-anvil''' || [http://anvil.readthedocs.org/ Anvil] discussion channel<br />
|-<br />
|'''#openstack-app-catalog''' || [http://apps.openstack.org Community App Catalog] discussions <br />
|-<br />
|'''#openstack-barbican''' || Barbican-related team discussions<br />
|-<br />
|'''#openstack-blazar''' || blazar (formerly climate) team discussions<br />
|-<br />
|'''#openstack-board''' || OpenStack Foundation Board Meeting Back channel (mainly quiet except during meetings)<br />
|-<br />
|'''#openstack-ceilometer''' || ceilometer team discussions<br />
|-<br />
|'''#openstack-chef''' || deployment and operating OpenStack with Chef<br />
|- <br />
|'''#openstack-chinese''' || general discussion, support in Chinese<br />
|- <br />
|'''#openstack-cinder''' || cinder team discussions<br />
|-<br />
| '''#openstack-community''' || coordination of community activity<br />
|-<br />
| '''#openstack-containers''' || containers team discussion<br />
|-<br />
| '''#openstack-cue''' || Cue team discussion <br />
|-<br />
|'''#openstack-defcore''' || Defcore discussion channel<br />
|-<br />
|'''#openstack-dev''' || general and cross-project development discussion<br />
|-<br />
|'''#openstack-dns''' || Designate DNS team discussions<br />
|-<br />
|'''#openstack-doc''' || documentation team discussion<br />
|-<br />
|'''#openstack-fr''' || general discussion, support in French<br />
|-<br />
|'''#openstack-fwaas''' || Firewall as a Service discussions <br />
|-<br />
|'''#openstack-gbp''' || Group Based Policy discussions<br />
|-<br />
|'''#openstack-glance''' || glance team discussions<br />
|-<br />
|'''#openstack-gsoc''' || google summer of code discussions<br />
|-<br />
|'''#openstack-ha''' || High Availability discussions<br />
|-<br />
|'''#openstack-horizon''' || horizon team discussions<br />
|-<br />
|'''#openstack-hyper-v''' || Microsoft Windows guests and hypervisor discussion<br />
|-<br />
| '''#openstack-i18n''' || I18N team discussions<br />
|-<br />
|'''#openstack-infra''' || developer community infrastructure, continuous integration testing<br />
|-<br />
|'''#openstack-ironic''' || ironic & bare metal discussions<br />
|-<br />
|'''#openstack-keystone''' || keystone team discussions<br />
|-<br />
|'''#openstack-ko''' || general discussion, support in Korean<br />
|-<br />
|'''#openstack-latinamerica''' || OpenStack Latin America (Spanish)<br />
|-<br />
|'''#openstack-lbaas''' || Neutron LBaaS and Project Octavia discussions<br />
|-<br />
|'''#openstack-manila''' || shared / distributed file system service team discussions<br />
|-<br />
|'''#openstack-marconi''' || queue/messaging marconi team discussions<br />
|-<br />
|'''#openstack-meeting''' || team meetings<br />
|-<br />
|'''#openstack-meeting-alt''' || team meetings, alternate channel<br />
|-<br />
|'''#openstack-meeting-3''' || team meetings, another alternate channel<br />
|-<br />
|'''#openstack-meeting-4''' || team meetings, another alternate channel<br />
|-<br />
|'''#openstack-mistral''' || Mistral Workflow Service for OpenStack<br />
|-<br />
|'''#openstack-neutron''' || neutron team discussions<br />
|-<br />
|'''#openstack-nfv''' || [[Teams/NFV|NFV]] team discussions<br />
|-<br />
|'''#openstack-nova''' || nova team discussions<br />
|-<br />
|'''#openstack-operators''' || OpenStack Operators discussion channel<br />
|-<br />
|'''#openstack-opw''' || GNOME OPW mentor, intern and supporter discussions<br />
|-<br />
|'''#openstack-oslo''' || [https://wiki.openstack.org/wiki/Oslo Oslo] development discussion<br />
|-<br />
|'''#openstack-performance''' || All OpenStack performance related discussions<br />
|-<br />
|'''#openstack-powervm''' || PowerVM OpenStack drivers discussion channel<br />
|-<br />
| '''#openstack-qa''' || QA team discussion<br />
|-<br />
|'''#openstack-rally''' || [https://wiki.openstack.org/wiki/Rally Rally]: measure the performance of your cloud<br />
|-<br />
|'''#openstack-rating''' || Rating team discussions<br />
|-<br />
|'''#openstack-relmgr-office''' || Release managers office hours channel<br />
|-<br />
|'''#openstack-sahara''' || [https://wiki.openstack.org/wiki/Sahara Sahara] team discussions<br />
|-<br />
|'''#openstack-sdks''' || Development of SDKs to work with OpenStack and the unified OpenStack command line tool<br />
|-<br />
|'''#openstack-security''' || General discussion about OpenStack security and open channel for the OpenStack Security Group (OSSG)<br />
|-<br />
|'''#openstack-searchlight''' || [https://wiki.openstack.org/wiki/Searchlight Searchlight]: search your OpenStack resources<br />
|-<br />
|'''#openstack-stable''' || stable branch management and packaging discussions<br />
|-<br />
|'''#openstack-state-management''' || [https://wiki.openstack.org/wiki/TaskFlow TaskFlow] and state-management development discussion<br />
|-<br />
|'''#openstack-swift''' || swift team discussions<br />
|-<br />
|'''#openstack-trove''' || trove database team discussions<br />
|-<br />
|'''#openstack-tw''' || general discussion, support for the Taiwan community<br />
|-<br />
| '''#openstack-ux''' || discussion channel for user experience<br />
|-<br />
|'''#openstack-vmware''' || The VMwareAPI team discussion channel<br />
|-<br />
|'''#openstack-watcher''' || [https://wiki.openstack.org/wiki/Watcher Watcher] discussion channel<br />
|-<br />
|'''#congress''' || Congress policy developer discussion channel<br />
|-<br />
|'''#heat''' || Heat developer discussion channel<br />
|-<br />
|'''#kolla''' || Kolla team discussion channel<br />
|-<br />
|'''#magnetodb''' || Key-Value storage for OpenStack<br />
|-<br />
|'''#murano''' || Murano team discussions<br />
|-<br />
|'''#nova-docker''' || Nova Docker team discussions<br />
|-<br />
|'''#refstack''' || RefStack<br />
|-<br />
|'''#senlin''' || [https://wiki.openstack.org/wiki/Senlin Senlin] team discussions<br />
|-<br />
|'''#storyboard''' || StoryBoard team discussions<br />
|-<br />
|'''#tacker''' || [https://wiki.openstack.org/wiki/Tacker Tacker] NFV Orchestrator team discussions<br />
|-<br />
|'''#tripleo''' || TripleO team discussions<br />
|-<br />
|'''#puppet-openstack''' || OpenStack Puppet modules discussions<br />
|}<br />
<br />
[[Category:Connect]]</div>
Travis Tripp
https://wiki.openstack.org/w/index.php?title=Searchlight&diff=94903
Searchlight
2015-10-28T16:06:21Z
<p>Travis Tripp: /* Screencasts */</p>
<hr />
<div><br />
= Mission Statement =<br />
<br />
To provide advanced and scalable indexing and search across multi-tenant cloud resources.<br />
<br />
== Project Links ==<br />
<br />
{| border="1" cellpadding="2"<br />
| Developer Documentation<br />
| http://docs.openstack.org/developer/searchlight/<br />
|-<br />
| Source code<br />
| https://github.com/openstack/searchlight<br />
|-<br />
| Gerrit Reviews<br />
| https://review.openstack.org/#/q/status:open+project:openstack/searchlight,n,z<br />
|-<br />
| Bug tracker<br />
| https://bugs.launchpad.net/searchlight<br />
|-<br />
| Feature tracker (see Glance for historical tracking)<br />
| https://blueprints.launchpad.net/searchlight<br />
|-<br />
| IRC<br />
| #openstack-searchlight<br />
|-<br />
| Meeting Times<br />
| http://eavesdrop.openstack.org/#Search_Team_Meeting<br />
|-<br />
| Meeting Agenda<br />
| https://etherpad.openstack.org/p/search-team-meeting-agenda<br />
|-<br />
| Meeting Logs<br />
| http://eavesdrop.openstack.org/meetings/openstack_search/<br />
|}<br />
<br />
= Overview =<br />
<br />
Searchlight was originally developed and released in the Kilo release of Glance as the Catalog Index Service [1]. At the Liberty Summit we decided to broaden the scope to provide advanced and scalable search across multi-tenant cloud resources.<br />
<br />
[1] http://specs.openstack.org/openstack/glance-specs/specs/kilo/catalog-index-service.html<br />
<br />
This is intended to dramatically improve the search capabilities and performance of various OpenStack cloud services.<br />
<br />
It accomplishes this by offloading user search queries from existing API servers, indexing their data into ElasticSearch. ElasticSearch is a search server based on Lucene. It provides a distributed, scalable, near real-time, faceted, multitenant-capable full-text search engine with a RESTful web interface and schema-free JSON documents. ElasticSearch is developed and released as open source under the terms of the Apache License. Notable users of ElasticSearch include Wikimedia, StumbleUpon, Mozilla, Quora, Foursquare, Etsy, SoundCloud, GitHub, FDA, CERN, and Stack Exchange. (Source: http://en.wikipedia.org/wiki/Elasticsearch). The elastic-recheck project also uses Elasticsearch (and kibana) to classify and track OpenStack gate failures. (Source: http://status.openstack.org/elastic-recheck)<br />
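To make the faceted, multitenant-capable search concrete, the sketch below builds the general shape of an ElasticSearch query body: a full-text match scoped to one tenant, plus a terms aggregation for facet counts. The index schema, field names, and tenant identifier are hypothetical illustrations, not Searchlight's actual mapping.<br />

```python
import json

def build_search_body(tenant_id, text, facet_field="status"):
    """Build a tenant-scoped full-text query with facet counts.

    Hypothetical example of the kind of request body a service might
    POST to ElasticSearch's _search REST endpoint.
    """
    return {
        "query": {
            "bool": {
                # full-text relevance match on the resource name
                "must": {"match": {"name": text}},
                # multitenant scoping: only this tenant's documents
                "filter": {"term": {"tenant_id": tenant_id}},
            }
        },
        # facet counts, e.g. number of hits per status value
        "aggs": {facet_field: {"terms": {"field": facet_field}}},
    }

body = build_search_body("abc123", "web server")
print(json.dumps(body, indent=2))
```

Because the filtering happens inside the search engine rather than in each service's API, one indexed copy of the data can answer queries for many tenants at once.<br />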
<br />
=== Screencasts ===<br />
<br />
To help explain the ideas of the project, we have a quick screencast demonstrating the concepts. This was done by taking the Glance Catalog Index Service, adding a plugin for Nova, and then modifying Horizon to use it. This is what was demonstrated at the Liberty Design Summit in both a Glance fishbowl session and a Horizon fishbowl session.<br />
<br />
* Mitaka Summit Presentation: https://www.youtube.com/watch?v=0jYXsK4j26s<br />
* Concept Demo: https://youtu.be/eGnGr48E5_4<br />
* Liberty PTL Overview: https://www.youtube.com/watch?v=yU5CrAOAlkA<br />
<br />
=== Design ===<br />
<br />
The design is based on the Catalog Index Service in Glance. It will be refined moving forward as cross-project needs are discovered and defined.<br />
<br />
* Glance Specification: http://specs.openstack.org/openstack/glance-specs/specs/kilo/catalog-index-service.html<br />
<br />
==== Concept Overview ====<br />
<br />
[[File:Searchlight-Concept-1.png]]<br />
<br />
==== Concept Internals Overview ====<br />
<br />
* [https://wiki.openstack.org/w/images/5/55/Searchlight-Concept-2.png Concept Internals]<br />
* [https://wiki.openstack.org/w/images/e/ef/Searchlight-use-when-there.png Usage switching]<br />
* [https://wiki.openstack.org/w/images/1/10/Searchlight-Concept-Horizon-Layers.png Horizon Concept Layers]<br />
* [https://wiki.openstack.org/w/images/6/6b/Searchlight-WebSocket-Concept.png Horizon Web Socket Concept]<br />
<br />
== Get Involved ==<br />
<br />
Please join us to help move forward together as a community! We are sure that the ideas and concepts can use refinement, and we'd like to identify where we can best fit into the ecosystem.</div>
Travis Tripp
https://wiki.openstack.org/w/index.php?title=Design_Summit/Mitaka/Etherpads&diff=94040
Design Summit/Mitaka/Etherpads
2015-10-24T10:03:16Z
<p>Travis Tripp: Added Searchlight</p>
<hr />
<div>[[Category:Summit]]<br />
[[Category:Liberty]]<br />
[[Category:Etherpad]]<br />
<br />
<div style="column-count:3;-moz-column-count:3;-webkit-column-count:3"><br />
__TOC__<br />
</div><br />
== Event intro/closure ==<br />
* Tue 11:15: Design Summit 101 [https://etherpad.openstack.org/p/mitaka-design-summit-101]<br />
* Fri 12:30: Design Summit feedback [https://etherpad.openstack.org/p/mitaka-design-summit-feedback]<br />
<br />
==App Catalog==<br />
* Wed 14:00: [https://etherpad.openstack.org/p/TYO-ops-delivering-apps Ops: Delivering Apps To Your Users]<br />
* Thur 14:40: [https://etherpad.openstack.org/p/TYO-app-catalog App Catalog Working Session]<br />
<br />
== Barbican ==<br />
*Fishbowls<br />
**Fishbowl 1: Wednesday, 12:05pm - Barbican Roadmap and Cross-Project Integration Status<br />
**Fishbowl 2: Wednesday, 2:00pm - Barbican Key Federation<br />
*Working Sessions<br />
**Working Session 1: Wednesday, 4:40pm<br />
**Working Session 2: Wednesday, 5:30pm<br />
**Working Session 3: Thursday, 9:00am <br />
**Working Session 4: Thursday, 9:50am <br />
**Working Session 5:Thursday, 11:00am<br />
**Working Session 6:Thursday, 5:20pm<br />
*Meetup<br />
**Contributors Meetup: Friday, 2:00pm<br />
*Etherpad<br />
**https://etherpad.openstack.org/p/barbican-m-design-sessions<br />
<br />
== Cinder ==<br />
* Wed 14.00: Will the real Block Storage Service please stand up [https://etherpad.openstack.org/p/mitaka-cinder-direction]<br />
* Wed 14.50: Availability zones in Cinder [https://etherpad.openstack.org/p/mitaka-cinder-az]<br />
* Thur 9.00: Experimental APIs and Microversions [https://etherpad.openstack.org/p/mitaka-cinder-experimental-apis]<br />
* Thur 9.50: Cinder Nova Interaction [https://etherpad.openstack.org/p/mitaka-cinder-nova-interaction]<br />
* Thur 13.50: Cinder driver interface [https://etherpad.openstack.org/p/mitaka-cinder-driver-interface]<br />
* Thur 14.40: API Microversions [https://etherpad.openstack.org/p/mitaka-cinder-api-microversions]<br />
* Thur 15.30: ABC work [https://etherpad.openstack.org/p/mitaka-cinder-abc-work]<br />
* Thur 15.30: Driver deadlines [https://etherpad.openstack.org/p/mitaka-cinder-driver-deadlines]<br />
* Thur 16.30: C-Vol Active/Active HA [https://etherpad.openstack.org/p/mitaka-cinder-cvol-aa]<br />
* Thur 17.20: Volume manager locks [https://etherpad.openstack.org/p/mitaka-cinder-volmgr-locks]<br />
* Fri: Contributor Meetup [https://etherpad.openstack.org/p/mitaka-cinder-contributor-meetup]<br />
<br />
== Congress ==<br />
* Wed 2:00: [https://etherpad.openstack.org/p/congress-mitaka-arch Distributed architecture and additional features for Mitaka]<br />
* Wed 2:50: [https://etherpad.openstack.org/p/congress-mitaka-integrations Integration with other projects: congress gating (murano, nova, neutron, etc.), keystone]<br />
* Wed 3:40: [https://etherpad.openstack.org/p/congress-mitaka-external Discussions with external teams: OPNFV, Monasca]<br />
<br />
== Cross-Project workshops ==<br />
<br />
All sessions are on Tuesday 2015-10-27<br />
<br />
* 11:15<br />
** Service Catalog TNG (double session) [https://etherpad.openstack.org/p/mitaka-service-catalog-session]<br />
** Cycle themes [https://etherpad.openstack.org/p/mitaka-crossproject-themes]<br />
* 12:05<br />
** Supporting DefCore and Interoperability Testing [https://etherpad.openstack.org/p/mitaka-crossproject-defcore]<br />
** Tags today and tomorrow [https://etherpad.openstack.org/p/mitaka-crossproject-next-tags]<br />
* 14:00<br />
** Standard Deprecation Policy [https://etherpad.openstack.org/p/mitaka-deprecation-policy]<br />
* 14:50<br />
** Role Assignments for Service users [https://etherpad.openstack.org/p/mitaka-cross-project-role-assignment-service-user]<br />
* 15:40<br />
** Documenting the OpenStack way [https://etherpad.openstack.org/p/mitaka-crossproject-doc-the-way]<br />
* 16:40<br />
** Troubleshooting cross-project comms [https://etherpad.openstack.org/p/mitaka-crossproject-comms]<br />
* 17:30<br />
** Serving extreme use cases [https://etherpad.openstack.org/p/mitaka-crossproject-extreme-usecases]<br />
<br />
== Ceilometer ==<br />
* Wednesday, 2015-10-28<br />
** 11:15 - [https://etherpad.openstack.org/p/mitaka-telemetry-alarms alarms]<br />
** 12:05 - [https://etherpad.openstack.org/p/mitaka-telemetry-ui visualising data]<br />
** 14:50 - [https://etherpad.openstack.org/p/mitaka-telemetry-upgrades rolling upgrades]<br />
** 15:40 - [https://etherpad.openstack.org/p/mitaka-telemetry-split componentisation]<br />
<br />
* Thursday, 2015-10-29<br />
** 09:00 - [https://etherpad.openstack.org/p/mitaka-telemetry-testing functional and integration testing]<br />
** 09:50 - [https://etherpad.openstack.org/p/mitaka-telemetry-bi business intelligence]<br />
** 11:00 - [https://etherpad.openstack.org/p/mitaka-telemetry-polling refined polling]<br />
** 11:50 - [https://etherpad.openstack.org/p/mitaka-telemetry-cross-project project data ownership]<br />
** 13:50 - [https://etherpad.openstack.org/p/mitaka-telemetry-alarms event alarms]<br />
<br />
* Friday, 2015-10-30<br />
** 09:00 - [https://etherpad.openstack.org/p/mitaka-telemetry-contributors-meetup contributors meetup]<br />
<br />
== Designate ==<br />
<br />
* Wed 11:15: Roadmap https://etherpad.openstack.org/p/mitaka-designate-summit-roadmap<br />
* Wed 12:05: Alias Records https://etherpad.openstack.org/p/mitaka-designate-summit-alias<br />
* Wed 14:00: Batch API Actions https://etherpad.openstack.org/p/mitaka-designate-summit-batch-api<br />
* Wed 14:50: Embeddable Services https://etherpad.openstack.org/p/mitaka-designate-summit-embeddable-services<br />
* Wed 16:40: Incremental Zone Transfer (IXFR) https://etherpad.openstack.org/p/mitaka-designate-summit-ifxr<br />
* Fri 14:00: Contributors Meetup https://etherpad.openstack.org/p/mitaka-designate-summit-meetup<br />
<br />
== Glance ==<br />
<br />
* Wed:<br />
** 14:00 - 14:40: Trusts implementation (mfedosin) https://etherpad.openstack.org/p/mitaka-glance-trusts<br />
* Thur:<br />
** 09:00 - 09:40 (fishbowl): Cross-project image protection (rosmaita) https://etherpad.openstack.org/p/mitaka-glance-xp-property-protections-support<br />
** 09:50 - 10:30: Image Signature Verification Improvements (bpoulos) https://etherpad.openstack.org/p/mitaka-glance-image-signing-and-encryption<br />
** 11:50 - 12:30: Defcore Updates and joint effort (flaper87) https://etherpad.openstack.org/p/mitaka-glance-defcore<br />
** 14:40 - 15:20 (fishbowl): Glance image import reloaded (rosmaita, mclaren) https://etherpad.openstack.org/p/Mitaka-glance-image-import-reloaded<br />
** 15:30 - 16:10 (fishbowl): Artifacts Review (ativelkov) https://etherpad.openstack.org/p/mitaka-glance-artifacts-review<br />
** 16:30 - 17:10: Glance image import (follow-up working session) (flaper87) https://etherpad.openstack.org/p/Mitaka-glance-image-import-reloaded<br />
** 17:20 - 18:00: Finalize Glance priorities (flaper87)<br />
<br />
== Heat ==<br />
* The everything etherpad: https://etherpad.openstack.org/p/mitaka-heat-sessions<br />
'''Wed''' <br />
* 11:15 - 11:55: [https://etherpad.openstack.org/p/mitaka-heat-documentation (W) Documentation improvements] <br />
* 12:05 - 12:45: [https://etherpad.openstack.org/p/mitaka-heat-tests (W) Heat tests] <br />
* 14:00 - 14:40: [https://etherpad.openstack.org/p/mitaka-heat-convergence-migration (W) Tool to migrate stacks to/from convergence] <br />
* 14:50 - 15:30: [https://etherpad.openstack.org/p/mitaka-heat-large-stacks (F) Issues from deploying very large stacks]<br />
* 15:40 - 16:20: [https://etherpad.openstack.org/p/mitaka-heat-user-ops (F) User/ops session for summit ]<br />
'''Thu'''<br />
* 09:00 - 09:40: [https://etherpad.openstack.org/p/mitaka-heat-autoscaling (W) AutoScaling/Group architecture/roadmap] <br />
* 09:50 - 10:30: [https://etherpad.openstack.org/p/mitaka-heat-break-stack-barrier (W) Breaking the stack barrier] <br />
* 11:00 - 11:40: [https://etherpad.openstack.org/p/mitaka-heat-composition-improvements (F) Composition improvements ] <br />
* 11:50 - 12:30: [https://etherpad.openstack.org/p/mitaka-heat-openstackclient (F) Complete heat support in python-openstackclient ]<br />
* 13:50 - 14:30: [https://etherpad.openstack.org/p/mitaka-heat-hooks-notifications (W) Hooks & Notifications ] <br />
* 14:40 - 15:20: [https://etherpad.openstack.org/p/mitaka-heat-convergence-ph1 (W) Convergence Phase 1 results] <br />
* 15:30 - 16:10: [https://etherpad.openstack.org/p/mitaka-heat-convergence-ph2 (W) Convergence Phase 2 start] <br />
'''Fri'''<br />
* 09:00 - 12:30: [https://etherpad.openstack.org/p/mitaka-heat-summit-meetup Contributor Meetup]<br />
<br />
== Horizon ==<br />
[https://etherpad.openstack.org/p/horizon-mitaka-summit Planning etherpad]<br />
* Wed October 28<br />
** 16:40-17:20 [https://etherpad.openstack.org/p/mitaka-horizon-plugins Plugins]<br />
** 17:30-18:10 [https://etherpad.openstack.org/p/mitaka-horizon-theming Theming/UX]<br />
* Thursday October 29<br />
** 9:00-9:40 [https://etherpad.openstack.org/p/mitaka-horizon-angular Existential AngularJS]<br />
** 9:50-10:30 [https://etherpad.openstack.org/p/mitaka-horizon-ops Ops Feedback]<br />
** 11:00-11:40 [https://etherpad.openstack.org/p/mitaka-horizon-angular-progress AngularJS Plan]<br />
** 11:50-12:30 [https://etherpad.openstack.org/p/mitaka-horizon-async async]<br />
** 13:50-14:30 [https://etherpad.openstack.org/p/mitaka-horizon-scale Scale]<br />
** 14:40-15:20 [https://etherpad.openstack.org/p/mitaka-horizon-identity Identity]<br />
** 15:30-16:10 [https://etherpad.openstack.org/p/mitaka-horizon-priorities Priorities]<br />
* Friday October 30<br />
** 9:00-12:30 & 14:00-17:30 [https://etherpad.openstack.org/p/mitaka-horizon-meetup Contributors Meetup]<br />
<br />
== Infrastructure ==<br />
http://mitakadesignsummit.sched.org/type/Infrastructure<br />
<br />
'''Wednesday:'''<br />
* '''Work Session: Masterless Puppet part I''', Ho-O Room, 11:15am-11:55am (02:15-02:55 UTC)<br />
** https://etherpad.openstack.org/p/mitaka-infra-masterlesspuppet<br />
* '''Work Session: Masterless Puppet part II''', Ho-O Room, 12:05pm-12:45pm (03:05-03:45 UTC)<br />
** https://etherpad.openstack.org/p/mitaka-infra-masterlesspuppet<br />
* '''Extending Nodepool With Plug-Ins''', Suzuran Room, 5:30pm-6:10pm (08:30-09:10 UTC)<br />
** https://etherpad.openstack.org/p/mitaka-infra-nodepoolplugins<br />
<br />
'''Thursday:'''<br />
* '''Work Session: Gerrit Planning and Development''', Kinkei Room, 9:00am-9:40am (00:00-00:40 UTC)<br />
** https://etherpad.openstack.org/p/mitaka-infra-gerritdevelopment<br />
* '''Work Session: Nodepool Image Workers''', Kinkei Room, 9:50am-10:30am (00:50-01:30 UTC)<br />
** https://etherpad.openstack.org/p/mitaka-infra-nodepoolimageworkers<br />
* '''Scaling New Project Creation''', Suzuran Room, 11:00am-11:40am (02:00-02:40 UTC)<br />
** https://etherpad.openstack.org/p/mitaka-infra-scalingnewprojectcreation<br />
* '''Task Tracking: Mitaka Edition''', Suzuran Room, 11:50am-12:30pm (02:50-03:30 UTC)<br />
** https://etherpad.openstack.org/p/mitaka-infra-tasktracking<br />
<br />
'''Friday:'''<br />
* '''Ironic/Infrastructure contributors meetup''', Jako Room, 9:00am-12:30pm (00:00-03:30 UTC)<br />
** https://etherpad.openstack.org/p/summit-mitaka-ironic-contributors-meetup<br />
* '''Infra/QA/Release management contributors meetup''', Kusunoki Room, 2:00pm-5:30pm (05:00-08:30 UTC)<br />
** https://etherpad.openstack.org/p/summit-mitaka-qa-contributors-meetup<br />
<br />
== Ironic ==<br />
* The everything etherpad: https://etherpad.openstack.org/p/summit-mitaka-ironic<br />
* Wednesday fishbowl DS3 2:00-2:40 https://etherpad.openstack.org/p/summit-mitaka-ironic-third-party-ci<br />
* Wednesday fishbowl DS3 2:50-3:30 https://etherpad.openstack.org/p/summit-mitaka-ironic-group-management<br />
* Thursday workroom DS10 9:00-9:40 https://etherpad.openstack.org/p/summit-mitaka-ironic-notifications-bus<br />
* Thursday workroom DS10 9:50-10:30 https://etherpad.openstack.org/p/summit-mitaka-ironic-driver-composition<br />
* Thursday fishbowl DS3 11:00-11:40 https://etherpad.openstack.org/p/summit-mitaka-ironic-driver-api<br />
* Thursday fishbowl DS3 11:50-12:30 https://etherpad.openstack.org/p/summit-mitaka-ironic-nova-driver (joint session with Nova)<br />
* Thursday workroom DS11 4:30-5:10 https://etherpad.openstack.org/p/summit-mitaka-ironic-lock-manager<br />
* Thursday workroom DS11 5:20-6:00 https://etherpad.openstack.org/p/summit-mitaka-ironic-gate-improvements<br />
* Friday workroom DS15 9:00-12:30 https://etherpad.openstack.org/p/summit-mitaka-ironic-contributors-meetup (shared space with Infra)<br />
<br />
== Kolla ==<br />
<br />
[http://etherpad.openstack.org/kolla-mitaka-all-sessions All Kolla Design Summit Sessions]<br />
<br />
Wednesday:<br />
* 11:15 - 11:55: [http://etherpad.openstack.org/kolla-mitaka-documentation (W) Documentation]<br />
* 12:05 - 12:45: [http://etherpad.openstack.org/kolla-mitaka-diagnostics (W) Diagnostics]<br />
* 14:00 - 14:40: [http://etherpad.openstack.org/kolla-mitaka-bare-metal-deployment (W) Bare Metal Deployment]<br />
* 15:40 - 16:20: [http://etherpad.openstack.org/kolla-mitaka-roadmap (F) Mitaka Roadmap]<br />
* 16:40 - 17:20: [http://etherpad.openstack.org/kolla-mitaka-operator-requirements-gathering (F) Mitaka Operator Requirements Gathering]<br />
* 17:40 - 18:20: [http://etherpad.openstack.org/kolla-mitaka-upgrade (F) Integrating Kolla Containers with Third Party Projects]<br />
<br />
Thursday<br />
* 14:40 - 15:20: [http://etherpad.openstack.org/kolla-mitaka-gating (W) Gating Commits]<br />
* 15:30 - 16:10: [http://etherpad.openstack.org/kolla-mitaka-upgrade (W) Upgrading from Liberty to Mitaka]<br />
<br />
== Keystone == <br />
* Wed 2:50 - 3:30: tokens and tokenless auth https://etherpad.openstack.org/p/keystone-mitaka-summit-tokens<br />
* Wed 3:40 - 4:20: hierarchical multitenancy https://etherpad.openstack.org/p/keystone-mitaka-summit-multitenancy<br />
* Wed 4:40 - 5:20: policy https://etherpad.openstack.org/p/keystone-mitaka-summit-policy<br />
* Thu 9:00 - 9:40: deprecations https://etherpad.openstack.org/p/keystone-mitaka-summit-deprecations<br />
* Thu 9:50 - 10:30: federation https://etherpad.openstack.org/p/keystone-mitaka-summit-federation<br />
* Thu 11:00 - 11:40: keystone server https://etherpad.openstack.org/p/keystone-mitaka-summit-server (workshop)<br />
* Thu 11:50 - 12:30: testing https://etherpad.openstack.org/p/keystone-mitaka-summit-testing (workshop)<br />
* Thu 1:50 - 2:30: oslo and doc https://etherpad.openstack.org/p/keystone-mitaka-summit-oslo-and-docs (workshop)<br />
* Thu 4:30 - 5:10: keystone libraries https://etherpad.openstack.org/p/keystone-mitaka-summit-libraries<br />
* Thu 5:20 - 6:00: more cross-project https://etherpad.openstack.org/p/keystone-mitaka-summit-x-project<br />
<br />
== Neutron ==<br />
<br />
* Wed 11:15 - 11:55: Completing the Liberty backlog https://etherpad.openstack.org/p/mitaka-neutron-core-liberty-backlog<br />
* Wed 12:05 - 12:45: Cross Project integration: tempest and 3rd party validation https://etherpad.openstack.org/p/mitaka-neutron-core-cross-project-integration<br />
<br />
* Wed 15:40 - 16:20: Cross Project integration: devstack, nova, heat, ... https://etherpad.openstack.org/p/mitaka-neutron-core-cross-project-integration<br />
* Wed 16:40 - 17:20: API and Server extensibility mechanisms https://etherpad.openstack.org/p/mitaka-neutron-core-extensibility<br />
* Wed 17:30 - 18:10: Plugin and Agent extensibility mechanisms https://etherpad.openstack.org/p/mitaka-neutron-core-extensibility<br />
<br />
* Thu 11:00 - 11:40: LBaaS/Octavia/FWaaS https://etherpad.openstack.org/p/mitaka-neutron-next-adv-services<br />
* Thu 11:50 - 12:30: LBaaS/Octavia/FWaaS https://etherpad.openstack.org/p/mitaka-neutron-next-adv-services<br />
<br />
* Thu 13:50 - 14:30: Scalability, operability and reliability pain points https://etherpad.openstack.org/p/mitaka-neutron-next-ops-painpoints<br />
* Thu 14:40 - 15:20: Extending the existing networking logical model and protocols support https://etherpad.openstack.org/p/mitaka-neutron-next-network-model<br />
* Thu 15:30 - 16:10: Lightning talks https://etherpad.openstack.org/p/mitaka-neutron-labs-lighting-talks<br />
* Thu 16:30 - 17:10: NFV foundation elements https://etherpad.openstack.org/p/mitaka-neutron-labs-nfv-foundation<br />
* Thu 17:20 - 18:00: Integration between orchestration platforms and Neutron https://etherpad.openstack.org/p/mitaka-neutron-labs-orchestration<br />
<br />
* Fri 09:00 - 12:30: Neutron contributors meetup https://etherpad.openstack.org/p/mitaka-neutron-unplugged-track<br />
* 14:00 - 17:30: Neutron contributors meetup https://etherpad.openstack.org/p/mitaka-neutron-unplugged-track<br />
<br />
== Nova ==<br />
<br />
* Wed 11:15: REST API https://etherpad.openstack.org/p/mitaka-nova-api<br />
* Wed 12:05: Upgrade https://etherpad.openstack.org/p/mitaka-nova-upgrade<br />
<br />
* Wed 14:00: Unconference https://etherpad.openstack.org/p/mitaka-nova-unconference<br />
* Wed 14:50: OS VIF lib https://etherpad.openstack.org/p/mitaka-nova-os-vif-lib<br />
* Wed 15:40: Resources and Flavors https://etherpad.openstack.org/p/mitaka-nova-resource-modeling<br />
* Wed 16:40: Resources and Flavors (continued) https://etherpad.openstack.org/p/mitaka-nova-resource-modeling<br />
* Wed 17:30: SR-IOV https://etherpad.openstack.org/p/mitaka-nova-sr-iov<br />
<br />
* Thurs 09:00: Cells v2 https://etherpad.openstack.org/p/mitaka-nova-cells<br />
* Thurs 9:50: see Cinder track<br />
* Thurs 11:00: Scheduler https://etherpad.openstack.org/p/mitaka-nova-scheduler<br />
* Thurs 11:50: see Ironic track<br />
<br />
* Thurs 13:50: Unconference https://etherpad.openstack.org/p/mitaka-nova-unconference<br />
* Thurs 14:40: Error handling https://etherpad.openstack.org/p/mitaka-nova-error-handling<br />
* Thurs 15:30: Cross Service issues: Server locking, token refresh, Instance users https://etherpad.openstack.org/p/mitaka-nova-service-users<br />
* Thurs 16:30: Mitaka Priorities https://etherpad.openstack.org/p/mitaka-nova-priorities<br />
* Thurs 17:20: Unconference https://etherpad.openstack.org/p/mitaka-nova-unconference<br />
<br />
* Fri: 09:00 and 14:00: Nova contributors meetup https://etherpad.openstack.org/p/mitaka-nova-summit-meetup<br />
<br />
== Manila ==<br />
* Wed 11:15 - 11:55: (WS) Migration Improvements https://etherpad.openstack.org/p/mitaka-manila-migration-improvements<br />
* Wed 12:05 - 12:45: (WS) Access Allow/Deny Driver Interface https://etherpad.openstack.org/p/mitaka-manila-allow-deny<br />
* Thu 11:00 - 11:40: (FB) Share Replication https://etherpad.openstack.org/p/mitaka-manila-replication<br />
* Thu 11:50 - 12:30: (FB) Alternative Snapshot Semantics https://etherpad.openstack.org/p/mitaka-manila-snapshot-semantics<br />
* Thu 14:40 - 15:20: (WS) Export Location Metadata https://etherpad.openstack.org/p/mitaka-manila-export-location-metadata<br />
* Thu 15:30 - 16:10: (WS) Interactions Between New Features https://etherpad.openstack.org/p/mitaka-manila-feature-interactions<br />
* Fri 09:00 - 12:30: (CM) Contributor Meetup https://etherpad.openstack.org/p/mitaka-manila-contributor-meetup<br />
<br />
==Murano==<br />
'''Wed'''<br />
<br />
* 11:15am (W) [https://etherpad.openstack.org/p/murano-mitaka-work-session-1 Multi-Region in Murano]<br />
* 12:05pm (W) [https://etherpad.openstack.org/p/murano-mitaka-work-session-2 Actions]<br />
<br />
'''Fri'''<br />
<br />
* 9am - 5:30pm [https://etherpad.openstack.org/p/murano-mitaka-contributors-meetup Contributors Meetup]<br />
<br />
== OpenStack-Ansible ==<br />
* [https://etherpad.openstack.org/p/openstack-ansible-mitaka-summit Summary View]<br />
* Wed 14:50-15:30 : [https://etherpad.openstack.org/p/openstack-ansible-mitaka-image-based-deployment Image-based deployments]<br />
* Wed 15:40-16:20 : [https://etherpad.openstack.org/p/openstack-ansible-mitaka-upgrades Production-ready Upgrades]<br />
* Wed 17:30-18:10 : [https://etherpad.openstack.org/p/openstack-ansible-mitaka-inventory Dynamic Inventory Refactor]<br />
* Fri 14:00-17:00 : [https://etherpad.openstack.org/p/openstack-ansible-mitaka-meetup Contributor's Day]<br />
<br />
== OpenStack Chef ==<br />
* [https://etherpad.openstack.org/p/mitaka-openstack-chef-general general discussion]<br />
* Thurs 13:00-15:00 : [https://etherpad.openstack.org/p/mitaka-openstack-chef-refactoring defining the refactoring process]<br />
<br />
== OpenStackClient ==<br />
* Wed 16:40 - 17:20: [https://etherpad.openstack.org/p/tokyo-osc-session Near-term Roadmap]<br />
* Fri 09:00 - 12:30: [https://etherpad.openstack.org/p/tokyo-osc-meetup Meetup]<br />
<br />
== Ops ==<br />
Use https://etherpad.openstack.org/p/TYO-ops-meetup in the meantime.<br />
<br />
<br />
== Oslo ==<br />
<br />
* Wed 16:40: Work session: Tasks task tasks tisk-a-task [https://etherpad.openstack.org/p/mitaka-oslo-taskflow]<br />
* Wed 17:30: Work session: Review recommendations from Security and Logging WG [https://etherpad.openstack.org/p/mitaka-oslo-security-logging]<br />
* Thu 09:50: Mitaka and beyond - New libraries, drivers in Oslo [https://etherpad.openstack.org/p/mitaka-oslo-new-stuff]<br />
* Thu 13:50: Work session: Better Developer Documentation [https://etherpad.openstack.org/p/mitaka-oslo-better-documentation]<br />
* Thu 14:40: Work session: Strategy, CI, Functional testing, Releases, etc. [https://etherpad.openstack.org/p/mitaka-oslo-strategy-ci-functional]<br />
* Thu 15:30: Work session: oslo.messaging HA, performance, future plans [https://etherpad.openstack.org/p/mitaka-oslo-mesaging-ha-performance]<br />
* Thu 16:30: Oslo: Plans/updates to existing libraries [https://etherpad.openstack.org/p/mitaka-oslo-library-updates]<br />
* Thu 17:20: Oslo: State of oslo.messaging Drivers [https://etherpad.openstack.org/p/mitaka-oslo-messaging-zmq-pika-kafka]<br />
<br />
== Puppet OpenStack ==<br />
* General etherpad: https://etherpad.openstack.org/p/HND-puppet<br />
* Wed 2:00 pm: Code design session: https://etherpad.openstack.org/p/HND-puppet-code<br />
* Wed 2:50 pm: Code design session: https://etherpad.openstack.org/p/HND-puppet-code<br />
* Thu 1:50 pm: Community feedback: https://etherpad.openstack.org/p/HND-puppet-community<br />
* Thu 2:40 pm: CI and documentation: https://etherpad.openstack.org/p/HND-puppet-ci and https://etherpad.openstack.org/p/HND-puppet-doc<br />
* Thu 4:30 pm: Code design session: https://etherpad.openstack.org/p/HND-puppet-code<br />
<br />
== QA ==<br />
* Wed 14:50-15:30: [https://etherpad.openstack.org/p/mitaka-qa-openstack-health OpenStack Health Dashboard Next Steps]<br />
* Wed 15:40-16:20: [https://etherpad.openstack.org/p/mitaka-qa-tempest-microversions Tempest Microversion Support and Testing]<br />
* Wed 16:40-17:20: [https://etherpad.openstack.org/p/mitaka-qa-testr-datastore-layering Testr datastore layering and architecture cleanup]<br />
* Wed 17:30-18:10: [https://etherpad.openstack.org/p/mitaka-qa-tempest-run-cli Tempest command line runner options/enhancements]<br />
* Thurs 09:00-09:40: [https://etherpad.openstack.org/p/mitaka-qa-tempest-resource-config Tempest Existing Resource Configuration (aka resources.yaml)]<br />
* Thurs 09:50-10:30: [https://etherpad.openstack.org/p/mitaka-qa-tempest-lib-service-clients Tempest-lib expansion and service client plugins]<br />
* Thurs 16:30-17:10: [https://etherpad.openstack.org/p/mitaka-qa-devstack-roadmap Devstack/Grenade in Mitaka]<br />
* Thurs 17:20-18:00: [https://etherpad.openstack.org/p/mitaka-qa-priorities Mitaka QA Priorities]<br />
<br />
== Release management ==<br />
* Thu 15:30: Mitaka process changes [https://etherpad.openstack.org/p/mitaka-relmgt-process-changes]<br />
* Thu 16:30: Work session: the Mitaka plan [https://etherpad.openstack.org/p/mitaka-relmgt-plan]<br />
<br />
== Searchlight ==<br />
* Thu 4:30: (Fishbowl) Prioritizing Search Integrations and Capabilities https://etherpad.openstack.org/p/searchlight-mitaka-summit-priorities-integrations<br />
* Thu 5:20: Cross Region Searching https://etherpad.openstack.org/p/searchlight-mitaka-summit-multi-region<br />
<br />
== Sahara ==<br />
* Thu 9:00: (Fishbowl) UX improvements http://etherpad.openstack.org/p/sahara-mitaka-ux<br />
* Thu 9:50: (Fishbowl) Future plugins and EDP jobs https://etherpad.openstack.org/p/sahara-mitaka-future-plugins-edp<br />
* Thu 11:00: Security https://etherpad.openstack.org/p/sahara-mitaka-security<br />
* Thu 11:50: UI tech http://etherpad.openstack.org/p/sahara-mitaka-ui<br />
* Thu 13:50: Image generation http://etherpad.openstack.org/p/sahara-mitaka-images<br />
* Thu 14:40: Deprecation policies and plugins decoupling https://etherpad.openstack.org/p/sahara-mitaka-deprecation-policies<br />
* Thu 15:30: Tests http://etherpad.openstack.org/p/sahara-mitaka-tests<br />
* Fri 14:00-17:30: Contributors Meetup https://etherpad.openstack.org/p/sahara-mitaka-meetup<br />
<br />
== Swift ==<br />
* Wed 11:15am - 12:54pm: Work session 1:<br />
** Production Keymaster: https://etherpad.openstack.org/p/swift_production_keymaster_issues<br />
** Outstanding encryption issues: https://etherpad.openstack.org/p/swift_encryption_issues<br />
<br />
* Wed 2:00pm - 4:20pm: Work session 2:<br />
** container sync: https://etherpad.openstack.org/p/tokyo-swift-container-sync<br />
** hummingbird status and unifying the sync protocol: https://etherpad.openstack.org/p/tokyo-swift-hummingbird<br />
** global clusters: https://etherpad.openstack.org/p/tokyo-swift-global-clusters<br />
<br />
* Wed 4:40pm - 5:20pm: Ops Feedback Session:<br />
** https://etherpad.openstack.org/p/tokyo-swift-ops-feedback<br />
<br />
* Wed 5:30pm - 6:10pm: Inbound cross-project issues:<br />
** https://etherpad.openstack.org/p/tokyo-swift-cross-project<br />
<br />
* Thurs 11:00am - 12:30pm: Work session 3:<br />
** Keystone session in swiftclient: https://etherpad.openstack.org/p/keystone-auth-session<br />
** swiftclient docs: https://etherpad.openstack.org/p/swiftclient-docs<br />
** Other issues: https://etherpad.openstack.org/p/tokyo-swiftclient-other<br />
<br />
* Thurs 1:50pm - 4:10pm: Work session 4:<br />
** rings (data placement):<br />
** EC topics:<br />
** symlinks: https://etherpad.openstack.org/p/swift_symlinks<br />
<br />
* Thurs 4:30pm - 6:00pm: Work session 5:<br />
** container sharding: https://etherpad.openstack.org/p/tokyo-swift-container-sharding<br />
** fast-POST: https://etherpad.openstack.org/p/tokyo-swift-fast-post<br />
<br />
* Fri all day: Swift contributors meetup:<br />
** https://etherpad.openstack.org/p/tokyo-swift-contributors-meetup<br />
<br />
== TripleO ==<br />
* Wed 4:40pm - 5:20pm: (Fishbowl) Container Integration https://etherpad.openstack.org/p/tripleo-mitaka-containers<br />
* Wed 5:30pm - 6:10pm: (Fishbowl) Upgrades https://etherpad.openstack.org/p/tripleo-mitaka-upgrades<br />
* Thu 5:20pm - 6:00pm: (Workroom) tripleo-common, REST API https://etherpad.openstack.org/p/tripleo-mitaka-restapi<br />
* Fri 9:00am - 12:30pm: (meetup) https://etherpad.openstack.org/p/tripleo-mitaka-meetup<br />
* Fri 2:00pm - 5:30pm: (meetup) https://etherpad.openstack.org/p/tripleo-mitaka-meetup<br />
<br />
== Trove ==<br />
* Wednesday, 2015-10-28<br />
** 15:40 - [https://etherpad.openstack.org/p/trove-mitaka-multiple-storage-options multiple storage options]<br />
** 16:40 - [https://etherpad.openstack.org/p/trove-mitaka-managing-trove-upgrades managing trove upgrades]<br />
<br />
* Thursday, 2015-10-29<br />
** 11:00 - [https://etherpad.openstack.org/p/trove-mitaka-user-op-session User Op Session]<br />
** 11:50 - [https://etherpad.openstack.org/p/trove-mitaka-toggle-instance-status toggle instance status]<br />
** 13:50 - [https://etherpad.openstack.org/p/trove-mitaka-distribution-agnostic distribution agnostic]<br />
** 14:40 - [https://etherpad.openstack.org/p/trove-mitaka-building-guest-images building guest images]<br />
** 15:30 - [https://etherpad.openstack.org/p/mitaka-nova-service-users Nova Cross Project issues]<br />
<br />
* Friday, 2015-10-30<br />
** 14:00 - [https://etherpad.openstack.org/p/trove-mitaka-contributors-meetiup contributors meetup]<br />
<br />
== Watcher ==<br />
* Tuesday, 2015-10-27<br />
** 10:45-11:45 - [https://etherpad.openstack.org/p/watcher--mitaka-contributors-meetup contributors meetup]<br />
<br />
== Zaqar ==<br />
* Wed 15:40-16:20 (W) https://etherpad.openstack.org/p/mitaka-zaqar-sahara<br />
* Wed 17:30-18:10 (F) https://etherpad.openstack.org/p/mitaka-zaqar-on-horizon-and-misc<br />
* Thu 09:00-09:40 (W) https://etherpad.openstack.org/p/mitaka-zaqar-client<br />
* Thu 13:50-14:30 (W) https://etherpad.openstack.org/p/mitaka-zaqar-realtime-horizon</div>Travis Tripphttps://wiki.openstack.org/w/index.php?title=Meetings/HorizonDrivers&diff=92576Meetings/HorizonDrivers2015-10-14T20:26:42Z<p>Travis Tripp: /* Agenda for October 14 2000 UTC */</p>
<hr />
<div>The [[OpenStack]] [[Horizon]] Drivers Team holds public meetings in #openstack-meeting-3 at alternating times. <br />
<br />
* Wednesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=20&min=0&sec=0 2000 UTC]<br />
* Wednesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?hour=12&min=0&sec=0 1200 UTC]<br />
<br />
The focus of this meeting is reviewing blueprints, bugs and priorities. Everyone is encouraged to attend. <br />
<br />
Upcoming meeting schedule:<br />
* October 14 2000 UTC<br />
* October 21 1200 UTC<br />
<br />
== Agenda for October 14 2000 UTC ==<br />
<br />
* Possible pre-summit discussion? - Feature branches (robcresswell)<br />
* Review Angular blueprints (robcresswell)<br />
** https://blueprints.launchpad.net/horizon/+spec/angularize-instances-table<br />
** https://blueprints.launchpad.net/horizon/+spec/angularize-images-table<br />
** https://blueprints.launchpad.net/horizon/+spec/angularize-identity-projects<br />
** https://blueprints.launchpad.net/horizon/+spec/cinder-extensions.service<br />
** https://blueprints.launchpad.net/horizon/+spec/searchlight-search-panel<br />
** https://blueprints.launchpad.net/horizon/+spec/ng-flavors<br />
** https://blueprints.launchpad.net/horizon/+spec/ironic-horizon-panel<br />
** https://blueprints.launchpad.net/horizon/+spec/horizon-angular-mocks<br />
** https://blueprints.launchpad.net/horizon/+spec/horizon-rest-api-mock<br />
<br />
== Agenda for September 23 1200 UTC ==<br />
<br />
* Revisit https://blueprints.launchpad.net/horizon/+spec/horizon-glance-large-image-upload (tsufiev)<br />
* https://blueprints.launchpad.net/horizon/+spec/horizon-large-ldap-users-browsing (tsufiev)<br />
<br />
== Agenda for September 16 2000 UTC ==<br />
<br />
Liberty RC1 bugs and blueprints<br />
* https://launchpad.net/horizon/+milestone/liberty-rc1<br />
<br />
Nova server groups support (bpokorny)<br />
* https://blueprints.launchpad.net/horizon/+spec/nova-server-groups<br />
<br />
== Agenda for September 9 1200 UTC ==<br />
<br />
== Agenda for September 2 2000 UTC ==<br />
Review Angular blueprints (robcresswell) (Continuation of last week)<br />
<br />
* https://blueprints.launchpad.net/horizon/+spec/horizon-angular-mocks<br />
* https://blueprints.launchpad.net/horizon/+spec/horizon-rest-api-mock<br />
* https://blueprints.launchpad.net/horizon/+spec/ifenabled-use-options-object<br />
* https://blueprints.launchpad.net/horizon/+spec/ng-defaults-panel<br />
* https://blueprints.launchpad.net/horizon/+spec/ng-image<br />
* https://blueprints.launchpad.net/horizon/+spec/update-jasmine<br />
* https://blueprints.launchpad.net/horizon/+spec/ng-flavors<br />
* https://blueprints.launchpad.net/horizon/+spec/ng-system-information<br />
* https://blueprints.launchpad.net/horizon/+spec/angular-docs<br />
* https://blueprints.launchpad.net/horizon/+spec/transfer-table-clone-feature<br />
<br />
<br />
Review other blueprints:<br />
* https://blueprints.launchpad.net/horizon/+spec/horizon-glance-large-image-upload<br />
<br />
== Agenda for August 26 1200 UTC ==<br />
Review Angular blueprints (robcresswell) (Continuation of last week)<br />
<br />
Note from TravT: Many, if not all, of the blueprints below involve people who likely can't make the meeting time this week (5 AM PT). I'd request that most of them be deferred until the following week and that you instead cover blueprints that could be made obsolete.<br />
<br />
(robcresswell) It was agreed to defer these until the following meeting.<br />
<br />
* https://blueprints.launchpad.net/horizon/+spec/horizon-angular-mocks<br />
* https://blueprints.launchpad.net/horizon/+spec/horizon-rest-api-mock<br />
* https://blueprints.launchpad.net/horizon/+spec/ifenabled-use-options-object<br />
* https://blueprints.launchpad.net/horizon/+spec/ng-defaults-panel<br />
* https://blueprints.launchpad.net/horizon/+spec/ng-image<br />
* https://blueprints.launchpad.net/horizon/+spec/update-jasmine<br />
* https://blueprints.launchpad.net/horizon/+spec/ng-flavors<br />
* https://blueprints.launchpad.net/horizon/+spec/ng-system-information<br />
* https://blueprints.launchpad.net/horizon/+spec/angular-docs<br />
* https://blueprints.launchpad.net/horizon/+spec/transfer-table-clone-feature<br />
<br />
<br />
Review other blueprints<br />
* https://blueprints.launchpad.net/horizon/+spec/integration-tests-hardening<br />
<br />
== Agenda for August 19 2000 UTC ==<br />
Review Angular blueprints (robcresswell)<br />
* https://blueprints.launchpad.net/horizon/+spec/angular-workflow-plugin<br />
* https://blueprints.launchpad.net/horizon/+spec/angularize-identity-projects<br />
* https://blueprints.launchpad.net/horizon/+spec/horizon-angular-mocks<br />
* https://blueprints.launchpad.net/horizon/+spec/horizon-rest-api-mock<br />
* https://blueprints.launchpad.net/horizon/+spec/ifenabled-use-options-object<br />
* https://blueprints.launchpad.net/horizon/+spec/ng-defaults-panel<br />
* https://blueprints.launchpad.net/horizon/+spec/ng-image<br />
* https://blueprints.launchpad.net/horizon/+spec/update-jasmine<br />
* https://blueprints.launchpad.net/horizon/+spec/ng-flavors<br />
* https://blueprints.launchpad.net/horizon/+spec/ng-system-information<br />
* https://blueprints.launchpad.net/horizon/+spec/angular-docs<br />
* <strike>https://blueprints.launchpad.net/horizon/+spec/jscs-cleanup</strike><br />
* <strike>https://blueprints.launchpad.net/horizon/+spec/babel-translate-inner-tags</strike><br />
* https://blueprints.launchpad.net/horizon/+spec/transfer-table-clone-feature<br />
<br />
== Agenda for August 12 1200 UTC ==<br />
* Explain goals and reasons for this meeting (david-lyle)</div>Travis Tripphttps://wiki.openstack.org/w/index.php?title=ReleaseNotes/Liberty&diff=91921ReleaseNotes/Liberty2015-10-07T15:42:16Z<p>Travis Tripp: /* OpenStack Search (Searchlight) */</p>
<hr />
<div><languages /><br />
<translate><br />
<br />
[[Category:Liberty|Release Note]]<br />
[[Category:Release Note|Liberty]]<br />
<br />
= OpenStack Liberty Release Notes =<br />
<br />
<div style="column-count:3;-moz-column-count:3;-webkit-column-count:3"><br />
__TOC__<br />
</div><br />
<br />
== OpenStack Networking (Neutron) ==<br />
<br />
=== New Features ===<br />
* Neutron now supports IPv6 Prefix Delegation for the automatic assignment of CIDRs to IPv6 subnets. For more information on the usage and configuration of this feature, see the [http://docs.openstack.org/networking-guide/adv_config_ipv6.html#prefix-delegation OpenStack Networking Guide].<br />
* Neutron now exposes a QoS API, initially offering bandwidth limitation on the port level. The API, CLI, configuration and additional information may be found here [http://docs.openstack.org/developer/neutron/devref/quality_of_service.html].<br />
* Router high availability (L3 HA / VRRP) now works when layer 2 population (l2pop) is enabled [https://bugs.launchpad.net/neutron/+bug/1365476].<br />
* VPNaaS reference drivers now work with HA routers.<br />
* Networks used for VRRP traffic for HA routers may now be configured to use a specific segmentation type or physical network tag [https://bugs.launchpad.net/neutron/+bug/1481443].<br />
* The OVS agent may now be restarted without affecting data plane connectivity.<br />
* Neutron now offers role-based access control for networks [http://specs.openstack.org/openstack/neutron-specs/specs/liberty/rbac-networks.html].<br />
* LBaaS V2 reference driver is now based on Octavia, an operator-grade, scalable, and reliable load balancer platform.<br />
* LBaaS V2 API is no longer experimental. It is now stable.<br />
* Neutron now provides a way for admins to manually schedule agents, allowing host resources to be tested before they are enabled for tenant use [https://github.com/openstack/neutron-specs/blob/master/specs/liberty/enable-new-agents.rst#user-documentation].<br />
* Neutron now has a pluggable IP address management framework, enabling the use of alternate or third-party IPAM. The original, non-pluggable version of IPAM is enabled by default.<br />
<br />
=== Deprecated and Removed Plugins and Drivers ===<br />
* The metaplugin is removed in the Liberty release.<br />
* The IBM SDN-VE monolithic plugin is removed in the Liberty release.<br />
* The Cisco N1kV monolithic plugin is removed in the Liberty release (replaced by the ML2 mechanism driver).<br />
<br />
=== Deprecated Features ===<br />
* The FWaaS API is marked as experimental for Liberty. Further, the current API will be removed in Mitaka and replaced with a new FWaaS API which the team is in the process of developing.<br />
* The LBaaS V1 API is marked as deprecated and is planned to be removed in some future release. Going forward the LBaaS V2 API should be used.<br />
<br />
=== Performance Considerations ===<br />
* The stock Trusty Tahr kernel (3.13) shows linear performance degradation when running "ip netns exec" as the number of namespaces increases. In cases where scale is important, a later version kernel (e.g. 3.19) should be used.<br />
* Creating Neutron-LBaaS load balancers in environments without hardware virtualization may be slow when using the Octavia driver. This is due to QEMU using the TCG accelerator versus the KVM accelerator in environments without hardware virtualization available. We recommend enabling hardware virtualization on your compute nodes or enabling nested virtualization when using the Octavia driver inside a virtual environment. See the following link for details on setting up nested virtualization for DevStack running inside KVM: http://docs.openstack.org/developer/devstack/guides/devstack-with-nested-kvm.html<br />
<br />
== OpenStack Compute (Nova) ==<br />
<br />
=== Upgrade Notes ===<br />
* If you are coming from Kilo stable, please make sure you have fully upgraded to the latest release of that lineage before deploying Liberty. Due to bug https://bugs.launchpad.net/nova/+bug/1474074 versions of Kilo from before the fix will be problematic when talking to Liberty nodes.<br />
* Allocation ratios for RAM and CPU are now defined within the nova-compute service (so per compute node), and therefore also need to be provided to the scheduler service. The allocation ratios behave differently depending on whether a compute node is running Kilo or Liberty: ''if the compute node is running Kilo'', the CPU and RAM allocation ratios for that compute node are the defaults from the controller's nova.conf file; ''if the compute node is running Liberty'', you can set a per-compute allocation ratio for both CPU and RAM. To let the operator provide the allocation ratios to all compute nodes, the default allocation ratio in nova.conf is set to 0.0 (even for the controller). That doesn't mean allocation ratios will actually be 0.0, just that the operator needs to provide them '''before the next release (i.e. Mitaka)'''. To be clear, the effective default allocation ratios are still 16.0 for cpu_allocation_ratio and 1.5 for ram_allocation_ratio.<br />
* nova-compute should be upgraded to Liberty code before upgrading Neutron services per the new "network-vif-deleted" event: https://review.openstack.org/#/c/187871/<br />
* Rootwrap filters must be updated after release to add the touch command.<br />
** There is a race condition between imagebackend and imagecache mentioned in the Launchpad Bug [https://bugs.launchpad.net/nova/+bug/1256838 1256838]. <br />
** In this case, if the base image is deleted by ImageCacheManager while imagebackend is copying the image to the instance path, the instance goes into an error state.<br />
** To resolve this issue, the 'touch' command needs to be added to compute.filters along with the change https://review.openstack.org/#/c/217579/.<br />
** If a race condition occurs where libvirt has changed the base file ownership to libvirt-qemu while imagebackend is copying the image, a Permission denied error is raised when updating the file access time using os.utime. To resolve this, the base file access time must be updated with root privileges using the touch command.<br />
* The ''DiskFilter'' is now part of the ''scheduler_default_filters'' in Liberty per https://review.openstack.org/#/c/207942/ .<br />
* Per https://review.openstack.org/#/c/103916/ you can now only map one vCenter cluster to a single nova-compute node.<br />
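The allocation-ratio upgrade note above can be illustrated with a minimal nova.conf sketch (option names and default values are those stated in the note; set explicit values on each Liberty compute node before Mitaka):<br />

```ini
# nova.conf on a Liberty compute node (sketch).
# In Liberty, 0.0 means "not set by the operator"; the effective
# defaults remain 16.0 (CPU) and 1.5 (RAM) until explicit values
# are provided, as described in the upgrade note above.
[DEFAULT]
cpu_allocation_ratio = 16.0
ram_allocation_ratio = 1.5
```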
<br />
=== Deprecations ===<br />
* The novaclient.v1_1 module has been deprecated [https://git.openstack.org/cgit/openstack/python-novaclient/commit/?id=61ef35fe79e2a3a76987a92f9ee2db0bf1f6e651][https://git.openstack.org/cgit/openstack/python-novaclient/commit/?id=0a60aae852d2688861d0b4ba097a1a00529f0611] since 2.21.0 and we are going to remove it in the first python-novaclient release in Mitaka.<br />
* Method `novaclient.client.get_client_class` is deprecated [https://git.openstack.org/cgit/openstack/python-novaclient/commit/?id=147a1a6ee421f9a45a562f013e233d29d43258e4] since 2.29.0 and we are going to remove it in Mitaka.<br />
<br />
== OpenStack Telemetry (Ceilometer) ==<br />
<br />
=== Key New Features ===<br />
* creation of Aodh to handle alarming service<br />
* improved metadata caching - reduced load of nova api polling<br />
* declarative meters - ability to generate meters by defining meter definition template.<br />
* ceilometer+gnocchi integration - support for data publishing from Ceilometer to Gnocchi<br />
* mandatory limit - limit-restricted querying is now enforced: a limit must be explicitly provided on queries, otherwise the result set is restricted to a default limit<br />
* distributed, coordinated notification agents - support for workload partitioning across multiple notification agents<br />
* Events RBAC support<br />
* PowerVM hypervisor support<br />
* improved MongoDB query support<br />
<br />
==== Gnocchi Features ====<br />
<br />
==== Aodh Features ====<br />
* event alarms - ability to trigger an action when an event is received<br />
<br />
=== Upgrade Notes ===<br />
* The name of some middleware used by ceilometer changed in a backwards-incompatible way. Before upgrading, edit the paste.ini file for ceilometer to change "oslo.middleware" to "oslo_middleware". For example using <nowiki>sed -ri 's/oslo\.middleware/oslo_middleware/' api_paste.ini</nowiki><br />
<br />
=== Deprecation ===<br />
* Ceilometer Alarms is deprecated in favour of Aodh<br />
* The RPC publisher and collector are deprecated in favour of the topic-based notifier publisher<br />
* Non-metric meters remain deprecated and are still planned for removal<br />
<br />
== OpenStack Identity (Keystone) ==<br />
<br />
=== Key New Features ===<br />
<br />
* '''''Experimental''''': Store domain-specific configuration options in SQL instead of configuration files, using the new REST APIs.<br />
* '''''Experimental''''': Keystone now supports tokenless authorization with X.509 SSL client certificate.<br />
* Configuring per-Identity Provider WebSSO is now supported.<br />
* <code>openstack_user_domain</code> and <code>openstack_project_domain</code> attributes were added to SAML assertion in order to map user and project domains, respectively.<br />
* Credentials list call can now have its results filtered by credential type.<br />
* Support was improved for out-of-tree drivers by defining stable Driver Interfaces.<br />
* Several features were hardened, including Fernet tokens, Federation, Domain specific configurations from database and Role Assignments.<br />
* Certain options in keystone.conf now have choices, which determine if the user's setting is valid.<br />
<br />
=== Upgrade Notes ===<br />
<br />
* The EC2 token middleware, deprecated in Juno, is no longer available in keystone. It's been moved to the keystonemiddleware package.<br />
* The compute_port configuration option, deprecated in Juno, is no longer available.<br />
* The XML middleware stub has been removed, so references to it must be removed from the <code>keystone-paste.ini</code> configuration file.<br />
* stats_monitoring and stats_reporting paste filters have been removed, so references to it must be removed from the <code>keystone-paste.ini</code> configuration file<br />
* The external authentication plugins ExternalDefault, ExternalDomain, LegacyDefaultDomain, and LegacyDomain, deprecated in Icehouse, are all no longer available.<br />
* <code>keystone.conf</code> now references entrypoint names for drivers, as such the drivers are now specified like "sql", "ldap", "uuid", etc., rather than the full module path. See the sample configuration file for examples.<br />
* Similarly to the above, we now expose entrypoints for the <code>keystone-manage</code> command instead of a file.<br />
* Schema downgrades via <code>keystone-manage db_sync</code> are no longer supported, only upgrades are supported.<br />
* Features that were "extensions" in previous releases (OAuth delegation, Federated Identity support, Endpoint Policy, etc) are now enabled by default.<br />
* If you're running keystone behind a proxy, check out the new <code>secure_proxy_ssl_header</code> config option<br />
* Several configuration options have been deprecated, renamed, or moved to new sections. Review your <code>keystone.conf</code> file against the current sample configuration file.<br />
* Domain name information is now available to be used in policy rules with the attribute <code>domain_name</code>.<br />
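As an illustration of the entrypoint-name change above, a minimal keystone.conf sketch (section and value names are illustrative; consult the sample configuration file shipped with Liberty):<br />

```ini
# keystone.conf (sketch) — drivers are now referenced by entrypoint
# name, e.g. "sql", rather than by full module path such as
# "keystone.identity.backends.sql.Identity".
[identity]
driver = sql

# Per the deprecation notes, [resource] and [role] should specify
# their own drivers rather than falling back to the assignment driver.
[resource]
driver = sql

[role]
driver = sql
```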
<br />
=== Deprecations ===<br />
<br />
* Running Keystone in Eventlet remains deprecated and will be removed in the Mitaka release<br />
* Using LDAP as the resource backend, i.e for projects and domains, is now deprecated and will be removed in the Mitaka release<br />
* Using the full path to the driver class is deprecated in favor of using the entrypoint. In the Mitaka release, the entrypoint must be used.<br />
* In the [resource] and [role] sections of the <code>keystone.conf</code> file, not specifying the driver and using the assignment driver is deprecated. In the Mitaka release, the resource and role drivers will default to the sql driver.<br />
* In <code>keystone-paste.ini</code>, using paste.filter_factory is deprecated in favor of the "use" directive, specifying an entrypoint.<br />
* Not specifying a domain during a create user, group or project call, which relied on falling back to the default domain, is now deprecated and will be removed in the N release.<br />
* Certain deprecated methods from the assignment manager were removed in favor of the same methods in the [resource] and [role] manager.<br />
<br />
== OpenStack Block Storage (Cinder) ==<br />
<br />
=== Key New Features ===<br />
* A generic image caching solution, so popular VM images can be cached and copied-on-write to a new volume. [http://docs.openstack.org/admin-guide-cloud/blockstorage_image_volume_cache.html Read docs for more info]<br />
* Non-disruptive backups [http://docs.openstack.org/admin-guide-cloud/blockstorage_volume_backups.html Read docs for more info].<br />
* Ability to clone consistency groups of volumes [http://docs.openstack.org/admin-guide-cloud/blockstorage-consistency-groups.html Read docs for more info].<br />
* List capabilities of a volume backend (fetch extra-specs)<br />
* Nested quotas<br />
<br />
=== Upgrade Notes ===<br />
<br />
=== Deprecations ===<br />
<br />
== OpenStack Orchestration (Heat) ==<br />
<br />
=== New Features ===<br />
<br />
==== Convergence ====<br />
Convergence is a new orchestration engine which is maturing in the heat tree. In Liberty the benefits of using the convergence engine are:<br />
* Greater parallelization of resource actions (for better scaling of large templates)<br />
* The ability to do a stack-update whilst there is already an update in-progress<br />
* Better handling of heat-engine failures (still WIP)<br />
<br />
The convergence engine can be enabled by setting convergence_engine=true in the [DEFAULT] section of /etc/heat/heat.conf, then restarting heat-engine. Once this has been done, any subsequently created stack will use the convergence engine, while operations on existing stacks will continue to use the traditional engine.<br />
<br />
Convergence has '''not''' been production tested and thus should be considered '''beta''' quality - use with caution. For the Liberty release we recommend enabling convergence for the purposes of evaluation and scale testing. We will be considering making convergence the default engine in the Mitaka cycle. Convergence specific bugs are tracked in launchpad with the [https://bugs.launchpad.net/heat/+bugs?field.tag=convergence-bugs convergence-bugs tag].<br />
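The opt-in described above amounts to a one-line configuration change (sketch; the file path and option name are as stated above, and heat-engine must be restarted afterwards):<br />

```ini
# /etc/heat/heat.conf — opt in to the experimental convergence engine.
# Stacks created after restarting heat-engine will use convergence;
# pre-existing stacks keep using the traditional engine.
[DEFAULT]
convergence_engine = true
```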
<br />
==== Conditional resource exposure ====<br />
Only resources for services actually installed in the cloud are made available to users. Operators can further control which resources users may use with standard policy rules in [https://github.com/openstack/heat/blob/master/etc/heat/policy.json#L80 policy.json, on a per-resource-type basis].<br />
<br />
==== heat_template_version: 2015-10-15 ====<br />
<br />
2015-10-15 indicates that the YAML document is a HOT template and it may contain features added and/or removed up until the Liberty release. <br />
* Removes the Fn::Select function (path based [http://docs.openstack.org/developer/heat/template_guide/hot_spec.html#get-attr get_attr]/[http://docs.openstack.org/developer/heat/template_guide/hot_spec.html#get-param get_param] references should be used instead). <br />
* If no <attribute name> is specified for calls to [http://docs.openstack.org/developer/heat/template_guide/hot_spec.html#get-attr get_attr], a dict of all attributes is returned, e.g. { get_attr: [<resource name>]}. <br />
* Adds new [http://docs.openstack.org/developer/heat/template_guide/hot_spec.html#str-split str_split] intrinsic function <br />
* Adds support for passing multiple lists to the existing [http://docs.openstack.org/developer/heat/template_guide/hot_spec.html#list-join list_join] function.<br />
* Adds support for parsing map/list data to [http://docs.openstack.org/developer/heat/template_guide/hot_spec.html#str-replace str_replace] and [http://docs.openstack.org/developer/heat/template_guide/hot_spec.html#list-join list_join] (they will be json serialized automatically)<br />
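A minimal HOT sketch exercising two of the 2015-10-15 additions listed above (the parameter name and default values are illustrative):<br />

```yaml
heat_template_version: 2015-10-15

parameters:
  cidrs_csv:
    type: string
    default: "10.0.0.0/24,10.0.1.0/24"

outputs:
  # str_split (new in this version) turns the CSV string into a list
  cidr_list:
    value: { str_split: [',', { get_param: cidrs_csv }] }
  # list_join joins the items back together; map/list items would be
  # JSON-serialized automatically in this template version
  rejoined:
    value: { list_join: [' | ', { str_split: [',', { get_param: cidrs_csv }] }] }
```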
<br />
==== REST API/heatclient additions ====<br />
* Stacks can now be assigned with a set of tags, and stack-list can filter on those tags<br />
* "heat stack-preview ..." will return a preview of changes for a proposed stack-update<br />
* "heat template-validate --show-nested ..." will also validate all template resources and return nested data useful for building user interfaces<br />
* "heat resource-type-template --template-type hot ..." generates a template in HOT format<br />
* "heat resource-type-list" only shows types available to the user, and can filter results by name, version and support_status<br />
* "heat template-version-list" lists available template versions<br />
* "heat template-function-list ..." lists available functions for a template version<br />
<br />
==== Enhancements to existing resources ====<br />
* Software deployments can now use Zaqar for [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Nova::Server-prop-software_config_transport deploying software data] and [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::SoftwareDeployment-prop-signal_transport signalling back to Heat]<br />
* Stack actions are now performed on remote [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::Stack OS::Heat::Stack] resources<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Nova::Server OS::Nova::Server] now supports deletion_policy: Snapshot <br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::ResourceGroup-updpolicy OS::Heat::ResourceGroup update_policy] now supports specifying [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::ResourceGroup-prop-batch_create batch_create] and [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::ResourceGroup-prop-rolling_update rolling_update] options<br />
<br />
==== New resources ====<br />
The following new resources are now distributed with the Heat release:<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Barbican::Order OS::Barbican::Order] [1]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Barbican::Secret OS::Barbican::Secret] [1]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Ceilometer::GnocchiAggregationByMetricsAlarm OS::Ceilometer::GnocchiAggregationByMetricsAlarm] [1]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Ceilometer::GnocchiAggregationByResourcesAlarm OS::Ceilometer::GnocchiAggregationByResourcesAlarm] [1]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Ceilometer::GnocchiResourcesAlarm OS::Ceilometer::GnocchiResourcesAlarm] [1]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Cinder::VolumeType OS::Cinder::VolumeType] [2]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Designate::Domain OS::Designate::Domain]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Designate::Record OS::Designate::Record]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::None OS::Heat::None]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::TestResource OS::Heat::TestResource]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Keystone::Endpoint OS::Keystone::Endpoint]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Keystone::Group OS::Keystone::Group] [2]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Keystone::GroupRoleAssignment OS::Keystone::GroupRoleAssignment]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Keystone::Project OS::Keystone::Project] [2]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Keystone::Role OS::Keystone::Role] [2]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Keystone::Service OS::Keystone::Service]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Keystone::User OS::Keystone::User] [2]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Keystone::UserRoleAssignment OS::Keystone::UserRoleAssignment]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Magnum::BayModel OS::Magnum::BayModel]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Manila::SecurityService OS::Manila::SecurityService]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Manila::Share OS::Manila::Share]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Manila::ShareNetwork OS::Manila::ShareNetwork]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Manila::ShareType OS::Manila::ShareType]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Mistral::CronTrigger OS::Mistral::CronTrigger]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Mistral::Workflow OS::Mistral::Workflow]<br />
* [http://docs.openstack.org/developer/heat/template_guide/unsupported.html#OS::Monasca::AlarmDefinition OS::Monasca::AlarmDefinition] [4]<br />
* [http://docs.openstack.org/developer/heat/template_guide/unsupported.html#OS::Monasca::Notification OS::Monasca::Notification] [4]<br />
* [http://docs.openstack.org/developer/heat/template_guide/unsupported.html#OS::Neutron::ExtraRoute OS::Neutron::ExtraRoute] [3]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Nova::Flavor OS::Nova::Flavor] [2]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Sahara::DataSource OS::Sahara::DataSource]<br />
<br />
[1] These existed in Kilo as contrib resources because they were for non-integrated projects. They are now distributed with Heat, as these are Big Tent projects.<br />
<br />
[2] These existed in Kilo as contrib resources because they require a user with an admin role. They are now distributed with Heat. Operators can hide them from under-privileged users by modifying policy.json (for reference, OS::Nova::Flavor is hidden from non-admin users in the default policy file supplied).<br />
<br />
[3] These existed in Kilo as contrib resources because they used an approach not endorsed by the Heat project. They are now distributed with Heat and documented as UNSUPPORTED.<br />
<br />
[4] These resources are for projects which are not yet OpenStack Big Tent projects, so they are documented as UNSUPPORTED.<br />
<br />
With the new OS::Keystone::* resources it is now possible for cloud operators to use Heat templates to manage Keystone service catalog entries and users.<br />
<br />
==== Deprecated Resource Properties ====<br />
Many resource properties have previously been documented as DEPRECATED. 15 of these properties are now flagged as HIDDEN, which means they will no longer be documented, but existing stacks and templates will continue to work after a heat upgrade. The [http://docs.openstack.org/developer/heat/template_guide/openstack.html Resource Type Reference] should be consulted to determine available resource properties and attributes.<br />
<br />
=== Upgrade notes ===<br />
<br />
==== Configuration Changes ====<br />
Notable changes to the /etc/heat/heat.conf [DEFAULT] section:<br />
* hidden_stack_tags has been added, stacks containing these tag names will be hidden from stack-list results (defaults to data-processing-cluster which hides sahara-created stacks)<br />
* instance_user was deprecated, and is now removed entirely. Nova servers created with the OS::Nova::Server resource now boot with the default user set up by the cloud image. AWS::EC2::Instance still creates "ec2-user"<br />
* max_resources_per_stack can now be set to -1 to disable enforcement<br />
* enable_cloud_watch_lite is now false by default as this REST API is deprecated<br />
* default_software_config_transport has gained the option ZAQAR_MESSAGE<br />
* default_deployment_signal_transport has gained the option ZAQAR_SIGNAL<br />
* auth_encryption_key is now documented as requiring exactly 32 characters<br />
* list_notifier_drivers was deprecated and is now removed<br />
* policy options have moved to the [oslo_policy] section<br />
* use_syslog_rfc_format is deprecated and now defaults to true<br />
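As an aside, the filtering behaviour of hidden_stack_tags can be sketched as follows (an illustration of the semantics only, not Heat's actual code):<br />

```python
# Illustration of the hidden_stack_tags semantics: stacks carrying any
# hidden tag are dropped from stack-list results. Not Heat's actual code.

HIDDEN_STACK_TAGS = {"data-processing-cluster"}  # the Liberty default

def visible_stacks(stacks):
    """Filter out stacks whose tags intersect the hidden-tag set."""
    return [s for s in stacks
            if not HIDDEN_STACK_TAGS.intersection(s.get("tags", []))]

stacks = [
    {"name": "app-stack", "tags": ["web"]},
    {"name": "sahara-xyz", "tags": ["data-processing-cluster"]},
    {"name": "no-tags"},
]
print([s["name"] for s in visible_stacks(stacks)])  # → ['app-stack', 'no-tags']
```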
<br />
Notable changes to other sections of heat.conf:<br />
* [clients_keystone] auth_uri has been added to specify the unversioned keystone url<br />
* [heat_api] workers now defaults to 4 (was previously 0, which created a worker per host CPU)<br />
<br />
The policy file /etc/heat/policy.json can now be configured with per-resource-type access policies, for example:<br />
"resource_types:OS::Nova::Flavor": "rule:context_is_admin"<br />
<br />
==== Upgrading from Kilo to Liberty ====<br />
Progress has been made on supporting live SQL migrations; however, it is still recommended to bring down the Heat service for the duration of the upgrade. Downward SQL schema migrations are no longer supported, so a rollback to Kilo will require restoring a snapshot of the pre-upgrade database.<br />
<br />
== OpenStack Search (Searchlight) ==<br />
<br />
This is the first release of Searchlight. Searchlight is intended to dramatically improve the search capabilities and performance of various OpenStack cloud services by offloading user search queries. It provides Keystone RBAC-based search across OpenStack services by indexing their data into ElasticSearch and providing a security layer on top of incoming search queries. ElasticSearch is a search server based on Lucene. It provides a distributed, scalable, near real-time, faceted, multitenant-capable full-text search engine with a RESTful web interface.<br />
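The security layer on top of incoming queries can be pictured as rewriting each user query so that it can only match the caller's own resources. The sketch below is purely illustrative; the field names and query shape are assumptions, not Searchlight's actual implementation:<br />

```python
# Illustrative only: wrap a user's free-form query in an ElasticSearch
# bool query that also requires the caller's project ID, so the query can
# only match documents belonging to the caller's own tenant. The field
# name "project_id" is an assumption for this sketch.
def rbac_wrap(user_query: dict, project_id: str) -> dict:
    """Combine a free-form query with a mandatory tenant filter."""
    return {
        "query": {
            "bool": {
                "must": [user_query],
                "filter": [{"term": {"project_id": project_id}}],
            }
        }
    }

q = rbac_wrap({"match": {"name": "web-server"}}, "abc123")
print(q["query"]["bool"]["filter"])  # → [{'term': {'project_id': 'abc123'}}]
```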
<br />
* [https://wiki.openstack.org/wiki/Searchlight Project Wiki]<br />
<br />
=== Key New Features ===<br />
* [http://docs.openstack.org/developer/searchlight/searchlightapi.html Searchlight Search API] OpenStack Resource Type based API providing native ElasticSearch query support<br />
* [http://docs.openstack.org/developer/searchlight/indexingservice.html#bulk-indexing Bulk Indexing CLI] searchlight-manage indexing command line interface<br />
* [http://docs.openstack.org/developer/searchlight/indexingservice.html#incremental-updates Incremental Notification based updates]<br />
* [http://docs.openstack.org/developer/searchlight/indexingservice.html#search-plugins Resource Type Plugin system] for adding and managing resource indexing and search<br />
* [https://github.com/openstack/searchlight/tree/master/devstack Devstack deployment]<br />
<br />
==== New Resource Types Indexed ====<br />
* [http://docs.openstack.org/developer/searchlight/plugins/nova.html OS::Nova::Server] Nova server instances<br />
* [http://docs.openstack.org/developer/searchlight/plugins/glance.html OS::Glance::Image & OS::Glance::Metadef] Glance Images and Metadata Definitions<br />
* [http://docs.openstack.org/developer/searchlight/plugins/designate.html OS::Designate::Zone & OS::Designate::RecordSet] Designate Domain and Record Sets<br />
<br />
=== Upgrade Notes ===<br />
<br />
N/A<br />
<br />
=== Deprecations ===<br />
<br />
N/A<br />
<br />
</translate></div>Travis Tripphttps://wiki.openstack.org/w/index.php?title=ReleaseNotes/Liberty&diff=91887ReleaseNotes/Liberty2015-10-07T06:05:05Z<p>Travis Tripp: /* OpenStack Search (Searchlight) */</p>
<hr />
<div><languages /><br />
<translate><br />
<br />
[[Category:Kilo|Release Note]]<br />
[[Category:Release Note|Liberty]]<br />
<br />
= OpenStack Liberty Release Notes =<br />
<br />
<div style="column-count:3;-moz-column-count:3;-webkit-column-count:3"><br />
__TOC__<br />
</div><br />
<br />
== OpenStack Networking (Neutron) ==<br />
<br />
=== New Features ===<br />
* Neutron now supports IPv6 Prefix Delegation for the automatic assignment of CIDRs to IPv6 subnets. For more information on the usage and configuration of this feature, see the [http://docs.openstack.org/networking-guide/adv_config_ipv6.html#prefix-delegation OpenStack Networking Guide].<br />
* Neutron now exposes a QoS API, initially offering bandwidth limitation on the port level. The API, CLI, configuration and additional information may be found here [http://docs.openstack.org/developer/neutron/devref/quality_of_service.html].<br />
* Router high availability (L3 HA / VRRP) now works when layer 2 population (l2pop) is enabled [https://bugs.launchpad.net/neutron/+bug/1365476].<br />
* VPNaaS reference drivers now work with HA routers.<br />
* Networks used for VRRP traffic for HA routers may now be configured to use a specific segmentation type or physical network tag [https://bugs.launchpad.net/neutron/+bug/1481443].<br />
* The OVS agent may now be restarted without affecting data plane connectivity.<br />
* Neutron now offers role-based access control for networks [http://specs.openstack.org/openstack/neutron-specs/specs/liberty/rbac-networks.html].<br />
* The LBaaS V2 reference driver is now based on Octavia, an operator-grade, scalable and reliable load balancer platform<br />
* LBaaS V2 API is no longer experimental. It is now stable.<br />
* Neutron now provides a way for admins to manually schedule agents, allowing host resources to be tested before they are enabled for tenant use [https://github.com/openstack/neutron-specs/blob/master/specs/liberty/enable-new-agents.rst#user-documentation].<br />
* Neutron now has a pluggable IP address management framework, enabling the use of alternate or third-party IPAM. The original, non-pluggable version of IPAM is enabled by default.<br />
<br />
=== Deprecated and Removed Plugins and Drivers ===<br />
* The metaplugin is removed in the Liberty release.<br />
* The IBM SDN-VE monolithic plugin is removed in the Liberty release.<br />
* The Cisco N1kV monolithic plugin is removed in the Liberty release (replaced by the ML2 mechanism driver).<br />
<br />
=== Deprecated Features ===<br />
* The FWaaS API is marked as experimental for Liberty. Further, the current API will be removed in Mitaka and replaced with a new FWaaS API which the team is in the process of developing.<br />
* The LBaaS V1 API is marked as deprecated and is planned to be removed in some future release. Going forward the LBaaS V2 API should be used.<br />
<br />
=== Performance Considerations ===<br />
* The stock Trusty Tahr kernel (3.13) shows linear performance degradation when running "ip netns exec" as the number of namespaces increases. In cases where scale is important, a later version kernel (e.g. 3.19) should be used.<br />
* Creating Neutron-LBaaS load balancers in environments without hardware virtualization may be slow when using the Octavia driver. This is due to QEMU using the TCG accelerator versus the KVM accelerator in environments without hardware virtualization available. We recommend enabling hardware virtualization on your compute nodes or enabling nested virtualization when using the Octavia driver inside a virtual environment. See the following link for details on setting up nested virtualization for DevStack running inside KVM: http://docs.openstack.org/developer/devstack/guides/devstack-with-nested-kvm.html<br />
<br />
== OpenStack Compute (Nova) ==<br />
<br />
=== Upgrade Notes ===<br />
* If you are coming from Kilo stable, please make sure you have fully upgraded to the latest release of that lineage before deploying Liberty. Due to bug https://bugs.launchpad.net/nova/+bug/1474074 versions of Kilo from before the fix will be problematic when talking to Liberty nodes.<br />
* Allocation ratios for RAM and CPU are now defined within the nova-compute service (so per compute node), and thus also need to be provided to the scheduler service. Depending on whether a compute node is running Kilo or Liberty, the allocation ratios behave differently: ''if the compute node is running Kilo'', then the CPU and RAM allocation ratios for that compute node will be the ones defaulted in the controller's nova.conf file. ''If the compute node is running Liberty'', then you can set a per-compute allocation ratio for both CPU and RAM. To leave the operator in charge of providing the allocation ratios to all the compute nodes, the default allocation ratio in nova.conf is set to 0.0 (even for the controller). That doesn't mean that allocation ratios will actually be 0.0, just that the operator needs to provide them '''before the next release (ie. Mitaka)'''. To be clear, the default allocation ratios are still 16.0 for cpu_allocation_ratio and 1.5 for ram_allocation_ratio.<br />
* nova-compute should be upgraded to Liberty code before upgrading Neutron services per the new "network-vif-deleted" event: https://review.openstack.org/#/c/187871/<br />
* Rootwrap filters must be updated after release to add the touch command.<br />
** There is a race condition between imagebackend and imagecache, described in Launchpad bug [https://bugs.launchpad.net/nova/+bug/1256838 1256838].<br />
** If the base image is deleted by ImageCacheManager while imagebackend is copying the image to the instance path, the instance goes into an error state.<br />
** To resolve this issue, the 'touch' command needs to be added to compute.filters along with the change https://review.openstack.org/#/c/217579/.<br />
** In the related race condition where libvirt has changed the base file ownership to libvirt-qemu while imagebackend is copying the image, a Permission denied error occurs when updating the file access time using os.utime. To resolve this, the base file access time must be updated with root privileges using the touch command.<br />
* The ''DiskFilter'' is now part of the ''scheduler_default_filters'' in Liberty per https://review.openstack.org/#/c/207942/ .<br />
* Per https://review.openstack.org/#/c/103916/ you can now only map one vCenter cluster to a single nova-compute node.<br />
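The 0.0 "not set" sentinel for allocation ratios described above can be sketched as follows (illustrative logic only, using the documented defaults of 16.0 and 1.5):<br />

```python
# Illustration of the Liberty allocation-ratio fallback: a configured
# value of 0.0 means "not provided by the operator", so the documented
# defaults (16.0 for CPU, 1.5 for RAM) are used instead. Not Nova's code.
CPU_DEFAULT, RAM_DEFAULT = 16.0, 1.5

def effective_ratio(configured: float, default: float) -> float:
    """Treat 0.0 as unset and fall back to the historical default."""
    return default if configured == 0.0 else configured

def schedulable_vcpus(physical_cpus: int, cpu_allocation_ratio: float) -> float:
    return physical_cpus * effective_ratio(cpu_allocation_ratio, CPU_DEFAULT)

print(schedulable_vcpus(8, 0.0))  # → 128.0 (falls back to the 16.0 default)
print(schedulable_vcpus(8, 2.0))  # → 16.0  (operator-provided ratio)
```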
<br />
=== Deprecations ===<br />
* The novaclient.v1_1 module has been deprecated [[https://git.openstack.org/cgit/openstack/python-novaclient/commit/?id=61ef35fe79e2a3a76987a92f9ee2db0bf1f6e651]][[https://git.openstack.org/cgit/openstack/python-novaclient/commit/?id=0a60aae852d2688861d0b4ba097a1a00529f0611]] since 2.21.0 and we are going to remove it in the first python-novaclient release in Mitaka.<br />
* Method `novaclient.client.get_client_class` is deprecated [[https://git.openstack.org/cgit/openstack/python-novaclient/commit/?id=147a1a6ee421f9a45a562f013e233d29d43258e4]] since 2.29.0 and we are going to remove it in Mitaka.<br />
<br />
== OpenStack Telemetry (Ceilometer) ==<br />
<br />
=== Key New Features ===<br />
* creation of Aodh to handle the alarming service<br />
* improved metadata caching - reduces the load from Nova API polling<br />
* declarative meters - ability to generate meters by defining a meter definition template<br />
* ceilometer+gnocchi integration - support for publishing data from Ceilometer to Gnocchi<br />
* mandatory limit - limit-restricted querying is enforced: a limit must be explicitly provided on queries, else the result set is restricted to a default limit<br />
* distributed, coordinated notification agents - support for workload partitioning across multiple notification agents<br />
* Events RBAC support<br />
* PowerVM hypervisor support<br />
* improved MongoDB query support<br />
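The mandatory-limit behaviour can be sketched as follows (an illustration of the semantics with an assumed default of 100; not Ceilometer's actual code or default value):<br />

```python
# Sketch of limit-restricted querying: when the caller supplies no
# explicit limit, the result set is capped at a configured default.
DEFAULT_LIMIT = 100  # assumed value, for illustration only

def apply_limit(results, limit=None):
    """Cap a result set at the caller's limit, or the default if absent."""
    return results[: limit if limit is not None else DEFAULT_LIMIT]

rows = list(range(250))
print(len(apply_limit(rows)))      # → 100 (default enforced)
print(len(apply_limit(rows, 10)))  # → 10
```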
<br />
==== Gnocchi Features ====<br />
<br />
==== Aodh Features ====<br />
* event alarms - ability to trigger an action when an event is received<br />
<br />
=== Upgrade Notes ===<br />
* The name of some middleware used by ceilometer changed in a backwards-incompatible way. Before upgrading, edit the paste.ini file for ceilometer to change "oslo.middleware" to "oslo_middleware". For example using <nowiki>sed -ri 's/oslo\.middleware/oslo_middleware/' api_paste.ini</nowiki><br />
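For deployments without sed available, the same rename can be expressed in Python (equivalent to the sed command above; the real file lives wherever your distribution installs ceilometer's api_paste.ini):<br />

```python
# Equivalent of the sed command above: rename "oslo.middleware" to
# "oslo_middleware" in ceilometer's paste configuration text.
import re

def fix_paste_ini(text: str) -> str:
    """Replace every literal 'oslo.middleware' with 'oslo_middleware'."""
    return re.sub(r"oslo\.middleware", "oslo_middleware", text)

sample = "paste.filter_factory = oslo.middleware:RequestId.factory"
print(fix_paste_ini(sample))
# → paste.filter_factory = oslo_middleware:RequestId.factory
```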
<br />
=== Deprecation ===<br />
* Ceilometer Alarms is deprecated in favour of Aodh<br />
* The RPC publisher and collector are deprecated in favour of the topic-based notifier publisher<br />
* Non-metric meters remain deprecated and are slated for removal<br />
<br />
== OpenStack Identity (Keystone) ==<br />
<br />
=== Key New Features ===<br />
<br />
* '''''Experimental''''': Domain specific configuration options can now be stored in SQL instead of configuration files, using the new REST APIs.<br />
* '''''Experimental''''': Keystone now supports tokenless authorization with X.509 SSL client certificate.<br />
* Configuring per-Identity Provider WebSSO is now supported.<br />
* <code>openstack_user_domain</code> and <code>openstack_project_domain</code> attributes were added to SAML assertion in order to map user and project domains, respectively.<br />
* Credentials list call can now have its results filtered by credential type.<br />
* Support was improved for out-of-tree drivers by defining stable Driver Interfaces.<br />
* Several features were hardened, including Fernet tokens, Federation, Domain-specific configurations from the database, and Role Assignments.<br />
* Certain options in keystone.conf now have choices, which determine if the user's setting is valid.<br />
<br />
=== Upgrade Notes ===<br />
<br />
* The EC2 token middleware, deprecated in Juno, is no longer available in keystone. It's been moved to the keystonemiddleware package.<br />
* The compute_port configuration option, deprecated in Juno, is no longer available.<br />
* The XML middleware stub has been removed, so references to it must be removed from the <code>keystone-paste.ini</code> configuration file.<br />
* stats_monitoring and stats_reporting paste filters have been removed, so references to it must be removed from the <code>keystone-paste.ini</code> configuration file<br />
* The external authentication plugins ExternalDefault, ExternalDomain, LegacyDefaultDomain, and LegacyDomain, deprecated in Icehouse, are all no longer available.<br />
* <code>keystone.conf</code> now references entrypoint names for drivers, as such the drivers are now specified like "sql", "ldap", "uuid", etc., rather than the full module path. See the sample configuration file for examples.<br />
* Similarly to the above, we now expose entrypoints for the <code>keystone-manage</code> command instead of a file.<br />
* Schema downgrades via <code>keystone-manage db_sync</code> are no longer supported, only upgrades are supported.<br />
* Features that were "extensions" in previous releases (OAuth delegation, Federated Identity support, Endpoint Policy, etc) are now enabled by default.<br />
* If you're running keystone behind a proxy, check out the new <code>secure_proxy_ssl_header</code> config option<br />
* Several configuration options have been deprecated, renamed, or moved to new sections. Review your <code>keystone.conf</code> file against the current sample configuration file.<br />
* Domain name information is now available to be used in policy rules with the attribute <code>domain_name</code>.<br />
<br />
=== Deprecations ===<br />
<br />
* Running Keystone in Eventlet remains deprecated and will be removed in the Mitaka release<br />
* Using LDAP as the resource backend, i.e. for projects and domains, is now deprecated and will be removed in the Mitaka release<br />
* Using the full path to the driver class is deprecated in favor of using the entrypoint. In the Mitaka release, the entrypoint must be used.<br />
* In the [resource] and [role] sections of the <code>keystone.conf</code> file, not specifying the driver and using the assignment driver is deprecated. In the Mitaka release, the resource and role drivers will default to the sql driver.<br />
* In <code>keystone-paste.ini</code>, using paste.filter_factory is deprecated in favor of the "use" directive, specifying an entrypoint.<br />
* Not specifying a domain during a create user, group or project call, which relied on falling back to the default domain, is now deprecated and will be removed in the N release.<br />
* Certain deprecated methods from the assignment manager were removed in favor of the same methods in the [resource] and [role] manager.<br />
<br />
== OpenStack Block Storage (Cinder) ==<br />
<br />
=== Key New Features ===<br />
* A generic image caching solution, so popular VM images can be cached and copied-on-write to a new volume. [http://docs.openstack.org/admin-guide-cloud/blockstorage_image_volume_cache.html Read docs for more info]<br />
* Non-disruptive backups [http://docs.openstack.org/admin-guide-cloud/blockstorage_volume_backups.html Read docs for more info].<br />
* Ability to clone consistency groups of volumes [http://docs.openstack.org/admin-guide-cloud/blockstorage-consistency-groups.html Read docs for more info].<br />
* List capabilities of a volume backend (fetch extra-specs)<br />
* Nested quotas<br />
<br />
=== Upgrade Notes ===<br />
<br />
=== Deprecations ===<br />
<br />
== OpenStack Orchestration (Heat) ==<br />
<br />
=== New Features ===<br />
<br />
==== Convergence ====<br />
Convergence is a new orchestration engine which is maturing in the heat tree. In Liberty the benefits of using the convergence engine are:<br />
* Greater parallelization of resource actions (for better scaling of large templates)<br />
* The ability to do a stack-update whilst there is already an update in-progress<br />
* Better handling of heat-engine failures (still WIP)<br />
<br />
The convergence engine can be enabled by setting convergence_engine=true in the [DEFAULT] section of /etc/heat/heat.conf, then restarting heat-engine. Once this has been done, any subsequently created stack will use the convergence engine, while operations on existing stacks will continue to use the traditional engine.<br />
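Flipping that option can be sketched with the standard library's configparser (illustrative; edit /etc/heat/heat.conf with your usual configuration tooling):<br />

```python
# Illustrative only: set convergence_engine = true in heat.conf's
# [DEFAULT] section, then restart heat-engine for it to take effect.
import configparser
import os
import tempfile

def enable_convergence(path: str) -> None:
    """Set convergence_engine = true in [DEFAULT] and rewrite the file."""
    cfg = configparser.ConfigParser()
    cfg.read(path)
    cfg["DEFAULT"]["convergence_engine"] = "true"
    with open(path, "w") as f:
        cfg.write(f)

# Demo on a throwaway copy rather than the real /etc/heat/heat.conf.
with tempfile.NamedTemporaryFile("w", suffix=".conf", delete=False) as f:
    f.write("[DEFAULT]\nnum_engine_workers = 4\n")
    demo_path = f.name
enable_convergence(demo_path)
check = configparser.ConfigParser()
check.read(demo_path)
print(check["DEFAULT"]["convergence_engine"])  # → true
os.remove(demo_path)
```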
<br />
Convergence has '''not''' been production tested and thus should be considered '''beta''' quality - use with caution. For the Liberty release we recommend enabling convergence for the purposes of evaluation and scale testing. We will be considering making convergence the default engine in the Mitaka cycle. Convergence specific bugs are tracked in launchpad with the [https://bugs.launchpad.net/heat/+bugs?field.tag=convergence-bugs convergence-bugs tag].<br />
<br />
==== Conditional resource exposure ====<br />
Only resources for services actually installed in the cloud are made available to users. Operators can further control which resources users may use with standard policy rules in [https://github.com/openstack/heat/blob/master/etc/heat/policy.json#L80 policy.json, on a per-resource-type basis].<br />
<br />
==== heat_template_version: 2015-10-15 ====<br />
<br />
2015-10-15 indicates that the YAML document is a HOT template and it may contain features added and/or removed up until the Liberty release. <br />
* Removes the Fn::Select function (path based [http://docs.openstack.org/developer/heat/template_guide/hot_spec.html#get-attr get_attr]/[http://docs.openstack.org/developer/heat/template_guide/hot_spec.html#get-param get_param] references should be used instead). <br />
* If no <attribute name> is specified for calls to [http://docs.openstack.org/developer/heat/template_guide/hot_spec.html#get-attr get_attr], a dict of all attributes is returned, e.g. { get_attr: [<resource name>]}. <br />
* Adds new [http://docs.openstack.org/developer/heat/template_guide/hot_spec.html#str-split str_split] intrinsic function <br />
* Adds support for passing multiple lists to the existing [http://docs.openstack.org/developer/heat/template_guide/hot_spec.html#list-join list_join] function.<br />
* Adds support for parsing map/list data to [http://docs.openstack.org/developer/heat/template_guide/hot_spec.html#str-replace str_replace] and [http://docs.openstack.org/developer/heat/template_guide/hot_spec.html#list-join list_join] (they will be json serialized automatically)<br />
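The semantics of the new and extended intrinsic functions above can be modelled outside of Heat (illustrative only; this mimics the documented behaviour and is not Heat's implementation):<br />

```python
# Models the documented 2015-10-15 behaviour of str_split and list_join,
# including JSON-serialization of non-string items. Not Heat's code.
import json

def str_split(delimiter: str, value: str, index=None):
    """Split a string; optionally return only the element at `index`."""
    parts = value.split(delimiter)
    return parts if index is None else parts[index]

def list_join(delimiter: str, *lists):
    """Join one or more lists, JSON-serializing non-string items."""
    items = [item for lst in lists for item in lst]
    return delimiter.join(
        item if isinstance(item, str) else json.dumps(item) for item in items
    )

print(str_split(",", "a,b,c"))                    # → ['a', 'b', 'c']
print(str_split(",", "a,b,c", 1))                 # → b
print(list_join("-", ["one", "two"], ["three"]))  # → one-two-three
print(list_join(",", [{"k": 1}]))                 # non-string is serialized
```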
<br />
==== REST API/heatclient additions ====<br />
* Stacks can now be assigned with a set of tags, and stack-list can filter on those tags<br />
* "heat stack-preview ..." will return a preview of changes for a proposed stack-update<br />
* "heat template-validate --show-nested ..." will also validate all template resources and return nested data useful for building user interfaces<br />
* "heat resource-type-template --template-type hot ..." generates a template in HOT format<br />
* "heat resource-type-list" only shows types available to the user, and can filter results by name, version and support_status<br />
* "heat template-version-list" lists available template versions<br />
* "heat template-function-list ..." lists available functions for a template version<br />
<br />
==== Enhancements to existing resources ====<br />
* Software deployments can now use Zaqar for [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Nova::Server-prop-software_config_transport deploying software data] and [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::SoftwareDeployment-prop-signal_transport signalling back to Heat]<br />
* Stack actions are now performed on remote [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::Stack OS::Heat::Stack] resources<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Nova::Server OS::Nova::Server] now supports deletion_policy: Snapshot <br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::ResourceGroup-updpolicy OS::Heat::ResourceGroup update_policy] now supports specifying [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::ResourceGroup-prop-batch_create batch_create] and [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::ResourceGroup-prop-rolling_update rolling_update] options<br />
<br />
==== New resources ====<br />
The following new resources are now distributed with the Heat release:<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Barbican::Order OS::Barbican::Order] [1]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Barbican::Secret OS::Barbican::Secret] [1]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Ceilometer::GnocchiAggregationByMetricsAlarm OS::Ceilometer::GnocchiAggregationByMetricsAlarm] [1]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Ceilometer::GnocchiAggregationByResourcesAlarm OS::Ceilometer::GnocchiAggregationByResourcesAlarm] [1]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Ceilometer::GnocchiResourcesAlarm OS::Ceilometer::GnocchiResourcesAlarm] [1]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Cinder::VolumeType OS::Cinder::VolumeType] [2]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Designate::Domain OS::Designate::Domain]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Designate::Record OS::Designate::Record]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::None OS::Heat::None]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::TestResource OS::Heat::TestResource]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Keystone::Endpoint OS::Keystone::Endpoint]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Keystone::Group OS::Keystone::Group] [2]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Keystone::GroupRoleAssignment OS::Keystone::GroupRoleAssignment]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Keystone::Project OS::Keystone::Project] [2]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Keystone::Role OS::Keystone::Role] [2]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Keystone::Service OS::Keystone::Service]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Keystone::User OS::Keystone::User] [2]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Keystone::UserRoleAssignment OS::Keystone::UserRoleAssignment]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Magnum::BayModel OS::Magnum::BayModel]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Manila::SecurityService OS::Manila::SecurityService]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Manila::Share OS::Manila::Share]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Manila::ShareNetwork OS::Manila::ShareNetwork]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Manila::ShareType OS::Manila::ShareType]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Mistral::CronTrigger OS::Mistral::CronTrigger]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Mistral::Workflow OS::Mistral::Workflow]<br />
* [http://docs.openstack.org/developer/heat/template_guide/unsupported.html#OS::Monasca::AlarmDefinition OS::Monasca::AlarmDefinition] [4]<br />
* [http://docs.openstack.org/developer/heat/template_guide/unsupported.html#OS::Monasca::Notification OS::Monasca::Notification] [4]<br />
* [http://docs.openstack.org/developer/heat/template_guide/unsupported.html#OS::Neutron::ExtraRoute OS::Neutron::ExtraRoute] [3]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Nova::Flavor OS::Nova::Flavor] [2]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Sahara::DataSource OS::Sahara::DataSource]<br />
<br />
[1] These existed Kilo as contrib resources as they were for non-integrated projects. These resources are now distributed with Heat as Big Tent projects.<br />
<br />
[2] These existed in Kilo as contrib resources because they require a user with an admin role. They are now distributed with Heat. Operators now have the ability to hide them from under-privileged users by modifying policy.json (for reference, OS::Nova::Flavor is hidden from non-admin users in the default policy file supplied).<br />
<br />
[3] These existed in Kilo as contrib resources because they used an approach not endorsed by the Heat project. They are now distributed with Heat and documented as UNSUPPORTED.<br />
<br />
[4] These resources are for projects which are not yet OpenStack Big Tent projects, so they are documented as UNSUPPORTED.<br />
<br />
With the new OS::Keystone::* resources it is now possible for cloud operators to use heat templates to manage Keystone service catalog entries and users.<br />
<br />
==== Deprecated Resource Properties ====<br />
Many resource properties have previously been documented as DEPRECATED. 15 of these properties are now flagged as HIDDEN, which means they will no longer be documented but existing stacks and templates will continue to work after a Heat upgrade. The [http://docs.openstack.org/developer/heat/template_guide/openstack.html Resource Type Reference] should be consulted to determine available resource properties and attributes.<br />
<br />
=== Upgrade notes ===<br />
<br />
==== Configuration Changes ====<br />
Notable changes to the /etc/heat/heat.conf [DEFAULT] section:<br />
* hidden_stack_tags has been added; stacks containing these tag names will be hidden from stack-list results (defaults to data-processing-cluster, which hides Sahara-created stacks)<br />
* instance_user was deprecated and is now removed entirely. Nova servers created with the OS::Nova::Server resource will now boot with the default user set up by the cloud image. AWS::EC2::Instance still creates "ec2-user"<br />
* max_resources_per_stack can now be set to -1 to disable enforcement<br />
* enable_cloud_watch_lite is now false by default as this REST API is deprecated<br />
* default_software_config_transport has gained the option ZAQAR_MESSAGE<br />
* default_deployment_signal_transport has gained the option ZAQAR_SIGNAL<br />
* auth_encryption_key is now documented as requiring exactly 32 characters<br />
* list_notifier_drivers was deprecated and is now removed<br />
* policy options have moved to the [oslo_policy] section<br />
* use_syslog_rfc_format is deprecated and now defaults to true<br />
<br />
Notable changes to other sections of heat.conf:<br />
* [clients_keystone] auth_uri has been added to specify the unversioned keystone url<br />
* [heat_api] workers now defaults to 4 (was previously 0, which created a worker per host CPU)<br />
<br />
The policy file /etc/heat/policy.json can now be configured with per-resource-type access policies, for example:<br />
"resource_types:OS::Nova::Flavor": "rule:context_is_admin"<br />
<br />
==== Upgrading from Kilo to Liberty ====<br />
Progress has been made on supporting live SQL migrations; however, it is still recommended to bring down the Heat service for the duration of the upgrade. Downward SQL schema migrations are no longer supported, so a rollback to Kilo will require restoring a snapshot of the pre-upgrade database.<br />
<br />
== OpenStack Search (Searchlight) ==<br />
<br />
This is the first release of Searchlight, which is intended to dramatically improve the search capabilities and performance of various OpenStack cloud services by offloading user search queries. It provides Keystone-RBAC-based search across OpenStack services by indexing their data into ElasticSearch and providing a security layer on top of incoming search queries. ElasticSearch is a search server based on Lucene; it provides a distributed, scalable, near-real-time, faceted, multitenant-capable full-text search engine with a RESTful web interface.<br />
<br />
* [https://wiki.openstack.org/wiki/Searchlight Project Wiki]<br />
<br />
=== Key New Features ===<br />
* [http://docs.openstack.org/developer/searchlight/searchlightapi.html Searchlight Search API]: an OpenStack resource-type-based API providing native ElasticSearch query support<br />
* [http://docs.openstack.org/developer/searchlight/indexingservice.html#bulk-indexing Bulk Indexing CLI]: the searchlight-manage indexing command-line interface<br />
* [http://docs.openstack.org/developer/searchlight/indexingservice.html#incremental-updates Incremental Notification based updates]<br />
* [http://docs.openstack.org/developer/searchlight/indexingservice.html#search-plugins Resource Type Plugin system] for adding and managing resource indexing and search<br />
* [https://github.com/openstack/searchlight/tree/master/devstack Devstack deployment]<br />
<br />
==== New Resource Types Indexed ====<br />
* [http://docs.openstack.org/developer/searchlight/plugins/nova.html OS::Nova::Server] Nova server instances<br />
* [http://docs.openstack.org/developer/searchlight/plugins/glance.html OS::Glance::Image & OS::Glance::Metadef] Glance Images and Metadata Definitions<br />
* [http://docs.openstack.org/developer/searchlight/plugins/designate.html OS::Designate::Zone & OS::Designate::RecordSet] Designate Domain and Record Sets<br />
<br />
=== Upgrade Notes ===<br />
<br />
N/A<br />
<br />
=== Deprecations ===<br />
<br />
N/A<br />
<br />
</translate></div>Travis Tripphttps://wiki.openstack.org/w/index.php?title=ReleaseNotes/Liberty&diff=91875ReleaseNotes/Liberty2015-10-06T23:52:09Z<p>Travis Tripp: /* Key New Features */</p>
<hr />
<div><languages /><br />
<translate><br />
<br />
[[Category:Kilo|Release Note]]<br />
[[Category:Release Note|Liberty]]<br />
<br />
= OpenStack Liberty Release Notes =<br />
<br />
<div style="column-count:3;-moz-column-count:3;-webkit-column-count:3"><br />
__TOC__<br />
</div><br />
<br />
== OpenStack Networking (Neutron) ==<br />
<br />
=== New Features ===<br />
* Neutron now supports IPv6 Prefix Delegation for the automatic assignment of CIDRs to IPv6 subnets. For more information on the usage and configuration of this feature, see the [http://docs.openstack.org/networking-guide/adv_config_ipv6.html#prefix-delegation OpenStack Networking Guide].<br />
* Neutron now exposes a QoS API, initially offering bandwidth limitation on the port level. The API, CLI, configuration and additional information may be found here [http://docs.openstack.org/developer/neutron/devref/quality_of_service.html].<br />
* Router high availability (L3 HA / VRRP) now works when layer 2 population (l2pop) is enabled [https://bugs.launchpad.net/neutron/+bug/1365476].<br />
* VPNaaS reference drivers now work with HA routers.<br />
* Networks used for VRRP traffic for HA routers may now be configured to use a specific segmentation type or physical network tag [https://bugs.launchpad.net/neutron/+bug/1481443].<br />
* The OVS agent may now be restarted without affecting data plane connectivity.<br />
* Neutron now offers role-based access control for networks [http://specs.openstack.org/openstack/neutron-specs/specs/liberty/rbac-networks.html].<br />
* The LBaaS V2 reference driver is now based on Octavia, an operator-grade, scalable, reliable load balancer platform<br />
* LBaaS V2 API is no longer experimental. It is now stable.<br />
* Neutron now supports starting an agent that is excluded from automatic scheduling but remains available for manual scheduling, so that a deployer can test an agent before placing it into service [https://github.com/openstack/neutron-specs/blob/master/specs/liberty/enable-new-agents.rst#user-documentation].<br />
* Neutron now has a pluggable IP address management framework, enabling the use of alternate or third-party IPAM. The original, non-pluggable version of IPAM is enabled by default.<br />
<br />
=== Deprecated and Removed Plugins and Drivers ===<br />
* The metaplugin is removed in the Liberty release.<br />
* The IBM SDN-VE monolithic plugin is removed in the Liberty release.<br />
* The Cisco N1kV monolithic plugin is removed in the Liberty release (replaced by the ML2 mechanism driver).<br />
<br />
=== Deprecated Features ===<br />
* The FWaaS API is marked as experimental for Liberty. Further, the current API will be removed in Mitaka and replaced with a new FWaaS API which the team is in the process of developing.<br />
* The LBaaS V1 API is marked as deprecated and is planned to be removed in some future release. Going forward the LBaaS V2 API should be used.<br />
<br />
=== Performance Considerations ===<br />
* The stock Trusty Tahr kernel (3.13) shows linear performance degradation when running "ip netns exec" as the number of namespaces increases. In cases where scale is important, a later version kernel (e.g. 3.19) should be used.<br />
* Creating Neutron-LBaaS load balancers in environments without hardware virtualization may be slow when using the Octavia driver. This is due to QEMU using the TCG accelerator versus the KVM accelerator in environments without hardware virtualization available. We recommend enabling hardware virtualization on your compute nodes or enabling nested virtualization when using the Octavia driver inside a virtual environment. See the following link for details on setting up nested virtualization for DevStack running inside KVM: http://docs.openstack.org/developer/devstack/guides/devstack-with-nested-kvm.html<br />
<br />
== OpenStack Compute (Nova) ==<br />
<br />
=== Upgrade Notes ===<br />
* If you are coming from Kilo stable, please make sure you have fully upgraded to the latest release of that lineage before deploying Liberty. Due to bug https://bugs.launchpad.net/nova/+bug/1474074 versions of Kilo from before the fix will be problematic when talking to Liberty nodes.<br />
* Allocation ratios for RAM and CPU are now defined within the nova-compute service (i.e. per compute node), and thus also need to be provided to the scheduler service. Depending on whether a compute node is running Kilo or Liberty, the allocation ratios behave differently: ''if the compute node is running Kilo'', the CPU and RAM allocation ratios for that compute node will be the ones defaulted in the controller's nova.conf file; ''if the compute node is running Liberty'', you can set a per-compute allocation ratio for both CPU and RAM. To let the operator provide the allocation ratios to all the compute nodes, the default allocation ratio is set in nova.conf to 0.0 (even for the controller). That doesn't mean that allocation ratios will actually be 0.0, just that the operator needs to provide them '''before the next release (i.e. Mitaka)'''. To be clear, the effective default allocation ratios are still 16.0 for cpu_allocation_ratio and 1.5 for ram_allocation_ratio.<br />
* nova-compute should be upgraded to Liberty code before upgrading Neutron services per the new "network-vif-deleted" event: https://review.openstack.org/#/c/187871/<br />
* Rootwrap filters must be updated after the release to add the touch command.<br />
** There is a race condition between imagebackend and imagecache described in Launchpad Bug [https://bugs.launchpad.net/nova/+bug/1256838 1256838]. <br />
** In this case, if the base image is deleted by ImageCacheManager while imagebackend is copying the image to the instance path, the instance goes into an error state.<br />
** To resolve this issue, the 'touch' command needs to be added to compute.filters along with the change https://review.openstack.org/#/c/217579/.<br />
** In the case of a race condition where libvirt has changed the base file ownership to libvirt-qemu while imagebackend is copying the image, a Permission denied error occurs when updating the file access time using os.utime. To resolve this error, the base file access time must be updated with root user privileges using the touch command.<br />
* The ''DiskFilter'' is now part of the ''scheduler_default_filters'' in Liberty per https://review.openstack.org/#/c/207942/ .<br />
* Per https://review.openstack.org/#/c/103916/ you can now only map one vCenter cluster to a single nova-compute node.<br />
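The per-compute allocation ratio change above can be sketched as a nova.conf fragment; the option names come from the notes above, while the values shown are purely illustrative:

```ini
# /etc/nova/nova.conf on a Liberty compute node (example values only)
[DEFAULT]
# Leaving these at the new default of 0.0 means "not provided": the
# scheduler falls back to the effective defaults (16.0 for CPU,
# 1.5 for RAM) until the operator sets them explicitly.
cpu_allocation_ratio = 4.0
ram_allocation_ratio = 1.0
```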
<br />
=== Deprecations ===<br />
* The novaclient.v1_1 module has been deprecated [[https://git.openstack.org/cgit/openstack/python-novaclient/commit/?id=61ef35fe79e2a3a76987a92f9ee2db0bf1f6e651]][[https://git.openstack.org/cgit/openstack/python-novaclient/commit/?id=0a60aae852d2688861d0b4ba097a1a00529f0611]] since 2.21.0 and we are going to remove it in the first python-novaclient release in Mitaka.<br />
* Method `novaclient.client.get_client_class` is deprecated [[https://git.openstack.org/cgit/openstack/python-novaclient/commit/?id=147a1a6ee421f9a45a562f013e233d29d43258e4]] since 2.29.0 and we are going to remove it in Mitaka.<br />
<br />
== OpenStack Telemetry (Ceilometer) ==<br />
<br />
=== Key New Features ===<br />
* Creation of Aodh to handle the alarming service<br />
* Improved metadata caching, reducing the load of Nova API polling<br />
* Declarative meters: the ability to generate meters by defining a meter definition template<br />
* Ceilometer + Gnocchi integration: support for publishing data from Ceilometer to Gnocchi<br />
* Mandatory limit: limit-restricted querying is enforced; a limit must be explicitly provided on queries, otherwise the result set is restricted to a default limit<br />
* Distributed, coordinated notification agents: support for workload partitioning across multiple notification agents<br />
* Events RBAC support<br />
* PowerVM hypervisor support<br />
* Improved MongoDB query support<br />
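The declarative meters feature above works from a YAML meter-definition template; this is a minimal sketch, and the exact field names and file layout are assumptions to be verified against the Ceilometer documentation:

```yaml
# Sketch of a declarative meter definition (field names illustrative):
# derive a gauge meter from compute instance notification payloads.
metric:
  - name: 'memory.usage'
    event_type: 'compute.instance.*'
    type: 'gauge'
    unit: 'MB'
    volume: $.payload.memory_mb
    resource_id: $.payload.instance_id
```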
<br />
==== Gnocchi Features ====<br />
<br />
==== Aodh Features ====<br />
* Event alarms: the ability to trigger an action when an event is received<br />
<br />
=== Upgrade Notes ===<br />
* The name of some middleware used by ceilometer changed in a backwards-incompatible way. Before upgrading, edit the paste.ini file for ceilometer to change "oslo.middleware" to "oslo_middleware". For example using <nowiki>sed -ri 's/oslo\.middleware/oslo_middleware/' api_paste.ini</nowiki><br />
<br />
=== Deprecation ===<br />
* Ceilometer Alarms is deprecated in favour of Aodh<br />
* The RPC publisher and collector are deprecated in favour of the topic-based notifier publisher<br />
* Non-metric meters remain deprecated and will be removed<br />
<br />
== OpenStack Identity (Keystone) ==<br />
<br />
=== Key New Features ===<br />
<br />
* '''''Experimental''''': Store domain-specific configuration options in SQL instead of configuration files, using the new REST APIs.<br />
* '''''Experimental''''': Keystone now supports tokenless authorization with X.509 SSL client certificates.<br />
* Configuring per-Identity Provider WebSSO is now supported.<br />
* <code>openstack_user_domain</code> and <code>openstack_project_domain</code> attributes were added to SAML assertion in order to map user and project domains, respectively.<br />
* Credentials list call can now have its results filtered by credential type.<br />
* Support was improved for out-of-tree drivers by defining stable Driver Interfaces.<br />
* Several features were hardened, including Fernet tokens, Federation, Domain specific configurations from database and Role Assignments.<br />
* Certain options in keystone.conf now have choices, which determine if the user's setting is valid.<br />
<br />
=== Upgrade Notes ===<br />
<br />
* The EC2 token middleware, deprecated in Juno, is no longer available in keystone. It's been moved to the keystonemiddleware package.<br />
* The compute_port configuration option, deprecated in Juno, is no longer available.<br />
* The XML middleware stub has been removed, so references to it must be removed from the <code>keystone-paste.ini</code> configuration file.<br />
* stats_monitoring and stats_reporting paste filters have been removed, so references to it must be removed from the <code>keystone-paste.ini</code> configuration file<br />
* The external authentication plugins ExternalDefault, ExternalDomain, LegacyDefaultDomain, and LegacyDomain, deprecated in Icehouse, are all no longer available.<br />
* <code>keystone.conf</code> now references entrypoint names for drivers, as such the drivers are now specified like "sql", "ldap", "uuid", etc., rather than the full module path. See the sample configuration file for examples.<br />
* Similarly to the above, we now expose entrypoints for the <code>keystone-manage</code> command instead of a file.<br />
* Schema downgrades via <code>keystone-manage db_sync</code> are no longer supported, only upgrades are supported.<br />
* Features that were "extensions" in previous releases (OAuth delegation, Federated Identity support, Endpoint Policy, etc) are now enabled by default.<br />
* If you're running keystone behind a proxy, check out the new <code>secure_proxy_ssl_header</code> config option<br />
* Several configuration options have been deprecated, renamed, or moved to new sections. Review your <code>keystone.conf</code> file against the current sample configuration file.<br />
* Domain name information is now available to be used in policy rules with the attribute <code>domain_name</code>.<br />
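The switch from full module paths to entrypoint names described above can be illustrated with a keystone.conf fragment; the entrypoint style is per the notes, while the commented legacy class path is an approximation:

```ini
# /etc/keystone/keystone.conf
[identity]
# Liberty style: entrypoint name
driver = sql
# Pre-Liberty style: full module path (the exact legacy class path
# shown here is an approximation, no longer needed in Liberty)
# driver = keystone.identity.backends.sql.Identity
```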
<br />
=== Deprecations ===<br />
<br />
* Running Keystone in Eventlet remains deprecated and will be removed in the Mitaka release<br />
* Using LDAP as the resource backend, i.e for projects and domains, is now deprecated and will be removed in the Mitaka release<br />
* Using the full path to the driver class is deprecated in favor of using the entrypoint. In the Mitaka release, the entrypoint must be used.<br />
* In the [resource] and [role] sections of the <code>keystone.conf</code> file, not specifying the driver and using the assignment driver is deprecated. In the Mitaka release, the resource and role drivers will default to the sql driver.<br />
* In <code>keystone-paste.ini</code>, using paste.filter_factory is deprecated in favor of the "use" directive, specifying an entrypoint.<br />
* Not specifying a domain during a create user, group or project call, which relied on falling back to the default domain, is now deprecated and will be removed in the N release.<br />
* Certain deprecated methods from the assignment manager were removed in favor of the same methods in the [resource] and [role] manager.<br />
<br />
== OpenStack Block Storage (Cinder) ==<br />
<br />
=== Key New Features ===<br />
* A generic image caching solution, so popular VM images can be cached and copied-on-write to a new volume. [http://docs.openstack.org/admin-guide-cloud/blockstorage_image_volume_cache.html Read docs for more info]<br />
* Non-disruptive backups [http://docs.openstack.org/admin-guide-cloud/blockstorage_volume_backups.html Read docs for more info].<br />
* Ability to clone consistency groups of volumes [http://docs.openstack.org/admin-guide-cloud/blockstorage-consistency-groups.html Read docs for more info].<br />
* List capabilities of a volume backend (fetch extra-specs)<br />
* Nested quotas<br />
<br />
=== Upgrade Notes ===<br />
<br />
=== Deprecations ===<br />
<br />
== OpenStack Orchestration (Heat) ==<br />
<br />
=== New Features ===<br />
<br />
==== Convergence ====<br />
Convergence is a new orchestration engine which is maturing in the Heat tree. In Liberty, the benefits of using the convergence engine are:<br />
* Greater parallelization of resource actions (for better scaling of large templates)<br />
* The ability to do a stack-update whilst there is already an update in-progress<br />
* Better handling of heat-engine failures (still WIP)<br />
<br />
The convergence engine can be enabled by setting convergence_engine=true in the [DEFAULT] section of /etc/heat/heat.conf, then restarting heat-engine. Once this has been done, any subsequently created stack will use the convergence engine, while operations on existing stacks will continue to use the traditional engine.<br />
<br />
Convergence has '''not''' been production tested and thus should be considered '''beta''' quality - use with caution. For the Liberty release we recommend enabling convergence for the purposes of evaluation and scale testing. We will be considering making convergence the default engine in the Mitaka cycle. Convergence specific bugs are tracked in launchpad with the [https://bugs.launchpad.net/heat/+bugs?field.tag=convergence-bugs convergence-bugs tag].<br />
<br />
==== Conditional resource exposure ====<br />
Only resources for services actually installed in the cloud are made available to users. Operators can further control which resources users may use with standard policy rules in [https://github.com/openstack/heat/blob/master/etc/heat/policy.json#L80 policy.json, on a per-resource-type basis].<br />
<br />
==== heat_template_version: 2015-10-15 ====<br />
<br />
2015-10-15 indicates that the YAML document is a HOT template and it may contain features added and/or removed up until the Liberty release. <br />
* Removes the Fn::Select function (path based [http://docs.openstack.org/developer/heat/template_guide/hot_spec.html#get-attr get_attr]/[http://docs.openstack.org/developer/heat/template_guide/hot_spec.html#get-param get_param] references should be used instead). <br />
* If no <attribute name> is specified for calls to [http://docs.openstack.org/developer/heat/template_guide/hot_spec.html#get-attr get_attr], a dict of all attributes is returned, e.g. { get_attr: [<resource name>]}. <br />
* Adds new [http://docs.openstack.org/developer/heat/template_guide/hot_spec.html#str-split str_split] intrinsic function <br />
* Adds support for passing multiple lists to the existing [http://docs.openstack.org/developer/heat/template_guide/hot_spec.html#list-join list_join] function.<br />
* Adds support for parsing map/list data to [http://docs.openstack.org/developer/heat/template_guide/hot_spec.html#str-replace str_replace] and [http://docs.openstack.org/developer/heat/template_guide/hot_spec.html#list-join list_join] (they will be json serialized automatically)<br />
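The new template functions above can be combined in a small HOT snippet; this sketch follows the documented signatures (str_split takes a delimiter, a string, and an optional index):

```yaml
heat_template_version: 2015-10-15

parameters:
  csv_list:
    type: string
    default: alpha,beta,gamma

outputs:
  first_item:
    # str_split with an index returns a single element
    value: { str_split: [',', { get_param: csv_list }, 0] }
  rejoined:
    # list_join over the full split result
    value: { list_join: ['-', { str_split: [',', { get_param: csv_list }] }] }
```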
<br />
==== REST API/heatclient additions ====<br />
* Stacks can now be assigned a set of tags, and stack-list can filter on those tags<br />
* "heat stack-preview ..." will return a preview of changes for a proposed stack-update<br />
* "heat template-validate --show-nested ..." will also validate all template resources and return nested data useful for building user interfaces<br />
* "heat resource-type-template --template-type hot ..." generates a template in HOT format<br />
* "heat resource-type-list" only shows types available to the user, and can filter results by name, version and support_status<br />
* "heat template-version-list" lists available template versions<br />
* "heat template-function-list ..." lists available functions for a template version<br />
<br />
==== Enhancements to existing resources ====<br />
* Software deployments can now use Zaqar for [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Nova::Server-prop-software_config_transport deploying software data] and [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::SoftwareDeployment-prop-signal_transport signalling back to Heat]<br />
* Stack actions are now performed on remote [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::Stack OS::Heat::Stack] resources<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Nova::Server OS::Nova::Server] now supports deletion_policy: Snapshot <br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::ResourceGroup-updpolicy OS::Heat::ResourceGroup update_policy] now supports specifying [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::ResourceGroup-prop-batch_create batch_create] and [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::ResourceGroup-prop-rolling_update rolling_update] options<br />
<br />
==== New resources ====<br />
The following new resources are now distributed with the Heat release:<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Barbican::Order OS::Barbican::Order] [1]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Barbican::Secret OS::Barbican::Secret] [1]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Ceilometer::GnocchiAggregationByMetricsAlarm OS::Ceilometer::GnocchiAggregationByMetricsAlarm] [1]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Ceilometer::GnocchiAggregationByResourcesAlarm OS::Ceilometer::GnocchiAggregationByResourcesAlarm] [1]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Ceilometer::GnocchiResourcesAlarm OS::Ceilometer::GnocchiResourcesAlarm] [1]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Cinder::VolumeType OS::Cinder::VolumeType] [2]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Designate::Domain OS::Designate::Domain]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Designate::Record OS::Designate::Record]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::None OS::Heat::None]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::TestResource OS::Heat::TestResource]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Keystone::Endpoint OS::Keystone::Endpoint]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Keystone::Group OS::Keystone::Group] [2]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Keystone::GroupRoleAssignment OS::Keystone::GroupRoleAssignment]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Keystone::Project OS::Keystone::Project] [2]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Keystone::Role OS::Keystone::Role] [2]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Keystone::Service OS::Keystone::Service]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Keystone::User OS::Keystone::User] [2]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Keystone::UserRoleAssignment OS::Keystone::UserRoleAssignment]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Magnum::BayModel OS::Magnum::BayModel]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Manila::SecurityService OS::Manila::SecurityService]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Manila::Share OS::Manila::Share]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Manila::ShareNetwork OS::Manila::ShareNetwork]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Manila::ShareType OS::Manila::ShareType]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Mistral::CronTrigger OS::Mistral::CronTrigger]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Mistral::Workflow OS::Mistral::Workflow]<br />
* [http://docs.openstack.org/developer/heat/template_guide/unsupported.html#OS::Monasca::AlarmDefinition OS::Monasca::AlarmDefinition] [4]<br />
* [http://docs.openstack.org/developer/heat/template_guide/unsupported.html#OS::Monasca::Notification OS::Monasca::Notification] [4]<br />
* [http://docs.openstack.org/developer/heat/template_guide/unsupported.html#OS::Neutron::ExtraRoute OS::Neutron::ExtraRoute] [3]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Nova::Flavor OS::Nova::Flavor] [2]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Sahara::DataSource OS::Sahara::DataSource]<br />
<br />
[1] These existed in Kilo as contrib resources because they were for non-integrated projects. These resources are now distributed with Heat as Big Tent projects.<br />
<br />
[2] These existed in Kilo as contrib resources because they require a user with an admin role. They are now distributed with Heat. Operators now have the ability to hide them from under-privileged users by modifying policy.json (for reference, OS::Nova::Flavor is hidden from non-admin users in the default policy file supplied).<br />
<br />
[3] These existed in Kilo as contrib resources because they used an approach not endorsed by the Heat project. They are now distributed with Heat and documented as UNSUPPORTED.<br />
<br />
[4] These resources are for projects which are not yet OpenStack Big Tent projects, so they are documented as UNSUPPORTED.<br />
<br />
With the new OS::Keystone::* resources it is now possible for cloud operators to use heat templates to manage Keystone service catalog entries and users.<br />
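A minimal HOT sketch of such a template follows; the resource type names are from the list above, but the specific property names are assumptions to be checked against the Resource Type Reference:

```yaml
heat_template_version: 2015-10-15

resources:
  orchestration_service:
    type: OS::Keystone::Service
    properties:
      # property names illustrative; see the Resource Type Reference
      name: heat
      type: orchestration

  deploy_user:
    type: OS::Keystone::User
    properties:
      name: deploy-user
      enabled: true
```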
<br />
==== Deprecated Resource Properties ====<br />
Many resource properties have previously been documented as DEPRECATED. 15 of these properties are now flagged as HIDDEN, which means they will no longer be documented but existing stacks and templates will continue to work after a Heat upgrade. The [http://docs.openstack.org/developer/heat/template_guide/openstack.html Resource Type Reference] should be consulted to determine available resource properties and attributes.<br />
<br />
=== Upgrade notes ===<br />
<br />
==== Configuration Changes ====<br />
Notable changes to the /etc/heat/heat.conf [DEFAULT] section:<br />
* hidden_stack_tags has been added; stacks containing these tag names will be hidden from stack-list results (defaults to data-processing-cluster, which hides Sahara-created stacks)<br />
* instance_user was deprecated and is now removed entirely. Nova servers created with the OS::Nova::Server resource will now boot with the default user set up by the cloud image. AWS::EC2::Instance still creates "ec2-user"<br />
* max_resources_per_stack can now be set to -1 to disable enforcement<br />
* enable_cloud_watch_lite is now false by default as this REST API is deprecated<br />
* default_software_config_transport has gained the option ZAQAR_MESSAGE<br />
* default_deployment_signal_transport has gained the option ZAQAR_SIGNAL<br />
* auth_encryption_key is now documented as requiring exactly 32 characters<br />
* list_notifier_drivers was deprecated and is now removed<br />
* policy options have moved to the [oslo_policy] section<br />
* use_syslog_rfc_format is deprecated and now defaults to true<br />
<br />
Notable changes to other sections of heat.conf:<br />
* [clients_keystone] auth_uri has been added to specify the unversioned keystone url<br />
* [heat_api] workers now defaults to 4 (was previously 0, which created a worker per host CPU)<br />
<br />
The policy file /etc/heat/policy.json can now be configured with per-resource-type access policies, for example:<br />
"resource_types:OS::Nova::Flavor": "rule:context_is_admin"<br />
<br />
==== Upgrading from Kilo to Liberty ====<br />
Progress has been made on supporting live SQL migrations; however, it is still recommended to bring down the Heat service for the duration of the upgrade. Downward SQL schema migrations are no longer supported, so a rollback to Kilo will require restoring a snapshot of the pre-upgrade database.<br />
<br />
== OpenStack Search (Searchlight) ==<br />
<br />
This is the first release of Searchlight, which is intended to dramatically improve the search capabilities and performance of various OpenStack cloud services by offloading user search queries. It provides Keystone-RBAC-based search across OpenStack services by indexing their data into ElasticSearch and providing a security layer on top of incoming search queries. ElasticSearch is a search server based on Lucene; it provides a distributed, scalable, near-real-time, faceted, multitenant-capable full-text search engine with a RESTful web interface.<br />
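Because the API accepts native ElasticSearch queries, a search request body can be built as a plain mapping; this sketch follows the resource-type-plus-query shape described above, but treat the exact key names and endpoint as assumptions to verify against the Searchlight API docs:

```python
import json

def build_search_request(resource_type, field, value, limit=10):
    """Sketch of a Searchlight search body: an OpenStack resource type
    plus a native ElasticSearch query. Keystone RBAC filtering is
    applied server-side on top of whatever query the user sends."""
    return {
        "type": resource_type,               # e.g. OS::Nova::Server
        "query": {"match": {field: value}},  # standard ES match query
        "limit": limit,
    }

body = build_search_request("OS::Nova::Server", "name", "web")
print(json.dumps(body, indent=2))
```

The body would then be POSTed to the Searchlight search endpoint with a Keystone token in the request headers.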
<br />
* [https://wiki.openstack.org/wiki/Searchlight Project Wiki]<br />
<br />
=== Key New Features ===<br />
* [http://docs.openstack.org/developer/searchlight/searchlightapi.html Searchlight Search API] OpenStack Resource Type based API providing native ElasticSearch query support<br />
* [http://docs.openstack.org/developer/searchlight/indexingservice.html#bulk-indexing Bulk Indexing CLI] searchlight-manage indexing command line interface<br />
* [http://docs.openstack.org/developer/searchlight/indexingservice.html#incremental-updates Incremental Notification based updates]<br />
* [http://docs.openstack.org/developer/searchlight/indexingservice.html#search-plugins Resource Type Plugin system] for adding and managing resource indexing and search<br />
* [https://github.com/openstack/searchlight/tree/master/devstack Devstack deployment]<br />
<br />
=== Key New Resource Types Indexed ===<br />
* [http://docs.openstack.org/developer/searchlight/plugins/nova.html OS::Nova::Server] Nova server instances<br />
* [http://docs.openstack.org/developer/searchlight/plugins/glance.html OS::Glance::Image & OS::Glance::Metadef] Glance Images and Metadata Definitions<br />
* [http://docs.openstack.org/developer/searchlight/plugins/designate.html OS::Designate::Zone & OS::Designate::RecordSet] Designate Domain and Record Sets<br />
<br />
=== Upgrade Notes ===<br />
<br />
N/A<br />
<br />
=== Deprecations ===<br />
<br />
N/A<br />
<br />
</translate></div>Travis Tripphttps://wiki.openstack.org/w/index.php?title=ReleaseNotes/Liberty&diff=91871ReleaseNotes/Liberty2015-10-06T23:37:32Z<p>Travis Tripp: Added Searchlight</p>
<hr />
<div><languages /><br />
<translate><br />
<br />
[[Category:Liberty|Release Note]]<br />
[[Category:Release Note|Liberty]]<br />
<br />
= OpenStack Liberty Release Notes =<br />
<br />
<div style="column-count:3;-moz-column-count:3;-webkit-column-count:3"><br />
__TOC__<br />
</div><br />
<br />
== OpenStack Networking (Neutron) ==<br />
<br />
=== New Features ===<br />
* Neutron now supports IPv6 Prefix Delegation for the automatic assignment of CIDRs to IPv6 subnets. For more information on the usage and configuration of this feature, see the [http://docs.openstack.org/networking-guide/adv_config_ipv6.html#prefix-delegation OpenStack Networking Guide].<br />
* Neutron now exposes a QoS API, initially offering bandwidth limitation on the port level. The API, CLI, configuration and additional information may be found here [http://docs.openstack.org/developer/neutron/devref/quality_of_service.html].<br />
* Router high availability (L3 HA / VRRP) now works when layer 2 population (l2pop) is enabled [https://bugs.launchpad.net/neutron/+bug/1365476].<br />
* VPNaaS reference drivers now work with HA routers.<br />
* Networks used for VRRP traffic for HA routers may now be configured to use a specific segmentation type or physical network tag [https://bugs.launchpad.net/neutron/+bug/1481443].<br />
* The OVS agent may now be restarted without affecting data plane connectivity.<br />
* Neutron now offers role-based access control (RBAC) for networks [http://specs.openstack.org/openstack/neutron-specs/specs/liberty/rbac-networks.html].<br />
* The LBaaS V2 reference driver is now based on Octavia, an operator-grade, scalable and reliable load balancer platform.<br />
* LBaaS V2 API is no longer experimental. It is now stable.<br />
* Neutron now supports starting an agent that is excluded from automatic scheduling but remains available for manual scheduling, so that a deployer can test an agent before enabling it [https://github.com/openstack/neutron-specs/blob/master/specs/liberty/enable-new-agents.rst#user-documentation].<br />
* Neutron now has a pluggable IP address management framework, enabling the use of alternate or third-party IPAM. The original, non-pluggable version of IPAM is enabled by default.<br />
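As a sketch of the new QoS API mentioned above, the request bodies for creating a policy and attaching a bandwidth-limit rule might look like the following. The resource names follow the Liberty QoS documentation, but the values (and the policy name) are illustrative assumptions, not a definitive client.<br />
<br />
```python
# Illustrative request body for creating a QoS policy (POST to the
# Neutron QoS policies endpoint). Name and values are examples only.
qos_policy = {"policy": {"name": "bw-limiter", "shared": False}}

# A bandwidth-limit rule is then created against that policy; the
# limits are expressed in kilobits per second.
bandwidth_rule = {
    "bandwidth_limit_rule": {
        "max_kbps": 3000,       # sustained limit
        "max_burst_kbps": 300,  # burst allowance
    }
}
```
<br />
The rule is applied at the port level by associating the policy with a port, per the QoS developer reference linked above.<br />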
<br />
=== Deprecated and Removed Plugins and Drivers ===<br />
* The metaplugin is removed in the Liberty release.<br />
* The IBM SDN-VE monolithic plugin is removed in the Liberty release.<br />
* The Cisco N1kV monolithic plugin is removed in the Liberty release (replaced by the ML2 mechanism driver).<br />
<br />
=== Deprecated Features ===<br />
* The FWaaS API is marked as experimental for Liberty. Further, the current API will be removed in Mitaka and replaced with a new FWaaS API that is currently under development.<br />
* The LBaaS V1 API is marked as deprecated and is planned to be removed in a future release. Going forward, the LBaaS V2 API should be used.<br />
<br />
=== Performance Considerations ===<br />
* The stock Trusty Tahr kernel (3.13) shows linear performance degradation when running "ip netns exec" as the number of namespaces increases. In cases where scale is important, a later version kernel (e.g. 3.19) should be used.<br />
* Creating Neutron-LBaaS load balancers in environments without hardware virtualization may be slow when using the Octavia driver. This is due to QEMU using the TCG accelerator versus the KVM accelerator in environments without hardware virtualization available. We recommend enabling hardware virtualization on your compute nodes or enabling nested virtualization when using the Octavia driver inside a virtual environment. See the following link for details on setting up nested virtualization for DevStack running inside KVM: http://docs.openstack.org/developer/devstack/guides/devstack-with-nested-kvm.html<br />
<br />
== OpenStack Compute (Nova) ==<br />
<br />
=== Upgrade Notes ===<br />
* If you are coming from Kilo stable, please make sure you have fully upgraded to the latest release of that lineage before deploying Liberty. Due to bug https://bugs.launchpad.net/nova/+bug/1474074 versions of Kilo from before the fix will be problematic when talking to Liberty nodes.<br />
* Allocation ratios for RAM and CPU are now defined within the nova-compute service (i.e. per compute node), so they must also be provided to the scheduler service. The behaviour depends on which release a compute node is running: ''if the compute node is running Kilo'', the CPU and RAM allocation ratios for that node will be the defaults from the controller's nova.conf file; ''if the compute node is running Liberty'', a per-compute allocation ratio can be set for both CPU and RAM. To let the operator provide the allocation ratios to all compute nodes, the default allocation ratio in nova.conf is set to 0.0 (even for the controller). That does not mean the allocation ratios will actually be 0.0, just that the operator needs to provide them '''before the next release (i.e. Mitaka)'''. To be clear, the default allocation ratios are still 16.0 for cpu_allocation_ratio and 1.5 for ram_allocation_ratio.<br />
* nova-compute should be upgraded to Liberty code before upgrading Neutron services per the new "network-vif-deleted" event: https://review.openstack.org/#/c/187871/<br />
* Rootwrap filters must be updated after release to add the touch command.<br />
** There is a race condition between imagebackend and imagecache, described in Launchpad bug [https://bugs.launchpad.net/nova/+bug/1256838 1256838]. <br />
** If the base image is deleted by ImageCacheManager while imagebackend is copying the image to the instance path, the instance goes into an error state.<br />
** To resolve this issue, the 'touch' command must be added to compute.filters along with the change https://review.openstack.org/#/c/217579/.<br />
** In the case where libvirt has changed the base file ownership to libvirt-qemu while imagebackend is copying the image, a Permission denied error occurs when updating the file access time using os.utime. To resolve this, the base file access time must be updated with root user privileges using the touch command.<br />
* The ''DiskFilter'' is now part of the ''scheduler_default_filters'' in Liberty per https://review.openstack.org/#/c/207942/ .<br />
* Per https://review.openstack.org/#/c/103916/ you can now only map one vCenter cluster to a single nova-compute node.<br />
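The allocation ratio change described above can be sketched as a nova.conf fragment on a Liberty compute node; 0.0 is the sentinel meaning "fall back to the built-in defaults" until the operator sets explicit values:<br />
<br />
```ini
[DEFAULT]
# 0.0 is the new default and means "use the built-in values"
# (16.0 for CPU, 1.5 for RAM) until explicit ratios are provided.
# Operators must set real values before Mitaka.
cpu_allocation_ratio = 0.0
ram_allocation_ratio = 0.0
```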
<br />
=== Deprecations ===<br />
* The novaclient.v1_1 module has been deprecated [[https://git.openstack.org/cgit/openstack/python-novaclient/commit/?id=61ef35fe79e2a3a76987a92f9ee2db0bf1f6e651]][[https://git.openstack.org/cgit/openstack/python-novaclient/commit/?id=0a60aae852d2688861d0b4ba097a1a00529f0611]] since 2.21.0 and we are going to remove it in the first python-novaclient release in Mitaka.<br />
* Method `novaclient.client.get_client_class` is deprecated [[https://git.openstack.org/cgit/openstack/python-novaclient/commit/?id=147a1a6ee421f9a45a562f013e233d29d43258e4]] since 2.29.0 and we are going to remove it in Mitaka.<br />
<br />
== OpenStack Telemetry (Ceilometer) ==<br />
<br />
=== Key New Features ===<br />
* creation of Aodh to handle alarming service<br />
* improved metadata caching - reduced load of nova api polling<br />
* declarative meters - ability to generate meters by defining a meter definition template.<br />
* ceilometer+gnocchi integration - support for data publishing from Ceilometer to Gnocchi<br />
* mandatory limit - limit-restricted querying is now enforced: a limit must be explicitly provided on queries, otherwise the result set is restricted to a default limit<br />
* distributed, coordinated notification agents - support for workload partitioning across multiple notification agents<br />
* Events RBAC support<br />
* PowerVM hypervisor support<br />
* improved MongoDB query support<br />
<br />
==== Gnocchi Features ====<br />
<br />
==== Aodh Features ====<br />
* event alarms - ability to trigger an action when an event is received<br />
<br />
=== Upgrade Notes ===<br />
* The name of some middleware used by ceilometer changed in a backwards-incompatible way. Before upgrading, edit the paste.ini file for ceilometer to change "oslo.middleware" to "oslo_middleware". For example using <nowiki>sed -ri 's/oslo\.middleware/oslo_middleware/' api_paste.ini</nowiki><br />
<br />
=== Deprecation ===<br />
* Ceilometer Alarms is deprecated in favour of Aodh<br />
* The RPC publisher and collector are deprecated in favour of the topic-based notifier publisher<br />
* Non-metric meters remain deprecated and are planned for removal<br />
<br />
== OpenStack Identity (Keystone) ==<br />
<br />
=== Key New Features ===<br />
<br />
* '''''Experimental''''': Domain-specific configuration options can now be stored in SQL instead of configuration files, using the new REST APIs.<br />
* '''''Experimental''''': Keystone now supports tokenless authorization with X.509 SSL client certificates.<br />
* Configuring per-Identity Provider WebSSO is now supported.<br />
* <code>openstack_user_domain</code> and <code>openstack_project_domain</code> attributes were added to SAML assertion in order to map user and project domains, respectively.<br />
* Credentials list call can now have its results filtered by credential type.<br />
* Support was improved for out-of-tree drivers by defining stable Driver Interfaces.<br />
* Several features were hardened, including Fernet tokens, Federation, Domain specific configurations from database and Role Assignments.<br />
* Certain options in keystone.conf now have choices, which determine if the user's setting is valid.<br />
<br />
=== Upgrade Notes ===<br />
<br />
* The EC2 token middleware, deprecated in Juno, is no longer available in keystone. It's been moved to the keystonemiddleware package.<br />
* The compute_port configuration option, deprecated in Juno, is no longer available.<br />
* The XML middleware stub has been removed, so references to it must be removed from the <code>keystone-paste.ini</code> configuration file.<br />
* stats_monitoring and stats_reporting paste filters have been removed, so references to them must be removed from the <code>keystone-paste.ini</code> configuration file.<br />
* The external authentication plugins ExternalDefault, ExternalDomain, LegacyDefaultDomain, and LegacyDomain, deprecated in Icehouse, are all no longer available.<br />
* <code>keystone.conf</code> now references entrypoint names for drivers, as such the drivers are now specified like "sql", "ldap", "uuid", etc., rather than the full module path. See the sample configuration file for examples.<br />
* Similarly to the above, we now expose entrypoints for the <code>keystone-manage</code> command instead of a file.<br />
* Schema downgrades via <code>keystone-manage db_sync</code> are no longer supported, only upgrades are supported.<br />
* Features that were "extensions" in previous releases (OAuth delegation, Federated Identity support, Endpoint Policy, etc) are now enabled by default.<br />
* If you're running keystone behind a proxy, check out the new <code>secure_proxy_ssl_header</code> config option.<br />
* Several configuration options have been deprecated, renamed, or moved to new sections. Review your <code>keystone.conf</code> file against the current sample configuration file.<br />
* Domain name information is now available to be used in policy rules with the attribute <code>domain_name</code>.<br />
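The entrypoint-style driver configuration described above can be illustrated with a keystone.conf fragment; the commented line shows the old Kilo-style full class path (the exact class path is an example for illustration):<br />
<br />
```ini
[identity]
# Kilo style (full module path), deprecated:
# driver = keystone.identity.backends.sql.Identity
# Liberty style (entrypoint name):
driver = sql
```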
<br />
=== Deprecations ===<br />
<br />
* Running Keystone in Eventlet remains deprecated and will be removed in the Mitaka release<br />
* Using LDAP as the resource backend, i.e for projects and domains, is now deprecated and will be removed in the Mitaka release<br />
* Using the full path to the driver class is deprecated in favor of using the entrypoint. In the Mitaka release, the entrypoint must be used.<br />
* In the [resource] and [role] sections of the <code>keystone.conf</code> file, not specifying the driver and using the assignment driver is deprecated. In the Mitaka release, the resource and role drivers will default to the sql driver.<br />
* In <code>keystone-paste.ini</code>, using paste.filter_factory is deprecated in favor of the "use" directive, specifying an entrypoint.<br />
* Not specifying a domain during a create user, group or project call, which relied on falling back to the default domain, is now deprecated and will be removed in the N release.<br />
* Certain deprecated methods from the assignment manager were removed in favor of the same methods in the [resource] and [role] manager.<br />
<br />
== OpenStack Block Storage (Cinder) ==<br />
<br />
=== Key New Features ===<br />
* A generic image caching solution, so popular VM images can be cached and copied-on-write to a new volume. [http://docs.openstack.org/admin-guide-cloud/blockstorage_image_volume_cache.html Read docs for more info]<br />
* Non-disruptive backups [http://docs.openstack.org/admin-guide-cloud/blockstorage_volume_backups.html Read docs for more info].<br />
* Ability to clone consistency groups of volumes [http://docs.openstack.org/admin-guide-cloud/blockstorage-consistency-groups.html Read docs for more info].<br />
* List capabilities of a volume backend (fetch extra-specs)<br />
* Nested quotas<br />
<br />
=== Upgrade Notes ===<br />
<br />
=== Deprecations ===<br />
<br />
== OpenStack Orchestration (Heat) ==<br />
<br />
=== New Features ===<br />
<br />
==== Convergence ====<br />
Convergence is a new orchestration engine which is maturing in the heat tree. In Liberty the benefits of using the convergence engine are:<br />
* Greater parallelization of resource actions (for better scaling of large templates)<br />
* The ability to do a stack-update whilst there is already an update in-progress<br />
* Better handling of heat-engine failures (still WIP)<br />
<br />
The convergence engine can be enabled by setting convergence_engine=true in the [DEFAULT] section of /etc/heat/heat.conf, then restarting heat-engine. Once this has been done, any subsequently created stack will use the convergence engine, while operations on existing stacks will continue to use the traditional engine.<br />
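In heat.conf terms, opting in looks like this:<br />
<br />
```ini
[DEFAULT]
# Opt in to the convergence orchestration engine (beta quality in
# Liberty); restart heat-engine after changing this setting.
convergence_engine = true
```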
<br />
Convergence has '''not''' been production tested and thus should be considered '''beta''' quality - use with caution. For the Liberty release we recommend enabling convergence for the purposes of evaluation and scale testing. We will be considering making convergence the default engine in the Mitaka cycle. Convergence specific bugs are tracked in launchpad with the [https://bugs.launchpad.net/heat/+bugs?field.tag=convergence-bugs convergence-bugs tag].<br />
<br />
==== Conditional resource exposure ====<br />
Only resources for services actually installed in the cloud are made available to users. Operators can further control which resources users may use with standard policy rules in [https://github.com/openstack/heat/blob/master/etc/heat/policy.json#L80 policy.json], on a per-resource-type basis.<br />
<br />
==== heat_template_version: 2015-10-15 ====<br />
<br />
2015-10-15 indicates that the YAML document is a HOT template and it may contain features added and/or removed up until the Liberty release. <br />
* Removes the Fn::Select function (path based [http://docs.openstack.org/developer/heat/template_guide/hot_spec.html#get-attr get_attr]/[http://docs.openstack.org/developer/heat/template_guide/hot_spec.html#get-param get_param] references should be used instead). <br />
* If no <attribute name> is specified for calls to [http://docs.openstack.org/developer/heat/template_guide/hot_spec.html#get-attr get_attr], a dict of all attributes is returned, e.g. { get_attr: [<resource name>]}. <br />
* Adds new [http://docs.openstack.org/developer/heat/template_guide/hot_spec.html#str-split str_split] intrinsic function <br />
* Adds support for passing multiple lists to the existing [http://docs.openstack.org/developer/heat/template_guide/hot_spec.html#list-join list_join] function.<br />
* Adds support for parsing map/list data to [http://docs.openstack.org/developer/heat/template_guide/hot_spec.html#str-replace str_replace] and [http://docs.openstack.org/developer/heat/template_guide/hot_spec.html#list-join list_join] (they will be json serialized automatically)<br />
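A minimal HOT sketch exercising the new intrinsic functions (parameter names and values are illustrative):<br />
<br />
```yaml
heat_template_version: 2015-10-15

parameters:
  subnets:
    type: string
    default: "10.0.0.0/24,10.0.1.0/24"

outputs:
  first_subnet:
    # str_split splits a string on a delimiter; the optional trailing
    # index selects a single element
    value: { str_split: [',', { get_param: subnets }, 0] }
  all_cidrs:
    # list_join now accepts multiple lists as input
    value: { list_join: [',', ['192.168.0.0/24'], { str_split: [',', { get_param: subnets }] }] }
```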
<br />
==== REST API/heatclient additions ====<br />
* Stacks can now be assigned a set of tags, and stack-list can filter on those tags<br />
* "heat stack-preview ..." will return a preview of changes for a proposed stack-update<br />
* "heat template-validate --show-nested ..." will also validate all template resources and return nested data useful for building user interfaces<br />
* "heat resource-type-template --template-type hot ..." generates a template in HOT format<br />
* "heat resource-type-list" only shows types available to the user, and can filter results by name, version and support_status<br />
* "heat template-version-list" lists available template versions<br />
* "heat template-function-list ..." lists available functions for a template version<br />
<br />
==== Enhancements to existing resources ====<br />
* Software deployments can now use Zaqar for [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Nova::Server-prop-software_config_transport deploying software data] and [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::SoftwareDeployment-prop-signal_transport signalling back to Heat]<br />
* Stack actions are now performed on remote [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::Stack OS::Heat::Stack] resources<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Nova::Server OS::Nova::Server] now supports deletion_policy: Snapshot <br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::ResourceGroup-updpolicy OS::Heat::ResourceGroup update_policy] now supports specifying [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::ResourceGroup-prop-batch_create batch_create] and [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::ResourceGroup-prop-rolling_update rolling_update] options<br />
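The new ResourceGroup update_policy options above can be sketched as follows; the option names follow the Liberty resource reference linked above, while the values and resource layout are illustrative:<br />
<br />
```yaml
heat_template_version: 2015-10-15

resources:
  web_group:
    type: OS::Heat::ResourceGroup
    update_policy:
      # replace members two at a time, pausing 30s between batches
      rolling_update:
        max_batch_size: 2
        pause_time: 30
    properties:
      count: 6
      resource_def:
        type: OS::Heat::TestResource
```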
<br />
==== New resources ====<br />
The following new resources are now distributed with the Heat release:<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Barbican::Order OS::Barbican::Order] [1]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Barbican::Secret OS::Barbican::Secret] [1]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Ceilometer::GnocchiAggregationByMetricsAlarm OS::Ceilometer::GnocchiAggregationByMetricsAlarm] [1]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Ceilometer::GnocchiAggregationByResourcesAlarm OS::Ceilometer::GnocchiAggregationByResourcesAlarm] [1]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Ceilometer::GnocchiResourcesAlarm OS::Ceilometer::GnocchiResourcesAlarm] [1]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Cinder::VolumeType OS::Cinder::VolumeType] [2]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Designate::Domain OS::Designate::Domain]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Designate::Record OS::Designate::Record]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::None OS::Heat::None]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::TestResource OS::Heat::TestResource]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Keystone::Endpoint OS::Keystone::Endpoint]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Keystone::Group OS::Keystone::Group] [2]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Keystone::GroupRoleAssignment OS::Keystone::GroupRoleAssignment]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Keystone::Project OS::Keystone::Project] [2]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Keystone::Role OS::Keystone::Role] [2]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Keystone::Service OS::Keystone::Service]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Keystone::User OS::Keystone::User] [2]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Keystone::UserRoleAssignment OS::Keystone::UserRoleAssignment]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Magnum::BayModel OS::Magnum::BayModel]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Manila::SecurityService OS::Manila::SecurityService]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Manila::Share OS::Manila::Share]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Manila::ShareNetwork OS::Manila::ShareNetwork]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Manila::ShareType OS::Manila::ShareType]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Mistral::CronTrigger OS::Mistral::CronTrigger]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Mistral::Workflow OS::Mistral::Workflow]<br />
* [http://docs.openstack.org/developer/heat/template_guide/unsupported.html#OS::Monasca::AlarmDefinition OS::Monasca::AlarmDefinition] [4]<br />
* [http://docs.openstack.org/developer/heat/template_guide/unsupported.html#OS::Monasca::Notification OS::Monasca::Notification] [4]<br />
* [http://docs.openstack.org/developer/heat/template_guide/unsupported.html#OS::Neutron::ExtraRoute OS::Neutron::ExtraRoute] [3]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Nova::Flavor OS::Nova::Flavor] [2]<br />
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Sahara::DataSource OS::Sahara::DataSource]<br />
<br />
[1] These existed in Kilo as contrib resources, as they were for non-integrated projects. These resources are now distributed with Heat as Big Tent projects.<br />
<br />
[2] These existed in Kilo as contrib resources, as they require a user with an admin role. They are now distributed with Heat. Operators now have the ability to hide them from under-privileged users by modifying policy.json (for reference, OS::Nova::Flavor is hidden from non-admin users in the default policy file supplied).<br />
<br />
[3] These existed in Kilo as contrib resources, as they used an approach not endorsed by the Heat project. They are now distributed with Heat and documented as UNSUPPORTED.<br />
<br />
[4] These resources are for projects which are not yet OpenStack Big Tent projects, so they are documented as UNSUPPORTED.<br />
<br />
With the new OS::Keystone::* resources, it is now possible for cloud operators to use heat templates to manage Keystone service catalog entries and users.<br />
<br />
==== Deprecated Resource Properties ====<br />
Many resource properties have previously been documented as DEPRECATED. 15 of these properties are now flagged as HIDDEN, which means they will no longer be documented but existing stacks and templates will continue to work after a heat upgrade. The [http://docs.openstack.org/developer/heat/template_guide/openstack.html Resource Type Reference] should be consulted to determine available resource properties and attributes.<br />
<br />
=== Upgrade notes ===<br />
<br />
==== Configuration Changes ====<br />
Notable changes to the /etc/heat/heat.conf [DEFAULT] section:<br />
* hidden_stack_tags has been added; stacks containing these tag names will be hidden from stack-list results (it defaults to data-processing-cluster, which hides sahara-created stacks)<br />
* instance_user was deprecated, and is now removed entirely. Nova servers created with the OS::Nova::Server resource will now boot with the default user configured in the cloud image. AWS::EC2::Instance still creates "ec2-user"<br />
* max_resources_per_stack can now be set to -1 to disable enforcement<br />
* enable_cloud_watch_lite is now false by default as this REST API is deprecated<br />
* default_software_config_transport has gained the option ZAQAR_MESSAGE<br />
* default_deployment_signal_transport has gained the option ZAQAR_SIGNAL<br />
* auth_encryption_key is now documented as requiring exactly 32 characters<br />
* list_notifier_drivers was deprecated and is now removed<br />
* policy options have moved to the [oslo_policy] section<br />
* use_syslog_rfc_format is deprecated and now defaults to true<br />
<br />
Notable changes to other sections of heat.conf:<br />
* [clients_keystone] auth_uri has been added to specify the unversioned keystone url<br />
* [heat_api] workers now defaults to 4 (was previously 0, which created a worker per host CPU)<br />
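Taken together, a heat.conf reflecting the changes above might contain the following; these are example settings for the new and changed options, not a complete list of defaults, and the keystone URL is a placeholder:<br />
<br />
```ini
[DEFAULT]
# stacks tagged with these names are hidden from stack-list results
hidden_stack_tags = data-processing-cluster
# -1 disables enforcement of the per-stack resource limit
max_resources_per_stack = -1
# the CloudWatch Lite REST API is deprecated, now off by default
enable_cloud_watch_lite = false
# new Zaqar-based transports (opt-in)
default_software_config_transport = ZAQAR_MESSAGE
default_deployment_signal_transport = ZAQAR_SIGNAL

[clients_keystone]
# unversioned keystone endpoint
auth_uri = http://keystone.example.com:5000

[heat_api]
# fixed worker count (previously one worker per host CPU)
workers = 4
```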
<br />
The policy file /etc/heat/policy.json can now be configured with per-resource-type access policies, for example:<br />
"resource_types:OS::Nova::Flavor": "rule:context_is_admin"<br />
<br />
==== Upgrading from Kilo to Liberty ====<br />
Progress has been made on supporting live sql migrations, however it is still recommended to bring down the heat service for the duration of the upgrade. Downward SQL schema migrations are no longer supported, so a rollback to Kilo will require restoring a snapshot of the pre-upgrade database.<br />
<br />
== OpenStack Search (Searchlight) ==<br />
<br />
This is the first release of Searchlight, which is intended to dramatically improve the search capabilities and performance of various OpenStack cloud services by offloading user search queries. It provides Keystone RBAC-based search across OpenStack services by indexing their data into ElasticSearch and providing a security layer on top of incoming search queries. ElasticSearch is a search server based on Lucene; it provides a distributed, scalable, near-real-time, faceted, multitenant-capable full-text search engine with a RESTful web interface.<br />
<br />
* [https://wiki.openstack.org/wiki/Searchlight Project Wiki]<br />
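As a sketch of the native ElasticSearch query support described above, a search request body scoped to a resource type might look like the following. The overall shape (a resource "type" plus an ElasticSearch query DSL body) follows the Searchlight API docs linked below, but the queried field name and values are illustrative assumptions:<br />
<br />
```python
import json

# Hypothetical Searchlight search request: match Nova servers whose
# name contains "web", returning at most 10 results. RBAC filtering
# is applied server-side on top of this query.
search_request = {
    "type": "OS::Nova::Server",
    "query": {"match": {"name": "web"}},
    "limit": 10,
}

# The body is sent as JSON to the Searchlight search endpoint.
body = json.dumps(search_request)
```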
<br />
=== Key New Features ===<br />
* [http://docs.openstack.org/developer/searchlight/searchlightapi.html Searchlight Search API] OpenStack Resource Type based API providing native ElasticSearch query support<br />
* [http://docs.openstack.org/developer/searchlight/indexingservice.html#bulk-indexing Bulk Indexing CLI] searchlight-manage indexing command line interface<br />
* [http://docs.openstack.org/developer/searchlight/indexingservice.html#incremental-updates Incremental Notification based updates]<br />
* [http://docs.openstack.org/developer/searchlight/indexingservice.html#search-plugins Resource Type Plugin system] for adding and managing resource indexing and search<br />
* [https://github.com/openstack/searchlight/tree/master/devstack Devstack deployment]<br />
<br />
=== Key New Resource Types Indexed ===<br />
* [http://docs.openstack.org/developer/searchlight/plugins/nova.html OS::Nova::Server] Nova server instances<br />
* [http://docs.openstack.org/developer/searchlight/plugins/glance.html OS::Glance::Image & OS::Glance::Metadef] Glance Images and Metadata Definitions<br />
* [http://docs.openstack.org/developer/searchlight/plugins/designate.html OS::Designate::Zone & OS::Designate::RecordSet] Designate Domain and Record Sets<br />
<br />
=== Upgrade Notes ===<br />
<br />
N/A<br />
<br />
=== Deprecations ===<br />
<br />
N/A<br />
<br />
</translate></div>Travis Tripp