Congress (wiki page, edited 2016-10-27 by Thinrichs)

== Policy as a service ("Congress") ==<br />
Here are some external resources with detailed information about Congress.<br />
<br />
* Code<br />
** Server<br />
*** [https://github.com/openstack/congress Server source code (github)] <br />
*** [https://launchpad.net/congress Server bugs (Launchpad)]<br />
*** [https://review.openstack.org/#/q/status:open+project:openstack/congress,n,z Outstanding server changes (Gerrit)]<br />
** Python client<br />
*** [https://github.com/openstack/python-congressclient Python client source code (github)]<br />
*** [https://launchpad.net/python-congressclient Python client bugs (Launchpad)]<br />
*** [https://review.openstack.org/#/q/status:open+project:openstack/python-congressclient,n,z Outstanding python client changes (Gerrit)]<br />
** [https://github.com/openstack/congress-specs Specs: suggestions for additional features for both server and client (github)]<br />
<br />
* Documents<br />
** [https://docs.google.com/a/vmware.com/document/d/1f2xokl9Tc47aV67KEua0PBRz4jsdSDLXth7dYe-jz6Q/edit Overall design doc]<br />
** [https://docs.google.com/document/d/1K9RkQuBSPN7Z2TmKfok7mw3E24otEGo8Pnsemxd5544/edit Data integration design doc]<br />
** [https://goo.gl/1E5MeY API design doc]<br />
** [https://drive.google.com/open?id=1ksDilJYXV-5AXWON8PLMedDKr9NpS8VbT0jIy_MIEtI&authuser=0 Delegation to VM-migration engine design doc]<br />
** [http://ruleyourcloud.com/ Blog]<br />
** [https://docs.google.com/document/d/1ExDmT06vDZjzOPePYBqojMRfXodvsk0R8nRkX-zrkSw/edit# Proposed] and [https://docs.google.com/document/d/1w55wDighZvI9zKAtnO4jOfDMScmN7Qf5NHFtFdk8Sas/edit#heading=h.v7jiiyumrken Implemented] Use Cases<br />
<br />
* Material from recent summits<br />
** [https://goo.gl/W0Rhcv Tokyo Hands On Lab instructions]<br />
** [https://goo.gl/o062Kc Tokyo Hands On Lab virtual machine]<br />
** [https://drive.google.com/open?id=0ByDz-eYOtswSSlBZVWVkUC1PZHc Tokyo Hands On Lab slides]<br />
** [https://docs.google.com/file/d/0ByDz-eYOtswScTlmamlhLXpmTXc/edit Vancouver Summit Congress Overview slides]<br />
** [https://drive.google.com/open?id=0ByDz-eYOtswScUFUc1ZrVVhmQlk&authuser=0 Vancouver Delegation for VM placement slides]<br />
<br />
*Meetings<br />
** [http://eavesdrop.openstack.org/#Congress_Team_Meeting Meeting times]<br />
** [http://eavesdrop.openstack.org/meetings/congressteammeeting/ Meeting history]<br />
<br />
<br />
If you'd like more information, join #congress on freenode, or contact Tim Hinrichs (timothy.l.hinrichs@gmail.com).<br />
<br />
=== Mission ===<br />
Congress is an OpenStack project to provide policy as a service across any collection of cloud services in order to offer governance and compliance for dynamic infrastructures.<br />
<br />
=== Why Congress ===<br />
IT services will always need to be governed by, and brought into compliance with, business-level policies.<br />
<br />
In the past, policy was enforced manually, e.g. by someone sending an email asking for an application to be added to the network, secured by specific firewall entries, connected to agreed-on storage, and so on. In the cloud era, IT has become more agile: users expect immediate delivery of services, a level of responsiveness that is unattainable when every request must go through the team responsible for governance. Hence, manual enforcement is no longer feasible.<br />
<br />
Both enterprises and vendors have fielded engines for enforcing policy (semi-)automatically, creating a fragmented market in which enterprises reinvent the wheel while maintaining their own code, and vendors fail to meet enterprise needs, either for technical reasons or because their solutions require vertical integration and lock-in.<br />
<br />
The Congress policy service enables IT organizations to extend their OpenStack footprint by onboarding new applications while keeping the strong compliance and governance dictated by their own business policies, all while leveraging a community-driven implementation in which vendors can plug into a common interface.<br />
<br />
=== What is Congress ===<br />
Congress aims to provide an extensible open-source framework for governance and regulatory compliance across any collection of cloud services (e.g. application, network, compute and storage) within a dynamic infrastructure. It is a cloud service whose sole responsibility is policy enforcement.<br />
<br />
Congress aims to include the following functionality:<br />
* Allow cloud administrators and tenants to use a high-level, general-purpose, declarative language to describe business logic. The policy language does not include a fixed collection of policy types or built-in enforcement mechanisms; rather, a policy simply defines which states of the cloud are in compliance and which are not, where the state of the cloud is the collection of data provided by the cloud services available to Congress. Some examples:<br />
** Application A is only allowed to communicate with application B.<br />
** A virtual machine owned by tenant A should always have a public network connection if tenant A is part of group B.<br />
** Virtual machine A should never be provisioned in a different geographic region than storage B.<br />
* Offer a pluggable architecture that connects to any collection of cloud services<br />
* Enforce policy<br />
** Proactively: preventing violations before they occur<br />
** Reactively: correcting violations after they occur<br />
** Interactively: giving administrators insight into policy and its violations, e.g. identifying violations, explaining their causes, computing potential remediations, and simulating a sequence of changes.<br />
<br />
=== Policy Language ===<br />
The policy language for Congress is Datalog, which is essentially SQL with a syntax closer to traditional programming languages. This declarative language was chosen because its semantics are well known to a broad range of DevOps practitioners, yet its syntax is terser, making it better suited for expressing real-world policies. The core grammar is given below, followed by a sketch of an example policy.<br />
<br />
<policy> ::= <rule>*<br /><br />
<rule> ::= <atom> COLONMINUS <literal> (COMMA <literal>)*<br /><br />
<literal> ::= <atom> <br /><br />
<literal> ::= NOT <atom> <br /><br />
<atom> ::= TABLENAME LPAREN <term> (COMMA <term>)* RPAREN <br /><br />
<term> ::= INTEGER | FLOAT | STRING | VARIABLE <br /><br />
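As an illustrative sketch of this grammar, the second example policy from the previous section (a virtual machine owned by a tenant in group B must have a public network connection) might be written as the two rules below. The table names used here (nova:virtual_machine, nova:owner, nova:network, neutron:public_network, keystone:group_member) are hypothetical placeholders, since the tables actually available depend on which data source drivers are configured; error is simply a user-chosen table marking non-compliant VMs, and the string "group_b" stands in for whatever group B actually is.<br />
<br />
error(vm) :- nova:virtual_machine(vm), nova:owner(vm, tenant), keystone:group_member(tenant, "group_b"), not has_public_connection(vm)<br /><br />
has_public_connection(vm) :- nova:network(vm, net), neutron:public_network(net)<br /><br />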
<br />
=== Use Cases and Examples ===<br />
<br />
Detailed use cases can be viewed in the Google doc here: [https://goo.gl/RM3W6W Congress Use Cases]. Please feel free to add additional use cases that you would like Congress to support.<br />
<br />
Different examples of Congress policies that have been implemented can be viewed here: [https://goo.gl/UfeHEk Congress Examples]<br />
<br />
=== Future Features ===<br />
Here is a list of features that would be useful in future releases. We have not prioritized or assigned any of these, so if any of them sound like fun to you as a developer, let us know and we can explain more.<br />
<br />
* Basic Policy language implementation<br />
** Multi-threaded Datalog implementation<br />
** Bottom-up Datalog evaluation<br />
** Query optimization<br />
** Materialized implementation: automated view selection/inlining<br />
<br />
* Enhanced Policy language<br />
** ActiveDirectory facade<br />
** Syntax improvements (modals like insert/delete)<br />
** Support for compound objects (safe, stratified recursion)<br />
** Richer support for describing actions in Enforcement policy<br />
** Modules<br />
<br />
* Policy structure<br />
** Multi-tenant<br />
** Multi-stakeholder (tenants, finance, operations, etc.)<br />
<br />
* Enforcement<br />
** Execution of (rich) actions<br />
** Carve out subpolicies and push to other components, e.g. Neutron<br />
** Add consultation with Congress to other OS components, e.g. Nova/Neutron<br />
** Proper remediation enumeration with Classification+Action policies<br />
** Find ways of automatically choosing the proper remediation strategy (e.g. priorities/monotonicity)<br />
** Give the cloud owner a way of configuring how proactive/reactive/etc. enforcement should be, based on information from separate policies.<br />
<br />
* Libraries<br />
** Data source drivers for common OpenStack and non-OpenStack components<br />
** Encodings of HIPAA and other regulatory frameworks<br />
** Ontologies for different sectors, e.g. finance<br />
<br />
* Policy Analysis<br />
** Look for infinite loops through Enforcement policy (using Action policy)<br />
** Compare Access control policy and Classification policy for redundancy<br />
** Change impact analysis<br />
<br />
* Dashboard<br />
** IDE for policy (different levels: raw-Datalog, AD, checkbox-based)<br />
** List violations<br />
** Explain violations (step-by-step tracing through policy)<br />
** Simulate state change and action execution<br />
** Enumerate remediations for a given violation<br />
<br />
* Architecture and API<br />
** Formalize and implement full introspection and query APIs<br />
** Distribute across multiple nodes<br />
** Ensure Congress can use another Congress instance as a data source<br />
<br />
* Authentication and Access Control<br />
** Add authentication and access controls on API/dashboard<br />
** When remediating, which user(s) execute the actions? Does Congress need admin credentials for all of its cloud services, or are user credentials part of actions? Either way, we need proper storage for those credentials. Etc.<br />
<br />
=== Relationship to Other Projects ===<br />
<br />
'''Related OpenStack components'''<br />
* [https://wiki.openstack.org/wiki/Keystone Keystone]: Keystone is an identity service providing authentication and high-level authorization for OpenStack. Congress can leverage Keystone as an input for policies. For example, an auditor might want to ensure that the running system is consistent with current Keystone authorization decisions.<br />
<br />
* [https://wiki.openstack.org/wiki/Heat Heat]: Heat is an orchestration service for application provisioning and lifecycle management. Congress can ensure that applications managed by Heat are consistent with business policy.<br />
<br />
* [https://wiki.openstack.org/wiki/Mistral Mistral]: Mistral is a task management service, or, in other words, Workflow as a Service. Its primary use cases include Cron-for-the-cloud, execution of long-running tasks, and big data analysis. Congress could potentially utilize Mistral to execute actions that bring the cloud back into policy compliance.<br />
<br />
'''Policy Initiatives'''<br />
<br />
* [https://blueprints.launchpad.net/nova/+spec/solver-scheduler SolverScheduler (Nova blueprint)]: The SolverScheduler provides an interface for using different constraint solvers to solve placement problems for Nova. Depending on how it is applied, it could be used for initial provisioning, re-balancing loads, or both.<br />
<br />
* [https://github.com/openstack/gantt Gantt]: A scheduler framework for use by different OpenStack components. It used to be a [https://wiki.openstack.org/wiki/Meetings/Scheduler subgroup of Nova] focused on scheduling VMs based on resource utilization, and it includes a plugin framework for making arbitrary metrics available to the scheduler.<br />
<br />
* [https://docs.google.com/document/d/1ZbOFxAoibZbJmDWx1oOrOsDcov6Cuom5aaBIrupCD9E/edit?pli=1 Neutron policy group]: This group aims to add a policy API to Neutron, where tenants express policy between groups of networks, ports, etc., and that policy is enforced. Policy statements are of the form "for every network flow between groups A and B that satisfies these conditions, apply a constraint on that flow". The constraints that can be enforced on a flow will grow as the enforcement engine matures; currently, the constraints are Allow and Deny, but there are plans for quality-of-service constraints as well. <br />
<br />
* [https://wiki.openstack.org/wiki/OpenAttestation Open Attestation]: This project provides an SDK for verifying host integrity. It provides some policy-based management capabilities, though documentation is limited.<br />
<br />
* [https://blueprints.launchpad.net/nova/+spec/policy-based-scheduler Policy-based Scheduling Module (Nova blueprint)]: This effort aims to schedule Nova resources per client, per cluster of resources and per context (e.g. overload, time, etc.). A proof of concept was presented at the Demo Theater at OpenStack Juno Summit. <br />
<br />
* [https://docs.google.com/document/d/1DMsnGxQ3P-OwZCF3uxaUeEFaKX8LqUqmmgQ_7EVK7Y8/edit Tetris]: This effort provides condition-action policies (under certain conditions, execute these actions). It is intended to be a generic condition-action engine handling complex actions and optimization. This effort subsumes the [https://blueprints.launchpad.net/nova/+spec/resource-optimization-service Runtime Policies blueprint] within Nova. It also appears to subsume the [http://openstack-neat.org/ Neat] effort. Tetris and Congress have recently decided to merge because of their highly aligned goals and approaches.<br />
<br />
* [https://review.openstack.org/#/c/95907/6/specs/convergence.rst Convergence Engine (Heat spec)]: This effort separates the ideas of desired state and observed state for the objects Heat manages. The Convergence Engine will detect when the desired state and observed state differ and take action to eliminate those differences.<br />
<br />
* [http://docs.openstack.org/developer/swift/overview_policies.html Swift Storage Policies]: A Swift storage policy describes a virtual storage system that Swift implements with physical devices. Today each policy dictates how many partitions the storage system has, how many replicas of each object it should maintain, and the minimum amount of time that must elapse after a partition is moved before it can be moved again.<br />
<br />
* [https://wiki.openstack.org/wiki/Graffiti Graffiti]: Graffiti aims to store and query (hierarchical) metadata about OpenStack objects, e.g. tagging a Glance image with the software installed on that image. Currently, the team is working within other OpenStack projects to add user interfaces for people to create and query metadata and to store that metadata within the project's database. This project is about creating metadata, which could be useful for writing business policies, not about policies over that metadata.<br />
<br />
<br />
Congress is complementary to all of these efforts. It is intended to be a general-purpose policy component and hence could (potentially) express any of the policies described above. However, to enforce those policies Congress would carve off subpolicies and send them to the appropriate enforcement point in the cloud. For example, Congress might carve off the compute Load Balancing policy and send it to the Runtime Policy engine and/or carve off the networking policy and send it to the Neutron policy engine. Part of the goal of Congress is to give administrators a single place to write and inspect the policy being enforced throughout the datacenter or cloud, distribute the relevant portions of that policy to all of the available enforcement points, and monitor the state of the cloud to let administrators know if the cloud is in compliance. (Congress also attempts to correct policy violations when they occur, but optimization policies such as many of those addressed above will not be enforceable by Congress directly.)<br />
<br />
<br />
[Thanks to Jay Lau, Gokul Kandiraju, and others on the mailing list for help compiling this list.]<br />
<br />
=== How To Propose a New Feature ===<br />
To propose a new feature:<br />
# Create a [https://blueprints.launchpad.net/congress/+addspec blueprint] describing what your feature does and how it will work. Include just the name, title, and summary.<br />
# Ask at least one core reviewer to give you feedback. If the feature is complex or controversial, the core will ask you to develop a "spec" and add it to the congress-specs repo. A spec is a proxy for a blueprint that everyone in the community can comment on and help refine. If Launchpad, the software OpenStack uses for managing blueprints and bugs, made it possible for the community to add comments to a blueprint, we would not need specs at all.<br />
# Once your blueprint is approved, you may push code that implements that blueprint up to Gerrit, including the tag 'Implements-blueprint: <blueprint-name>' in the commit message (see the example below).<br />
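For illustration only (the subject line and the blueprint name example-datasource-driver are hypothetical), a commit message carrying the tag might look like this:<br />
<br />
Add a data source driver for the Example service<br />
<br />
Implements-blueprint: example-datasource-driver<br />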
<br />
If a core reviewer asks for a spec, here's how you create one (a command-line sketch of these steps follows the list).<br />
# Check out the [https://github.com/openstack/congress-specs congress-specs repo].<br />
# Create a new file, whose name is similar to the blueprint name, in the current release folder, e.g. congress-specs/specs/kilo/your-spec-here.rst<br />
# Use reStructuredText (RST) (an [http://sphinx-doc.org/rest.html RST tutorial]) to describe your new feature by filling out the [https://github.com/openstack/congress-specs/blob/master/specs/template.rst spec template]. <br />
# Push your changes to the congress-specs repo, a process that is explained [https://wiki.openstack.org/wiki/Gerrit_Workflow here]<br />
# Go back to your blueprint and add a link (e.g. https://github.com/openstack/congress-specs/tree/master/specs/kilo/your-spec-here.rst) to your spec in the Specification Location field on the Blueprint details. <br />
# Participate in the discussion and refinement of your feature via [https://review.openstack.org/#/q/status:open+project:openstack/congress-specs,n,z Gerrit reviews]<br />
# Eventually, core reviewers will either reject or approve your spec<br />
# If the spec is approved, a core reviewer will merge it into the repo and approve your blueprint. <br />
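A minimal command-line sketch of steps 1-4, assuming the git-review tooling described on the linked Gerrit Workflow page; the spec name your-spec-here and the branch name are placeholders, and you should edit the copied template to describe your feature before committing:<br />
<br />
git clone https://github.com/openstack/congress-specs<br />
cd congress-specs<br />
git checkout -b bp/your-spec-here<br />
cp specs/template.rst specs/kilo/your-spec-here.rst<br />
git add specs/kilo/your-spec-here.rst<br />
git commit -m "Add spec for your-spec-here"<br />
git review<br />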
<br />
Note: If your blueprint is either not approved or not implemented during a release cycle, it must be resubmitted for the next cycle.
<hr />
Design Summit/Newton/Etherpads (wiki page, edited 2016-04-22 by Thinrichs)

[[Category:Summit]]<br />
[[Category:Newton]]<br />
[[Category:Etherpad]]<br />
<br />
The grand list of all the Newton Design Summit sessions. Please include Date, Time, and links to etherpads when adding new content.<br />
<br />
<div style="column-count:3;-moz-column-count:3;-webkit-column-count:3"><br />
__TOC__<br />
</div><br />
== Event intro/closure ==<br />
* Tue 11:15: Design Summit 101 - https://etherpad.openstack.org/p/newton-design-summit-101<br />
<br />
<br />
==App Catalog==<br />
?<br />
<br />
== Barbican ==<br />
?<br />
<br />
== Cinder ==<br />
Official Schedule: https://www.openstack.org/summit/austin-2016/summit-schedule/global-search?t=Cinder%3A<br />
<br />
Planning/Topic Etherpad: https://etherpad.openstack.org/p/newton-cinder-summit-ideas<br />
<br />
'''Wednesday 27th'''<br />
* '''9:00am-9:40am'''<br />
** Replication Next Steps: https://etherpad.openstack.org/p/cinder-newton-replication<br />
* '''9:50am-10:30am'''<br />
** Active/Active HA: https://etherpad.openstack.org/p/cinder-newton-activeactiveha<br />
* '''11:00am-11:40am'''<br />
** Mitaka Recap, Part 1: https://etherpad.openstack.org/p/cinder-newton-mitakarecap<br />
* '''11:50am-12:30pm'''<br />
** Mitaka Recap, Part 2: https://etherpad.openstack.org/p/cinder-newton-mitakarecap<br />
* '''1:50pm-2:30pm'''<br />
** Rolling Upgrades: https://etherpad.openstack.org/p/cinder-newton-rollingupgrades<br />
* '''2:40pm-3:20pm'''<br />
** Scalable Backup: https://etherpad.openstack.org/p/cinder-newton-scalablebackup<br />
* '''3:30pm-4:10pm'''<br />
** Testing Process: https://etherpad.openstack.org/p/cinder-newton-testingprocess<br />
'''Thursday 28th'''<br />
* '''9:00am-9:40am'''<br />
** CinderClient and OpenStackClient: https://etherpad.openstack.org/p/cinder-newton-cinderclienttoosc<br />
* '''9:50am-10:30am'''<br />
** Unconference: https://etherpad.openstack.org/p/cinder-newton-unconference<br />
* '''11:00am-11:40am'''<br />
** Nova Cross Project<br />
** Details and notes: https://etherpad.openstack.org/p/cinder-nova-api-changes<br />
** Session etherpad: https://etherpad.openstack.org/p/newton-nova-cinder<br />
'''Friday 29th'''<br />
* '''9:00am-12:30pm'''<br />
* '''2:00pm-5:30pm'''<br />
** Contributors Meetup: https://etherpad.openstack.org/p/cinder-newton-contributorsmeetup<br />
<br />
== CloudKitty ==<br />
?<br />
<br />
== Congress ==<br />
All sessions are Wed April 27. Full Congress-related schedule: https://www.openstack.org/summit/austin-2016/summit-schedule/global-search?t=congress<br />
<br />
* 11:00-11:40: Integrations with other OpenStack projects - https://etherpad.openstack.org/p/newton-congress-integrations<br />
* 11:50-12:30: Distributed Architecture - https://etherpad.openstack.org/p/newton-congress-architecture<br />
* 1:50-2:30p: High Availability and Throughput - https://etherpad.openstack.org/p/newton-congress-availability<br />
* 2:40-3:20p: Other Features for Newton - https://etherpad.openstack.org/p/newton-congress-features<br />
<br />
== Cross-Project workshops ==<br />
<br />
All sessions are on Tuesday 2016-04-26<br />
<br />
https://www.openstack.org/summit/austin-2016/summit-schedule/#day=2016-04-26&summit_types=2&tags=3601<br />
<br />
* '''11:15 - 11:55'''<br />
** Deployment tools discussion - https://etherpad.openstack.org/p/newton-deployment-tools-discussion<br />
** Getting API Docs off of WADL and into RST - https://etherpad.openstack.org/p/newton-api-docs-rst<br />
** How do we get a single CLI? - https://etherpad.openstack.org/p/newton-single-cli<br />
* '''12:05 - 12:45'''<br />
** Alternatives to polling - https://etherpad.openstack.org/p/newton-alternatives-to-polling<br />
** Co-installability Requirements Are Holding Us Back - https://etherpad.openstack.org/p/newton-global-requirements<br />
* '''''Lunch'''''<br />
* '''14:00 - 14:40'''<br />
** Improve oslo.policy to be used more like configuration - https://etherpad.openstack.org/p/newton-policy-in-code<br />
** Moving towards a Identity v3 API only devstack - https://etherpad.openstack.org/p/newton-keystone-v3-devstack<br />
** Stable Branch End of Life Policy - https://etherpad.openstack.org/p/stable-branch-eol-policy-newton<br />
* '''14:50 - 15:30'''<br />
** Backwards compatibility for Libraries - https://etherpad.openstack.org/p/newton-backwards-compat-libs<br />
** Common service deployment in devstack - https://etherpad.openstack.org/p/newton-devstack-wsgi-patterns<br />
** Conventional roles for default policy files - https://etherpad.openstack.org/p/newton-default-policy-roles<br />
* '''15:40 - 16:20'''<br />
** Discovery: Everybody's doing it. Can we all do it the same way? - https://etherpad.openstack.org/p/newton-discovery<br />
** Moving from oslo.rootwrap to oslo.privsep - https://etherpad.openstack.org/p/newton-privsep<br />
** Scaling the OSSA/VMT via Threat Analysis - https://etherpad.openstack.org/p/newton-thread-analysis<br />
* '''''Coffee Break'''''<br />
* '''16:40 - 17:20'''<br />
** (In)secure messaging - https://etherpad.openstack.org/p/newton-secure-messaging<br />
** Brainstorm format for design summit split event - https://etherpad.openstack.org/p/newton-design-summit-format<br />
** The future of baremetal networking - https://etherpad.openstack.org/p/newton-baremetal-networking<br />
* '''17:30 - 18:10'''<br />
** Defining scope of cross projects specs, tracking methods, and approach for providing user/operator feedback as user & design summits separate. - https://etherpad.openstack.org/p/newton-cross-project-spec-scope<br />
** Instance Users - https://etherpad.openstack.org/p/newton-instance-users<br />
** Quota 'delimiter' service and/or library - https://etherpad.openstack.org/p/newton-quota-library<br />
<br />
== Ceilometer ==<br />
<br />
?<br />
<br />
== Cue ==<br />
<br />
?<br />
<br />
== Designate ==<br />
<br />
?<br />
<br />
== Documentation ==<br />
<br />
https://www.openstack.org/summit/austin-2016/summit-schedule/global-search?t=Documentation%3A<br />
<br />
'''Wednesday'''<br />
<br />
* '''09:00-09:40'''<br />
** API Guides workgroup: https://etherpad.openstack.org/p/austin-docs-workgroup-api<br />
* '''11:00-11:40'''<br />
** Mitaka Retrospective: https://etherpad.openstack.org/p/austin-docs-mitakaretro<br />
* '''11:50-12:30'''<br />
** Install Guide workgroup: https://etherpad.openstack.org/p/austin-docs-workgroup-install<br />
* '''16:30-17:10'''<br />
** Docs Toolchain/Infra Session: https://etherpad.openstack.org/p/austin-docs-toolsinfra<br />
<br />
'''Thursday'''<br />
<br />
* '''09:50-10:30'''<br />
** Contributor Guide: https://etherpad.openstack.org/p/austin-docs-contributorguide<br />
* '''11:00-11:40'''<br />
** Security Guide workgroup: https://etherpad.openstack.org/p/austin-docs-workgroup-security<br />
* '''11:50-12:30'''<br />
** Networking Guide workgroup: https://etherpad.openstack.org/p/austin-docs-workgroup-networking<br />
* '''13:30-14:10'''<br />
** Newton planning: https://etherpad.openstack.org/p/austin-docs-newtonplan<br />
<br />
'''Friday'''<br />
<br />
* '''14:00-17:30'''<br />
** Contributor Meetup<br />
<br />
== Dragonflow ==<br />
<br />
https://etherpad.openstack.org/p/dragonflow-design-summit<br />
<br />
== Glance ==<br />
<br />
?<br />
<br />
== Group Based Policy ==<br />
<br />
?<br />
<br />
== Heat ==<br />
<br />
The everything etherpad: https://etherpad.openstack.org/p/newton-heat-sessions<br />
<br />
'''Wednesday April 27'''<br />
* 3:30pm-4:10pm - [https://etherpad.openstack.org/p/heat-newton-tempest (W) Functional and integration tests, tempest plugin and defcore]<br />
* 4:30pm-5:10pm - [https://etherpad.openstack.org/p/heat-newton-hot-parser (W) HOT parser]<br />
* 5:20pm-6:00pm - [https://etherpad.openstack.org/p/heat-newton-release-model (W) Release model and versioning]<br />
<br />
'''Thursday April 28'''<br />
* 9:00am-9:40am - [https://etherpad.openstack.org/p/heat-newton-client-commands (F) New heatclient commands]<br />
* 9:50am-10:30am - [https://etherpad.openstack.org/p/heat-newton-sd-refinents (F) Software deployment refinements]<br />
* 11:00am-11:40am - [https://etherpad.openstack.org/p/heat-newton-observer (F) Continuous observer]<br />
* 11:50am-12:30pm - [https://etherpad.openstack.org/p/heat-newton-large-stacks (F) Issues with very large stacks]<br />
* 1:30pm-2:10pm - [https://etherpad.openstack.org/p/heat-newton-dlm (W) Implement DLM to bring HA into Heat]<br />
* 2:20pm-3:00pm - [https://etherpad.openstack.org/p/heat-newton-convergence-switchover (W) Convergence Phase 1 Switchover]<br />
* 3:10pm-3:50pm - [https://etherpad.openstack.org/p/heat-newton-convergence-tidy-up (W) Convergence Phase 1 tidy-up]<br />
* 4:10pm-4:50pm - [https://etherpad.openstack.org/p/heat-newton-performance-improvements (W) Performance improvements]<br />
* 5:00pm-5:40pm - [https://etherpad.openstack.org/p/heat-newton-validation-improvements (W) Validation improvements]<br />
<br />
'''Friday April 29'''<br />
* 2:00pm-5:30pm - [https://etherpad.openstack.org/p/heat-newton-meetup Contributors meetup]<br />
<br />
== Horizon ==<br />
<br />
?<br />
<br />
== I18n ==<br />
* '''Thursday 2016-04-28 13:30 - 15:00''' [https://www.openstack.org/summit/austin-2016/summit-schedule/events/7707 I18n workshop: translation processes and tools]: https://etherpad.openstack.org/p/austin-i18n-workshop<br />
* '''Friday 2016-04-29 9:00 -12:30''' [https://www.openstack.org/summit/austin-2016/summit-schedule/events/9418 I18n Contributors meetup]: https://etherpad.openstack.org/p/austin-i18n-meetup<br />
<br />
== Infrastructure ==<br />
'''Schedule:''' https://www.openstack.org/summit/austin-2016/summit-schedule/global-search?t=Infrastructure%3A<br />
<br />
'''Wednesday:'''<br />
* 09:00-09:40 CDT / 14:00-14:40 UTC fishbowl [https://etherpad.openstack.org/p/newton-infra-community-task-tracking ''Community Task-Tracking''] (MR 400)<br />
** https://etherpad.openstack.org/p/newton-infra-community-task-tracking<br />
* 11:50-12:30 CDT / 16:50-17:30 UTC work session [https://etherpad.openstack.org/p/newton-infra-landing-page-for-contributors ''Landing Page for Contributors''] (MR 416A)<br />
** https://etherpad.openstack.org/p/newton-infra-landing-page-for-contributors<br />
* 13:50-14:30 CDT / 18:50-19:30 UTC work session [https://etherpad.openstack.org/p/newton-infra-launch-node-ansible-and-puppet ''Launch-Node, Ansible and Puppet''] (MR 416A)<br />
** https://etherpad.openstack.org/p/newton-infra-launch-node-ansible-and-puppet<br />
* 15:30-16:10 CDT / 20:30-21:10 UTC fishbowl [https://etherpad.openstack.org/p/newton-infra-wiki-upgrades ''Wiki Upgrades''] (MR 400)<br />
** https://etherpad.openstack.org/p/newton-infra-wiki-upgrades<br />
<br />
'''Thursday:'''<br />
* 09:00-09:40 CDT / 14:00-14:40 UTC work session [https://etherpad.openstack.org/p/newton-infra-proposal-jobs ''Proposal Jobs''] (MR 416A)<br />
** https://etherpad.openstack.org/p/newton-infra-proposal-jobs<br />
* 09:50-10:30 CDT / 14:50-15:30 UTC work session [https://etherpad.openstack.org/p/newton-infra-robustify-ansible-puppet ''Robustify Ansible-Puppet''] (MR 416A)<br />
** https://etherpad.openstack.org/p/newton-infra-robustify-ansible-puppet<br />
* 11:00-11:40 CDT / 16:00-16:50 UTC fishbowl [https://etherpad.openstack.org/p/newton-infra-openid-sso-for-community-systems ''OpenID/SSO for Community Systems''] (MR 400)<br />
** https://etherpad.openstack.org/p/newton-infra-openid-sso-for-community-systems<br />
* 16:10-16:50 CDT / 21:10-21:50 UTC fishbowl [https://etherpad.openstack.org/p/newton-infra-distro-upgrade-plans ''Distro Upgrade Plans''] (MR 400)<br />
** https://etherpad.openstack.org/p/newton-infra-distro-upgrade-plans<br />
<br />
'''Friday:'''<br />
* Infra/QA sprints (MR 404 and elsewhere)<br />
<br />
== Ironic ==<br />
<br />
'''Wednesday April 27'''<br />
* 9:00 - 9:40 - Nova-compatible VNC console - https://etherpad.openstack.org/p/ironic-newton-summit-console<br />
* 9:50 - 10:30 - Status and future of our gate - https://etherpad.openstack.org/p/ironic-newton-summit-gate<br />
* 11:00 - 11:40 - Hardware pool management - https://etherpad.openstack.org/p/ironic-newton-summit-hardware-pools<br />
* 11:50 - 12:30 - Work session (driver composition) - https://etherpad.openstack.org/p/ironic-newton-summit-driver-composition<br />
* 1:50 - 2:30 - Making ops less worse - https://etherpad.openstack.org/p/ironic-newton-summit-ops<br />
* 2:40 - 3:20 - Anomaly detection and resolution - https://etherpad.openstack.org/p/ironic-newton-summit-anomaly-detection<br />
* 4:30 - 5:10 - Work session (Ansible deploy driver) - https://etherpad.openstack.org/p/ironic-newton-summit-ansible-deploy<br />
* 5:20 - 6:00 - Work session (Live upgrades) - https://etherpad.openstack.org/p/ironic-newton-summit-live-upgrades<br />
<br />
'''Thursday April 28'''<br />
* 11:00 - 11:40 - Work session (Inspector) - https://etherpad.openstack.org/p/ironic-newton-summit-inspector<br />
* 11:50 - 12:30 - Work session (Newton priorities and planning) - https://etherpad.openstack.org/p/ironic-newton-summit-priorities<br />
<br />
== Kolla ==<br />
<br />
https://www.openstack.org/summit/austin-2016/summit-schedule/global-search?t=Kolla%3A<br />
<br />
'''Wed April 27'''<br />
<br />
* 09:00 - 09:40 - Adding plugins for compute kit services - https://etherpad.openstack.org/p/kolla-newton-summit-plugin-planning<br />
* 09:50-10:30 - Operational Pain Points - https://etherpad.openstack.org/p/kolla-newton-summit-pain-points<br />
* 11:00 - 11:40 - Improving documentation for Operators - https://etherpad.openstack.org/p/kolla-newton-summit-documentation-planning<br />
* 11:50 - 12:30 - Operator Focused Roadmap - https://etherpad.openstack.org/p/kolla-newton-summit-operator-roadmap<br />
* 13:50 - 14:30 - Security - https://etherpad.openstack.org/p/kolla-newton-summit-security<br />
* 14:40 - 15:20 - Functional Gating - https://etherpad.openstack.org/p/kolla-newton-summit-kolla-functional-gating<br />
* 15:30 - 16:10 - Diagnostics - https://etherpad.openstack.org/p/kolla-newton-summit-kolla-kolla-host-diagnostics<br />
* 16:30 - 17:10 - kolla-ansible repo split - https://etherpad.openstack.org/p/kolla-newton-summit-kolla-ansible-repo-split<br />
* 17:20 - 18:00 - kolla-kubernetes - https://etherpad.openstack.org/p/kolla-newton-summit-kolla-kubernetes-underlay<br />
<br />
'''Thu April 28'''<br />
<br />
* 09:00 - 09:40 - Registry Post Job - https://etherpad.openstack.org/p/kolla-newton-summit-kolla-kolla-reg-post-job<br />
* 09:50 - 10:30 - plugins extras - https://etherpad.openstack.org/p/kolla-newton-summit-kolla-plugins-extras<br />
* 11:00 - 11:40 - kolla-host repository - https://etherpad.openstack.org/p/kolla-newton-summit-kolla-kolla-host-repo<br />
* 11:50 - 12:30 - Threat Analysis with the Security Team - https://etherpad.openstack.org/p/kolla-newton-summit-threat-analysis<br />
* 13:50 - 14:30 - Cross Project Workshop - https://etherpad.openstack.org/p/kolla-newton-summit-kolla-cross-project-workshop<br />
<br />
'''Fri April 29'''<br />
<br />
Morning Contributor Meetup Agenda - https://etherpad.openstack.org/p/kolla-newton-summit-kolla-contrib-part-one<br />
Afternoon Contributor Meetup Agenda - https://etherpad.openstack.org/p/kolla-newton-summit-kolla-contrib-part-two<br />
* 09:00 - 10:30 - code walkthrough of gating and how to add gate jobs - https://etherpad.openstack.org/p/kolla-newton-summit-kolla-gate-walkthru<br />
* 11:00 - 12:30 - threat analysis continued - https://etherpad.openstack.org/p/kolla-newton-summit-threat-analysis<br />
* 14:00 - 14:40 - Reboot Survival - https://etherpad.openstack.org/p/kolla-newton-summit-kolla-reboot-survival<br />
* 15:00 - 15:40 - Deploying the Big Tent - https://etherpad.openstack.org/p/kolla-newton-summit-kolla-deploy-big-tent<br />
<br />
== Keystone ==<br />
<br />
https://www.openstack.org/summit/austin-2016/summit-schedule/global-search?t=Keystone%3A<br />
<br />
'''Wed April 27'''<br />
* 11:00 - 11:40 - New Features - https://etherpad.openstack.org/p/newton-keystone-new-features<br />
* 11:50 - 12:30 - Integration - https://etherpad.openstack.org/p/newton-keystone-integration<br />
* '''Lunch'''<br />
* 13:50 - 14:30 - Work session - https://etherpad.openstack.org/p/newton-keystone-work-session<br />
* 14:40 - 15:00 - Stabilization - https://etherpad.openstack.org/p/newton-keystone-stabilization<br />
* 15:10 - 15:50 - Clients and Libraries - https://etherpad.openstack.org/p/newton-keystone-clients-and-libraries<br />
* '''Coffee Break'''<br />
* 16:30 - 17:10 - Work session - https://etherpad.openstack.org/p/newton-keystone-work-session<br />
* 17:20 - 18:00 - Work session - https://etherpad.openstack.org/p/newton-keystone-work-session<br />
<br />
'''Thu April 28'''<br />
* 09:50 - 10:30 - Testing - https://etherpad.openstack.org/p/newton-keystone-testing<br />
* '''Coffee Break'''<br />
* 11:00 - 11:40 - Work session - https://etherpad.openstack.org/p/newton-keystone-work-session<br />
* 11:50 - 12:30 - Work session - https://etherpad.openstack.org/p/newton-keystone-work-session<br />
* '''Lunch'''<br />
* 13:50 - 14:30 - Work session - https://etherpad.openstack.org/p/newton-keystone-work-session<br />
* 14:40 - 15:00 - Work session - https://etherpad.openstack.org/p/newton-keystone-work-session<br />
* 15:10 - 15:50 - Work session - https://etherpad.openstack.org/p/newton-keystone-work-session<br />
<br />
'''Fri April 29'''<br />
* 09:00 - 12:30 - Contributors Meetup<br />
* '''Lunch'''<br />
* 14:00 - 17:30 - Contributors Meetup<br />
<br />
== Kuryr ==<br />
'''Wednesday 2016-04-27'''<br />
<br />
13:50-14:30 Work session: Kubernetes Integration<br />
<br />
https://etherpad.openstack.org/p/newton-kuryr-k8s-integration<br />
<br />
14:40-15:20 Magnum and Nested containers integration<br />
<br />
15:30-16:10 Generic architecture of Kuryr<br />
<br />
16:30-17:10 Misc. <br />
<br />
'''Thursday 2016-04-28'''<br />
<br />
11:50 – 12:30 Shared Magnum-Kuryr Session (Magnum Fishbowl Session)<br />
<br />
16:10-16:50 Fishbowl: Roadmap and Users priorities<br />
<br />
17:00-17:40 Kuryr Mesos and Kuryr Storage<br />
<br />
https://etherpad.openstack.org/p/kuryr-design-summit<br />
<br />
== Magnum ==<br />
<br />
'''Wednesday 2016-04-27'''<br />
<br />
9:00 - 9:40 Work session: the bay driver design<br />
<br />
https://etherpad.openstack.org/p/newton-magnum-bay-driver<br />
<br />
9:50 - 10:30 Work session: lifecycle operations for long running bays<br />
<br />
https://etherpad.openstack.org/p/newton-magnum-bays-lifecycle-operations<br />
<br />
11:00 - 11:40 Work session: magnum scalability<br />
<br />
https://etherpad.openstack.org/p/newton-magnum-scalability<br />
<br />
'''Thursday 2016-04-28'''<br />
<br />
11:00 - 11:40 Fishbowl: container storage<br />
<br />
https://etherpad.openstack.org/p/newton-magnum-container-storage<br />
<br />
11:50 - 12:30 Fishbowl: container network<br />
<br />
https://etherpad.openstack.org/p/newton-magnum-container-network<br />
<br />
13:30 - 14:10 Fishbowl: ironic integration<br />
<br />
https://etherpad.openstack.org/p/newton-magnum-ironic-integration<br />
<br />
14:20 - 15:00 Fishbowl: challenges in adoption<br />
<br />
https://etherpad.openstack.org/p/newton-magnum-adoption-challenges<br />
<br />
15:10 - 15:50 Fishbowl: unified container abstraction<br />
<br />
https://etherpad.openstack.org/p/newton-magnum-unified-abstraction<br />
<br />
16:10 - 16:50 Work session: heat template versioning<br />
<br />
https://etherpad.openstack.org/p/newton-magnum-heat-template-versioning<br />
<br />
17:00 - 17:40 Work session: bays and containers monitoring<br />
<br />
https://etherpad.openstack.org/p/newton-magnum-monitoring<br />
<br />
== Manila ==<br />
* Wed 17:20 - 18:00: (WS) Concurrency Issues https://etherpad.openstack.org/p/newton-manila-concurrency<br />
* Thu 11:00 - 11:40: (FB) Share Groups https://etherpad.openstack.org/p/newton-manila-share-groups<br />
* Thu 11:50 - 12:30: (FB) Data Service and Migration https://etherpad.openstack.org/p/newton-manila-data-service-migration<br />
* Thu 13:30 - 14:10: (WS) Update Access https://etherpad.openstack.org/p/newton-manila-update-access<br />
* Thu 14:20 - 15:00: (WS) Quotas https://etherpad.openstack.org/p/newton-manila-quotas<br />
* Thu 15:10 - 15:50: (WS) Snapshot Semantics https://etherpad.openstack.org/p/newton-manila-snapshot-semantics<br />
* Fri 14:00 - 17:30: (CM) Contributor Meetup https://etherpad.openstack.org/p/newton-manila-contributor-meetup<br />
<br />
==Mistral==<br />
<br />
'''Wednesday 2016-04-27'''<br />
<br />
5:20pm-6:00pm Fishbowl: Stories from advanced users<br />
<br />
https://etherpad.openstack.org/p/mistral-austin-summit-fishbowl-2106<br />
<br />
'''Thursday 2016-04-28'''<br />
<br />
9:00am-9:40am Work session<br />
<br />
https://etherpad.openstack.org/p/mistral-austin-summit-topics-2106<br />
<br />
9:50am-10:30am Work session<br />
<br />
https://etherpad.openstack.org/p/mistral-austin-summit-topics-2106<br />
<br />
11:00am-11:40am Work session<br />
<br />
https://etherpad.openstack.org/p/mistral-austin-summit-topics-2106<br />
<br />
==Murano==<br />
?<br />
<br />
== Neutron ==<br />
<br />
Wednesday 2016-04-27<br />
<br />
Wed 13:50 - 14:30 Development track: future of *-aas projects https://etherpad.openstack.org/p/newton-neutron-future-adv-services<br />
<br />
Wed 14:40 - 15:20 Development track: neutron-lib next steps https://etherpad.openstack.org/p/newton-neutron-lib-next-steps<br />
<br />
Wed 15:30 - 16:10 User feedback track: health checking and troubleshooting https://etherpad.openstack.org/p/newton-neutron-troubleshooting<br />
<br />
Wed 16:30 - 17:10 Development track: future of Neutron API https://etherpad.openstack.org/p/newton-neutron-future-neutron-api<br />
<br />
Wed 17:20 - 18:00 Development track: future of Neutron architecture https://etherpad.openstack.org/p/newton-neutron-future-neutron-architecture <br />
<br />
<br />
Thursday 2016-04-28<br />
<br />
Thu 09:00 - 09:40 Development track: future of Neutron client https://etherpad.openstack.org/p/newton-neutron-future-neutron-client<br />
<br />
Thu 09:50 - 10:30 Community track: stadium evolution https://etherpad.openstack.org/p/newton-neutron-community-stadium-evolution<br />
<br />
Thu 16:10 - 16:50 User feedback track: end user and operator pain points https://etherpad.openstack.org/p/newton-neutron-pain-points<br />
<br />
Thu 17:00 - 17:40 Development track: completing the Mitaka backlog https://etherpad.openstack.org/p/newton-neutron-core-mitaka-backlog<br />
<br />
Friday 2016-04-29<br />
<br />
Fri 09:00 - 12:30 Neutron: Contributors meetup https://etherpad.openstack.org/p/newton-neutron-unplugged-track<br />
<br />
Fri 14:00 - 17:30 Neutron: Contributors meetup https://etherpad.openstack.org/p/newton-neutron-unplugged-track<br />
<br />
== Nova ==<br />
<br />
https://www.openstack.org/summit/austin-2016/summit-schedule/global-search?t=Nova%3A<br />
<br />
'''Wed April 27'''<br />
<br />
* 09:00 - 09:40 - Scheduler and resource tracking evolution - https://etherpad.openstack.org/p/newton-nova-scheduler<br />
* 09:50 - 10:30 - Scheduler and resource tracking evolution (continued)<br />
* '''Coffee Break'''<br />
* 11:00 - 11:40 - Neutron cross-project - https://etherpad.openstack.org/p/newton-nova-neutron<br />
* 11:50 - 12:30 - Performance VMs CI and technical debt - https://etherpad.openstack.org/p/newton-nova-performance-vms<br />
* '''Lunch'''<br />
* 13:50 - 14:30 - Nova: Unconference #1 - https://etherpad.openstack.org/p/newton-nova-summit-unconference<br />
* 14:40 - 15:20 - Cells v2 - https://etherpad.openstack.org/p/newton-nova-cells<br />
* 15:30 - 16:10 - Cells v2 (continued)<br />
* '''Coffee Break'''<br />
* 16:30 - 17:10 - Low-hanging fruit / getting started in Nova - https://etherpad.openstack.org/p/newton-nova-getting-started<br />
* 17:20 - 18:00 - Live Migration - https://etherpad.openstack.org/p/newton-nova-live-migration<br />
<br />
'''Thu April 28'''<br />
<br />
* 09:00 - 09:40 - API discoverability and policy - https://etherpad.openstack.org/p/newton-nova-api<br />
* 09:50 - 10:30 - API (continued)<br />
* '''Coffee Break'''<br />
* 11:00 - 11:40 - Cross-project with Cinder - https://etherpad.openstack.org/p/newton-nova-cinder<br />
* 11:50 - 12:30 - Feature classification and testing - https://etherpad.openstack.org/p/newton-nova-feature-classification<br />
* '''Lunch'''<br />
* 13:50 - 14:30 - Nova: Unconference #2 - https://etherpad.openstack.org/p/newton-nova-summit-unconference<br />
* 14:40 - 15:20 - Glance v2 integration - https://etherpad.openstack.org/p/newton-nova-glance<br />
* 15:30 - 16:10 - Ironic cross-project session - https://etherpad.openstack.org/p/newton-nova-ironic<br />
* '''Coffee Break'''<br />
* 16:30 - 17:10 - Project ID validation with Keystone - https://etherpad.openstack.org/p/newton-nova-keystone<br />
* 17:20 - 18:00 - Priorities and schedule for Newton - https://etherpad.openstack.org/p/newton-nova-summit-priorities<br />
<br />
'''Fri April 29'''<br />
* 09:00 - 12:30 - Contributors Meetup - https://etherpad.openstack.org/p/newton-nova-meetup<br />
* '''Lunch'''<br />
* 14:00 - 17:30 - Contributors Meetup - https://etherpad.openstack.org/p/newton-nova-meetup<br />
<br />
== OpenStack-Ansible ==<br />
https://etherpad.openstack.org/p/openstack-ansible-newton-summit<br />
<br />
== OpenStack Chef ==<br />
?<br />
<br />
== OpenStackClient ==<br />
28 Apr 2016 15:10 - 15:50<br />
https://etherpad.openstack.org/p/newton-openstackclient<br />
<br />
== Ops ==<br />
Operators sessions are on Monday 2016-04-25<br />
<br />
https://www.openstack.org/summit/austin-2016/summit-schedule/#day=2016-04-25&summit_types=2&tags=976,1419<br />
<br />
== Oslo ==<br />
<br />
'''Wednesday 2016-04-27'''<br />
<br />
13:50 - 14:30 Fishbowl: future plans for mutable config progress + mutable logging + mutable ?<br />
<br />
https://etherpad.openstack.org/p/newton-oslo-mutables<br />
<br />
14:40 - 15:20 Work session: modify oslo.policy so that it reads default policies embedded in project code<br />
<br />
https://etherpad.openstack.org/p/newton-oslo-policy-default-embedded<br />
<br />
15:30 - 16:10 Work session: oslo.policy changes for YAML support<br />
<br />
https://etherpad.openstack.org/p/newton-oslo-policy-yaml-support<br />
<br />
'''Thursday 2016-04-28'''<br />
<br />
09:00 - 09:40 Fishbowl: updates on oslo.messaging drivers - pika, zmq, kombu, amqp1<br />
<br />
https://etherpad.openstack.org/p/newton-oslo-messaging-drivers<br />
<br />
11:00 - 11:40 Workroom: finish our python 3 work<br />
<br />
https://etherpad.openstack.org/p/newton-oslo-python-three<br />
<br />
11:50 - 12:30 Workroom: new libraries (ideas, thoughts, bring your friends).<br />
<br />
https://etherpad.openstack.org/p/newton-oslo-maybe-new-libraries<br />
<br />
14:20 - 15:00 Workroom: improve oslo libraries adoption<br />
<br />
https://etherpad.openstack.org/p/newton-oslo-improve-adoption<br />
<br />
16:10 - 16:50 Fishbowl: backwards compat. testing strategies<br />
<br />
https://etherpad.openstack.org/p/newton-oslo-backwards-compat-testing<br />
<br />
== Packaging OpenStack ==<br />
<br />
* '''2016-04-28: 11:50 - 12:30'''<br />
** Cross-distro (Deb & RPM) packaging discussion - https://etherpad.openstack.org/p/newton-cross-distro-packaging-discussion<br />
* '''2016-04-28: 14:20 - 15:00'''<br />
** RPM Packaging: Work session (Board room 401)<br />
<br />
== Product Team ==<br />
<br />
?<br />
<br />
== Puppet OpenStack ==<br />
https://www.openstack.org/summit/austin-2016/summit-schedule/global-search?t=Puppet<br />
https://etherpad.openstack.org/p/newton-design-puppet<br />
<br />
'''Wed April 27'''<br />
<br />
Project Status: https://etherpad.openstack.org/p/newton-puppet-project-status<br />
* 1:50 - 2:30 - Project Update - Mitaka retrospective and Newton Plans 1/2<br />
* '''Coffee Break'''<br />
* 2:40 - 3:20 - Project Update - Mitaka retrospective and Newton Plans 2/2<br />
<br />
Work sessions:<br />
* 4.30 - 5.10 - Work session<br />
* '''Coffee Break'''<br />
* 5.20 - 6.00 - Work session<br />
<br />
<br />
'''Thu April 28'''<br />
* 11.50 - 12.30 - Work session<br />
<br />
<br />
'''Fri April 29'''<br />
* 9.00 - 12.30 - Contributors meetup<br />
https://etherpad.openstack.org/p/newton-community-puppet<br />
<br />
== QA ==<br />
<br />
Wednesday 2016-04-27<br />
<br />
09:50 - 10:30 Development track: Devstack Roadmap: https://etherpad.openstack.org/p/newton-qa-devstack-roadmap<br />
<br />
11:00 - 11:40 Development track: tempest.lib and tempest plugin: https://etherpad.openstack.org/p/newton-qa-tempest-lib-and-tempest-plugin<br />
<br />
14:40 - 15:20 Development track: OpenStack Health The Next Generation: https://etherpad.openstack.org/p/newton-qa-openstack-health<br />
<br />
16:30 - 17:10 Development track: Negative testing: https://etherpad.openstack.org/p/newton-qa-negative-testing<br />
<br />
17:20 - 18:00 Development track: Cruft-busters: https://etherpad.openstack.org/p/newton-qa-cruft-busters<br />
<br />
Thursday 2016-04-28<br />
<br />
11:50 - 12:30 Development track: Defcore and interoperability testing: https://etherpad.openstack.org/p/newton-qa-defcore-and-interoperability<br />
<br />
14:20 - 15:00 Development track: Newton Priorities: https://etherpad.openstack.org/p/newton-qa-newton-priorities<br />
<br />
15:10 - 15:50 Development track: Tempest CLI: https://etherpad.openstack.org/p/newton-qa-tempest-cli<br />
<br />
== Release management ==<br />
<br />
Thursday 2016-04-28<br />
<br />
5:00 pm - 5:40 pm Release Management: Retrospective and Planning Session - https://etherpad.openstack.org/p/newton-release-fishbowl<br />
<br />
Friday 2016-04-29<br />
<br />
2:00 pm - 5:30 pm Release Management: Contributors meetup - https://etherpad.openstack.org/p/newton-relmgt-plan<br />
<br />
== Searchlight ==<br />
?<br />
<br />
== Sahara ==<br />
<br />
'''Wednesday 2016-04-27'''<br />
<br />
09:00 - 09:40 (MR 417A) Work session: Future plans of EDP<br />
<br />
https://etherpad.openstack.org/p/sahara-newton-edp<br />
<br />
Work session: API v2<br />
<br />
https://etherpad.openstack.org/p/sahara-newton-api-v2<br />
<br />
Fishbowl: Future of Data Processing UI<br />
<br />
https://etherpad.openstack.org/p/sahara-newton-ui<br />
<br />
Fishbowl: Deprecation Rodeo<br />
<br />
https://etherpad.openstack.org/p/sahara-newton-deprecation-policy<br />
<br />
Workroom: Security in Sahara<br />
<br />
https://etherpad.openstack.org/p/sahara-newton-security<br />
<br />
Workroom: Image generation<br />
<br />
https://etherpad.openstack.org/p/sahara-newton-images<br />
<br />
Workroom: Release model / Detaching plugins<br />
<br />
https://etherpad.openstack.org/p/sahara-newton-release-model<br />
<br />
'''Thursday 2016-04-28'''<br />
<br />
Workroom: Sahara testing<br />
<br />
https://etherpad.openstack.org/p/sahara-newton-tests<br />
<br />
All links to all documents are located here:<br />
<br />
https://etherpad.openstack.org/p/sahara-newton-summit<br />
<br />
== Senlin ==<br />
<br />
* [09:00-09:40 Boardroom 403] Profile/Policy validation - https://etherpad.openstack.org/p/newton-senlin-validation<br />
* [09:50-10:30 Boardroom 403] Distributed lock management/Scalability - https://etherpad.openstack.org/p/newton-senlin-dlm<br />
* [13:50-14:30 MR 406] Container clustering - https://etherpad.openstack.org/p/newton-senlin-container<br />
* [15:30-16:10 Boardroom 401] Advanced autoscaling - https://etherpad.openstack.org/p/newton-senlin-as<br />
* [16:30-17:10 Boardroom 401] Task abstraction and scheduling - https://etherpad.openstack.org/p/newton-senlin-sched<br />
* [17:20-18:00 Boardroom 401] Desired capacity and health management - https://etherpad.openstack.org/p/newton-senlin-ha<br />
<br />
== Stable Branch Maintenance ==<br />
* Thursday: 1:30pm-2:10pm: Salon E - https://etherpad.openstack.org/p/newton-stable-summit<br />
<br />
<br />
== Swift ==<br />
* Wednesday: 9:00am, Salon E - https://etherpad.openstack.org/p/swift-newton-ops-feedback<br />
* Wednesday: 9:50am, Salon E - https://etherpad.openstack.org/p/swift-newton-community-feedback<br />
* Wednesday: 2:40pm-6:00pm, MR417A - https://etherpad.openstack.org/p/swift-newton-work-session-1<br />
* Thursday: 9:00am-12:30pm, MR417A - https://etherpad.openstack.org/p/swift-newton-work-session-2<br />
* Thursday: 2:30pm-5:40pm, MR417A - https://etherpad.openstack.org/p/swift-newton-work-session-3<br />
<br />
<br />
== Tacker ==<br />
<br />
Thursday 2016-04-28<br />
<br />
Thu 14:20 - 17:40 Development track: https://etherpad.openstack.org/p/tacker-newton-summit<br />
<br />
== Tricircle ==<br />
?<br />
<br />
== TripleO ==<br />
<br />
'''Thursday April 28'''<br />
* 1:30pm-2:10pm - Upgrades - current status and roadmap - https://etherpad.openstack.org/p/tripleo-newton-upgrades<br />
* 2:20pm-3:00pm - Containerization status/roadmap - current status and roadmap - https://etherpad.openstack.org/p/tripleo-newton-containers<br />
* 3:10pm-3:50pm - Work session (Composable Services and beyond) - https://etherpad.openstack.org/p/tripleo-newton-composable-services<br />
* 4:10pm-4:50pm - Work session (API and TripleO UI) - https://etherpad.openstack.org/p/tripleo-newton-api-ui<br />
* 5:00pm-5:40pm - Work session (Reducing the CI pain) - https://etherpad.openstack.org/p/tripleo-newton-ci<br />
'''Friday April 29'''<br />
* 2:00pm-5:30pm - Contributors meetup - https://etherpad.openstack.org/p/tripleo-newton-meetup<br />
<br />
== Trove ==<br />
''' Wednesday (2016-04-27), Session MR412, 0950 to 1030 '''<br />
* Planning for Python3 support [TBD]:<br />
** https://etherpad.openstack.org/p/trove-newton-summit-python3<br />
* Deploying multiple datastores with the same manager [amrith]:<br />
** https://etherpad.openstack.org/p/trove-newton-summit-multiple-datastores<br />
<br />
''' Wednesday (2016-04-27), Session MR417A, 1350 to 1430 '''<br />
* Management Client for Trove [nikhil/amrith/doug]:<br />
** https://etherpad.openstack.org/p/trove-newton-summit-management-client<br />
* Migrating to the OpenStack Client [nikhil/amrith/doug]:<br />
** https://etherpad.openstack.org/p/trove-newton-summit-openstack-client<br />
<br />
''' Wednesday (2016-04-27), Session MR415B, 1530 to 1610 '''<br />
* Trove Upgrades [doug/morgan]:<br />
** https://etherpad.openstack.org/p/trove-newton-summit-trove-upgrades<br />
<br />
''' Wednesday (2016-04-27), Session MR406, 1720 to 1800 '''<br />
* Extending back-end (persistent) storage options [amrith]:<br />
** https://etherpad.openstack.org/p/trove-newton-summit-extensible-backend-storage<br />
<br />
''' Thursday (2016-04-28), Session MR406, 0950 to 1030 '''<br />
* Trove Container Support [flavio]:<br />
** https://etherpad.openstack.org/p/trove-newton-summit-container<br />
* Snapshots as a backup strategy [telles]:<br />
** https://etherpad.openstack.org/p/trove-newton-summit-snapshot-as-a-backup-strategy<br />
<br />
''' Thursday (2016-04-28), Session MR406, 1100 to 1140 '''<br />
* Making it easier to build guest images [pete]:<br />
** https://etherpad.openstack.org/p/trove-newton-summit-easier-to-build-images<br />
<br />
''' Thursday (2016-04-28), Session MR414, 1150 to 1230 '''<br />
* Trove API v2 [doug/morgan]:<br />
** https://etherpad.openstack.org/p/trove-newton-summit-v2-api<br />
<br />
''' Thursday (2016-04-28), Session MR415B, 1330 to 1410 '''<br />
* Improving code modularity between guests and images [amrith]:<br />
** https://etherpad.openstack.org/p/trove-newton-summit-modularity-guest-image<br />
* Improving Trove support for self-signed certificates [amrith]<br />
** https://etherpad.openstack.org/p/trove-newton-summit-ssl-self-signed-certificates<br />
<br />
''' Thursday (2016-04-28), Session MR415A, 1420 to 1500 '''<br />
* Trove Superconductor [nikhil/pete/amrith]:<br />
** https://etherpad.openstack.org/p/trove-newton-summit-superconductor<br />
<br />
''' Friday (2016-04-29), Session MR415B, 0900 to 1230 '''<br />
* Contributor meetup:<br />
** https://etherpad.openstack.org/p/trove-newton-summit-contributor-meetup<br />
<br />
== UX ==<br />
?<br />
<br />
== Watcher ==<br />
<br />
Tuesday 2016-04-26<br />
<br />
2:00pm-6:00pm Work session<br />
<br />
https://etherpad.openstack.org/p/watcher-newton-design-session<br />
<br />
Wednesday 2016-04-27<br />
<br />
9:00am-12:00pm Work session<br />
<br />
https://etherpad.openstack.org/p/watcher-newton-design-session<br />
<br />
Thursday 2016-04-28<br />
<br />
9:00am-12:00pm Work session<br />
<br />
https://etherpad.openstack.org/p/watcher-newton-design-session<br />
<br />
2:20pm-3:00pm Watcher, a Resource Manager for OpenStack: Plans for the N-release and Beyond<br />
<br />
https://www.openstack.org/summit/austin-2016/summit-schedule/events/7108<br />
<br />
== Zaqar ==<br />
?
<hr />
CrossProjectLiaisons (wiki page, edited 2016-04-15 by Thinrichs)

Many of our cross-project teams need focused help for communicating with the other project teams. This page lists the people who have volunteered for that work.<br />
<br />
== Oslo ==<br />
<br />
There are now more projects consuming code from the Oslo incubator than we have Oslo contributors. That means we are going to need your help to make these migrations happen. We are asking for one person from each project to serve as a liaison between the project and Oslo, and to assist with integrating changes as we move code out of the incubator into libraries.<br />
<br />
* The liaison should be active in the project and familiar with the project-specific requirements for having patches accepted, but does not need to be a core reviewer or the PTL.<br />
* The liaison should be prepared to assist with writing and reviewing patches in their project as libraries are adopted, and with discussions of API changes to the libraries to make them easier to use within the project.<br />
* Liaisons should pay attention to [Oslo] tagged messages on the openstack-dev mailing list.<br />
* It is also useful for liaisons to be able to attend the Oslo team meeting ([[Meetings/Oslo]]) to participate in discussions and raise issues for real-time discussion.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Barbican || Douglas Mendizábal || redrobot<br />
|-<br />
| Ceilometer || Julien Danjou || jd__<br />
|-<br />
| Cinder || Jay Bryant || jungleboyj<br />
|-<br />
| Congress || Tim Hinrichs || thinrichs<br />
|-<br />
| Cue || Min Pae || sputnik13<br />
|-<br />
| Designate || || <br />
|-<br />
| Glance || Flavio Percoco || flaper87<br />
|-<br />
| Heat || Thomas Herve || therve<br />
|-<br />
| Horizon || Kirill Zaitsev || kzaitsev_<br />
|-<br />
| Ironic || Lin Tan || lintan<br />
|-<br />
| Keystone || Brant Knudson || bknudson<br />
|-<br />
| Manila || Thomas Bechtold || toabctl<br />
|-<br />
| Murano || Kirill Zaitsev || kzaitsev_<br />
|-<br />
| Neutron || Ihar Hrachyshka || ihrachyshka<br />
|-<br />
| Nova || ChangBo Guo <glongwave@gmail.com> || gcb<br />
|-<br />
| [[Octavia]] || Michael Johnson || johnsom<br />
|-<br />
| Sahara || ||<br />
|-<br />
| Senlin || Yanyan Hu || Yanyanhu<br />
|-<br />
<br />
| Swift || || <br />
|-<br />
| TripleO || Ben Nemec || bnemec<br />
|-<br />
| Trove || Amrith Kumar || amrith<br />
|-<br />
| Zaqar || Flavio Percoco || flaper87<br />
|-<br />
|}<br />
<br />
== Release management ==<br />
<br />
The Release Management Liaison is responsible for communication with the Release Management team, attending the weekly 1:1 syncs in #openstack-relmgr-office, keeping milestone plans up to date, and signing off milestone and release tags. That task has been [[PTL_Guide#Interactions_with_the_Release_team|traditionally filled by the PTL]], but they may now delegate this task if they wish.<br />
<br />
* By default, the liaison will be the PTL.<br />
* The Release Management Liaison is considered a contributor to the Release Cycle Management Program and therefore is allowed to vote in its PTL election.<br />
* The liaison may further delegate work to other subject matter experts<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Barbican || Douglas Mendizábal || redrobot<br />
|-<br />
| Ceilometer || gordon chung || gordc<br />
|-<br />
| Cinder || Sean McGinnis || smcginnis<br />
|-<br />
| Congress || Tim Hinrichs || thinrichs<br />
|-<br />
| Designate || Graham Hayes || mugsie<br />
|-<br />
| Glance || Flavio Percoco || flaper87<br />
|-<br />
| Heat || Sergey Kraynev || skraynev<br />
|-<br />
| Horizon || David Lyle || david-lyle<br />
|-<br />
| Ironic || Jim Rollenhagen || jroll<br />
|-<br />
| Keystone || Steve Martinelli || stevemar<br />
|-<br />
| Manila || Ben Swartzlander || bswartz<br />
|-<br />
| Mistral || Lingxian Kong || kong<br />
|-<br />
| Murano ||Kirill Zaitsev || kzaitsev_<br />
|-<br />
| Neutron || Ihar Hrachyshka || ihrachys<br />
|-<br />
| Nova || Sylvain Bauza || bauzas<br />
|-<br />
| Oslo || Davanum Srinivas || dims<br />
|-<br />
| Sahara || Vitaly Gridnev || vgridnev<br />
|-<br />
| Searchlight || Travis Tripp || TravT<br />
|-<br />
| Senlin || Qiming Teng || Qiming<br />
|-<br />
| Swift || John Dickinson || notmyname<br />
|-<br />
| Trove || Amrith Kumar || amrith<br />
|-<br />
| Zaqar || || <br />
|}<br />
<br />
== QA ==<br />
<br />
There are now more projects being tested by Tempest and Grenade, or deployable by Devstack, than we have QA contributors. That means we are going to need your help to keep on top of everything. We are asking for one person from each project to serve as a liaison between the project and QA, and to assist with integrating changes as we move forward.<br />
<br />
The liaison should be a core reviewer for the project, but does not need to be the PTL. The liaison should be prepared to assist with writing and reviewing patches that interact with their project, and with discussions of changes to the QA projects to make them easier to use within the project.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Barbican || Steve Heyman || hockeynut <br />
|-<br />
| Ceilometer || Chris Dent || cdent<br />
|-<br />
| Cinder || || <br />
|-<br />
| Congress || Tim Hinrichs || thinrichs<br />
|-<br />
| Glance || Nikhil Komawar || nikhil_k<br />
|-<br />
| Heat || Steve Baker || stevebaker<br />
|-<br />
| Horizon || Timur Sufiev || tsufiev<br />
|-<br />
| Ironic || John Villalovos || jlvillal<br />
|-<br />
| Keystone || David Stanek || dstanek<br />
|-<br />
| Manila || Valeriy Ponomaryov || vponomaryov<br />
|-<br />
| Neutron || Salvatore Orlando || salv-orlando<br />
|-<br />
| Nova || Matt Riedemann || mriedem<br />
|-<br />
| Oslo || Davanum Srinivas || dims <br />
|-<br />
| Sahara || Luigi Toscano || tosky<br />
|-<br />
| Senlin || Haiwei Xu || haiwei-xu<br />
|-<br />
| Swift || || <br />
|-<br />
| Trove || Craig Vyvial and Nirav Shah || cp16net and nshah<br />
|-<br />
| Zaqar || || <br />
|}<br />
<br />
== Documentation ==<br />
<br />
The OpenStack documentation is centralized on docs.openstack.org, but there is often a need for specialty information when reviewing patches or triaging doc bugs. A doc liaison should be available to triage doc bugs when the docs team members don't know enough to triage accurately, and to be added to doc reviews that affect your project. You'd be notified through email when you're added to either a doc bug or a doc review. We would also appreciate attendance at the [https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting weekly doc team meeting]; we meet in #openstack-meeting every Wednesday, at alternating times for different timezones:<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Barbican || Constanze Kratel || constanze <br />
|-<br />
| Ceilometer || Ildiko Vancsa || ildikov<br />
|-<br />
| Cinder || Sean Mcginnis || smcginnis <br />
|-<br />
| Congress || Tim Hinrichs || thinrichs <br />
|-<br />
| Designate || || <br />
|-<br />
| Glance || Brian Rosmaita || rosmaita<br />
|-<br />
| Heat || Randall Burt || randallburt<br />
|-<br />
| Horizon || Rob Cresswell || robcresswell<br />
|-<br />
| Ironic || Mitsuhiro SHIGEMATSU || pshige<br />
|-<br />
| Keystone || Lance Bragstad || lbragstad<br />
|-<br />
| Magnum || || <br />
|-<br />
| Manila || Dustin Schoenbrun || dustins<br />
|-<br />
| Mistral || || <br />
|-<br />
| Murano || Ekaterina Chernova || katyafervent <br />
|-<br />
| Neutron || Edgar Magana || emagana <br />
|-<br />
| Nova || Joe Gordon or Michael Still || Jog0 or mikal<br />
|-<br />
| Oslo || Doug Hellmann || dhellmann<br />
|-<br />
| Rally || || <br />
|-<br />
| Sahara || Chad Roberts || crobertsrh<br />
|-<br />
| Senlin || Cindia Blue || lixinhui<br />
|-<br />
| Swift || || <br />
|-<br />
| Tripleo || || <br />
|-<br />
| Trove || Laurel Michaels || laurelm<br />
|-<br />
| Zaqar || || <br />
|}<br />
<br />
== Stable Branch ==<br />
<br />
The Stable Branch Liaison is responsible for making sure backports are proposed for critical issues in their project, and for making sure proposed backports are reviewed. They are also the contact point for stable branch release managers around point release times.<br />
<br />
* By default, the liaison will be the PTL.<br />
* The Stable Branch Liaison is considered a contributor to the Release Cycle Management Program and therefore is allowed to vote in its PTL election.<br />
* The liaison may further delegate work to other subject matter experts<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Ceilometer || Eoghan Glynn || eglynn<br />
|-<br />
| Cinder || Jay Bryant || jungleboyj<br />
|-<br />
| Congress || Masahito Muroi || masahito<br />
|-<br />
| Glance || Erno Kuvaja || jokke_<br />
|-<br />
| Heat || Zane Bitter || zaneb<br />
|-<br />
| Horizon || Matthias Runge || mrunge <br />
|-<br />
| Ironic || Dmitry Tantsur || dtantsur <br />
|-<br />
| Keystone || Dolph Mathews || dolphm<br />
|-<br />
| Murano || Kirill Zaitsev || kzaitsev_<br />
|-<br />
| Neutron || Ihar Hrachyshka || ihrachys<br />
|-<br />
| Nova || Matt Riedemann || mriedem <br />
|-<br />
| Sahara || || <br />
|-<br />
| Senlin|| Qiming Teng || Qiming<br />
|-<br />
| Swift || || <br />
|-<br />
| Trove || Amrith Kumar || amrith<br />
|-<br />
|}<br />
<br />
== Vulnerability management ==<br />
<br />
The [[Vulnerability Management]] Team needs domain specialists to help assess the impact of reported issues, coordinate the development of patches, review proposed patches, and propose backports. The liaison should be familiar with the [[Vulnerability Management]] process and embargo rules, and have a good grasp of security issues in software design.<br />
<br />
* The liaison should be a core reviewer for the project, but does not need to be the PTL.<br />
* By default, the liaison will be the PTL.<br />
* The liaison is the first line of contact for the Vulnerability Management team members<br />
* The liaison is considered a contributor to the Release Cycle Management Program and therefore is allowed to vote in its PTL election<br />
* The liaison may further delegate work to other subject matter experts<br />
* The liaison maintains the members of the $PROJECT-coresec team in Launchpad (which can be given access to embargoed vulnerabilities)<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Barbican || Douglas Mendizábal or Charles Neill || redrobot / ccneill<br />
|-<br />
| Ceilometer || Lianhao Lu or Gordon Chung || llu/gordc <br />
|-<br />
| Cinder || || <br />
|-<br />
| Congress || Masahito Muroi || masahito <br />
|-<br />
| Glance || Stuart McLaren or Nikhil Komawar || mclaren or nikhil_k <br />
|-<br />
| Heat || Steve Hardy || shardy<br />
|-<br />
| Horizon || Lin Hua Cheng || lhcheng <br />
|-<br />
| Ironic || Jim Rollenhagen || jroll<br />
|-<br />
| Keystone || Dolph Mathews || dolphm<br />
|-<br />
| Neutron || Salvatore Orlando || salv-orlando<br />
|-<br />
| Nova || Michael Still || mikal<br />
|-<br />
| Sahara || Michael McCune || elmiko<br />
|-<br />
| Searchlight || Travis Tripp or Steve McLellan || TravT or sjmc7<br />
|-<br />
| Senlin || Qiming Teng || Qiming<br />
|-<br />
| Swift || || <br />
|-<br />
| Trove || Craig Vyvial or Nikhil Manchanda || cp16net or SlickNik <br />
|-<br />
|}<br />
<br />
== API Working Group ==<br />
<br />
The [[API_Working_Group|API Working Group]] seeks API subject matter experts for each project to communicate plans for API updates, review API guidelines with their project's view in mind, and review the API Working Group guidelines as they are drafted. The liaison should be familiar with the project's REST API design and future planning for changes to it.<br />
<br />
The members of the [http://specs.openstack.org/openstack/api-wg/liaisons.html API Working Group Cross-Project Liaisons] are maintained in our repo. If you want to read the entire list of CPLs or add/remove yourself from the list, you'll need to update the [http://git.openstack.org/cgit/openstack/api-wg/tree/doc/source/liaisons.json liaisons.json] file. If you don't want to make the update yourself, please ask in #openstack-sdks on IRC and someone can make the change for you.<br />
<br />
== Logging Working Group ==<br />
<br />
The [[LogWorkingGroup|Log Working Group]] seeks experts for each project to assist with making the logging in projects match the new [http://specs.openstack.org/openstack/openstack-specs/specs/log-guidelines.html Logging Guidelines]<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Glance || Erno Kuvaja || jokke_<br />
|-<br />
| Oslo || Doug Hellmann || dhellmann<br />
|-<br />
| Nova || John Garbutt || johnthetubaguy<br />
|-<br />
| Murano || Nikolay Starodubtsev || Nikolay_St<br />
|-<br />
| Sahara || Andrey Pavlov || AndreyPavlov<br />
|}<br />
<br />
== Infra ==<br />
<br />
These are the project-specific groups of people that Infra will look to for ACKs on changes to that project's test configuration. Changes to project-config and devstack-gate should be +1'd by these groups when they are related to their project. Note that in an emergency this may not always be possible, and Infra will ask for forgiveness, but generally we should look for these +1s.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Glance || Flavio Percoco, Nikhil Komawar|| flaper87, nikhil_k<br />
<br />
|-<br />
| Neutron || Kyle Mestery, Armando Migliaccio, Doug Wiegley|| mestery, armax, dougwig<br />
|-<br />
| Documentation || Andreas Jaeger|| AJaeger<br />
<br />
|-<br />
| Trove || Nikhil Manchanda || SlickNik<br />
<br />
|-<br />
| Sahara || Sergey Lukjanov || SergeyLukjanov<br />
<br />
|-<br />
| Fuel || Aleksandra Fedorova, Igor Belikov || bookwar, igorbelikov<br />
<br />
|-<br />
| Puppet OpenStack || Emilien Macchi || EmilienM<br />
|}<br />
<br />
== Product Working Group ==<br />
The product working group consists of product managers, technologists, and operators from a diverse set of organizations. The group is working to aggregate user stories from the market-focused teams (Enterprise, Telco, etc.) and cross-project functional teams (e.g. logging, upgrades, etc.), partner with the development community on resourcing, and help gather data to generate a multi-release roadmap. Most of the user stories being tracked by this team consist of items that can span multiple releases and usually have cross-project dependencies.<br />
<br />
More information about the team can be found on the [[ProductTeam|Product WG wiki]].<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Ceilometer ||Krish Ragurham || <br />
|-<br />
| Cinder || Shamail Tahir || shamail<br />
|-<br />
| Glance || Nate Ziemann || nate_zman<br />
|-<br />
| Horizon || Carol Barrett || barrett1<br />
|-<br />
| Keystone || || <br />
|-<br />
| Kolla || Carol Barrett || barrett1<br />
|-<br />
| Magnum || Steve Gordon || sgordon<br />
|-<br />
| Manila || Pete Chadwick || <br />
|-<br />
| Neutron || Mike Cohen, Duane DeCapite || DuaneDeC7<br />
|-<br />
| Nova || Hugh Blemings || hughhalf <br />
|-<br />
| OSClient || Megan Rossetti || MeganR<br />
|-<br />
| Stable Release|| Rochelle Grober || rockig<br />
|-<br />
| Sahara || Ethan Gafford || egafford<br />
|-<br />
| Senlin || Qiming Teng || Qiming<br />
|-<br />
| Swift || Phil Williams || philipw<br />
|-<br />
| Tempest || Arkady Kanevsky || arkady_kanevsky<br />
|-<br />
|}<br />
<br />
== I18n ==<br />
The I18n team is responsible for making OpenStack ubiquitously accessible to people of all language backgrounds. The team has translators from all over the world who translate OpenStack into different languages.<br />
<br />
If you want to communicate with translators on the I18n team, send email to openstack-i18n@lists.openstack.org.<br />
<br />
* The liaison should be a core reviewer for the project and understand the i18n status of the project.<br />
* The liaison should understand the project release schedule very well.<br />
* The liaison should notify the I18n team of important moments in the project release cycle in a timely manner, for example the soft string freeze, the hard string freeze, and the cutting of RC1.<br />
* The liaison should take care of translation patches to the project and make sure the patches are successfully merged into the final release version. When a translation patch fails, the liaison should notify the I18n team.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Glance || || <br />
|-<br />
| Keystone || || <br />
|-<br />
| Neutron || Akihiro Motoki || amotoki<br />
|-<br />
| Nova || Augustina Ragwitz || auggy <br />
|-<br />
| Cinder || || <br />
|-<br />
| Swift || ||<br />
|-<br />
| Horizon || Doug Fish / Akihiro Motoki || doug-fish / amotoki<br />
|-<br />
| Telemetry || || <br />
|-<br />
| Designate || Graham Hayes || mugsie<br />
|-<br />
| Heat || || <br />
|-<br />
| Magnum || Shu Muto || shu-mutou<br />
|-<br />
| Murano || Kirill Zaitsev || kzaitsev_<br />
|-<br />
| Sahara || Chad Roberts || crobertsrh <br />
|-<br />
|}<br />
<br />
== Inter-project Liaisons ==<br />
<br />
In some cases, it is useful to have liaisons between projects. [http://lists.openstack.org/pipermail/openstack-dev/2015-April/062327.html For example, it is useful for the Nova and Neutron projects to have liaisons, because the projects have complex interactions and dependencies.] Ideally, a cross-project effort should have two members, one from each project, to facilitate communication and knowledge transfer.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Projects !! Name !! IRC Handle !! Role<br />
|-<br />
| Nova / Neutron || || ||<br />
|-<br />
| || Sean M. Collins || sc68cal || Neutron liaison for Nova<br />
|-<br />
| Nova / Glance || || ||<br />
|-<br />
| || Flavio Percoco, Mike Fedosin || flaper87, mfedosin || Glance liaison for Nova<br />
|-<br />
| || Jay Pipes || jaypipes || Nova liaison for Glance<br />
|-<br />
| Nova / Cinder || || ||<br />
|-<br />
| || Scott DAngelo || scottda || Cinder liaison for Nova<br />
|-<br />
| || Matt Riedemann || mriedem || Nova liaison for Cinder<br />
|-<br />
| Nova / Ironic || John Villalovos || jlvillal || Ironic liaison for Nova<br />
|-<br />
| || Michael Davies || mrda || Ironic liaison for Nova<br />
|-<br />
| Neutron / Ironic || || ||<br />
|-<br />
| || Sukhdev Kapur || sukhdev || Neutron liaison for Ironic<br />
|-<br />
| || Mitsuhiro SHIGEMATSU and Jim Rollenhagen || pshige and jroll || Ironic liaison for Neutron<br />
|-<br />
| Murano / Glance || || ||<br />
|-<br />
| || Alexander Tivelkov || ativelkov || Glance liaison for Murano, Murano liaison for Glance<br />
|-<br />
| Horizon / i18n || || ||<br />
|-<br />
| || Doug Fish || doug-fish || Horizon liaison for i18n<br />
|-<br />
| Sahara / Heat || || ||<br />
|-<br />
| || Vitaly Gridnev || vgridnev || Sahara liaison for Heat<br />
|-<br />
| || TBD || || Heat liaison for Sahara<br />
|-<br />
| Sahara / Trove || || ||<br />
|-<br />
| || Ethan Gafford || egafford || Sahara liaison for Trove<br />
|-<br />
| || Amrith Kumar || amrith || Trove liaison for Sahara<br />
|-<br />
| Fuel / Puppet || || ||<br />
|-<br />
| || Alex Schultz || mwhahaha || Fuel liaison for Puppet<br />
|-<br />
| Fuel / Ironic || || ||<br />
|-<br />
| || Evgeny L || evgenyl || Fuel liaison for Ironic<br />
|-<br />
| Bareon / Ironic || || ||<br />
|-<br />
| || Evgeny L || evgenyl || Bareon liaison for Ironic<br />
|}<br />
<br />
=== Etherpads ===<br />
<br />
The following is a list of etherpads that are used for inter-project liaisons, and are continuously updated.<br />
<br />
Nova - Neutron: https://etherpad.openstack.org/p/nova-neutron<br />
<br />
== Cross-Project Spec Liaisons ==<br />
<br />
The OpenStack project relies on the cross-project spec liaisons from each participating project to help with coordination and cross-project spec related tasks. See the full set of [http://docs.openstack.org/project-team-guide/cross-project.html#cross-project-specification-liaisons responsibilities]. The liaison defaults to the PTL, but the PTL can also delegate the responsibilities to someone else on the team by updating this table:<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Astara|| Adam Gandelman || adam_g<br />
|-<br />
| Barbican || Douglas Mendizabal || redrobot<br />
|-<br />
| Cinder || Kendall Nelson || diablo_rojo<br />
|-<br />
| Congress || Tim Hinrichs || thinrichs<br />
|-<br />
| Designate || Graham hayes || mugsie<br />
|-<br />
| Fuel || Andrew Woodward || xarses<br />
|-<br />
| Glance || Nikhil Komawar || nikhil<br />
|-<br />
| Heat || Rico Lin || ricolin<br />
|-<br />
| Horizon || David Lyle || david-lyle<br />
|-<br />
| Infrastructure || Matthew Wagoner || olaph<br />
|-<br />
| Ironic || Jim Rollenhagen || jroll<br />
|-<br />
| Keystone || Samuel de Medeiros Queiroz || samueldmq<br />
|-<br />
| Magnum || Adrian Otto || adria̠n_otto<br />
|-<br />
| Manila || Ben Swartzlander || bswartz<br />
|-<br />
| Mistral || Renat Akhmerov || rakhmerov<br />
|-<br />
| Murano || Serg Melikyan || smelikyan<br />
|-<br />
| Neutron || Armando Migliaccio || armax<br />
|-<br />
| Nova || Chris Dent || cdent<br />
|-<br />
| Oslo || Davanum Srinivas || dims<br />
|-<br />
| Sahara || Michael McCune || elmiko<br />
|-<br />
| Searchlight || Travis Tripp || TravT<br />
|-<br />
| Senlin || Qiming Teng || Qiming<br />
|-<br />
| Swift || John Dickinson || notmyname<br />
|-<br />
| Telemetry || Gordon Chung || gordc<br />
|-<br />
| Trove || Amrith Kumar || amrith<br />
|-<br />
| Watcher || Susanne Balle || sballe <br />
|-<br />
| Zaqar || Fei Long Wang || flwang<br />
<br />
|}</div>Thinrichshttps://wiki.openstack.org/w/index.php?title=CrossProjectLiaisons&diff=124024CrossProjectLiaisons2016-04-15T15:58:05Z<p>Thinrichs: /* Vulnerability management */</p>
<hr />
<div>Many of our cross-project teams need focused help for communicating with the other project teams. This page lists the people who have volunteered for that work.<br />
<br />
== Oslo ==<br />
<br />
There are now more projects consuming code from the Oslo incubator than we have Oslo contributors. That means we are going to need your help to make these migrations happen. We are asking for one person from each project to serve as a liaison between the project and Oslo, and to assist with integrating changes as we move code out of the incubator into libraries.<br />
<br />
* The liaison should be active in the project and familiar with the project-specific requirements for having patches accepted, but does not need to be a core reviewer or the PTL.<br />
* The liaison should be prepared to assist with writing and reviewing patches in their project as libraries are adopted, and with discussions of API changes to the libraries to make them easier to use within the project.<br />
* Liaisons should pay attention to [Oslo] tagged messages on the openstack-dev mailing list.<br />
* It is also useful for liaisons to be able to attend the Oslo team meeting ([[Meetings/Oslo]]) to participate in discussions and raise issues for real-time discussion.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Barbican || Douglas Mendizábal || redrobot<br />
|-<br />
| Ceilometer || Julien Danjou || jd__<br />
|-<br />
| Cinder || Jay Bryant || jungleboyj<br />
|-<br />
| Congress || Tim Hinrichs || thinrichs<br />
|-<br />
| Cue || Min Pae || sputnik13<br />
|-<br />
| Designate || || <br />
|-<br />
| Glance || Flavio Percoco || flaper87<br />
|-<br />
| Heat || Thomas Herve || therve<br />
|-<br />
| Horizon || Kirill Zaitsev || kzaitsev_<br />
|-<br />
| Ironic || Lin Tan || lintan<br />
|-<br />
| Keystone || Brant Knudson || bknudson<br />
|-<br />
| Manila || Thomas Bechtold || toabctl<br />
|-<br />
| Murano || Kirill Zaitsev || kzaitsev_<br />
|-<br />
| Neutron || Ihar Hrachyshka || ihrachyshka<br />
|-<br />
| Nova || ChangBo Guo <glongwave@gmail.com> || gcb<br />
|-<br />
| [[Octavia]] || Michael Johnson || johnsom<br />
|-<br />
| Sahara || Andrey Pavlov || AndreyPavlov<br />
|-<br />
| Senlin || Yanyan Hu || Yanyanhu<br />
|-<br />
<br />
| Swift || || <br />
|-<br />
| TripleO || Ben Nemec || bnemec<br />
|-<br />
| Trove || Amrith Kumar || amrith<br />
|-<br />
| Zaqar || Flavio Percoco || flaper87<br />
|-<br />
|}<br />
<br />
== Release management ==<br />
<br />
The Release Management Liaison is responsible for communication with the Release Management team, attending the weekly 1:1 syncs in #openstack-relmgr-office, keeping milestone plans up to date, and signing off milestone and release tags. That task has been [[PTL_Guide#Interactions_with_the_Release_team|traditionally filled by the PTL]], but they may now delegate this task if they wish.<br />
<br />
* By default, the liaison will be the PTL.<br />
* The Release Management Liaison is considered a contributor to the Release Cycle Management Program and therefore is allowed to vote in its PTL election.<br />
* The liaison may further delegate work to other subject matter experts<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Barbican || Douglas Mendizábal || redrobot<br />
|-<br />
| Ceilometer || gordon chung || gordc<br />
|-<br />
| Cinder || Sean McGinnis || smcginnis<br />
|-<br />
| Congress || Tim Hinrichs || thinrichs<br />
|-<br />
| Designate || Graham Hayes || mugsie<br />
|-<br />
| Glance || Flavio Percoco || flaper87<br />
|-<br />
| Heat || Sergey Kraynev || skraynev<br />
|-<br />
| Horizon || David Lyle || david-lyle<br />
|-<br />
| Ironic || Jim Rollenhagen || jroll<br />
|-<br />
| Keystone || Steve Martinelli || stevemar<br />
|-<br />
| Manila || Ben Swartzlander || bswartz<br />
|-<br />
| Mistral || Lingxian Kong || kong<br />
|-<br />
| Murano ||Kirill Zaitsev || kzaitsev_<br />
|-<br />
| Neutron || Ihar Hrachyshka || ihrachys<br />
|-<br />
| Nova || Sylvain Bauza || bauzas<br />
|-<br />
| Oslo || Davanum Srinivas || dims<br />
|-<br />
| Sahara || Vitaly Gridnev || vgridnev<br />
|-<br />
| Searchlight || Travis Tripp || TravT<br />
|-<br />
| Senlin || Qiming Teng || Qiming<br />
|-<br />
| Swift || John Dickinson || notmyname<br />
|-<br />
| Trove || Amrith Kumar || amrith<br />
|-<br />
| Zaqar || || <br />
|}<br />
<br />
== QA ==<br />
<br />
There are now more projects being tested by Tempest and Grenade, or deployable by Devstack, than we have QA contributors. That means we are going to need your help to keep on top of everything. We are asking for one person from each project to serve as a liaison between the project and QA, and to assist with integrating changes as we move forward.<br />
<br />
The liaison should be a core reviewer for the project, but does not need to be the PTL. The liaison should be prepared to assist with writing and reviewing patches that interact with their project, and with discussions of changes to the QA projects to make them easier to use within the project.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Barbican || Steve Heyman || hockeynut <br />
|-<br />
| Ceilometer || Chris Dent || cdent<br />
|-<br />
| Cinder || || <br />
|-<br />
| Congress || Tim Hinrichs || thinrichs<br />
|-<br />
| Glance || Nikhil Komawar || nikhil_k<br />
|-<br />
| Heat || Steve Baker || stevebaker<br />
|-<br />
| Horizon || Timur Sufiev || tsufiev<br />
|-<br />
| Ironic || John Villalovos || jlvillal<br />
|-<br />
| Keystone || David Stanek || dstanek<br />
|-<br />
| Manila || Valeriy Ponomaryov || vponomaryov<br />
|-<br />
| Neutron || Salvatore Orlando || salv-orlando<br />
|-<br />
| Nova || Matt Riedemann || mriedem<br />
|-<br />
| Oslo || Davanum Srinivas || dims <br />
|-<br />
| Sahara || Luigi Toscano || tosky<br />
|-<br />
| Senlin || Haiwei Xu || haiwei-xu<br />
|-<br />
| Swift || || <br />
|-<br />
| Trove || Craig Vyvial and Nirav Shah || cp16net and nshah<br />
|-<br />
| Zaqar || || <br />
|}<br />
<br />
== Documentation ==<br />
<br />
The OpenStack documentation is centralized on docs.openstack.org, but there is often a need for specialty information when reviewing patches or triaging doc bugs. A doc liaison should be available to triage doc bugs when the docs team members don't know enough to triage accurately, and to be added to doc reviews that affect your project. You'd be notified through email when you're added to either a doc bug or a doc review. We would also appreciate attendance at the [https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting weekly doc team meeting]; we meet in #openstack-meeting every Wednesday, at alternating times for different timezones:<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Barbican || Constanze Kratel || constanze <br />
|-<br />
| Ceilometer || Ildiko Vancsa || ildikov<br />
|-<br />
| Cinder || Sean Mcginnis || smcginnis <br />
|-<br />
| Congress || Tim Hinrichs || thinrichs <br />
|-<br />
| Designate || || <br />
|-<br />
| Glance || Brian Rosmaita || rosmaita<br />
|-<br />
| Heat || Randall Burt || randallburt<br />
|-<br />
| Horizon || Rob Cresswell || robcresswell<br />
|-<br />
| Ironic || Mitsuhiro SHIGEMATSU || pshige<br />
|-<br />
| Keystone || Lance Bragstad || lbragstad<br />
|-<br />
| Magnum || || <br />
|-<br />
| Manila || Dustin Schoenbrun || dustins<br />
|-<br />
| Mistral || || <br />
|-<br />
| Murano || Ekaterina Chernova || katyafervent <br />
|-<br />
| Neutron || Edgar Magana || emagana <br />
|-<br />
| Nova || Joe Gordon or Michael Still || Jog0 or mikal<br />
|-<br />
| Oslo || Doug Hellmann || dhellmann<br />
|-<br />
| Rally || || <br />
|-<br />
| Sahara || Chad Roberts || crobertsrh<br />
|-<br />
| Senlin || Cindia Blue || lixinhui<br />
|-<br />
| Swift || || <br />
|-<br />
| Tripleo || || <br />
|-<br />
| Trove || Laurel Michaels || laurelm<br />
|-<br />
| Zaqar || || <br />
|}<br />
<br />
== Stable Branch ==<br />
<br />
The Stable Branch Liaison is responsible for making sure backports are proposed for critical issues in their project, and for making sure proposed backports are reviewed. They are also the contact point for stable branch release managers around point release times.<br />
<br />
* By default, the liaison will be the PTL.<br />
* The Stable Branch Liaison is considered a contributor to the Release Cycle Management Program and therefore is allowed to vote in its PTL election.<br />
* The liaison may further delegate work to other subject matter experts<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Ceilometer || Eoghan Glynn || eglynn<br />
|-<br />
| Cinder || Jay Bryant || jungleboyj<br />
|-<br />
| Congress || Anusha Ramineni || ramineni<br />
|-<br />
| Glance || Erno Kuvaja || jokke_<br />
|-<br />
| Heat || Zane Bitter || zaneb<br />
|-<br />
| Horizon || Matthias Runge || mrunge <br />
|-<br />
| Ironic || Dmitry Tantsur || dtantsur <br />
|-<br />
| Keystone || Dolph Mathews || dolphm<br />
|-<br />
| Murano || Kirill Zaitsev || kzaitsev_<br />
|-<br />
| Neutron || Ihar Hrachyshka || ihrachys<br />
|-<br />
| Nova || Matt Riedemann || mriedem <br />
|-<br />
| Sahara || || <br />
|-<br />
| Senlin|| Qiming Teng || Qiming<br />
|-<br />
| Swift || || <br />
|-<br />
| Trove || Amrith Kumar || amrith<br />
|-<br />
|}<br />
<br />
== Vulnerability management ==<br />
<br />
The [[Vulnerability Management]] Team needs domain specialists to help assess the impact of reported issues, coordinate the development of patches, review proposed patches, and propose backports. The liaison should be familiar with the [[Vulnerability Management]] process and embargo rules, and have a good grasp of security issues in software design.<br />
<br />
* The liaison should be a core reviewer for the project, but does not need to be the PTL.<br />
* By default, the liaison will be the PTL.<br />
* The liaison is the first line of contact for the Vulnerability Management team members<br />
* The liaison is considered a contributor to the Release Cycle Management Program and therefore is allowed to vote in its PTL election<br />
* The liaison may further delegate work to other subject matter experts<br />
* The liaison maintains the members of the $PROJECT-coresec team in Launchpad (which can be given access to embargoed vulnerabilities)<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Barbican || Douglas Mendizábal or Charles Neill || redrobot / ccneill<br />
|-<br />
| Ceilometer || Lianhao Lu or Gordon Chung || llu/gordc <br />
|-<br />
| Cinder || || <br />
|-<br />
| Congress || Masahito Muroi || masahito <br />
|-<br />
| Glance || Stuart McLaren or Nikhil Komawar || mclaren or nikhil_k <br />
|-<br />
| Heat || Steve Hardy || shardy<br />
|-<br />
| Horizon || Lin Hua Cheng || lhcheng <br />
|-<br />
| Ironic || Jim Rollenhagen || jroll<br />
|-<br />
| Keystone || Dolph Mathews || dolphm<br />
|-<br />
| Neutron || Salvatore Orlando || salv-orlando<br />
|-<br />
| Nova || Michael Still || mikal<br />
|-<br />
| Sahara || Michael McCune or Sergey Lukjanov || elmiko or SergeyLukjanov<br />
|-<br />
| Searchlight || Travis Tripp or Steve McLellan || TravT or sjmc7<br />
|-<br />
| Senlin || Qiming Teng || Qiming<br />
|-<br />
| Swift || || <br />
|-<br />
| Trove || Craig Vyvial or Nikhil Manchanda || cp16net or SlickNik <br />
|-<br />
|}<br />
<br />
== API Working Group ==<br />
<br />
The [[API_Working_Group|API Working Group]] seeks API subject matter experts for each project to communicate plans for API updates, review API guidelines with their project's view in mind, and review the API Working Group guidelines as they are drafted. The liaison should be familiar with the project's REST API design and future planning for changes to it.<br />
<br />
The members of the [http://specs.openstack.org/openstack/api-wg/liaisons.html API Working Group Cross-Project Liaisons] are maintained in our repo. If you want to read the entire list of CPLs or add/remove yourself from the list, you'll need to update the [http://git.openstack.org/cgit/openstack/api-wg/tree/doc/source/liaisons.json liaisons.json] file. If you don't want to make the update yourself, please ask in #openstack-sdks on IRC and someone can make the change for you.<br />
<br />
== Logging Working Group ==<br />
<br />
The [[LogWorkingGroup|Log Working Group]] seeks experts for each project to assist with making the logging in projects match the new [http://specs.openstack.org/openstack/openstack-specs/specs/log-guidelines.html Logging Guidelines]<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Glance || Erno Kuvaja || jokke_<br />
|-<br />
| Oslo || Doug Hellmann || dhellmann<br />
|-<br />
| Nova || John Garbutt || johnthetubaguy<br />
|-<br />
| Murano || Nikolay Starodubtsev || Nikolay_St<br />
|-<br />
| Sahara || Andrey Pavlov || AndreyPavlov<br />
|}<br />
<br />
== Infra ==<br />
<br />
These are the project-specific groups of people that Infra will look to for ACKs on changes to that project's test configuration. Changes to project-config and devstack-gate should be +1'd by these groups when they are related to their project. Note that in an emergency this may not always be possible, and Infra will ask for forgiveness, but generally we should look for these +1s.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Glance || Flavio Percoco, Nikhil Komawar|| flaper87, nikhil_k<br />
<br />
|-<br />
| Neutron || Kyle Mestery, Armando Migliaccio, Doug Wiegley|| mestery, armax, dougwig<br />
|-<br />
| Documentation || Andreas Jaeger|| AJaeger<br />
<br />
|-<br />
| Trove || Nikhil Manchanda || SlickNik<br />
<br />
|-<br />
| Sahara || Sergey Lukjanov || SergeyLukjanov<br />
<br />
|-<br />
| Fuel || Aleksandra Fedorova, Igor Belikov || bookwar, igorbelikov<br />
<br />
|-<br />
| Puppet OpenStack || Emilien Macchi || EmilienM<br />
|}<br />
<br />
== Product Working Group ==<br />
The product working group consists of product managers, technologists, and operators from a diverse set of organizations. The group is working to aggregate user stories from the market-focused teams (Enterprise, Telco, etc.) and cross-project functional teams (e.g. logging, upgrades, etc.), partner with the development community on resourcing, and help gather data to generate a multi-release roadmap. Most of the user stories being tracked by this team consist of items that can span multiple releases and usually have cross-project dependencies.<br />
<br />
More information about the team can be found on the [[ProductTeam|Product WG wiki]].<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Ceilometer ||Krish Ragurham || <br />
|-<br />
| Cinder || Shamail Tahir || shamail<br />
|-<br />
| Glance || Nate Ziemann || nate_zman<br />
|-<br />
| Horizon || Carol Barrett || barrett1<br />
|-<br />
| Keystone || || <br />
|-<br />
| Kolla || Carol Barrett || barrett1<br />
|-<br />
| Magnum || Steve Gordon || sgordon<br />
|-<br />
| Manila || Pete Chadwick || <br />
|-<br />
| Neutron || Mike Cohen, Duane DeCapite || DuaneDeC7<br />
|-<br />
| Nova || Hugh Blemings || hughhalf <br />
|-<br />
| OSClient || Megan Rossetti || MeganR<br />
|-<br />
| Stable Release|| Rochelle Grober || rockig<br />
|-<br />
| Sahara || Ethan Gafford || egafford<br />
|-<br />
| Senlin || Qiming Teng || Qiming<br />
|-<br />
| Swift || Phil Williams || philipw<br />
|-<br />
| Tempest || Arkady Kanevsky || arkady_kanevsky<br />
|-<br />
|}<br />
<br />
== I18n ==<br />
The I18n team is responsible for making OpenStack ubiquitously accessible to people of all language backgrounds. The team has translators from all over the world who translate OpenStack into different languages.<br />
<br />
If you want to communicate with translators on the I18n team, send email to openstack-i18n@lists.openstack.org.<br />
<br />
* The liaison should be a core reviewer for the project and understand the i18n status of the project.<br />
* The liaison should understand the project release schedule very well.<br />
* The liaison should notify the I18n team of important moments in the project release cycle in a timely manner, for example the soft string freeze, the hard string freeze, and the cutting of RC1.<br />
* The liaison should take care of translation patches to the project and make sure the patches are successfully merged into the final release version. When a translation patch fails, the liaison should notify the I18n team.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Glance || || <br />
|-<br />
| Keystone || || <br />
|-<br />
| Neutron || Akihiro Motoki || amotoki<br />
|-<br />
| Nova || Augustina Ragwitz || auggy <br />
|-<br />
| Cinder || || <br />
|-<br />
| Swift || ||<br />
|-<br />
| Horizon || Doug Fish / Akihiro Motoki || doug-fish / amotoki<br />
|-<br />
| Telemetry || || <br />
|-<br />
| Designate || Graham Hayes || mugsie<br />
|-<br />
| Heat || || <br />
|-<br />
| Magnum || Shu Muto || shu-mutou<br />
|-<br />
| Murano || Kirill Zaitsev || kzaitsev_<br />
|-<br />
| Sahara || Chad Roberts || crobertsrh <br />
|-<br />
|}<br />
<br />
== Inter-project Liaisons ==<br />
<br />
In some cases, it is useful to have liaisons between projects. [http://lists.openstack.org/pipermail/openstack-dev/2015-April/062327.html For example, it is useful for the Nova and Neutron projects to have liaisons, because the projects have complex interactions and dependencies.] Ideally, a cross-project effort should have two members, one from each project, to facilitate communication and knowledge transfer.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Projects !! Name !! IRC Handle !! Role<br />
|-<br />
| Nova / Neutron || || ||<br />
|-<br />
| || Sean M. Collins || sc68cal || Neutron liaison for Nova<br />
|-<br />
| Nova / Glance || || ||<br />
|-<br />
| || Flavio Percoco, Mike Fedosin || flaper87, mfedosin || Glance liaison for Nova<br />
|-<br />
| || Jay Pipes || jaypipes || Nova liaison for Glance<br />
|-<br />
| Nova / Cinder || || ||<br />
|-<br />
| || Scott DAngelo || scottda || Cinder liaison for Nova<br />
|-<br />
| || Matt Riedemann || mriedem || Nova liaison for Cinder<br />
|-<br />
| Nova / Ironic || John Villalovos || jlvillal || Ironic liaison for Nova<br />
|-<br />
| || Michael Davies || mrda || Ironic liaison for Nova<br />
|-<br />
| Neutron / Ironic || || ||<br />
|-<br />
| || Sukhdev Kapur || sukhdev || Neutron liaison for Ironic<br />
|-<br />
| || Mitsuhiro SHIGEMATSU and Jim Rollenhagen || pshige and jroll || Ironic liaison for Neutron<br />
|-<br />
| Murano / Glance || || ||<br />
|-<br />
| || Alexander Tivelkov || ativelkov || Glance liaison for Murano, Murano liaison for Glance<br />
|-<br />
| Horizon / i18n || || ||<br />
|-<br />
| || Doug Fish || doug-fish || Horizon liaison for i18n<br />
|-<br />
| Sahara / Heat || || ||<br />
|-<br />
| || Vitaly Gridnev || vgridnev || Sahara liaison for Heat<br />
|-<br />
| || TBD || || Heat liaison for Sahara<br />
|-<br />
| Sahara / Trove || || ||<br />
|-<br />
| || Ethan Gafford || egafford || Sahara liaison for Trove<br />
|-<br />
| || Amrith Kumar || amrith || Trove liaison for Sahara<br />
|-<br />
| Fuel / Puppet || || ||<br />
|-<br />
| || Alex Schultz || mwhahaha || Fuel liaison for Puppet<br />
|-<br />
| Fuel / Ironic || || ||<br />
|-<br />
| || Evgeny L || evgenyl || Fuel liaison for Ironic<br />
|-<br />
| Bareon / Ironic || || ||<br />
|-<br />
| || Evgeny L || evgenyl || Bareon liaison for Ironic<br />
|}<br />
<br />
=== Etherpads ===<br />
<br />
The following is a list of etherpads that are used for inter-project liaisons, and are continuously updated.<br />
<br />
Nova - Neutron: https://etherpad.openstack.org/p/nova-neutron<br />
<br />
== Cross-Project Spec Liaisons ==<br />
<br />
The OpenStack project relies on the cross-project spec liaisons from each participating project to help with coordination and cross-project spec related tasks. See the full set of [http://docs.openstack.org/project-team-guide/cross-project.html#cross-project-specification-liaisons responsibilities]. The liaison defaults to the PTL, but the PTL can also delegate the responsibilities to someone else on the team by updating this table:<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Astara|| Adam Gandelman || adam_g<br />
|-<br />
| Barbican || Douglas Mendizabal || redrobot<br />
|-<br />
| Cinder || Kendall Nelson || diablo_rojo<br />
|-<br />
| Congress || Tim Hinrichs || thinrichs<br />
|-<br />
| Designate || Graham hayes || mugsie<br />
|-<br />
| Fuel || Andrew Woodward || xarses<br />
|-<br />
| Glance || Nikhil Komawar || nikhil<br />
|-<br />
| Heat || Rico Lin || ricolin<br />
|-<br />
| Horizon || David Lyle || david-lyle<br />
|-<br />
| Infrastructure || Matthew Wagoner || olaph<br />
|-<br />
| Ironic || Jim Rollenhagen || jroll<br />
|-<br />
| Keystone || Samuel de Medeiros Queiroz || samueldmq<br />
|-<br />
| Magnum || Adrian Otto || adria̠n_otto<br />
|-<br />
| Manila || Ben Swartzlander || bswartz<br />
|-<br />
| Mistral || Renat Akhmerov || rakhmerov<br />
|-<br />
| Murano || Serg Melikyan || smelikyan<br />
|-<br />
| Neutron || Armando Migliaccio || armax<br />
|-<br />
| Nova || Chris Dent || cdent<br />
|-<br />
| Oslo || Davanum Srinivas || dims<br />
|-<br />
| Sahara || Michael McCune or Sergey Lukjanov || elmiko or SergeyLukjanov<br />
|-<br />
| Searchlight || Travis Tripp || TravT<br />
|-<br />
| Senlin || Qiming Teng || Qiming<br />
|-<br />
| Swift || John Dickinson || notmyname<br />
|-<br />
| Telemetry || Gordon Chung || gordc<br />
|-<br />
| Trove || Amrith Kumar || amrith<br />
|-<br />
| Watcher || Susanne Balle || sballe <br />
|-<br />
| Zaqar || Fei Long Wang || flwang<br />
<br />
|}</div>Thinrichshttps://wiki.openstack.org/w/index.php?title=CrossProjectLiaisons&diff=124021CrossProjectLiaisons2016-04-15T15:57:02Z<p>Thinrichs: /* Stable Branch */</p>
<hr />
<div>Many of our cross-project teams need focused help for communicating with the other project teams. This page lists the people who have volunteered for that work.<br />
<br />
== Oslo ==<br />
<br />
There are now more projects consuming code from the Oslo incubator than we have Oslo contributors. That means we are going to need your help to make these migrations happen. We are asking for one person from each project to serve as a liaison between the project and Oslo, and to assist with integrating changes as we move code out of the incubator into libraries.<br />
<br />
* The liaison should be active in the project and familiar with the project-specific requirements for having patches accepted, but does not need to be a core reviewer or the PTL.<br />
* The liaison should be prepared to assist with writing and reviewing patches in their project as libraries are adopted, and with discussions of API changes to the libraries to make them easier to use within the project.<br />
* Liaisons should pay attention to [Oslo] tagged messages on the openstack-dev mailing list.<br />
* It is also useful for liaisons to be able to attend the Oslo team meeting ([[Meetings/Oslo]]) to participate in discussions and raise issues for real-time discussion.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Barbican || Douglas Mendizábal || redrobot<br />
|-<br />
| Ceilometer || Julien Danjou || jd__<br />
|-<br />
| Cinder || Jay Bryant || jungleboyj<br />
|-<br />
| Congress || Tim Hinrichs || thinrichs<br />
|-<br />
| Cue || Min Pae || sputnik13<br />
|-<br />
| Designate || || <br />
|-<br />
| Glance || Flavio Percoco || flaper87<br />
|-<br />
| Heat || Thomas Herve || therve<br />
|-<br />
| Horizon || Kirill Zaitsev || kzaitsev_<br />
|-<br />
| Ironic || Lin Tan || lintan<br />
|-<br />
| Keystone || Brant Knudson || bknudson<br />
|-<br />
| Manila || Thomas Bechtold || toabctl<br />
|-<br />
| Murano || Kirill Zaitsev || kzaitsev_<br />
|-<br />
| Neutron || Ihar Hrachyshka || ihrachyshka<br />
|-<br />
| Nova || ChangBo Guo <glongwave@gmail.com> || gcb<br />
|-<br />
| [[Octavia]] || Michael Johnson || johnsom<br />
|-<br />
| Sahara || Andrey Pavlov || AndreyPavlov<br />
|-<br />
| Senlin || Yanyan Hu || Yanyanhu<br />
|-<br />
<br />
| Swift || || <br />
|-<br />
| TripleO || Ben Nemec || bnemec<br />
|-<br />
| Trove || Amrith Kumar || amrith<br />
|-<br />
| Zaqar || Flavio Percoco || flaper87<br />
|-<br />
|}<br />
<br />
== Release management ==<br />
<br />
The Release Management Liaison is responsible for communication with the Release Management team, attending the weekly 1:1 syncs in #openstack-relmgr-office, keeping milestone plans up to date, and signing off milestone and release tags. That task has been [[PTL_Guide#Interactions_with_the_Release_team|traditionally filled by the PTL]], but they may now delegate this task if they wish.<br />
<br />
* By default, the liaison will be the PTL.<br />
* The Release Management Liaison is considered a contributor to the Release Cycle Management Program and therefore is allowed to vote in its PTL election.<br />
* The liaison may further delegate work to other subject matter experts<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Barbican || Douglas Mendizábal || redrobot<br />
|-<br />
| Ceilometer || gordon chung || gordc<br />
|-<br />
| Cinder || Sean McGinnis || smcginnis<br />
|-<br />
| Congress || Tim Hinrichs || thinrichs<br />
|-<br />
| Designate || Graham Hayes || mugsie<br />
|-<br />
| Glance || Flavio Percoco || flaper87<br />
|-<br />
| Heat || Sergey Kraynev || skraynev<br />
|-<br />
| Horizon || David Lyle || david-lyle<br />
|-<br />
| Ironic || Jim Rollenhagen || jroll<br />
|-<br />
| Keystone || Steve Martinelli || stevemar<br />
|-<br />
| Manila || Ben Swartzlander || bswartz<br />
|-<br />
| Mistral || Lingxian Kong || kong<br />
|-<br />
| Murano ||Kirill Zaitsev || kzaitsev_<br />
|-<br />
| Neutron || Ihar Hrachyshka || ihrachys<br />
|-<br />
| Nova || Sylvain Bauza || bauzas<br />
|-<br />
| Oslo || Davanum Srinivas || dims<br />
|-<br />
| Sahara || Sergey Lukjanov || SergeyLukjanov<br />
|-<br />
| Searchlight || Travis Tripp || TravT<br />
|-<br />
| Senlin || Qiming Teng || Qiming<br />
|-<br />
| Swift || John Dickinson || notmyname<br />
|-<br />
| Trove || Amrith Kumar || amrith<br />
|-<br />
| Zaqar || || <br />
|}<br />
<br />
== QA ==<br />
<br />
There are now more projects being tested by Tempest and Grenade, or deployable by Devstack, than we have QA contributors. That means we are going to need your help to keep on top of everything. We are asking for one person from each project to serve as a liaison between the project and QA, and to assist with integrating changes as we move forward.<br />
<br />
The liaison should be a core reviewer for the project, but does not need to be the PTL. The liaison should be prepared to assist with writing and reviewing patches that interact with their project, and with discussions of changes to the QA projects to make them easier to use within the project.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Barbican || Steve Heyman || hockeynut <br />
|-<br />
| Ceilometer || Chris Dent || cdent<br />
|-<br />
| Cinder || || <br />
|-<br />
| Congress || Tim Hinrichs || thinrichs<br />
|-<br />
| Glance || Nikhil Komawar || nikhil_k<br />
|-<br />
| Heat || Steve Baker || stevebaker<br />
|-<br />
| Horizon || Timur Sufiev || tsufiev<br />
|-<br />
| Ironic || John Villalovos || jlvillal<br />
|-<br />
| Keystone || David Stanek || dstanek<br />
|-<br />
| Manila || Valeriy Ponomaryov || vponomaryov<br />
|-<br />
| Neutron || Salvatore Orlando || salv-orlando<br />
|-<br />
| Nova || Matt Riedemann || mriedem<br />
|-<br />
| Oslo || Davanum Srinivas || dims <br />
|-<br />
| Sahara || Luigi Toscano || tosky<br />
|-<br />
| Senlin || Haiwei Xu || haiwei-xu<br />
|-<br />
| Swift || || <br />
|-<br />
| Trove || Craig Vyvial and Nirav Shah || cp16net and nshah<br />
|-<br />
| Zaqar || || <br />
|}<br />
<br />
== Documentation ==<br />
<br />
The OpenStack documentation is centralized on docs.openstack.org, but there is often a need for specialty information when reviewing patches or triaging doc bugs. A doc liaison should be available to triage doc bugs when the docs team members don't know enough to triage accurately, and to be added to doc reviews that affect your project. You'd be notified through email when you're added to either a doc bug or a doc review. We would also appreciate attendance at the [https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting weekly doc team meeting]; we meet in #openstack-meeting every Wednesday, at alternating times for different timezones:<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Barbican || Constanze Kratel || constanze <br />
|-<br />
| Ceilometer || Ildiko Vancsa || ildikov<br />
|-<br />
| Cinder || Sean Mcginnis || smcginnis <br />
|-<br />
| Congress || Tim Hinrichs || thinrichs <br />
|-<br />
| Designate || || <br />
|-<br />
| Glance || Brian Rosmaita || rosmaita<br />
|-<br />
| Heat || Randall Burt || randallburt<br />
|-<br />
| Horizon || Rob Cresswell || robcresswell<br />
|-<br />
| Ironic || Mitsuhiro SHIGEMATSU || pshige<br />
|-<br />
| Keystone || Lance Bragstad || lbragstad<br />
|-<br />
| Magnum || || <br />
|-<br />
| Manila || Dustin Schoenbrun || dustins<br />
|-<br />
| Mistral || || <br />
|-<br />
| Murano || Ekaterina Chernova || katyafervent <br />
|-<br />
| Neutron || Edgar Magana || emagana <br />
|-<br />
| Nova || Joe Gordon or Michael Still || Jog0 or mikal<br />
|-<br />
| Oslo || Doug Hellmann || dhellmann<br />
|-<br />
| Rally || || <br />
|-<br />
| Sahara || Chad Roberts || crobertsrh<br />
|-<br />
| Senlin || Cindia Blue || lixinhui<br />
|-<br />
| Swift || || <br />
|-<br />
| Tripleo || || <br />
|-<br />
| Trove || Laurel Michaels || laurelm<br />
|-<br />
| Zaqar || || <br />
|}<br />
<br />
== Stable Branch ==<br />
<br />
The Stable Branch Liaison is responsible for making sure backports are proposed for critical issues in their project, and for making sure proposed backports are reviewed. They are also the contact point for stable branch release managers around point release times.<br />
<br />
* By default, the liaison will be the PTL.<br />
* The Stable Branch Liaison is considered a contributor to the Release Cycle Management Program and therefore is allowed to vote in its PTL election.<br />
* The liaison may further delegate work to other subject matter experts<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Ceilometer || Eoghan Glynn || eglynn<br />
|-<br />
| Cinder || Jay Bryant || jungleboyj<br />
|-<br />
| Congress || Anusha Ramineni || ramineni<br />
|-<br />
| Glance || Erno Kuvaja || jokke_<br />
|-<br />
| Heat || Zane Bitter || zaneb<br />
|-<br />
| Horizon || Matthias Runge || mrunge <br />
|-<br />
| Ironic || Dmitry Tantsur || dtantsur <br />
|-<br />
| Keystone || Dolph Mathews || dolphm<br />
|-<br />
| Murano || Kirill Zaitsev || kzaitsev_<br />
|-<br />
| Neutron || Ihar Hrachyshka || ihrachys<br />
|-<br />
| Nova || Matt Riedemann || mriedem <br />
|-<br />
| Sahara || Sergey Lukjanov || SergeyLukjanov<br />
|-<br />
| Senlin|| Qiming Teng || Qiming<br />
|-<br />
| Swift || || <br />
|-<br />
| Trove || Amrith Kumar || amrith<br />
|-<br />
|}<br />
<br />
== Vulnerability management ==<br />
<br />
The [[Vulnerability Management]] Team needs domain specialists to help assess the impact of reported issues, coordinate the development of patches, review proposed patches, and propose backports. The liaison should be familiar with the [[Vulnerability Management]] process and embargo rules, and have a good grasp of security issues in software design.<br />
<br />
* The liaison should be a core reviewer for the project, but does not need to be the PTL.<br />
* By default, the liaison will be the PTL.<br />
* The liaison is the first line of contact for the Vulnerability Management team members<br />
* The liaison is considered a contributor to the Release Cycle Management Program and therefore is allowed to vote in its PTL election<br />
* The liaison may further delegate work to other subject matter experts<br />
* The liaison maintains the members of the $PROJECT-coresec team in Launchpad (which can be given access to embargoed vulnerabilities)<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Barbican || Douglas Mendizábal or Charles Neill || redrobot / ccneill<br />
|-<br />
| Ceilometer || Lianhao Lu or Gordon Chung || llu/gordc <br />
|-<br />
| Cinder || || <br />
|-<br />
| Glance || Stuart McLaren or Nikhil Komawar || mclaren or nikhil_k <br />
|-<br />
| Heat || Steve Hardy || shardy<br />
|-<br />
| Horizon || Lin Hua Cheng || lhcheng <br />
|-<br />
| Ironic || Jim Rollenhagen || jroll<br />
|-<br />
| Keystone || Dolph Mathews || dolphm<br />
|-<br />
| Neutron || Salvatore Orlando || salv-orlando<br />
|-<br />
| Nova || Michael Still || mikal<br />
|-<br />
| Sahara || Michael McCune or Sergey Lukjanov || elmiko or SergeyLukjanov<br />
|-<br />
| Searchlight || Travis Tripp or Steve McLellan || TravT or sjmc7<br />
|-<br />
| Senlin || Qiming Teng || Qiming<br />
|-<br />
| Swift || || <br />
|-<br />
| Trove || Craig Vyvial or Nikhil Manchanda || cp16net or SlickNik <br />
|-<br />
|}<br />
<br />
== API Working Group ==<br />
<br />
The [[API_Working_Group|API Working Group]] seeks API subject matter experts for each project to communicate plans for API updates, review API guidelines with their project's view in mind, and review the API Working Group guidelines as they are drafted. The liaison should be familiar with the project's REST API design and future planning for changes to it.<br />
<br />
The members of the [http://specs.openstack.org/openstack/api-wg/liaisons.html API Working Group Cross-Project Liaisons] are maintained in our repo. If you want to read the entire list of CPLs or add/remove yourself from the list, you'll need to update the [http://git.openstack.org/cgit/openstack/api-wg/tree/doc/source/liaisons.json liaisons.json] file. If you don't want to make the update yourself, please ask in #openstack-sdks on IRC and someone can make the change for you.<br />
<br />
== Logging Working Group ==<br />
<br />
The [[LogWorkingGroup|Log Working Group]] seeks experts for each project to assist with making the logging in projects match the new [http://specs.openstack.org/openstack/openstack-specs/specs/log-guidelines.html Logging Guidelines]<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Glance || Erno Kuvaja || jokke_<br />
|-<br />
| Oslo || Doug Hellmann || dhellmann<br />
|-<br />
| Nova || John Garbutt || johnthetubaguy<br />
|-<br />
| Murano || Nikolay Starodubtsev || Nikolay_St<br />
|-<br />
| Sahara || Andrey Pavlov || AndreyPavlov<br />
|}<br />
<br />
== Infra ==<br />
<br />
These are the project-specific groups of people that Infra will look to for ACKs on changes to that project's test configuration. Changes to project-config and devstack-gate should be +1'd by these groups when they are related to their project. Note that in an emergency this may not always be possible, and Infra will ask for forgiveness, but generally we should look for these +1s.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Glance || Flavio Percoco, Nikhil Komawar|| flaper87, nikhil_k<br />
<br />
|-<br />
| Neutron || Kyle Mestery, Armando Migliaccio, Doug Wiegley|| mestery, armax, dougwig<br />
|-<br />
| Documentation || Andreas Jaeger|| AJaeger<br />
<br />
|-<br />
| Trove || Nikhil Manchanda || SlickNik<br />
<br />
|-<br />
| Sahara || Sergey Lukjanov || SergeyLukjanov<br />
<br />
|-<br />
| Fuel || Aleksandra Fedorova, Igor Belikov || bookwar, igorbelikov<br />
<br />
|-<br />
| Puppet OpenStack || Emilien Macchi || EmilienM<br />
|}<br />
<br />
== Product Working Group ==<br />
The Product Working Group consists of product managers, technologists, and operators from a diverse set of organizations. The group is working to aggregate user stories from the market-focused teams (Enterprise, Telco, etc.) and cross-project functional teams (e.g. logging, upgrades, etc.), partner with the development community on resourcing, and help gather data to generate a multi-release roadmap. Most of the user stories being tracked by this team consist of items that can span multiple releases and usually have cross-project dependencies.<br />
<br />
More information about the team can be found on the [[ProductTeam|Product WG wiki]].<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Ceilometer ||Krish Ragurham || <br />
|-<br />
| Cinder || Shamail Tahir || shamail<br />
|-<br />
| Glance || Nate Ziemann || nate_zman<br />
|-<br />
| Horizon || Carol Barrett || barrett1<br />
|-<br />
| Keystone || || <br />
|-<br />
| Kolla || Carol Barrett || barrett1<br />
|-<br />
| Magnum || Steve Gordon || sgordon<br />
|-<br />
| Manila || Pete Chadwick || <br />
|-<br />
| Neutron || Mike Cohen, Duane DeCapite || DuaneDeC7<br />
|-<br />
| Nova || Hugh Blemings || hughhalf <br />
|-<br />
| OSClient || Megan Rossetti || MeganR<br />
|-<br />
| Stable Release|| Rochelle Grober || rockig<br />
|-<br />
| Sahara || Ethan Gafford || egafford<br />
|-<br />
| Senlin || Qiming Teng || Qiming<br />
|-<br />
| Swift || Phil Williams || philipw<br />
|-<br />
| Tempest || Arkady Kanevsky || arkady_kanevsky<br />
|-<br />
|}<br />
<br />
== I18n ==<br />
The I18n team is responsible for making OpenStack ubiquitously accessible to people of all language backgrounds. The team has translators from all over the world who translate OpenStack into different languages.<br />
<br />
If you want to communicate with translators on the I18n team, send email to openstack-i18n@lists.openstack.org.<br />
<br />
* The liaison should be a core reviewer for the project and understand the i18n status of the project.<br />
* The liaison should understand the project release schedule very well.<br />
* The liaison should notify the I18n team of important moments in the project release cycle in a timely manner, for example the soft string freeze, the hard string freeze, and the cutting of RC1.<br />
* The liaison should take care of translation patches to the project and make sure the patches are successfully merged into the final release version. When a translation patch fails, the liaison should notify the I18n team.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Glance || || <br />
|-<br />
| Keystone || || <br />
|-<br />
| Neutron || Akihiro Motoki || amotoki<br />
|-<br />
| Nova || Augustina Ragwitz || auggy <br />
|-<br />
| Cinder || || <br />
|-<br />
| Swift || ||<br />
|-<br />
| Horizon || Doug Fish / Akihiro Motoki || doug-fish / amotoki<br />
|-<br />
| Telemetry || || <br />
|-<br />
| Designate || Graham Hayes || mugsie<br />
|-<br />
| Heat || || <br />
|-<br />
| Magnum || Shu Muto || shu-mutou<br />
|-<br />
| Murano || Kirill Zaitsev || kzaitsev_<br />
|-<br />
| Sahara || Chad Roberts || crobertsrh <br />
|-<br />
|}<br />
<br />
== Inter-project Liaisons ==<br />
<br />
In some cases, it is useful to have liaisons between projects. [http://lists.openstack.org/pipermail/openstack-dev/2015-April/062327.html For example, it is useful for the Nova and Neutron projects to have liaisons, because the projects have complex interactions and dependencies.] Ideally, a cross-project effort should have two members, one from each project, to facilitate communication and knowledge transfer.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Projects !! Name !! IRC Handle !! Role<br />
|-<br />
| Nova / Neutron || || ||<br />
|-<br />
| || Sean M. Collins || sc68cal || Neutron liaison for Nova<br />
|-<br />
| Nova / Glance || || ||<br />
|-<br />
| || Flavio Percoco, Mike Fedosin || flaper87, mfedosin || Glance liaison for Nova<br />
|-<br />
| || Jay Pipes || jaypipes || Nova liaison for Glance<br />
|-<br />
| Nova / Cinder || || ||<br />
|-<br />
| || Scott DAngelo || scottda || Cinder liaison for Nova<br />
|-<br />
| || Matt Riedemann || mriedem || Nova liaison for Cinder<br />
|-<br />
| Nova / Ironic || John Villalovos || jlvillal || Ironic liaison for Nova<br />
|-<br />
| || Michael Davies || mrda || Ironic liaison for Nova<br />
|-<br />
| Neutron / Ironic || || ||<br />
|-<br />
| || Sukhdev Kapur || sukhdev || Neutron liaison for Ironic<br />
|-<br />
| || Mitsuhiro SHIGEMATSU and Jim Rollenhagen || pshige and jroll || Ironic liaison for Neutron<br />
|-<br />
| Murano / Glance || || ||<br />
|-<br />
| || Alexander Tivelkov || ativelkov || Glance liaison for Murano, Murano liaison for Glance<br />
|-<br />
| Horizon / i18n || || ||<br />
|-<br />
| || Doug Fish || doug-fish || Horizon liaison for i18n<br />
|-<br />
| Sahara / Heat || || ||<br />
|-<br />
| || Vitaly Gridnev || vgridnev || Sahara liaison for Heat<br />
|-<br />
| || TBD || || Heat liaison for Sahara<br />
|-<br />
| Sahara / Trove || || ||<br />
|-<br />
| || Ethan Gafford || egafford || Sahara liaison for Trove<br />
|-<br />
| || Amrith Kumar || amrith || Trove liaison for Sahara<br />
|-<br />
| Fuel / Puppet || || ||<br />
|-<br />
| || Alex Schultz || mwhahaha || Fuel liaison for Puppet<br />
|-<br />
| Fuel / Ironic || || ||<br />
|-<br />
| || Evgeny L || evgenyl || Fuel liaison for Ironic<br />
|-<br />
| Bareon / Ironic || || ||<br />
|-<br />
| || Evgeny L || evgenyl || Bareon liaison for Ironic<br />
|}<br />
<br />
=== Etherpads ===<br />
<br />
The following is a list of etherpads that are used for inter-project liaisons, and are continuously updated.<br />
<br />
Nova - Neutron: https://etherpad.openstack.org/p/nova-neutron<br />
<br />
== Cross-Project Spec Liaisons ==<br />
<br />
The OpenStack project relies on the cross-project spec liaisons from each participating project to help with coordination and cross-project spec related tasks. See the full set of [http://docs.openstack.org/project-team-guide/cross-project.html#cross-project-specification-liaisons responsibilities]. The liaison defaults to the PTL, but the PTL can also delegate the responsibilities to someone else on the team by updating this table:<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Astara|| Adam Gandelman || adam_g<br />
|-<br />
| Barbican || Douglas Mendizabal || redrobot<br />
|-<br />
| Cinder || Kendall Nelson || diablo_rojo<br />
|-<br />
| Congress || Tim Hinrichs || thinrichs<br />
|-<br />
| Designate || Graham Hayes || mugsie<br />
|-<br />
| Fuel || Andrew Woodward || xarses<br />
|-<br />
| Glance || Nikhil Komawar || nikhil<br />
|-<br />
| Heat || Rico Lin || ricolin<br />
|-<br />
| Horizon || David Lyle || david-lyle<br />
|-<br />
| Infrastructure || Matthew Wagoner || olaph<br />
|-<br />
| Ironic || Jim Rollenhagen || jroll<br />
|-<br />
| Keystone || Samuel de Medeiros Queiroz || samueldmq<br />
|-<br />
| Magnum || Adrian Otto || adrian_otto<br />
|-<br />
| Manila || Ben Swartzlander || bswartz<br />
|-<br />
| Mistral || Renat Akhmerov || rakhmerov<br />
|-<br />
| Murano || Serg Melikyan || smelikyan<br />
|-<br />
| Neutron || Armando Migliaccio || armax<br />
|-<br />
| Nova || Chris Dent || cdent<br />
|-<br />
| Oslo || Davanum Srinivas || dims<br />
|-<br />
| Sahara || Michael McCune or Sergey Lukjanov || elmiko or SergeyLukjanov<br />
|-<br />
| Searchlight || Travis Tripp || TravT<br />
|-<br />
| Senlin || Qiming Teng || Qiming<br />
|-<br />
| Swift || John Dickinson || notmyname<br />
|-<br />
| Telemetry || Gordon Chung || gordc<br />
|-<br />
| Trove || Amrith Kumar || amrith<br />
|-<br />
| Watcher || Susanne Balle || sballe <br />
|-<br />
| Zaqar || Fei Long Wang || flwang<br />
<br />
|}</div>Thinrichshttps://wiki.openstack.org/w/index.php?title=CrossProjectLiaisons&diff=124020CrossProjectLiaisons2016-04-15T15:55:43Z<p>Thinrichs: /* Documentation */</p>
<hr />
<div>Many of our cross-project teams need focused help for communicating with the other project teams. This page lists the people who have volunteered for that work.<br />
<br />
== Oslo ==<br />
<br />
There are now more projects consuming code from the Oslo incubator than we have Oslo contributors. That means we are going to need your help to make these migrations happen. We are asking for one person from each project to serve as a liaison between the project and Oslo, and to assist with integrating changes as we move code out of the incubator into libraries.<br />
<br />
* The liaison should be active in the project and familiar with the project-specific requirements for having patches accepted, but does not need to be a core reviewer or the PTL.<br />
* The liaison should be prepared to assist with writing and reviewing patches in their project as libraries are adopted, and with discussions of API changes to the libraries to make them easier to use within the project.<br />
* Liaisons should pay attention to [Oslo] tagged messages on the openstack-dev mailing list.<br />
* It is also useful for liaisons to be able to attend the Oslo team meeting ([[Meetings/Oslo]]) to participate in discussions and raise issues for real-time discussion.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Barbican || Douglas Mendizábal || redrobot<br />
|-<br />
| Ceilometer || Julien Danjou || jd__<br />
|-<br />
| Cinder || Jay Bryant || jungleboyj<br />
|-<br />
| Congress || Tim Hinrichs || thinrichs<br />
|-<br />
| Cue || Min Pae || sputnik13<br />
|-<br />
| Designate || || <br />
|-<br />
| Glance || Flavio Percoco || flaper87<br />
|-<br />
| Heat || Thomas Herve || therve<br />
|-<br />
| Horizon || Kirill Zaitsev || kzaitsev_<br />
|-<br />
| Ironic || Lin Tan || lintan<br />
|-<br />
| Keystone || Brant Knudson || bknudson<br />
|-<br />
| Manila || Thomas Bechtold || toabctl<br />
|-<br />
| Murano || Kirill Zaitsev || kzaitsev_<br />
|-<br />
| Neutron || Ihar Hrachyshka || ihrachyshka<br />
|-<br />
| Nova || ChangBo Guo <glongwave@gmail.com> || gcb<br />
|-<br />
| [[Octavia]] || Michael Johnson || johnsom<br />
|-<br />
| Sahara || Andrey Pavlov || AndreyPavlov<br />
|-<br />
| Senlin || Yanyan Hu || Yanyanhu<br />
|-<br />
<br />
| Swift || || <br />
|-<br />
| TripleO || Ben Nemec || bnemec<br />
|-<br />
| Trove || Amrith Kumar || amrith<br />
|-<br />
| Zaqar || Flavio Percoco || flaper87<br />
|-<br />
|}<br />
<br />
== Release management ==<br />
<br />
The Release Management Liaison is responsible for communication with the Release Management team, attending the weekly 1:1 syncs in #openstack-relmgr-office, keeping milestone plans up to date, and signing off milestone and release tags. That task has been [[PTL_Guide#Interactions_with_the_Release_team|traditionally filled by the PTL]], but they may now delegate this task if they wish.<br />
<br />
* By default, the liaison will be the PTL.<br />
* The Release Management Liaison is considered a contributor to the Release Cycle Management Program and therefore is allowed to vote in its PTL election.<br />
* The liaison may further delegate work to other subject matter experts<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Barbican || Douglas Mendizábal || redrobot<br />
|-<br />
| Ceilometer || gordon chung || gordc<br />
|-<br />
| Cinder || Sean McGinnis || smcginnis<br />
|-<br />
| Congress || Tim Hinrichs || thinrichs<br />
|-<br />
| Designate || Graham Hayes || mugsie<br />
|-<br />
| Glance || Flavio Percoco || flaper87<br />
|-<br />
| Heat || Sergey Kraynev || skraynev<br />
|-<br />
| Horizon || David Lyle || david-lyle<br />
|-<br />
| Ironic || Jim Rollenhagen || jroll<br />
|-<br />
| Keystone || Steve Martinelli || stevemar<br />
|-<br />
| Manila || Ben Swartzlander || bswartz<br />
|-<br />
| Mistral || Lingxian Kong || kong<br />
|-<br />
| Murano ||Kirill Zaitsev || kzaitsev_<br />
|-<br />
| Neutron || Ihar Hrachyshka || ihrachys<br />
|-<br />
| Nova || Sylvain Bauza || bauzas<br />
|-<br />
| Oslo || Davanum Srinivas || dims<br />
|-<br />
| Sahara || Sergey Lukjanov || SergeyLukjanov<br />
|-<br />
| Searchlight || Travis Tripp || TravT<br />
|-<br />
| Senlin || Qiming Teng || Qiming<br />
|-<br />
| Swift || John Dickinson || notmyname<br />
|-<br />
| Trove || Amrith Kumar || amrith<br />
|-<br />
| Zaqar || || <br />
|}<br />
<br />
== QA ==<br />
<br />
There are now more projects being tested by Tempest and Grenade, or deployable by Devstack, than we have QA contributors. That means we are going to need your help to keep on top of everything. We are asking for one person from each project to serve as a liaison between the project and QA, and to assist with integrating changes as we move forward.<br />
<br />
The liaison should be a core reviewer for the project, but does not need to be the PTL. The liaison should be prepared to assist with writing and reviewing patches that interact with their project, and with discussions of changes to the QA projects to make them easier to use within the project.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Barbican || Steve Heyman || hockeynut <br />
|-<br />
| Ceilometer || Chris Dent || cdent<br />
|-<br />
| Cinder || || <br />
|-<br />
| Congress || Tim Hinrichs || thinrichs<br />
|-<br />
| Glance || Nikhil Komawar || nikhil_k<br />
|-<br />
| Heat || Steve Baker || stevebaker<br />
|-<br />
| Horizon || Timur Sufiev || tsufiev<br />
|-<br />
| Ironic || John Villalovos || jlvillal<br />
|-<br />
| Keystone || David Stanek || dstanek<br />
|-<br />
| Manila || Valeriy Ponomaryov || vponomaryov<br />
|-<br />
| Neutron || Salvatore Orlando || salv-orlando<br />
|-<br />
| Nova || Matt Riedemann || mriedem<br />
|-<br />
| Oslo || Davanum Srinivas || dims <br />
|-<br />
| Sahara || Luigi Toscano || tosky<br />
|-<br />
| Senlin || Haiwei Xu || haiwei-xu<br />
|-<br />
| Swift || || <br />
|-<br />
| Trove || Craig Vyvial and Nirav Shah || cp16net and nshah<br />
|-<br />
| Zaqar || || <br />
|}<br />
<br />
== Documentation ==<br />
<br />
The OpenStack documentation is centralized on docs.openstack.org, but there is often a need for specialty information when reviewing patches or triaging doc bugs. A doc liaison should be available to triage doc bugs when the docs team members don't know enough to triage accurately, and be added to doc reviews that affect your project. You'd be notified through email when you're added either to a doc bug or a doc review. We would also appreciate attendance at the [https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting weekly doc team meeting]; we meet in #openstack-meeting every Wednesday at alternating times for different time zones:<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Barbican || Constanze Kratel || constanze <br />
|-<br />
| Ceilometer || Ildiko Vancsa || ildikov<br />
|-<br />
| Cinder || Sean McGinnis || smcginnis<br />
|-<br />
| Congress || Tim Hinrichs || thinrichs <br />
|-<br />
| Designate || || <br />
|-<br />
| Glance || Brian Rosmaita || rosmaita<br />
|-<br />
| Heat || Randall Burt || randallburt<br />
|-<br />
| Horizon || Rob Cresswell || robcresswell<br />
|-<br />
| Ironic || Mitsuhiro SHIGEMATSU || pshige<br />
|-<br />
| Keystone || Lance Bragstad || lbragstad<br />
|-<br />
| Magnum || || <br />
|-<br />
| Manila || Dustin Schoenbrun || dustins<br />
|-<br />
| Mistral || || <br />
|-<br />
| Murano || Ekaterina Chernova || katyafervent <br />
|-<br />
| Neutron || Edgar Magana || emagana <br />
|-<br />
| Nova || Joe Gordon or Michael Still || Jog0 or mikal<br />
|-<br />
| Oslo || Doug Hellmann || dhellmann<br />
|-<br />
| Rally || || <br />
|-<br />
| Sahara || Chad Roberts || crobertsrh<br />
|-<br />
| Senlin || Cindia Blue || lixinhui<br />
|-<br />
| Swift || || <br />
|-<br />
| Tripleo || || <br />
|-<br />
| Trove || Laurel Michaels || laurelm<br />
|-<br />
| Zaqar || || <br />
|}<br />
<br />
== Stable Branch ==<br />
<br />
The Stable Branch Liaison is responsible for making sure backports are proposed for critical issues in their project, and for making sure proposed backports are reviewed. They are also the contact point for stable branch release managers around point release times.<br />
<br />
* By default, the liaison will be the PTL.<br />
* The Stable Branch Liaison is considered a contributor to the Release Cycle Management Program and therefore is allowed to vote in its PTL election.<br />
* The liaison may further delegate work to other subject matter experts<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Ceilometer || Eoghan Glynn || eglynn<br />
|-<br />
| Cinder || Jay Bryant || jungleboyj<br />
|-<br />
| Glance || Erno Kuvaja || jokke_<br />
|-<br />
| Heat || Zane Bitter || zaneb<br />
|-<br />
| Horizon || Matthias Runge || mrunge <br />
|-<br />
| Ironic || Dmitry Tantsur || dtantsur <br />
|-<br />
| Keystone || Dolph Mathews || dolphm<br />
|-<br />
| Murano || Kirill Zaitsev || kzaitsev_<br />
|-<br />
| Neutron || Ihar Hrachyshka || ihrachys<br />
|-<br />
| Nova || Matt Riedemann || mriedem <br />
|-<br />
| Sahara || Sergey Lukjanov || SergeyLukjanov<br />
|-<br />
| Senlin|| Qiming Teng || Qiming<br />
|-<br />
| Swift || || <br />
|-<br />
| Trove || Amrith Kumar || amrith<br />
|-<br />
|}<br />
<br />
== Vulnerability management ==<br />
<br />
The [[Vulnerability Management]] Team needs domain specialists to help assess the impact of reported issues, coordinate the development of patches, review proposed patches, and propose backports. The liaison should be familiar with the [[Vulnerability Management]] process and embargo rules, and have a good grasp of security issues in software design.<br />
<br />
* The liaison should be a core reviewer for the project, but does not need to be the PTL.<br />
* By default, the liaison will be the PTL.<br />
* The liaison is the first line of contact for the Vulnerability Management team members<br />
* The liaison is considered a contributor to the Release Cycle Management Program and therefore is allowed to vote in its PTL election<br />
* The liaison may further delegate work to other subject matter experts<br />
* The liaison maintains the members of the $PROJECT-coresec team in Launchpad (which can be given access to embargoed vulnerabilities)<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Barbican || Douglas Mendizábal or Charles Neill || redrobot / ccneill<br />
|-<br />
| Ceilometer || Lianhao Lu or Gordon Chung || llu/gordc <br />
|-<br />
| Cinder || || <br />
|-<br />
| Glance || Stuart McLaren or Nikhil Komawar || mclaren or nikhil_k <br />
|-<br />
| Heat || Steve Hardy || shardy<br />
|-<br />
| Horizon || Lin Hua Cheng || lhcheng <br />
|-<br />
| Ironic || Jim Rollenhagen || jroll<br />
|-<br />
| Keystone || Dolph Mathews || dolphm<br />
|-<br />
| Neutron || Salvatore Orlando || salv-orlando<br />
|-<br />
| Nova || Michael Still || mikal<br />
|-<br />
| Sahara || Michael McCune or Sergey Lukjanov || elmiko or SergeyLukjanov<br />
|-<br />
| Searchlight || Travis Tripp or Steve McLellan || TravT or sjmc7<br />
|-<br />
| Senlin || Qiming Teng || Qiming<br />
|-<br />
| Swift || || <br />
|-<br />
| Trove || Craig Vyvial or Nikhil Manchanda || cp16net or SlickNik <br />
|-<br />
|}<br />
<br />
== API Working Group ==<br />
<br />
The [[API_Working_Group|API Working Group]] seeks API subject matter experts for each project to communicate plans for API updates, review API guidelines with their project's view in mind, and review the API Working Group guidelines as they are drafted. The liaison should be familiar with the project's REST API design and future planning for changes to it.<br />
<br />
The members of the [http://specs.openstack.org/openstack/api-wg/liaisons.html API Working Group Cross-Project Liaisons] are maintained in our repo. If you want to read the entire list of CPLs or add/remove yourself from the list, you'll need to update the [http://git.openstack.org/cgit/openstack/api-wg/tree/doc/source/liaisons.json liaisons.json] file. If you don't want to make the update yourself, please ask in #openstack-sdks on IRC and someone can make the change for you.<br />
<br />
== Logging Working Group ==<br />
<br />
The [[LogWorkingGroup|Log Working Group]] seeks experts for each project to assist with making the logging in projects match the new [http://specs.openstack.org/openstack/openstack-specs/specs/log-guidelines.html Logging Guidelines].<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Glance || Erno Kuvaja || jokke_<br />
|-<br />
| Oslo || Doug Hellmann || dhellmann<br />
|-<br />
| Nova || John Garbutt || johnthetubaguy<br />
|-<br />
| Murano || Nikolay Starodubtsev || Nikolay_St<br />
|-<br />
| Sahara || Andrey Pavlov || AndreyPavlov<br />
|}<br />
<br />
== Infra ==<br />
<br />
These are the project-specific groups of people that Infra will look to for ACKing changes to that project's test configuration. Changes to project-config and devstack-gate should be +1'd by these groups when they are related to their project. Note that in an emergency this may not always be possible, and Infra will ask for forgiveness, but generally we should look for these +1s.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Glance || Flavio Percoco, Nikhil Komawar|| flaper87, nikhil_k<br />
<br />
|-<br />
| Neutron || Kyle Mestery, Armando Migliaccio, Doug Wiegley|| mestery, armax, dougwig<br />
|-<br />
| Documentation || Andreas Jaeger|| AJaeger<br />
<br />
|-<br />
| Trove || Nikhil Manchanda || SlickNik<br />
<br />
|-<br />
| Sahara || Sergey Lukjanov || SergeyLukjanov<br />
<br />
|-<br />
| Fuel || Aleksandra Fedorova, Igor Belikov || bookwar, igorbelikov<br />
<br />
|-<br />
| Puppet OpenStack || Emilien Macchi || EmilienM<br />
|}<br />
<br />
== Product Working Group ==<br />
The Product Working Group consists of product managers, technologists, and operators from a diverse set of organizations. The group is working to aggregate user stories from the market-focused teams (Enterprise, Telco, etc.) and cross-project functional teams (e.g. logging, upgrades, etc.), partner with the development community on resourcing, and help gather data to generate a multi-release roadmap. Most of the user stories being tracked by this team consist of items that can span multiple releases and usually have cross-project dependencies.<br />
<br />
More information about the team can be found on the [[ProductTeam|Product WG wiki]].<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Ceilometer ||Krish Ragurham || <br />
|-<br />
| Cinder || Shamail Tahir || shamail<br />
|-<br />
| Glance || Nate Ziemann || nate_zman<br />
|-<br />
| Horizon || Carol Barrett || barrett1<br />
|-<br />
| Keystone || || <br />
|-<br />
| Kolla || Carol Barrett || barrett1<br />
|-<br />
| Magnum || Steve Gordon || sgordon<br />
|-<br />
| Manila || Pete Chadwick || <br />
|-<br />
| Neutron || Mike Cohen, Duane DeCapite || DuaneDeC7<br />
|-<br />
| Nova || Hugh Blemings || hughhalf <br />
|-<br />
| OSClient || Megan Rossetti || MeganR<br />
|-<br />
| Stable Release|| Rochelle Grober || rockig<br />
|-<br />
| Sahara || Ethan Gafford || egafford<br />
|-<br />
| Senlin || Qiming Teng || Qiming<br />
|-<br />
| Swift || Phil Williams || philipw<br />
|-<br />
| Tempest || Arkady Kanevsky || arkady_kanevsky<br />
|-<br />
|}<br />
<br />
== I18n ==<br />
The I18n team is responsible for making OpenStack ubiquitously accessible to people of all language backgrounds. The team has translators from all over the world who translate OpenStack into different languages.<br />
<br />
If you want to communicate with translators on the I18n team, send email to openstack-i18n@lists.openstack.org.<br />
<br />
* The liaison should be a core reviewer for the project and understand the i18n status of the project.<br />
* The liaison should understand the project release schedule very well.<br />
* The liaison should notify the I18n team of important moments in the project release cycle in a timely manner, for example the soft string freeze, the hard string freeze, and the cutting of RC1.<br />
* The liaison should take care of translation patches to the project and make sure the patches are successfully merged into the final release version. When a translation patch fails, the liaison should notify the I18n team.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Glance || || <br />
|-<br />
| Keystone || || <br />
|-<br />
| Neutron || Akihiro Motoki || amotoki<br />
|-<br />
| Nova || Augustina Ragwitz || auggy <br />
|-<br />
| Cinder || || <br />
|-<br />
| Swift || ||<br />
|-<br />
| Horizon || Doug Fish / Akihiro Motoki || doug-fish / amotoki<br />
|-<br />
| Telemetry || || <br />
|-<br />
| Designate || Graham Hayes || mugsie<br />
|-<br />
| Heat || || <br />
|-<br />
| Magnum || Shu Muto || shu-mutou<br />
|-<br />
| Murano || Kirill Zaitsev || kzaitsev_<br />
|-<br />
| Sahara || Chad Roberts || crobertsrh <br />
|-<br />
|}<br />
<br />
== Inter-project Liaisons ==<br />
<br />
In some cases, it is useful to have liaisons between projects. [http://lists.openstack.org/pipermail/openstack-dev/2015-April/062327.html For example, it is useful for the Nova and Neutron projects to have liaisons, because the projects have complex interactions and dependencies.] Ideally, a cross-project effort should have two members, one from each project, to facilitate communication and knowledge transfer.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Projects !! Name !! IRC Handle !! Role<br />
|-<br />
| Nova / Neutron || || ||<br />
|-<br />
| || Sean M. Collins || sc68cal || Neutron liaison for Nova<br />
|-<br />
| Nova / Glance || || ||<br />
|-<br />
| || Flavio Percoco, Mike Fedosin || flaper87, mfedosin || Glance liaison for Nova<br />
|-<br />
| || Jay Pipes || jaypipes || Nova liaison for Glance<br />
|-<br />
| Nova / Cinder || || ||<br />
|-<br />
| || Scott DAngelo || scottda || Cinder liaison for Nova<br />
|-<br />
| || Matt Riedemann || mriedem || Nova liaison for Cinder<br />
|-<br />
| Nova / Ironic || John Villalovos || jlvillal || Ironic liaison for Nova<br />
|-<br />
| || Michael Davies || mrda || Ironic liaison for Nova<br />
|-<br />
| Neutron / Ironic || || ||<br />
|-<br />
| || Sukhdev Kapur || sukhdev || Neutron liaison for Ironic<br />
|-<br />
| || Mitsuhiro SHIGEMATSU and Jim Rollenhagen || pshige and jroll || Ironic liaison for Neutron<br />
|-<br />
| Murano / Glance || || ||<br />
|-<br />
| || Alexander Tivelkov || ativelkov || Glance liaison for Murano, Murano liaison for Glance<br />
|-<br />
| Horizon / i18n || || ||<br />
|-<br />
| || Doug Fish || doug-fish || Horizon liaison for i18n<br />
|-<br />
| Sahara / Heat || || ||<br />
|-<br />
| || Vitaly Gridnev || vgridnev || Sahara liaison for Heat<br />
|-<br />
| || TBD || || Heat liaison for Sahara<br />
|-<br />
| Sahara / Trove || || ||<br />
|-<br />
| || Ethan Gafford || egafford || Sahara liaison for Trove<br />
|-<br />
| || Amrith Kumar || amrith || Trove liaison for Sahara<br />
|-<br />
| Fuel / Puppet || || ||<br />
|-<br />
| || Alex Schultz || mwhahaha || Fuel liaison for Puppet<br />
|-<br />
| Fuel / Ironic || || ||<br />
|-<br />
| || Evgeny L || evgenyl || Fuel liaison for Ironic<br />
|-<br />
| Bareon / Ironic || || ||<br />
|-<br />
| || Evgeny L || evgenyl || Bareon liaison for Ironic<br />
|}<br />
<br />
=== Etherpads ===<br />
<br />
The following is a list of etherpads that are used for inter-project liaisons, and are continuously updated.<br />
<br />
Nova - Neutron: https://etherpad.openstack.org/p/nova-neutron<br />
<br />
== Cross-Project Spec Liaisons ==<br />
<br />
The OpenStack project relies on the cross-project spec liaisons from each participating project to help with coordination and cross-project spec related tasks. See the full set of [http://docs.openstack.org/project-team-guide/cross-project.html#cross-project-specification-liaisons responsibilities]. The liaison defaults to the PTL, but the PTL can also delegate the responsibilities to someone else on the team by updating this table:<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Astara|| Adam Gandelman || adam_g<br />
|-<br />
| Barbican || Douglas Mendizabal || redrobot<br />
|-<br />
| Cinder || Kendall Nelson || diablo_rojo<br />
|-<br />
| Congress || Tim Hinrichs || thinrichs<br />
|-<br />
| Designate || Graham Hayes || mugsie<br />
|-<br />
| Fuel || Andrew Woodward || xarses<br />
|-<br />
| Glance || Nikhil Komawar || nikhil<br />
|-<br />
| Heat || Rico Lin || ricolin<br />
|-<br />
| Horizon || David Lyle || david-lyle<br />
|-<br />
| Infrastructure || Matthew Wagoner || olaph<br />
|-<br />
| Ironic || Jim Rollenhagen || jroll<br />
|-<br />
| Keystone || Samuel de Medeiros Queiroz || samueldmq<br />
|-<br />
| Magnum || Adrian Otto || adrian_otto<br />
|-<br />
| Manila || Ben Swartzlander || bswartz<br />
|-<br />
| Mistral || Renat Akhmerov || rakhmerov<br />
|-<br />
| Murano || Serg Melikyan || smelikyan<br />
|-<br />
| Neutron || Armando Migliaccio || armax<br />
|-<br />
| Nova || Chris Dent || cdent<br />
|-<br />
| Oslo || Davanum Srinivas || dims<br />
|-<br />
| Sahara || Michael McCune or Sergey Lukjanov || elmiko or SergeyLukjanov<br />
|-<br />
| Searchlight || Travis Tripp || TravT<br />
|-<br />
| Senlin || Qiming Teng || Qiming<br />
|-<br />
| Swift || John Dickinson || notmyname<br />
|-<br />
| Telemetry || Gordon Chung || gordc<br />
|-<br />
| Trove || Amrith Kumar || amrith<br />
|-<br />
| Watcher || Susanne Balle || sballe <br />
|-<br />
| Zaqar || Fei Long Wang || flwang<br />
<br />
|}</div>Thinrichshttps://wiki.openstack.org/w/index.php?title=CrossProjectLiaisons&diff=124019CrossProjectLiaisons2016-04-15T15:55:15Z<p>Thinrichs: /* QA */</p>
<hr />
<div>Many of our cross-project teams need focused help for communicating with the other project teams. This page lists the people who have volunteered for that work.<br />
<br />
== Oslo ==<br />
<br />
There are now more projects consuming code from the Oslo incubator than we have Oslo contributors. That means we are going to need your help to make these migrations happen. We are asking for one person from each project to serve as a liaison between the project and Oslo, and to assist with integrating changes as we move code out of the incubator into libraries.<br />
<br />
* The liaison should be active in the project and familiar with the project-specific requirements for having patches accepted, but does not need to be a core reviewer or the PTL.<br />
* The liaison should be prepared to assist with writing and reviewing patches in their project as libraries are adopted, and with discussions of API changes to the libraries to make them easier to use within the project.<br />
* Liaisons should pay attention to [Oslo] tagged messages on the openstack-dev mailing list.<br />
* It is also useful for liaisons to be able to attend the Oslo team meeting ([[Meetings/Oslo]]) to participate in discussions and raise issues for real-time discussion.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Barbican || Douglas Mendizábal || redrobot<br />
|-<br />
| Ceilometer || Julien Danjou || jd__<br />
|-<br />
| Cinder || Jay Bryant || jungleboyj<br />
|-<br />
| Congress || Tim Hinrichs || thinrichs<br />
|-<br />
| Cue || Min Pae || sputnik13<br />
|-<br />
| Designate || || <br />
|-<br />
| Glance || Flavio Percoco || flaper87<br />
|-<br />
| Heat || Thomas Herve || therve<br />
|-<br />
| Horizon || Kirill Zaitsev || kzaitsev_<br />
|-<br />
| Ironic || Lin Tan || lintan<br />
|-<br />
| Keystone || Brant Knudson || bknudson<br />
|-<br />
| Manila || Thomas Bechtold || toabctl<br />
|-<br />
| Murano || Kirill Zaitsev || kzaitsev_<br />
|-<br />
| Neutron || Ihar Hrachyshka || ihrachyshka<br />
|-<br />
| Nova || ChangBo Guo <glongwave@gmail.com> || gcb<br />
|-<br />
| [[Octavia]] || Michael Johnson || johnsom<br />
|-<br />
| Sahara || Andrey Pavlov || AndreyPavlov<br />
|-<br />
| Senlin || Yanyan Hu || Yanyanhu<br />
|-<br />
<br />
| Swift || || <br />
|-<br />
| TripleO || Ben Nemec || bnemec<br />
|-<br />
| Trove || Amrith Kumar || amrith<br />
|-<br />
| Zaqar || Flavio Percoco || flaper87<br />
|-<br />
|}<br />
<br />
== Release management ==<br />
<br />
The Release Management Liaison is responsible for communication with the Release Management team, attending the weekly 1:1 syncs in #openstack-relmgr-office, keeping milestone plans up to date, and signing off milestone and release tags. That task has been [[PTL_Guide#Interactions_with_the_Release_team|traditionally filled by the PTL]], but they may now delegate this task if they wish.<br />
<br />
* By default, the liaison will be the PTL.<br />
* The Release Management Liaison is considered a contributor to the Release Cycle Management Program and therefore is allowed to vote in its PTL election.<br />
* The liaison may further delegate work to other subject matter experts<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Barbican || Douglas Mendizábal || redrobot<br />
|-<br />
| Ceilometer || gordon chung || gordc<br />
|-<br />
| Cinder || Sean McGinnis || smcginnis<br />
|-<br />
| Congress || Tim Hinrichs || thinrichs<br />
|-<br />
| Designate || Graham Hayes || mugsie<br />
|-<br />
| Glance || Flavio Percoco || flaper87<br />
|-<br />
| Heat || Sergey Kraynev || skraynev<br />
|-<br />
| Horizon || David Lyle || david-lyle<br />
|-<br />
| Ironic || Jim Rollenhagen || jroll<br />
|-<br />
| Keystone || Steve Martinelli || stevemar<br />
|-<br />
| Manila || Ben Swartzlander || bswartz<br />
|-<br />
| Mistral || Lingxian Kong || kong<br />
|-<br />
| Murano ||Kirill Zaitsev || kzaitsev_<br />
|-<br />
| Neutron || Ihar Hrachyshka || ihrachys<br />
|-<br />
| Nova || Sylvain Bauza || bauzas<br />
|-<br />
| Oslo || Davanum Srinivas || dims<br />
|-<br />
| Sahara || Sergey Lukjanov || SergeyLukjanov<br />
|-<br />
| Searchlight || Travis Tripp || TravT<br />
|-<br />
| Senlin || Qiming Teng || Qiming<br />
|-<br />
| Swift || John Dickinson || notmyname<br />
|-<br />
| Trove || Amrith Kumar || amrith<br />
|-<br />
| Zaqar || || <br />
|}<br />
<br />
== QA ==<br />
<br />
There are now more projects being tested by Tempest and Grenade, or deployable by Devstack, than we have QA contributors. That means we are going to need your help to keep on top of everything. We are asking for one person from each project to serve as a liaison between the project and QA, and to assist with integrating changes as we move forward.<br />
<br />
The liaison should be a core reviewer for the project, but does not need to be the PTL. The liaison should be prepared to assist with writing and reviewing patches that interact with their project, and with discussions of changes to the QA projects to make them easier to use within the project.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Barbican || Steve Heyman || hockeynut <br />
|-<br />
| Ceilometer || Chris Dent || cdent<br />
|-<br />
| Cinder || || <br />
|-<br />
| Congress || Tim Hinrichs || thinrichs<br />
|-<br />
| Glance || Nikhil Komawar || nikhil_k<br />
|-<br />
| Heat || Steve Baker || stevebaker<br />
|-<br />
| Horizon || Timur Sufiev || tsufiev<br />
|-<br />
| Ironic || John Villalovos || jlvillal<br />
|-<br />
| Keystone || David Stanek || dstanek<br />
|-<br />
| Manila || Valeriy Ponomaryov || vponomaryov<br />
|-<br />
| Neutron || Salvatore Orlando || salv-orlando<br />
|-<br />
| Nova || Matt Riedemann || mriedem<br />
|-<br />
| Oslo || Davanum Srinivas || dims <br />
|-<br />
| Sahara || Luigi Toscano || tosky<br />
|-<br />
| Senlin || Haiwei Xu || haiwei-xu<br />
|-<br />
| Swift || || <br />
|-<br />
| Trove || Craig Vyvial and Nirav Shah || cp16net and nshah<br />
|-<br />
| Zaqar || || <br />
|}<br />
<br />
== Documentation ==<br />
<br />
The OpenStack documentation is centralized on docs.openstack.org, but there is often a need for specialty information when reviewing patches or triaging doc bugs. A doc liaison should be available to triage doc bugs when the docs team members don't know enough to triage accurately, and be added to doc reviews that affect your project. You'd be notified through email when you're added either to a doc bug or a doc review. We would also appreciate attendance at the [https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting weekly doc team meeting]; we meet in #openstack-meeting every Wednesday at alternating times for different time zones:<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Barbican || Constanze Kratel || constanze <br />
|-<br />
| Ceilometer || Ildiko Vancsa || ildikov<br />
|-<br />
| Cinder || Sean McGinnis || smcginnis<br />
|-<br />
| Congress || || <br />
|-<br />
| Designate || || <br />
|-<br />
| Glance || Brian Rosmaita || rosmaita<br />
|-<br />
| Heat || Randall Burt || randallburt<br />
|-<br />
| Horizon || Rob Cresswell || robcresswell<br />
|-<br />
| Ironic || Mitsuhiro SHIGEMATSU || pshige<br />
|-<br />
| Keystone || Lance Bragstad || lbragstad<br />
|-<br />
| Magnum || || <br />
|-<br />
| Manila || Dustin Schoenbrun || dustins<br />
|-<br />
| Mistral || || <br />
|-<br />
| Murano || Ekaterina Chernova || katyafervent <br />
|-<br />
| Neutron || Edgar Magana || emagana <br />
|-<br />
| Nova || Joe Gordon or Michael Still || Jog0 or mikal<br />
|-<br />
| Oslo || Doug Hellmann || dhellmann<br />
|-<br />
| Rally || || <br />
|-<br />
| Sahara || Chad Roberts || crobertsrh<br />
|-<br />
| Senlin || Cindia Blue || lixinhui<br />
|-<br />
| Swift || || <br />
|-<br />
| Tripleo || || <br />
|-<br />
| Trove || Laurel Michaels || laurelm<br />
|-<br />
| Zaqar || || <br />
|}<br />
<br />
== Stable Branch ==<br />
<br />
The Stable Branch Liaison is responsible for making sure backports are proposed for critical issues in their project, and for making sure proposed backports are reviewed. They are also the contact point for stable branch release managers around point release times.<br />
<br />
* By default, the liaison will be the PTL.<br />
* The Stable Branch Liaison is considered a contributor to the Release Cycle Management Program and therefore is allowed to vote in its PTL election.<br />
* The liaison may further delegate work to other subject matter experts<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Ceilometer || Eoghan Glynn || eglynn<br />
|-<br />
| Cinder || Jay Bryant || jungleboyj<br />
|-<br />
| Glance || Erno Kuvaja || jokke_<br />
|-<br />
| Heat || Zane Bitter || zaneb<br />
|-<br />
| Horizon || Matthias Runge || mrunge <br />
|-<br />
| Ironic || Dmitry Tantsur || dtantsur <br />
|-<br />
| Keystone || Dolph Mathews || dolphm<br />
|-<br />
| Murano || Kirill Zaitsev || kzaitsev_<br />
|-<br />
| Neutron || Ihar Hrachyshka || ihrachys<br />
|-<br />
| Nova || Matt Riedemann || mriedem <br />
|-<br />
| Sahara || Sergey Lukjanov || SergeyLukjanov<br />
|-<br />
| Senlin|| Qiming Teng || Qiming<br />
|-<br />
| Swift || || <br />
|-<br />
| Trove || Amrith Kumar || amrith<br />
|-<br />
|}<br />
<br />
== Vulnerability management ==<br />
<br />
The [[Vulnerability Management]] Team needs domain specialists to help assess the impact of reported issues, coordinate the development of patches, review proposed patches, and propose backports. The liaison should be familiar with the [[Vulnerability Management]] process and embargo rules, and have a good grasp of security issues in software design.<br />
<br />
* The liaison should be a core reviewer for the project, but does not need to be the PTL.<br />
* By default, the liaison will be the PTL.<br />
* The liaison is the first line of contact for the Vulnerability Management team members<br />
* The liaison is considered a contributor to the Release Cycle Management Program and therefore is allowed to vote in its PTL election<br />
* The liaison may further delegate work to other subject matter experts<br />
* The liaison maintains the members of the $PROJECT-coresec team in Launchpad (which can be given access to embargoed vulnerabilities)<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Barbican || Douglas Mendizábal or Charles Neill || redrobot / ccneill<br />
|-<br />
| Ceilometer || Lianhao Lu or Gordon Chung || llu/gordc <br />
|-<br />
| Cinder || || <br />
|-<br />
| Glance || Stuart McLaren or Nikhil Komawar || mclaren or nikhil_k <br />
|-<br />
| Heat || Steve Hardy || shardy<br />
|-<br />
| Horizon || Lin Hua Cheng || lhcheng <br />
|-<br />
| Ironic || Jim Rollenhagen || jroll<br />
|-<br />
| Keystone || Dolph Mathews || dolphm<br />
|-<br />
| Neutron || Salvatore Orlando || salv-orlando<br />
|-<br />
| Nova || Michael Still || mikal<br />
|-<br />
| Sahara || Michael McCune or Sergey Lukjanov || elmiko or SergeyLukjanov<br />
|-<br />
| Searchlight || Travis Tripp or Steve McLellan || TravT or sjmc7<br />
|-<br />
| Senlin || Qiming Teng || Qiming<br />
|-<br />
| Swift || || <br />
|-<br />
| Trove || Craig Vyvial or Nikhil Manchanda || cp16net or SlickNik <br />
|-<br />
|}<br />
<br />
== API Working Group ==<br />
<br />
The [[API_Working_Group|API Working Group]] seeks API subject matter experts for each project to communicate plans for API updates, review API guidelines with their project's view in mind, and review the API Working Group guidelines as they are drafted. The liaison should be familiar with the project's REST API design and future planning for changes to it.<br />
<br />
The members of the [http://specs.openstack.org/openstack/api-wg/liaisons.html API Working Group Cross-Project Liaisons] are maintained in our repo. If you want to read the entire list of CPLs or add/remove yourself from the list, you'll need to update the [http://git.openstack.org/cgit/openstack/api-wg/tree/doc/source/liaisons.json liaisons.json] file. If you don't want to make the update yourself, please ask in #openstack-sdks on IRC and someone can make the change for you.<br />
<br />
== Logging Working Group ==<br />
<br />
The [[LogWorkingGroup|Log Working Group]] seeks experts for each project to assist with making the logging in projects match the new [http://specs.openstack.org/openstack/openstack-specs/specs/log-guidelines.html Logging Guidelines].<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Glance || Erno Kuvaja || jokke_<br />
|-<br />
| Oslo || Doug Hellmann || dhellmann<br />
|-<br />
| Nova || John Garbutt || johnthetubaguy<br />
|-<br />
| Murano || Nikolay Starodubtsev || Nikolay_St<br />
|-<br />
| Sahara || Andrey Pavlov || AndreyPavlov<br />
|}<br />
<br />
== Infra ==<br />
<br />
These are the project-specific groups of people that Infra will look to for ACKing changes to that project's test configuration. Changes to project-config and devstack-gate should be +1'd by these groups when they are related to their project. Note that in an emergency this may not always be possible, and Infra will ask for forgiveness, but generally we should look for these +1s.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Glance || Flavio Percoco, Nikhil Komawar|| flaper87, nikhil_k<br />
<br />
|-<br />
| Neutron || Kyle Mestery, Armando Migliaccio, Doug Wiegley|| mestery, armax, dougwig<br />
|-<br />
| Documentation || Andreas Jaeger|| AJaeger<br />
<br />
|-<br />
| Trove || Nikhil Manchanda || SlickNik<br />
<br />
|-<br />
| Sahara || Sergey Lukjanov || SergeyLukjanov<br />
<br />
|-<br />
| Fuel || Aleksandra Fedorova, Igor Belikov || bookwar, igorbelikov<br />
<br />
|-<br />
| Puppet OpenStack || Emilien Macchi || EmilienM<br />
|}<br />
<br />
== Product Working Group ==<br />
The Product Working Group consists of product managers, technologists, and operators from a diverse set of organizations. The group is working to aggregate user stories from the market-focused teams (Enterprise, Telco, etc.) and cross-project functional teams (e.g. logging, upgrades, etc.), partner with the development community on resourcing, and help gather data to generate a multi-release roadmap. Most of the user stories being tracked by this team consist of items that can span multiple releases and usually have cross-project dependencies.<br />
<br />
More information about the team can be found on the [[ProductTeam|Product WG wiki]].<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Ceilometer ||Krish Ragurham || <br />
|-<br />
| Cinder || Shamail Tahir || shamail<br />
|-<br />
| Glance || Nate Ziemann || nate_zman<br />
|-<br />
| Horizon || Carol Barrett || barrett1<br />
|-<br />
| Keystone || || <br />
|-<br />
| Kolla || Carol Barrett || barrett1<br />
|-<br />
| Magnum || Steve Gordon || sgordon<br />
|-<br />
| Manila || Pete Chadwick || <br />
|-<br />
| Neutron || Mike Cohen, Duane DeCapite || DuaneDeC7<br />
|-<br />
| Nova || Hugh Blemings || hughhalf <br />
|-<br />
| OSClient || Megan Rossetti || MeganR<br />
|-<br />
| Stable Release|| Rochelle Grober || rockig<br />
|-<br />
| Sahara || Ethan Gafford || egafford<br />
|-<br />
| Senlin || Qiming Teng || Qiming<br />
|-<br />
| Swift || Phil Williams || philipw<br />
|-<br />
| Tempest || Arkady Kanevsky || arkady_kanevsky<br />
|-<br />
|}<br />
<br />
== I18n ==<br />
The I18n team is responsible for making OpenStack ubiquitously accessible to people of all language backgrounds. The team has translators from all over the world who translate OpenStack into different languages.<br />
<br />
If you want to communicate with translators on the I18n team, send email to openstack-i18n@lists.openstack.org.<br />
<br />
* The liaison should be a core reviewer for the project and understand the i18n status of the project.<br />
* The liaison should understand the project release schedule very well.<br />
* The liaison should notify the I18n team of important moments in the project release cycle in a timely manner, for example the soft string freeze, the hard string freeze, and the cutting of RC1.<br />
* The liaison should take care of translation patches to the project and make sure the patches are successfully merged into the final release version. When a translation patch fails, the liaison should notify the I18n team.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Glance || || <br />
|-<br />
| Keystone || || <br />
|-<br />
| Neutron || Akihiro Motoki || amotoki<br />
|-<br />
| Nova || Augustina Ragwitz || auggy <br />
|-<br />
| Cinder || || <br />
|-<br />
| Swift || ||<br />
|-<br />
| Horizon || Doug Fish / Akihiro Motoki || doug-fish / amotoki<br />
|-<br />
| Telemetry || || <br />
|-<br />
| Designate || Graham Hayes || mugsie<br />
|-<br />
| Heat || || <br />
|-<br />
| Magnum || Shu Muto || shu-mutou<br />
|-<br />
| Murano || Kirill Zaitsev || kzaitsev_<br />
|-<br />
| Sahara || Chad Roberts || crobertsrh <br />
|-<br />
|}<br />
<br />
== Inter-project Liaisons ==<br />
<br />
In some cases, it is useful to have liaisons between projects. [http://lists.openstack.org/pipermail/openstack-dev/2015-April/062327.html For example, it is useful for the Nova and Neutron projects to have liaisons, because the projects have complex interactions and dependencies.] Ideally, a cross-project effort should have two members, one from each project, to facilitate communication and knowledge transfer.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Projects !! Name !! IRC Handle !! Role<br />
|-<br />
| Nova / Neutron || || ||<br />
|-<br />
| || Sean M. Collins || sc68cal || Neutron liaison for Nova<br />
|-<br />
| Nova / Glance || || ||<br />
|-<br />
| || Flavio Percoco, Mike Fedosin || flaper87, mfedosin || Glance liaison for Nova<br />
|-<br />
| || Jay Pipes || jaypipes || Nova liaison for Glance<br />
|-<br />
| Nova / Cinder || || ||<br />
|-<br />
| || Scott DAngelo || scottda || Cinder liaison for Nova<br />
|-<br />
| || Matt Riedemann || mriedem || Nova liaison for Cinder<br />
|-<br />
| Nova / Ironic || John Villalovos || jlvillal || Ironic liaison for Nova<br />
|-<br />
| || Michael Davies || mrda || Ironic liaison for Nova<br />
|-<br />
| Neutron / Ironic || || ||<br />
|-<br />
| || Sukhdev Kapur || sukhdev || Neutron liaison for Ironic<br />
|-<br />
| || Mitsuhiro SHIGEMATSU and Jim Rollenhagen || pshige and jroll || Ironic liaison for Neutron<br />
|-<br />
| Murano / Glance || || ||<br />
|-<br />
| || Alexander Tivelkov || ativelkov || Glance liaison for Murano, Murano liaison for Glance<br />
|-<br />
| Horizon / i18n || || ||<br />
|-<br />
| || Doug Fish || doug-fish || Horizon liaison for i18n<br />
|-<br />
| Sahara / Heat || || ||<br />
|-<br />
| || Vitaly Gridnev || vgridnev || Sahara liaison for Heat<br />
|-<br />
| || TBD || || Heat liaison for Sahara<br />
|-<br />
| Sahara / Trove || || ||<br />
|-<br />
| || Ethan Gafford || egafford || Sahara liaison for Trove<br />
|-<br />
| || Amrith Kumar || amrith || Trove liaison for Sahara<br />
|-<br />
| Fuel / Puppet || || ||<br />
|-<br />
| || Alex Schultz || mwhahaha || Fuel liaison for Puppet<br />
|-<br />
| Fuel / Ironic || || ||<br />
|-<br />
| || Evgeny L || evgenyl || Fuel liaison for Ironic<br />
|-<br />
| Bareon / Ironic || || ||<br />
|-<br />
| || Evgeny L || evgenyl || Bareon liaison for Ironic<br />
|}<br />
<br />
=== Etherpads ===<br />
<br />
The following is a list of etherpads that are used for inter-project liaisons, and are continuously updated.<br />
<br />
Nova - Neutron: https://etherpad.openstack.org/p/nova-neutron<br />
<br />
== Cross-Project Spec Liaisons ==<br />
<br />
The OpenStack project relies on the cross-project spec liaisons from each participating project to help with coordination and cross-project spec related tasks. See the full set of [http://docs.openstack.org/project-team-guide/cross-project.html#cross-project-specification-liaisons responsibilities]. The liaison defaults to the PTL, but the PTL can also delegate the responsibilities to someone else on the team by updating this table:<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Astara|| Adam Gandelman || adam_g<br />
|-<br />
| Barbican || Douglas Mendizabal || redrobot<br />
|-<br />
| Cinder || Kendall Nelson || diablo_rojo<br />
|-<br />
| Congress || Tim Hinrichs || thinrichs<br />
|-<br />
| Designate || Graham Hayes || mugsie<br />
|-<br />
| Fuel || Andrew Woodward || xarses<br />
|-<br />
| Glance || Nikhil Komawar || nikhil<br />
|-<br />
| Heat || Rico Lin || ricolin<br />
|-<br />
| Horizon || David Lyle || david-lyle<br />
|-<br />
| Infrastructure || Matthew Wagoner || olaph<br />
|-<br />
| Ironic || Jim Rollenhagen || jroll<br />
|-<br />
| Keystone || Samuel de Medeiros Queiroz || samueldmq<br />
|-<br />
| Magnum || Adrian Otto || adrian_otto<br />
|-<br />
| Manila || Ben Swartzlander || bswartz<br />
|-<br />
| Mistral || Renat Akhmerov || rakhmerov<br />
|-<br />
| Murano || Serg Melikyan || smelikyan<br />
|-<br />
| Neutron || Armando Migliaccio || armax<br />
|-<br />
| Nova || Chris Dent || cdent<br />
|-<br />
| Oslo || Davanum Srinivas || dims<br />
|-<br />
| Sahara || Michael McCune or Sergey Lukjanov || elmiko or SergeyLukjanov<br />
|-<br />
| Searchlight || Travis Tripp || TravT<br />
|-<br />
| Senlin || Qiming Teng || Qiming<br />
|-<br />
| Swift || John Dickinson || notmyname<br />
|-<br />
| Telemetry || Gordon Chung || gordc<br />
|-<br />
| Trove || Amrith Kumar || amrith<br />
|-<br />
| Watcher || Susanne Balle || sballe <br />
|-<br />
| Zaqar || Fei Long Wang || flwang<br />
<br />
|}</div>Thinrichshttps://wiki.openstack.org/w/index.php?title=Meetings/NovaScheduler&diff=123086Meetings/NovaScheduler2016-03-28T20:41:20Z<p>Thinrichs: /* Agenda for next meeting */</p>
<hr />
<div><br />
= Weekly Nova Scheduler team meeting =<br />
'''MEETING TIME: Mondays 14:00 UTC (#openstack-meeting-alt)'''<br />
<br />
This meeting is a weekly gathering of developers working on the Nova Scheduler subteam. We cover topics such as development focus, status, bugs, reviews, and other current topics worthy of real-time discussion.<br />
<br />
NOTE: this wiki page should be 'emptied' at the end of each meeting.<br />
<br />
== Agenda for next meeting ==<br />
<br />
Next meetings scheduled for:<br />
* April 4 2016 1400 UTC, #openstack-meeting-alt (http://www.timeanddate.com/worldclock/fixedtime.html?iso=20160404T140000)<br />
* April 11 2016 1400 UTC, #openstack-meeting-alt (http://www.timeanddate.com/worldclock/fixedtime.html?iso=20160411T140000)<br />
<br />
<br />
Here is the agenda for the next meeting:<br />
* Specs<br />
** Policy-based scheduler: https://review.openstack.org/#/c/297898/<br />
* Reviews<br />
** TBD<br />
* Open discussion<br />
** TBD<br />
<br />
== Previous meetings ==<br />
<br />
* [http://eavesdrop.openstack.org/meetings/nova_scheduler/ All other meetings are here]<br />
* [http://eavesdrop.openstack.org/meetings/nova_scheduler/2016/nova_scheduler.2016-03-28-14.00.log.html 2016-03-28]<br />
* [http://eavesdrop.openstack.org/meetings/nova_scheduler/2016/nova_scheduler.2016-03-21-14.01.log.html 2016-03-21]<br />
* [http://eavesdrop.openstack.org/meetings/nova_scheduler/2016/nova_scheduler.2016-03-14-14.04.log.html 2016-03-14]<br />
<br />
[[category: compute]]<br />
[[category: meetings]]</div>Thinrichshttps://wiki.openstack.org/w/index.php?title=Sprints/CongressMitakaSprint&diff=99571Sprints/CongressMitakaSprint2015-12-14T17:40:18Z<p>Thinrichs: Created page with "== What == This mid-cycle sprint focuses on improving the distributed systems aspects of Congress: (1) sprint to implement the first draft of the distributed architecture desi..."</p>
<hr />
<div>== What ==<br />
This mid-cycle sprint focuses on improving the distributed systems aspects of Congress: (1) implement the first draft of the distributed architecture designed during the Liberty cycle, (2) design a high-availability, high-throughput deployment of that architecture, and (3) discuss using Monasca as a robust way of injecting aggregated time-series data into Congress.<br />
<br />
== When ==<br />
January 26-28, 2016 (Tue-Thu)<br />
<br />
== Where ==<br />
Cisco Building I (eye)<br /><br />
285 W. Tasman Drive <br /><br />
San Jose, California 95134 <br /><br />
United States<br /><br />
<br />
[https://www.google.com/maps/place/Cisco+Bldg+I,+285+W+Tasman+Dr,+San+Jose,+CA+95134/@37.411964,-121.9577079,17z/data=!3m1!4b1!4m2!3m1!1s0x808fc9ac35cfff61:0x52db1ee6a9cb3898 Google Map]<br />
<br />
== Lodging ==<br />
[https://www.google.com/maps/search/hotels+near+285+W.+Tasman+Drive+San+Jose+California+95134/@37.4119627,-121.9730288,14z/data=!3m1!4b1 Google map] of hotels near Cisco campus<br />
<br />
== Registration ==<br />
Please RSVP at eventbrite: https://www.eventbrite.com/e/congress-mitaka-sprint-tickets-19999890210<br />
<br />
== Etherpad ==<br />
Feel free to contribute ideas and agenda items to the etherpad: https://etherpad.openstack.org/p/congress-mitaka-sprint</div>Thinrichshttps://wiki.openstack.org/w/index.php?title=Sprints&diff=99564Sprints2015-12-14T17:24:52Z<p>Thinrichs: /* Mitaka sprints */</p>
<hr />
<div>For the list of virtual sprints, please visit the [https://wiki.openstack.org/wiki/VirtualSprints Virtual Sprints] page.<br />
<br />
=== Mitaka sprints ===<br />
<br />
Here is a chronological list of future sprints. Please keep it ordered and move past sprints to the tables below.<br />
{| border="1" cellpadding="4" cellspacing="4"<br />
|- bgcolor=#eeeeee<br />
| Date<br />
| Location<br />
| Theme<br />
| More information at<br />
|-<br />
| January 11-13, 2016<br />
| San Antonio, TX<br />
| Barbican Midcycle Sprint<br />
| [[Sprints/BarbicanMitakaSprint]]<br />
|-<br />
| January 12-14, 2016<br />
| RTP, NC<br />
| Manila Midcycle Sprint<br />
| [[Sprints/ManilaMitakaSprint]]<br />
|-<br />
| January 12-15, 2016<br />
| San Antonio, TX [Co-located with Barbican]<br />
| Security Midcycle Sprint<br />
| [[Sprints/SecurityMitakaSprint]]<br />
|-<br />
| January 26-28, 2016<br />
| Bristol, UK<br />
| Nova Midcycle Sprint<br />
| [[Sprints/NovaMitakaSprint]]<br />
|-<br />
| January 26-28, 2016<br />
| San Jose, CA<br />
| Congress Midcycle Sprint<br />
| [[Sprints/CongressMitakaSprint]]<br />
|-<br />
| January 26-29, 2016<br />
| Raleigh, NC<br />
| Cinder Midcycle Sprint<br />
| [[Sprints/CinderMitakaSprint]]<br />
|-<br />
| January 27-29, 2016<br />
| Austin, TX<br />
| Keystone Midcycle Sprint<br />
| [[Sprints/KeystoneMitakaSprint]]<br />
|-<br />
| February 22-25, 2016<br />
| Fort Collins, CO<br />
| Infra Midcycle Sprint<br />
| [[Sprints/InfraMitakaSprint]]<br />
|}<br />
<br />
=== Liberty sprints ===<br />
==== Here is a list of the previous sprints for Liberty (all in '''2015''') ====<br />
<br />
{| border="1" cellpadding="4" cellspacing="4"<br />
|- bgcolor=#eeeeee<br />
| Date<br />
| Location<br />
| Theme<br />
| More information at<br />
|-<br />
| June 24-26, 2015<br />
| Fort Collins, CO, USA<br />
| Neutron Code Sprint<br />
| [[Neutron/LibertyCodeSprint]]<br />
|-<br />
| June 30, July 1, July 2<br />
| Tel Aviv, Israel<br />
| Neutron QoS Code Sprint<br />
| [[Neutron/LibertyCodeSprint]]<br />
|-<br />
| July 15-17, 2015<br />
| Boston University, Boston, MA, USA<br />
| Keystone Midcycle Sprint<br />
| [[Sprints/KeystoneLibertySprint]]<br />
|-<br />
| July 15-17, 2015<br />
| Seattle, WA, USA<br />
| LBaaS Midcycle Sprint<br />
| [https://etherpad.openstack.org/p/LBaaS-FWaaS-VPNaaS_Summer_Midcycle_meetup LBaaS Midcycle Meetup Etherpad]<br />
|-<br />
| July 21-23, 2015<br />
| IBM, Rochester MN, USA<br />
| Nova Midcycle Sprint<br />
| [[Sprints/NovaLibertySprint]]<br />
|-<br />
| July 21-23, 2015<br />
| HP, Fort Collins, Colorado, USA<br />
| Horizon Midcycle Sprint<br />
| [https://etherpad.openstack.org/p/horizon-liberty-midcycle Sprints/HorizonLibertySprint]<br />
|-<br />
| July 28-29, 2015<br />
| Cisco Systems, Inc., San Jose, California, USA<br />
| Kolla Midcycle Sprint<br />
| [[Sprints/KollaLibertySprint]]<br />
|-<br />
| July 28-30, 2015<br />
| Rackspace, Blacksburg, Virginia, USA<br />
| Glance Midcycle Meetup<br />
| [https://etherpad.openstack.org/p/liberty-glance-mid-cycle-meetup Liberty Glance Mid Cycle Meetup]<br />
|-<br />
| July 29-30, 2015<br />
| IBM, Austin, Texas, USA<br />
| DefCore Midcycle Meetup<br />
| [https://etherpad.openstack.org/p/DefCoreFlag.MidCycle Liberty DefCore Mid Cycle Meetup]<br />
|-<br />
| July 29-30, 2015<br />
| NetApp, RTP, NC, USA<br />
| Manila Midcycle Meetup<br />
| [https://etherpad.openstack.org/p/manila-liberty-midcycle-meetup Manila Midcycle Meetup]<br />
|-<br />
| August 4-7, 2015<br />
| HP, Fort Collins, Colorado, USA<br />
| Cinder Midcycle Sprint<br />
| [[Sprints/CinderLibertySprint]]<br />
|-<br />
| August 5-6, 2015<br />
| IBM Silicon Valley Lab, San Jose, CA, USA<br />
| Magnum Midcycle Sprint<br />
| [[Magnum/Midcycle]]<br />
|-<br />
| August 5-7, 2015<br />
| JHU Applied Physics Lab, Laurel, MD, USA<br />
| Barbican Midcycle Sprint<br />
| [[Sprints/BarbicanLibertySprint]]<br />
|-<br />
| August 6-7, 2015<br />
| VMware campus, Palo Alto, CA, USA<br />
| Congress Midcycle Sprint<br />
| [[Sprints/CongressLibertySprint]]<br />
|-<br />
| August 12 - 14, 2015<br />
| HP, Seattle, Washington, USA<br />
| Ironic Midcycle Sprint<br />
| [[Sprints/IronicLibertySprint]]<br />
|- <br />
| August 17-20, 2015<br />
| Rackspace, Austin, TX, USA<br />
| Designate Mid-cycle Meetup<br />
| [https://www.eventbrite.co.uk/e/openstack-designate-2015-summer-mid-cycle-meetup-tickets-17833181526 Eventbrite tickets]<br />
|- <br />
| August 18-19, 2015<br />
| Palo Alto, CA<br />
| Operator's Midcycle Sprint<br />
| [[Operations/Meetups]]<br />
|-<br />
| August 20-21, 2015<br />
| Cisco, San Jose, CA, USA<br />
| Product Working Group Midcycle<br />
| [[Sprints/Product_WGLibertySprint]]<br />
|- <br />
| August 26-28, 2015<br />
| HP, Sunnyvale, CA<br />
| Trove Midcycle Sprint<br />
| [[Sprints/TroveLibertySprint]]<br />
|-<br />
| September 1-4, 2015<br />
| HP, Seattle, Washington USA<br />
| Security Midcycle Sprint<br />
| [[Sprints/SecurityLibertySprint]]<br />
|-<br />
| September 14-16, 2015<br />
| HP, Fort Collins, Colorado USA<br />
| QA End of Cycle Code Sprint<br />
| [[QA/CodeSprintLibertyFortCollins]]<br />
|}<br />
<br />
=== Kilo sprints ===<br />
==== Here is a list of the previous sprints for Kilo (2014/2015) ====<br />
<br />
{| border="1" cellpadding="4" cellspacing="4"<br />
|- bgcolor=#eeeeee<br />
| Date<br />
| Location<br />
| Theme<br />
| More information at<br />
|-<br />
| December 8-10, 2014<br />
| Lehi, Utah, USA<br />
| Neutron<br />
| [[Sprints/NeutronKiloSprint]]<br />
|-<br />
| January 12 - 14, 2015<br />
| Santa Clara, CA, USA<br />
| Refstack<br />
| [https://etherpad.openstack.org/p/refstack-january-2015-midcycle Refstack Midcycle Meetup Etherpad]<br />
|-<br />
| January 19 - 21, 2015<br />
| San Antonio, TX, USA<br />
| Keystone<br />
| [[Sprints/KeystoneKiloSprint]]<br />
|-<br />
| January 19 - 22, 2015<br />
| San Jose, CA, USA<br />
| Designate<br />
| [[Sprints/DesignateKiloSprint]]<br />
|-<br />
| January 26 - 28, 2015<br />
| Palo Alto, CA, USA<br />
| Nova<br />
| [[Sprints/PaloAltoKiloSprint]]<br />
|-<br />
| January 27 - 28, 2015<br />
| Palo Alto, CA, USA<br />
| Glance<br />
| [https://etherpad.openstack.org/p/kilo-glance-mid-cycle-meetup Glance Midcycle Meetup Etherpad]<br />
|-<br />
| January 27 - 29, 2015<br />
| Austin, TX, USA<br />
| Cinder<br />
| [[Sprints/CinderKiloSprint]]<br />
|-<br />
| Feb 2 - 6, 2015<br />
| San Antonio, TX, USA<br />
| Neutron LBaaS<br />
| [https://etherpad.openstack.org/p/lbaas-kilo-meetup LBaaS Midcycle Meetup Etherpad]<br />
|-<br />
| Feb 3 - 5, 2015<br />
| Seattle, WA, USA<br />
| Trove<br />
| [[Sprints/TroveKiloSprint]]<br />
|-<br />
| Feb 3 - 5, 2015<br />
| Grenoble, France<br />
| Ironic<br />
| [[Sprints/IronicKiloSprint]]<br />
|-<br />
| Feb 11 - 13, 2015<br />
| San Francisco, CA, USA<br />
| Ironic<br />
| [[Sprints/IronicKiloSprint]]<br />
|-<br />
| Feb 16-18, 2015<br />
| Austin, TX, USA<br />
| Barbican<br />
| [[Sprints/BarbicanKiloSprint]]<br />
|-<br />
| Feb 18 - 20, 2015<br />
| Seattle, WA, USA<br />
| Deployment/TripleO<br />
| [[Sprints/DeploymentKiloSprint]], [https://etherpad.openstack.org/p/kilo-tripleo-midcycle-meetup Etherpad]<br />
|-<br />
| Feb 17-20, 2015 (Awaiting Confirmation)<br />
| San Francisco, CA, USA<br />
| OpenStack Security Group<br />
| [[Sprints/OSSGKiloSprint]]<br />
|-<br />
| Mar 2-3, 2015<br />
| San Francisco, CA, USA<br />
| Magnum (CaaS for OpenStack)<br />
| [[Magnum/Midcycle]]<br />
|-<br />
| Mar 9-10, 2015<br />
| Philadelphia, PA, USA<br />
| Operators Mid-Cycle<br />
| [[Operations/Meetups]]<br />
|-<br />
| March 25-27, 2015<br />
| New York, NY, USA<br />
| QA Code Sprint<br />
| [[QA/CodeSprintKiloNYC]]<br />
|-<br />
| April 13-15, 2015<br />
| Shanghai, PRC<br />
| Release Candidate Hackathon<br />
| [[PRC_Kilo_Hackathon|Kilo Hackathon in PRC]]<br />
|}<br />
<br />
<br />
=== Juno sprints ===<br />
==== Here is a list of the previous sprints for Juno (all in '''2014''') ====<br />
{| border="1" cellpadding="4" cellspacing="4"<br />
|- bgcolor=#eeeeee<br />
| Date<br />
| Location<br />
| Theme<br />
| More information at<br />
|-<br />
| July 2 - 4<br />
| Paris, France<br />
| Ceilometer/All projects<br />
| [[Sprints/ParisJuno2014]]<br />
|-<br />
| July 7 - 9<br />
| San Antonio, TX, USA<br />
| Barbican<br />
| TBD, [[Meetings/Barbican]]<br />
|-<br />
| July 9 - 11<br />
| Bloomington, MN, USA<br />
| Neutron<br />
| [https://etherpad.openstack.org/p/neutron-juno-mid-cycle-meeting]<br />
|-<br />
| July 9 - 11<br />
| San Antonio, TX, USA<br />
| Keystone<br />
| [http://dolphm.com/openstack-keystone-hackathon-for-juno/]<br />
|-<br />
| July 14 - 18<br />
| Darmstadt, Germany<br />
| QA & Infra<br />
| [[Qa_Infra_Meetup_2014]]<br />
|-<br />
| July 14 - 18<br />
| Seattle, USA<br />
| Security Group<br />
| [https://etherpad.openstack.org/p/ossg-juno-meetup]<br />
|-<br />
| July 21 - July 25<br />
| Raleigh, NC, USA<br />
| TripleO (& Heat)<br />
| [https://etherpad.openstack.org/p/juno-midcycle-meetup]<br />
|-<br />
| July 28 - Jul 30<br />
| Hillsboro, OR, USA<br />
| Nova & Ironic<br />
| [[Sprints/BeavertonJunoSprint]]<br />
|-<br />
| Aug 11 - Aug 15<br />
| Fort Collins, CO<br />
| Cinder<br />
| [https://etherpad.openstack.org/p/CinderMidCycleMeetupAug2014]<br />
|-<br />
| Aug 18 - Aug 20<br />
| Raleigh, NC, USA<br />
| Heat<br />
| [https://etherpad.openstack.org/p/heat-juno-midcycle-meetup]<br />
|-<br />
| Aug 20 - Aug 23<br />
| Cambridge, MA, USA<br />
| Trove<br />
| [[Trove/JunoMidCycleMeetup|Link]]<br />
|-<br />
| July 24 - July 25<br />
| Palo Alto, CA, USA<br />
| Glance<br />
| [https://etherpad.openstack.org/p/glance-juno-mid-cycle-meeting]<br />
|}</div>Thinrichshttps://wiki.openstack.org/w/index.php?title=Congress&diff=95615Congress2015-11-04T19:17:32Z<p>Thinrichs: /* Use Cases and Examples */</p>
<hr />
<div>== Policy as a service ("Congress") ==<br />
Here are some external resources with detailed information about Congress.<br />
<br />
* Code<br />
** Server<br />
*** [https://github.com/openstack/congress Server source code (github)] <br />
*** [https://launchpad.net/congress Server bugs (Launchpad)]<br />
*** [https://review.openstack.org/#/q/status:open+project:openstack/congress,n,z Outstanding server changes (Gerrit)]<br />
** Python client<br />
*** [https://github.com/openstack/python-congressclient Python client source code (github)]<br />
*** [https://launchpad.net/python-congressclient Python client bugs (Launchpad)]<br />
*** [https://review.openstack.org/#/q/status:open+project:openstack/python-congressclient,n,z Outstanding python client changes (Gerrit)]<br />
** [https://github.com/openstack/congress-specs Specs: suggestions for additional features for both server and client (github)]<br />
<br />
* Documents<br />
** [https://docs.google.com/a/vmware.com/document/d/1f2xokl9Tc47aV67KEua0PBRz4jsdSDLXth7dYe-jz6Q/edit Overall design doc]<br />
** [https://docs.google.com/document/d/1K9RkQuBSPN7Z2TmKfok7mw3E24otEGo8Pnsemxd5544/edit Data integration design doc]<br />
** [https://goo.gl/1E5MeY API design doc]<br />
** [https://drive.google.com/open?id=1ksDilJYXV-5AXWON8PLMedDKr9NpS8VbT0jIy_MIEtI&authuser=0 Delegation to VM-migration engine design doc]<br />
** [http://ruleyourcloud.com/ Blog]<br />
<br />
* Material from recent summits<br />
** [https://goo.gl/W0Rhcv Tokyo Hands On Lab instructions]<br />
** [https://goo.gl/o062Kc Tokyo Hands On Lab virtual machine]<br />
** [https://drive.google.com/open?id=0ByDz-eYOtswSSlBZVWVkUC1PZHc Tokyo Hands On Lab slides]<br />
** [https://docs.google.com/file/d/0ByDz-eYOtswScTlmamlhLXpmTXc/edit Vancouver Summit Congress Overview slides]<br />
** [https://drive.google.com/open?id=0ByDz-eYOtswScUFUc1ZrVVhmQlk&authuser=0 Vancouver Delegation for VM placement slides]<br />
<br />
*Meetings<br />
** [http://eavesdrop.openstack.org/#Congress_Team_Meeting Meeting times]<br />
** [http://eavesdrop.openstack.org/meetings/congressteammeeting/ Meeting history]<br />
<br />
<br />
If you'd like more information, join #congress on freenode, or contact Tim Hinrichs (timothy.l.hinrichs@gmail.com).<br />
<br />
=== Mission ===<br />
Congress is an OpenStack project to provide policy as a service across any collection of cloud services in order to offer governance and compliance for dynamic infrastructures.<br />
<br />
=== Why Congress===<br />
IT services will always be governed and brought into compliance with business-level policies. <br />
<br />
In the past, policy was enforced manually, e.g. by someone sending an email asking for an application to be added to the network, secured by specific firewall entries, connected to an agreed-on storage, and so on. In the cloud era, IT has become more agile: users expect immediate delivery of services, a level of responsiveness that is unattainable by the team responsible for governance. Hence, manual enforcement is no longer feasible. <br />
<br />
Both enterprises and vendors have fielded engines for enforcing policy (semi)-automatically, creating a fragmented market where enterprises reinvent the wheel while maintaining their own code, and vendors fail to meet enterprise needs, either for technical reasons or because their solutions require vertical integration and lock-in. <br />
<br />
The Congress policy service enables IT organizations to extend their OpenStack footprint by onboarding new applications while keeping the strong compliance and governance dictated by their own business policies. All of this leverages a community-driven implementation in which vendors can plug into a common interface.<br />
<br />
=== What is Congress ===<br />
Congress aims to provide an extensible open-source framework for governance and regulatory compliance across any cloud services (e.g. application, network, compute and storage) within a dynamic infrastructure. It is a cloud service whose sole responsibility is policy enforcement.<br />
<br />
Congress aims to include the following functionality:<br />
* Allow cloud administrators and tenants to use a high-level, general purpose, declarative language to describe business logic. The policy language does not include a fixed collection of policy types or built-in enforcement mechanisms; rather, a policy simply defines which states of the cloud are in compliance and which are not, where the state of the cloud is the collection of data provided by the cloud services available to Congress. Some examples:<br />
** Application A is only allowed to communicate with application B.<br />
** A virtual machine owned by tenant A should always have a public network connection if tenant A is part of group B.<br />
** Virtual machine A should never be provisioned in a different geographic region than storage B.<br />
* Offer a pluggable architecture that connects to any collection of cloud services<br />
* Enforce policy<br />
** Proactively: preventing violations before they occur<br />
** Reactively: correcting violations after they occur<br />
** Interactively: give administrators insight into policy and its violations, e.g. identifying violations, explaining their causes, computing potential remediations, simulating a sequence of changes.<br />
<br />
=== Policy Language ===<br />
The policy language for Congress is Datalog, which is essentially SQL with a syntax closer to traditional programming languages. This declarative language was chosen because its semantics are well known to a broad range of DevOps engineers, yet its syntax is more terse, making it better suited for expressing real-world policies. The core grammar is given below.<br />
<br />
<policy> ::= <rule>*<br /><br />
<rule> ::= <atom> COLONMINUS <literal> (COMMA <literal>)*<br /><br />
<literal> ::= <atom> <br /><br />
<literal> ::= NOT <atom> <br /><br />
<atom> ::= TABLENAME LPAREN <term> (COMMA <term>)* RPAREN <br /><br />
<term> ::= INTEGER | FLOAT | STRING | VARIABLE <br /><br />
<br />
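As a rough illustration of this grammar, consider a policy in the same spirit as the examples above: every network connected to a VM must either be public or owned by someone in the same group as the VM's owner. A sketch of that policy in Datalog follows; the table names (e.g. nova:virtual_machine, neutron:public_network) and their column layouts are assumptions made for illustration, since the real schemas come from the datasource drivers configured in a given deployment, and group_membership is a hypothetical helper table.<br />
<br />
error(vm) :- nova:virtual_machine(vm), nova:network(vm, net), not neutron:public_network(net), neutron:owner(net, net_owner), nova:owner(vm, vm_owner), not same_group(net_owner, vm_owner)<br /><br />
same_group(user1, user2) :- group_membership(user1, grp), group_membership(user2, grp)<br /><br />
<br />
Read declaratively, the first rule puts a row in the error table for every VM connected to a non-public network whose owner shares no group with the VM's owner; Congress would then report those rows as policy violations.<br />
<br />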
=== Use Cases and Examples===<br />
<br />
Detailed use cases can be viewed in the google doc here: [https://goo.gl/RM3W6W Congress Use Cases]. Please feel free to add additional use cases that you would like Congress to support.<br />
<br />
Different examples of Congress policies that have been implemented can be viewed here: [https://goo.gl/UfeHEk Congress Examples]<br />
<br />
=== Future Features ===<br />
Here is a list of features that would be useful in future releases. We have not prioritized or assigned any of these, so if any of you developers think they sound fun, let us know and we can explain more.<br />
<br />
* Basic Policy language implementation<br />
** Multi-threaded Datalog implementation<br />
** Bottom-up datalog evaluation<br />
** Query optimization<br />
** Materialized implementation: automated view selection/inlining<br />
<br />
* Enhanced Policy language<br />
** ActiveDirectory facade<br />
** Syntax improvements (modals like insert/delete)<br />
** Support for compound objects (safe, stratified recursion)<br />
** Richer support for describing actions in Enforcement policy<br />
** Modules<br />
<br />
* Policy structure<br />
** Multi-tenant<br />
** Multi-stakeholder (tenants, finance, operations, etc.)<br />
<br />
* Enforcement<br />
** Execution of (rich) actions<br />
** Carve out subpolicies and push to other components, e.g. Neutron<br />
** Add consultation with Congress to other OS components, e.g. Nova/Neutron<br />
** Proper remediation enumeration with Classification+Action policies<br />
** Find ways of automatically choosing the proper remediation strategy (e.g. priorities/monotonicity)<br />
** Give cloud owner way of configuring how proactive/reactive/etc. based on information from separate policies.<br />
<br />
* Libraries<br />
** Data source drivers for common OS and non-OS components<br />
** HIPAA, etc. encoding<br />
** Ontologies for different sectors, e.g. finance<br />
<br />
* Policy Analysis<br />
** Look for infinite loops through Enforcement policy (using Action policy)<br />
** Compare Access control policy and Classification policy for redundancy<br />
** Change impact analysis<br />
<br />
* Dashboard<br />
** IDE for policy (different levels: raw-Datalog, AD, checkbox-based)<br />
** List violations<br />
** Explain violations (step-by-step tracing thru policy)<br />
** Simulate state change and action execution<br />
** Enumerate remediations for a given violation<br />
<br />
* Architecture and API<br />
** Formalize and implement full introspection and query APIs<br />
** Distribute across multiple nodes<br />
** Ensure Congress can use another Congress instance as data<br />
<br />
* Authentication and Access Control<br />
** Add authentication and access controls on API/dashboard<br />
** When remediating, which user(s) are executing actions? Does Congress need admin credentials for all its cloud services or are user credentials part of actions? Need proper storage for those credentials. Etc.<br />
<br />
=== Relationship to Other Projects ===<br />
<br />
'''Related OpenStack components'''<br />
* [https://wiki.openstack.org/wiki/Keystone Keystone]: Keystone is an identity service providing authentication and high-level authorization for OpenStack. Congress can leverage Keystone as an input for policies. For example, an auditor might want to ensure that the running system is consistent with current Keystone authorization decisions.<br />
<br />
* [https://wiki.openstack.org/wiki/Heat Heat]: Heat is an orchestration service for application provisioning and lifecycle management. Congress can ensure that applications managed by Heat are consistent with business policy.<br />
<br />
* [https://wiki.openstack.org/wiki/Mistral Mistral]: Mistral is a task management service, or in other words Workflow as a service. Its primary use cases include Cron-for-the-cloud, execution of long-running tasks, and big data analysis. Congress could potentially utilize Mistral to execute actions that bring the cloud back into policy compliance.<br />
<br />
'''Policy Initiatives'''<br />
<br />
* [https://blueprints.launchpad.net/nova/+spec/solver-scheduler SolverScheduler (Nova blueprint)]: The SolverScheduler provides an interface for using different constraint solvers to solve placement problems for Nova. Depending on how it is applied, it could be used for initial provisioning, re-balancing loads, or both.<br />
<br />
* [https://github.com/openstack/gantt Gantt]: A scheduler framework for use by different OpenStack components. Used to be a [https://wiki.openstack.org/wiki/Meetings/Scheduler subgroup of Nova] and focused on scheduling VMs based on resource utilization. Includes plugin framework for making arbitrary metrics available to the scheduler.<br />
<br />
* [https://docs.google.com/document/d/1ZbOFxAoibZbJmDWx1oOrOsDcov6Cuom5aaBIrupCD9E/edit?pli=1 Neutron policy group]: This group aims to add a policy API to Neutron, where tenants express policy between groups of networks, ports, etc., and that policy is enforced. Policy statements are of the form "for every network flow between groups A and B that satisfies these conditions, apply a constraint on that flow". The constraints that can be enforced on a flow will grow as the enforcement engine matures; currently, the constraints are Allow and Deny, but there are plans for quality-of-service constraints as well. <br />
<br />
* [https://wiki.openstack.org/wiki/OpenAttestation Open Attestation]: This project provides an SDK for verifying host integrity. It provides some policy-based management capabilities, though documentation is limited.<br />
<br />
* [https://blueprints.launchpad.net/nova/+spec/policy-based-scheduler Policy-based Scheduling Module (Nova blueprint)]: This effort aims to schedule Nova resources per client, per cluster of resources and per context (e.g. overload, time, etc.). A proof of concept was presented at the Demo Theater at OpenStack Juno Summit. <br />
<br />
* [https://docs.google.com/document/d/1DMsnGxQ3P-OwZCF3uxaUeEFaKX8LqUqmmgQ_7EVK7Y8/edit Tetris]: This effort provides condition-action policies (under certain conditions, execute these actions). It is intended to be a generic condition-action engine handling complex actions and optimization. This effort subsumes the [https://blueprints.launchpad.net/nova/+spec/resource-optimization-service Runtime Policies blueprint] within Nova. It also appears to subsume the [http://openstack-neat.org/ Neat] effort. Tetris and Congress have recently decided to merge because of their highly aligned goals and approaches.<br />
<br />
* [https://review.openstack.org/#/c/95907/6/specs/convergence.rst Convergence Engine (Heat spec)]: This effort separates the ideas of desired state and observed state for the objects Heat manages. The Convergence Engine will detect when the desired state and observed state differ and take action to eliminate those differences.<br />
<br />
* [http://docs.openstack.org/developer/swift/overview_policies.html Swift Storage Policies]: A Swift storage policy describes a virtual storage system that Swift implements with physical devices. Today each policy dictates how many partitions the storage system has, how many replicas of each object it should maintain, and the minimum amount of time before a partition can be moved to a different physical location since the last time it was moved.<br />
<br />
* [https://wiki.openstack.org/wiki/Graffiti Graffiti]: Graffiti aims to store and query (hierarchical) metadata about OpenStack objects, e.g. tagging a Glance image with the software installed on that image. Currently, the team is working within other OpenStack projects to add user interfaces for people to create and query metadata and to store that metadata within the project's database. This project is about creating metadata, which could be useful for writing business policies, not about policies over that metadata.<br />
<br />
<br />
Congress is complementary to all of these efforts. It is intended to be a general-purpose policy component and hence could (potentially) express any of the policies described above. However, to enforce those policies, Congress would carve off subpolicies and send them to the appropriate enforcement point in the cloud. For example, Congress might carve off the compute Load Balancing policy and send it to the Runtime Policy engine and/or carve off the networking policy and send it to the Neutron policy engine. Part of the goal of Congress is to give administrators a single place to write and inspect the policy being enforced throughout the datacenter or cloud, distribute the relevant portions of that policy to all of the available enforcement points, and monitor the state of the cloud to let administrators know if the cloud is in compliance. (Congress also attempts to correct policy violations when they occur, but optimization policies such as many of those addressed above will not be enforceable by Congress directly.)<br />
<br />
<br />
[Thanks to Jay Lau, Gokul Kandiraju, and others on the mailing list for help compiling this list.]<br />
<br />
=== How To Propose a New Feature ===<br />
To propose a new feature:<br />
# Create a [https://blueprints.launchpad.net/congress/+addspec blueprint] describing what your feature does and how it will work. Include just the name, title, and summary.<br />
# Ask at least one core reviewer to give you feedback. If the feature is complex or controversial, the core will ask you to develop a "spec" and add it to the congress-specs repo. A spec is a proxy for a blueprint that everyone in the community can comment on and help refine. If Launchpad, the software OpenStack uses for managing blueprints and bugs, made it possible for the community to add comments to a blueprint, we would not need specs at all.<br />
# Once your blueprint is approved, you may push code that implements that blueprint up to Gerrit, including the tag 'Implements-blueprint: <blueprint-name>'<br />
<br />
If a core reviewer asks for a spec, here's how you create one.<br />
# Checkout the [https://github.com/openstack/congress-specs congress-specs repo].<br />
# Create a new file, whose name is similar to the blueprint name, in the current release folder, e.g. congress-specs/specs/kilo/your-spec-here.rst<br />
# Use reStructuredText (RST) (an [http://sphinx-doc.org/rest.html RST tutorial]) to describe your new feature by filling out the [https://github.com/openstack/congress-specs/blob/master/specs/template.rst spec template]. <br />
# Push your changes to the congress-specs repo, a process that is explained [https://wiki.openstack.org/wiki/Gerrit_Workflow here]<br />
# Go back to your blueprint and add a link (e.g. https://github.com/openstack/congress-specs/tree/master/specs/kilo/your-spec-here.rst) to your spec in the Specification Location field on the Blueprint details. <br />
# Participate in the discussion and refinement of your feature via [https://review.openstack.org/#/q/status:open+project:openstack/congress-specs,n,z Gerrit reviews]<br />
# Eventually, core reviewers will either reject or approve your spec<br />
# If the spec is approved, a core reviewer will merge it into the repo and approve your blueprint. <br />
<br />
Note: If your blueprint is either not approved or not implemented during a release cycle, it must be resubmitted for the next cycle.</div>Thinrichshttps://wiki.openstack.org/w/index.php?title=Congress&diff=95614Congress2015-11-04T19:16:17Z<p>Thinrichs: /* Policy as a service ("Congress") */</p>
<hr />
<div>== Policy as a service ("Congress") ==<br />
Here are some external resources with detailed information about Congress.<br />
<br />
* Code<br />
** Server<br />
*** [https://github.com/openstack/congress Server source code (github)] <br />
*** [https://launchpad.net/congress Server bugs (Launchpad)]<br />
*** [https://review.openstack.org/#/q/status:open+project:openstack/congress,n,z Outstanding server changes (Gerrit)]<br />
** Python client<br />
*** [https://github.com/openstack/python-congressclient Python client source code (github)]<br />
*** [https://launchpad.net/python-congressclient Python client bugs (Launchpad)]<br />
*** [https://review.openstack.org/#/q/status:open+project:openstack/python-congressclient,n,z Outstanding python client changes (Gerrit)]<br />
** [https://github.com/openstack/congress-specs Specs: suggestions for additional features for both server and client (github)]<br />
<br />
* Documents<br />
** [https://docs.google.com/a/vmware.com/document/d/1f2xokl9Tc47aV67KEua0PBRz4jsdSDLXth7dYe-jz6Q/edit Overall design doc]<br />
** [https://docs.google.com/document/d/1K9RkQuBSPN7Z2TmKfok7mw3E24otEGo8Pnsemxd5544/edit Data integration design doc]<br />
** [https://goo.gl/1E5MeY API design doc]<br />
** [https://drive.google.com/open?id=1ksDilJYXV-5AXWON8PLMedDKr9NpS8VbT0jIy_MIEtI&authuser=0 Delegation to VM-migration engine design doc]<br />
** [http://ruleyourcloud.com/ Blog]<br />
<br />
* Material from recent summits<br />
** [https://goo.gl/W0Rhcv Tokyo Hands On Lab instructions]<br />
** [https://goo.gl/o062Kc Tokyo Hands On Lab virtual machine]<br />
** [https://drive.google.com/open?id=0ByDz-eYOtswSSlBZVWVkUC1PZHc Tokyo Hands On Lab slides]<br />
** [https://docs.google.com/file/d/0ByDz-eYOtswScTlmamlhLXpmTXc/edit Vancouver Summit Congress Overview slides]<br />
** [https://drive.google.com/open?id=0ByDz-eYOtswScUFUc1ZrVVhmQlk&authuser=0 Vancouver Delegation for VM placement slides]<br />
<br />
*Meetings<br />
** [http://eavesdrop.openstack.org/#Congress_Team_Meeting Meeting times]<br />
** [http://eavesdrop.openstack.org/meetings/congressteammeeting/ Meeting history]<br />
<br />
<br />
If you'd like more information, join #congress on freenode, or contact Tim Hinrichs (timothy.l.hinrichs@gmail.com).<br />
<br />
=== Mission ===<br />
Congress is an OpenStack project to provide policy as a service across any collection of cloud services in order to offer governance and compliance for dynamic infrastructures.<br />
<br />
=== Why Congress===<br />
IT services will always be governed and brought into compliance with business-level policies. <br />
<br />
In the past, policy was enforced manually, e.g. by someone sending an email asking for an application to be added to the network, secured by specific firewall entries, connected to an agreed-on storage, and so on. In the cloud era, IT has become more agile: users expect immediate delivery of services, a level of responsiveness that is unattainable by the team responsible for governance. Hence, manual enforcement is no longer feasible. <br />
<br />
Both enterprises and vendors have fielded engines for enforcing policy (semi)-automatically, creating a fragmented market where enterprises reinvent the wheel while maintaining their own code, and vendors fail to meet enterprise needs, either for technical reasons or because their solutions require vertical integration and lock-in. <br />
<br />
The Congress policy service enables IT organizations to extend their OpenStack footprint by onboarding new applications while keeping the strong compliance and governance dictated by their own business policies. All of this leverages a community-driven implementation in which vendors can plug into a common interface.<br />
<br />
=== What is Congress ===<br />
Congress aims to provide an extensible open-source framework for governance and regulatory compliance across any cloud services (e.g. application, network, compute and storage) within a dynamic infrastructure. It is a cloud service whose sole responsibility is policy enforcement.<br />
<br />
Congress aims to include the following functionality:<br />
* Allow cloud administrators and tenants to use a high-level, general purpose, declarative language to describe business logic. The policy language does not include a fixed collection of policy types or built-in enforcement mechanisms; rather, a policy simply defines which states of the cloud are in compliance and which are not, where the state of the cloud is the collection of data provided by the cloud services available to Congress. Some examples:<br />
** Application A is only allowed to communicate with application B.<br />
** A virtual machine owned by tenant A should always have a public network connection if tenant A is part of group B.<br />
** Virtual machine A should never be provisioned in a different geographic region than storage B.<br />
* Offer a pluggable architecture that connects to any collection of cloud services<br />
* Enforce policy<br />
** Proactively: preventing violations before they occur<br />
** Reactively: correcting violations after they occur<br />
** Interactively: give administrators insight into policy and its violations, e.g. identifying violations, explaining their causes, computing potential remediations, simulating a sequence of changes.<br />
<br />
=== Policy Language ===<br />
The policy language for Congress is Datalog, which is essentially SQL with a syntax closer to traditional programming languages. This declarative language was chosen because its semantics are well known to a broad range of DevOps engineers, yet its syntax is more terse, making it better suited for expressing real-world policies. The core grammar is given below.<br />
<br />
<policy> ::= <rule>*<br /><br />
<rule> ::= <atom> COLONMINUS <literal> (COMMA <literal>)*<br /><br />
<literal> ::= <atom> <br /><br />
<literal> ::= NOT <atom> <br /><br />
<atom> ::= TABLENAME LPAREN <term> (COMMA <term>)* RPAREN <br /><br />
<term> ::= INTEGER | FLOAT | STRING | VARIABLE <br /><br />
<br />
=== Use Cases and Examples===<br />
<br />
Detailed use cases can be viewed in the google doc here: [https://goo.gl/RM3W6W Congress Use Cases]. Please feel free to add additional use cases that you would like Congress to support.<br />
<br />
Different examples of Congress policies that have been implemented can be viewed here: [https://goo.gl/UfeHEk Congress Examples]<br />
<br />
Here we give examples of the policies that each of our releases supports.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Use case !! Congress release<br />
|-<br />
| Monitor violations of: every network connected to a VM must either be public or owned by someone in the same group as the VM's owner || alpha<br />
|-<br />
| Stop a user from constructing a new VM if she owns a VM averaging less than 1% CPU-utilization (requires API gateway or Nova support) || kilo<br />
|-<br />
| Every time a user creates a Neutron security group that opens port 80, delete that security group || kilo<br />
|}<br />
<br />
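To make the table above more concrete, the third use case could start from a rule that flags every security group rule opening port 80, with Congress's reactive enforcement then deleting the offending group. The sketch below is monitoring-only and purely illustrative: the neutron:security_group_rules table name and its column order are assumptions, since the real schema is defined by the Neutron datasource driver.<br />
<br />
open_http_port(sg_id, rule_id) :- neutron:security_group_rules(sg_id, rule_id, direction, protocol, 80, port_max)<br /><br />
<br />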
=== Future Features ===<br />
Here is a list of features that would be useful in future releases. We have not prioritized or assigned any of these, so if any of you developers think they sound fun, let us know and we can explain more.<br />
<br />
* Basic Policy language implementation<br />
** Multi-threaded Datalog implementation<br />
** Bottom-up datalog evaluation<br />
** Query optimization<br />
** Materialized implementation: automated view selection/inlining<br />
<br />
* Enhanced Policy language<br />
** ActiveDirectory facade<br />
** Syntax improvements (modals like insert/delete)<br />
** Support for compound objects (safe, stratified recursion)<br />
** Richer support for describing actions in Enforcement policy<br />
** Modules<br />
<br />
* Policy structure<br />
** Multi-tenant<br />
** Multi-stakeholder (tenants, finance, operations, etc.)<br />
<br />
* Enforcement<br />
** Execution of (rich) actions<br />
** Carve out subpolicies and push to other components, e.g. Neutron<br />
** Add consultation with Congress to other OS components, e.g. Nova/Neutron<br />
** Proper remediation enumeration with Classification+Action policies<br />
** Find ways of automatically choosing the proper remediation strategy (e.g. priorities/monotonicity)<br />
** Give cloud owner way of configuring how proactive/reactive/etc. based on information from separate policies.<br />
<br />
* Libraries<br />
** Data source drivers for common OS and non-OS components<br />
** HIPAA, etc. encoding<br />
** Ontologies for different sectors, e.g. finance<br />
<br />
* Policy Analysis<br />
** Look for infinite loops through Enforcement policy (using Action policy)<br />
** Compare Access control policy and Classification policy for redundancy<br />
** Change impact analysis<br />
<br />
* Dashboard<br />
** IDE for policy (different levels: raw-Datalog, AD, checkbox-based)<br />
** List violations<br />
** Explain violations (step-by-step tracing thru policy)<br />
** Simulate state change and action execution<br />
** Enumerate remediations for a given violation<br />
<br />
* Architecture and API<br />
** Formalize and implement full introspection and query APIs<br />
** Distribute across multiple nodes<br />
** Ensure Congress can use another Congress instance as data<br />
<br />
* Authentication and Access Control<br />
** Add authentication and access controls on API/dashboard<br />
** When remediating, which user(s) are executing actions? Does Congress need admin credentials for all its cloud services or are user credentials part of actions? Need proper storage for those credentials. Etc.<br />
<br />
=== Relationship to Other Projects ===<br />
<br />
'''Related OpenStack components'''<br />
* [https://wiki.openstack.org/wiki/Keystone Keystone]: Keystone is an identity service providing authentication and high-level authorization for OpenStack. Congress can leverage Keystone as an input for policies. For example, an auditor might want to ensure that the running system is consistent with current Keystone authorization decisions.<br />
<br />
* [https://wiki.openstack.org/wiki/Heat Heat]: Heat is an orchestration service for application provisioning and lifecycle management. Congress can ensure that applications managed by Heat are consistent with business policy.<br />
<br />
* [https://wiki.openstack.org/wiki/Mistral Mistral]: Mistral is a task management service, or in other words Workflow as a service. Its primary use cases include Cron-for-the-cloud, execution of long-running tasks, and big data analysis. Congress could potentially utilize Mistral to execute actions that bring the cloud back into policy compliance.<br />
<br />
'''Policy Initiatives'''<br />
<br />
* [https://blueprints.launchpad.net/nova/+spec/solver-scheduler SolverScheduler (Nova blueprint)]: The SolverScheduler provides an interface for using different constraint solvers to solve placement problems for Nova. Depending on how it is applied, it could be used for initial provisioning, re-balancing loads, or both.<br />
<br />
* [https://github.com/openstack/gantt Gantt]: A scheduler framework for use by different OpenStack components. Used to be a [https://wiki.openstack.org/wiki/Meetings/Scheduler subgroup of Nova] and focused on scheduling VMs based on resource utilization. Includes plugin framework for making arbitrary metrics available to the scheduler.<br />
<br />
* [https://docs.google.com/document/d/1ZbOFxAoibZbJmDWx1oOrOsDcov6Cuom5aaBIrupCD9E/edit?pli=1 Neutron policy group]: This group aims to add a policy API to Neutron, where tenants express policy between groups of networks, ports, etc., and that policy is enforced. Policy statements are of the form "for every network flow between groups A and B that satisfies these conditions, apply a constraint on that flow". The constraints that can be enforced on a flow will grow as the enforcement engine matures; currently, the constraints are Allow and Deny, but there are plans for quality-of-service constraints as well. <br />
<br />
* [https://wiki.openstack.org/wiki/OpenAttestation Open Attestation]: This project provides an SDK for verifying host integrity. It provides some policy-based management capabilities, though documentation is limited.<br />
<br />
* [https://blueprints.launchpad.net/nova/+spec/policy-based-scheduler Policy-based Scheduling Module (Nova blueprint)]: This effort aims to schedule Nova resources per client, per cluster of resources and per context (e.g. overload, time, etc.). A proof of concept was presented at the Demo Theater at OpenStack Juno Summit. <br />
<br />
* [https://docs.google.com/document/d/1DMsnGxQ3P-OwZCF3uxaUeEFaKX8LqUqmmgQ_7EVK7Y8/edit Tetris]: This effort provides condition-action policies (under certain conditions, execute these actions). It is intended to be a generic condition-action engine handling complex actions and optimization. This effort subsumes the [https://blueprints.launchpad.net/nova/+spec/resource-optimization-service Runtime Policies blueprint] within Nova. It also appears to subsume the [http://openstack-neat.org/ Neat] effort. Tetris and Congress have recently decided to merge because of their highly aligned goals and approaches.<br />
<br />
* [https://review.openstack.org/#/c/95907/6/specs/convergence.rst Convergence Engine (Heat spec)]: This effort separates the ideas of desired state and observed state for the objects Heat manages. The Convergence Engine will detect when the desired state and observed state differ and take action to eliminate those differences.<br />
<br />
* [http://docs.openstack.org/developer/swift/overview_policies.html Swift Storage Policies]: A Swift storage policy describes a virtual storage system that Swift implements with physical devices. Today each policy dictates how many partitions the storage system has, how many replicas of each object it should maintain, and the minimum amount of time before a partition can be moved to a different physical location since the last time it was moved.<br />
<br />
* [https://wiki.openstack.org/wiki/Graffiti Graffiti]: Graffiti aims to store and query (hierarchical) metadata about OpenStack objects, e.g. tagging a Glance image with the software installed on that image. Currently, the team is working within other OpenStack projects to add user interfaces for people to create and query metadata and to store that metadata within the project's database. This project is about creating metadata, which could be useful for writing business policies, not about policies over that metadata.<br />
<br />
<br />
Congress is complementary to all of these efforts. It is intended to be a general-purpose policy component and hence could (potentially) express any of the policies described above. However, to enforce those policies, Congress would carve off subpolicies and send them to the appropriate enforcement point in the cloud. For example, Congress might carve off the compute Load Balancing policy and send it to the Runtime Policy engine and/or carve off the networking policy and send it to the Neutron policy engine. Part of the goal of Congress is to give administrators a single place to write and inspect the policy being enforced throughout the datacenter or cloud, distribute the relevant portions of that policy to all of the available enforcement points, and monitor the state of the cloud to let administrators know if the cloud is in compliance. (Congress also attempts to correct policy violations when they occur, but optimization policies such as many of those addressed above will not be enforceable by Congress directly.)<br />
<br />
<br />
[Thanks to Jay Lau, Gokul Kandiraju, and others on the mailing list for help compiling this list.]<br />
<br />
=== How To Propose a New Feature ===<br />
To propose a new feature:<br />
# Create a [https://blueprints.launchpad.net/congress/+addspec blueprint] describing what your feature does and how it will work. Include just the name, title, and summary.<br />
# Ask at least one core reviewer to give you feedback. If the feature is complex or controversial, the core will ask you to develop a "spec" and add it to the congress-specs repo. A spec is a proxy for a blueprint that everyone in the community can comment on and help refine. If Launchpad, the software OpenStack uses for managing blueprints and bugs, made it possible for the community to add comments to a blueprint, we would not need specs at all.<br />
# Once your blueprint is approved, you may push code that implements that blueprint up to Gerrit, including the tag 'Implements-blueprint: <blueprint-name>'<br />
<br />
If a core reviewer asks for a spec, here's how you create one.<br />
# Checkout the [https://github.com/openstack/congress-specs congress-specs repo].<br />
# Create a new file, whose name is similar to the blueprint name, in the current release folder, e.g. congress-specs/specs/kilo/your-spec-here.rst<br />
# Use reStructuredText (RST) (an [http://sphinx-doc.org/rest.html RST tutorial]) to describe your new feature by filling out the [https://github.com/openstack/congress-specs/blob/master/specs/template.rst spec template]. <br />
# Push your changes to the congress-specs repo, a process that is explained [https://wiki.openstack.org/wiki/Gerrit_Workflow here]<br />
# Go back to your blueprint and add a link (e.g. https://github.com/openstack/congress-specs/tree/master/specs/kilo/your-spec-here.rst) to your spec in the Specification Location field on the Blueprint details. <br />
# Participate in the discussion and refinement of your feature via [https://review.openstack.org/#/q/status:open+project:openstack/congress-specs,n,z Gerrit reviews]<br />
# Eventually, core reviewers will either reject or approve your spec<br />
# If the spec is approved, a core reviewer will merge it into the repo and approve your blueprint. <br />
<br />
Note: If your blueprint is either not approved or not implemented during a release cycle, it must be resubmitted for the next cycle.</div>Thinrichshttps://wiki.openstack.org/w/index.php?title=Design_Summit/Mitaka/Etherpads&diff=92554Design Summit/Mitaka/Etherpads2015-10-14T17:43:44Z<p>Thinrichs: </p>
<hr />
<div>[[Category:Summit]]<br />
[[Category:Liberty]]<br />
[[Category:Etherpad]]<br />
<br />
<div style="column-count:3;-moz-column-count:3;-webkit-column-count:3"><br />
__TOC__<br />
</div><br />
== Event intro/closure ==<br />
* Tue 11:15: Design Summit 101 [https://etherpad.openstack.org/p/mitaka-design-summit-101]<br />
* Fri 12:30: Design Summit feedback [https://etherpad.openstack.org/p/mitaka-design-summit-feedback]<br />
<br />
==Cinder==<br />
* Wed 14.00: Why fight it, Cinder could/should be the next ViPR [https://etherpad.openstack.org/p/mitaka-cinder-direction]<br />
* Wed 14.50: Availability zones in Cinder [https://etherpad.openstack.org/p/mitaka-cinder-az]<br />
* Thur 9.00: Experimental APIs and Microversions [https://etherpad.openstack.org/p/mitaka-cinder-experimental-apis]<br />
* Thur 9.50: Cinder Nova Interaction [https://etherpad.openstack.org/p/mitaka-cinder-nova-interaction]<br />
* Thur 13.50: Cinder driver interface [https://etherpad.openstack.org/p/mitaka-cinder-driver-interface]<br />
* Thur 14.40: API Microversions [https://etherpad.openstack.org/p/mitaka-cinder-api-microversions]<br />
* Thur 15.30: ABC work [https://etherpad.openstack.org/p/mitaka-cinder-abc-work]<br />
* Thur 15.30: Driver deadlines [https://etherpad.openstack.org/p/mitaka-cinder-driver-deadlines]<br />
* Thur 16.30: C-Vol Active/Active HA [https://etherpad.openstack.org/p/mitaka-cinder-cvol-aa]<br />
* Thur 17.20: Volume manager locks [https://etherpad.openstack.org/p/mitaka-cinder-volmgr-locks]<br />
* Fri: Contributor Meetup [https://etherpad.openstack.org/p/mitaka-cinder-contributor-meetup]<br />
<br />
==Congress==<br />
* Wed 2:00: [https://etherpad.openstack.org/p/congress-mitaka-arch Distributed architecture and additional features for Mitaka]<br />
* Wed 2:50: [https://etherpad.openstack.org/p/congress-mitaka-integrations Integration with other projects: congress gating (murano, nova, neutron, etc.), keystone]<br />
* Wed 3:40: [https://etherpad.openstack.org/p/congress-mitaka-external Discussions with external teams: OPNFV, Monasca]<br />
<br />
==Cross-Project workshops==<br />
<br />
All sessions are on Tuesday 2015-10-27<br />
<br />
* 11:15<br />
** Cycle themes [https://etherpad.openstack.org/p/mitaka-crossproject-themes]<br />
* 12:05<br />
** Supporting DefCore and Interoperability Testing [https://etherpad.openstack.org/p/mitaka-crossproject-defcore]<br />
** Tags today and tomorrow [https://etherpad.openstack.org/p/mitaka-crossproject-next-tags]<br />
* 14:50<br />
** Role Assignments for Service users [https://etherpad.openstack.org/p/mitaka-cross-project-role-assignment-service-user]<br />
* 15:40<br />
** Documenting the OpenStack way [https://etherpad.openstack.org/p/mitaka-crossproject-doc-the-way]<br />
* 16:40<br />
** Troubleshooting cross-project comms [https://etherpad.openstack.org/p/mitaka-crossproject-comms]<br />
* 17:30<br />
** Serving extreme use cases [https://etherpad.openstack.org/p/mitaka-crossproject-extreme-usecases]<br />
<br />
== Ceilometer ==<br />
* Wednesday, 2015-10-28<br />
** 11:15 - [https://etherpad.openstack.org/p/mitaka-telemetry-alarms alarms]<br />
** 12:05 - [https://etherpad.openstack.org/p/mitaka-telemetry-ui visualising data]<br />
** 14:50 - [https://etherpad.openstack.org/p/mitaka-telemetry-upgrades rolling upgrades]<br />
** 15:40 - [https://etherpad.openstack.org/p/mitaka-telemetry-split componentisation]<br />
<br />
* Thursday, 2015-10-29<br />
** 09:00 - [https://etherpad.openstack.org/p/mitaka-telemetry-testing functional and integration testing]<br />
** 09:50 - [https://etherpad.openstack.org/p/mitaka-telemetry-bi business intelligence]<br />
** 11:00 - [https://etherpad.openstack.org/p/mitaka-telemetry-polling refined polling]<br />
** 11:50 - [https://etherpad.openstack.org/p/mitaka-telemetry-cross-project project data ownership]<br />
** 13:50 - [https://etherpad.openstack.org/p/mitaka-telemetry-alarms event alarms]<br />
<br />
* Friday, 2015-10-30<br />
** 09:00 - [https://etherpad.openstack.org/p/mitaka-telemetry-contributors-meetup contributors meetup]<br />
<br />
== Designate ==<br />
<br />
* Wed 11:15: Roadmap https://etherpad.openstack.org/p/mitaka-designate-summit-roadmap<br />
* Wed 12:05: Alias Records https://etherpad.openstack.org/p/mitaka-designate-summit-alias<br />
* Wed 14:00: Batch API Actions https://etherpad.openstack.org/p/mitaka-designate-summit-batch-api<br />
* Wed 14:50: Embeddable Services https://etherpad.openstack.org/p/mitaka-designate-summit-embeddable-services<br />
* Wed 16:40: Incremental Zone Transfer (IXFR) https://etherpad.openstack.org/p/mitaka-designate-summit-ifxr<br />
* Fri 14:00: Contributors Meetup https://etherpad.openstack.org/p/mitaka-designate-summit-meetup<br />
<br />
== Nova ==<br />
<br />
* Wed 11:15: REST API https://etherpad.openstack.org/p/mitaka-nova-api<br />
* Wed 12:05: Upgrade https://etherpad.openstack.org/p/mitaka-nova-upgrade<br />
<br />
* Wed 14:00: Unconference https://etherpad.openstack.org/p/mitaka-nova-unconference<br />
* Wed 14:50: OS VIF lib https://etherpad.openstack.org/p/mitaka-nova-os-vif-lib<br />
* Wed 15:40: Resources and Flavors https://etherpad.openstack.org/p/mitaka-nova-resource-modeling<br />
* Wed 16:40: Resources and Flavors (continued) https://etherpad.openstack.org/p/mitaka-nova-resource-modeling<br />
* Wed 17:30: SR-IOV https://etherpad.openstack.org/p/mitaka-nova-sr-iov<br />
<br />
* Thurs 09:00: Cells v2 https://etherpad.openstack.org/p/mitaka-nova-cells<br />
* Thurs 9:50: see Cinder track<br />
* Thurs 11:00: Scheduler https://etherpad.openstack.org/p/mitaka-nova-scheduler<br />
* Thurs 11:50: see Ironic track<br />
<br />
* Thurs 13:50: Unconference https://etherpad.openstack.org/p/mitaka-nova-unconference<br />
* Thurs 14:40: Error handling https://etherpad.openstack.org/p/mitaka-nova-error-handling<br />
* Thurs 15:30: Cross Service issues: Server locking, token refresh, Instance users https://etherpad.openstack.org/p/mitaka-nova-service-users<br />
* Thurs 16:30: Mitaka Priorities https://etherpad.openstack.org/p/mitaka-nova-priorities<br />
* Thurs 17:20: Unconference https://etherpad.openstack.org/p/mitaka-nova-unconference<br />
<br />
* Fri: 09:00 and 14:00: Nova contributors meetup https://etherpad.openstack.org/p/mitaka-nova-summit-meetup<br />
<br />
== Release management ==<br />
* Thu 15:30: Mitaka process changes [https://etherpad.openstack.org/p/mitaka-relmgt-process-changes]<br />
* Thu 16:30: Work session: the Mitaka plan [https://etherpad.openstack.org/p/mitaka-relmgt-plan]<br />
<br />
== Puppet OpenStack ==<br />
https://etherpad.openstack.org/p/HND-puppet (agenda details are coming...)<br />
<br />
[add your track here]</div>Thinrichshttps://wiki.openstack.org/w/index.php?title=Stackforge_Namespace_Retirement&diff=91258Stackforge Namespace Retirement2015-09-28T15:17:15Z<p>Thinrichs: /* Inactive Projects to Retire */</p>
<hr />
<div>=== Background ===<br />
<br />
The stackforge/ git namespace is being retired, and active projects are being moved to the openstack/ namespace. See this mailing list post for [http://lists.openstack.org/pipermail/openstack-dev/2015-August/071816.html full background]. These changes are scheduled to occur on October 17, 2015.<br />
<br />
There are two lists below, one for active projects that should be moved, and a second for inactive projects that should become read-only. Please update them to add your project to the correct list.<br />
<br />
=== Active Projects to Move ===<br />
Active stackforge projects that wish to move into the openstack/ namespace should be added to this list. Projects in this list will be automatically moved by the Infrastructure team to openstack/ on October 17. Please note that no other renames can happen during this move -- projects will strictly be moved from stackforge/ to openstack/ with their existing names. <br />
<br />
* aeromancer<br />
* anvil<br />
* bansho<br />
* blazar<br />
* blazar-nova<br />
* ceilometer-powervm<br />
* ceilometer-zvm<br />
* cerberus<br />
* cerberus-dashboard<br />
* cl-openstack-client<br />
* cloud-init<br />
* cloudbase-init<br />
* clouddocs-maven-plugin<br />
* cloudkitty<br />
* cloudkitty-dashboard<br />
* cloudpulse<br />
* cognitive<br />
* compass-adapters<br />
* compass-core<br />
* compass-specs<br />
* compass-web<br />
* compute-hyperv<br />
* designate-msdnsagent<br />
* devstack-plugin-glusterfs<br />
* devstack-plugin-sheepdog<br />
* doc8<br />
* dox<br />
* drbd-devstack<br />
* driverlog<br />
* ec2-api<br />
* ec2-driver<br />
* faafo<br />
* flame<br />
* freezer<br />
* freezer-api<br />
* freezer-web-ui<br />
* fuel-agent<br />
* fuel-astute<br />
* fuel-dev-tools<br />
* fuel-devops<br />
* fuel-docs<br />
* fuel-library<br />
* fuel-main<br />
* fuel-mirror<br />
* fuel-nailgun-agent<br />
* fuel-octane<br />
* fuel-ostf<br />
* fuel-ostf-plugin<br />
* fuel-plugin-calamari<br />
* fuel-plugin-calico<br />
* fuel-plugin-ceilometer-redis<br />
* fuel-plugin-cinder-netapp<br />
* fuel-plugin-cisco-aci<br />
* fuel-plugin-contrail<br />
* fuel-plugin-dbaas-trove<br />
* fuel-plugin-detach-database<br />
* fuel-plugin-detach-keystone<br />
* fuel-plugin-detach-rabbitmq<br />
* fuel-plugin-elasticsearch-kibana<br />
* fuel-plugin-external-emc<br />
* fuel-plugin-external-glusterfs<br />
* fuel-plugin-external-zabbix<br />
* fuel-plugin-glance-nfs<br />
* fuel-plugin-ha-fencing<br />
* fuel-plugin-influxdb-grafana<br />
* fuel-plugin-ironic<br />
* fuel-plugin-ldap<br />
* fuel-plugin-lma-collector<br />
* fuel-plugin-lma-infrastructure-alerting<br />
* fuel-plugin-mellanox<br />
* fuel-plugin-midonet<br />
* fuel-plugin-neutron-fwaas<br />
* fuel-plugin-neutron-lbaas<br />
* fuel-plugin-neutron-vpnaas<br />
* fuel-plugin-nova-nfs<br />
* fuel-plugin-nsxv<br />
* fuel-plugin-opendaylight<br />
* fuel-plugin-saltstack<br />
* fuel-plugin-solidfire-cinder<br />
* fuel-plugin-swiftstack<br />
* fuel-plugin-tintri-cinder<br />
* fuel-plugin-tls<br />
* fuel-plugin-vmware-dvs<br />
* fuel-plugin-vxlan<br />
* fuel-plugin-zabbix-monitoring-emc<br />
* fuel-plugin-zabbix-monitoring-extreme-networks<br />
* fuel-plugin-zabbix-snmptrapd<br />
* fuel-plugins<br />
* fuel-qa<br />
* fuel-specs<br />
* fuel-stats<br />
* fuel-upgrade<br />
* fuel-web<br />
* gce-api<br />
* gerrit-dash-creator<br />
* git-upstream<br />
* golang-client<br />
* group-based-policy<br />
* group-based-policy-automation<br />
* group-based-policy-specs<br />
* group-based-policy-ui<br />
* intel-nfv-ci-tests<br />
* merlin<br />
* monasca-agent<br />
* monasca-api<br />
* monasca-ceilometer<br />
* monasca-common<br />
* monasca-log-api<br />
* monasca-notification<br />
* monasca-persister<br />
* monasca-statsd<br />
* monasca-thresh<br />
* monasca-ui<br />
* monasca-vagrant<br />
* monitoring-for-openstack<br />
* namos<br />
* nerd-reviewer<br />
* networking-6wind<br />
* networking-bagpipe-l2<br />
* networking-hyperv<br />
* networking-ovs-dpdk<br />
* networking-zvm<br />
* nova-docker<br />
* nova-powervm<br />
* nova-solver-scheduler<br />
* nova-zvm-virt-driver<br />
* ooi<br />
* ops-tags-team<br />
* osprofiler<br />
* ospurge<br />
* packstack<br />
* poppy<br />
* proliantutils<br />
* puppet-autossh<br />
* puppet-ceph<br />
* puppet-n1k-vsm<br />
* puppet-setproxy<br />
* puppet-surveil<br />
* python-blazarclient<br />
* python-cerberusclient<br />
* python-cloudkittyclient<br />
* python-cloudpulseclient<br />
* python-cognitiveclient<br />
* python-fuelclient<br />
* python-group-based-policy-client <br />
* python-jenkins<br />
* python-monascaclient<br />
* python-openstacksdk<br />
* python-rackclient<br />
* python-senlinclient<br />
* python-sticksclient<br />
* python-surveilclient<br />
* python-tackerclient<br />
* python-watcherclient<br />
* rack<br />
* requests-mock<br />
* sahara-ci-config<br />
* senlin<br />
* senlin-dashboard<br />
* shaker<br />
* sqlalchemy-migrate<br />
* surveil<br />
* surveil-specs<br />
* stackalytics<br />
* sticks<br />
* sticks-dashboard<br />
* swift3<br />
* swiftonfile<br />
* swift-ceph-backend<br />
* tacker<br />
* tacker-horizon<br />
* tacker-specs<br />
* tap-as-a-service<br />
* telcowg-usecases<br />
* terracotta<br />
* third-party-ci-tools<br />
* tricircle<br />
* vmtp<br />
* watcher<br />
* wsme<br />
* xenapi-os-testing<br />
* xstatic-d3<br />
* xstatic-angular-sanitize<br />
* xstatic-bootstrap-datepicker<br />
* xstatic-angular-gettext <br />
* xstatic-bootswatch <br />
* xstatic-angular-cookies <br />
* xstatic-bootstrap-scss<br />
* xstatic-angular-smart-table<br />
* xstatic-angular-fileupload<br />
* xstatic-angular-bootstrap<br />
* xstatic-angular<br />
* xstatic-angular-mock<br />
* xstatic-angular-lrdragndrop<br />
* xstatic-roboto-fontface <br />
* xstatic-jquery.quicksearch<br />
* xstatic-mdi<br />
* xstatic-rickshaw<br />
* xstatic-magic-search <br />
* xstatic-font-awesome <br />
* xstatic-hogan <br />
* xstatic-spin<br />
* xstatic-jasmine<br />
* xstatic-jquery-migrate<br />
* xstatic-jsencrypt<br />
* yaql<br />
<br />
=== Inactive Projects to Retire ===<br />
Inactive projects that should be retired should be added to this list. These projects will have a commit merged removing their content and replacing it with a message indicating the project is no longer maintained, and they will then become read-only in Gerrit.<br />
<br />
* compass-monit<br />
* congressmiddleware<br />
* fuel-plugin-availability-zones<br />
* fuel-provision<br />
* fuel-tasklib<br />
* mercador-pub<br />
* mercador-sub<br />
* MRaaS<br />
* networking-portforwarding<br />
* openstackdroid<br />
* python-mercadorclient<br />
* rubick<br />
* sahara-guestagent<br />
* libra<br />
* logaas<br />
* python-libraclient<br />
* python-rallyclient<br />
* cookbook-pacemaker<br />
* puppet-openstack_dev_env<br />
* puppet_openstack_builder<br />
* puppet-openstack-cloud<br />
* tripleo-ansible<br />
* kickstack<br />
* packstack-vagrant<br />
* haos<br />
* novaimagebuilder</div>Thinrichshttps://wiki.openstack.org/w/index.php?title=Meetings/Congress&diff=86779Meetings/Congress2015-07-27T18:16:43Z<p>Thinrichs: Removed this page, but left pointer to new version.</p>
<hr />
<div>= Congress Meeting =<br />
<br />
This page is now deprecated. Please change your bookmarks to: [http://eavesdrop.openstack.org/#Congress_Team_Meeting eavesdrop].</div>Thinrichshttps://wiki.openstack.org/w/index.php?title=Congress&diff=86778Congress2015-07-27T18:13:22Z<p>Thinrichs: /* Policy as a service ("Congress") */</p>
<hr />
<div>== Policy as a service ("Congress") ==<br />
Here are some external resources with detailed information about Congress.<br />
<br />
* Code<br />
** Server<br />
*** [https://github.com/openstack/congress Server source code (github)] <br />
*** [https://launchpad.net/congress Server bugs (Launchpad)]<br />
*** [https://review.openstack.org/#/q/status:open+project:openstack/congress,n,z Outstanding server changes (Gerrit)]<br />
** Python client<br />
*** [https://github.com/openstack/python-congressclient Python client source code (github)]<br />
*** [https://launchpad.net/python-congressclient Python client bugs (Launchpad)]<br />
*** [https://review.openstack.org/#/q/status:open+project:openstack/python-congressclient,n,z Outstanding python client changes (Gerrit)]<br />
** [https://github.com/openstack/congress-specs Specs: suggestions for additional features for both server and client (github)]<br />
<br />
* Documents<br />
** [https://docs.google.com/a/vmware.com/document/d/1f2xokl9Tc47aV67KEua0PBRz4jsdSDLXth7dYe-jz6Q/edit Overall design doc]<br />
** [https://docs.google.com/document/d/1K9RkQuBSPN7Z2TmKfok7mw3E24otEGo8Pnsemxd5544/edit Data integration design doc]<br />
** [https://goo.gl/1E5MeY API design doc]<br />
** [https://drive.google.com/open?id=1ksDilJYXV-5AXWON8PLMedDKr9NpS8VbT0jIy_MIEtI&authuser=0 Delegation to VM-migration engine design doc]<br />
** [http://ruleyourcloud.com/ Blog]<br />
<br />
* Material from last summit<br />
** [https://docs.google.com/file/d/0ByDz-eYOtswScTlmamlhLXpmTXc/edit Vancouver Summit Congress Overview slides]<br />
** [https://etherpad.openstack.org/p/congress-liberty-design-session Vancouver Summit Liberty etherpad]<br />
** [https://docs.google.com/document/d/1lXmMkUhiSZYK45POd5ungPjVR--Fs_wJHeQ6bXWwP44/pub Vancouver Hands On Lab instructions]<br />
** [https://drive.google.com/open?id=0ByDz-eYOtswScUFUc1ZrVVhmQlk&authuser=0 Delegation for VM placement slides]<br />
<br />
*Meetings<br />
** [http://eavesdrop.openstack.org/#Congress_Team_Meeting Meeting times]<br />
** [http://eavesdrop.openstack.org/meetings/congressteammeeting/ Meeting history]<br />
<br />
<br />
If you'd like more information, join #congress on freenode, or contact Tim Hinrichs (timothy.l.hinrichs@gmail.com).<br />
<br />
=== Mission ===<br />
Congress is an OpenStack project to provide policy as a service across any collection of cloud services in order to offer governance and compliance for dynamic infrastructures.<br />
<br />
=== Why Congress===<br />
IT services will always be governed and brought into compliance with business-level policies. <br />
<br />
In the past, policy was enforced manually, e.g. by someone sending an email asking for an application to be added to the network, secured by specific firewall entries, connected to an agreed-on storage, and so on. In the cloud era, IT has become more agile: users expect immediate delivery of services, a level of responsiveness that is unattainable by the team responsible for governance. Hence, manual enforcement is no longer feasible. <br />
<br />
Both enterprises and vendors have fielded engines for enforcing policy (semi)-automatically, creating a fragmented market where enterprises reinvent the wheel while maintaining their own code, and vendors fail to meet enterprise needs, either for technical reasons or because their solutions require vertical integration and lock-in. <br />
<br />
The Congress policy service enables IT organizations to extend their OpenStack footprint by onboarding new applications while keeping the strong compliance and governance dictated by their own business policies. All of this leverages a community-driven implementation into which vendors can plug through a common interface.<br />
<br />
=== What is Congress ===<br />
Congress aims to provide an extensible open-source framework for governance and regulatory compliance across any cloud services (e.g. application, network, compute and storage) within a dynamic infrastructure. It is a cloud service whose sole responsibility is policy enforcement.<br />
<br />
Congress aims to include the following functionality:<br />
* Allow cloud administrators and tenants to use a high-level, general purpose, declarative language to describe business logic. The policy language does not include a fixed collection of policy types or built-in enforcement mechanisms; rather, a policy simply defines which states of the cloud are in compliance and which are not, where the state of the cloud is the collection of data provided by the cloud services available to Congress. Some examples:<br />
** Application A is only allowed to communicate with application B.<br />
** A virtual machine owned by tenant A should always have a public network connection if tenant A is part of group B.<br />
** Virtual machine A should never be provisioned in a different geographic region than storage B.<br />
* Offer a pluggable architecture that connects to any collection of cloud services<br />
* Enforce policy<br />
** Proactively: preventing violations before they occur<br />
** Reactively: correcting violations after they occur<br />
** Interactively: give administrators insight into policy and its violations, e.g. identifying violations, explaining their causes, computing potential remediations, simulating a sequence of changes.<br />
<br />
=== Policy Language ===<br />
The policy language for Congress is Datalog, which is essentially SQL with a syntax closer to traditional programming languages. This declarative language was chosen because its semantics are well known to a broad range of DevOps practitioners, yet its syntax is terser, making it better suited for expressing real-world policies. The core grammar is given below.<br />
<br />
<policy> ::= <rule>*<br /><br />
<rule> ::= <atom> COLONMINUS <literal> (COMMA <literal>)*<br /><br />
<literal> ::= <atom> <br /><br />
<literal> ::= NOT <atom> <br /><br />
<atom> ::= TABLENAME LPAREN <term> (COMMA <term>)* RPAREN <br /><br />
<term> ::= INTEGER | FLOAT | STRING | VARIABLE <br /><br />
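<br />
As an illustrative sketch (not taken from the Congress documentation), the following rules conform to this grammar and express the "every network connected to a VM must either be public or owned by someone in the same group as the VM's owner" policy that appears in the use-case table below. The table names connected, public_network, owner, and group, along with their column layouts, are hypothetical stand-ins for tables that data source drivers would populate.<br />
<br />
 error(vm) :- connected(vm, network), not public_network(network), owner(vm, vm_owner), owner(network, net_owner), not same_group(vm_owner, net_owner)<br />
 same_group(x, y) :- group(x, g), group(y, g)<br />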
<br />
=== Use Cases and Examples===<br />
<br />
Detailed use cases can be viewed in the google doc here: [https://goo.gl/RM3W6W Congress Use Cases]. Please feel free to add additional use cases that you would like Congress to support.<br />
<br />
Different examples of Congress policies that have been implemented can be viewed here: [https://goo.gl/UfeHEk Congress Examples]<br />
<br />
Here we give an example of policies that each of our releases supports.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Use case !! Congress release<br />
|-<br />
| Monitor violations of: every network connected to a VM must either be public or owned by someone in the same group as the VM's owner || alpha<br />
|-<br />
| Stop a user from constructing a new VM if she owns a VM averaging less than 1% CPU-utilization (requires API gateway or Nova support) || kilo<br />
|-<br />
| Every time a user creates a Neutron security group that opens port 80, delete that security group || kilo<br />
|}<br />
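<br />
As a minimal sketch (not the exact policy shipped with Congress), the kilo use case above that flags security groups opening port 80 might be monitored with a rule like the following. The table name security_group_rule and its column layout are hypothetical placeholders for whatever schema the Neutron data source driver actually exposes, and the reactive step of deleting the offending group relies on Congress enforcement machinery beyond the core grammar shown in the previous section.<br />
<br />
 error(group_id) :- security_group_rule(group_id, "ingress", "tcp", 80)<br />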
<br />
=== Future Features ===<br />
Here is a list of features that would be useful in future releases. We have not prioritized or assigned any of these, so if any of you developers think they sound fun, let us know and we can explain more.<br />
<br />
* Basic Policy language implementation<br />
** Multi-threaded Datalog implementation<br />
** Bottom-up datalog evaluation<br />
** Query optimization<br />
** Materialized implementation: automated view selection/inlining<br />
<br />
* Enhanced Policy language<br />
** ActiveDirectory facade<br />
** Syntax improvements (modals like insert/delete)<br />
** Support for compound objects (safe, stratified recursion)<br />
** Richer support for describing actions in Enforcement policy<br />
** Modules<br />
<br />
* Policy structure<br />
** Multi-tenant<br />
** Multi-stakeholder (tenants, finance, operations, etc.)<br />
<br />
* Enforcement<br />
** Execution of (rich) actions<br />
** Carve out subpolicies and push to other components, e.g. Neutron<br />
** Add consultation with Congress to other OS components, e.g. Nova/Neutron<br />
** Proper remediation enumeration with Classification+Action policies<br />
** Find ways of automatically choosing the proper remediation strategy (e.g. priorities/monotonicity)<br />
** Give cloud owner way of configuring how proactive/reactive/etc. based on information from separate policies.<br />
<br />
* Libraries<br />
** Data source drivers for common OS and non-OS components<br />
** HIPAA, etc. encoding<br />
** Ontologies for different sectors, e.g. finance<br />
<br />
* Policy Analysis<br />
** Look for infinite loops through Enforcement policy (using Action policy)<br />
** Compare Access control policy and Classification policy for redundancy<br />
** Change impact analysis<br />
<br />
* Dashboard<br />
** IDE for policy (different levels: raw-Datalog, AD, checkbox-based)<br />
** List violations<br />
** Explain violations (step-by-step tracing thru policy)<br />
** Simulate state change and action execution<br />
** Enumerate remediations for a given violation<br />
<br />
* Architecture and API<br />
** Formalize and implement full introspection and query APIs<br />
** Distribute across multiple nodes<br />
** Ensure Congress can use another Congress instance as data<br />
<br />
* Authentication and Access Control<br />
** Add authentication and access controls on API/dashboard<br />
** When remediating, which user(s) are executing actions? Does Congress need admin credentials for all its cloud services or are user credentials part of actions? Need proper storage for those credentials. Etc.<br />
<br />
=== Relationship to Other Projects ===<br />
<br />
'''Related OpenStack components'''<br />
* [https://wiki.openstack.org/wiki/Keystone Keystone]: Keystone is an identity service providing authentication and high-level authorization for OpenStack. Congress can leverage Keystone as an input for policies. For example, an auditor might want to ensure that the running system is consistent with current Keystone authorization decisions.<br />
<br />
* [https://wiki.openstack.org/wiki/Heat Heat]: Heat is an orchestration service for application provisioning and lifecycle management. Congress can ensure that applications managed by Heat are consistent with business policy.<br />
<br />
* [https://wiki.openstack.org/wiki/Mistral Mistral]: Mistral is a task management service, or in other words Workflow as a service. Its primary use cases include Cron-for-the-cloud, execution of long-running tasks, and big data analysis. Congress could potentially utilize Mistral to execute actions that bring the cloud back into policy compliance.<br />
<br />
'''Policy Initiatives'''<br />
<br />
* [https://blueprints.launchpad.net/nova/+spec/solver-scheduler SolverScheduler (Nova blueprint)]: The SolverScheduler provides an interface for using different constraint solvers to solve placement problems for Nova. Depending on how it is applied, could be used for initial provisioning, re-balancing loads, or both.<br />
<br />
* [https://github.com/openstack/gantt Gantt]: A scheduler framework for use by different OpenStack components. Used to be a [https://wiki.openstack.org/wiki/Meetings/Scheduler subgroup of Nova] and focused on scheduling VMs based on resource utilization. Includes plugin framework for making arbitrary metrics available to the scheduler.<br />
<br />
* [https://docs.google.com/document/d/1ZbOFxAoibZbJmDWx1oOrOsDcov6Cuom5aaBIrupCD9E/edit?pli=1 Neutron policy group]: This group aims to add a policy API to Neutron, where tenants express policy between groups of networks, ports, etc., and that policy is enforced. Policy statements are of the form "for every network flow between groups A and B that satisfies these conditions, apply a constraint on that flow". The constraints that can be enforced on a flow will grow as the enforcement engine matures; currently, the constraints are Allow and Deny, but there are plans for quality-of-service constraints as well. <br />
<br />
* [https://wiki.openstack.org/wiki/OpenAttestation Open Attestation]: This project provides an SDK for verifying host integrity. It provides some policy-based management capabilities, though documentation is limited.<br />
<br />
* [https://blueprints.launchpad.net/nova/+spec/policy-based-scheduler Policy-based Scheduling Module (Nova blueprint)]: This effort aims to schedule Nova resources per client, per cluster of resources and per context (e.g. overload, time, etc.). A proof of concept was presented at the Demo Theater at OpenStack Juno Summit. <br />
<br />
* [https://docs.google.com/document/d/1DMsnGxQ3P-OwZCF3uxaUeEFaKX8LqUqmmgQ_7EVK7Y8/edit Tetris]: This effort provides condition-action policies (under certain conditions, execute these actions). It is intended to be a generic condition-action engine handling complex actions and optimization. This effort subsumes the [https://blueprints.launchpad.net/nova/+spec/resource-optimization-service Runtime Policies blueprint] within Nova. It also appears to subsume the [http://openstack-neat.org/ Neat] effort. Tetris and Congress have recently decided to merge because of their highly aligned goals and approaches.<br />
<br />
* [https://review.openstack.org/#/c/95907/6/specs/convergence.rst Convergence Engine (Heat spec)]: This effort separates the ideas of desired state and observed state for the objects Heat manages. The Convergence Engine will detect when the desired state and observed state differ and take action to eliminate those differences.<br />
<br />
* [http://docs.openstack.org/developer/swift/overview_policies.html Swift Storage Policies]: A Swift storage policy describes a virtual storage system that Swift implements with physical devices. Today each policy dictates how many partitions the storage system has, how many replicas of each object it should maintain, and the minimum amount of time before a partition can be moved to a different physical location since the last time it was moved.<br />
<br />
* [https://wiki.openstack.org/wiki/Graffiti Graffiti]: Graffiti aims to store and query (hierarchical) metadata about OpenStack objects, e.g. tagging a Glance image with the software installed on that image. Currently, the team is working within other OpenStack projects to add user interfaces for people to create and query metadata and to store that metadata within the project's database. This project is about creating metadata, which could be useful for writing business policies, not about policies over that metadata.<br />
<br />
<br />
Congress is complementary to all of these efforts. It is intended to be a general-purpose policy component and hence could (potentially) express any of the policies described above. However, to enforce those policies Congress would carve off subpolicies and send them to the appropriate enforcement point in the cloud. For example, Congress might carve off the compute Load Balancing policy and send it to the Runtime Policy engine and/or carve off the networking policy and send it to the Neutron policy engine. Part of the goal of Congress is to give administrators a single place to write and inspect the policy being enforced throughout the datacenter or cloud, distribute the relevant portions of that policy to all of the available enforcement points, and monitor the state of the cloud to let administrators know if the cloud is in compliance. (Congress also attempts to correct policy violations when they occur, but optimization policies such as many of those addressed above will not be enforceable by Congress directly.)<br />
<br />
<br />
[Thanks to Jay Lau, Gokul Kandiraju, and others on the mailing list for help compiling this list.]<br />
<br />
=== How To Propose a New Feature ===<br />
To propose a new feature, you <br />
# Create a [https://blueprints.launchpad.net/congress/+addspec blueprint] describing what your feature does and how it will work. Include just the name, title, and summary.<br />
# Ask at least one core reviewer to give you feedback. If the feature is complex or controversial, the core will ask you to develop a "spec" and add it to the congress-specs repo. A spec is a proxy for a blueprint that everyone in the community can comment on and help refine. If Launchpad, the software OpenStack uses for managing blueprints and bugs, made it possible for the community to add comments to a blueprint, we would not need specs at all.<br />
# Once your blueprint is approved, you may push code that implements that blueprint up to Gerrit, including the tag 'Implements-blueprint: <blueprint-name>'<br />
<br />
If a core reviewer asks for a spec, here's how you create one.<br />
# Check out the [https://github.com/openstack/congress-specs congress-specs repo].<br />
# Create a new file, whose name is similar to the blueprint name, in the current release folder, e.g. congress-specs/specs/kilo/your-spec-here.rst<br />
# Use reStructuredText (RST) (an [http://sphinx-doc.org/rest.html RST tutorial]) to describe your new feature by filling out the [https://github.com/openstack/congress-specs/blob/master/specs/template.rst spec template]. <br />
# Push your changes to the congress-specs repo, a process that is explained [https://wiki.openstack.org/wiki/Gerrit_Workflow here]<br />
# Go back to your blueprint and add a link (e.g. https://github.com/openstack/congress-specs/tree/master/specs/kilo/your-spec-here.rst) to your spec in the Specification Location field on the Blueprint details. <br />
# Participate in the discussion and refinement of your feature via [https://review.openstack.org/#/q/status:open+project:openstack/congress-specs,n,z Gerrit reviews]<br />
# Eventually, core reviewers will either reject or approve your spec<br />
# If the spec is approved, a core reviewer will merge it into the repo and approve your blueprint. <br />
<br />
Note: If your blueprint is either not approved or not implemented during a release cycle, it must be resubmitted for the next cycle.</div>Thinrichshttps://wiki.openstack.org/w/index.php?title=Congress&diff=85315Congress2015-07-07T14:08:44Z<p>Thinrichs: /* Policy as a service ("Congress") */</p>
<hr />
<div>== Policy as a service ("Congress") ==<br />
Here are some external resources with detailed information about Congress.<br />
<br />
* Code<br />
** Server<br />
*** [https://github.com/openstack/congress Server source code (github)] <br />
*** [https://launchpad.net/congress Server bugs (Launchpad)]<br />
*** [https://review.openstack.org/#/q/status:open+project:openstack/congress,n,z Outstanding server changes (Gerrit)]<br />
** Python client<br />
*** [https://github.com/openstack/python-congressclient Python client source code (github)]<br />
*** [https://launchpad.net/python-congressclient Python client bugs (Launchpad)]<br />
*** [https://review.openstack.org/#/q/status:open+project:openstack/python-congressclient,n,z Outstanding python client changes (Gerrit)]<br />
** [https://github.com/openstack/congress-specs Specs: suggestions for additional features for both server and client (github)]<br />
<br />
* Documents<br />
** [https://docs.google.com/a/vmware.com/document/d/1f2xokl9Tc47aV67KEua0PBRz4jsdSDLXth7dYe-jz6Q/edit Overall design doc]<br />
** [https://docs.google.com/document/d/1K9RkQuBSPN7Z2TmKfok7mw3E24otEGo8Pnsemxd5544/edit Data integration design doc]<br />
** [https://goo.gl/1E5MeY API design doc]<br />
** [https://drive.google.com/open?id=1ksDilJYXV-5AXWON8PLMedDKr9NpS8VbT0jIy_MIEtI&authuser=0 Delegation to VM-migration engine design doc]<br />
** [http://ruleyourcloud.com/ Blog]<br />
<br />
* Material from last summit<br />
** [https://docs.google.com/file/d/0ByDz-eYOtswScTlmamlhLXpmTXc/edit Vancouver Summit Congress Overview slides]<br />
** [https://etherpad.openstack.org/p/congress-liberty-design-session Vancouver Summit Liberty etherpad]<br />
** [https://docs.google.com/document/d/1lXmMkUhiSZYK45POd5ungPjVR--Fs_wJHeQ6bXWwP44/pub Vancouver Hands On Lab instructions]<br />
** [https://drive.google.com/open?id=0ByDz-eYOtswScUFUc1ZrVVhmQlk&authuser=0 Delegation for VM placement slides]<br />
<br />
*Meetings<br />
** [https://wiki.openstack.org/wiki/Meetings/Congress Meeting times]<br />
** [http://eavesdrop.openstack.org/meetings/congressteammeeting/ Meeting history]<br />
<br />
<br />
If you'd like more information, join #congress on freenode, or contact Tim Hinrichs (timothy.l.hinrichs@gmail.com).<br />
<br />
=== Mission ===<br />
Congress is an OpenStack project to provide policy as a service across any collection of cloud services in order to offer governance and compliance for dynamic infrastructures.<br />
<br />
=== Why Congress===<br />
IT services will always be governed and brought into compliance with business-level policies. <br />
<br />
In the past, policy was enforced manually, e.g. by someone sending an email asking for an application to be added to the network, secured by specific firewall entries, connected to an agreed-on storage, and so on. In the cloud era, IT has become more agile: users expect immediate delivery of services, a level of responsiveness that is unattainable by the team responsible for governance. Hence, manual enforcement is no longer feasible. <br />
<br />
Both enterprises and vendors have fielded engines for enforcing policy (semi)-automatically, creating a fragmented market where enterprises reinvent the wheel while maintaining their own code, and vendors fail to meet enterprise needs, either for technical reasons or because their solutions require vertical integration and lock-in. <br />
<br />
The Congress policy service enables IT organizations to extend their OpenStack footprint by onboarding new applications while keeping the strong compliance and governance dictated by their own business policies. All of this leverages a community-driven implementation into which vendors can plug through a common interface.<br />
<br />
=== What is Congress ===<br />
Congress aims to provide an extensible open-source framework for governance and regulatory compliance across any cloud services (e.g. application, network, compute and storage) within a dynamic infrastructure. It is a cloud service whose sole responsibility is policy enforcement.<br />
<br />
Congress aims to include the following functionality:<br />
* Allow cloud administrators and tenants to use a high-level, general purpose, declarative language to describe business logic. The policy language does not include a fixed collection of policy types or built-in enforcement mechanisms; rather, a policy simply defines which states of the cloud are in compliance and which are not, where the state of the cloud is the collection of data provided by the cloud services available to Congress. Some examples:<br />
** Application A is only allowed to communicate with application B.<br />
** A virtual machine owned by tenant A should always have a public network connection if tenant A is part of group B.<br />
** Virtual machine A should never be provisioned in a different geographic region than storage B.<br />
* Offer a pluggable architecture that connects to any collection of cloud services<br />
* Enforce policy<br />
** Proactively: preventing violations before they occur<br />
** Reactively: correcting violations after they occur<br />
** Interactively: give administrators insight into policy and its violations, e.g. identifying violations, explaining their causes, computing potential remediations, simulating a sequence of changes.<br />
<br />
=== Policy Language ===<br />
The policy language for Congress is Datalog, which is essentially SQL with a syntax closer to traditional programming languages. This declarative language was chosen because its semantics are well known to a broad range of DevOps practitioners, yet its syntax is terser, making it better suited for expressing real-world policies. The core grammar is given below.<br />
<br />
<policy> ::= <rule>*<br /><br />
<rule> ::= <atom> COLONMINUS <literal> (COMMA <literal>)*<br /><br />
<literal> ::= <atom> <br /><br />
<literal> ::= NOT <atom> <br /><br />
<atom> ::= TABLENAME LPAREN <term> (COMMA <term>)* RPAREN <br /><br />
<term> ::= INTEGER | FLOAT | STRING | VARIABLE <br /><br />
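<br />
For concreteness, here is an illustrative rule that conforms to this grammar and sketches the "a virtual machine should never be provisioned in a different geographic region than its storage" example from the previous section. The table names attached, server_region, volume_region, and same_region are hypothetical stand-ins for tables that data source drivers would populate.<br />
<br />
 error(vm, volume) :- attached(volume, vm), server_region(vm, r1), volume_region(volume, r2), not same_region(r1, r2)<br />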
<br />
=== Use Cases and Examples===<br />
<br />
Detailed use cases can be viewed in the google doc here: [https://goo.gl/RM3W6W Congress Use Cases]. Please feel free to add additional use cases that you would like Congress to support.<br />
<br />
Different examples of Congress policies that have been implemented can be viewed here: [https://goo.gl/UfeHEk Congress Examples]<br />
<br />
Here we give an example of policies that each of our releases supports.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Use case !! Congress release<br />
|-<br />
| Monitor violations of: every network connected to a VM must either be public or owned by someone in the same group as the VM's owner || alpha<br />
|-<br />
| Stop a user from constructing a new VM if she owns a VM averaging less than 1% CPU-utilization (requires API gateway or Nova support) || kilo<br />
|-<br />
| Every time a user creates a Neutron security group that opens port 80, delete that security group || kilo<br />
|}<br />
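<br />
As a rough illustration (not an actual Congress policy), the kilo use case above about low CPU utilization could be monitored with a rule like the following, where pending_vm_request, owns, and low_cpu_utilization are hypothetical tables; in practice the utilization data would come from a telemetry data source, and, as the table notes, actually blocking the request would additionally require API gateway or Nova support.<br />
<br />
 error(user) :- pending_vm_request(user), owns(user, vm), low_cpu_utilization(vm)<br />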
<br />
=== Future Features ===<br />
Here is a list of features that would be useful in future releases. We have not prioritized or assigned any of these, so if any of you developers think they sound fun, let us know and we can explain more.<br />
<br />
* Basic Policy language implementation<br />
** Multi-threaded Datalog implementation<br />
** Bottom-up datalog evaluation<br />
** Query optimization<br />
** Materialized implementation: automated view selection/inlining<br />
<br />
* Enhanced Policy language<br />
** ActiveDirectory facade<br />
** Syntax improvements (modals like insert/delete)<br />
** Support for compound objects (safe, stratified recursion)<br />
** Richer support for describing actions in Enforcement policy<br />
** Modules<br />
<br />
* Policy structure<br />
** Multi-tenant<br />
** Multi-stakeholder (tenants, finance, operations, etc.)<br />
<br />
* Enforcement<br />
** Execution of (rich) actions<br />
** Carve out subpolicies and push to other components, e.g. Neutron<br />
** Add consultation with Congress to other OS components, e.g. Nova/Neutron<br />
** Proper remediation enumeration with Classification+Action policies<br />
** Find ways of automatically choosing the proper remediation strategy (e.g. priorities/monotonicity)<br />
** Give cloud owner way of configuring how proactive/reactive/etc. based on information from separate policies.<br />
<br />
* Libraries<br />
** Data source drivers for common OS and non-OS components<br />
** HIPAA, etc. encoding<br />
** Ontologies for different sectors, e.g. finance<br />
<br />
* Policy Analysis<br />
** Look for infinite loops through Enforcement policy (using Action policy)<br />
** Compare Access control policy and Classification policy for redundancy<br />
** Change impact analysis<br />
<br />
* Dashboard<br />
** IDE for policy (different levels: raw-Datalog, AD, checkbox-based)<br />
** List violations<br />
** Explain violations (step-by-step tracing thru policy)<br />
** Simulate state change and action execution<br />
** Enumerate remediations for a given violation<br />
<br />
* Architecture and API<br />
** Formalize and implement full introspection and query APIs<br />
** Distribute across multiple nodes<br />
** Ensure Congress can use another Congress instance as data<br />
<br />
* Authentication and Access Control<br />
** Add authentication and access controls on API/dashboard<br />
** When remediating, which user(s) are executing actions? Does Congress need admin credentials for all its cloud services or are user credentials part of actions? Need proper storage for those credentials. Etc.<br />
<br />
=== Relationship to Other Projects ===<br />
<br />
'''Related OpenStack components'''<br />
* [https://wiki.openstack.org/wiki/Keystone Keystone]: Keystone is an identity service providing authentication and high-level authorization for OpenStack. Congress can leverage Keystone as an input for policies. For example, an auditor might want to ensure that the running system is consistent with current Keystone authorization decisions.<br />
<br />
* [https://wiki.openstack.org/wiki/Heat Heat]: Heat is an orchestration service for application provisioning and lifecycle management. Congress can ensure that applications managed by Heat are consistent with business policy.<br />
<br />
* [https://wiki.openstack.org/wiki/Mistral Mistral]: Mistral is a task management service, or in other words Workflow as a service. Its primary use cases include Cron-for-the-cloud, execution of long-running tasks, and big data analysis. Congress could potentially utilize Mistral to execute actions that bring the cloud back into policy compliance.<br />
<br />
'''Policy Initiatives'''<br />
<br />
* [https://blueprints.launchpad.net/nova/+spec/solver-scheduler SolverScheduler (Nova blueprint)]: The SolverScheduler provides an interface for using different constraint solvers to solve placement problems for Nova. Depending on how it is applied, could be used for initial provisioning, re-balancing loads, or both.<br />
<br />
* [https://github.com/openstack/gantt Gantt]: A scheduler framework for use by different OpenStack components. Used to be a [https://wiki.openstack.org/wiki/Meetings/Scheduler subgroup of Nova] and focused on scheduling VMs based on resource utilization. Includes plugin framework for making arbitrary metrics available to the scheduler.<br />
<br />
* [https://docs.google.com/document/d/1ZbOFxAoibZbJmDWx1oOrOsDcov6Cuom5aaBIrupCD9E/edit?pli=1 Neutron policy group]: This group aims to add a policy API to Neutron, where tenants express policy between groups of networks, ports, etc., and that policy is enforced. Policy statements are of the form "for every network flow between groups A and B that satisfies these conditions, apply a constraint on that flow". The constraints that can be enforced on a flow will grow as the enforcement engine matures; currently, the constraints are Allow and Deny, but there are plans for quality-of-service constraints as well. <br />
<br />
* [https://wiki.openstack.org/wiki/OpenAttestation Open Attestation]: This project provides an SDK for verifying host integrity. It provides some policy-based management capabilities, though documentation is limited.<br />
<br />
* [https://blueprints.launchpad.net/nova/+spec/policy-based-scheduler Policy-based Scheduling Module (Nova blueprint)]: This effort aims to schedule Nova resources per client, per cluster of resources and per context (e.g. overload, time, etc.). A proof of concept was presented at the Demo Theater at OpenStack Juno Summit. <br />
<br />
* [https://docs.google.com/document/d/1DMsnGxQ3P-OwZCF3uxaUeEFaKX8LqUqmmgQ_7EVK7Y8/edit Tetris]: This effort provides condition-action policies (under certain conditions, execute these actions). It is intended to be a generic condition-action engine handling complex actions and optimization. This effort subsumes the [https://blueprints.launchpad.net/nova/+spec/resource-optimization-service Runtime Policies blueprint] within Nova. It also appears to subsume the [http://openstack-neat.org/ Neat] effort. Tetris and Congress have recently decided to merge because of their highly aligned goals and approaches.<br />
<br />
* [https://review.openstack.org/#/c/95907/6/specs/convergence.rst Convergence Engine (Heat spec)]: This effort separates the ideas of desired state and observed state for the objects Heat manages. The Convergence Engine will detect when the desired state and observed state differ and take action to eliminate those differences.<br />
<br />
* [http://docs.openstack.org/developer/swift/overview_policies.html Swift Storage Policies]: A Swift storage policy describes a virtual storage system that Swift implements with physical devices. Today each policy dictates how many partitions the storage system has, how many replicas of each object it should maintain, and the minimum amount of time before a partition can be moved to a different physical location since the last time it was moved.<br />
<br />
* [https://wiki.openstack.org/wiki/Graffiti Graffiti]: Graffiti aims to store and query (hierarchical) metadata about OpenStack objects, e.g. tagging a Glance image with the software installed on that image. Currently, the team is working within other OpenStack projects to add user interfaces for people to create and query metadata and to store that metadata within the project's database. This project is about creating metadata, which could be useful for writing business policies, not about policies over that metadata.<br />
<br />
<br />
Congress is complementary to all of these efforts. It is intended to be a general-purpose policy component and hence could (potentially) express any of the policies described above. However, to enforce those policies Congress would carve off subpolicies and send them to the appropriate enforcement point in the cloud. For example, Congress might carve off the compute Load Balancing policy and send it to the Runtime Policy engine and/or carve off the networking policy and send it to the Neutron policy engine. Part of the goal of Congress is to give administrators a single place to write and inspect the policy being enforced throughout the datacenter or cloud, distribute the relevant portions of that policy to all of the available enforcement points, and monitor the state of the cloud to let administrators know if the cloud is in compliance. (Congress also attempts to correct policy violations when they occur, but optimization policies such as many of those addressed above will not be enforceable by Congress directly.)<br />
<br />
<br />
[Thanks to Jay Lau, Gokul Kandiraju, and others on the mailing list for help compiling this list.]<br />
<br />
=== How To Propose a New Feature ===<br />
To propose a new feature, you <br />
# Create a [https://blueprints.launchpad.net/congress/+addspec blueprint] describing what your feature does and how it will work. Include just the name, title, and summary.<br />
# Ask at least one core reviewer to give you feedback. If the feature is complex or controversial, the core will ask you to develop a "spec" and add it to the congress-specs repo. A spec is a proxy for a blueprint that everyone in the community can comment on and help refine. If Launchpad, the software OpenStack uses for managing blueprints and bugs, made it possible for the community to add comments to a blueprint, we would not need specs at all.<br />
# Once your blueprint is approved, you may push code that implements that blueprint up to Gerrit, including the tag 'Implements-blueprint: <blueprint-name>'<br />
<br />
If a core reviewer asks for a spec, here's how you create one.<br />
# Check out the [https://github.com/openstack/congress-specs congress-specs repo].<br />
# Create a new file, whose name is similar to the blueprint name, in the current release folder, e.g. congress-specs/specs/kilo/your-spec-here.rst<br />
# Use reStructuredText (RST) (an [http://sphinx-doc.org/rest.html RST tutorial]) to describe your new feature by filling out the [https://github.com/openstack/congress-specs/blob/master/specs/template.rst spec template]. <br />
# Push your changes to the congress-specs repo, a process that is explained [https://wiki.openstack.org/wiki/Gerrit_Workflow here]<br />
# Go back to your blueprint and add a link (e.g. https://github.com/openstack/congress-specs/tree/master/specs/kilo/your-spec-here.rst) to your spec in the Specification Location field on the Blueprint details. <br />
# Participate in the discussion and refinement of your feature via [https://review.openstack.org/#/q/status:open+project:openstack/congress-specs,n,z Gerrit reviews]<br />
# Eventually, core reviewers will either reject or approve your spec<br />
# If the spec is approved, a core reviewer will merge it into the repo and approve your blueprint. <br />
<br />
Note: If your blueprint is either not approved or not implemented during a release cycle, it must be resubmitted for the next cycle.</div>Thinrichshttps://wiki.openstack.org/w/index.php?title=Sprints/CongressLibertySprint&diff=85163Sprints/CongressLibertySprint2015-07-06T14:14:21Z<p>Thinrichs: /* Where */</p>
<hr />
<div>== What ==<br />
The Congress message bus is what allows the policy engine to communicate with the wrappers around external services like Nova and Neutron. This mid-cycle sprint is focused on enabling the Congress message bus to span multiple processes and multiple hosts. This cross-process, cross-host message bus is the platform we'll use to build version 2.0 of our distributed architecture.<br />
<br />
== When ==<br />
August 6-7, 2015 (Thu-Fri)<br />
<br />
== Where ==<br />
VMware campus<br /><br />
3401 Hillview Ave<br /><br />
Palo Alto, CA, USA<br />
<br />
[https://www.google.com/maps/place/3401+Hillview+Ave,+Palo+Alto,+CA+94304/@37.4003365,-122.1444536,17z/data=!4m2!3m1!1s0x808fba9c4532d1ef:0xf76ebe4b63463bbc Google map]<br />
<br />
== Lodging ==<br />
[https://www.google.com/maps/search/hotels+near+3401+Hillview+Avenue,+Palo+Alto,+CA/@37.4091639,-122.1342971,15z Google map] of hotels near VMware campus<br />
<br />
== Registration ==<br />
Please RSVP at eventbrite: https://www.eventbrite.com/e/congress-liberty-midcycle-sprint-tickets-17654731778<br />
<br />
== Etherpad ==<br />
Feel free to contribute ideas and agenda items to the etherpad: https://etherpad.openstack.org/p/congress-liberty-sprint</div>Thinrichshttps://wiki.openstack.org/w/index.php?title=Sprints/CongressLibertySprint&diff=85162Sprints/CongressLibertySprint2015-07-06T14:12:46Z<p>Thinrichs: /* What */</p>
<hr />
<div>== What ==<br />
The Congress message bus is what allows the policy engine to communicate with the wrappers around external services like Nova and Neutron. This mid-cycle sprint is focused on enabling the Congress message bus to span multiple processes and multiple hosts. This cross-process, cross-host message bus is the platform we'll use to build version 2.0 of our distributed architecture.<br />
<br />
== When ==<br />
August 6-7, 2015 (Thu-Fri)<br />
<br />
== Where ==<br />
VMware campus<br />
<br />
3401 Hillview Ave, Palo Alto, CA, USA<br />
<br />
[https://www.google.com/maps/place/3401+Hillview+Ave,+Palo+Alto,+CA+94304/@37.4003365,-122.1444536,17z/data=!4m2!3m1!1s0x808fba9c4532d1ef:0xf76ebe4b63463bbc Google map]<br />
<br />
== Lodging ==<br />
[https://www.google.com/maps/search/hotels+near+3401+Hillview+Avenue,+Palo+Alto,+CA/@37.4091639,-122.1342971,15z Google map] of hotels near VMware campus<br />
<br />
== Registration ==<br />
Please RSVP at eventbrite: https://www.eventbrite.com/e/congress-liberty-midcycle-sprint-tickets-17654731778<br />
<br />
== Etherpad ==<br />
Feel free to contribute ideas and agenda items to the etherpad: https://etherpad.openstack.org/p/congress-liberty-sprint</div>Thinrichshttps://wiki.openstack.org/w/index.php?title=Sprints/CongressLibertySprint&diff=85160Sprints/CongressLibertySprint2015-07-06T14:06:57Z<p>Thinrichs: /* Where */</p>
<hr />
<div>== What ==<br />
This mid-cycle sprint is focused on enabling the Congress message bus to span multiple processes and multiple hosts. The message bus is what allows the Congress policy engine to communicate with the Congress wrappers around external services like Nova and Neutron. This cross-process, cross-host message bus is the platform we'll use to build version 2.0 of our distributed architecture.<br />
<br />
== When ==<br />
August 6-7, 2015 (Thu-Fri)<br />
<br />
== Where ==<br />
VMware campus<br />
<br />
3401 Hillview Ave, Palo Alto, CA, USA<br />
<br />
[https://www.google.com/maps/place/3401+Hillview+Ave,+Palo+Alto,+CA+94304/@37.4003365,-122.1444536,17z/data=!4m2!3m1!1s0x808fba9c4532d1ef:0xf76ebe4b63463bbc Google map]<br />
<br />
== Lodging ==<br />
[https://www.google.com/maps/search/hotels+near+3401+Hillview+Avenue,+Palo+Alto,+CA/@37.4091639,-122.1342971,15z Google map] of hotels near VMware campus<br />
<br />
== Registration ==<br />
Please RSVP at eventbrite: https://www.eventbrite.com/e/congress-liberty-midcycle-sprint-tickets-17654731778<br />
<br />
== Etherpad ==<br />
Feel free to contribute ideas and agenda items to the etherpad: https://etherpad.openstack.org/p/congress-liberty-sprint</div>Thinrichshttps://wiki.openstack.org/w/index.php?title=Sprints/CongressLibertySprint&diff=85159Sprints/CongressLibertySprint2015-07-06T14:06:29Z<p>Thinrichs: Created page with "== What == This mid-cycle sprint is focused on enabling the Congress message bus to span multiple processes and multiple hosts. The message bus is what allows the Congress po..."</p>
<hr />
<div>== What ==<br />
This mid-cycle sprint is focused on enabling the Congress message bus to span multiple processes and multiple hosts. The message bus is what allows the Congress policy engine to communicate with the Congress wrappers around external services like Nova and Neutron. This cross-process, cross-host message bus is the platform we'll use to build version 2.0 of our distributed architecture.<br />
<br />
== When ==<br />
August 6-7, 2015 (Thu-Fri)<br />
<br />
== Where ==<br />
VMware campus<br />
3401 Hillview Ave, Palo Alto, CA, USA<br />
[https://www.google.com/maps/place/3401+Hillview+Ave,+Palo+Alto,+CA+94304/@37.4003365,-122.1444536,17z/data=!4m2!3m1!1s0x808fba9c4532d1ef:0xf76ebe4b63463bbc Google map]<br />
<br />
== Lodging ==<br />
[https://www.google.com/maps/search/hotels+near+3401+Hillview+Avenue,+Palo+Alto,+CA/@37.4091639,-122.1342971,15z Google map] of hotels near VMware campus<br />
<br />
== Registration ==<br />
Please RSVP at eventbrite: https://www.eventbrite.com/e/congress-liberty-midcycle-sprint-tickets-17654731778<br />
<br />
== Etherpad ==<br />
Feel free to contribute ideas and agenda items to the etherpad: https://etherpad.openstack.org/p/congress-liberty-sprint</div>Thinrichshttps://wiki.openstack.org/w/index.php?title=Sprints&diff=85155Sprints2015-07-06T13:33:12Z<p>Thinrichs: /* Liberty sprints */</p>
<hr />
<div>For the list of virtual sprints, please visit the [https://wiki.openstack.org/wiki/VirtualSprints Virtual Sprints] page.<br />
<br />
=== Liberty sprints ===<br />
<br />
Here is a chronological list of future sprints. Please keep it ordered and move past ones to the table below.<br />
{| border="1" cellpadding="4" cellspacing="4"<br />
|- bgcolor=#eeeeee<br />
| Date<br />
| Location<br />
| Theme<br />
| More information at<br />
|-<br />
| June 24-26, 2015<br />
| Fort Collins, CO, USA<br />
| Neutron Code Sprint<br />
| [[Neutron/LibertyCodeSprint]]<br />
|-<br />
| June 30, July 1, July 2<br />
| Tel Aviv, Israel<br />
| Neutron QoS Code Sprint<br />
| [[Neutron/LibertyCodeSprint]]<br />
|-<br />
| July 15-17, 2015<br />
| Boston University, Boston, MA, USA<br />
| Keystone Midcycle Sprint<br />
| [[Sprints/KeystoneLibertySprint]]<br />
|-<br />
| July 15-17, 2015<br />
| Seattle, WA, USA<br />
| LBaaS Midcycle Sprint<br />
| [https://etherpad.openstack.org/p/LBaaS-FWaaS-VPNaaS_Summer_Midcycle_meetup LBaaS/FWaaS/VPNaaS midcycle meetup etherpad]<br />
|-<br />
| July 21-23, 2015<br />
| IBM, Rochester MN, USA<br />
| Nova Midcycle Sprint<br />
| [[Sprints/NovaLibertySprint]]<br />
|-<br />
| July 21-23, 2015<br />
| HP, Fort Collins, Colorado, USA<br />
| Horizon Midcycle Sprint<br />
| [[Sprints/HorizonLibertySprint]]<br />
|-<br />
| July 28-29, 2015<br />
| Cisco Systems, Inc., San Jose, California, USA<br />
| Kolla Midcycle Sprint<br />
| [[Sprints/KollaLibertySprint]]<br />
|-<br />
| July 28-30, 2015<br />
| Rackspace, Blacksburg, Virginia, USA<br />
| Glance Midcycle Meetup<br />
| [https://etherpad.openstack.org/p/liberty-glance-mid-cycle-meetup Liberty Glance Mid Cycle Meetup]<br />
|-<br />
| July 29-30, 2015<br />
| IBM, Austin, Texas, USA<br />
| DefCore Midcycle Meetup<br />
| [https://etherpad.openstack.org/p/DefCoreFlag.MidCycle Liberty DefCore Mid Cycle Meetup]<br />
|-<br />
| August 4-7, 2015<br />
| HP, Fort Collins, Colorado, USA<br />
| Cinder Midcycle Sprint<br />
| [[Sprints/CinderLibertySprint]]<br />
|-<br />
| August 5-7, 2015<br />
| JHU Applied Physics Lab, Laurel, MD, USA<br />
| Barbican Midcycle Sprint<br />
| [[Sprints/BarbicanLibertySprint]]<br />
|-<br />
| August 6-7, 2015<br />
| VMware campus, Palo Alto, CA, USA<br />
| Congress Midcycle Sprint<br />
| [[Sprints/CongressLibertySprint]]<br />
|-<br />
| August 11 - 12, 2015<br />
| Cisco, Richardson, TX, USA<br />
| Product Working Group Midcycle<br />
| [[Sprints/Product_WGLibertySprint]]<br />
|- <br />
| August 12 - 14, 2015<br />
| HP, Seattle, Washington, USA<br />
| Ironic Midcycle Sprint<br />
| [[Sprints/IronicLibertySprint]]<br />
|- <br />
| August 26-28, 2015<br />
| HP, Sunnyvale, CA<br />
| Trove Midcycle Sprint<br />
| [[Sprints/TroveLibertySprint]]<br />
|-<br />
| September 1-4, 2015<br />
| HP, Seattle, Washington USA<br />
| Security Midcycle Sprint<br />
| [[Sprints/SecurityLibertySprint]]<br />
|}<br />
<br />
<br />
==== Previous Liberty Sprints ====<br />
<br />
{| border="1" cellpadding="4" cellspacing="4"<br />
|- bgcolor=#eeeeee<br />
| Date<br />
| Location<br />
| Theme<br />
| More information at<br />
|-<br />
|<br />
|<br />
|<br />
|<br />
|}<br />
<br />
=== Kilo sprints ===<br />
<br />
Here is a chronological list of future sprints. Please keep it ordered and move past ones to the table below.<br />
{| border="1" cellpadding="4" cellspacing="4"<br />
|- bgcolor=#eeeeee<br />
| Date<br />
| Location<br />
| Theme<br />
| More information at<br />
|-<br />
| March 25-27, 2015<br />
| New York, NY, USA<br />
| QA Code Sprint<br />
| [[QA/CodeSprintKiloNYC]]<br />
|-<br />
| April 13-15, 2015<br />
| Shanghai, PRC<br />
| Release Candidate Hackathon<br />
| [[PRC_Kilo_Hackathon|Kilo Hackathon in PRC]]<br />
|}<br />
<br />
<br />
<br />
==== Previous Kilo Sprints ====<br />
<br />
{| border="1" cellpadding="4" cellspacing="4"<br />
|- bgcolor=#eeeeee<br />
| Date<br />
| Location<br />
| Theme<br />
| More information at<br />
|-<br />
| December 8-10, 2014<br />
| Lehi, Utah, USA<br />
| Neutron<br />
| [[Sprints/NeutronKiloSprint]]<br />
|-<br />
| January 12 - 14, 2015<br />
| Santa Clara, CA, USA<br />
| Refstack<br />
| [https://etherpad.openstack.org/p/refstack-january-2015-midcycle Refstack Midcycle Meetup Etherpad]<br />
|-<br />
| January 19 - 21, 2015<br />
| San Antonio, TX, USA<br />
| Keystone<br />
| [[Sprints/KeystoneKiloSprint]]<br />
|-<br />
| January 19 - 22, 2015<br />
| San Jose, CA, USA<br />
| Designate<br />
| [[Sprints/DesignateKiloSprint]]<br />
|-<br />
| January 26 - 28, 2015<br />
| Palo Alto, CA, USA<br />
| Nova<br />
| [[Sprints/PaloAltoKiloSprint]]<br />
|-<br />
| January 27 - 28, 2015<br />
| Palo Alto, CA, USA<br />
| Glance<br />
| [https://etherpad.openstack.org/p/kilo-glance-mid-cycle-meetup Glance Midcycle Meetup Etherpad]<br />
|-<br />
| January 27 - 29, 2015<br />
| Austin, TX, USA<br />
| Cinder<br />
| [[Sprints/CinderKiloSprint]]<br />
|-<br />
| Feb 2 - 6, 2015<br />
| San Antonio, TX, USA<br />
| Neutron LBaaS<br />
| [https://etherpad.openstack.org/p/lbaas-kilo-meetup LBaaS Midcycle Meetup Etherpad]<br />
|-<br />
| Feb 3 - 5, 2015<br />
| Seattle, WA, USA<br />
| Trove<br />
| [[Sprints/TroveKiloSprint]]<br />
|-<br />
| Feb 3 - 5, 2015<br />
| Grenoble, France<br />
| Ironic<br />
| [[Sprints/IronicKiloSprint]]<br />
|-<br />
| Feb 11 - 13, 2015<br />
| San Francisco, CA, USA<br />
| Ironic<br />
| [[Sprints/IronicKiloSprint]]<br />
|-<br />
| Feb 16-18, 2015<br />
| Austin, TX, USA<br />
| Barbican<br />
| [[Sprints/BarbicanKiloSprint]]<br />
|-<br />
| Feb 18 - 20, 2015<br />
| Seattle, WA, USA<br />
| Deployment/TripleO<br />
| [[Sprints/DeploymentKiloSprint]], [https://etherpad.openstack.org/p/kilo-tripleo-midcycle-meetup Etherpad]<br />
|-<br />
| Feb 17-20, 2015 (Awaiting Confirmation)<br />
| San Francisco, CA, USA<br />
| OpenStack Security Group<br />
| [[Sprints/OSSGKiloSprint]]<br />
|-<br />
| Mar 2-3, 2015<br />
| San Francisco, CA, USA<br />
| Magnum (CaaS for OpenStack)<br />
| [[Magnum/Midcycle]]<br />
|-<br />
| Mar 9-10, 2015<br />
| Philadelphia, PA, USA<br />
| Operators Mid-Cycle<br />
| [[Operations/Meetups]]<br />
|}<br />
<br />
<br />
=== Juno sprints ===<br />
==== Here is a list of the previous sprints for Juno (all in '''2014''') ====<br />
{| border="1" cellpadding="4" cellspacing="4"<br />
|- bgcolor=#eeeeee<br />
| Date<br />
| Location<br />
| Theme<br />
| More information at<br />
|-<br />
| July 2 - 4<br />
| Paris, France<br />
| Ceilometer/All projects<br />
| [[Sprints/ParisJuno2014]]<br />
|-<br />
| July 7 - 9<br />
| San Antonio, TX, USA<br />
| Barbican<br />
| TBD, [[Meetings/Barbican]]<br />
|-<br />
| July 9 - 11<br />
| Bloomington, MN, USA<br />
| Neutron<br />
| [https://etherpad.openstack.org/p/neutron-juno-mid-cycle-meeting]<br />
|-<br />
| July 9 - 11<br />
| San Antonio, TX, USA<br />
| Keystone<br />
| [http://dolphm.com/openstack-keystone-hackathon-for-juno/]<br />
|-<br />
| July 14 - 18<br />
| Darmstadt, Germany<br />
| QA & Infra<br />
| [[Qa_Infra_Meetup_2014]]<br />
|-<br />
| July 14 - 18<br />
| Seattle, USA<br />
| Security Group<br />
| [https://etherpad.openstack.org/p/ossg-juno-meetup]<br />
|-<br />
| July 21 - July 25<br />
| Raleigh, NC, USA<br />
| TripleO (& Heat)<br />
| [https://etherpad.openstack.org/p/juno-midcycle-meetup]<br />
|-<br />
| July 28 - Jul 30<br />
| Hillsboro, OR, USA<br />
| Nova & Ironic<br />
| [[Sprints/BeavertonJunoSprint]]<br />
|-<br />
| Aug 11 - Aug 15<br />
| Fort Collins, CO<br />
| Cinder<br />
| [https://etherpad.openstack.org/p/CinderMidCycleMeetupAug2014 Cinder midcycle meetup etherpad]<br />
|-<br />
| Aug 18 - Aug 20<br />
| Raleigh, NC, USA<br />
| Heat<br />
| [https://etherpad.openstack.org/p/heat-juno-midcycle-meetup]<br />
|-<br />
| Aug 20 - Aug 23<br />
| Cambridge, MA, USA<br />
| Trove<br />
| [[Trove/JunoMidCycleMeetup|Link]]<br />
|-<br />
| July 24 - July 25<br />
| Palo Alto, CA, USA<br />
| Glance<br />
| [https://etherpad.openstack.org/p/glance-juno-mid-cycle-meeting Glance Juno midcycle meeting etherpad]<br />
|}</div>Thinrichshttps://wiki.openstack.org/w/index.php?title=CrossProjectLiaisons&diff=84021CrossProjectLiaisons2015-06-22T15:19:57Z<p>Thinrichs: /* Release management */</p>
<hr />
<div>Many of our cross-project teams need focused help for communicating with the other project teams. This page lists the people who have volunteered for that work.<br />
<br />
== Oslo ==<br />
<br />
There are now more projects consuming code from the Oslo incubator than we have Oslo contributors. That means we are going to need your help to make these migrations happen. We are asking for one person from each project to serve as a liaison between the project and Oslo, and to assist with integrating changes as we move code out of the incubator into libraries.<br />
<br />
* The liaison should be active in the project and familiar with the project-specific requirements for having patches accepted, but does not need to be a core reviewer or the PTL.<br />
* The liaison should be prepared to assist with writing and reviewing patches in their project as libraries are adopted, and with discussions of API changes to the libraries to make them easier to use within the project.<br />
* Liaisons should pay attention to [Oslo] tagged messages on the openstack-dev mailing list.<br />
* It is also useful for liaisons to be able to attend the Oslo team meeting ([[Meetings/Oslo]]) to participate in discussions and raise issues for real-time discussion.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Barbican || Douglas Mendizábal || redrobot<br />
|-<br />
| Ceilometer || Julien Danjou || jd__<br />
|-<br />
| Cinder || Jay Bryant || jungleboyj<br />
|-<br />
| Congress || Tim Hinrichs || thinrichs<br />
|-<br />
| Designate || || <br />
|-<br />
| Glance || Louis Taylor || kragniz<br />
|-<br />
| Heat || Thomas Herve || therve<br />
|-<br />
| Horizon || || (needs a volunteer)<br />
|-<br />
| Ironic || Lin Tan || lintan<br />
|-<br />
| Keystone || Brant Knudson || bknudson<br />
|-<br />
| Manila || Thomas Bechtold || toabctl<br />
|-<br />
| Murano || Serg Melikyan || sergmelikyan<br />
|-<br />
| Neutron || Ihar Hrachyshka || ihrachyshka<br />
|-<br />
| Nova || Victor Stinner|| haypo<br />
|-<br />
| [[Octavia]] || Michael Johnson || johnsom<br />
|-<br />
| Sahara || Sergey Reshetnyak || sreshetnyak<br />
|-<br />
| Swift || || <br />
|-<br />
| TripleO || Ben Nemec || bnemec<br />
|-<br />
| Trove || Amrith Kumar || amrith<br />
|-<br />
| Zaqar || Flavio Percoco || flaper87<br />
|-<br />
|}<br />
<br />
== Release management ==<br />
<br />
The Release Management Liaison is responsible for communication with the Release Management team, attending the weekly 1:1 syncs in #openstack-relmgr-office, keeping milestone plans up to date, and signing off milestone and release tags. That task has been [[PTL_Guide#Interactions_with_the_Release_team|traditionally filled by the PTL]], but they may now delegate this task if they wish.<br />
<br />
* By default, the liaison will be the PTL.<br />
* The Release Management Liaison is considered a contributor to the Release Cycle Management Program and is therefore allowed to vote in the election of its PTL.<br />
* The liaison may further delegate work to other subject matter experts.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Nova || John Garbutt || johnthetubaguy<br />
|-<br />
| Cinder || Mike Perez || thingee<br />
|-<br />
| Swift || John Dickinson || notmyname<br />
|-<br />
| Neutron || Kyle Mestery || mestery<br />
|-<br />
| Keystone || Morgan Fainberg || morganfainberg<br />
|-<br />
| Horizon || David Lyle || david-lyle<br />
|-<br />
| Glance || Nikhil Komawar || nikhil_k<br />
|-<br />
| Ceilometer || gordon chung || gordc<br />
|-<br />
| Heat || Angus Salkeld || asalkeld<br />
|-<br />
| Oslo || Doug Hellmann || dhellmann<br />
|-<br />
| Trove || Nikhil Manchanda || SlickNik<br />
|-<br />
| Sahara || Sergey Lukjanov || SergeyLukjanov<br />
|-<br />
| Ironic || Devananda Van der Veen || devananda<br />
|-<br />
| Zaqar || || <br />
|-<br />
| Designate || || <br />
|-<br />
| Barbican || Douglas Mendizábal || redrobot<br />
|-<br />
| Manila || Ben Swartzlander || bswartz<br />
|-<br />
| Murano || Serg Melikyan || sergmelikyan<br />
|-<br />
| Congress || Tim Hinrichs || thinrichs<br />
|}<br />
<br />
== QA ==<br />
<br />
There are now more projects being tested by Tempest and Grenade, or deployable by Devstack, than we have QA contributors. That means we are going to need your help to keep on top of everything. We are asking for one person from each project to serve as a liaison between the project and QA, and to assist with integrating changes as we move forward.<br />
<br />
The liaison should be a core reviewer for the project, but does not need to be the PTL. The liaison should be prepared to assist with writing and reviewing patches that interact with their project, and with discussions of changes to the QA projects to make them easier to use within the project.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Nova || Matt Riedemann || mriedem<br />
|-<br />
| Cinder || || <br />
|-<br />
| Swift || || <br />
|-<br />
| Neutron || Salvatore Orlando || salv-orlando<br />
|-<br />
| Keystone || David Stanek || dstanek<br />
|-<br />
| Horizon || || <br />
|-<br />
| Glance || || <br />
|-<br />
| Ceilometer || Chris Dent || cdent<br />
|-<br />
| Heat || Steve Baker || stevebaker<br />
|-<br />
| Oslo || Davanum Srinivas || dims <br />
|-<br />
| Trove || Nikhil Manchanda and Peter Stachowski || SlickNik and peterstac<br />
|-<br />
| Sahara || Luigi Toscano and Sergey Lukjanov || tosky and SergeyLukjanov<br />
|-<br />
| Ironic || Adam Gandelman || adam_g<br />
|-<br />
| Zaqar || || <br />
|-<br />
| Barbican || Steve Heyman || hockeynut <br />
|-<br />
| Manila || Valeriy Ponomaryov || vponomaryov<br />
|}<br />
<br />
== Documentation ==<br />
<br />
The OpenStack documentation is centralized on docs.openstack.org, but specialty knowledge is often needed when reviewing patches or triaging doc bugs. A doc liaison should be available to triage doc bugs when the docs team members do not know enough to triage accurately, and should expect to be added to doc reviews that affect their project. You'd be notified through email when you're added to a doc bug or a doc review. We would also appreciate attendance at the [https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting weekly doc team meeting], held in #openstack-meeting every Wednesday at alternating times for different time zones:<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Nova || Joe Gordon or Michael Still || Jog0 or mikal<br />
|-<br />
| Cinder || Mike Perez || thingee <br />
|-<br />
| Swift || Atul Jha or Chuck Thier || koolhead or creight<br />
|-<br />
| Neutron || Edgar Magana || emagana <br />
|-<br />
| Keystone || Steve Martinelli || stevemar<br />
|-<br />
| Horizon || Rob Cresswell || robcresswell<br />
|-<br />
| Glance || Brian Rosmaita || rosmaita<br />
|-<br />
| Ceilometer || Ildiko Vancsa || ildikov<br />
|-<br />
| Heat || Randall Burt || randallburt<br />
|-<br />
| Oslo || Doug Hellmann || dhellmann<br />
|-<br />
| Trove || Laurel Michaels, Matt Griffin || laurelm mattgriffin<br />
|-<br />
| Sahara || Chad Roberts || crobertsrh<br />
|-<br />
| Ironic || Mitsuhiro SHIGEMATSU || pshige<br />
|-<br />
| Zaqar || || <br />
|-<br />
| Barbican || Constanze Kratel || constanze <br />
|-<br />
| Manila || || <br />
|}<br />
<br />
== Stable Branch ==<br />
<br />
The Stable Branch Liaison is responsible for making sure backports are proposed for critical issues in their project and for making sure proposed backports are reviewed. They are also the contact point for stable branch release managers around point release times.<br />
<br />
* By default, the liaison will be the PTL.<br />
* The Stable Branch Liaison is considered a contributor to the Release Cycle Management Program and is therefore allowed to vote in the election of its PTL.<br />
* The liaison may further delegate work to other subject matter experts.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Ceilometer || Eoghan Glynn || eglynn<br />
|-<br />
| Cinder || Jay Bryant || jungleboyj<br />
|-<br />
| Glance || Erno Kuvaja || jokke_<br />
|-<br />
| Heat || Zane Bitter || zaneb<br />
|-<br />
| Horizon || Matthias Runge || mrunge <br />
|-<br />
| Ironic || Adam Gandelman || adam_g<br />
|-<br />
| Keystone || Dolph Mathews || dolphm<br />
|-<br />
| Neutron || Kyle Mestery (Ihar Hrachyshka?) || mestery (ihrachyshka?)<br />
|-<br />
| Nova || || <br />
|-<br />
| Sahara || Sergey Lukjanov || SergeyLukjanov<br />
|-<br />
| Swift || || <br />
|-<br />
| Trove || Amrith Kumar || amrith<br />
|-<br />
|}<br />
<br />
== Vulnerability management ==<br />
<br />
The [[Vulnerability Management]] Team needs domain specialists to help assess the impact of reported issues, coordinate the development of patches, review proposed patches, and propose backports. The liaison should be familiar with the [[Vulnerability Management]] process and embargo rules, and have a good grasp of security issues in software design.<br />
<br />
* The liaison should be a core reviewer for the project, but does not need to be the PTL.<br />
* By default, the liaison will be the PTL.<br />
* The liaison is the first line of contact for the Vulnerability Management team members.<br />
* The liaison is considered a contributor to the Release Cycle Management Program and is therefore allowed to vote in the election of its PTL.<br />
* The liaison may further delegate work to other subject matter experts.<br />
* The liaison maintains the members of the $PROJECT-coresec team in Launchpad (which can be given access to embargoed vulnerabilities)<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Ceilometer || Lianhao Lu or Gordon Chung || llu/gordc <br />
|-<br />
| Cinder || || <br />
|-<br />
| Glance || Stuart McLaren or Nikhil Komawar || mclaren or nikhil_k <br />
|-<br />
| Heat || Steve Hardy || shardy<br />
|-<br />
| Horizon || Lin Hua Cheng || lhcheng <br />
|-<br />
| Ironic || Jim Rollenhagen || jroll<br />
|-<br />
| Keystone || Dolph Mathews || dolphm<br />
|-<br />
| Neutron || Salvatore Orlando || salv-orlando<br />
|-<br />
| Nova || Michael Still || mikal<br />
|-<br />
| Sahara || Michael McCune or Sergey Lukjanov || elmiko or SergeyLukjanov<br />
|-<br />
| Swift || || <br />
|-<br />
| Trove || Nikhil Manchanda || SlickNik <br />
|-<br />
|}<br />
<br />
== API Working Group ==<br />
<br />
The [[API_Working_Group|API Working Group]] seeks API subject matter experts for each project to communicate plans for API updates, review API guidelines with their project's view in mind, and review the API Working Group guidelines as they are drafted. The liaison should be familiar with the project's REST API design and future planning for changes to it.<br />
<br />
* The liaison should be the PTL or whomever they delegate to be their representative<br />
* The liaison is the first line of contact for the API Working Group team members<br />
* The liaison may further delegate work to other subject matter experts<br />
* The liaison should be aware of and engaged in the API Working Group [[API_Working_Group#Communication|Communication channels]]<br />
* The Nova team has been very explicit about how they will liaise with the API Working Group; see the [[Nova/APIWGLiaisons|Responsibilities of Liaisons]]<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Barbican || Douglas Mendizábal || redrobot<br />
|-<br />
| Ceilometer || Chris Dent || cdent<br />
|-<br />
| Cinder || Alex Meade || ameade<br />
|-<br />
| Congress || || <br />
|-<br />
| Designate || || <br />
|-<br />
| Glance || Stuart McLaren or Nikhil Komawar || mclaren or nikhil_k <br />
|-<br />
| Heat || Ryan Brown || ryansb<br />
|-<br />
| Horizon || Cindy Lu || clu_ <br />
|-<br />
| Ironic || Lucas Alvares Gomes || lucasagomes<br />
|-<br />
| Keystone || Dolph Mathews || dolphm<br />
|-<br />
| MagnetoDB|| Ilya Sviridov || isviridov<br />
|-<br />
| Magnum || ||<br />
|-<br />
| Manila || Alex Meade || ameade<br />
|-<br />
| Mistral || ||<br />
|-<br />
| Murano || ||<br />
|-<br />
| Neutron || Salvatore Orlando<br />Henry Gessau || salv-orlando<br />HenryG<br />
|-<br />
| Nova || Matthew Gilliard and Alex Xu || gilliard and alex_xu <br />
|-<br />
| Rally || || <br />
|-<br />
| Sahara || Michael McCune and Sergey Lukjanov || elmiko and SergeyLukjanov<br />
|-<br />
| Swift || John Dickinson || notmyname <br />
|-<br />
| Trove || Peter Stachowski and Amrith Kumar || peterstac and amrith<br />
|-<br />
| Tripleo || || <br />
|-<br />
| Zaqar || Fei Long Wang || flwang<br />
|-<br />
|}<br />
<br />
== Logging Working Group ==<br />
<br />
The [[LogWorkingGroup|Log Working Group]] seeks experts for each project to assist with making each project's logging match the new [http://specs.openstack.org/openstack/openstack-specs/specs/log-guidelines.html Logging Guidelines].<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Glance || Erno Kuvaja || jokke_<br />
|-<br />
| Oslo || Doug Hellmann || dhellmann<br />
|-<br />
| Nova || John Garbutt || johnthetubaguy<br />
|-<br />
| Sahara || Nikolay Starodubtsev || Nikolay_St<br />
|}<br />
<br />
== Inter-project Liaisons ==<br />
<br />
In some cases, it is useful to have liaisons between projects. [http://lists.openstack.org/pipermail/openstack-dev/2015-April/062327.html For example, it is useful for the Nova and Neutron projects to have liaisons, because the projects have complex interactions and dependencies.] Ideally, a cross-project effort should have two members, one from each project, to facilitate communication and knowledge transfer.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Projects !! Name !! IRC Handle !! Role<br />
|-<br />
| Nova / Neutron || || ||<br />
|-<br />
| || Sean M. Collins || sc68cal || Neutron liaison for Nova<br />
|-<br />
| || Brent Eagles || beagles || Nova liaison for Neutron<br />
|-<br />
| Nova / Glance || || ||<br />
|-<br />
| || Fei Long Wang || - || Glance liaison for Nova<br />
|-<br />
| || Jay Pipes || jaypipes || Nova liaison for Glance<br />
|-<br />
| Nova / Ironic || John Villalovos || jlvillal || Ironic liaison for Nova<br />
|-<br />
| || Michael Davies || mrda || Ironic liaison for Nova<br />
|-<br />
| Neutron / Ironic || || ||<br />
|-<br />
| || Sukhdev Kapur || sukhdev || Neutron liaison for Ironic<br />
|-<br />
| || Mitsuhiro SHIGEMATSU and Jim Rollenhagen || pshige and jroll || Ironic liaison for Neutron<br />
|}<br />
<br />
=== Etherpads ===<br />
<br />
The following is a list of etherpads that are used for inter-project liaisons, and are continuously updated.<br />
<br />
Nova - Neutron: https://etherpad.openstack.org/p/nova-neutron</div>Thinrichshttps://wiki.openstack.org/w/index.php?title=CrossProjectLiaisons&diff=84020CrossProjectLiaisons2015-06-22T15:18:40Z<p>Thinrichs: /* Oslo */</p>
<hr />
<div>Many of our cross-project teams need focused help for communicating with the other project teams. This page lists the people who have volunteered for that work.<br />
<br />
== Oslo ==<br />
<br />
There are now more projects consuming code from the Oslo incubator than we have Oslo contributors. That means we are going to need your help to make these migrations happen. We are asking for one person from each project to serve as a liaison between the project and Oslo, and to assist with integrating changes as we move code out of the incubator into libraries.<br />
<br />
* The liaison should be active in the project and familiar with the project-specific requirements for having patches accepted, but does not need to be a core reviewer or the PTL.<br />
* The liaison should be prepared to assist with writing and reviewing patches in their project as libraries are adopted, and with discussions of API changes to the libraries to make them easier to use within the project.<br />
* Liaisons should pay attention to [Oslo] tagged messages on the openstack-dev mailing list.<br />
* It is also useful for liaisons to be able to attend the Oslo team meeting ([[Meetings/Oslo]]) to participate in discussions and raise issues for real-time discussion.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Barbican || Douglas Mendizábal || redrobot<br />
|-<br />
| Ceilometer || Julien Danjou || jd__<br />
|-<br />
| Cinder || Jay Bryant || jungleboyj<br />
|-<br />
| Congress || Tim Hinrichs || thinrichs<br />
|-<br />
| Designate || || <br />
|-<br />
| Glance || Louis Taylor || kragniz<br />
|-<br />
| Heat || Thomas Herve || therve<br />
|-<br />
| Horizon || || (needs a volunteer)<br />
|-<br />
| Ironic || Lin Tan || lintan<br />
|-<br />
| Keystone || Brant Knudson || bknudson<br />
|-<br />
| Manila || Thomas Bechtold || toabctl<br />
|-<br />
| Murano || Serg Melikyan || sergmelikyan<br />
|-<br />
| Neutron || Ihar Hrachyshka || ihrachyshka<br />
|-<br />
| Nova || Victor Stinner|| haypo<br />
|-<br />
| [[Octavia]] || Michael Johnson || johnsom<br />
|-<br />
| Sahara || Sergey Reshetnyak || sreshetnyak<br />
|-<br />
| Swift || || <br />
|-<br />
| TripleO || Ben Nemec || bnemec<br />
|-<br />
| Trove || Amrith Kumar || amrith<br />
|-<br />
| Zaqar || Flavio Percoco || flaper87<br />
|-<br />
|}<br />
<br />
== Release management ==<br />
<br />
The Release Management Liaison is responsible for communication with the Release Management team, attending the weekly 1:1 syncs in #openstack-relmgr-office, keeping milestone plans up to date, and signing off on milestone and release tags. That task has [[PTL_Guide#Interactions_with_the_Release_team|traditionally been filled by the PTL]], but the PTL may now delegate it if they wish.<br />
<br />
* By default, the liaison will be the PTL.<br />
* The Release Management Liaison is considered a contributor to the Release Cycle Management Program and is therefore allowed to vote in the election of its PTL.<br />
* The liaison may further delegate work to other subject matter experts.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Nova || John Garbutt || johnthetubaguy<br />
|-<br />
| Cinder || Mike Perez || thingee<br />
|-<br />
| Swift || John Dickinson || notmyname<br />
|-<br />
| Neutron || Kyle Mestery || mestery<br />
|-<br />
| Keystone || Morgan Fainberg || morganfainberg<br />
|-<br />
| Horizon || David Lyle || david-lyle<br />
|-<br />
| Glance || Nikhil Komawar || nikhil_k<br />
|-<br />
| Ceilometer || gordon chung || gordc<br />
|-<br />
| Heat || Angus Salkeld || asalkeld<br />
|-<br />
| Oslo || Doug Hellmann || dhellmann<br />
|-<br />
| Trove || Nikhil Manchanda || SlickNik<br />
|-<br />
| Sahara || Sergey Lukjanov || SergeyLukjanov<br />
|-<br />
| Ironic || Devananda Van der Veen || devananda<br />
|-<br />
| Zaqar || || <br />
|-<br />
| Designate || || <br />
|-<br />
| Barbican || Douglas Mendizábal || redrobot<br />
|-<br />
| Manila || Ben Swartzlander || bswartz<br />
|-<br />
| Murano || Serg Melikyan || sergmelikyan<br />
|}<br />
<br />
== QA ==<br />
<br />
There are now more projects being tested by Tempest and Grenade, or deployable by Devstack, than we have QA contributors. That means we are going to need your help to keep on top of everything. We are asking for one person from each project to serve as a liaison between the project and QA, and to assist with integrating changes as we move forward.<br />
<br />
The liaison should be a core reviewer for the project, but does not need to be the PTL. The liaison should be prepared to assist with writing and reviewing patches that interact with their project, and with discussions of changes to the QA projects to make them easier to use within the project.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Nova || Matt Riedemann || mriedem<br />
|-<br />
| Cinder || || <br />
|-<br />
| Swift || || <br />
|-<br />
| Neutron || Salvatore Orlando || salv-orlando<br />
|-<br />
| Keystone || David Stanek || dstanek<br />
|-<br />
| Horizon || || <br />
|-<br />
| Glance || || <br />
|-<br />
| Ceilometer || Chris Dent || cdent<br />
|-<br />
| Heat || Steve Baker || stevebaker<br />
|-<br />
| Oslo || Davanum Srinivas || dims <br />
|-<br />
| Trove || Nikhil Manchanda and Peter Stachowski || SlickNik and peterstac<br />
|-<br />
| Sahara || Luigi Toscano and Sergey Lukjanov || tosky and SergeyLukjanov<br />
|-<br />
| Ironic || Adam Gandelman || adam_g<br />
|-<br />
| Zaqar || || <br />
|-<br />
| Barbican || Steve Heyman || hockeynut <br />
|-<br />
| Manila || Valeriy Ponomaryov || vponomaryov<br />
|}<br />
<br />
== Documentation ==<br />
<br />
The OpenStack documentation is centralized on docs.openstack.org, but specialty knowledge is often needed when reviewing patches or triaging doc bugs. A doc liaison should be available to triage doc bugs when the docs team members do not know enough to triage accurately, and should expect to be added to doc reviews that affect their project. You'd be notified through email when you're added to a doc bug or a doc review. We would also appreciate attendance at the [https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting weekly doc team meeting], held in #openstack-meeting every Wednesday at alternating times for different time zones:<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Nova || Joe Gordon or Michael Still || Jog0 or mikal<br />
|-<br />
| Cinder || Mike Perez || thingee <br />
|-<br />
| Swift || Atul Jha or Chuck Thier || koolhead or creight<br />
|-<br />
| Neutron || Edgar Magana || emagana <br />
|-<br />
| Keystone || Steve Martinelli || stevemar<br />
|-<br />
| Horizon || Rob Cresswell || robcresswell<br />
|-<br />
| Glance || Brian Rosmaita || rosmaita<br />
|-<br />
| Ceilometer || Ildiko Vancsa || ildikov<br />
|-<br />
| Heat || Randall Burt || randallburt<br />
|-<br />
| Oslo || Doug Hellmann || dhellmann<br />
|-<br />
| Trove || Laurel Michaels, Matt Griffin || laurelm mattgriffin<br />
|-<br />
| Sahara || Chad Roberts || crobertsrh<br />
|-<br />
| Ironic || Mitsuhiro SHIGEMATSU || pshige<br />
|-<br />
| Zaqar || || <br />
|-<br />
| Barbican || Constanze Kratel || constanze <br />
|-<br />
| Manila || || <br />
|}<br />
<br />
== Stable Branch ==<br />
<br />
The Stable Branch Liaison is responsible for making sure backports are proposed for critical issues in their project and for making sure proposed backports are reviewed. They are also the contact point for stable branch release managers around point release times.<br />
<br />
* By default, the liaison will be the PTL.<br />
* The Stable Branch Liaison is considered a contributor to the Release Cycle Management Program and is therefore allowed to vote in the election of its PTL.<br />
* The liaison may further delegate work to other subject matter experts.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Ceilometer || Eoghan Glynn || eglynn<br />
|-<br />
| Cinder || Jay Bryant || jungleboyj<br />
|-<br />
| Glance || Erno Kuvaja || jokke_<br />
|-<br />
| Heat || Zane Bitter || zaneb<br />
|-<br />
| Horizon || Matthias Runge || mrunge <br />
|-<br />
| Ironic || Adam Gandelman || adam_g<br />
|-<br />
| Keystone || Dolph Mathews || dolphm<br />
|-<br />
| Neutron || Kyle Mestery (Ihar Hrachyshka?) || mestery (ihrachyshka?)<br />
|-<br />
| Nova || || <br />
|-<br />
| Sahara || Sergey Lukjanov || SergeyLukjanov<br />
|-<br />
| Swift || || <br />
|-<br />
| Trove || Amrith Kumar || amrith<br />
|-<br />
|}<br />
<br />
== Vulnerability management ==<br />
<br />
The [[Vulnerability Management]] Team needs domain specialists to help assess the impact of reported issues, coordinate the development of patches, review proposed patches, and propose backports. The liaison should be familiar with the [[Vulnerability Management]] process and embargo rules, and have a good grasp of security issues in software design.<br />
<br />
* The liaison should be a core reviewer for the project, but does not need to be the PTL.<br />
* By default, the liaison will be the PTL.<br />
* The liaison is the first line of contact for the Vulnerability Management team members.<br />
* The liaison is considered a contributor to the Release Cycle Management Program and is therefore allowed to vote in the election of its PTL.<br />
* The liaison may further delegate work to other subject matter experts.<br />
* The liaison maintains the members of the $PROJECT-coresec team in Launchpad (which can be given access to embargoed vulnerabilities)<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Ceilometer || Lianhao Lu or Gordon Chung || llu/gordc <br />
|-<br />
| Cinder || || <br />
|-<br />
| Glance || Stuart McLaren or Nikhil Komawar || mclaren or nikhil_k <br />
|-<br />
| Heat || Steve Hardy || shardy<br />
|-<br />
| Horizon || Lin Hua Cheng || lhcheng <br />
|-<br />
| Ironic || Jim Rollenhagen || jroll<br />
|-<br />
| Keystone || Dolph Mathews || dolphm<br />
|-<br />
| Neutron || Salvatore Orlando || salv-orlando<br />
|-<br />
| Nova || Michael Still || mikal<br />
|-<br />
| Sahara || Michael McCune or Sergey Lukjanov || elmiko or SergeyLukjanov<br />
|-<br />
| Swift || || <br />
|-<br />
| Trove || Nikhil Manchanda || SlickNik <br />
|-<br />
|}<br />
<br />
== API Working Group ==<br />
<br />
The [[API_Working_Group|API Working Group]] seeks API subject matter experts for each project to communicate plans for API updates, review API guidelines with their project's view in mind, and review the API Working Group guidelines as they are drafted. The liaison should be familiar with the project's REST API design and future planning for changes to it.<br />
<br />
* The liaison should be the PTL or whomever they delegate to be their representative<br />
* The liaison is the first line of contact for the API Working Group team members<br />
* The liaison may further delegate work to other subject matter experts<br />
* The liaison should be aware of and engaged in the API Working Group [[API_Working_Group#Communication|Communication channels]]<br />
* The Nova team has been very explicit about how they will liaise with the API Working Group; see the [[Nova/APIWGLiaisons|Responsibilities of Liaisons]]<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Barbican || Douglas Mendizábal || redrobot<br />
|-<br />
| Ceilometer || Chris Dent || cdent<br />
|-<br />
| Cinder || Alex Meade || ameade<br />
|-<br />
| Congress || || <br />
|-<br />
| Designate || || <br />
|-<br />
| Glance || Stuart McLaren or Nikhil Komawar || mclaren or nikhil_k <br />
|-<br />
| Heat || Ryan Brown || ryansb<br />
|-<br />
| Horizon || Cindy Lu || clu_ <br />
|-<br />
| Ironic || Lucas Alvares Gomes || lucasagomes<br />
|-<br />
| Keystone || Dolph Mathews || dolphm<br />
|-<br />
| MagnetoDB|| Ilya Sviridov || isviridov<br />
|-<br />
| Magnum || ||<br />
|-<br />
| Manila || Alex Meade || ameade<br />
|-<br />
| Mistral || ||<br />
|-<br />
| Murano || ||<br />
|-<br />
| Neutron || Salvatore Orlando<br />Henry Gessau || salv-orlando<br />HenryG<br />
|-<br />
| Nova || Matthew Gilliard and Alex Xu || gilliard and alex_xu <br />
|-<br />
| Rally || || <br />
|-<br />
| Sahara || Michael McCune and Sergey Lukjanov || elmiko and SergeyLukjanov<br />
|-<br />
| Swift || John Dickinson || notmyname <br />
|-<br />
| Trove || Peter Stachowski and Amrith Kumar || peterstac and amrith<br />
|-<br />
| Tripleo || || <br />
|-<br />
| Zaqar || Fei Long Wang || flwang<br />
|-<br />
|}<br />
<br />
== Logging Working Group ==<br />
<br />
The [[LogWorkingGroup|Log Working Group]] seeks experts for each project to assist with making each project's logging match the new [http://specs.openstack.org/openstack/openstack-specs/specs/log-guidelines.html Logging Guidelines].<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Liaison !! IRC Handle<br />
|-<br />
| Glance || Erno Kuvaja || jokke_<br />
|-<br />
| Oslo || Doug Hellmann || dhellmann<br />
|-<br />
| Nova || John Garbutt || johnthetubaguy<br />
|-<br />
| Sahara || Nikolay Starodubtsev || Nikolay_St<br />
|}<br />
<br />
== Inter-project Liaisons ==<br />
<br />
In some cases, it is useful to have liaisons between projects. [http://lists.openstack.org/pipermail/openstack-dev/2015-April/062327.html For example, it is useful for the Nova and Neutron projects to have liaisons, because the projects have complex interactions and dependencies.] Ideally, a cross-project effort should have two members, one from each project, to facilitate communication and knowledge transfer.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Projects !! Name !! IRC Handle !! Role<br />
|-<br />
| Nova / Neutron || || ||<br />
|-<br />
| || Sean M. Collins || sc68cal || Neutron liaison for Nova<br />
|-<br />
| || Brent Eagles || beagles || Nova liaison for Neutron<br />
|-<br />
| Nova / Glance || || ||<br />
|-<br />
| || Fei Long Wang || - || Glance liaison for Nova<br />
|-<br />
| || Jay Pipes || jaypipes || Nova liaison for Glance<br />
|-<br />
| Nova / Ironic || John Villalovos || jlvillal || Ironic liaison for Nova<br />
|-<br />
| || Michael Davies || mrda || Ironic liaison for Nova<br />
|-<br />
| Neutron / Ironic || || ||<br />
|-<br />
| || Sukhdev Kapur || sukhdev || Neutron liaison for Ironic<br />
|-<br />
| || Mitsuhiro SHIGEMATSU and Jim Rollenhagen || pshige and jroll || Ironic liaison for Neutron<br />
|}<br />
<br />
=== Etherpads ===<br />
<br />
The following is a list of etherpads that are used for inter-project liaisons, and are continuously updated.<br />
<br />
Nova - Neutron: https://etherpad.openstack.org/p/nova-neutron</div>Thinrichshttps://wiki.openstack.org/w/index.php?title=Congress&diff=82979Congress2015-06-09T13:55:20Z<p>Thinrichs: /* Policy as a service ("Congress") */</p>
<hr />
<div>== Policy as a service ("Congress") ==<br />
Here are some external resources with detailed information about Congress.<br />
<br />
* Code<br />
** Server<br />
*** [https://github.com/openstack/congress Server source code (github)] <br />
*** [https://launchpad.net/congress Server bugs (Launchpad)]<br />
*** [https://review.openstack.org/#/q/status:open+project:openstack/congress,n,z Outstanding server changes (Gerrit)]<br />
** Python client<br />
*** [https://github.com/openstack/python-congressclient Python client source code (github)]<br />
*** [https://launchpad.net/python-congressclient Python client bugs (Launchpad)]<br />
*** [https://review.openstack.org/#/q/status:open+project:openstack/python-congressclient,n,z Outstanding python client changes (Gerrit)]<br />
** [https://github.com/openstack/congress-specs Specs: suggestions for additional features for both server and client (github)]<br />
<br />
* Documents<br />
** [https://docs.google.com/a/vmware.com/document/d/1f2xokl9Tc47aV67KEua0PBRz4jsdSDLXth7dYe-jz6Q/edit Overall design doc]<br />
** [https://docs.google.com/document/d/1K9RkQuBSPN7Z2TmKfok7mw3E24otEGo8Pnsemxd5544/edit Data integration design doc]<br />
** [https://goo.gl/1E5MeY API design doc]<br />
** [https://drive.google.com/open?id=1ksDilJYXV-5AXWON8PLMedDKr9NpS8VbT0jIy_MIEtI&authuser=0 Delegation to VM-migration engine design doc]<br />
** [http://ruleyourcloud.com/ Blog]<br />
<br />
* Material from last summit<br />
** [https://docs.google.com/file/d/0ByDz-eYOtswScTlmamlhLXpmTXc/edit Vancouver Summit Congress Overview slides]<br />
** [https://etherpad.openstack.org/p/congress-liberty-design-session Vancouver Summit Liberty etherpad]<br />
** [https://docs.google.com/document/d/1lXmMkUhiSZYK45POd5ungPjVR--Fs_wJHeQ6bXWwP44/pub Vancouver Hands On Lab instructions]<br />
** [https://drive.google.com/open?id=0ByDz-eYOtswScUFUc1ZrVVhmQlk&authuser=0 Delegation for VM placement slides]<br />
<br />
*Meetings<br />
** [https://wiki.openstack.org/wiki/Meetings/Congress Meeting times]<br />
** [http://eavesdrop.openstack.org/meetings/congressteammeeting/2014/ Meeting history]<br />
<br />
<br />
If you'd like more information, join #congress on freenode, or contact Tim Hinrichs (timothy.l.hinrichs@gmail.com).<br />
<br />
=== Mission ===<br />
Congress is an OpenStack project to provide policy as a service across any collection of cloud services in order to offer governance and compliance for dynamic infrastructures.<br />
<br />
=== Why Congress===<br />
IT services will always be governed and brought into compliance with business-level policies. <br />
<br />
In the past, policy was enforced manually, e.g. by someone sending an email asking for an application to be added to the network, secured by specific firewall entries, connected to an agreed-on storage, and so on. In the cloud era, IT has become more agile: users expect immediate delivery of services, a level of responsiveness that is unattainable by the team responsible for governance. Hence, manual enforcement is no longer feasible. <br />
<br />
Both enterprises and vendors have fielded engines for enforcing policy (semi)-automatically, creating a fragmented market where enterprises reinvent the wheel while maintaining their own code, and vendors fail to meet enterprise needs, either for technical reasons or because their solutions require vertical integration and lock-in. <br />
<br />
The Congress policy service enables IT organizations to extend their OpenStack footprint by onboarding new applications while keeping the strong compliance and governance dictated by their own business policies, all while leveraging a community-driven implementation in which vendors can plug into a common interface.<br />
<br />
=== What is Congress ===<br />
Congress aims to provide an extensible open-source framework for governance and regulatory compliance across any cloud services (e.g. application, network, compute and storage) within a dynamic infrastructure. It is a cloud service whose sole responsibility is policy enforcement.<br />
<br />
Congress aims to include the following functionality:<br />
* Allow cloud administrators and tenants to use a high-level, general purpose, declarative language to describe business logic. The policy language does not include a fixed collection of policy types or built-in enforcement mechanisms; rather, a policy simply defines which states of the cloud are in compliance and which are not, where the state of the cloud is the collection of data provided by the cloud services available to Congress. Some examples:<br />
** Application A is only allowed to communicate with application B.<br />
** A virtual machine owned by tenant A should always have a public network connection if tenant A is part of group B.<br />
** Virtual machine A should never be provisioned in a different geographic region than storage B.<br />
* Offer a pluggable architecture that connects to any collection of cloud services<br />
* Enforce policy<br />
** Proactively: preventing violations before they occur<br />
** Reactively: correcting violations after they occur<br />
** Interactively: give administrators insight into policy and its violations, e.g. identifying violations, explaining their causes, computing potential remediations, simulating a sequence of changes.<br />
<br />
=== Policy Language ===<br />
The policy language for Congress is Datalog, which is essentially SQL with a syntax closer to that of traditional programming languages. This declarative language was chosen because its semantics are well known to a broad range of DevOps practitioners, yet its syntax is terser, making it better suited for expressing real-world policies. The core grammar is given below.<br />
<br />
<policy> ::= <rule>*<br /><br />
<rule> ::= <atom> COLONMINUS <literal> (COMMA <literal>)*<br /><br />
<literal> ::= <atom> <br /><br />
<literal> ::= NOT <atom> <br /><br />
<atom> ::= TABLENAME LPAREN <term> (COMMA <term>)* RPAREN <br /><br />
<term> ::= INTEGER | FLOAT | STRING | VARIABLE <br /><br />
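As a small, purely illustrative example of a policy written in this grammar, the single rule below flags any server that is in the "SHUTOFF" state and whose tenant is not on an exemption list. The table names (server, exempt) and their column layouts are hypothetical placeholders, not the schemas actually exposed by Congress's data source drivers.<br />
<pre>
error(vm) :-
    server(vm, "SHUTOFF", tenant),
    not exempt(tenant)
</pre>
A rule like this only classifies cloud state; error is simply a commonly used name for a table of policy violations that administrators can then query.<br />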
<br />
=== Use Cases and Examples===<br />
<br />
Detailed use cases can be viewed in the google doc here: [https://goo.gl/RM3W6W Congress Use Cases]. Please feel free to add additional use cases that you would like Congress to support.<br />
<br />
Different examples of Congress policies that have been implemented can be viewed here: [https://goo.gl/UfeHEk Congress Examples]<br />
<br />
Here we give examples of the policies that each of our releases supports.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Use case !! Congress release<br />
|-<br />
| Monitor violations of: every network connected to a VM must either be public or owned by someone in the same group as the VM's owner || alpha<br />
|-<br />
| Stop a user from constructing a new VM if she owns a VM averaging less than 1% CPU-utilization (requires API gateway or Nova support) || kilo<br />
|-<br />
| Every time a user creates a Neutron security group that opens port 80, delete that security group || kilo<br />
|}<br />
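As a rough sketch of how the monitoring half of the last use case might be written, the rule below marks security groups that open TCP port 80 for ingress. The table name and column order are simplified placeholders rather than the real Neutron data source schema, and the reactive step of actually deleting the offending group is carried out by Congress's enforcement machinery rather than by this classification rule.<br />
<pre>
open_http_group(group_id) :-
    security_group_rule(group_id, "ingress", "tcp", 80)
</pre>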
<br />
=== Future Features ===<br />
Here is a list of features that would be useful in future releases. We have not prioritized or assigned any of these, so if any of you developers think they sound fun, let us know and we can explain more.<br />
<br />
* Basic Policy language implementation<br />
** Multi-threaded Datalog implementation<br />
** Bottom-up datalog evaluation<br />
** Query optimization<br />
** Materialized implementation: automated view selection/inlining<br />
<br />
* Enhanced Policy language<br />
** ActiveDirectory facade<br />
** Syntax improvements (modals like insert/delete)<br />
** Support for compound objects (safe, stratified recursion)<br />
** Richer support for describing actions in Enforcement policy<br />
** Modules<br />
<br />
* Policy structure<br />
** Multi-tenant<br />
** Multi-stakeholder (tenants, finance, operations, etc.)<br />
<br />
* Enforcement<br />
** Execution of (rich) actions<br />
** Carve out subpolicies and push to other components, e.g. Neutron<br />
** Add consultation with Congress to other OS components, e.g. Nova/Neutron<br />
** Proper remediation enumeration with Classification+Action policies<br />
** Find ways of automatically choosing the proper remediation strategy (e.g. priorities/monotonicity)<br />
** Give cloud owner way of configuring how proactive/reactive/etc. based on information from separate policies.<br />
<br />
* Libraries<br />
** Data source drivers for common OS and non-OS components<br />
** HIPAA, etc. encoding<br />
** Ontologies for different sectors, e.g. finance<br />
<br />
* Policy Analysis<br />
** Look for infinite loops through Enforcement policy (using Action policy)<br />
** Compare Access control policy and Classification policy for redundancy<br />
** Change impact analysis<br />
<br />
* Dashboard<br />
** IDE for policy (different levels: raw-Datalog, AD, checkbox-based)<br />
** List violations<br />
** Explain violations (step-by-step tracing through policy)<br />
** Simulate state change and action execution<br />
** Enumerate remediations for a given violation<br />
<br />
* Architecture and API<br />
** Formalize and implement full introspection and query APIs<br />
** Distribute across multiple nodes<br />
** Ensure Congress can use another Congress instance as data<br />
<br />
* Authentication and Access Control<br />
** Add authentication and access controls on API/dashboard<br />
** When remediating, which user(s) are executing actions? Does Congress need admin credentials for all its cloud services or are user credentials part of actions? Need proper storage for those credentials. Etc.<br />
<br />
=== Relationship to Other Projects ===<br />
<br />
'''Related OpenStack components'''<br />
* [https://wiki.openstack.org/wiki/Keystone Keystone]: Keystone is an identity service providing authentication and high-level authorization for OpenStack. Congress can leverage Keystone as an input for policies. For example, an auditor might want to ensure that the running system is consistent with current Keystone authorization decisions.<br />
<br />
* [https://wiki.openstack.org/wiki/Heat Heat]: Heat is an orchestration service for application provisioning and lifecycle management. Congress can ensure that applications managed by Heat are consistent with business policy.<br />
<br />
* [https://wiki.openstack.org/wiki/Mistral Mistral]: Mistral is a task management service, or in other words Workflow as a service. Its primary use cases include Cron-for-the-cloud, execution of long-running tasks, and big data analysis. Congress could potentially utilize Mistral to execute actions that bring the cloud back into policy compliance.<br />
<br />
'''Policy Initiatives'''<br />
<br />
* [https://blueprints.launchpad.net/nova/+spec/solver-scheduler SolverScheduler (Nova blueprint)]: The SolverScheduler provides an interface for using different constraint solvers to solve placement problems for Nova. Depending on how it is applied, it could be used for initial provisioning, re-balancing loads, or both.<br />
<br />
* [https://github.com/openstack/gantt Gantt]: A scheduler framework for use by different OpenStack components. It used to be a [https://wiki.openstack.org/wiki/Meetings/Scheduler subgroup of Nova] focused on scheduling VMs based on resource utilization, and it includes a plugin framework for making arbitrary metrics available to the scheduler.<br />
<br />
* [https://docs.google.com/document/d/1ZbOFxAoibZbJmDWx1oOrOsDcov6Cuom5aaBIrupCD9E/edit?pli=1 Neutron policy group]: This group aims to add a policy API to Neutron, where tenants express policy between groups of networks, ports, etc., and that policy is enforced. Policy statements are of the form "for every network flow between groups A and B that satisfies these conditions, apply a constraint on that flow". The constraints that can be enforced on a flow will grow as the enforcement engine matures; currently, the constraints are Allow and Deny, but there are plans for quality-of-service constraints as well. <br />
<br />
* [https://wiki.openstack.org/wiki/OpenAttestation Open Attestation]: This project provides an SDK for verifying host integrity. It provides some policy-based management capabilities, though documentation is limited.<br />
<br />
* [https://blueprints.launchpad.net/nova/+spec/policy-based-scheduler Policy-based Scheduling Module (Nova blueprint)]: This effort aims to schedule Nova resources per client, per cluster of resources and per context (e.g. overload, time, etc.). A proof of concept was presented at the Demo Theater at OpenStack Juno Summit. <br />
<br />
* [https://docs.google.com/document/d/1DMsnGxQ3P-OwZCF3uxaUeEFaKX8LqUqmmgQ_7EVK7Y8/edit Tetris]: This effort provides condition-action policies (under certain conditions, execute these actions). It is intended to be a generic condition-action engine handling complex actions and optimization. This effort subsumes the [https://blueprints.launchpad.net/nova/+spec/resource-optimization-service Runtime Policies blueprint] within Nova. It also appears to subsume the [http://openstack-neat.org/ Neat] effort. Tetris and Congress have recently decided to merge because of their highly aligned goals and approaches.<br />
<br />
* [https://review.openstack.org/#/c/95907/6/specs/convergence.rst Convergence Engine (Heat spec)]: This effort separates the ideas of desired state and observed state for the objects Heat manages. The Convergence Engine will detect when the desired state and observed state differ and take action to eliminate those differences.<br />
<br />
* [http://docs.openstack.org/developer/swift/overview_policies.html Swift Storage Policies]: A Swift storage policy describes a virtual storage system that Swift implements with physical devices. Today each policy dictates how many partitions the storage system has, how many replicas of each object it should maintain, and the minimum amount of time before a partition can be moved to a different physical location since the last time it was moved.<br />
<br />
* [https://wiki.openstack.org/wiki/Graffiti Graffiti]: Graffiti aims to store and query (hierarchical) metadata about OpenStack objects, e.g. tagging a Glance image with the software installed on that image. Currently, the team is working within other OpenStack projects to add user interfaces for people to create and query metadata and to store that metadata within the project's database. This project is about creating metadata, which could be useful for writing business policies, not about policies over that metadata.<br />
<br />
<br />
Congress is complementary to all of these efforts. It is intended to be a general-purpose policy component and hence could (potentially) express any of the policies described above. However, to enforce those policies Congress would carve off subpolicies and send them to the appropriate enforcement points in the cloud. For example, Congress might carve off the compute load-balancing policy and send it to the Runtime Policy engine, and/or carve off the networking policy and send it to the Neutron policy engine. Part of the goal of Congress is to give administrators a single place to write and inspect the policy being enforced throughout the datacenter or cloud, distribute the relevant portions of that policy to all of the available enforcement points, and monitor the state of the cloud to let administrators know if the cloud is in compliance. (Congress also attempts to correct policy violations when they occur, but optimization policies such as many of those addressed above will not be enforceable by Congress directly.)<br />
<br />
<br />
[Thanks to Jay Lau, Gokul Kandiraju, and others on the mailing list for help compiling this list.]<br />
<br />
=== How To Propose a New Feature ===<br />
To propose a new feature:<br />
# Create a [https://blueprints.launchpad.net/congress/+addspec blueprint] describing what your feature does and how it will work. Include just the name, title, and summary.<br />
# Ask at least one core reviewer to give you feedback. If the feature is complex or controversial, the core will ask you to develop a "spec" and add it to the congress-specs repo. A spec is a proxy for a blueprint that everyone in the community can comment on and help refine. If Launchpad, the software OpenStack uses for managing blueprints and bugs, made it possible for the community to add comments to a blueprint, we would not need specs at all.<br />
# Once your blueprint is approved, you may push code that implements that blueprint up to Gerrit, including the tag 'Implements-blueprint: <blueprint-name>'<br />
<br />
If a core reviewer asks for a spec, here's how you create one.<br />
# Checkout the [https://github.com/openstack/congress-specs congress-specs repo].<br />
# Create a new file, whose name is similar to the blueprint name, in the current release folder, e.g. congress-specs/specs/kilo/your-spec-here.rst<br />
# Use reStructuredText (RST) (an [http://sphinx-doc.org/rest.html RST tutorial]) to describe your new feature by filling out the [https://github.com/openstack/congress-specs/blob/master/specs/template.rst spec template]. <br />
# Push your changes to the congress-specs repo, a process that is explained [https://wiki.openstack.org/wiki/Gerrit_Workflow here]<br />
# Go back to your blueprint and add a link (e.g. https://github.com/openstack/congress-specs/tree/master/specs/kilo/your-spec-here.rst) to your spec in the Specification Location field on the Blueprint details. <br />
# Participate in the discussion and refinement of your feature via [https://review.openstack.org/#/q/status:open+project:openstack/congress-specs,n,z Gerrit reviews]<br />
# Eventually, core reviewers will either reject or approve your spec<br />
# If the spec is approved, a core reviewer will merge it into the repo and approve your blueprint. <br />
<br />
Note: If your blueprint is either not approved or not implemented during a release cycle, it must be resubmitted for the next cycle.</div>Thinrichshttps://wiki.openstack.org/w/index.php?title=Congress&diff=81833Congress2015-05-26T17:10:40Z<p>Thinrichs: /* How To Propose a New Feature */</p>
<hr />
<div>== Policy as a service ("Congress") ==<br />
Here are some external resources with detailed information about Congress.<br />
<br />
* Code<br />
** Server<br />
*** [https://github.com/openstack/congress Server source code (github)] <br />
*** [https://launchpad.net/congress Server bugs (Launchpad)]<br />
*** [https://review.openstack.org/#/q/status:open+project:openstack/congress,n,z Outstanding server changes (Gerrit)]<br />
** Python client<br />
*** [https://github.com/openstack/python-congressclient Python client source code (github)]<br />
*** [https://launchpad.net/python-congressclient Python client bugs (Launchpad)]<br />
*** [https://review.openstack.org/#/q/status:open+project:openstack/python-congressclient,n,z Outstanding python client changes (Gerrit)]<br />
** [https://github.com/openstack/congress-specs Specs: suggestions for additional features for both server and client (github)]<br />
<br />
* Documents<br />
** [https://docs.google.com/a/vmware.com/document/d/1f2xokl9Tc47aV67KEua0PBRz4jsdSDLXth7dYe-jz6Q/edit Overall design doc]<br />
** [https://docs.google.com/document/d/1K9RkQuBSPN7Z2TmKfok7mw3E24otEGo8Pnsemxd5544/edit Data integration design doc]<br />
** [https://goo.gl/1E5MeY API design doc]<br />
** [https://drive.google.com/open?id=1ksDilJYXV-5AXWON8PLMedDKr9NpS8VbT0jIy_MIEtI&authuser=0 Delegation to VM-migration engine design doc]<br />
** [http://ruleyourcloud.com/ Blog]<br />
** [https://docs.google.com/file/d/0ByDz-eYOtswScTlmamlhLXpmTXc/edit Vancouver Summit Congress Overview slides]<br />
** [https://etherpad.openstack.org/p/congress-liberty-design-session Vancouver Summit Liberty etherpad]<br />
** [https://docs.google.com/document/d/1lXmMkUhiSZYK45POd5ungPjVR--Fs_wJHeQ6bXWwP44/pub Vancouver Hands On Lab instructions]<br />
<br />
*Meetings<br />
** [https://wiki.openstack.org/wiki/Meetings/Congress Meeting times]<br />
** [http://eavesdrop.openstack.org/meetings/congressteammeeting/2014/ Meeting history]<br />
<br />
<br />
If you'd like more information, join #congress on freenode, or contact Tim Hinrichs (timothy.l.hinrichs@gmail.com).<br />
<br />
=== Mission ===<br />
Congress is an OpenStack project to provide policy as a service across any collection of cloud services in order to offer governance and compliance for dynamic infrastructures.<br />
<br />
=== Why Congress===<br />
IT services will always be governed and brought into compliance with business-level policies. <br />
<br />
In the past, policy was enforced manually, e.g. by someone sending an email asking for an application to be added to the network, secured by specific firewall entries, connected to an agreed-on storage, and so on. In the cloud era, IT has become more agile: users expect immediate delivery of services, a level of responsiveness that is unattainable by the team responsible for governance. Hence, manual enforcement is no longer feasible. <br />
<br />
Both enterprises and vendors have fielded engines for enforcing policy (semi)-automatically, creating a fragmented market where enterprises reinvent the wheel while maintaining their own code, and vendors fail to meet enterprise needs, either for technical reasons or because their solutions require vertical integration and lock-in. <br />
<br />
The Congress policy service enables IT organizations to extend their OpenStack footprint by onboarding new applications while keeping the strong compliance and governance dictated by their own business policies, all while leveraging a community-driven implementation in which vendors can plug into a common interface.<br />
<br />
=== What is Congress ===<br />
Congress aims to provide an extensible open-source framework for governance and regulatory compliance across any cloud services (e.g. application, network, compute and storage) within a dynamic infrastructure. It is a cloud service whose sole responsibility is policy enforcement.<br />
<br />
Congress aims to include the following functionality:<br />
* Allow cloud administrators and tenants to use a high-level, general purpose, declarative language to describe business logic. The policy language does not include a fixed collection of policy types or built-in enforcement mechanisms; rather, a policy simply defines which states of the cloud are in compliance and which are not, where the state of the cloud is the collection of data provided by the cloud services available to Congress. Some examples:<br />
** Application A is only allowed to communicate with application B.<br />
** A virtual machine owned by tenant A should always have a public network connection if tenant A is part of group B.<br />
** Virtual machine A should never be provisioned in a different geographic region than storage B.<br />
* Offer a pluggable architecture that connects to any collection of cloud services<br />
* Enforce policy<br />
** Proactively: preventing violations before they occur<br />
** Reactively: correcting violations after they occur<br />
** Interactively: give administrators insight into policy and its violations, e.g. identifying violations, explaining their causes, computing potential remediations, simulating a sequence of changes.<br />
<br />
=== Policy Language ===<br />
The policy language for Congress is Datalog, which is essentially SQL with a syntax closer to that of traditional programming languages. This declarative language was chosen because its semantics are well known to a broad range of DevOps practitioners, yet its syntax is terser, making it better suited for expressing real-world policies. The core grammar is given below.<br />
<br />
<policy> ::= <rule>*<br /><br />
<rule> ::= <atom> COLONMINUS <literal> (COMMA <literal>)*<br /><br />
<literal> ::= <atom> <br /><br />
<literal> ::= NOT <atom> <br /><br />
<atom> ::= TABLENAME LPAREN <term> (COMMA <term>)* RPAREN <br /><br />
<term> ::= INTEGER | FLOAT | STRING | VARIABLE <br /><br />
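To make the grammar concrete, here is a sketch of the "a VM and its storage must stay in the same region" style of policy listed above. The tables (server_zone, attached_volume, volume_zone) are hypothetical stand-ins for whatever the Nova and Cinder data source drivers actually expose.<br />
<pre>
same_zone(vm, vol) :-
    server_zone(vm, z),
    volume_zone(vol, z)
error(vm) :-
    attached_volume(vm, vol),
    not same_zone(vm, vol)
</pre>
The error table collects the violating VMs; monitoring then amounts to querying that table.<br />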
<br />
=== Use Cases and Examples===<br />
<br />
Detailed use cases can be viewed in the google doc here: [https://goo.gl/RM3W6W Congress Use Cases]. Please feel free to add additional use cases that you would like Congress to support.<br />
<br />
Different examples of Congress policies that have been implemented can be viewed here: [https://goo.gl/UfeHEk Congress Examples]<br />
<br />
Here we give examples of the policies that each of our releases supports.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Use case !! Congress release<br />
|-<br />
| Monitor violations of: every network connected to a VM must either be public or owned by someone in the same group as the VM's owner || alpha<br />
|-<br />
| Stop a user from constructing a new VM if she owns a VM averaging less than 1% CPU-utilization (requires API gateway or Nova support) || kilo<br />
|-<br />
| Every time a user creates a Neutron security group that opens port 80, delete that security group || kilo<br />
|}<br />
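For a feel of what the first (alpha) use case looks like in Datalog, here is a hedged sketch; the table names are simplified placeholders loosely inspired by the design docs, not the exact data source schemas.<br />
<pre>
error(vm) :-
    connected_to(vm, network),
    not public_network(network),
    not same_group_owner(vm, network)
same_group_owner(vm, network) :-
    vm_owner(vm, user1),
    network_owner(network, user2),
    group_member(user1, grp),
    group_member(user2, grp)
</pre>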
<br />
=== Future Features ===<br />
Here is a list of features that would be useful in future releases. We have not prioritized or assigned any of these, so if any of you developers think they sound fun, let us know and we can explain more.<br />
<br />
* Basic Policy language implementation<br />
** Multi-threaded Datalog implementation<br />
** Bottom-up datalog evaluation<br />
** Query optimization<br />
** Materialized implementation: automated view selection/inlining<br />
<br />
* Enhanced Policy language<br />
** ActiveDirectory facade<br />
** Syntax improvements (modals like insert/delete)<br />
** Support for compound objects (safe, stratified recursion)<br />
** Richer support for describing actions in Enforcement policy<br />
** Modules<br />
<br />
* Policy structure<br />
** Multi-tenant<br />
** Multi-stakeholder (tenants, finance, operations, etc.)<br />
<br />
* Enforcement<br />
** Execution of (rich) actions<br />
** Carve out subpolicies and push to other components, e.g. Neutron<br />
** Add consultation with Congress to other OS components, e.g. Nova/Neutron<br />
** Proper remediation enumeration with Classification+Action policies<br />
** Find ways of automatically choosing the proper remediation strategy (e.g. priorities/monotonicity)<br />
** Give cloud owner way of configuring how proactive/reactive/etc. based on information from separate policies.<br />
<br />
* Libraries<br />
** Data source drivers for common OS and non-OS components<br />
** HIPAA, etc. encoding<br />
** Ontologies for different sectors, e.g. finance<br />
<br />
* Policy Analysis<br />
** Look for infinite loops through Enforcement policy (using Action policy)<br />
** Compare Access control policy and Classification policy for redundancy<br />
** Change impact analysis<br />
<br />
* Dashboard<br />
** IDE for policy (different levels: raw-Datalog, AD, checkbox-based)<br />
** List violations<br />
** Explain violations (step-by-step tracing through policy)<br />
** Simulate state change and action execution<br />
** Enumerate remediations for a given violation<br />
<br />
* Architecture and API<br />
** Formalize and implement full introspection and query APIs<br />
** Distribute across multiple nodes<br />
** Ensure Congress can use another Congress instance as data<br />
<br />
* Authentication and Access Control<br />
** Add authentication and access controls on API/dashboard<br />
** When remediating, which user(s) are executing actions? Does Congress need admin credentials for all its cloud services or are user credentials part of actions? Need proper storage for those credentials. Etc.<br />
<br />
=== Relationship to Other Projects ===<br />
<br />
'''Related OpenStack components'''<br />
* [https://wiki.openstack.org/wiki/Keystone Keystone]: Keystone is an identity service providing authentication and high-level authorization for OpenStack. Congress can leverage Keystone as an input for policies. For example, an auditor might want to ensure that the running system is consistent with current Keystone authorization decisions.<br />
<br />
* [https://wiki.openstack.org/wiki/Heat Heat]: Heat is an orchestration service for application provisioning and lifecycle management. Congress can ensure that applications managed by Heat are consistent with business policy.<br />
<br />
* [https://wiki.openstack.org/wiki/Mistral Mistral]: Mistral is a task management service, or in other words Workflow as a service. Its primary use cases include Cron-for-the-cloud, execution of long-running tasks, and big data analysis. Congress could potentially utilize Mistral to execute actions that bring the cloud back into policy compliance.<br />
<br />
'''Policy Initiatives'''<br />
<br />
* [https://blueprints.launchpad.net/nova/+spec/solver-scheduler SolverScheduler (Nova blueprint)]: The SolverScheduler provides an interface for using different constraint solvers to solve placement problems for Nova. Depending on how it is applied, it could be used for initial provisioning, re-balancing loads, or both.<br />
<br />
* [https://github.com/openstack/gantt Gantt]: A scheduler framework for use by different OpenStack components. It used to be a [https://wiki.openstack.org/wiki/Meetings/Scheduler subgroup of Nova] focused on scheduling VMs based on resource utilization, and it includes a plugin framework for making arbitrary metrics available to the scheduler.<br />
<br />
* [https://docs.google.com/document/d/1ZbOFxAoibZbJmDWx1oOrOsDcov6Cuom5aaBIrupCD9E/edit?pli=1 Neutron policy group]: This group aims to add a policy API to Neutron, where tenants express policy between groups of networks, ports, etc., and that policy is enforced. Policy statements are of the form "for every network flow between groups A and B that satisfies these conditions, apply a constraint on that flow". The constraints that can be enforced on a flow will grow as the enforcement engine matures; currently, the constraints are Allow and Deny, but there are plans for quality-of-service constraints as well. <br />
<br />
* [https://wiki.openstack.org/wiki/OpenAttestation Open Attestation]: This project provides an SDK for verifying host integrity. It provides some policy-based management capabilities, though documentation is limited.<br />
<br />
* [https://blueprints.launchpad.net/nova/+spec/policy-based-scheduler Policy-based Scheduling Module (Nova blueprint)]: This effort aims to schedule Nova resources per client, per cluster of resources and per context (e.g. overload, time, etc.). A proof of concept was presented at the Demo Theater at OpenStack Juno Summit. <br />
<br />
* [https://docs.google.com/document/d/1DMsnGxQ3P-OwZCF3uxaUeEFaKX8LqUqmmgQ_7EVK7Y8/edit Tetris]: This effort provides condition-action policies (under certain conditions, execute these actions). It is intended to be a generic condition-action engine handling complex actions and optimization. This effort subsumes the [https://blueprints.launchpad.net/nova/+spec/resource-optimization-service Runtime Policies blueprint] within Nova. It also appears to subsume the [http://openstack-neat.org/ Neat] effort. Tetris and Congress have recently decided to merge because of their highly aligned goals and approaches.<br />
<br />
* [https://review.openstack.org/#/c/95907/6/specs/convergence.rst Convergence Engine (Heat spec)]: This effort separates the ideas of desired state and observed state for the objects Heat manages. The Convergence Engine will detect when the desired state and observed state differ and take action to eliminate those differences.<br />
<br />
* [http://docs.openstack.org/developer/swift/overview_policies.html Swift Storage Policies]: A Swift storage policy describes a virtual storage system that Swift implements with physical devices. Today each policy dictates how many partitions the storage system has, how many replicas of each object it should maintain, and the minimum amount of time that must elapse after a partition has been moved to a different physical location before it can be moved again.<br />
<br />
* [https://wiki.openstack.org/wiki/Graffiti Graffiti]: Graffiti aims to store and query (hierarchical) metadata about OpenStack objects, e.g. tagging a Glance image with the software installed on that image. Currently, the team is working within other OpenStack projects to add user interfaces for people to create and query metadata and to store that metadata within the project's database. This project is about creating metadata, which could be useful for writing business policies, not about policies over that metadata.<br />
<br />
<br />
Congress is complementary to all of these efforts. It is intended to be a general-purpose policy component and hence could (potentially) express any of the policies described above. However, to enforce those policies Congress would carve off subpolicies and send them to the appropriate enforcement point in the cloud. For example, Congress might carve off the compute Load Balancing policy and send it to the Runtime Policy engine and/or carve off the networking policy and send it to the Neutron policy engine. Part of the goal of Congress is to give administrators a single place to write and inspect the policy being enforced throughout the datacenter or cloud, distribute the relevant portions of that policy to all of the available enforcement points, and monitor the state of the cloud to let administrators know if the cloud is in compliance. (Congress also attempts to correct policy violations when they occur, but optimization policies such as many of those addressed above will not be enforceable by Congress directly.)<br />
<br />
<br />
[Thanks to Jay Lau, Gokul Kandiraju, and others on the mailing list for help compiling this list.]<br />
<br />
=== How To Propose a New Feature ===<br />
To propose a new feature:<br />
# Create a [https://blueprints.launchpad.net/congress/+addspec blueprint] describing what your feature does and how it will work. Include just the name, title, and summary.<br />
# Ask at least one core reviewer to give you feedback. If the feature is complex or controversial, the core will ask you to develop a "spec" and add it to the congress-specs repo. A spec is a proxy for a blueprint that everyone in the community can comment on and help refine. If Launchpad, the software OpenStack uses for managing blueprints and bugs, made it possible for the community to add comments to a blueprint, we would not need specs at all.<br />
# Once your blueprint is approved, you may push code that implements that blueprint up to Gerrit, including the tag 'Implements-blueprint: <blueprint-name>'<br />
<br />
If a core reviewer asks for a spec, here's how you create one.<br />
# Check out the [https://github.com/openstack/congress-specs congress-specs repo].<br />
# Create a new file, whose name is similar to the blueprint name, in the current release folder, e.g. congress-specs/specs/kilo/your-spec-here.rst<br />
# Use reStructuredText (RST) (an [http://sphinx-doc.org/rest.html RST tutorial]) to describe your new feature by filling out the [https://github.com/openstack/congress-specs/blob/master/specs/template.rst spec template]. <br />
# Push your changes to the congress-specs repo, a process that is explained [https://wiki.openstack.org/wiki/Gerrit_Workflow here]<br />
# Go back to your blueprint and add a link (e.g. https://github.com/openstack/congress-specs/tree/master/specs/kilo/your-spec-here.rst) to your spec in the Specification Location field on the Blueprint details. <br />
# Participate in the discussion and refinement of your feature via [https://review.openstack.org/#/q/status:open+project:openstack/congress-specs,n,z Gerrit reviews]<br />
# Eventually, core reviewers will either reject or approve your spec<br />
# If the spec is approved, a core reviewer will merge it into the repo and approve your blueprint. <br />
<br />
Note: If your blueprint is either not approved or not implemented during a release cycle, it must be resubmitted for the next cycle.</div>Thinrichshttps://wiki.openstack.org/w/index.php?title=Congress&diff=81832Congress2015-05-26T17:05:20Z<p>Thinrichs: /* Use Cases and Examples */</p>
<hr />
<div>== Policy as a service ("Congress") ==<br />
Here are some external resources with detailed information about Congress.<br />
<br />
* Code<br />
** Server<br />
*** [https://github.com/openstack/congress Server source code (github)] <br />
*** [https://launchpad.net/congress Server bugs (Launchpad)]<br />
*** [https://review.openstack.org/#/q/status:open+project:openstack/congress,n,z Outstanding server changes (Gerrit)]<br />
** Python client<br />
*** [https://github.com/openstack/python-congressclient Python client source code (github)]<br />
*** [https://launchpad.net/python-congressclient Python client bugs (Launchpad)]<br />
*** [https://review.openstack.org/#/q/status:open+project:openstack/python-congressclient,n,z Outstanding python client changes (Gerrit)]<br />
** [https://github.com/openstack/congress-specs Specs: suggestions for additional features for both server and client (github)]<br />
<br />
* Documents<br />
** [https://docs.google.com/a/vmware.com/document/d/1f2xokl9Tc47aV67KEua0PBRz4jsdSDLXth7dYe-jz6Q/edit Overall design doc]<br />
** [https://docs.google.com/document/d/1K9RkQuBSPN7Z2TmKfok7mw3E24otEGo8Pnsemxd5544/edit Data integration design doc]<br />
** [https://goo.gl/1E5MeY API design doc]<br />
** [https://drive.google.com/open?id=1ksDilJYXV-5AXWON8PLMedDKr9NpS8VbT0jIy_MIEtI&authuser=0 Delegation to VM-migration engine design doc]<br />
** [http://ruleyourcloud.com/ Blog]<br />
** [https://docs.google.com/file/d/0ByDz-eYOtswScTlmamlhLXpmTXc/edit Vancouver Summit Congress Overview slides]<br />
** [https://etherpad.openstack.org/p/congress-liberty-design-session Vancouver Summit Liberty etherpad]<br />
** [https://docs.google.com/document/d/1lXmMkUhiSZYK45POd5ungPjVR--Fs_wJHeQ6bXWwP44/pub Vancouver Hands On Lab instructions]<br />
<br />
*Meetings<br />
** [https://wiki.openstack.org/wiki/Meetings/Congress Meeting times]<br />
** [http://eavesdrop.openstack.org/meetings/congressteammeeting/2014/ Meeting history]<br />
<br />
<br />
If you'd like more information, join #congress on freenode, or contact Tim Hinrichs (timothy.l.hinrichs@gmail.com).<br />
<br />
=== Mission ===<br />
Congress is an OpenStack project to provide policy as a service across any collection of cloud services in order to offer governance and compliance for dynamic infrastructures.<br />
<br />
=== Why Congress===<br />
IT services will always be governed and brought into compliance with business-level policies. <br />
<br />
In the past, policy was enforced manually, e.g. by someone sending an email asking for an application to be added to the network, secured by specific firewall entries, connected to an agreed-on storage, and so on. In the cloud era, IT has become more agile: users expect immediate delivery of services, a level of responsiveness that is unattainable by the team responsible for governance. Hence, manual enforcement is no longer feasible. <br />
<br />
Both enterprises and vendors have fielded engines for enforcing policy (semi)-automatically, creating a fragmented market where enterprises reinvent the wheel while maintaining their own code, and vendors fail to meet enterprise needs, either for technical reasons or because their solutions require vertical integration and lock-in. <br />
<br />
The Congress policy service enables IT services to extend their OpenStack footprint by onboarding new applications while keeping the strong compliance and governance dictated by their own business policies. All of this leverages a community-driven implementation in which vendors plug into a common interface.<br />
<br />
=== What is Congress ===<br />
Congress aims to provide an extensible open-source framework for governance and regulatory compliance across any cloud services (e.g. application, network, compute and storage) within a dynamic infrastructure. It is a cloud service whose sole responsibility is policy enforcement.<br />
<br />
Congress aims to include the following functionality:<br />
* Allow cloud administrators and tenants to use a high-level, general purpose, declarative language to describe business logic. The policy language does not include a fixed collection of policy types or built-in enforcement mechanisms; rather, a policy simply defines which states of the cloud are in compliance and which are not, where the state of the cloud is the collection of data provided by the cloud services available to Congress. Some examples:<br />
** Application A is only allowed to communicate with application B.<br />
** Virtual machine owned by tenant A should always have a public network connection if tenant A is part of the group B.<br />
** Virtual machine A should never be provisioned in a different geographic region than storage B.<br />
* Offer a pluggable architecture that connects to any collection of cloud services<br />
* Enforce policy<br />
** Proactively: preventing violations before they occur<br />
** Reactively: correcting violations after they occur<br />
** Interactively: give administrators insight into policy and its violations, e.g. identifying violations, explaining their causes, computing potential remediations, simulating a sequence of changes.<br />
<br />
=== Policy Language ===<br />
The policy language for Congress is Datalog, which is essentially SQL with a syntax closer to traditional programming languages. This declarative language was chosen because its semantics are well known to a broad range of DevOps engineers, yet its syntax is terser, making it better suited for expressing real-world policies. The core grammar is given below.<br />
<br />
<policy> ::= <rule>*<br /><br />
<rule> ::= <atom> COLONMINUS <literal> (COMMA <literal>)*<br /><br />
<literal> ::= <atom> <br /><br />
<literal> ::= NOT <atom> <br /><br />
<atom> ::= TABLENAME LPAREN <term> (COMMA <term>)* RPAREN <br /><br />
<term> ::= INTEGER | FLOAT | STRING | VARIABLE <br /><br />
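As a concrete illustration of the grammar, the following single rule (a sketch only; the table names nova:virtual_machine, nova:network, and neutron:public_network are illustrative assumptions, not necessarily the exact tables exposed by Congress's data-source drivers) marks every VM attached to a non-public network as a violation:<br />
<br />
 error(vm) :- nova:virtual_machine(vm), nova:network(vm, net), not neutron:public_network(net)<br />
<br />
Read declaratively, any row that ends up in the error table represents a cloud state that is out of compliance; Congress can then monitor, explain, or remediate those rows.<br />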
<br />
=== Use Cases and Examples===<br />
<br />
Detailed use cases can be viewed in the Google doc here: [https://goo.gl/RM3W6W Congress Use Cases]. Please feel free to add additional use cases that you would like Congress to support.<br />
<br />
Different examples of Congress policies that have been implemented can be viewed here: [https://goo.gl/UfeHEk Congress Examples]<br />
<br />
Here we give examples of the policies that each of our releases supports.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Use case !! Congress release<br />
|-<br />
| Monitor violations of: every network connected to a VM must either be public or owned by someone in the same group as the VM's owner || alpha<br />
|-<br />
| Stop a user from constructing a new VM if she owns a VM averaging less than 1% CPU-utilization (requires API gateway or Nova support) || kilo<br />
|-<br />
| Every time a user creates a Neutron security group that opens port 80, delete that security group || kilo<br />
|}<br />
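<br />
For example, the first use case in the table above (the alpha-release monitoring policy) could be sketched in Datalog roughly as follows; the table names nova:virtual_machine, nova:network, nova:owner, neutron:public_network, neutron:owner, and ad:group are illustrative assumptions rather than the exact data-source schemas:<br />
<br />
 error(vm) :- nova:virtual_machine(vm), nova:network(vm, network), not neutron:public_network(network), neutron:owner(network, netowner), nova:owner(vm, vmowner), not same_group(netowner, vmowner)<br />
 same_group(user1, user2) :- ad:group(user1, grp), ad:group(user2, grp)<br />
<br />
Here the error table collects the VMs that violate the policy, and same_group is a helper table derived from group-membership data (assumed here to come from an Active Directory data source).<br />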
<br />
=== Future Features ===<br />
Here is a list of features that would be useful in future releases. We have not prioritized or assigned any of these, so if any of you developers think they sound fun, let us know and we can explain more.<br />
<br />
* Basic Policy language implementation<br />
** Multi-threaded Datalog implementation<br />
** Bottom-up datalog evaluation<br />
** Query optimization<br />
** Materialized implementation: automated view selection/inlining<br />
<br />
* Enhanced Policy language<br />
** ActiveDirectory facade<br />
** Syntax improvements (modals like insert/delete)<br />
** Support for compound objects (safe, stratified recursion)<br />
** Richer support for describing actions in Enforcement policy<br />
** Modules<br />
<br />
* Policy structure<br />
** Multi-tenant<br />
** Multi-stakeholder (tenants, finance, operations, etc.)<br />
<br />
* Enforcement<br />
** Execution of (rich) actions<br />
** Carve out subpolicies and push to other components, e.g. Neutron<br />
** Add consultation with Congress to other OS components, e.g. Nova/Neutron<br />
** Proper remediation enumeration with Classification+Action policies<br />
** Find ways of automatically choosing the proper remediation strategy (e.g. priorities/monotonicity)<br />
** Give the cloud owner a way of configuring how proactive/reactive/etc. enforcement should be, based on information from separate policies.<br />
<br />
* Libraries<br />
** Data source drivers for common OS and non-OS components<br />
** HIPAA, etc. encoding<br />
** Ontologies for different sectors, e.g. finance<br />
<br />
* Policy Analysis<br />
** Look for infinite loops through Enforcement policy (using Action policy)<br />
** Compare Access control policy and Classification policy for redundancy<br />
** Change impact analysis<br />
<br />
* Dashboard<br />
** IDE for policy (different levels: raw-Datalog, AD, checkbox-based)<br />
** List violations<br />
** Explain violations (step-by-step tracing through the policy)<br />
** Simulate state change and action execution<br />
** Enumerate remediations for a given violation<br />
<br />
* Architecture and API<br />
** Formalize and implement full introspection and query APIs<br />
** Distribute across multiple nodes<br />
** Ensure Congress can use another Congress instance as data<br />
<br />
* Authentication and Access Control<br />
** Add authentication and access controls on API/dashboard<br />
** When remediating, which user(s) are executing actions? Does Congress need admin credentials for all its cloud services or are user credentials part of actions? Need proper storage for those credentials. Etc.<br />
<br />
=== Relationship to Other Projects ===<br />
<br />
'''Related OpenStack components'''<br />
* [https://wiki.openstack.org/wiki/Keystone Keystone]: Keystone is an identity service providing authentication and high-level authorization for OpenStack. Congress can leverage Keystone as an input for policies. For example, an auditor might want to ensure that the running system is consistent with current Keystone authorization decisions.<br />
<br />
* [https://wiki.openstack.org/wiki/Heat Heat]: Heat is an orchestration service for application provisioning and lifecycle management. Congress can ensure that applications managed by Heat are consistent with business policy.<br />
<br />
* [https://wiki.openstack.org/wiki/Mistral Mistral]: Mistral is a task management service, or in other words Workflow as a service. Its primary use cases include Cron-for-the-cloud, execution of long-running tasks, and big data analysis. Congress could potentially utilize Mistral to execute actions that bring the cloud back into policy compliance.<br />
<br />
'''Policy Initiatives'''<br />
<br />
* [https://blueprints.launchpad.net/nova/+spec/solver-scheduler SolverScheduler (Nova blueprint)]: The SolverScheduler provides an interface for using different constraint solvers to solve placement problems for Nova. Depending on how it is applied, it could be used for initial provisioning, re-balancing loads, or both.<br />
<br />
* [https://github.com/openstack/gantt Gantt]: A scheduler framework for use by different OpenStack components. Used to be a [https://wiki.openstack.org/wiki/Meetings/Scheduler subgroup of Nova] and focused on scheduling VMs based on resource utilization. Includes plugin framework for making arbitrary metrics available to the scheduler.<br />
<br />
* [https://docs.google.com/document/d/1ZbOFxAoibZbJmDWx1oOrOsDcov6Cuom5aaBIrupCD9E/edit?pli=1 Neutron policy group]: This group aims to add a policy API to Neutron, where tenants express policy between groups of networks, ports, etc., and that policy is enforced. Policy statements are of the form "for every network flow between groups A and B that satisfies these conditions, apply a constraint on that flow". The constraints that can be enforced on a flow will grow as the enforcement engine matures; currently, the constraints are Allow and Deny, but there are plans for quality-of-service constraints as well. <br />
<br />
* [https://wiki.openstack.org/wiki/OpenAttestation Open Attestation]: This project provides an SDK for verifying host integrity. It provides some policy-based management capabilities, though documentation is limited.<br />
<br />
* [https://blueprints.launchpad.net/nova/+spec/policy-based-scheduler Policy-based Scheduling Module (Nova blueprint)]: This effort aims to schedule Nova resources per client, per cluster of resources and per context (e.g. overload, time, etc.). A proof of concept was presented at the Demo Theater at OpenStack Juno Summit. <br />
<br />
* [https://docs.google.com/document/d/1DMsnGxQ3P-OwZCF3uxaUeEFaKX8LqUqmmgQ_7EVK7Y8/edit Tetris]: This effort provides condition-action policies (under certain conditions, execute these actions). It is intended to be a generic condition-action engine handling complex actions and optimization. This effort subsumes the [https://blueprints.launchpad.net/nova/+spec/resource-optimization-service Runtime Policies blueprint] within Nova. It also appears to subsume the [http://openstack-neat.org/ Neat] effort. Tetris and Congress have recently decided to merge because of their highly aligned goals and approaches.<br />
<br />
* [https://review.openstack.org/#/c/95907/6/specs/convergence.rst Convergence Engine (Heat spec)]: This effort separates the ideas of desired state and observed state for the objects Heat manages. The Convergence Engine will detect when the desired state and observed state differ and take action to eliminate those differences.<br />
<br />
* [http://docs.openstack.org/developer/swift/overview_policies.html Swift Storage Policies]: A Swift storage policy describes a virtual storage system that Swift implements with physical devices. Today each policy dictates how many partitions the storage system has, how many replicas of each object it should maintain, and the minimum amount of time that must elapse after a partition has been moved to a different physical location before it can be moved again.<br />
<br />
* [https://wiki.openstack.org/wiki/Graffiti Graffiti]: Graffiti aims to store and query (hierarchical) metadata about OpenStack objects, e.g. tagging a Glance image with the software installed on that image. Currently, the team is working within other OpenStack projects to add user interfaces for people to create and query metadata and to store that metadata within the project's database. This project is about creating metadata, which could be useful for writing business policies, not about policies over that metadata.<br />
<br />
<br />
Congress is complementary to all of these efforts. It is intended to be a general-purpose policy component and hence could (potentially) express any of the policies described above. However, to enforce those policies Congress would carve off subpolicies and send them to the appropriate enforcement point in the cloud. For example, Congress might carve off the compute Load Balancing policy and send it to the Runtime Policy engine and/or carve off the networking policy and send it to the Neutron policy engine. Part of the goal of Congress is to give administrators a single place to write and inspect the policy being enforced throughout the datacenter or cloud, distribute the relevant portions of that policy to all of the available enforcement points, and monitor the state of the cloud to let administrators know if the cloud is in compliance. (Congress also attempts to correct policy violations when they occur, but optimization policies such as many of those addressed above will not be enforceable by Congress directly.)<br />
<br />
<br />
[Thanks to Jay Lau, Gokul Kandiraju, and others on the mailing list for help compiling this list.]<br />
<br />
== How To Propose a New Feature ==<br />
To propose a new feature:<br />
# Create a [https://blueprints.launchpad.net/congress/+addspec blueprint] describing what your feature does and how it will work. Include just the name, title, and summary.<br />
# Ask at least one core reviewer to give you feedback. If the feature is complex or controversial, the core will ask you to develop a "spec" and add it to the congress-specs repo. A spec is a proxy for a blueprint that everyone in the community can comment on and help refine. If Launchpad, the software OpenStack uses for managing blueprints and bugs, made it possible for the community to add comments to a blueprint, we would not need specs at all.<br />
# Once your blueprint is approved, you may push code that implements that blueprint up to Gerrit, including the tag 'Implements-blueprint: <blueprint-name>'<br />
<br />
If a core reviewer asks for a spec, here's how you create one.<br />
# Check out the [https://github.com/openstack/congress-specs congress-specs repo].<br />
# Create a new file, whose name is similar to the blueprint name, in the current release folder, e.g. congress-specs/specs/kilo/your-spec-here.rst<br />
# Use reStructuredText (RST) (an [http://sphinx-doc.org/rest.html RST tutorial]) to describe your new feature by filling out the [https://github.com/openstack/congress-specs/blob/master/specs/template.rst spec template]. <br />
# Push your changes to the congress-specs repo, a process that is explained [https://wiki.openstack.org/wiki/Gerrit_Workflow here]<br />
# Go back to your blueprint and add a link (e.g. https://github.com/openstack/congress-specs/tree/master/specs/kilo/your-spec-here.rst) to your spec in the Specification Location field on the Blueprint details. <br />
# Participate in the discussion and refinement of your feature via [https://review.openstack.org/#/q/status:open+project:openstack/congress-specs,n,z Gerrit reviews]<br />
# Eventually, core reviewers will either reject or approve your spec<br />
# If the spec is approved, a core reviewer will merge it into the repo and approve your blueprint. <br />
<br />
Note: If your blueprint is either not approved or not implemented during a release cycle, it must be resubmitted for the next cycle.</div>Thinrichshttps://wiki.openstack.org/w/index.php?title=Congress&diff=81831Congress2015-05-26T17:04:56Z<p>Thinrichs: /* Use Cases and Examples */</p>
<hr />
<div>== Policy as a service ("Congress") ==<br />
Here are some external resources with detailed information about Congress.<br />
<br />
* Code<br />
** Server<br />
*** [https://github.com/openstack/congress Server source code (github)] <br />
*** [https://launchpad.net/congress Server bugs (Launchpad)]<br />
*** [https://review.openstack.org/#/q/status:open+project:openstack/congress,n,z Outstanding server changes (Gerrit)]<br />
** Python client<br />
*** [https://github.com/openstack/python-congressclient Python client source code (github)]<br />
*** [https://launchpad.net/python-congressclient Python client bugs (Launchpad)]<br />
*** [https://review.openstack.org/#/q/status:open+project:openstack/python-congressclient,n,z Outstanding python client changes (Gerrit)]<br />
** [https://github.com/openstack/congress-specs Specs: suggestions for additional features for both server and client (github)]<br />
<br />
* Documents<br />
** [https://docs.google.com/a/vmware.com/document/d/1f2xokl9Tc47aV67KEua0PBRz4jsdSDLXth7dYe-jz6Q/edit Overall design doc]<br />
** [https://docs.google.com/document/d/1K9RkQuBSPN7Z2TmKfok7mw3E24otEGo8Pnsemxd5544/edit Data integration design doc]<br />
** [https://goo.gl/1E5MeY API design doc]<br />
** [https://drive.google.com/open?id=1ksDilJYXV-5AXWON8PLMedDKr9NpS8VbT0jIy_MIEtI&authuser=0 Delegation to VM-migration engine design doc]<br />
** [http://ruleyourcloud.com/ Blog]<br />
** [https://docs.google.com/file/d/0ByDz-eYOtswScTlmamlhLXpmTXc/edit Vancouver Summit Congress Overview slides]<br />
** [https://etherpad.openstack.org/p/congress-liberty-design-session Vancouver Summit Liberty etherpad]<br />
** [https://docs.google.com/document/d/1lXmMkUhiSZYK45POd5ungPjVR--Fs_wJHeQ6bXWwP44/pub Vancouver Hands On Lab instructions]<br />
<br />
*Meetings<br />
** [https://wiki.openstack.org/wiki/Meetings/Congress Meeting times]<br />
** [http://eavesdrop.openstack.org/meetings/congressteammeeting/2014/ Meeting history]<br />
<br />
<br />
If you'd like more information, join #congress on freenode, or contact Tim Hinrichs (timothy.l.hinrichs@gmail.com).<br />
<br />
=== Mission ===<br />
Congress is an OpenStack project to provide policy as a service across any collection of cloud services in order to offer governance and compliance for dynamic infrastructures.<br />
<br />
=== Why Congress===<br />
IT services will always be governed and brought into compliance with business-level policies. <br />
<br />
In the past, policy was enforced manually, e.g. by someone sending an email asking for an application to be added to the network, secured by specific firewall entries, connected to an agreed-on storage, and so on. In the cloud era, IT has become more agile: users expect immediate delivery of services, a level of responsiveness that is unattainable by the team responsible for governance. Hence, manual enforcement is no longer feasible. <br />
<br />
Both enterprises and vendors have fielded engines for enforcing policy (semi)-automatically, creating a fragmented market where enterprises reinvent the wheel while maintaining their own code, and vendors fail to meet enterprise needs, either for technical reasons or because their solutions require vertical integration and lock-in. <br />
<br />
The Congress policy service enables IT services to extend their OpenStack footprint by onboarding new applications while keeping the strong compliance and governance dictated by their own business policies. All of this leverages a community-driven implementation in which vendors plug into a common interface.<br />
<br />
=== What is Congress ===<br />
Congress aims to provide an extensible open-source framework for governance and regulatory compliance across any cloud services (e.g. application, network, compute and storage) within a dynamic infrastructure. It is a cloud service whose sole responsibility is policy enforcement.<br />
<br />
Congress aims to include the following functionality:<br />
* Allow cloud administrators and tenants to use a high-level, general purpose, declarative language to describe business logic. The policy language does not include a fixed collection of policy types or built-in enforcement mechanisms; rather, a policy simply defines which states of the cloud are in compliance and which are not, where the state of the cloud is the collection of data provided by the cloud services available to Congress. Some examples:<br />
** Application A is only allowed to communicate with application B.<br />
** Virtual machine owned by tenant A should always have a public network connection if tenant A is part of the group B.<br />
** Virtual machine A should never be provisioned in a different geographic region than storage B.<br />
* Offer a pluggable architecture that connects to any collection of cloud services<br />
* Enforce policy<br />
** Proactively: preventing violations before they occur<br />
** Reactively: correcting violations after they occur<br />
** Interactively: give administrators insight into policy and its violations, e.g. identifying violations, explaining their causes, computing potential remediations, simulating a sequence of changes.<br />
<br />
=== Policy Language ===<br />
The policy language for Congress is Datalog, which is essentially SQL with a syntax closer to traditional programming languages. This declarative language was chosen because its semantics are well known to a broad range of DevOps engineers, yet its syntax is terser, making it better suited for expressing real-world policies. The core grammar is given below.<br />
<br />
<policy> ::= <rule>*<br /><br />
<rule> ::= <atom> COLONMINUS <literal> (COMMA <literal>)*<br /><br />
<literal> ::= <atom> <br /><br />
<literal> ::= NOT <atom> <br /><br />
<atom> ::= TABLENAME LPAREN <term> (COMMA <term>)* RPAREN <br /><br />
<term> ::= INTEGER | FLOAT | STRING | VARIABLE <br /><br />
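As a concrete illustration of the grammar, the following single rule (a sketch only; the table names nova:virtual_machine, nova:network, and neutron:public_network are illustrative assumptions, not necessarily the exact tables exposed by Congress's data-source drivers) marks every VM attached to a non-public network as a violation:<br />
<br />
 error(vm) :- nova:virtual_machine(vm), nova:network(vm, net), not neutron:public_network(net)<br />
<br />
Read declaratively, any row that ends up in the error table represents a cloud state that is out of compliance; Congress can then monitor, explain, or remediate those rows.<br />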
<br />
=== Use Cases and Examples===<br />
<br />
Detailed use cases can be viewed in the Google doc here: [https://goo.gl/RM3W6W Congress Use Cases]. Please feel free to add additional use cases that you would like Congress to support.<br />
<br />
Different examples of Congress policies that have been implemented can be viewed here: [https://goo.gl/UfeHEk Congress Examples]<br />
<br />
Here we give examples of the policies that each of our releases supports.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Use case !! Congress (target) release<br />
|-<br />
| Monitor violations of: every network connected to a VM must either be public or owned by someone in the same group as the VM's owner || alpha<br />
|-<br />
| Stop a user from constructing a new VM if she owns a VM averaging less than 1% CPU-utilization (requires API gateway or Nova support) || kilo<br />
|-<br />
| Every time a user creates a Neutron security group that opens port 80, delete that security group || kilo<br />
|}<br />
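<br />
For example, the first use case in the table above (the alpha-release monitoring policy) could be sketched in Datalog roughly as follows; the table names nova:virtual_machine, nova:network, nova:owner, neutron:public_network, neutron:owner, and ad:group are illustrative assumptions rather than the exact data-source schemas:<br />
<br />
 error(vm) :- nova:virtual_machine(vm), nova:network(vm, network), not neutron:public_network(network), neutron:owner(network, netowner), nova:owner(vm, vmowner), not same_group(netowner, vmowner)<br />
 same_group(user1, user2) :- ad:group(user1, grp), ad:group(user2, grp)<br />
<br />
Here the error table collects the VMs that violate the policy, and same_group is a helper table derived from group-membership data (assumed here to come from an Active Directory data source).<br />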
<br />
=== Future Features ===<br />
Here is a list of features that would be useful in future releases. We have not prioritized or assigned any of these, so if any of you developers think they sound fun, let us know and we can explain more.<br />
<br />
* Basic Policy language implementation<br />
** Multi-threaded Datalog implementation<br />
** Bottom-up datalog evaluation<br />
** Query optimization<br />
** Materialized implementation: automated view selection/inlining<br />
<br />
* Enhanced Policy language<br />
** ActiveDirectory facade<br />
** Syntax improvements (modals like insert/delete)<br />
** Support for compound objects (safe, stratified recursion)<br />
** Richer support for describing actions in Enforcement policy<br />
** Modules<br />
<br />
* Policy structure<br />
** Multi-tenant<br />
** Multi-stakeholder (tenants, finance, operations, etc.)<br />
<br />
* Enforcement<br />
** Execution of (rich) actions<br />
** Carve out subpolicies and push to other components, e.g. Neutron<br />
** Add consultation with Congress to other OS components, e.g. Nova/Neutron<br />
** Proper remediation enumeration with Classification+Action policies<br />
** Find ways of automatically choosing the proper remediation strategy (e.g. priorities/monotonicity)<br />
** Give the cloud owner a way of configuring how proactive/reactive/etc. enforcement should be, based on information from separate policies.<br />
<br />
* Libraries<br />
** Data source drivers for common OS and non-OS components<br />
** HIPAA, etc. encoding<br />
** Ontologies for different sectors, e.g. finance<br />
<br />
* Policy Analysis<br />
** Look for infinite loops through Enforcement policy (using Action policy)<br />
** Compare Access control policy and Classification policy for redundancy<br />
** Change impact analysis<br />
<br />
* Dashboard<br />
** IDE for policy (different levels: raw-Datalog, AD, checkbox-based)<br />
** List violations<br />
** Explain violations (step-by-step tracing through the policy)<br />
** Simulate state change and action execution<br />
** Enumerate remediations for a given violation<br />
<br />
* Architecture and API<br />
** Formalize and implement full introspection and query APIs<br />
** Distribute across multiple nodes<br />
** Ensure Congress can use another Congress instance as data<br />
<br />
* Authentication and Access Control<br />
** Add authentication and access controls on API/dashboard<br />
** When remediating, which user(s) are executing actions? Does Congress need admin credentials for all its cloud services or are user credentials part of actions? Need proper storage for those credentials. Etc.<br />
<br />
=== Relationship to Other Projects ===<br />
<br />
'''Related OpenStack components'''<br />
* [https://wiki.openstack.org/wiki/Keystone Keystone]: Keystone is an identity service providing authentication and high-level authorization for OpenStack. Congress can leverage Keystone as an input for policies. For example, an auditor might want to ensure that the running system is consistent with current Keystone authorization decisions.<br />
<br />
* [https://wiki.openstack.org/wiki/Heat Heat]: Heat is an orchestration service for application provisioning and lifecycle management. Congress can ensure that applications managed by Heat are consistent with business policy.<br />
<br />
* [https://wiki.openstack.org/wiki/Mistral Mistral]: Mistral is a task management service, or in other words Workflow as a service. Its primary use cases include Cron-for-the-cloud, execution of long-running tasks, and big data analysis. Congress could potentially utilize Mistral to execute actions that bring the cloud back into policy compliance.<br />
<br />
'''Policy Initiatives'''<br />
<br />
* [https://blueprints.launchpad.net/nova/+spec/solver-scheduler SolverScheduler (Nova blueprint)]: The SolverScheduler provides an interface for using different constraint solvers to solve placement problems for Nova. Depending on how it is applied, it could be used for initial provisioning, re-balancing loads, or both.<br />
<br />
* [https://github.com/openstack/gantt Gantt]: A scheduler framework for use by different OpenStack components. Used to be a [https://wiki.openstack.org/wiki/Meetings/Scheduler subgroup of Nova] and focused on scheduling VMs based on resource utilization. Includes plugin framework for making arbitrary metrics available to the scheduler.<br />
<br />
* [https://docs.google.com/document/d/1ZbOFxAoibZbJmDWx1oOrOsDcov6Cuom5aaBIrupCD9E/edit?pli=1 Neutron policy group]: This group aims to add a policy API to Neutron, where tenants express policy between groups of networks, ports, etc., and that policy is enforced. Policy statements are of the form "for every network flow between groups A and B that satisfies these conditions, apply a constraint on that flow". The constraints that can be enforced on a flow will grow as the enforcement engine matures; currently, the constraints are Allow and Deny, but there are plans for quality-of-service constraints as well. <br />
<br />
* [https://wiki.openstack.org/wiki/OpenAttestation Open Attestation]: This project provides an SDK for verifying host integrity. It provides some policy-based management capabilities, though documentation is limited.<br />
<br />
* [https://blueprints.launchpad.net/nova/+spec/policy-based-scheduler Policy-based Scheduling Module (Nova blueprint)]: This effort aims to schedule Nova resources per client, per cluster of resources and per context (e.g. overload, time, etc.). A proof of concept was presented at the Demo Theater at OpenStack Juno Summit. <br />
<br />
* [https://docs.google.com/document/d/1DMsnGxQ3P-OwZCF3uxaUeEFaKX8LqUqmmgQ_7EVK7Y8/edit Tetris]: This effort provides condition-action policies (under certain conditions, execute these actions). It is intended to be a generic condition-action engine handling complex actions and optimization. This effort subsumes the [https://blueprints.launchpad.net/nova/+spec/resource-optimization-service Runtime Policies blueprint] within Nova. It also appears to subsume the [http://openstack-neat.org/ Neat] effort. Tetris and Congress have recently decided to merge because of their highly aligned goals and approaches.<br />
<br />
* [https://review.openstack.org/#/c/95907/6/specs/convergence.rst Convergence Engine (Heat spec)]: This effort separates the ideas of desired state and observed state for the objects Heat manages. The Convergence Engine will detect when the desired state and observed state differ and take action to eliminate those differences.<br />
<br />
* [http://docs.openstack.org/developer/swift/overview_policies.html Swift Storage Policies]: A Swift storage policy describes a virtual storage system that Swift implements with physical devices. Today each policy dictates how many partitions the storage system has, how many replicas of each object it should maintain, and the minimum amount of time that must elapse after a partition has been moved to a different physical location before it can be moved again.<br />
<br />
* [https://wiki.openstack.org/wiki/Graffiti Graffiti]: Graffiti aims to store and query (hierarchical) metadata about OpenStack objects, e.g. tagging a Glance image with the software installed on that image. Currently, the team is working within other OpenStack projects to add user interfaces for people to create and query metadata and to store that metadata within the project's database. This project is about creating metadata, which could be useful for writing business policies, not about policies over that metadata.<br />
<br />
<br />
Congress is complementary to all of these efforts. It is intended to be a general-purpose policy component and hence could (potentially) express any of the policies described above. However, to enforce those policies Congress would carve off subpolicies and send them to the appropriate enforcement point in the cloud. For example, Congress might carve off the compute Load Balancing policy and send it to the Runtime Policy engine and/or carve off the networking policy and send it to the Neutron policy engine. Part of the goal of Congress is to give administrators a single place to write and inspect the policy being enforced throughout the datacenter or cloud, distribute the relevant portions of that policy to all of the available enforcement points, and monitor the state of the cloud to let administrators know if the cloud is in compliance. (Congress also attempts to correct policy violations when they occur, but optimization policies such as many of those addressed above will not be enforceable by Congress directly.)<br />
<br />
<br />
[Thanks to Jay Lau, Gokul Kandiraju, and others on the mailing list for help compiling this list.]<br />
<br />
== How To Propose a New Feature ==<br />
To propose a new feature:<br />
# Create a [https://blueprints.launchpad.net/congress/+addspec blueprint] describing what your feature does and how it will work. Include just the name, title, and summary.<br />
# Ask at least one core reviewer to give you feedback. If the feature is complex or controversial, the core will ask you to develop a "spec" and add it to the congress-specs repo. A spec is a proxy for a blueprint that everyone in the community can comment on and help refine. If Launchpad, the software OpenStack uses for managing blueprints and bugs, made it possible for the community to add comments to a blueprint, we would not need specs at all.<br />
# Once your blueprint is approved, you may push code that implements that blueprint up to Gerrit, including the tag 'Implements-blueprint: <blueprint-name>'<br />
<br />
If a core reviewer asks for a spec, here's how you create one.<br />
# Check out the [https://github.com/openstack/congress-specs congress-specs repo].<br />
# Create a new file, whose name is similar to the blueprint name, in the current release folder, e.g. congress-specs/specs/kilo/your-spec-here.rst<br />
# Use reStructuredText (RST) (an [http://sphinx-doc.org/rest.html RST tutorial]) to describe your new feature by filling out the [https://github.com/openstack/congress-specs/blob/master/specs/template.rst spec template]. <br />
# Push your changes to the congress-specs repo, a process that is explained [https://wiki.openstack.org/wiki/Gerrit_Workflow here]<br />
# Go back to your blueprint and add a link (e.g. https://github.com/openstack/congress-specs/tree/master/specs/kilo/your-spec-here.rst) to your spec in the Specification Location field on the Blueprint details. <br />
# Participate in the discussion and refinement of your feature via [https://review.openstack.org/#/q/status:open+project:openstack/congress-specs,n,z Gerrit reviews]<br />
# Eventually, core reviewers will either reject or approve your spec<br />
# If the spec is approved, a core reviewer will merge it into the repo and approve your blueprint. <br />
<br />
Note: If your blueprint is either not approved or not implemented during a release cycle, it must be resubmitted for the next cycle.</div>Thinrichshttps://wiki.openstack.org/w/index.php?title=Congress&diff=81830Congress2015-05-26T16:49:23Z<p>Thinrichs: /* Use Cases and Examples */</p>
<hr />
<div>== Policy as a service ("Congress") ==<br />
Here are some external resources with detailed information about Congress.<br />
<br />
* Code<br />
** Server<br />
*** [https://github.com/openstack/congress Server source code (github)] <br />
*** [https://launchpad.net/congress Server bugs (Launchpad)]<br />
*** [https://review.openstack.org/#/q/status:open+project:openstack/congress,n,z Outstanding server changes (Gerrit)]<br />
** Python client<br />
*** [https://github.com/openstack/python-congressclient Python client source code (github)]<br />
*** [https://launchpad.net/python-congressclient Python client bugs (Launchpad)]<br />
*** [https://review.openstack.org/#/q/status:open+project:openstack/python-congressclient,n,z Outstanding python client changes (Gerrit)]<br />
** [https://github.com/openstack/congress-specs Specs: suggestions for additional features for both server and client (github)]<br />
<br />
* Documents<br />
** [https://docs.google.com/a/vmware.com/document/d/1f2xokl9Tc47aV67KEua0PBRz4jsdSDLXth7dYe-jz6Q/edit Overall design doc]<br />
** [https://docs.google.com/document/d/1K9RkQuBSPN7Z2TmKfok7mw3E24otEGo8Pnsemxd5544/edit Data integration design doc]<br />
** [https://goo.gl/1E5MeY API design doc]<br />
** [https://drive.google.com/open?id=1ksDilJYXV-5AXWON8PLMedDKr9NpS8VbT0jIy_MIEtI&authuser=0 Delegation to VM-migration engine design doc]<br />
** [http://ruleyourcloud.com/ Blog]<br />
** [https://docs.google.com/file/d/0ByDz-eYOtswScTlmamlhLXpmTXc/edit Vancouver Summit Congress Overview slides]<br />
** [https://etherpad.openstack.org/p/congress-liberty-design-session Vancouver Summit Liberty etherpad]<br />
** [https://docs.google.com/document/d/1lXmMkUhiSZYK45POd5ungPjVR--Fs_wJHeQ6bXWwP44/pub Vancouver Hands On Lab instructions]<br />
<br />
*Meetings<br />
** [https://wiki.openstack.org/wiki/Meetings/Congress Meeting times]<br />
** [http://eavesdrop.openstack.org/meetings/congressteammeeting/2014/ Meeting history]<br />
<br />
<br />
If you'd like more information, join #congress on freenode, or contact Tim Hinrichs (timothy.l.hinrichs@gmail.com).<br />
<br />
=== Mission ===<br />
Congress is an OpenStack project to provide policy as a service across any collection of cloud services in order to offer governance and compliance for dynamic infrastructures.<br />
<br />
=== Why Congress===<br />
IT services will always be governed and brought into compliance with business-level policies. <br />
<br />
In the past, policy was enforced manually, e.g. by someone sending an email asking for an application to be added to the network, secured by specific firewall entries, connected to an agreed-on storage, and so on. In the cloud era, IT has become more agile: users expect immediate delivery of services, a level of responsiveness that is unattainable by the team responsible for governance. Hence, manual enforcement is no longer feasible. <br />
<br />
Both enterprises and vendors have fielded engines for enforcing policy (semi)-automatically, creating a fragmented market where enterprises reinvent the wheel while maintaining their own code, and vendors fail to meet enterprise needs, either for technical reasons or because their solutions require vertical integration and lock-in. <br />
<br />
The Congress policy service enables IT services to extend their OpenStack footprint by onboarding new applications while keeping the strong compliance and governance dictated by their own business policies. All of this leverages a community-driven implementation in which vendors plug into a common interface.<br />
<br />
=== What is Congress ===<br />
Congress aims to provide an extensible open-source framework for governance and regulatory compliance across any cloud services (e.g. application, network, compute and storage) within a dynamic infrastructure. It is a cloud service whose sole responsibility is policy enforcement.<br />
<br />
Congress aims to include the following functionality:<br />
* Allow cloud administrators and tenants to use a high-level, general purpose, declarative language to describe business logic. The policy language does not include a fixed collection of policy types or built-in enforcement mechanisms; rather, a policy simply defines which states of the cloud are in compliance and which are not, where the state of the cloud is the collection of data provided by the cloud services available to Congress. Some examples:<br />
** Application A is only allowed to communicate with application B.<br />
** Virtual machine owned by tenant A should always have a public network connection if tenant A is part of the group B.<br />
** Virtual machine A should never be provisioned in a different geographic region than storage B.<br />
* Offer a pluggable architecture that connects to any collection of cloud services<br />
* Enforce policy<br />
** Proactively: preventing violations before they occur<br />
** Reactively: correcting violations after they occur<br />
** Interactively: give administrators insight into policy and its violations, e.g. identifying violations, explaining their causes, computing potential remediations, simulating a sequence of changes.<br />
<br />
=== Policy Language ===<br />
The policy language for Congress is Datalog, which is essentially SQL with a syntax closer to traditional programming languages. This declarative language was chosen because its semantics are well known to a broad range of DevOps engineers, yet its syntax is terser, making it better suited for expressing real-world policies. The core grammar is given below.<br />
<br />
<policy> ::= <rule>*<br /><br />
<rule> ::= <atom> COLONMINUS <literal> (COMMA <literal>)*<br /><br />
<literal> ::= <atom> <br /><br />
<literal> ::= NOT <atom> <br /><br />
<atom> ::= TABLENAME LPAREN <term> (COMMA <term>)* RPAREN <br /><br />
<term> ::= INTEGER | FLOAT | STRING | VARIABLE <br /><br />
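As a concrete illustration of the grammar, the following single rule (a sketch only; the table names nova:virtual_machine, nova:network, and neutron:public_network are illustrative assumptions, not necessarily the exact tables exposed by Congress's data-source drivers) marks every VM attached to a non-public network as a violation:<br />
<br />
 error(vm) :- nova:virtual_machine(vm), nova:network(vm, net), not neutron:public_network(net)<br />
<br />
Read declaratively, any row that ends up in the error table represents a cloud state that is out of compliance; Congress can then monitor, explain, or remediate those rows.<br />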
<br />
=== Use Cases and Examples===<br />
<br />
==== Use Cases ====<br />
Detailed use cases can be viewed in the Google doc here: [https://goo.gl/RM3W6W Congress Use Cases]. Please feel free to add additional use cases that you would like Congress to support.<br />
<br />
==== Examples ====<br />
<br />
Different examples of Congress policies that have been implemented can be viewed here: [https://goo.gl/UfeHEk Congress Examples]<br />
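<br />
As one concrete sketch of the kind of monitoring policy documented there (every network connected to a VM must either be public or owned by someone in the same group as the VM's owner), the rules might be written roughly as follows; the table names nova:virtual_machine, nova:network, nova:owner, neutron:public_network, neutron:owner, and ad:group are illustrative assumptions rather than the exact data-source schemas:<br />
<br />
 error(vm) :- nova:virtual_machine(vm), nova:network(vm, network), not neutron:public_network(network), neutron:owner(network, netowner), nova:owner(vm, vmowner), not same_group(netowner, vmowner)<br />
 same_group(user1, user2) :- ad:group(user1, grp), ad:group(user2, grp)<br />
<br />
Here the error table collects the VMs that violate the policy, and same_group is a helper table derived from group-membership data (assumed here to come from an Active Directory data source).<br />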
<br />
=== Future Features ===<br />
Here is a list of features that would be useful in future releases. We have not prioritized or assigned any of these, so if any of you developers think they sound fun, let us know and we can explain more.<br />
<br />
* Basic Policy language implementation<br />
** Multi-threaded Datalog implementation<br />
** Bottom-up datalog evaluation<br />
** Query optimization<br />
** Materialized implementation: automated view selection/inlining<br />
<br />
* Enhanced Policy language<br />
** ActiveDirectory facade<br />
** Syntax improvements (modals like insert/delete)<br />
** Support for compound objects (safe, stratified recursion)<br />
** Richer support for describing actions in Enforcement policy<br />
** Modules<br />
<br />
* Policy structure<br />
** Multi-tenant<br />
** Multi-stakeholder (tenants, finance, operations, etc.)<br />
<br />
* Enforcement<br />
** Execution of (rich) actions<br />
** Carve out subpolicies and push to other components, e.g. Neutron<br />
** Add consultation with Congress to other OS components, e.g. Nova/Neutron<br />
** Proper remediation enumeration with Classification+Action policies<br />
** Find ways of automatically choosing the proper remediation strategy (e.g. priorities/monotonicity)<br />
** Give the cloud owner a way of configuring how proactive/reactive/etc. enforcement should be, based on information from separate policies.<br />
<br />
* Libraries<br />
** Data source drivers for common OS and non-OS components<br />
** HIPAA, etc. encoding<br />
** Ontologies for different sectors, e.g. finance<br />
<br />
* Policy Analysis<br />
** Look for infinite loops through Enforcement policy (using Action policy)<br />
** Compare Access control policy and Classification policy for redundancy<br />
** Change impact analysis<br />
<br />
* Dashboard<br />
** IDE for policy (different levels: raw-Datalog, AD, checkbox-based)<br />
** List violations<br />
** Explain violations (step-by-step tracing through the policy)<br />
** Simulate state change and action execution<br />
** Enumerate remediations for a given violation<br />
<br />
* Architecture and API<br />
** Formalize and implement full introspection and query APIs<br />
** Distribute across multiple nodes<br />
** Ensure Congress can use another Congress instance as data<br />
<br />
* Authentication and Access Control<br />
** Add authentication and access controls on API/dashboard<br />
** When remediating, which user(s) are executing actions? Does Congress need admin credentials for all its cloud services or are user credentials part of actions? Need proper storage for those credentials. Etc.<br />
<br />
=== Relationship to Other Projects ===<br />
<br />
'''Related OpenStack components'''<br />
* [https://wiki.openstack.org/wiki/Keystone Keystone]: Keystone is an identity service providing authentication and high-level authorization for OpenStack. Congress can leverage Keystone as an input for policies. For example, an auditor might want to ensure that the running system is consistent with current Keystone authorization decisions.<br />
<br />
* [https://wiki.openstack.org/wiki/Heat Heat]: Heat is an orchestration service for application provisioning and lifecycle management. Congress can ensure that applications managed by Heat are consistent with business policy.<br />
<br />
* [https://wiki.openstack.org/wiki/Mistral Mistral]: Mistral is a task management service, or in other words Workflow as a service. Its primary use cases include Cron-for-the-cloud, execution of long-running tasks, and big data analysis. Congress could potentially utilize Mistral to execute actions that bring the cloud back into policy compliance.<br />
<br />
'''Policy Initiatives'''<br />
<br />
* [https://blueprints.launchpad.net/nova/+spec/solver-scheduler SolverScheduler (Nova blueprint)]: The SolverScheduler provides an interface for using different constraint solvers to solve placement problems for Nova. Depending on how it is applied, it could be used for initial provisioning, re-balancing loads, or both.<br />
<br />
* [https://github.com/openstack/gantt Gantt]: A scheduler framework for use by different OpenStack components. Used to be a [https://wiki.openstack.org/wiki/Meetings/Scheduler subgroup of Nova] and focused on scheduling VMs based on resource utilization. Includes plugin framework for making arbitrary metrics available to the scheduler.<br />
<br />
* [https://docs.google.com/document/d/1ZbOFxAoibZbJmDWx1oOrOsDcov6Cuom5aaBIrupCD9E/edit?pli=1 Neutron policy group]: This group aims to add a policy API to Neutron, where tenants express policy between groups of networks, ports, etc., and that policy is enforced. Policy statements are of the form "for every network flow between groups A and B that satisfies these conditions, apply a constraint on that flow". The constraints that can be enforced on a flow will grow as the enforcement engine matures; currently, the constraints are Allow and Deny, but there are plans for quality-of-service constraints as well. <br />
<br />
* [https://wiki.openstack.org/wiki/OpenAttestation Open Attestation]: This project provides an SDK for verifying host integrity. It provides some policy-based management capabilities, though documentation is limited.<br />
<br />
* [https://blueprints.launchpad.net/nova/+spec/policy-based-scheduler Policy-based Scheduling Module (Nova blueprint)]: This effort aims to schedule Nova resources per client, per cluster of resources and per context (e.g. overload, time, etc.). A proof of concept was presented at the Demo Theater at OpenStack Juno Summit. <br />
<br />
* [https://docs.google.com/document/d/1DMsnGxQ3P-OwZCF3uxaUeEFaKX8LqUqmmgQ_7EVK7Y8/edit Tetris]: This effort provides condition-action policies (under certain conditions, execute these actions). It is intended to be a generic condition-action engine handling complex actions and optimization. This effort subsumes the [https://blueprints.launchpad.net/nova/+spec/resource-optimization-service Runtime Policies blueprint] within Nova. It also appears to subsume the [http://openstack-neat.org/ Neat] effort. Tetris and Congress have recently decided to merge because of their highly aligned goals and approaches.<br />
<br />
* [https://review.openstack.org/#/c/95907/6/specs/convergence.rst Convergence Engine (Heat spec)]: This effort separates the ideas of desired state and observed state for the objects Heat manages. The Convergence Engine will detect when the desired state and observed state differ and take action to eliminate those differences.<br />
<br />
* [http://docs.openstack.org/developer/swift/overview_policies.html Swift Storage Policies]: A Swift storage policy describes a virtual storage system that Swift implements with physical devices. Today each policy dictates how many partitions the storage system has, how many replicas of each object it should maintain, and the minimum amount of time that must elapse after a partition has been moved to a different physical location before it can be moved again.<br />
<br />
* [https://wiki.openstack.org/wiki/Graffiti Graffiti]: Graffiti aims to store and query (hierarchical) metadata about OpenStack objects, e.g. tagging a Glance image with the software installed on that image. Currently, the team is working within other OpenStack projects to add user interfaces for people to create and query metadata and to store that metadata within the project's database. This project is about creating metadata, which could be useful for writing business policies, not about policies over that metadata.<br />
<br />
<br />
Congress is complementary to all of these efforts. It is intended to be a general-purpose policy component and hence could (potentially) express any of the policies described above. However, to enforce those policies Congress would carve off subpolicies and send them to the appropriate enforcement point in the cloud. For example, Congress might carve off the compute load-balancing policy and send it to the Runtime Policy engine and/or carve off the networking policy and send it to the Neutron policy engine. Part of the goal of Congress is to give administrators a single place to write and inspect the policy being enforced throughout the datacenter or cloud, distribute the relevant portions of that policy to all of the available enforcement points, and monitor the state of the cloud to let administrators know if the cloud is in compliance. (Congress also attempts to correct policy violations when they occur, but optimization policies such as many of those addressed above will not be enforceable by Congress directly.)<br />
<br />
<br />
[Thanks to Jay Lau, Gokul Kandiraju, and others on the mailing list for help compiling this list.]<br />
<br />
== How To Propose a New Feature ==<br />
To propose a new feature:<br />
# Create a [https://blueprints.launchpad.net/congress/+addspec blueprint] describing what your feature does and how it will work. Include just the name, title, and summary.<br />
# Ask at least one core reviewer to give you feedback. If the feature is complex or controversial, the core will ask you to develop a "spec" and add it to the congress-specs repo. A spec is a proxy for a blueprint that everyone in the community can comment on and help refine. If Launchpad, the software OpenStack uses for managing blueprints and bugs, made it possible for the community to add comments to a blueprint, we would not need specs at all.<br />
# Once your blueprint is approved, you may push code that implements that blueprint up to Gerrit, including the tag 'Implements-blueprint: <blueprint-name>' in the commit message, as in the example below.<br />
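For example, a commit message carrying this tag might look like the following (the blueprint name and summary line are hypothetical). The tag goes on its own line at the end of the message so that reviewers, and any tooling that parses commit messages, can tie the change back to the blueprint.<br />
<br />
 Add datasource driver for ExampleService<br /><br />
 <br /><br />
 Implements-blueprint: example-service-driver<br /><br />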
<br />
If a core reviewer asks for a spec, here's how you create one.<br />
# Check out the [https://github.com/openstack/congress-specs congress-specs repo].<br />
# Create a new file, whose name is similar to the blueprint name, in the current release folder, e.g. congress-specs/specs/kilo/your-spec-here.rst<br />
# Use reStructuredText (RST) (an [http://sphinx-doc.org/rest.html RST tutorial]) to describe your new feature by filling out the [https://github.com/openstack/congress-specs/blob/master/specs/template.rst spec template]. <br />
# Push your changes to the congress-specs repo, a process that is explained [https://wiki.openstack.org/wiki/Gerrit_Workflow here]<br />
# Go back to your blueprint and add a link (e.g. https://github.com/openstack/congress-specs/tree/master/specs/kilo/your-spec-here.rst) to your spec in the Specification Location field on the Blueprint details. <br />
# Participate in the discussion and refinement of your feature via [https://review.openstack.org/#/q/status:open+project:openstack/congress-specs,n,z Gerrit reviews]<br />
# Eventually, core reviewers will either reject or approve your spec<br />
# If the spec is approved, a core reviewer will merge it into the repo and approve your blueprint. <br />
<br />
Note: If your blueprint is either not approved or not implemented during a release cycle, it must be resubmitted for the next cycle.</div>Thinrichshttps://wiki.openstack.org/w/index.php?title=Congress&diff=81828Congress2015-05-26T16:44:50Z<p>Thinrichs: /* Policy Language */</p>
<hr />
<div>== Policy as a service ("Congress") ==<br />
Here are some external resources with detailed information about Congress.<br />
<br />
* Code<br />
** Server<br />
*** [https://github.com/openstack/congress Server source code (github)] <br />
*** [https://launchpad.net/congress Server bugs (Launchpad)]<br />
*** [https://review.openstack.org/#/q/status:open+project:openstack/congress,n,z Outstanding server changes (Gerrit)]<br />
** Python client<br />
*** [https://github.com/openstack/python-congressclient Python client source code (github)]<br />
*** [https://launchpad.net/python-congressclient Python client bugs (Launchpad)]<br />
*** [https://review.openstack.org/#/q/status:open+project:openstack/python-congressclient,n,z Outstanding python client changes (Gerrit)]<br />
** [https://github.com/openstack/congress-specs Specs: suggestions for additional features for both server and client (github)]<br />
<br />
* Documents<br />
** [https://docs.google.com/a/vmware.com/document/d/1f2xokl9Tc47aV67KEua0PBRz4jsdSDLXth7dYe-jz6Q/edit Overall design doc]<br />
** [https://docs.google.com/document/d/1K9RkQuBSPN7Z2TmKfok7mw3E24otEGo8Pnsemxd5544/edit Data integration design doc]<br />
** [https://goo.gl/1E5MeY API design doc]<br />
** [https://drive.google.com/open?id=1ksDilJYXV-5AXWON8PLMedDKr9NpS8VbT0jIy_MIEtI&authuser=0 Delegation to VM-migration engine design doc]<br />
** [http://ruleyourcloud.com/ Blog]<br />
** [https://docs.google.com/file/d/0ByDz-eYOtswScTlmamlhLXpmTXc/edit Vancouver Summit Congress Overview slides]<br />
** [https://etherpad.openstack.org/p/congress-liberty-design-session Vancouver Summit Liberty etherpad]<br />
** [https://docs.google.com/document/d/1lXmMkUhiSZYK45POd5ungPjVR--Fs_wJHeQ6bXWwP44/pub Vancouver Hands On Lab instructions]<br />
<br />
*Meetings<br />
** [https://wiki.openstack.org/wiki/Meetings/Congress Meeting times]<br />
** [http://eavesdrop.openstack.org/meetings/congressteammeeting/2014/ Meeting history]<br />
<br />
<br />
If you'd like more information, join #congress on freenode, or contact Tim Hinrichs (timothy.l.hinrichs@gmail.com).<br />
<br />
=== Mission ===<br />
Congress is an OpenStack project to provide policy as a service across any collection of cloud services in order to offer governance and compliance for dynamic infrastructures.<br />
<br />
=== Why Congress===<br />
IT services will always be governed and brought into compliance with business-level policies. <br />
<br />
In the past, policy was enforced manually, e.g. by someone sending an email asking for an application to be added to the network, secured by specific firewall entries, connected to an agreed-on storage, and so on. In the cloud era, IT has become more agile: users expect immediate delivery of services, a level of responsiveness that is unattainable by the team responsible for governance. Hence, manual enforcement is no longer feasible. <br />
<br />
Both enterprises and vendors have fielded engines for enforcing policy (semi)-automatically, creating a fragmented market where enterprises reinvent the wheel while maintaining their own code, and vendors fail to meet enterprise needs, either for technical reasons or because their solutions require vertical integration and lock-in. <br />
<br />
The Congress policy service enables IT services to extend their OpenStack footprint by onboarding new applications while keeping the strong compliance and governance dictated by their own business policies. All of this leverages a community-driven implementation into which vendors can plug through a common interface.<br />
<br />
=== What is Congress ===<br />
Congress aims to provide an extensible open-source framework for governance and regulatory compliance across any cloud services (e.g. application, network, compute and storage) within a dynamic infrastructure. It is a cloud service whose sole responsibility is policy enforcement.<br />
<br />
Congress aims to include the following functionality:<br />
* Allow cloud administrators and tenants to use a high-level, general purpose, declarative language to describe business logic. The policy language does not include a fixed collection of policy types or built-in enforcement mechanisms; rather, a policy simply defines which states of the cloud are in compliance and which are not, where the state of the cloud is the collection of data provided by the cloud services available to Congress. Some examples:<br />
** Application A is only allowed to communicate with application B.<br />
** A virtual machine owned by tenant A should always have a public network connection if tenant A is a member of group B.<br />
** Virtual machine A should never be provisioned in a different geographic region than storage B.<br />
* Offer a pluggable architecture that connects to any collection of cloud services<br />
* Enforce policy<br />
** Proactively: preventing violations before they occur<br />
** Reactively: correcting violations after they occur<br />
** Interactively: give administrators insight into policy and its violations, e.g. identifying violations, explaining their causes, computing potential remediations, simulating a sequence of changes.<br />
<br />
=== Policy Language ===<br />
The policy language for Congress is Datalog, which is essentially SQL with a syntax closer to that of traditional programming languages. This declarative language was chosen because its semantics are familiar to a broad range of DevOps engineers, yet its syntax is terser, making it better suited to expressing real-world policies. The core grammar is given below, followed by an example.<br />
<br />
<policy> ::= <rule>*<br /><br />
<rule> ::= <atom> COLONMINUS <literal> (COMMA <literal>)*<br /><br />
<literal> ::= <atom> <br /><br />
<literal> ::= NOT <atom> <br /><br />
<atom> ::= TABLENAME LPAREN <term> (COMMA <term>)* RPAREN <br /><br />
<term> ::= INTEGER | FLOAT | STRING | VARIABLE <br /><br />
<br />
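As a concrete (and deliberately simplified) example, the two rules below sketch the use case "every network connected to a VM must either be public or owned by someone in the same group as the VM's owner". The table names (vm_network, public_network, network_owner, vm_owner, group_member) are illustrative placeholders, not the actual schemas of any Congress datasource driver.<br />
<br />
 error(vm) :- vm_network(vm, network), not public_network(network), network_owner(network, net_owner), vm_owner(vm, v_owner), not same_group(net_owner, v_owner)<br /><br />
 same_group(user1, user2) :- group_member(grp, user1), group_member(grp, user2)<br /><br />
<br />
Here error, vm_network, and the other names are TABLENAMEs, while vm, network, net_owner, v_owner, grp, user1, and user2 are VARIABLEs; the table error has a row exactly for those VMs that violate the policy.<br />
<br />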
=== Use Cases and Examples===<br />
<br />
==== Use Cases ====<br />
Detailed use cases can be viewed in the Google doc here: [https://goo.gl/RM3W6W Congress Use Cases]. We are currently migrating these use cases to follow the [[#How_To_Create_a_Blueprint|Blueprint / Specs workflow]].<br />
<br />
Below we list use cases and the Congress release at which we hope to support that use case. Please feel free to add additional use cases that you would like Congress to support.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Use case !! Congress target release<br />
|-<br />
| Every network connected to a VM must either be public or owned by someone in the same group as the VM's owner || alpha<br />
|-<br />
| Define a group hierarchy for servers where base groups are defined in LDAP. When base-level group change occurs, update network ACLs. || alpha<br />
|-<br />
| Every time a vulnerability scanner reports a vulnerability, disconnect the VM from the network, patch the vulnerability, and reconnect. || beta<br />
|-<br />
| Express a rule to determine whether a [[Solum]] user may advance an assembly from one [[Solum/Environments|Environment]] to another. See adrian_otto on IRC for details. || TBD<br />
|}<br />
<br />
==== Examples ====<br />
<br />
Several examples of Congress policies can be viewed here: [https://goo.gl/UfeHEk Congress Examples]<br />
<br />
=== Future Features ===<br />
Here is a list of features that would be useful in future releases. We have not prioritized or assigned any of these, so if any of you developers think they sound fun, let us know and we can explain more.<br />
<br />
* Basic Policy language implementation<br />
** Multi-threaded Datalog implementation<br />
** Bottom-up datalog evaluation<br />
** Query optimization<br />
** Materialized implementation: automated view selection/inlining<br />
<br />
* Enhanced Policy language<br />
** ActiveDirectory facade<br />
** Syntax improvements (modals like insert/delete)<br />
** Support for compound objects (safe, stratified recursion)<br />
** Richer support for describing actions in Enforcement policy<br />
** Modules<br />
<br />
* Policy structure<br />
** Multi-tenant<br />
** Multi-stakeholder (tenants, finance, operations, etc.)<br />
<br />
* Enforcement<br />
** Execution of (rich) actions<br />
** Carve out subpolicies and push to other components, e.g. Neutron<br />
** Add consultation with Congress to other OS components, e.g. Nova/Neutron<br />
** Proper remediation enumeration with Classification+Action policies<br />
** Find ways of automatically choosing the proper remediation strategy (e.g. priorities/monotonicity)<br />
** Give cloud owner way of configuring how proactive/reactive/etc. based on information from separate policies.<br />
<br />
* Libraries<br />
** Data source drivers for common OS and non-OS components<br />
** HIPAA, etc. encoding<br />
** Ontologies for different sectors, e.g. finance<br />
<br />
* Policy Analysis<br />
** Look for infinite loops through Enforcement policy (using Action policy)<br />
** Compare Access control policy and Classification policy for redundancy<br />
** Change impact analysis<br />
<br />
* Dashboard<br />
** IDE for policy (different levels: raw-Datalog, AD, checkbox-based)<br />
** List violations<br />
** Explain violations (step-by-step tracing thru policy)<br />
** Simulate state change and action execution<br />
** Enumerate remediations for a given violation<br />
<br />
* Architecture and API<br />
** Formalize and implement full introspection and query APIs<br />
** Distribute across multiple nodes<br />
** Ensure Congress can use another Congress instance as data<br />
<br />
* Authentication and Access Control<br />
** Add authentication and access controls on API/dashboard<br />
** When remediating, which user(s) are executing actions? Does Congress need admin credentials for all its cloud services or are user credentials part of actions? Need proper storage for those credentials. Etc.<br />
<br />
=== Relationship to Other Projects ===<br />
<br />
'''Related OpenStack components'''<br />
* [https://wiki.openstack.org/wiki/Keystone Keystone]: Keystone is an identity service providing authentication and high-level authorization for OpenStack. Congress can leverage Keystone as an input for policies. For example, an auditor might want to ensure that the running system is consistent with current Keystone authorization decisions.<br />
<br />
* [https://wiki.openstack.org/wiki/Heat Heat]: Heat is an orchestration service for application provisioning and lifecycle management. Congress can ensure that applications managed by Heat are consistent with business policy.<br />
<br />
* [https://wiki.openstack.org/wiki/Mistral Mistral]: Mistral is a task management service, or in other words Workflow as a service. Its primary use cases include Cron-for-the-cloud, execution of long-running tasks, and big data analysis. Congress could potentially utilize Mistral to execute actions that bring the cloud back into policy compliance.<br />
<br />
'''Policy Initiatives'''<br />
<br />
* [https://blueprints.launchpad.net/nova/+spec/solver-scheduler SolverScheduler (Nova blueprint)]: The SolverScheduler provides an interface for using different constraint solvers to solve placement problems for Nova. Depending on how it is applied, it could be used for initial provisioning, re-balancing loads, or both.<br />
<br />
* [https://github.com/openstack/gantt Gantt]: A scheduler framework for use by different OpenStack components. It used to be a [https://wiki.openstack.org/wiki/Meetings/Scheduler subgroup of Nova] focused on scheduling VMs based on resource utilization. It includes a plugin framework for making arbitrary metrics available to the scheduler.<br />
<br />
* [https://docs.google.com/document/d/1ZbOFxAoibZbJmDWx1oOrOsDcov6Cuom5aaBIrupCD9E/edit?pli=1 Neutron policy group]: This group aims to add a policy API to Neutron, where tenants express policy between groups of networks, ports, etc., and that policy is enforced. Policy statements are of the form "for every network flow between groups A and B that satisfies these conditions, apply a constraint on that flow". The constraints that can be enforced on a flow will grow as the enforcement engine matures; currently, the constraints are Allow and Deny, but there are plans for quality-of-service constraints as well. <br />
<br />
* [https://wiki.openstack.org/wiki/OpenAttestation Open Attestation]: This project provides an SDK for verifying host integrity. It provides some policy-based management capabilities, though documentation is limited.<br />
<br />
* [https://blueprints.launchpad.net/nova/+spec/policy-based-scheduler Policy-based Scheduling Module (Nova blueprint)]: This effort aims to schedule Nova resources per client, per cluster of resources and per context (e.g. overload, time, etc.). A proof of concept was presented at the Demo Theater at OpenStack Juno Summit. <br />
<br />
* [https://docs.google.com/document/d/1DMsnGxQ3P-OwZCF3uxaUeEFaKX8LqUqmmgQ_7EVK7Y8/edit Tetris]: This effort provides condition-action policies (under certain conditions, execute these actions). It is intended to be a generic condition-action engine handling complex actions and optimization. This effort subsumes the [https://blueprints.launchpad.net/nova/+spec/resource-optimization-service Runtime Policies blueprint] within Nova. It also appears to subsume the [http://openstack-neat.org/ Neat] effort. Tetris and Congress have recently decided to merge because of their highly aligned goals and approaches.<br />
<br />
* [https://review.openstack.org/#/c/95907/6/specs/convergence.rst Convergence Engine (Heat spec)]: This effort separates the ideas of desired state and observed state for the objects Heat manages. The Convergence Engine will detect when the desired state and observed state differ and take action to eliminate those differences.<br />
<br />
* [http://docs.openstack.org/developer/swift/overview_policies.html Swift Storage Policies]: A Swift storage policy describes a virtual storage system that Swift implements with physical devices. Today each policy dictates how many partitions the storage system has, how many replicas of each object it should maintain, and the minimum amount of time that must elapse after a partition is moved to a different physical location before it can be moved again.<br />
<br />
* [https://wiki.openstack.org/wiki/Graffiti Graffiti]: Graffiti aims to store and query (hierarchical) metadata about OpenStack objects, e.g. tagging a Glance image with the software installed on that image. Currently, the team is working within other OpenStack projects to add user interfaces for people to create and query metadata and to store that metadata within the project's database. This project is about creating metadata, which could be useful for writing business policies, not about policies over that metadata.<br />
<br />
<br />
Congress is complementary to all of these efforts. It is intended to be a general-purpose policy component and hence could (potentially) express any of the policies described above. However, to enforce those policies Congress would carve off subpolicies and send them to the appropriate enforcement point in the cloud. For example, Congress might carve off the compute load-balancing policy and send it to the Runtime Policy engine and/or carve off the networking policy and send it to the Neutron policy engine. Part of the goal of Congress is to give administrators a single place to write and inspect the policy being enforced throughout the datacenter or cloud, distribute the relevant portions of that policy to all of the available enforcement points, and monitor the state of the cloud to let administrators know if the cloud is in compliance. (Congress also attempts to correct policy violations when they occur, but optimization policies such as many of those addressed above will not be enforceable by Congress directly.)<br />
<br />
<br />
[Thanks to Jay Lau, Gokul Kandiraju, and others on the mailing list for help compiling this list.]<br />
<br />
== How To Propose a New Feature ==<br />
To propose a new feature:<br />
# Create a [https://blueprints.launchpad.net/congress/+addspec blueprint] describing what your feature does and how it will work. Include just the name, title, and summary.<br />
# Ask at least one core reviewer to give you feedback. If the feature is complex or controversial, the core will ask you to develop a "spec" and add it to the congress-specs repo. A spec is a proxy for a blueprint that everyone in the community can comment on and help refine. If Launchpad, the software OpenStack uses for managing blueprints and bugs, made it possible for the community to add comments to a blueprint, we would not need specs at all.<br />
# Once your blueprint is approved, you may push code that implements that blueprint up to Gerrit, including the tag 'Implements-blueprint: <blueprint-name>'<br />
<br />
If a core reviewer asks for a spec, here's how you create one.<br />
# Check out the [https://github.com/openstack/congress-specs congress-specs repo].<br />
# Create a new file, whose name is similar to the blueprint name, in the current release folder, e.g. congress-specs/specs/kilo/your-spec-here.rst<br />
# Use reStructuredText (RST) (an [http://sphinx-doc.org/rest.html RST tutorial]) to describe your new feature by filling out the [https://github.com/openstack/congress-specs/blob/master/specs/template.rst spec template]. <br />
# Push your changes to the congress-specs repo, a process that is explained [https://wiki.openstack.org/wiki/Gerrit_Workflow here]<br />
# Go back to your blueprint and add a link (e.g. https://github.com/openstack/congress-specs/tree/master/specs/kilo/your-spec-here.rst) to your spec in the Specification Location field on the Blueprint details. <br />
# Participate in the discussion and refinement of your feature via [https://review.openstack.org/#/q/status:open+project:openstack/congress-specs,n,z Gerrit reviews]<br />
# Eventually, core reviewers will either reject or approve your spec<br />
# If the spec is approved, a core reviewer will merge it into the repo and approve your blueprint. <br />
<br />
Note: If your blueprint is either not approved or not implemented during a release cycle, it must be resubmitted for the next cycle.</div>Thinrichshttps://wiki.openstack.org/w/index.php?title=Congress&diff=81827Congress2015-05-26T16:44:18Z<p>Thinrichs: </p>
<hr />
<div>== Policy as a service ("Congress") ==<br />
Here are some external resources with detailed information about Congress.<br />
<br />
* Code<br />
** Server<br />
*** [https://github.com/openstack/congress Server source code (github)] <br />
*** [https://launchpad.net/congress Server bugs (Launchpad)]<br />
*** [https://review.openstack.org/#/q/status:open+project:openstack/congress,n,z Outstanding server changes (Gerrit)]<br />
** Python client<br />
*** [https://github.com/openstack/python-congressclient Python client source code (github)]<br />
*** [https://launchpad.net/python-congressclient Python client bugs (Launchpad)]<br />
*** [https://review.openstack.org/#/q/status:open+project:openstack/python-congressclient,n,z Outstanding python client changes (Gerrit)]<br />
** [https://github.com/openstack/congress-specs Specs: suggestions for additional features for both server and client (github)]<br />
<br />
* Documents<br />
** [https://docs.google.com/a/vmware.com/document/d/1f2xokl9Tc47aV67KEua0PBRz4jsdSDLXth7dYe-jz6Q/edit Overall design doc]<br />
** [https://docs.google.com/document/d/1K9RkQuBSPN7Z2TmKfok7mw3E24otEGo8Pnsemxd5544/edit Data integration design doc]<br />
** [https://goo.gl/1E5MeY API design doc]<br />
** [https://drive.google.com/open?id=1ksDilJYXV-5AXWON8PLMedDKr9NpS8VbT0jIy_MIEtI&authuser=0 Delegation to VM-migration engine design doc]<br />
** [http://ruleyourcloud.com/ Blog]<br />
** [https://docs.google.com/file/d/0ByDz-eYOtswScTlmamlhLXpmTXc/edit Vancouver Summit Congress Overview slides]<br />
** [https://etherpad.openstack.org/p/congress-liberty-design-session Vancouver Summit Liberty etherpad]<br />
** [https://docs.google.com/document/d/1lXmMkUhiSZYK45POd5ungPjVR--Fs_wJHeQ6bXWwP44/pub Vancouver Hands On Lab instructions]<br />
<br />
*Meetings<br />
** [https://wiki.openstack.org/wiki/Meetings/Congress Meeting times]<br />
** [http://eavesdrop.openstack.org/meetings/congressteammeeting/2014/ Meeting history]<br />
<br />
<br />
If you'd like more information, join #congress on freenode, or contact Tim Hinrichs (timothy.l.hinrichs@gmail.com).<br />
<br />
=== Mission ===<br />
Congress is an OpenStack project to provide policy as a service across any collection of cloud services in order to offer governance and compliance for dynamic infrastructures.<br />
<br />
=== Why Congress===<br />
IT services will always be governed and brought into compliance with business-level policies. <br />
<br />
In the past, policy was enforced manually, e.g. by someone sending an email asking for an application to be added to the network, secured by specific firewall entries, connected to an agreed-on storage, and so on. In the cloud era, IT has become more agile: users expect immediate delivery of services, a level of responsiveness that is unattainable by the team responsible for governance. Hence, manual enforcement is no longer feasible. <br />
<br />
Both enterprises and vendors have fielded engines for enforcing policy (semi)-automatically, creating a fragmented market where enterprises reinvent the wheel while maintaining their own code, and vendors fail to meet enterprise needs, either for technical reasons or because their solutions require vertical integration and lock-in. <br />
<br />
The Congress policy service enables IT services to extend their OpenStack footprint by onboarding new applications while keeping the strong compliance and governance dictated by their own business policies. All of this leverages a community-driven implementation into which vendors can plug through a common interface.<br />
<br />
=== What is Congress ===<br />
Congress aims to provide an extensible open-source framework for governance and regulatory compliance across any cloud services (e.g. application, network, compute and storage) within a dynamic infrastructure. It is a cloud service whose sole responsibility is policy enforcement.<br />
<br />
Congress aims to include the following functionality:<br />
* Allow cloud administrators and tenants to use a high-level, general purpose, declarative language to describe business logic. The policy language does not include a fixed collection of policy types or built-in enforcement mechanisms; rather, a policy simply defines which states of the cloud are in compliance and which are not, where the state of the cloud is the collection of data provided by the cloud services available to Congress. Some examples:<br />
** Application A is only allowed to communicate with application B.<br />
** A virtual machine owned by tenant A should always have a public network connection if tenant A is a member of group B.<br />
** Virtual machine A should never be provisioned in a different geographic region than storage B.<br />
* Offer a pluggable architecture that connects to any collection of cloud services<br />
* Enforce policy<br />
** Proactively: preventing violations before they occur<br />
** Reactively: correcting violations after they occur<br />
** Interactively: give administrators insight into policy and its violations, e.g. identifying violations, explaining their causes, computing potential remediations, simulating a sequence of changes.<br />
<br />
=== Policy Language ===<br />
The policy language for Congress is Datalog, which is essentially SQL with a syntax closer to that of traditional programming languages. This declarative language was chosen because its semantics are familiar to a broad range of DevOps engineers, yet its syntax is terser, making it better suited to expressing real-world policies. The grammar is given below, followed by an example.<br />
<br />
<policy> ::= <rule>*<br /><br />
<rule> ::= <atom> COLONMINUS <literal> (COMMA <literal>)*<br /><br />
<literal> ::= <atom> <br /><br />
<literal> ::= NOT <atom> <br /><br />
<atom> ::= TABLENAME LPAREN <term> (COMMA <term>)* RPAREN <br /><br />
<term> ::= INTEGER | FLOAT | STRING | VARIABLE <br /><br />
<br />
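As an illustration of the grammar, the two rules below sketch the earlier example "a virtual machine owned by tenant A should always have a public network connection if tenant A is a member of group B". The table names (vm_owner, group_member, vm_network, public_network) are illustrative placeholders rather than the schemas of real Congress datasource drivers.<br />
<br />
 error(vm) :- vm_owner(vm, tenant), group_member("B", tenant), not has_public_connection(vm)<br /><br />
 has_public_connection(vm) :- vm_network(vm, network), public_network(network)<br /><br />
<br />
The second rule defines the helper table has_public_connection from two base tables; the first rule then flags every VM whose owner belongs to group "B" but that has no connection to a public network.<br />
<br />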
=== Use Cases and Examples===<br />
<br />
==== Use Cases ====<br />
Detailed use cases can be viewed in the Google doc here: [https://goo.gl/RM3W6W Congress Use Cases]. We are currently migrating these use cases to follow the [[#How_To_Create_a_Blueprint|Blueprint / Specs workflow]].<br />
<br />
Below we list use cases and the Congress release at which we hope to support that use case. Please feel free to add additional use cases that you would like Congress to support.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Use case !! Congress target release<br />
|-<br />
| Every network connected to a VM must either be public or owned by someone in the same group as the VM's owner || alpha<br />
|-<br />
| Define a group hierarchy for servers where base groups are defined in LDAP. When base-level group change occurs, update network ACLs. || alpha<br />
|-<br />
| Every time a vulnerability scanner reports a vulnerability, disconnect the VM from the network, patch the vulnerability, and reconnect. || beta<br />
|-<br />
| Express a rule to determine whether a [[Solum]] user may advance an assembly from one [[Solum/Environments|Environment]] to another. See adrian_otto on IRC for details. || TBD<br />
|}<br />
<br />
==== Examples ====<br />
<br />
Several examples of Congress policies can be viewed here: [https://goo.gl/UfeHEk Congress Examples]<br />
<br />
=== Future Features ===<br />
Here is a list of features that would be useful in future releases. We have not prioritized or assigned any of these, so if any of you developers think they sound fun, let us know and we can explain more.<br />
<br />
* Basic Policy language implementation<br />
** Multi-threaded Datalog implementation<br />
** Bottom-up datalog evaluation<br />
** Query optimization<br />
** Materialized implementation: automated view selection/inlining<br />
<br />
* Enhanced Policy language<br />
** ActiveDirectory facade<br />
** Syntax improvements (modals like insert/delete)<br />
** Support for compound objects (safe, stratified recursion)<br />
** Richer support for describing actions in Enforcement policy<br />
** Modules<br />
<br />
* Policy structure<br />
** Multi-tenant<br />
** Multi-stakeholder (tenants, finance, operations, etc.)<br />
<br />
* Enforcement<br />
** Execution of (rich) actions<br />
** Carve out subpolicies and push to other components, e.g. Neutron<br />
** Add consultation with Congress to other OS components, e.g. Nova/Neutron<br />
** Proper remediation enumeration with Classification+Action policies<br />
** Find ways of automatically choosing the proper remediation strategy (e.g. priorities/monotonicity)<br />
** Give cloud owner way of configuring how proactive/reactive/etc. based on information from separate policies.<br />
<br />
* Libraries<br />
** Data source drivers for common OS and non-OS components<br />
** HIPAA, etc. encoding<br />
** Ontologies for different sectors, e.g. finance<br />
<br />
* Policy Analysis<br />
** Look for infinite loops through Enforcement policy (using Action policy)<br />
** Compare Access control policy and Classification policy for redundancy<br />
** Change impact analysis<br />
<br />
* Dashboard<br />
** IDE for policy (different levels: raw-Datalog, AD, checkbox-based)<br />
** List violations<br />
** Explain violations (step-by-step tracing thru policy)<br />
** Simulate state change and action execution<br />
** Enumerate remediations for a given violation<br />
<br />
* Architecture and API<br />
** Formalize and implement full introspection and query APIs<br />
** Distribute across multiple nodes<br />
** Ensure Congress can use another Congress instance as data<br />
<br />
* Authentication and Access Control<br />
** Add authentication and access controls on API/dashboard<br />
** When remediating, which user(s) are executing actions? Does Congress need admin credentials for all its cloud services or are user credentials part of actions? Need proper storage for those credentials. Etc.<br />
<br />
=== Relationship to Other Projects ===<br />
<br />
'''Related OpenStack components'''<br />
* [https://wiki.openstack.org/wiki/Keystone Keystone]: Keystone is an identity service providing authentication and high-level authorization for OpenStack. Congress can leverage Keystone as an input for policies. For example, an auditor might want to ensure that the running system is consistent with current Keystone authorization decisions.<br />
<br />
* [https://wiki.openstack.org/wiki/Heat Heat]: Heat is an orchestration service for application provisioning and lifecycle management. Congress can ensure that applications managed by Heat are consistent with business policy.<br />
<br />
* [https://wiki.openstack.org/wiki/Mistral Mistral]: Mistral is a task management service, or in other words Workflow as a service. Its primary use cases include Cron-for-the-cloud, execution of long-running tasks, and big data analysis. Congress could potentially utilize Mistral to execute actions that bring the cloud back into policy compliance.<br />
<br />
'''Policy Initiatives'''<br />
<br />
* [https://blueprints.launchpad.net/nova/+spec/solver-scheduler SolverScheduler (Nova blueprint)]: The SolverScheduler provides an interface for using different constraint solvers to solve placement problems for Nova. Depending on how it is applied, it could be used for initial provisioning, re-balancing loads, or both.<br />
<br />
* [https://github.com/openstack/gantt Gantt]: A scheduler framework for use by different OpenStack components. It used to be a [https://wiki.openstack.org/wiki/Meetings/Scheduler subgroup of Nova] focused on scheduling VMs based on resource utilization. It includes a plugin framework for making arbitrary metrics available to the scheduler.<br />
<br />
* [https://docs.google.com/document/d/1ZbOFxAoibZbJmDWx1oOrOsDcov6Cuom5aaBIrupCD9E/edit?pli=1 Neutron policy group]: This group aims to add a policy API to Neutron, where tenants express policy between groups of networks, ports, etc., and that policy is enforced. Policy statements are of the form "for every network flow between groups A and B that satisfies these conditions, apply a constraint on that flow". The constraints that can be enforced on a flow will grow as the enforcement engine matures; currently, the constraints are Allow and Deny, but there are plans for quality-of-service constraints as well. <br />
<br />
* [https://wiki.openstack.org/wiki/OpenAttestation Open Attestation]: This project provides an SDK for verifying host integrity. It provides some policy-based management capabilities, though documentation is limited.<br />
<br />
* [https://blueprints.launchpad.net/nova/+spec/policy-based-scheduler Policy-based Scheduling Module (Nova blueprint)]: This effort aims to schedule Nova resources per client, per cluster of resources and per context (e.g. overload, time, etc.). A proof of concept was presented at the Demo Theater at OpenStack Juno Summit. <br />
<br />
* [https://docs.google.com/document/d/1DMsnGxQ3P-OwZCF3uxaUeEFaKX8LqUqmmgQ_7EVK7Y8/edit Tetris]: This effort provides condition-action policies (under certain conditions, execute these actions). It is intended to be a generic condition-action engine handling complex actions and optimization. This effort subsumes the [https://blueprints.launchpad.net/nova/+spec/resource-optimization-service Runtime Policies blueprint] within Nova. It also appears to subsume the [http://openstack-neat.org/ Neat] effort. Tetris and Congress have recently decided to merge because of their highly aligned goals and approaches.<br />
<br />
* [https://review.openstack.org/#/c/95907/6/specs/convergence.rst Convergence Engine (Heat spec)]: This effort separates the ideas of desired state and observed state for the objects Heat manages. The Convergence Engine will detect when the desired state and observed state differ and take action to eliminate those differences.<br />
<br />
* [http://docs.openstack.org/developer/swift/overview_policies.html Swift Storage Policies]: A Swift storage policy describes a virtual storage system that Swift implements with physical devices. Today each policy dictates how many partitions the storage system has, how many replicas of each object it should maintain, and the minimum amount of time that must elapse after a partition is moved to a different physical location before it can be moved again.<br />
<br />
* [https://wiki.openstack.org/wiki/Graffiti Graffiti]: Graffiti aims to store and query (hierarchical) metadata about OpenStack objects, e.g. tagging a Glance image with the software installed on that image. Currently, the team is working within other OpenStack projects to add user interfaces for people to create and query metadata and to store that metadata within the project's database. This project is about creating metadata, which could be useful for writing business policies, not about policies over that metadata.<br />
<br />
<br />
Congress is complementary to all of these efforts. It is intended to be a general-purpose policy component and hence could (potentially) express any of the policies described above. However, to enforce those policies Congress would carve off subpolicies and send them to the appropriate enforcement point in the cloud. For example, Congress might carve off the compute load-balancing policy and send it to the Runtime Policy engine and/or carve off the networking policy and send it to the Neutron policy engine. Part of the goal of Congress is to give administrators a single place to write and inspect the policy being enforced throughout the datacenter or cloud, distribute the relevant portions of that policy to all of the available enforcement points, and monitor the state of the cloud to let administrators know if the cloud is in compliance. (Congress also attempts to correct policy violations when they occur, but optimization policies such as many of those addressed above will not be enforceable by Congress directly.)<br />
<br />
<br />
[Thanks to Jay Lau, Gokul Kandiraju, and others on the mailing list for help compiling this list.]<br />
<br />
== How To Propose a New Feature ==<br />
To propose a new feature:<br />
# Create a [https://blueprints.launchpad.net/congress/+addspec blueprint] describing what your feature does and how it will work. Include just the name, title, and summary.<br />
# Ask at least one core reviewer to give you feedback. If the feature is complex or controversial, the core will ask you to develop a "spec" and add it to the congress-specs repo. A spec is a proxy for a blueprint that everyone in the community can comment on and help refine. If Launchpad, the software OpenStack uses for managing blueprints and bugs, made it possible for the community to add comments to a blueprint, we would not need specs at all.<br />
# Once your blueprint is approved, you may push code that implements that blueprint up to Gerrit, including the tag 'Implements-blueprint: <blueprint-name>'<br />
<br />
If a core reviewer asks for a spec, here's how you create one.<br />
# Check out the [https://github.com/openstack/congress-specs congress-specs repo].<br />
# Create a new file, whose name is similar to the blueprint name, in the current release folder, e.g. congress-specs/specs/kilo/your-spec-here.rst<br />
# Use reStructuredText (RST) (an [http://sphinx-doc.org/rest.html RST tutorial]) to describe your new feature by filling out the [https://github.com/openstack/congress-specs/blob/master/specs/template.rst spec template]. <br />
# Push your changes to the congress-specs repo, a process that is explained [https://wiki.openstack.org/wiki/Gerrit_Workflow here]<br />
# Go back to your blueprint and add a link (e.g. https://github.com/openstack/congress-specs/tree/master/specs/kilo/your-spec-here.rst) to your spec in the Specification Location field on the Blueprint details. <br />
# Participate in the discussion and refinement of your feature via [https://review.openstack.org/#/q/status:open+project:openstack/congress-specs,n,z Gerrit reviews]<br />
# Eventually, core reviewers will either reject or approve your spec<br />
# If the spec is approved, a core reviewer will merge it into the repo and approve your blueprint. <br />
<br />
Note: If your blueprint is either not approved or not implemented during a release cycle, it must be resubmitted for the next cycle.</div>Thinrichshttps://wiki.openstack.org/w/index.php?title=Congress&diff=81826Congress2015-05-26T16:44:00Z<p>Thinrichs: /* Policy as a service ("Congress") */</p>
<hr />
<div> [https://etherpad.openstack.org/p/congress-liberty-design-session Liberty design session etherpad]<br />
[https://goo.gl/bZVWSt Liberty Hands-on Instructions]<br />
== Policy as a service ("Congress") ==<br />
Here are some external resources with detailed information about Congress.<br />
<br />
* Code<br />
** Server<br />
*** [https://github.com/openstack/congress Server source code (github)] <br />
*** [https://launchpad.net/congress Server bugs (Launchpad)]<br />
*** [https://review.openstack.org/#/q/status:open+project:openstack/congress,n,z Outstanding server changes (Gerrit)]<br />
** Python client<br />
*** [https://github.com/openstack/python-congressclient Python client source code (github)]<br />
*** [https://launchpad.net/python-congressclient Python client bugs (Launchpad)]<br />
*** [https://review.openstack.org/#/q/status:open+project:openstack/python-congressclient,n,z Outstanding python client changes (Gerrit)]<br />
** [https://github.com/openstack/congress-specs Specs: suggestions for additional features for both server and client (github)]<br />
<br />
* Documents<br />
** [https://docs.google.com/a/vmware.com/document/d/1f2xokl9Tc47aV67KEua0PBRz4jsdSDLXth7dYe-jz6Q/edit Overall design doc]<br />
** [https://docs.google.com/document/d/1K9RkQuBSPN7Z2TmKfok7mw3E24otEGo8Pnsemxd5544/edit Data integration design doc]<br />
** [https://goo.gl/1E5MeY API design doc]<br />
** [https://drive.google.com/open?id=1ksDilJYXV-5AXWON8PLMedDKr9NpS8VbT0jIy_MIEtI&authuser=0 Delegation to VM-migration engine design doc]<br />
** [http://ruleyourcloud.com/ Blog]<br />
** [https://docs.google.com/file/d/0ByDz-eYOtswScTlmamlhLXpmTXc/edit Vancouver Summit Congress Overview slides]<br />
** [https://etherpad.openstack.org/p/congress-liberty-design-session Vancouver Summit Liberty etherpad]<br />
** [https://docs.google.com/document/d/1lXmMkUhiSZYK45POd5ungPjVR--Fs_wJHeQ6bXWwP44/pub Vancouver Hands On Lab instructions]<br />
<br />
*Meetings<br />
** [https://wiki.openstack.org/wiki/Meetings/Congress Meeting times]<br />
** [http://eavesdrop.openstack.org/meetings/congressteammeeting/2014/ Meeting history]<br />
<br />
<br />
If you'd like more information, join #congress on freenode, or contact Tim Hinrichs (timothy.l.hinrichs@gmail.com).<br />
<br />
=== Mission ===<br />
Congress is an OpenStack project to provide policy as a service across any collection of cloud services in order to offer governance and compliance for dynamic infrastructures.<br />
<br />
=== Why Congress===<br />
IT services will always be governed and brought into compliance with business-level policies. <br />
<br />
In the past, policy was enforced manually, e.g. by someone sending an email asking for an application to be added to the network, secured by specific firewall entries, connected to an agreed-on storage, and so on. In the cloud era, IT has become more agile: users expect immediate delivery of services, a level of responsiveness that is unattainable by the team responsible for governance. Hence, manual enforcement is no longer feasible. <br />
<br />
Both enterprises and vendors have fielded engines for enforcing policy (semi)-automatically, creating a fragmented market where enterprises reinvent the wheel while maintaining their own code, and vendors fail to meet enterprise needs, either for technical reasons or because their solutions require vertical integration and lock-in. <br />
<br />
The Congress policy service enables IT services to extend their OpenStack footprint by onboarding new applications while keeping the strong compliance and governance dictated by their own business policies. All of this leverages a community-driven implementation into which vendors can plug through a common interface.<br />
<br />
=== What is Congress ===<br />
Congress aims to provide an extensible open-source framework for governance and regulatory compliance across any cloud services (e.g. application, network, compute and storage) within a dynamic infrastructure. It is a cloud service whose sole responsibility is policy enforcement.<br />
<br />
Congress aims to include the following functionality:<br />
* Allow cloud administrators and tenants to use a high-level, general purpose, declarative language to describe business logic. The policy language does not include a fixed collection of policy types or built-in enforcement mechanisms; rather, a policy simply defines which states of the cloud are in compliance and which are not, where the state of the cloud is the collection of data provided by the cloud services available to Congress. Some examples:<br />
** Application A is only allowed to communicate with application B.<br />
** A virtual machine owned by tenant A should always have a public network connection if tenant A is a member of group B.<br />
** Virtual machine A should never be provisioned in a different geographic region than storage B.<br />
* Offer a pluggable architecture that connects to any collection of cloud services<br />
* Enforce policy<br />
** Proactively: preventing violations before they occur<br />
** Reactively: correcting violations after they occur<br />
** Interactively: give administrators insight into policy and its violations, e.g. identifying violations, explaining their causes, computing potential remediations, simulating a sequence of changes.<br />
<br />
=== Policy Language ===<br />
The policy language for Congress is Datalog, which is essentially SQL with a syntax closer to that of traditional programming languages. This declarative language was chosen because its semantics are familiar to a broad range of DevOps engineers, yet its syntax is terser, making it better suited to expressing real-world policies. The grammar is given below, followed by an example.<br />
<br />
<policy> ::= <rule>*<br /><br />
<rule> ::= <atom> COLONMINUS <literal> (COMMA <literal>)*<br /><br />
<literal> ::= <atom> <br /><br />
<literal> ::= NOT <atom> <br /><br />
<atom> ::= TABLENAME LPAREN <term> (COMMA <term>)* RPAREN <br /><br />
<term> ::= INTEGER | FLOAT | STRING | VARIABLE <br /><br />
<br />
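For instance, a single rule in this grammar can capture a generalized form of the earlier example "application A is only allowed to communicate with application B"; the table names talks_to and allowed_to_talk are hypothetical and used only for illustration.<br />
<br />
 error(app1, app2) :- talks_to(app1, app2), not allowed_to_talk(app1, app2)<br /><br />
<br />
Assuming talks_to reports observed communication (e.g. from a monitoring service) and allowed_to_talk lists the permitted pairs, error contains exactly the communications that violate the policy.<br />
<br />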
=== Use Cases and Examples===<br />
<br />
==== Use Cases ====<br />
Detailed use cases can be viewed in the Google doc here: [https://goo.gl/RM3W6W Congress Use Cases]. We are currently migrating these use cases to follow the [[#How_To_Create_a_Blueprint|Blueprint / Specs workflow]].<br />
<br />
Below we list use cases and the Congress release at which we hope to support that use case. Please feel free to add additional use cases that you would like Congress to support.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Use case !! Congress target release<br />
|-<br />
| Every network connected to a VM must either be public or owned by someone in the same group as the VM's owner || alpha<br />
|-<br />
| Define a group hierarchy for servers where base groups are defined in LDAP. When base-level group change occurs, update network ACLs. || alpha<br />
|-<br />
| Every time a vulnerability scanner reports a vulnerability, disconnect the VM from the network, patch the vulnerability, and reconnect. || beta<br />
|-<br />
| Express a rule to determine whether a [[Solum]] user may advance an assembly from one [[Solum/Environments|Environment]] to another. See adrian_otto on IRC for details. || TBD<br />
|}<br />
<br />
==== Examples ====<br />
<br />
Several examples of Congress policies can be viewed here: [https://goo.gl/UfeHEk Congress Examples]<br />
<br />
=== Future Features ===<br />
Here is a list of features that would be useful in future releases. We have not prioritized or assigned any of these, so if any of you developers think they sound fun, let us know and we can explain more.<br />
<br />
* Basic Policy language implementation<br />
** Multi-threaded Datalog implementation<br />
** Bottom-up datalog evaluation<br />
** Query optimization<br />
** Materialized implementation: automated view selection/inlining<br />
<br />
* Enhanced Policy language<br />
** ActiveDirectory facade<br />
** Syntax improvements (modals like insert/delete)<br />
** Support for compound objects (safe, stratified recursion)<br />
** Richer support for describing actions in Enforcement policy<br />
** Modules<br />
<br />
* Policy structure<br />
** Multi-tenant<br />
** Multi-stakeholder (tenants, finance, operations, etc.)<br />
<br />
* Enforcement<br />
** Execution of (rich) actions<br />
** Carve out subpolicies and push to other components, e.g. Neutron<br />
** Add consultation with Congress to other OS components, e.g. Nova/Neutron<br />
** Proper remediation enumeration with Classification+Action policies<br />
** Find ways of automatically choosing the proper remediation strategy (e.g. priorities/monotonicity)<br />
** Give cloud owner way of configuring how proactive/reactive/etc. based on information from separate policies.<br />
<br />
* Libraries<br />
** Data source drivers for common OS and non-OS components<br />
** HIPAA, etc. encoding<br />
** Ontologies for different sectors, e.g. finance<br />
<br />
* Policy Analysis<br />
** Look for infinite loops through Enforcement policy (using Action policy)<br />
** Compare Access control policy and Classification policy for redundancy<br />
** Change impact analysis<br />
<br />
* Dashboard<br />
** IDE for policy (different levels: raw-Datalog, AD, checkbox-based)<br />
** List violations<br />
** Explain violations (step-by-step tracing thru policy)<br />
** Simulate state change and action execution<br />
** Enumerate remediations for a given violation<br />
<br />
* Architecture and API<br />
** Formalize and implement full introspection and query APIs<br />
** Distribute across multiple nodes<br />
** Ensure Congress can use another Congress instance as data<br />
<br />
* Authentication and Access Control<br />
** Add authentication and access controls on API/dashboard<br />
** When remediating, which user(s) are executing actions? Does Congress need admin credentials for all its cloud services or are user credentials part of actions? Need proper storage for those credentials. Etc.<br />
<br />
=== Relationship to Other Projects ===<br />
<br />
'''Related OpenStack components'''<br />
* [https://wiki.openstack.org/wiki/Keystone Keystone]: Keystone is an identity service providing authentication and high-level authorization for OpenStack. Congress can leverage Keystone as an input for policies. For example, an auditor might want to ensure that the running system is consistent with current Keystone authorization decisions.<br />
<br />
* [https://wiki.openstack.org/wiki/Heat Heat]: Heat is an orchestration service for application provisioning and lifecycle management. Congress can ensure that applications managed by Heat are consistent with business policy.<br />
<br />
* [https://wiki.openstack.org/wiki/Mistral Mistral]: Mistral is a task management service, or in other words Workflow as a service. Its primary use cases include Cron-for-the-cloud, execution of long-running tasks, and big data analysis. Congress could potentially utilize Mistral to execute actions that bring the cloud back into policy compliance.<br />
<br />
'''Policy Initiatives'''<br />
<br />
* [https://blueprints.launchpad.net/nova/+spec/solver-scheduler SolverScheduler (Nova blueprint)]: The SolverScheduler provides an interface for using different constraint solvers to solve placement problems for Nova. Depending on how it is applied, it could be used for initial provisioning, re-balancing loads, or both.<br />
<br />
* [https://github.com/openstack/gantt Gantt]: A scheduler framework for use by different OpenStack components. It used to be a [https://wiki.openstack.org/wiki/Meetings/Scheduler subgroup of Nova] focused on scheduling VMs based on resource utilization. It includes a plugin framework for making arbitrary metrics available to the scheduler.<br />
<br />
* [https://docs.google.com/document/d/1ZbOFxAoibZbJmDWx1oOrOsDcov6Cuom5aaBIrupCD9E/edit?pli=1 Neutron policy group]: This group aims to add a policy API to Neutron, where tenants express policy between groups of networks, ports, etc., and that policy is enforced. Policy statements are of the form "for every network flow between groups A and B that satisfies these conditions, apply a constraint on that flow". The constraints that can be enforced on a flow will grow as the enforcement engine matures; currently, the constraints are Allow and Deny, but there are plans for quality-of-service constraints as well. <br />
<br />
* [https://wiki.openstack.org/wiki/OpenAttestation Open Attestation]: This project provides an SDK for verifying host integrity. It provides some policy-based management capabilities, though documentation is limited.<br />
<br />
* [https://blueprints.launchpad.net/nova/+spec/policy-based-scheduler Policy-based Scheduling Module (Nova blueprint)]: This effort aims to schedule Nova resources per client, per cluster of resources and per context (e.g. overload, time, etc.). A proof of concept was presented at the Demo Theater at OpenStack Juno Summit. <br />
<br />
* [https://docs.google.com/document/d/1DMsnGxQ3P-OwZCF3uxaUeEFaKX8LqUqmmgQ_7EVK7Y8/edit Tetris]: This effort provides condition-action policies (under certain conditions, execute these actions). It is intended to be a generic condition-action engine handling complex actions and optimization. This effort subsumes the [https://blueprints.launchpad.net/nova/+spec/resource-optimization-service Runtime Policies blueprint] within Nova. It also appears to subsume the [http://openstack-neat.org/ Neat] effort. Tetris and Congress have recently decided to merge because of their highly aligned goals and approaches.<br />
<br />
* [https://review.openstack.org/#/c/95907/6/specs/convergence.rst Convergence Engine (Heat spec)]: This effort separates the ideas of desired state and observed state for the objects Heat manages. The Convergence Engine will detect when the desired state and observed state differ and take action to eliminate those differences.<br />
<br />
* [http://docs.openstack.org/developer/swift/overview_policies.html Swift Storage Policies]: A Swift storage policy describes a virtual storage system that Swift implements with physical devices. Today each policy dictates how many partitions the storage system has, how many replicas of each object it should maintain, and the minimum amount of time that must elapse after a partition is moved to a different physical location before it can be moved again.<br />
<br />
* [https://wiki.openstack.org/wiki/Graffiti Graffiti]: Graffiti aims to store and query (hierarchical) metadata about OpenStack objects, e.g. tagging a Glance image with the software installed on that image. Currently, the team is working within other OpenStack projects to add user interfaces for people to create and query metadata and to store that metadata within the project's database. This project is about creating metadata, which could be useful for writing business policies, not about policies over that metadata.<br />
<br />
<br />
Congress is complementary to all of these efforts. It is intended to be a general-purpose policy component and hence could (potentially) express any of the policies described above. However, to enforce those policies Congress would carve off subpolicies and send them to the appropriate enforcement point in the cloud. For example, Congress might carve off the compute load-balancing policy and send it to the Runtime Policy engine and/or carve off the networking policy and send it to the Neutron policy engine. Part of the goal of Congress is to give administrators a single place to write and inspect the policy being enforced throughout the datacenter or cloud, distribute the relevant portions of that policy to all of the available enforcement points, and monitor the state of the cloud to let administrators know if the cloud is in compliance. (Congress also attempts to correct policy violations when they occur, but optimization policies such as many of those addressed above will not be enforceable by Congress directly.)<br />
<br />
<br />
[Thanks to Jay Lau, Gokul Kandiraju, and others on the mailing list for help compiling this list.]<br />
<br />
== How To Propose a New Feature ==<br />
To propose a new feature:<br />
# Create a [https://blueprints.launchpad.net/congress/+addspec blueprint] describing what your feature does and how it will work. Include just the name, title, and summary.<br />
# Ask at least one core reviewer to give you feedback. If the feature is complex or controversial, the core will ask you to develop a "spec" and add it to the congress-specs repo. A spec is a proxy for a blueprint that everyone in the community can comment on and help refine. If Launchpad, the software OpenStack uses for managing blueprints and bugs, made it possible for the community to add comments to a blueprint, we would not need specs at all.<br />
# Once your blueprint is approved, you may push code that implements that blueprint up to Gerrit, including the tag 'Implements-blueprint: <blueprint-name>'<br />
<br />
If a core reviewer asks for a spec, here's how you create one.<br />
# Check out the [https://github.com/openstack/congress-specs congress-specs repo].<br />
# Create a new file, whose name is similar to the blueprint name, in the current release folder, e.g. congress-specs/specs/kilo/your-spec-here.rst<br />
# Use reStructuredText (RST) (an [http://sphinx-doc.org/rest.html RST tutorial]) to describe your new feature by filling out the [https://github.com/openstack/congress-specs/blob/master/specs/template.rst spec template]. <br />
# Push your changes to the congress-specs repo, a process that is explained [https://wiki.openstack.org/wiki/Gerrit_Workflow here]<br />
# Go back to your blueprint and add a link (e.g. https://github.com/openstack/congress-specs/tree/master/specs/kilo/your-spec-here.rst) to your spec in the Specification Location field on the Blueprint details. <br />
# Participate in the discussion and refinement of your feature via [https://review.openstack.org/#/q/status:open+project:openstack/congress-specs,n,z Gerrit reviews]<br />
# Eventually, core reviewers will either reject or approve your spec<br />
# If the spec is approved, a core reviewer will merge it into the repo and approve your blueprint. <br />
<br />
Note: If your blueprint is either not approved or not implemented during a release cycle, it must be resubmitted for the next cycle.</div>Thinrichshttps://wiki.openstack.org/w/index.php?title=Congress&diff=81825Congress2015-05-26T16:40:33Z<p>Thinrichs: /* Policy as a service ("Congress") */</p>
<hr />
<div> [https://etherpad.openstack.org/p/congress-liberty-design-session Liberty design session etherpad]<br />
[https://goo.gl/bZVWSt Liberty Hands-on Instructions]<br />
== Policy as a service ("Congress") ==<br />
Here are some external resources with detailed information about Congress.<br />
<br />
* Code<br />
** Server<br />
*** [https://github.com/openstack/congress Server source code (github)] <br />
*** [https://launchpad.net/congress Server bugs (Launchpad)]<br />
*** [https://review.openstack.org/#/q/status:open+project:openstack/congress,n,z Outstanding server changes (Gerrit)]<br />
** Python client<br />
*** [https://github.com/openstack/python-congressclient Python client source code (github)]<br />
*** [https://launchpad.net/python-congressclient Python client bugs (Launchpad)]<br />
*** [https://review.openstack.org/#/q/status:open+project:openstack/python-congressclient,n,z Outstanding python client changes (Gerrit)]<br />
** [https://github.com/openstack/congress-specs Specs: suggestions for additional features for both server and client (github)]<br />
<br />
* Docs and slides<br />
** [https://docs.google.com/a/vmware.com/document/d/1f2xokl9Tc47aV67KEua0PBRz4jsdSDLXth7dYe-jz6Q/edit Overall design doc]<br />
** [https://docs.google.com/document/d/1K9RkQuBSPN7Z2TmKfok7mw3E24otEGo8Pnsemxd5544/edit Data integration design doc]<br />
** [https://goo.gl/1E5MeY API design doc]<br />
** [http://ruleyourcloud.com/ Blog]<br />
** [https://docs.google.com/file/d/0ByDz-eYOtswScTlmamlhLXpmTXc/edit Vancouver Summit Congress Overview]<br />
** [https://etherpad.openstack.org/p/congress-liberty-design-session Vancouver Summit Liberty etherpad]<br />
** [https://docs.google.com/document/d/1lXmMkUhiSZYK45POd5ungPjVR--Fs_wJHeQ6bXWwP44/pub Vancouver Hands On Lab instructions]<br />
<br />
*Meetings<br />
** [https://wiki.openstack.org/wiki/Meetings/Congress Meeting times]<br />
** [http://eavesdrop.openstack.org/meetings/congressteammeeting/2014/ Meeting history]<br />
<br />
<br />
If you'd like more information, join #congress on freenode, or contact Tim Hinrichs (timothy.l.hinrichs@gmail.com).<br />
<br />
=== Mission ===<br />
Congress is an OpenStack project to provide policy as a service across any collection of cloud services in order to offer governance and compliance for dynamic infrastructures.<br />
<br />
=== Why Congress===<br />
IT services will always be governed and brought into compliance with business-level policies. <br />
<br />
In the past, policy was enforced manually, e.g. by someone sending an email asking for an application to be added to the network, secured by specific firewall entries, connected to an agreed-on storage, and so on. In the cloud era, IT has become more agile: users expect immediate delivery of services, a level of responsiveness that is unattainable by the team responsible for governance. Hence, manual enforcement is no longer feasible. <br />
<br />
Both enterprises and vendors have fielded engines for enforcing policy (semi)-automatically, creating a fragmented market where enterprises reinvent the wheel while maintaining their own code, and vendors fail to meet enterprise needs, either for technical reasons or because their solutions require vertical integration and lock-in. <br />
<br />
The Congress policy service enables IT services to extend their OpenStack footprint by onboarding new applications while keeping the strong compliance and governance dictated by their own business policies. All of this leverages a community-driven implementation in which vendors can plug into a common interface.<br />
<br />
=== What is Congress ===<br />
Congress aims to provide an extensible open-source framework for governance and regulatory compliance across any cloud services (e.g. application, network, compute and storage) within a dynamic infrastructure. It is a cloud service whose sole responsibility is policy enforcement.<br />
<br />
Congress aims to include the following functionality:<br />
* Allow cloud administrators and tenants to use a high-level, general purpose, declarative language to describe business logic. The policy language does not include a fixed collection of policy types or built-in enforcement mechanisms; rather, a policy simply defines which states of the cloud are in compliance and which are not, where the state of the cloud is the collection of data provided by the cloud services available to Congress. Some examples:<br />
** Application A is only allowed to communicate with application B.<br />
** Virtual machine owned by tenant A should always have a public network connection if tenant A is part of the group B.<br />
** Virtual machine A should never be provisioned in a different geographic region than storage B.<br />
* Offer a pluggable architecture that connects to any collection of cloud services<br />
* Enforce policy<br />
** Proactively: preventing violations before they occur<br />
** Reactively: correcting violations after they occur<br />
** Interactively: give administrators insight into policy and its violations, e.g. identifying violations, explaining their causes, computing potential remediations, simulating a sequence of changes.<br />
<br />
=== Policy Language ===<br />
The policy language for Congress is Datalog, which is essentially SQL with a syntax closer to traditional programming languages. This declarative language was chosen because its semantics are well known to a broad range of DevOps practitioners, yet its syntax is more terse, making it better suited to expressing real-world policies. The grammar is given below.<br />
<br />
<policy> ::= <rule>*<br /><br />
<rule> ::= <atom> COLONMINUS <literal> (COMMA <literal>)*<br /><br />
<literal> ::= <atom> <br /><br />
<literal> ::= NOT <atom> <br /><br />
<atom> ::= TABLENAME LPAREN <term> (COMMA <term>)* RPAREN <br /><br />
<term> ::= INTEGER | FLOAT | STRING | VARIABLE <br /><br />
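<br />
For example, the third functionality example above (a virtual machine should never be provisioned in a different geographic region than its storage) could be sketched as the two rules below. This is a minimal illustration only; the table names nova_vm, attached, cinder_volume, and region are hypothetical placeholders, not the actual schemas exposed by the Nova and Cinder data source drivers.<br />
<br />
 error(vm) :- nova_vm(vm, vm_region), attached(vm, volume), cinder_volume(volume, volume_region), not same_region(vm_region, volume_region)<br /><br />
 same_region(r, r) :- region(r)<br /><br />
<br />
Here error is just another derived table; a row in it marks a VM that is out of compliance with this particular rule.<br />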
<br />
=== Use Cases and Examples===<br />
<br />
==== Use Cases ====<br />
Detailed use cases can be viewed in the Google doc here: [https://goo.gl/RM3W6W Congress Use Cases]. We are currently migrating these use cases to follow the [[#How_To_Create_a_Blueprint|Blueprint / Specs workflow]].<br />
<br />
Below we list use cases and the Congress release at which we hope to support that use case. Please feel free to add additional use cases that you would like Congress to support.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Use case !! Congress target release<br />
|-<br />
| Every network connected to a VM must either be public or owned by someone in the same group as the VM's owner || alpha<br />
|-<br />
| Define a group hierarchy for servers where base groups are defined in LDAP. When base-level group change occurs, update network ACLs. || alpha<br />
|-<br />
| Every time a vulnerability scanner reports a vulnerability, disconnect the VM from the network, patch the vulnerability, and reconnect. || beta<br />
|-<br />
| Express a rule to determine if [[Solum]] user may advance an assembly from one [[Solum/Environments|Environment]] to another. See adrian_otto on IRC for details. || TBD<br />
|}<br />
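<br />
As an illustration, the first use case in the table above (every network connected to a VM must either be public or owned by someone in the same group as the VM's owner) might be sketched in Datalog as shown below. The table names vm_network, vm_owner, network_owner, public_network, and group_member are hypothetical stand-ins for data source tables, not the real driver schemas.<br />
<br />
 error(vm) :- vm_network(vm, network), vm_owner(vm, vm_user), network_owner(network, net_user), not public_network(network), not same_group(vm_user, net_user)<br /><br />
 same_group(user1, user2) :- group_member(group, user1), group_member(group, user2)<br /><br />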
<br />
==== Examples ====<br />
<br />
Different examples on Congress policies can be viewed here: [https://goo.gl/UfeHEk Congress Examples]<br />
<br />
=== Future Features ===<br />
Here is a list of features that would be useful in future releases. We have not prioritized or assigned any of these, so if any of you developers think they sound fun, let us know and we can explain more.<br />
<br />
* Basic Policy language implementation<br />
** Multi-threaded Datalog implementation<br />
** Bottom-up datalog evaluation<br />
** Query optimization<br />
** Materialized implementation: automated view selection/inlining<br />
<br />
* Enhanced Policy language<br />
** ActiveDirectory facade<br />
** Syntax improvements (modals like insert/delete)<br />
** Support for compound objects (safe, stratified recursion)<br />
** Richer support for describing actions in Enforcement policy<br />
** Modules<br />
<br />
* Policy structure<br />
** Multi-tenant<br />
** Multi-stakeholder (tenants, finance, operations, etc.)<br />
<br />
* Enforcement<br />
** Execution of (rich) actions<br />
** Carve out subpolicies and push to other components, e.g. Neutron<br />
** Add consultation with Congress to other OS components, e.g. Nova/Neutron<br />
** Proper remediation enumeration with Classification+Action policies<br />
** Find ways of automatically choosing the proper remediation strategy (e.g. priorities/monotonicity)<br />
** Give cloud owner way of configuring how proactive/reactive/etc. based on information from separate policies.<br />
<br />
* Libraries<br />
** Data source drivers for common OS and non-OS components<br />
** HIPAA, etc. encoding<br />
** Ontologies for different sectors, e.g. finance<br />
<br />
* Policy Analysis<br />
** Look for infinite loops through Enforcement policy (using Action policy)<br />
** Compare Access control policy and Classification policy for redundancy<br />
** Change impact analysis<br />
<br />
* Dashboard<br />
** IDE for policy (different levels: raw-Datalog, AD, checkbox-based)<br />
** List violations<br />
** Explain violations (step-by-step tracing thru policy)<br />
** Simulate state change and action execution<br />
** Enumerate remediations for a given violation<br />
<br />
* Architecture and API<br />
** Formalize and implement full introspection and query APIs<br />
** Distribute across multiple nodes<br />
** Ensure Congress can use another Congress instance as data<br />
<br />
* Authentication and Access Control<br />
** Add authentication and access controls on API/dashboard<br />
** When remediating, which user(s) are executing actions? Does Congress need admin credentials for all its cloud services or are user credentials part of actions? Need proper storage for those credentials. Etc.<br />
<br />
=== Relationship to Other Projects ===<br />
<br />
'''Related OpenStack components'''<br />
* [https://wiki.openstack.org/wiki/Keystone Keystone]: Keystone is an identity service providing authentication and high-level authorization for OpenStack. Congress can leverage Keystone as an input for policies. For example, an auditor might want to ensure that the running system is consistent with current Keystone authorization decisions.<br />
<br />
* [https://wiki.openstack.org/wiki/Heat Heat]: Heat is an orchestration service for application provisioning and lifecycle management. Congress can ensure that applications managed by Heat are consistent with business policy.<br />
<br />
* [https://wiki.openstack.org/wiki/Mistral Mistral]: Mistral is a task management service, or in other words Workflow as a service. Its primary use cases include Cron-for-the-cloud, execution of long-running tasks, and big data analysis. Congress could potentially utilize Mistral to execute actions that bring the cloud back into policy compliance.<br />
<br />
'''Policy Initiatives'''<br />
<br />
* [https://blueprints.launchpad.net/nova/+spec/solver-scheduler SolverScheduler (Nova blueprint)]: The SolverScheduler provides an interface for using different constraint solvers to solve placement problems for Nova. Depending on how it is applied, it could be used for initial provisioning, re-balancing loads, or both.<br />
<br />
* [https://github.com/openstack/gantt Gantt]: A scheduler framework for use by different OpenStack components. Used to be a [https://wiki.openstack.org/wiki/Meetings/Scheduler subgroup of Nova] and focused on scheduling VMs based on resource utilization. Includes plugin framework for making arbitrary metrics available to the scheduler.<br />
<br />
* [https://docs.google.com/document/d/1ZbOFxAoibZbJmDWx1oOrOsDcov6Cuom5aaBIrupCD9E/edit?pli=1 Neutron policy group]: This group aims to add a policy API to Neutron, where tenants express policy between groups of networks, ports, etc., and that policy is enforced. Policy statements are of the form "for every network flow between groups A and B that satisfies these conditions, apply a constraint on that flow". The constraints that can be enforced on a flow will grow as the enforcement engine matures; currently, the constraints are Allow and Deny, but there are plans for quality-of-service constraints as well. <br />
<br />
* [https://wiki.openstack.org/wiki/OpenAttestation Open Attestation]: This project provides an SDK for verifying host integrity. It provides some policy-based management capabilities, though documentation is limited.<br />
<br />
* [https://blueprints.launchpad.net/nova/+spec/policy-based-scheduler Policy-based Scheduling Module (Nova blueprint)]: This effort aims to schedule Nova resources per client, per cluster of resources and per context (e.g. overload, time, etc.). A proof of concept was presented at the Demo Theater at OpenStack Juno Summit. <br />
<br />
* [https://docs.google.com/document/d/1DMsnGxQ3P-OwZCF3uxaUeEFaKX8LqUqmmgQ_7EVK7Y8/edit Tetris]: This effort provides condition-action policies (under certain conditions, execute these actions). It is intended to be a generic condition-action engine handling complex actions and optimization. This effort subsumes the [https://blueprints.launchpad.net/nova/+spec/resource-optimization-service Runtime Policies blueprint] within Nova. It also appears to subsume the [http://openstack-neat.org/ Neat] effort. Tetris and Congress have recently decided to merge because of their highly aligned goals and approaches.<br />
<br />
* [https://review.openstack.org/#/c/95907/6/specs/convergence.rst Convergence Engine (Heat spec)]: This effort separates the ideas of desired state and observed state for the objects Heat manages. The Convergence Engine will detect when the desired state and observed state differ and take action to eliminate those differences.<br />
<br />
* [http://docs.openstack.org/developer/swift/overview_policies.html Swift Storage Policies]: A Swift storage policy describes a virtual storage system that Swift implements with physical devices. Today each policy dictates how many partitions the storage system has, how many replicas of each object it should maintain, and the minimum amount of time before a partition can be moved to a different physical location since the last time it was moved.<br />
<br />
* [https://wiki.openstack.org/wiki/Graffiti Graffiti]: Graffiti aims to store and query (hierarchical) metadata about OpenStack objects, e.g. tagging a Glance image with the software installed on that image. Currently, the team is working within other OpenStack projects to add user interfaces for people to create and query metadata and to store that metadata within the project's database. This project is about creating metadata, which could be useful for writing business policies, not about policies over that metadata.<br />
<br />
<br />
Congress is complementary to all of these efforts. It is intended to be a general-purpose policy component and hence could (potentially) express any of the policies described above. However, to enforce those policies, Congress would carve off subpolicies and send them to the appropriate enforcement point in the cloud. For example, Congress might carve off the compute Load Balancing policy and send it to the Runtime Policy engine and/or carve off the networking policy and send it to the Neutron policy engine. Part of the goal of Congress is to give administrators a single place to write and inspect the policy being enforced throughout the datacenter or cloud, distribute the relevant portions of that policy to all of the available enforcement points, and monitor the state of the cloud to let administrators know if the cloud is in compliance. (Congress also attempts to correct policy violations when they occur, but optimization policies such as many of those addressed above will not be enforceable by Congress directly.)<br />
<br />
<br />
[Thanks to Jay Lau, Gokul Kandiraju, and others on the mailing list for help compiling this list.]<br />
<br />
== How To Propose a New Feature ==<br />
To propose a new feature:<br />
# Create a [https://blueprints.launchpad.net/congress/+addspec blueprint] describing what your feature does and how it will work. Include just the name, title, and summary.<br />
# Ask at least one core reviewer to give you feedback. If the feature is complex or controversial, the core will ask you to develop a "spec" and add it to the congress-specs repo. A spec is a proxy for a blueprint that everyone in the community can comment on and help refine. If Launchpad, the software OpenStack uses for managing blueprints and bugs, made it possible for the community to add comments to a blueprint, we would not need specs at all.<br />
# Once your blueprint is approved, you may push code that implements that blueprint up to Gerrit, including the tag 'Implements-blueprint: <blueprint-name>'<br />
<br />
If a core reviewer asks for a spec, here's how you create one.<br />
# Check out the [https://github.com/openstack/congress-specs congress-specs repo].<br />
# Create a new file, whose name is similar to the blueprint name, in the current release folder, e.g. congress-specs/specs/kilo/your-spec-here.rst<br />
# Use reStructuredText (RST) (an [http://sphinx-doc.org/rest.html RST tutorial]) to describe your new feature by filling out the [https://github.com/openstack/congress-specs/blob/master/specs/template.rst spec template]. <br />
# Push your changes to the congress-specs repo, a process that is explained [https://wiki.openstack.org/wiki/Gerrit_Workflow here]<br />
# Go back to your blueprint and add a link (e.g. https://github.com/openstack/congress-specs/tree/master/specs/kilo/your-spec-here.rst) to your spec in the Specification Location field on the Blueprint details. <br />
# Participate in the discussion and refinement of your feature via [https://review.openstack.org/#/q/status:open+project:openstack/congress-specs,n,z Gerrit reviews]<br />
# Eventually, core reviewers will either reject or approve your spec<br />
# If the spec is approved, a core reviewer will merge it into the repo and approve your blueprint. <br />
<br />
Note: If your blueprint is either not approved or not implemented during a release cycle, it must be resubmitted for the next cycle.</div>Thinrichshttps://wiki.openstack.org/w/index.php?title=Congress&diff=77862Congress2015-04-20T17:04:59Z<p>Thinrichs: Edited "proposing new feature" since our process has changed a bit.</p>
<hr />
<div> [https://etherpad.openstack.org/p/congress-liberty-design-session Liberty design session etherpad]<br />
<br />
== Policy as a service ("Congress") ==<br />
Here are some external resources with detailed information about Congress.<br />
<br />
* Code<br />
** Server<br />
*** [https://github.com/openstack/congress Server source code (github)] <br />
*** [https://launchpad.net/congress Server bugs (Launchpad)]<br />
*** [https://review.openstack.org/#/q/status:open+project:openstack/congress,n,z Outstanding server changes (Gerrit)]<br />
** Python client<br />
*** [https://github.com/openstack/python-congressclient Python client source code (github)]<br />
*** [https://launchpad.net/python-congressclient Python client bugs (Launchpad)]<br />
*** [https://review.openstack.org/#/q/status:open+project:openstack/python-congressclient,n,z Outstanding python client changes (Gerrit)]<br />
** [https://github.com/openstack/congress-specs Specs: suggestions for additional features for both server and client (github)]<br />
<br />
* Docs<br />
** [https://docs.google.com/a/vmware.com/document/d/1f2xokl9Tc47aV67KEua0PBRz4jsdSDLXth7dYe-jz6Q/edit Overall design doc]<br />
** [https://docs.google.com/document/d/1K9RkQuBSPN7Z2TmKfok7mw3E24otEGo8Pnsemxd5544/edit Data integration design doc]<br />
** [https://goo.gl/1E5MeY API design doc]<br />
** [http://ruleyourcloud.com/ Blog]<br />
<br />
*Meetings<br />
** [https://wiki.openstack.org/wiki/Meetings/Congress Meeting times]<br />
** [http://eavesdrop.openstack.org/meetings/congressteammeeting/2014/ Meeting history]<br />
** [https://www.openstack.org/assets/presentation-media/Congress-OpenStack-Atlanta-2014.pdf Slides from Atlanta summit]<br />
** [https://etherpad.openstack.org/p/juno-congress Notes from Atlanta summit]<br />
<br />
<br />
If you'd like more information, join #congress on freenode, or contact Tim Hinrichs (timothy.l.hinrichs@gmail.com).<br />
<br />
=== Mission ===<br />
Congress is an OpenStack project to provide policy as a service across any collection of cloud services in order to offer governance and compliance for dynamic infrastructures.<br />
<br />
=== Why Congress===<br />
IT services will always be governed and brought into compliance with business-level policies. <br />
<br />
In the past, policy was enforced manually, e.g. by someone sending an email asking for an application to be added to the network, secured by specific firewall entries, connected to an agreed-on storage, and so on. In the cloud era, IT has become more agile: users expect immediate delivery of services, a level of responsiveness that is unattainable by the team responsible for governance. Hence, manual enforcement is no longer feasible. <br />
<br />
Both enterprises and vendors have fielded engines for enforcing policy (semi)-automatically, creating a fragmented market where enterprises reinvent the wheel while maintaining their own code, and vendors fail to meet enterprise needs, either for technical reasons or because their solutions require vertical integration and lock-in. <br />
<br />
The Congress policy service enables IT services to extend their OpenStack footprint by onboarding new applications while keeping the strong compliance and governance dictated by their own business policies. All of this leverages a community-driven implementation in which vendors can plug into a common interface.<br />
<br />
=== What is Congress ===<br />
Congress aims to provide an extensible open-source framework for governance and regulatory compliance across any cloud services (e.g. application, network, compute and storage) within a dynamic infrastructure. It is a cloud service whose sole responsibility is policy enforcement.<br />
<br />
Congress aims to include the following functionality:<br />
* Allow cloud administrators and tenants to use a high-level, general purpose, declarative language to describe business logic. The policy language does not include a fixed collection of policy types or built-in enforcement mechanisms; rather, a policy simply defines which states of the cloud are in compliance and which are not, where the state of the cloud is the collection of data provided by the cloud services available to Congress. Some examples:<br />
** Application A is only allowed to communicate with application B.<br />
** Virtual machine owned by tenant A should always have a public network connection if tenant A is part of the group B.<br />
** Virtual machine A should never be provisioned in a different geographic region than storage B.<br />
* Offer a pluggable architecture that connects to any collection of cloud services<br />
* Enforce policy<br />
** Proactively: preventing violations before they occur<br />
** Reactively: correcting violations after they occur<br />
** Interactively: give administrators insight into policy and its violations, e.g. identifying violations, explaining their causes, computing potential remediations, simulating a sequence of changes.<br />
<br />
=== Policy Language ===<br />
The policy language for Congress is Datalog, which is essentially SQL with a syntax closer to traditional programming languages. This declarative language was chosen because its semantics are well known to a broad range of DevOps practitioners, yet its syntax is more terse, making it better suited to expressing real-world policies. The grammar is given below.<br />
<br />
<policy> ::= <rule>*<br /><br />
<rule> ::= <atom> COLONMINUS <literal> (COMMA <literal>)*<br /><br />
<literal> ::= <atom> <br /><br />
<literal> ::= NOT <atom> <br /><br />
<atom> ::= TABLENAME LPAREN <term> (COMMA <term>)* RPAREN <br /><br />
<term> ::= INTEGER | FLOAT | STRING | VARIABLE <br /><br />
<br />
=== Use Cases ===<br />
<br />
Detailed use cases can be viewed in the Google doc here: [https://goo.gl/RM3W6W Congress Use Cases]. We are currently migrating these use cases to follow the [[#How_To_Create_a_Blueprint|Blueprint / Specs workflow]].<br />
<br />
Below we list use cases and the Congress release at which we hope to support that use case. Please feel free to add additional use cases that you would like Congress to support.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Use case !! Congress target release<br />
|-<br />
| Every network connected to a VM must either be public or owned by someone in the same group as the VM's owner || alpha<br />
|-<br />
| Define a group hierarchy for servers where base groups are defined in LDAP. When base-level group change occurs, update network ACLs. || alpha<br />
|-<br />
| Every time a vulnerability scanner reports a vulnerability, disconnect the VM from the network, patch the vulnerability, and reconnect. || beta<br />
|-<br />
| Express a rule to determine if [[Solum]] user may advance an assembly from one [[Solum/Environments|Environment]] to another. See adrian_otto on IRC for details. || TBD<br />
|}<br />
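<br />
As a sketch of the second use case in the table above, the rules below build a one-level group hierarchy on top of base groups pulled from LDAP, and an ACL check is then written against the derived group_member table. The table names ldap_group, subgroup, network_acl_entry, and the group name "admins" are hypothetical and shown only to illustrate how derived tables compose.<br />
<br />
 group_member(group, user) :- ldap_group(group, user)<br /><br />
 group_member(group, user) :- subgroup(group, child_group), ldap_group(child_group, user)<br /><br />
 error(port) :- network_acl_entry(port, user), not group_member("admins", user)<br /><br />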
<br />
=== Future Features ===<br />
Here is a list of features that would be useful in future releases. We have not prioritized or assigned any of these, so if any of you developers think they sound fun, let us know and we can explain more.<br />
<br />
* Basic Policy language implementation<br />
** Multi-threaded Datalog implementation<br />
** Bottom-up datalog evaluation<br />
** Query optimization<br />
** Materialized implementation: automated view selection/inlining<br />
<br />
* Enhanced Policy language<br />
** ActiveDirectory facade<br />
** Syntax improvements (modals like insert/delete)<br />
** Support for compound objects (safe, stratified recursion)<br />
** Richer support for describing actions in Enforcement policy<br />
** Modules<br />
<br />
* Policy structure<br />
** Multi-tenant<br />
** Multi-stakeholder (tenants, finance, operations, etc.)<br />
<br />
* Enforcement<br />
** Execution of (rich) actions<br />
** Carve out subpolicies and push to other components, e.g. Neutron<br />
** Add consultation with Congress to other OS components, e.g. Nova/Neutron<br />
** Proper remediation enumeration with Classification+Action policies<br />
** Find ways of automatically choosing the proper remediation strategy (e.g. priorities/monotonicity)<br />
** Give cloud owner way of configuring how proactive/reactive/etc. based on information from separate policies.<br />
<br />
* Libraries<br />
** Data source drivers for common OS and non-OS components<br />
** HIPAA, etc. encoding<br />
** Ontologies for different sectors, e.g. finance<br />
<br />
* Policy Analysis<br />
** Look for infinite loops through Enforcement policy (using Action policy)<br />
** Compare Access control policy and Classification policy for redundancy<br />
** Change impact analysis<br />
<br />
* Dashboard<br />
** IDE for policy (different levels: raw-Datalog, AD, checkbox-based)<br />
** List violations<br />
** Explain violations (step-by-step tracing thru policy)<br />
** Simulate state change and action execution<br />
** Enumerate remediations for a given violation<br />
<br />
* Architecture and API<br />
** Formalize and implement full introspection and query APIs<br />
** Distribute across multiple nodes<br />
** Ensure Congress can use another Congress instance as data<br />
<br />
* Authentication and Access Control<br />
** Add authentication and access controls on API/dashboard<br />
** When remediating, which user(s) are executing actions? Does Congress need admin credentials for all its cloud services or are user credentials part of actions? Need proper storage for those credentials. Etc.<br />
<br />
=== Relationship to Other Projects ===<br />
<br />
'''Related OpenStack components'''<br />
* [https://wiki.openstack.org/wiki/Keystone Keystone]: Keystone is an identity service providing authentication and high-level authorization for OpenStack. Congress can leverage Keystone as an input for policies. For example, an auditor might want to ensure that the running system is consistent with current Keystone authorization decisions.<br />
<br />
* [https://wiki.openstack.org/wiki/Heat Heat]: Heat is an orchestration service for application provisioning and lifecycle management. Congress can ensure that applications managed by Heat are consistent with business policy.<br />
<br />
* [https://wiki.openstack.org/wiki/Mistral Mistral]: Mistral is a task management service, or in other words Workflow as a service. Its primary use cases include Cron-for-the-cloud, execution of long-running tasks, and big data analysis. Congress could potentially utilize Mistral to execute actions that bring the cloud back into policy compliance.<br />
<br />
'''Policy Initiatives'''<br />
<br />
* [https://blueprints.launchpad.net/nova/+spec/solver-scheduler SolverScheduler (Nova blueprint)]: The SolverScheduler provides an interface for using different constraint solvers to solve placement problems for Nova. Depending on how it is applied, it could be used for initial provisioning, re-balancing loads, or both.<br />
<br />
* [https://github.com/openstack/gantt Gantt]: A scheduler framework for use by different OpenStack components. Used to be a [https://wiki.openstack.org/wiki/Meetings/Scheduler subgroup of Nova] and focused on scheduling VMs based on resource utilization. Includes plugin framework for making arbitrary metrics available to the scheduler.<br />
<br />
* [https://docs.google.com/document/d/1ZbOFxAoibZbJmDWx1oOrOsDcov6Cuom5aaBIrupCD9E/edit?pli=1 Neutron policy group]: This group aims to add a policy API to Neutron, where tenants express policy between groups of networks, ports, etc., and that policy is enforced. Policy statements are of the form "for every network flow between groups A and B that satisfies these conditions, apply a constraint on that flow". The constraints that can be enforced on a flow will grow as the enforcement engine matures; currently, the constraints are Allow and Deny, but there are plans for quality-of-service constraints as well. <br />
<br />
* [https://wiki.openstack.org/wiki/OpenAttestation Open Attestation]: This project provides an SDK for verifying host integrity. It provides some policy-based management capabilities, though documentation is limited.<br />
<br />
* [https://blueprints.launchpad.net/nova/+spec/policy-based-scheduler Policy-based Scheduling Module (Nova blueprint)]: This effort aims to schedule Nova resources per client, per cluster of resources and per context (e.g. overload, time, etc.). A proof of concept was presented at the Demo Theater at OpenStack Juno Summit. <br />
<br />
* [https://docs.google.com/document/d/1DMsnGxQ3P-OwZCF3uxaUeEFaKX8LqUqmmgQ_7EVK7Y8/edit Tetris]: This effort provides condition-action policies (under certain conditions, execute these actions). It is intended to be a generic condition-action engine handling complex actions and optimization. This effort subsumes the [https://blueprints.launchpad.net/nova/+spec/resource-optimization-service Runtime Policies blueprint] within Nova. It also appears to subsume the [http://openstack-neat.org/ Neat] effort. Tetris and Congress have recently decided to merge because of their highly aligned goals and approaches.<br />
<br />
* [https://review.openstack.org/#/c/95907/6/specs/convergence.rst Convergence Engine (Heat spec)]: This effort separates the ideas of desired state and observed state for the objects Heat manages. The Convergence Engine will detect when the desired state and observed state differ and take action to eliminate those differences.<br />
<br />
* [http://docs.openstack.org/developer/swift/overview_policies.html Swift Storage Policies]: A Swift storage policy describes a virtual storage system that Swift implements with physical devices. Today each policy dictates how many partitions the storage system has, how many replicas of each object it should maintain, and the minimum amount of time before a partition can be moved to a different physical location since the last time it was moved.<br />
<br />
* [https://wiki.openstack.org/wiki/Graffiti Graffiti]: Graffiti aims to store and query (hierarchical) metadata about OpenStack objects, e.g. tagging a Glance image with the software installed on that image. Currently, the team is working within other OpenStack projects to add user interfaces for people to create and query metadata and to store that metadata within the project's database. This project is about creating metadata, which could be useful for writing business policies, not about policies over that metadata.<br />
<br />
<br />
Congress is complementary to all of these efforts. It is intended to be a general-purpose policy component and hence could (potentially) express any of the policies described above. However, to enforce those policies, Congress would carve off subpolicies and send them to the appropriate enforcement point in the cloud. For example, Congress might carve off the compute Load Balancing policy and send it to the Runtime Policy engine and/or carve off the networking policy and send it to the Neutron policy engine. Part of the goal of Congress is to give administrators a single place to write and inspect the policy being enforced throughout the datacenter or cloud, distribute the relevant portions of that policy to all of the available enforcement points, and monitor the state of the cloud to let administrators know if the cloud is in compliance. (Congress also attempts to correct policy violations when they occur, but optimization policies such as many of those addressed above will not be enforceable by Congress directly.)<br />
<br />
<br />
[Thanks to Jay Lau, Gokul Kandiraju, and others on the mailing list for help compiling this list.]<br />
<br />
== How To Propose a New Feature ==<br />
To propose a new feature:<br />
# Create a [https://blueprints.launchpad.net/congress/+addspec blueprint] describing what your feature does and how it will work. Include just the name, title, and summary.<br />
# Ask at least one core reviewer to give you feedback. If the feature is complex or controversial, the core will ask you to develop a "spec" and add it to the congress-specs repo. A spec is a proxy for a blueprint that everyone in the community can comment on and help refine. If Launchpad, the software OpenStack uses for managing blueprints and bugs, made it possible for the community to add comments to a blueprint, we would not need specs at all.<br />
# Once your blueprint is approved, you may push code that implements that blueprint up to Gerrit, including the tag 'Implements-blueprint: <blueprint-name>'<br />
<br />
If a core reviewer asks for a spec, here's how you create one.<br />
# Check out the [https://github.com/openstack/congress-specs congress-specs repo].<br />
# Create a new file, whose name is similar to the blueprint name, in the current release folder, e.g. congress-specs/specs/kilo/your-spec-here.rst<br />
# Use reStructuredText (RST) (an [http://sphinx-doc.org/rest.html RST tutorial]) to describe your new feature by filling out the [https://github.com/openstack/congress-specs/blob/master/specs/template.rst spec template]. <br />
# Push your changes to the congress-specs repo, a process that is explained [https://wiki.openstack.org/wiki/Gerrit_Workflow here]<br />
# Go back to your blueprint and add a link (e.g. https://github.com/openstack/congress-specs/tree/master/specs/kilo/your-spec-here.rst) to your spec in the Specification Location field on the Blueprint details. <br />
# Participate in the discussion and refinement of your feature via [https://review.openstack.org/#/q/status:open+project:openstack/congress-specs,n,z Gerrit reviews]<br />
# Eventually, core reviewers will either reject or approve your spec<br />
# If the spec is approved, a core reviewer will merge it into the repo and approve your blueprint. <br />
<br />
Note: If your blueprint is either not approved or not implemented during a release cycle, it must be resubmitted for the next cycle.</div>Thinrichshttps://wiki.openstack.org/w/index.php?title=Congress&diff=77860Congress2015-04-20T16:51:30Z<p>Thinrichs: </p>
<hr />
<div> [https://etherpad.openstack.org/p/congress-liberty-design-session Liberty design session etherpad]<br />
<br />
== Policy as a service ("Congress") ==<br />
Here are some external resources with detailed information about Congress.<br />
<br />
* Code<br />
** Server<br />
*** [https://github.com/openstack/congress Server source code (github)] <br />
*** [https://launchpad.net/congress Server bugs (Launchpad)]<br />
*** [https://review.openstack.org/#/q/status:open+project:openstack/congress,n,z Outstanding server changes (Gerrit)]<br />
** Python client<br />
*** [https://github.com/openstack/python-congressclient Python client source code (github)]<br />
*** [https://launchpad.net/python-congressclient Python client bugs (Launchpad)]<br />
*** [https://review.openstack.org/#/q/status:open+project:openstack/python-congressclient,n,z Outstanding python client changes (Gerrit)]<br />
** [https://github.com/openstack/congress-specs Specs: suggestions for additional features for both server and client (github)]<br />
<br />
* Docs<br />
** [https://docs.google.com/a/vmware.com/document/d/1f2xokl9Tc47aV67KEua0PBRz4jsdSDLXth7dYe-jz6Q/edit Overall design doc]<br />
** [https://docs.google.com/document/d/1K9RkQuBSPN7Z2TmKfok7mw3E24otEGo8Pnsemxd5544/edit Data integration design doc]<br />
** [https://goo.gl/1E5MeY API design doc]<br />
** [http://ruleyourcloud.com/ Blog]<br />
<br />
*Meetings<br />
** [https://wiki.openstack.org/wiki/Meetings/Congress Meeting times]<br />
** [http://eavesdrop.openstack.org/meetings/congressteammeeting/2014/ Meeting history]<br />
** [https://www.openstack.org/assets/presentation-media/Congress-OpenStack-Atlanta-2014.pdf Slides from Atlanta summit]<br />
** [https://etherpad.openstack.org/p/juno-congress Notes from Atlanta summit]<br />
<br />
<br />
If you'd like more information, join #congress on freenode, or contact Tim Hinrichs (timothy.l.hinrichs@gmail.com).<br />
<br />
=== Mission ===<br />
Congress is an OpenStack project to provide policy as a service across any collection of cloud services in order to offer governance and compliance for dynamic infrastructures.<br />
<br />
=== Why Congress===<br />
IT services will always be governed and brought into compliance with business-level policies. <br />
<br />
In the past, policy was enforced manually, e.g. by someone sending an email asking for an application to be added to the network, secured by specific firewall entries, connected to an agreed-on storage, and so on. In the cloud era, IT has become more agile: users expect immediate delivery of services, a level of responsiveness that is unattainable by the team responsible for governance. Hence, manual enforcement is no longer feasible. <br />
<br />
Both enterprises and vendors have fielded engines for enforcing policy (semi)-automatically, creating a fragmented market where enterprises reinvent the wheel while maintaining their own code, and vendors fail to meet enterprise needs, either for technical reasons or because their solutions require vertical integration and lock-in. <br />
<br />
The Congress policy service enables IT services to extend their OpenStack footprint by onboarding new applications while keeping the strong compliance and governance dictated by their own business policies. All of this leverages a community-driven implementation in which vendors can plug into a common interface.<br />
<br />
=== What is Congress ===<br />
Congress aims to provide an extensible open-source framework for governance and regulatory compliance across any cloud services (e.g. application, network, compute and storage) within a dynamic infrastructure. It is a cloud service whose sole responsibility is policy enforcement.<br />
<br />
Congress aims to include the following functionality:<br />
* Allow cloud administrators and tenants to use a high-level, general purpose, declarative language to describe business logic. The policy language does not include a fixed collection of policy types or built-in enforcement mechanisms; rather, a policy simply defines which states of the cloud are in compliance and which are not, where the state of the cloud is the collection of data provided by the cloud services available to Congress. Some examples:<br />
** Application A is only allowed to communicate with application B.<br />
** Virtual machine owned by tenant A should always have a public network connection if tenant A is part of the group B.<br />
** Virtual machine A should never be provisioned in a different geographic region than storage B.<br />
* Offer a pluggable architecture that connects to any collection of cloud services<br />
* Enforce policy<br />
** Proactively: preventing violations before they occur<br />
** Reactively: correcting violations after they occur<br />
** Interactively: give administrators insight into policy and its violations, e.g. identifying violations, explaining their causes, computing potential remediations, simulating a sequence of changes.<br />
<br />
=== Policy Language ===<br />
The policy language for Congress is Datalog, which is essentially SQL with a syntax closer to traditional programming languages. This declarative language was chosen because its semantics are well known to a broad range of DevOps practitioners, yet its syntax is more terse, making it better suited to expressing real-world policies. The grammar is given below.<br />
<br />
<policy> ::= <rule>*<br /><br />
<rule> ::= <atom> COLONMINUS <literal> (COMMA <literal>)*<br /><br />
<literal> ::= <atom> <br /><br />
<literal> ::= NOT <atom> <br /><br />
<atom> ::= TABLENAME LPAREN <term> (COMMA <term>)* RPAREN <br /><br />
<term> ::= INTEGER | FLOAT | STRING | VARIABLE <br /><br />
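<br />
As a concrete reading of the grammar, the first functionality example above (application A is only allowed to communicate with application B) could be approximated by a single rule. The tables flow and whitelisted are hypothetical placeholders for whatever the network data source and an operator-supplied whitelist actually provide.<br />
<br />
 error(src_app, dst_app) :- flow(src_app, dst_app), not whitelisted(src_app, dst_app)<br /><br />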
<br />
=== Use Cases ===<br />
<br />
Detailed use cases can be viewed in the Google doc here: [https://goo.gl/RM3W6W Congress Use Cases]. We are currently migrating these use cases to follow the [[#How_To_Create_a_Blueprint|Blueprint / Specs workflow]].<br />
<br />
Below we list use cases and the Congress release at which we hope to support that use case. Please feel free to add additional use cases that you would like Congress to support.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Use case !! Congress target release<br />
|-<br />
| Every network connected to a VM must either be public or owned by someone in the same group as the VM's owner || alpha<br />
|-<br />
| Define a group hierarchy for servers where base groups are defined in LDAP. When base-level group change occurs, update network ACLs. || alpha<br />
|-<br />
| Every time a vulnerability scanner reports a vulnerability, disconnect the VM from the network, patch the vulnerability, and reconnect. || beta<br />
|-<br />
| Express a rule to determine if [[Solum]] user may advance an assembly from one [[Solum/Environments|Environment]] to another. See adrian_otto on IRC for details. || TBD<br />
|}<br />
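<br />
For the vulnerability-scanner use case above, only the condition side is expressible as a Datalog rule; the disconnect, patch, and reconnect steps would be carried out by Congress's reactive enforcement rather than by the rule itself. The table names scanner_vulnerability and vm_port are hypothetical.<br />
<br />
 error(vm, port) :- scanner_vulnerability(vm), vm_port(vm, port)<br /><br />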
<br />
=== Future Features ===<br />
Here is a list of features that would be useful in future releases. We have not prioritized or assigned any of these, so if any of you developers think they sound fun, let us know and we can explain more.<br />
<br />
* Basic Policy language implementation<br />
** Multi-threaded Datalog implementation<br />
** Bottom-up datalog evaluation<br />
** Query optimization<br />
** Materialized implementation: automated view selection/inlining<br />
<br />
* Enhanced Policy language<br />
** ActiveDirectory facade<br />
** Syntax improvements (modals like insert/delete)<br />
** Support for compound objects (safe, stratified recursion)<br />
** Richer support for describing actions in Enforcement policy<br />
** Modules<br />
<br />
* Policy structure<br />
** Multi-tenant<br />
** Multi-stakeholder (tenants, finance, operations, etc.)<br />
<br />
* Enforcement<br />
** Execution of (rich) actions<br />
** Carve out subpolicies and push to other components, e.g. Neutron<br />
** Add consultation with Congress to other OS components, e.g. Nova/Neutron<br />
** Proper remediation enumeration with Classification+Action policies<br />
** Find ways of automatically choosing the proper remediation strategy (e.g. priorities/monotonicity)<br />
** Give cloud owner way of configuring how proactive/reactive/etc. based on information from separate policies.<br />
<br />
* Libraries<br />
** Data source drivers for common OS and non-OS components<br />
** HIPAA, etc. encoding<br />
** Ontologies for different sectors, e.g. finance<br />
<br />
* Policy Analysis<br />
** Look for infinite loops through Enforcement policy (using Action policy)<br />
** Compare Access control policy and Classification policy for redundancy<br />
** Change impact analysis<br />
<br />
* Dashboard<br />
** IDE for policy (different levels: raw-Datalog, AD, checkbox-based)<br />
** List violations<br />
** Explain violations (step-by-step tracing thru policy)<br />
** Simulate state change and action execution<br />
** Enumerate remediations for a given violation<br />
<br />
* Architecture and API<br />
** Formalize and implement full introspection and query APIs<br />
** Distribute across multiple nodes<br />
** Ensure Congress can use another Congress instance as data<br />
<br />
* Authentication and Access Control<br />
** Add authentication and access controls on API/dashboard<br />
** When remediating, which user(s) are executing actions? Does Congress need admin credentials for all its cloud services or are user credentials part of actions? Need proper storage for those credentials. Etc.<br />
<br />
=== Relationship to Other Projects ===<br />
<br />
'''Related OpenStack components'''<br />
* [https://wiki.openstack.org/wiki/Keystone Keystone]: Keystone is an identity service providing authentication and high-level authorization for OpenStack. Congress can leverage Keystone as an input for policies. For example, an auditor might want to ensure that the running system is consistent with current Keystone authorization decisions.<br />
<br />
* [https://wiki.openstack.org/wiki/Heat Heat]: Heat is an orchestration service for application provisioning and lifecycle management. Congress can ensure that applications managed by Heat are consistent with business policy.<br />
<br />
* [https://wiki.openstack.org/wiki/Mistral Mistral]: Mistral is a task management service, or in other words Workflow as a service. Its primary use cases include Cron-for-the-cloud, execution of long-running tasks, and big data analysis. Congress could potentially utilize Mistral to execute actions that bring the cloud back into policy compliance.<br />
<br />
'''Policy Initiatives'''<br />
<br />
* [https://blueprints.launchpad.net/nova/+spec/solver-scheduler SolverScheduler (Nova blueprint)]: The SolverScheduler provides an interface for using different constraint solvers to solve placement problems for Nova. Depending on how it is applied, it could be used for initial provisioning, re-balancing loads, or both.<br />
<br />
* [https://github.com/openstack/gantt Gantt]: A scheduler framework for use by different OpenStack components. Used to be a [https://wiki.openstack.org/wiki/Meetings/Scheduler subgroup of Nova] and focused on scheduling VMs based on resource utilization. Includes plugin framework for making arbitrary metrics available to the scheduler.<br />
<br />
* [https://docs.google.com/document/d/1ZbOFxAoibZbJmDWx1oOrOsDcov6Cuom5aaBIrupCD9E/edit?pli=1 Neutron policy group]: This group aims to add a policy API to Neutron, where tenants express policy between groups of networks, ports, etc., and that policy is enforced. Policy statements are of the form "for every network flow between groups A and B that satisfies these conditions, apply a constraint on that flow". The constraints that can be enforced on a flow will grow as the enforcement engine matures; currently, the constraints are Allow and Deny, but there are plans for quality-of-service constraints as well. <br />
<br />
* [https://wiki.openstack.org/wiki/OpenAttestation Open Attestation]: This project provides an SDK for verifying host integrity. It provides some policy-based management capabilities, though documentation is limited.<br />
<br />
* [https://blueprints.launchpad.net/nova/+spec/policy-based-scheduler Policy-based Scheduling Module (Nova blueprint)]: This effort aims to schedule Nova resources per client, per cluster of resources and per context (e.g. overload, time, etc.). A proof of concept was presented at the Demo Theater at OpenStack Juno Summit. <br />
<br />
* [https://docs.google.com/document/d/1DMsnGxQ3P-OwZCF3uxaUeEFaKX8LqUqmmgQ_7EVK7Y8/edit Tetris]: This effort provides condition-action policies (under certain conditions, execute these actions). It is intended to be a generic condition-action engine handling complex actions and optimization. This effort subsumes the [https://blueprints.launchpad.net/nova/+spec/resource-optimization-service Runtime Policies blueprint] within Nova. It also appears to subsume the [http://openstack-neat.org/ Neat] effort. Tetris and Congress have recently decided to merge because of their highly aligned goals and approaches.<br />
<br />
* [https://review.openstack.org/#/c/95907/6/specs/convergence.rst Convergence Engine (Heat spec)]: This effort separates the ideas of desired state and observed state for the objects Heat manages. The Convergence Engine will detect when the desired state and observed state differ and take action to eliminate those differences.<br />
<br />
* [http://docs.openstack.org/developer/swift/overview_policies.html Swift Storage Policies]: A Swift storage policy describes a virtual storage system that Swift implements with physical devices. Today each policy dictates how many partitions the storage system has, how many replicas of each object it should maintain, and the minimum amount of time before a partition can be moved to a different physical location since the last time it was moved.<br />
<br />
* [https://wiki.openstack.org/wiki/Graffiti Graffiti]: Graffiti aims to store and query (hierarchical) metadata about OpenStack objects, e.g. tagging a Glance image with the software installed on that image. Currently, the team is working within other OpenStack projects to add user interfaces for people to create and query metadata and to store that metadata within the project's database. This project is about creating metadata, which could be useful for writing business policies, not about policies over that metadata.<br />
<br />
<br />
Congress is complementary to all of these efforts. It is intended to be a general-purpose policy component and hence could (potentially) express any of the policies described above. However, to enforce those policies, Congress would carve off subpolicies and send them to the appropriate enforcement point in the cloud. For example, Congress might carve off the compute Load Balancing policy and send it to the Runtime Policy engine and/or carve off the networking policy and send it to the Neutron policy engine. Part of the goal of Congress is to give administrators a single place to write and inspect the policy being enforced throughout the datacenter or cloud, distribute the relevant portions of that policy to all of the available enforcement points, and monitor the state of the cloud to let administrators know if the cloud is in compliance. (Congress also attempts to correct policy violations when they occur, but optimization policies such as many of those addressed above will not be enforceable by Congress directly.)<br />
<br />
<br />
[Thanks to Jay Lau, Gokul Kandiraju, and others on the mailing list for help compiling this list.]<br />
<br />
== How To Propose a New Feature ==<br />
Proposing a new feature requires you to create two descriptions of that feature: a blueprint and a spec. Before code that implements a new feature is merged, its blueprint must be approved. A blueprint is approved once its spec is approved. A spec is a proxy for a blueprint that everyone in the community can comment on and help refine. (If Launchpad, the software OpenStack uses for managing blueprints and bugs, made it possible for the community to add comments to a blueprint, we would not need specs at all.)<br />
# Create a [https://blueprints.launchpad.net/congress/+addspec blueprint] briefly describing the feature. Include just the name, title, and summary.<br />
# Create an additional description of your feature, called a ''spec'', and add it to the congress-specs repo. <br />
## Check out the [https://github.com/openstack/congress-specs congress-specs repo].<br />
## Create a new file, whose name is similar to the blueprint name, in the current release folder, e.g. congress-specs/specs/kilo/your-spec-here.rst<br />
## Use reStructuredText (RST) (an [http://sphinx-doc.org/rest.html RST tutorial]) to describe your new feature by filling out the [https://github.com/openstack/congress-specs/blob/master/specs/template.rst spec template]. <br />
## Push your changes to the congress-specs repo, a process that is explained [https://wiki.openstack.org/wiki/Gerrit_Workflow here]<br />
# Go back to your blueprint and add a link (e.g. https://github.com/openstack/congress-specs/tree/master/specs/kilo/your-spec-here.rst) to your spec in the Specification Location field on the Blueprint details. <br />
# Participate in the discussion and refinement of your feature via [https://review.openstack.org/#/q/status:open+project:openstack/congress-specs,n,z Gerrit reviews]<br />
# Eventually, core reviewers will either reject or approve your spec<br />
# If the spec is approved, a core reviewer will merge it into the repo and approve your blueprint. You then copy the finalized details from your spec into your blueprint.<br />
# If your blueprint is either not approved or not implemented during a release cycle, it must be resubmitted for the next cycle.<br />
<br />
[[Category: Working_Groups]]</div>Thinrichshttps://wiki.openstack.org/w/index.php?title=Congress&diff=77859Congress2015-04-20T16:50:10Z<p>Thinrichs: Changed "stackforge" to "openstack", since repos all moved.</p>
<hr />
<div> [https://etherpad.openstack.org/p/congress-liberty-design-session Liberty design session etherpad]<br />
<br />
== Policy as a service ("Congress") ==<br />
Here are some external resources with detailed information about Congress.<br />
<br />
* Code<br />
** Server<br />
*** [https://github.com/openstack/congress Server source code (github)] <br />
*** [https://launchpad.net/congress Server bugs (Launchpad)]<br />
*** [https://review.openstack.org/#/q/status:open+project:openstack/congress,n,z Outstanding server changes (Gerrit)]<br />
** Python client<br />
*** [https://github.com/openstack/python-congressclient Python client source code (github)]<br />
*** [https://launchpad.net/python-congressclient Python client bugs (Launchpad)]<br />
*** [https://review.openstack.org/#/q/status:open+project:openstack/python-congressclient,n,z Outstanding python client changes (Gerrit)]<br />
** [https://github.com/openstack/congress-specs Specs: suggestions for additional features for both server and client (github)]<br />
<br />
* Docs<br />
** [https://docs.google.com/a/vmware.com/document/d/1f2xokl9Tc47aV67KEua0PBRz4jsdSDLXth7dYe-jz6Q/edit Overall design doc]<br />
** [https://docs.google.com/document/d/1K9RkQuBSPN7Z2TmKfok7mw3E24otEGo8Pnsemxd5544/edit Data integration design doc]<br />
** [https://goo.gl/1E5MeY API design doc]<br />
** [http://ruleyourcloud.com/ Blog]<br />
<br />
*Meetings<br />
** [https://wiki.openstack.org/wiki/Meetings/Congress Meeting times]<br />
** [http://eavesdrop.openstack.org/meetings/congressteammeeting/2014/ Meeting history]<br />
** [https://www.openstack.org/assets/presentation-media/Congress-OpenStack-Atlanta-2014.pdf Slides from Atlanta summit]<br />
** [https://etherpad.openstack.org/p/juno-congress Notes from Atlanta summit]<br />
<br />
<br />
If you'd like more information, join #congress on freenode, or contact Tim Hinrichs (timothy.l.hinrichs@gmail.com).<br />
<br />
=== Mission ===<br />
Congress is an OpenStack project to provide policy as a service across any collection of cloud services in order to offer governance and compliance for dynamic infrastructures.<br />
<br />
=== Why Congress===<br />
IT services will always be governed and brought into compliance with business-level policies. <br />
<br />
In the past, policy was enforced manually, e.g. by someone sending an email asking for an application to be added to the network, secured by specific firewall entries, connected to an agreed-on storage, and so on. In the cloud era, IT has become more agile: users expect immediate delivery of services, a level of responsiveness that is unattainable by the team responsible for governance. Hence, manual enforcement is no longer feasible. <br />
<br />
Both enterprises and vendors have fielded engines for enforcing policy (semi)-automatically, creating a fragmented market where enterprises reinvent the wheel while maintaining their own code, and vendors fail to meet enterprise needs, either for technical reasons or because their solutions require vertical integration and lock-in. <br />
<br />
The Congress policy service enables IT services to extend their OpenStack footprint by onboarding new applications while keeping the strong compliance and governance dictated by their own business policies. All of this leverages a community-driven implementation where vendors can plug into a common interface.<br />
<br />
=== What is Congress ===<br />
Congress aims to provide an extensible open-source framework for governance and regulatory compliance across any cloud services (e.g. application, network, compute and storage) within a dynamic infrastructure. It is a cloud service whose sole responsibility is policy enforcement.<br />
<br />
Congress aims to include the following functionality:<br />
* Allow cloud administrators and tenants to use a high-level, general purpose, declarative language to describe business logic. The policy language does not include a fixed collection of policy types or built-in enforcement mechanisms; rather, a policy simply defines which states of the cloud are in compliance and which are not, where the state of the cloud is the collection of data provided by the cloud services available to Congress. Some examples:<br />
** Application A is only allowed to communicate with application B.<br />
** A virtual machine owned by tenant A should always have a public network connection if tenant A is part of group B.<br />
** Virtual machine A should never be provisioned in a different geographic region than storage B.<br />
* Offer a pluggable architecture that connects to any collection of cloud services<br />
* Enforce policy<br />
** Proactively: preventing violations before they occur<br />
** Reactively: correcting violations after they occur<br />
** Interactively: giving administrators insight into policy and its violations, e.g. identifying violations, explaining their causes, computing potential remediations, simulating a sequence of changes.<br />
<br />
=== Policy Language ===<br />
The policy language for Congress is Datalog, which is essentially SQL with a syntax closer to traditional programming languages. This declarative language was chosen because its semantics are well known to a broad range of DevOps practitioners, yet its syntax is more terse, making it better suited for expressing real-world policies. The grammar is given below, followed by an illustrative rule.<br />
<br />
<policy> ::= <rule>*<br /><br />
<rule> ::= <atom> COLONMINUS <literal> (COMMA <literal>)*<br /><br />
<literal> ::= <atom> <br /><br />
<literal> ::= NOT <atom> <br /><br />
<atom> ::= TABLENAME LPAREN <term> (COMMA <term>)* RPAREN <br /><br />
<term> ::= INTEGER | FLOAT | STRING | VARIABLE <br /><br />
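<br />
As a rough illustration (a hedged sketch for this page, not an official Congress example), the last of the example policies listed above (that a virtual machine should never be provisioned in a different geographic region than its storage) might be written as two rules. The table names (nova:virtual_machine, nova:region, cinder:attached_volume, cinder:volume, cinder:region) are hypothetical placeholders for whatever tables the configured data sources actually expose; ":-" and "not" correspond to the COLONMINUS and NOT tokens in the grammar, and comments are omitted because the grammar above does not define a comment syntax.<br />
 error(vm) :- nova:virtual_machine(vm), nova:region(vm, reg), cinder:attached_volume(vm, vol), not volume_in_region(vol, reg)<br /><br />
 volume_in_region(vol, reg) :- cinder:volume(vol), cinder:region(vol, reg)<br /><br />
Here error is simply a table the policy author chooses to populate; any row in it marks a VM whose attached storage is not in the VM's region, i.e. a policy violation.<br />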
<br />
=== Use Cases ===<br />
<br />
Detailed use cases can be viewed in the Google doc here: [https://goo.gl/RM3W6W Congress Use Cases]. We are currently migrating these use cases to follow the [[#How_To_Create_a_Blueprint|Blueprint / Specs workflow]].<br />
<br />
Below we list use cases and the Congress release at which we hope to support each. Please feel free to add additional use cases that you would like Congress to support. (A sketch of how the first use case might look in the policy language follows the table.)<br />
<br />
{| class="wikitable"<br />
|-<br />
! Use case !! Congress target release<br />
|-<br />
| Every network connected to a VM must either be public or owned by someone in the same group as the VM's owner || alpha<br />
|-<br />
| Define a group hierarchy for servers where base groups are defined in LDAP. When base-level group change occurs, update network ACLs. || alpha<br />
|-<br />
| Every time a vulnerability scanner reports a vulnerability, disconnect the VM from the network, patch the vulnerability, and reconnect. || beta<br />
|-<br />
| Express a rule to determine if [[Solum]] user may advance an assembly from one [[Solum/Environments|Environment]] to another. See adrian_otto on IRC for details. || TBD<br />
|}<br />
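<br />
As a hedged sketch of the first use case in the table (again with made-up table names standing in for the actual data-source tables, and a group_membership table assumed to come from an identity source such as Keystone or LDAP):<br />
 error(vm) :- nova:virtual_machine(vm), nova:network(vm, net), not neutron:public_network(net), neutron:owner(net, netowner), nova:owner(vm, vmowner), not same_group(netowner, vmowner)<br /><br />
 same_group(user1, user2) :- group_membership(user1, grp), group_membership(user2, grp)<br /><br />
The rule flags a VM whenever it is connected to a non-public network whose owner shares no group with the VM's owner.<br />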
<br />
=== Future Features ===<br />
Here is a list of features that would be useful in future releases. We have not prioritized or assigned any of these, so if any of you developers think they sound fun, let us know and we can explain more.<br />
<br />
* Basic Policy language implementation<br />
** Multi-threaded Datalog implementation<br />
** Bottom-up datalog evaluation<br />
** Query optimization<br />
** Materialized implementation: automated view selection/inlining<br />
<br />
* Enhanced Policy language<br />
** ActiveDirectory facade<br />
** Syntax improvements (modals like insert/delete)<br />
** Support for compound objects (safe, stratified recursion)<br />
** Richer support for describing actions in Enforcement policy<br />
** Modules<br />
<br />
* Policy structure<br />
** Multi-tenant<br />
** Multi-stakeholder (tenants, finance, operations, etc.)<br />
<br />
* Enforcement<br />
** Execution of (rich) actions<br />
** Carve out subpolicies and push to other components, e.g. Neutron<br />
** Add consultation with Congress to other OS components, e.g. Nova/Neutron<br />
** Proper remediation enumeration with Classification+Action policies<br />
** Find ways of automatically choosing the proper remediation strategy (e.g. priorities/monotonicity)<br />
** Give cloud owner way of configuring how proactive/reactive/etc. based on information from separate policies.<br />
<br />
* Libraries<br />
** Data source drivers for common OS and non-OS components<br />
** HIPAA, etc. encoding<br />
** Ontologies for different sectors, e.g. finance<br />
<br />
* Policy Analysis<br />
** Look for infinite loops through Enforcement policy (using Action policy)<br />
** Compare Access control policy and Classification policy for redundancy<br />
** Change impact analysis<br />
<br />
* Dashboard<br />
** IDE for policy (different levels: raw-Datalog, AD, checkbox-based)<br />
** List violations<br />
** Explain violations (step-by-step tracing through policy)<br />
** Simulate state change and action execution<br />
** Enumerate remediations for a given violation<br />
<br />
* Architecture and API<br />
** Formalize and implement full introspection and query APIs<br />
** Distribute across multiple nodes<br />
** Ensure Congress can use another Congress instance as data<br />
<br />
* Authentication and Access Control<br />
** Add authentication and access controls on API/dashboard<br />
** When remediating, which user(s) are executing actions? Does Congress need admin credentials for all its cloud services or are user credentials part of actions? Need proper storage for those credentials. Etc.<br />
<br />
=== Relationship to Other Projects ===<br />
<br />
'''Related OpenStack components'''<br />
* [https://wiki.openstack.org/wiki/Keystone Keystone]: Keystone is an identity service providing authentication and high-level authorization for OpenStack. Congress can leverage Keystone as an input for policies. For example, an auditor might want to ensure that the running system is consistent with current Keystone authorization decisions.<br />
<br />
* [https://wiki.openstack.org/wiki/Heat Heat]: Heat is an orchestration service for application provisioning and lifecycle management. Congress can ensure that applications managed by Heat are consistent with business policy.<br />
<br />
* [https://wiki.openstack.org/wiki/Mistral Mistral]: Mistral is a task management service, or in other words Workflow as a service. Its primary use cases include Cron-for-the-cloud, execution of long-running tasks, and big data analysis. Congress could potentially utilize Mistral to execute actions that bring the cloud back into policy compliance.<br />
<br />
'''Policy Initiatives'''<br />
<br />
* [https://blueprints.launchpad.net/nova/+spec/solver-scheduler SolverScheduler (Nova blueprint)]: The SolverScheduler provides an interface for using different constraint solvers to solve placement problems for Nova. Depending on how it is applied, it could be used for initial provisioning, re-balancing loads, or both.<br />
<br />
* [https://github.com/openstack/gantt Gantt]: A scheduler framework for use by different OpenStack components. It used to be a [https://wiki.openstack.org/wiki/Meetings/Scheduler subgroup of Nova] focused on scheduling VMs based on resource utilization, and it includes a plugin framework for making arbitrary metrics available to the scheduler.<br />
<br />
* [https://docs.google.com/document/d/1ZbOFxAoibZbJmDWx1oOrOsDcov6Cuom5aaBIrupCD9E/edit?pli=1 Neutron policy group]: This group aims to add a policy API to Neutron, where tenants express policy between groups of networks, ports, etc., and that policy is enforced. Policy statements are of the form "for every network flow between groups A and B that satisfies these conditions, apply a constraint on that flow". The constraints that can be enforced on a flow will grow as the enforcement engine matures; currently, the constraints are Allow and Deny, but there are plans for quality-of-service constraints as well. <br />
<br />
* [https://wiki.openstack.org/wiki/OpenAttestation Open Attestation]: This project provides an SDK for verifying host integrity. It provides some policy-based management capabilities, though documentation is limited.<br />
<br />
* [https://blueprints.launchpad.net/nova/+spec/policy-based-scheduler Policy-based Scheduling Module (Nova blueprint)]: This effort aims to schedule Nova resources per client, per cluster of resources and per context (e.g. overload, time, etc.). A proof of concept was presented at the Demo Theater at OpenStack Juno Summit. <br />
<br />
* [https://docs.google.com/document/d/1DMsnGxQ3P-OwZCF3uxaUeEFaKX8LqUqmmgQ_7EVK7Y8/edit Tetris]: This effort provides condition-action policies (under certain conditions, execute these actions). It is intended to be a generic condition-action engine handling complex actions and optimization. This effort subsumes the [https://blueprints.launchpad.net/nova/+spec/resource-optimization-service Runtime Policies blueprint] within Nova. It also appears to subsume the [http://openstack-neat.org/ Neat] effort. Tetris and Congress have recently decided to merge because of their highly aligned goals and approaches.<br />
<br />
* [https://review.openstack.org/#/c/95907/6/specs/convergence.rst Convergence Engine (Heat spec)]: This effort separates the ideas of desired state and observed state for the objects Heat manages. The Convergence Engine will detect when the desired state and observed state differ and take action to eliminate those differences.<br />
<br />
* [http://docs.openstack.org/developer/swift/overview_policies.html Swift Storage Policies]: A Swift storage policy describes a virtual storage system that Swift implements with physical devices. Today each policy dictates how many partitions the storage system has, how many replicas of each object it should maintain, and the minimum amount of time that must elapse after a partition is moved before it can be moved again.<br />
<br />
* [https://wiki.openstack.org/wiki/Graffiti Graffiti]: Graffiti aims to store and query (hierarchical) metadata about OpenStack objects, e.g. tagging a Glance image with the software installed on that image. Currently, the team is working within other OpenStack projects to add user interfaces for people to create and query metadata and to store that metadata within the project's database. This project is about creating metadata, which could be useful for writing business policies, not about policies over that metadata.<br />
<br />
<br />
Congress is complementary to all of these efforts. It is intended to be a general-purpose policy component and hence could (potentially) express any of the policies described above. However, to enforce those policies, Congress would carve off subpolicies and send them to the appropriate enforcement point in the cloud. For example, Congress might carve off the compute Load Balancing policy and send it to the Runtime Policy engine and/or carve off the networking policy and send it to the Neutron policy engine. Part of the goal of Congress is to give administrators a single place to write and inspect the policy being enforced throughout the datacenter or cloud, distribute the relevant portions of that policy to all of the available enforcement points, and monitor the state of the cloud to let administrators know if the cloud is in compliance. (Congress also attempts to correct policy violations when they occur, but optimization policies such as many of those addressed above will not be enforceable by Congress directly.)<br />
<br />
<br />
[Thanks to Jay Lau, Gokul Kandiraju, and others on the mailing list for help compiling this list.]<br />
<br />
== How To Propose a New Feature ==<br />
Proposing a new feature requires you to create two descriptions of that feature: a blueprint and a spec. Before code that implements a new feature is merged, its blueprint must be approved. A blueprint is approved once its spec is approved. A spec is a proxy for a blueprint that everyone in the community can comment on and help refine. (If Launchpad, the software OpenStack uses for managing blueprints and bugs, made it possible for the community to add comments to a blueprint, we would not need specs at all.)<br />
# Create a [https://blueprints.launchpad.net/congress/+addspec blueprint] briefly describing the feature. Include just the name, title, and summary.<br />
# Create an additional description of your feature, called a ''spec'', and add it to the congress-specs repo. <br />
## Check out the [https://github.com/openstack/congress-specs congress-specs repo].<br />
## Create a new file, whose name is similar to the blueprint name, in the current release folder, e.g. congress-specs/specs/kilo/your-spec-here.rst<br />
## Use reStructuredText (RST) (an [http://sphinx-doc.org/rest.html RST tutorial]) to describe your new feature by filling out the [https://github.com/openstack/congress-specs/blob/master/specs/template.rst spec template]. <br />
## Push your changes to the congress-specs repo, a process that is explained [https://wiki.openstack.org/wiki/Gerrit_Workflow here]<br />
# Go back to your blueprint and add a link (e.g. https://github.com/openstack/congress-specs/tree/master/specs/kilo/your-spec-here.rst) to your spec in the Specification Location field on the Blueprint details. <br />
# Participate in the discussion and refinement of your feature via [https://review.openstack.org/#/q/status:open+project:openstack/congress-specs,n,z Gerrit reviews]<br />
# Eventually, core reviewers will either reject or approve your spec<br />
# If the spec is approved, a core reviewer will merge it into the repo and approve your blueprint. You then copy the finalized details from your spec into your blueprint.<br />
# If your blueprint is either not approved or not implemented during a release cycle, it must be resubmitted for the next cycle.<br />
<br />
== Incubation Plan ==<br />
The official TC governance process is described here: https://wiki.openstack.org/wiki/Governance/Approved/Incubation<br />
Below are the incubation requirements from the TC, taken from https://github.com/openstack/governance/blob/master/reference/incubation-integration-requirements.rst as of 3 June 2014. This RST is still under development; our incubation proposal will likely be debated within the TC in an etherpad.<br />
<br />
The TC will evaluate the project scope and its complementarity with existing integrated projects and other official programs, look into the project technical choices, and check a number of requirements, including (but not limited to):<br />
<br />
'''Scope'''<br />
<br />
Project must have a clear and defined scope.<br />
<br />
Project's scope should represent a measured progression for OpenStack as a whole.<br />
<br />
Project should not inadvertently duplicate functionality present in other OpenStack projects. If it does, it should have a clear plan and timeframe to prevent long-term scope duplication.<br />
<br />
Project should leverage existing functionality in other OpenStack projects as much as possible<br />
<br />
<br />
'''Maturity'''<br />
<br />
Project should have an active team of contributors<br />
<br />
Project should not have a major architectural rewrite planned<br />
<br />
<br />
'''Process'''<br />
<br />
Project must be hosted under stackforge (and therefore use git as its VCS)<br />
<br />
Project must obey OpenStack coordinated project interface (such as tox, pbr, global-requirements...)<br />
<br />
Project should use oslo libraries or oslo-incubator where appropriate<br />
<br />
If a project is not part of an existing program, it needs to file for a new program concurrently with the Incubation request and fulfill the corresponding requirements.<br />
<br />
Project must have a well-defined core review team, with reviews distributed amongst the team (and not being primarily done by one person)<br />
<br />
Reviews should follow the same criteria as OpenStack projects (2 +2s before +A)<br />
<br />
Project should use the official openstack lists for discussion<br />
<br />
<br />
<br />
'''API'''<br />
<br />
Project APIs should be reasonably stable<br />
<br />
Project must have a REST API with at least a JSON entity representation<br />
<br />
Project must have a Python client library API for its REST API<br />
<br />
<br />
'''QA'''<br />
<br />
Project must have a basic devstack-gate job set up<br />
<br />
'''Documentation / User support'''<br />
<br />
Project must have docs for developers who want to contribute to the project<br />
<br />
Project should have API documentation for devs who want to add to the API, updated when the code is updated<br />
<br />
<br />
'''Legal requirements'''<br />
<br />
Project must be licensed under the Apache License v2<br />
<br />
Project must have no library dependencies which effectively restrict how the project may be distributed or deployed [1]<br />
<br />
All contributors to the project must have signed the CLA<br />
<br />
Project must have no known trademark issues [2]<br />
<br />
[[Category: Working_Groups]]</div>Thinrichshttps://wiki.openstack.org/w/index.php?title=Congress&diff=77497Congress2015-04-14T18:04:05Z<p>Thinrichs: </p>
<hr />
<div> [https://etherpad.openstack.org/p/congress-liberty-design-session Liberty design session etherpad]<br />
<br />
== Policy as a service ("Congress") ==<br />
Here are some external resources with detailed information about Congress.<br />
<br />
* Code<br />
** Server<br />
*** [https://github.com/stackforge/congress Server source code (github)] <br />
*** [https://launchpad.net/congress Server bugs (Launchpad)]<br />
*** [https://review.openstack.org/#/q/status:open+project:stackforge/congress,n,z Outstanding server changes (Gerrit)]<br />
** Python client<br />
*** [https://github.com/stackforge/python-congressclient Python client source code (github)]<br />
*** [https://launchpad.net/python-congressclient Python client bugs (Launchpad)]<br />
*** [https://review.openstack.org/#/q/status:open+project:stackforge/python-congressclient,n,z Outstanding python client changes (Gerrit)]<br />
** [https://github.com/stackforge/congress-specs Specs: suggestions for additional features for both server and client (github)]<br />
<br />
* Docs<br />
** [https://docs.google.com/a/vmware.com/document/d/1f2xokl9Tc47aV67KEua0PBRz4jsdSDLXth7dYe-jz6Q/edit Overall design doc]<br />
** [https://docs.google.com/document/d/1K9RkQuBSPN7Z2TmKfok7mw3E24otEGo8Pnsemxd5544/edit Data integration design doc]<br />
** [https://goo.gl/1E5MeY API design doc]<br />
** [http://ruleyourcloud.com/ Blog]<br />
<br />
*Meetings<br />
** [https://wiki.openstack.org/wiki/Meetings/Congress Meeting times]<br />
** [http://eavesdrop.openstack.org/meetings/congressteammeeting/2014/ Meeting history]<br />
** [https://www.openstack.org/assets/presentation-media/Congress-OpenStack-Atlanta-2014.pdf Slides from Atlanta summit]<br />
** [https://etherpad.openstack.org/p/juno-congress Notes from Atlanta summit]<br />
<br />
<br />
If you'd like more information, join #congress on freenode, or contact Tim Hinrichs (timothy.l.hinrichs@gmail.com).<br />
<br />
=== Mission ===<br />
Congress is an OpenStack project to provide policy as a service across any collection of cloud services in order to offer governance and compliance for dynamic infrastructures.<br />
<br />
=== Why Congress===<br />
IT services will always be governed and brought into compliance with business-level policies. <br />
<br />
In the past, policy was enforced manually, e.g. by someone sending an email asking for an application to be added to the network, secured by specific firewall entries, connected to an agreed-on storage, and so on. In the cloud era, IT has become more agile: users expect immediate delivery of services, a level of responsiveness that is unattainable by the team responsible for governance. Hence, manual enforcement is no longer feasible. <br />
<br />
Both enterprises and vendors have fielded engines for enforcing policy (semi)-automatically, creating a fragmented market where enterprises reinvent the wheel while maintaining their own code, and vendors fail to meet enterprise needs, either for technical reasons or because their solutions require vertical integration and lock-in. <br />
<br />
The Congress policy service enables IT services to extend their OpenStack footprint by onboarding new applications while keeping the strong compliance and governance dictated by their own business policies. All of this leverages a community-driven implementation where vendors can plug into a common interface.<br />
<br />
=== What is Congress ===<br />
Congress aims to provide an extensible open-source framework for governance and regulatory compliance across any cloud services (e.g. application, network, compute and storage) within a dynamic infrastructure. It is a cloud service whose sole responsibility is policy enforcement.<br />
<br />
Congress aims to include the following functionality:<br />
* Allow cloud administrators and tenants to use a high-level, general purpose, declarative language to describe business logic. The policy language does not include a fixed collection of policy types or built-in enforcement mechanisms; rather, a policy simply defines which states of the cloud are in compliance and which are not, where the state of the cloud is the collection of data provided by the cloud services available to Congress. Some examples:<br />
** Application A is only allowed to communicate with application B.<br />
** A virtual machine owned by tenant A should always have a public network connection if tenant A is part of group B.<br />
** Virtual machine A should never be provisioned in a different geographic region than storage B.<br />
* Offer a pluggable architecture that connects to any collection of cloud services<br />
* Enforce policy<br />
** Proactively: preventing violations before they occur<br />
** Reactively: correcting violations after they occur<br />
** Interactively: giving administrators insight into policy and its violations, e.g. identifying violations, explaining their causes, computing potential remediations, simulating a sequence of changes.<br />
<br />
=== Policy Language ===<br />
The policy language for Congress is Datalog, which is essentially SQL with a syntax closer to traditional programming languages. This declarative language was chosen because its semantics are well known to a broad range of DevOps practitioners, yet its syntax is more terse, making it better suited for expressing real-world policies. The grammar is given below.<br />
<br />
<policy> ::= <rule>*<br /><br />
<rule> ::= <atom> COLONMINUS <literal> (COMMA <literal>)*<br /><br />
<literal> ::= <atom> <br /><br />
<literal> ::= NOT <atom> <br /><br />
<atom> ::= TABLENAME LPAREN <term> (COMMA <term>)* RPAREN <br /><br />
<term> ::= INTEGER | FLOAT | STRING | VARIABLE <br /><br />
<br />
=== Use Cases ===<br />
<br />
Detailed use cases can be viewed in the Google doc here: [https://goo.gl/RM3W6W Congress Use Cases]. We are currently migrating these use cases to follow the [[#How_To_Create_a_Blueprint|Blueprint / Specs workflow]].<br />
<br />
Below we list use cases and the Congress release at which we hope to support that use case. Please feel free to add additional use cases that you would like Congress to support.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Use case !! Congress target release<br />
|-<br />
| Every network connected to a VM must either be public or owned by someone in the same group as the VM's owner || alpha<br />
|-<br />
| Define a group hierarchy for servers where base groups are defined in LDAP. When base-level group change occurs, update network ACLs. || alpha<br />
|-<br />
| Every time a vulnerability scanner reports a vulnerability, disconnect the VM from the network, patch the vulnerability, and reconnect. || beta<br />
|-<br />
| Express a rule to determine if [[Solum]] user may advance an assembly from one [[Solum/Environments|Environment]] to another. See adrian_otto on IRC for details. || TBD<br />
|}<br />
<br />
=== Future Features ===<br />
Here is a list of features that would be useful in future releases. We have not prioritized or assigned any of these, so if any of you developers think they sound fun, let us know and we can explain more.<br />
<br />
* Basic Policy language implementation<br />
** Multi-threaded Datalog implementation<br />
** Bottom-up datalog evaluation<br />
** Query optimization<br />
** Materialized implementation: automated view selection/inlining<br />
<br />
* Enhanced Policy language<br />
** ActiveDirectory facade<br />
** Syntax improvements (modals like insert/delete)<br />
** Support for compound objects (safe, stratified recursion)<br />
** Richer support for describing actions in Enforcement policy<br />
** Modules<br />
<br />
* Policy structure<br />
** Multi-tenant<br />
** Multi-stakeholder (tenants, finance, operations, etc.)<br />
<br />
* Enforcement<br />
** Execution of (rich) actions<br />
** Carve out subpolicies and push to other components, e.g. Neutron<br />
** Add consultation with Congress to other OS components, e.g. Nova/Neutron<br />
** Proper remediation enumeration with Classification+Action policies<br />
** Find ways of automatically choosing the proper remediation strategy (e.g. priorities/monotonicity)<br />
** Give cloud owner way of configuring how proactive/reactive/etc. based on information from separate policies.<br />
<br />
* Libraries<br />
** Data source drivers for common OS and non-OS components<br />
** HIPAA, etc. encoding<br />
** Ontologies for different sectors, e.g. finance<br />
<br />
* Policy Analysis<br />
** Look for infinite loops through Enforcement policy (using Action policy)<br />
** Compare Access control policy and Classification policy for redundancy<br />
** Change impact analysis<br />
<br />
* Dashboard<br />
** IDE for policy (different levels: raw-Datalog, AD, checkbox-based)<br />
** List violations<br />
** Explain violations (step-by-step tracing through policy)<br />
** Simulate state change and action execution<br />
** Enumerate remediations for a given violation<br />
<br />
* Architecture and API<br />
** Formalize and implement full introspection and query APIs<br />
** Distribute across multiple nodes<br />
** Ensure Congress can use another Congress instance as data<br />
<br />
* Authentication and Access Control<br />
** Add authentication and access controls on API/dashboard<br />
** When remediating, which user(s) are executing actions? Does Congress need admin credentials for all its cloud services or are user credentials part of actions? Need proper storage for those credentials. Etc.<br />
<br />
=== Relationship to Other Projects ===<br />
<br />
'''Related OpenStack components'''<br />
* [https://wiki.openstack.org/wiki/Keystone Keystone]: Keystone is an identity service providing authentication and high-level authorization for OpenStack. Congress can leverage Keystone as an input for policies. For example, an auditor might want to ensure that the running system is consistent with current Keystone authorization decisions.<br />
<br />
* [https://wiki.openstack.org/wiki/Heat Heat]: Heat is an orchestration service for application provisioning and lifecycle management. Congress can ensure that applications managed by Heat are consistent with business policy.<br />
<br />
* [https://wiki.openstack.org/wiki/Mistral Mistral]: Mistral is a task management service, or in other words Workflow as a service. Its primary use cases include Cron-for-the-cloud, execution of long-running tasks, and big data analysis. Congress could potentially utilize Mistral to execute actions that bring the cloud back into policy compliance.<br />
<br />
'''Policy Initiatives'''<br />
<br />
* [https://blueprints.launchpad.net/nova/+spec/solver-scheduler SolverScheduler (Nova blueprint)]: The SolverScheduler provides an interface for using different constraint solvers to solve placement problems for Nova. Depending on how it is applied, it could be used for initial provisioning, re-balancing loads, or both.<br />
<br />
* [https://github.com/openstack/gantt Gantt]: A scheduler framework for use by different OpenStack components. It used to be a [https://wiki.openstack.org/wiki/Meetings/Scheduler subgroup of Nova] focused on scheduling VMs based on resource utilization, and it includes a plugin framework for making arbitrary metrics available to the scheduler.<br />
<br />
* [https://docs.google.com/document/d/1ZbOFxAoibZbJmDWx1oOrOsDcov6Cuom5aaBIrupCD9E/edit?pli=1 Neutron policy group]: This group aims to add a policy API to Neutron, where tenants express policy between groups of networks, ports, etc., and that policy is enforced. Policy statements are of the form "for every network flow between groups A and B that satisfies these conditions, apply a constraint on that flow". The constraints that can be enforced on a flow will grow as the enforcement engine matures; currently, the constraints are Allow and Deny, but there are plans for quality-of-service constraints as well. <br />
<br />
* [https://wiki.openstack.org/wiki/OpenAttestation Open Attestation]: This project provides an SDK for verifying host integrity. It provides some policy-based management capabilities, though documentation is limited.<br />
<br />
* [https://blueprints.launchpad.net/nova/+spec/policy-based-scheduler Policy-based Scheduling Module (Nova blueprint)]: This effort aims to schedule Nova resources per client, per cluster of resources and per context (e.g. overload, time, etc.). A proof of concept was presented at the Demo Theater at OpenStack Juno Summit. <br />
<br />
* [https://docs.google.com/document/d/1DMsnGxQ3P-OwZCF3uxaUeEFaKX8LqUqmmgQ_7EVK7Y8/edit Tetris]: This effort provides condition-action policies (under certain conditions, execute these actions). It is intended to be a generic condition-action engine handling complex actions and optimization. This effort subsumes the [https://blueprints.launchpad.net/nova/+spec/resource-optimization-service Runtime Policies blueprint] within Nova. It also appears to subsume the [http://openstack-neat.org/ Neat] effort. Tetris and Congress have recently decided to merge because of their highly aligned goals and approaches.<br />
<br />
* [https://review.openstack.org/#/c/95907/6/specs/convergence.rst Convergence Engine (Heat spec)]: This effort separates the ideas of desired state and observed state for the objects Heat manages. The Convergence Engine will detect when the desired state and observed state differ and take action to eliminate those differences.<br />
<br />
* [http://docs.openstack.org/developer/swift/overview_policies.html Swift Storage Policies]: A Swift storage policy describes a virtual storage system that Swift implements with physical devices. Today each policy dictates how many partitions the storage system has, how many replicas of each object it should maintain, and the minimum amount of time that must elapse after a partition is moved before it can be moved again.<br />
<br />
* [https://wiki.openstack.org/wiki/Graffiti Graffiti]: Graffiti aims to store and query (hierarchical) metadata about OpenStack objects, e.g. tagging a Glance image with the software installed on that image. Currently, the team is working within other OpenStack projects to add user interfaces for people to create and query metadata and to store that metadata within the project's database. This project is about creating metadata, which could be useful for writing business policies, not about policies over that metadata.<br />
<br />
<br />
Congress is complementary to all of these efforts. It is intended to be a general-purpose policy component and hence could (potentially) express any of the policies described above. However, to enforce those policies, Congress would carve off subpolicies and send them to the appropriate enforcement point in the cloud. For example, Congress might carve off the compute Load Balancing policy and send it to the Runtime Policy engine and/or carve off the networking policy and send it to the Neutron policy engine. Part of the goal of Congress is to give administrators a single place to write and inspect the policy being enforced throughout the datacenter or cloud, distribute the relevant portions of that policy to all of the available enforcement points, and monitor the state of the cloud to let administrators know if the cloud is in compliance. (Congress also attempts to correct policy violations when they occur, but optimization policies such as many of those addressed above will not be enforceable by Congress directly.)<br />
<br />
<br />
[Thanks to Jay Lau, Gokul Kandiraju, and others on the mailing list for help compiling this list.]<br />
<br />
== How To Propose a New Feature ==<br />
Proposing a new feature requires you to create two descriptions of that feature: a blueprint and a spec. Before code that implements a new feature is merged, its blueprint must be approved. A blueprint is approved once its spec is approved. A spec is a proxy for a blueprint that everyone in the community can comment on and help refine. (If Launchpad, the software OpenStack uses for managing blueprints and bugs, made it possible for the community to add comments to a blueprint, we would not need specs at all.)<br />
# Create a [https://blueprints.launchpad.net/congress/+addspec blueprint] briefly describing the feature. Include just the name, title, and summary.<br />
# Create an additional description of your feature, called a ''spec'', and add it to the congress-specs repo. <br />
## Check out the [https://github.com/stackforge/congress-specs congress-specs repo].<br />
## Create a new file, whose name is similar to the blueprint name, in the current release folder, e.g. congress-specs/specs/kilo/your-spec-here.rst<br />
## Use reStructuredText (RST) (an [http://sphinx-doc.org/rest.html RST tutorial]) to describe your new feature by filling out the [https://github.com/stackforge/congress-specs/blob/master/specs/template.rst spec template]. <br />
## Push your changes to the congress-specs repo, a process that is explained [https://wiki.openstack.org/wiki/Gerrit_Workflow here]<br />
# Go back to your blueprint and add a link (e.g. https://github.com/stackforge/congress-specs/tree/master/specs/kilo/your-spec-here.rst) to your spec in the Specification Location field on the Blueprint details. <br />
# Participate in the discussion and refinement of your feature via [https://review.openstack.org/#/q/status:open+project:stackforge/congress-specs,n,z Gerrit reviews]<br />
# Eventually, core reviewers will either reject or approve your spec<br />
# If the spec is approved, a core reviewer will merge it into the repo and approve your blueprint. You then copy the finalized details from your spec into your blueprint.<br />
# If your blueprint is either not approved or not implemented during a release cycle, it must be resubmitted for the next cycle.<br />
<br />
== Incubation Plan ==<br />
The official TC governance process is described here: https://wiki.openstack.org/wiki/Governance/Approved/Incubation<br />
Below are the incubation requirements from the TC, taken from https://github.com/openstack/governance/blob/master/reference/incubation-integration-requirements.rst as of 3 June 2014. This RST is still under development; our incubation proposal will likely be debated within the TC in an etherpad.<br />
<br />
The TC will evaluate the project scope and its complementarity with existing integrated projects and other official programs, look into the project technical choices, and check a number of requirements, including (but not limited to):<br />
<br />
'''Scope'''<br />
<br />
Project must have a clear and defined scope.<br />
<br />
Project's scope should represent a measured progression for OpenStack as a whole.<br />
<br />
Project should not inadvertently duplicate functionality present in other OpenStack projects. If it does, it should have a clear plan and timeframe to prevent long-term scope duplication.<br />
<br />
Project should leverage existing functionality in other OpenStack projects as much as possible<br />
<br />
<br />
'''Maturity'''<br />
<br />
Project should have an active team of contributors<br />
<br />
Project should not have a major architectural rewrite planned<br />
<br />
<br />
'''Process'''<br />
<br />
Project must be hosted under stackforge (and therefore use git as its VCS)<br />
<br />
Project must obey OpenStack coordinated project interface (such as tox, pbr, global-requirements...)<br />
<br />
Project should use oslo libraries or oslo-incubator where appropriate<br />
<br />
If a project is not part of an existing program, it needs to file for a new program concurrently with the Incubation request and fulfill the corresponding requirements.<br />
<br />
Project must have a well-defined core review team, with reviews distributed amongst the team (and not being primarily done by one person)<br />
<br />
Reviews should follow the same criteria as OpenStack projects (2 +2s before +A)<br />
<br />
Project should use the official openstack lists for discussion<br />
<br />
<br />
<br />
'''API'''<br />
<br />
Project APIs should be reasonably stable<br />
<br />
Project must have a REST API with at least a JSON entity representation<br />
<br />
Project must have a Python client library API for its REST API<br />
<br />
<br />
'''QA'''<br />
<br />
Project must have a basic devstack-gate job set up<br />
<br />
'''Documentation / User support'''<br />
<br />
Project must have docs for developers who want to contribute to the project<br />
<br />
Project should have API documentation for devs who want to add to the API, updated when the code is updated<br />
<br />
<br />
'''Legal requirements'''<br />
<br />
Project must be licensed under the Apache License v2<br />
<br />
Project must have no library dependencies which effectively restrict how the project may be distributed or deployed [1]<br />
<br />
All contributors to the project must have signed the CLA<br />
<br />
Project must have no known trademark issues [2]<br />
<br />
[[Category: Working_Groups]]</div>Thinrichshttps://wiki.openstack.org/w/index.php?title=IRC&diff=75327IRC2015-03-10T17:31:47Z<p>Thinrichs: /* OpenStack IRC channels (chat.freenode.net) */</p>
<hr />
<div>IRC, or Internet Relay Chat, is often used for real-time communication in open source projects. We're pretty proud of the friendly vibe in the OpenStack channels and invite anyone wanting to ask questions or talk about all things OpenStack to the channels.<br />
<br />
IRC software can be found for all operating systems. The [https://en.wikipedia.org/wiki/Comparison_of_Internet_Relay_Chat_clients#Operating_system_support IRC clients comparison chart on Wikipedia] can help you pick one for your operating system.<br />
<br />
You don't have to have a complex setup to use IRC. You can use the web client for Freenode, which doesn't require any download or setup. Just pick a nickname and join #openstack: http://webchat.freenode.net/?channels=openstack,openstack-101.<br />
<br />
<br />
==== How to read messages exchanged when you're offline ====<br />
<br />
IRC, unlike other chat systems, doesn't keep messages while you're offline. In order to be notified of relevant communications, you can either look at the [http://eavesdrop.openstack.org/irclogs/ channel logs] or set up an IRC proxy.<br />
<br />
The most common IRC proxies are [http://wiki.znc.in/ZNC znc] and [https://bip.milkypond.org/ bip]. See the following guides to configure them:<br />
<br />
* [https://kashyapc.fedorapeople.org/notes-bip-IRC-proxy/README Installation notes for Fedora/RH-like] and [https://kashyapc.fedorapeople.org/notes-bip-IRC-proxy/bip.conf example bip.conf] contributed by Kashyap Chamarthy<br />
* ZNC [https://dague.net/2014/09/13/my-irc-proxy-setup/ configuration notes] contributed by Sean Dague<br />
<br />
<br />
== OpenStack IRC channels (chat.freenode.net) ==<br />
<br />
<br />
If you want to start a new IRC channel, please consult with the InfrastructureTeam in #openstack-infra or at openstack-infra@lists.openstack.org to ensure it gets registered appropriately. <br />
<br />
'''Many IRC channels are logged and [http://eavesdrop.openstack.org/irclogs/ recordings are publicly accessible]'''. If you're concerned about privacy, consider using a [https://freenode.net/faq.shtml#cloaks cloak] or [https://freenode.net/irc_servers.shtml#tor tor], hide your real name, and be mindful not to write sensitive data in these channels.<br />
<br />
{| class="wikitable sortable" border="1"<br />
|- <br />
! IRC Channel !! Description<br />
|-<br />
|'''#openstack''' || general discussion, support<br />
|-<br />
| '''#openstack-101''' || guidance for new contributors<br />
|-<br />
|'''#openstack-anvil''' || [http://anvil.readthedocs.org/ Anvil] discussion channel<br />
|-<br />
|'''#openstack-barbican''' || Barbican-related team discussions<br />
|-<br />
|'''#openstack-blazar''' || blazar (formerly climate) team discussions<br />
|-<br />
|'''#openstack-board''' || OpenStack Foundation Board Meeting Back channel (mainly quiet except during meetings)<br />
|-<br />
|'''#openstack-ceilometer''' || ceilometer team discussions<br />
|-<br />
|'''#openstack-chef''' || deployment and operating OpenStack with Chef<br />
|- <br />
|'''#openstack-cinder''' || cinder team discussions<br />
|-<br />
| '''#openstack-community''' || coordination of community activity<br />
|-<br />
| '''#openstack-containers''' || containers team discussion <br />
|-<br />
|'''#openstack-dev''' || general and cross-project development discussion<br />
|-<br />
|'''#openstack-dns''' || Designate DNS team discussions<br />
|-<br />
|'''#openstack-doc''' || documentation team discussion<br />
|-<br />
|'''#openstack-fr''' || general discussion, support in French<br />
|-<br />
|'''#openstack-fwaas''' || Firewall as a Service discussions <br />
|-<br />
|'''#openstack-gbp''' || Group Based Policy discussions<br />
|-<br />
|'''#openstack-glance''' || glance team discussions<br />
|-<br />
|'''#openstack-gsoc''' || google summer of code discussions<br />
|-<br />
|'''#openstack-horizon''' || horizon team discussions<br />
|-<br />
|'''#openstack-hyper-v''' || Microsoft Windows guests and hypervisor discussion<br />
|-<br />
|'''#openstack-infra''' || developer community infrastructure, continuous integration<br />
|-<br />
|'''#openstack-ironic''' || ironic & bare metal discussions<br />
|-<br />
|'''#openstack-keystone''' || keystone team discussions<br />
|-<br />
|'''#openstack-ko''' || general discussion, support in Korean<br />
|-<br />
|'''#openstack-latinamerica''' || OpenStack Latin America (Spanish)<br />
|-<br />
|'''#openstack-lbaas''' || Neutron LBaaS and Project Octavia discussions<br />
|-<br />
|'''#openstack-manila''' || shared / distributed file system service team discussions<br />
|-<br />
|'''#openstack-marconi''' || queue/messaging marconi team discussions<br />
|-<br />
|'''#openstack-meeting''' || team meetings<br />
|-<br />
|'''#openstack-meeting-alt''' || team meetings, alternate channel<br />
|-<br />
|'''#openstack-meeting-3''' || team meetings, another alternate channel<br />
|-<br />
|'''#openstack-meeting-4''' || team meetings, another alternate channel<br />
|-<br />
|'''#openstack-mistral''' || Mistral Workflow Service for OpenStack<br />
|-<br />
|'''#openstack-neutron''' || neutron team discussions<br />
|-<br />
|'''#openstack-nfv''' || [[Teams/NFV|NFV]] team discussions<br />
|-<br />
|'''#openstack-nova''' || nova team discussions<br />
|-<br />
|'''#openstack-operators''' || OpenStack Operators discussion channel<br />
|-<br />
|'''#openstack-opw''' || GNOME OPW mentor, intern and supporter discussions<br />
|-<br />
|'''#openstack-oslo''' || [https://wiki.openstack.org/wiki/Oslo Oslo] development discussion<br />
|-<br />
| '''#openstack-qa''' || QA team discussion<br />
|-<br />
|'''#openstack-rally''' || [https://wiki.openstack.org/wiki/Rally Rally] measure performance of your cloud<br />
|-<br />
|'''#openstack-rating''' || Rating team discussions<br />
|-<br />
|'''#openstack-relmgr-office''' || Release managers office hours channel<br />
|-<br />
|'''#openstack-sahara''' || [https://wiki.openstack.org/wiki/Sahara Sahara] team discussions<br />
|-<br />
|'''#openstack-sdks''' || Development of SDKs to work with OpenStack<br />
|-<br />
|'''#openstack-security''' || General discussion about OpenStack security and open channel for the OpenStack Security Group (OSSG)<br />
|-<br />
|'''#openstack-stable''' || stable branch management and packaging discussions<br />
|-<br />
|'''#openstack-state-management''' || [https://wiki.openstack.org/wiki/TaskFlow TaskFlow] and state-management development discussion<br />
|-<br />
|'''#openstack-swift''' || swift team discussions<br />
|-<br />
| '''#openstack-translation''' || translation groups discussion<br />
|-<br />
|'''#openstack-trove''' || trove database team discussions<br />
|-<br />
|'''#openstack-tw''' || general discussion, support in Taiwan<br />
|-<br />
| '''#openstack-ux''' || discussion channel for user experience<br />
|-<br />
|'''#openstack-vmware''' || The VMwareAPI team discussion channel<br />
|-<br />
|'''#heat''' || Heat developer discussion channel<br />
|-<br />
|'''#magnetodb''' || Key-Value storage for OpenStack<br />
|-<br />
|'''#murano''' || Murano team discussions<br />
|-<br />
|'''#nova-docker''' || Nova Docker team discussions<br />
|-<br />
|'''#refstack''' || RefStack (related to Core Definition)<br />
|-<br />
|'''#storyboard''' || StoryBoard team discussions<br />
|-<br />
|'''#tripleo''' || TripleO team discussions<br />
|-<br />
|'''#openstack-ansible''' || Openstack ansible deployments discussions<br />
|-<br />
|'''#congress''' || Congress team discussions<br />
|}<br />
<br />
[[Category:Connect]]</div>Thinrichshttps://wiki.openstack.org/w/index.php?title=Congress&diff=75159Congress2015-03-05T22:33:33Z<p>Thinrichs: /* Policy as a service ("Congress") */</p>
<hr />
<div> [https://etherpad.openstack.org/p/par-kilo-congress-design-session Kilo design session etherpad]<br />
<br />
== Policy as a service ("Congress") ==<br />
Here are some external resources with detailed information about Congress.<br />
<br />
* Code<br />
** Server<br />
*** [https://github.com/stackforge/congress Server source code (github)] <br />
*** [https://launchpad.net/congress Server bugs (Launchpad)]<br />
*** [https://review.openstack.org/#/q/status:open+project:stackforge/congress,n,z Outstanding server changes (Gerrit)]<br />
** Python client<br />
*** [https://github.com/stackforge/python-congressclient Python client source code (github)]<br />
*** [https://launchpad.net/python-congressclient Python client bugs (Launchpad)]<br />
*** [https://review.openstack.org/#/q/status:open+project:stackforge/python-congressclient,n,z Outstanding python client changes (Gerrit)]<br />
** [https://github.com/stackforge/congress-specs Specs: suggestions for additional features for both server and client (github)]<br />
<br />
* Docs<br />
** [https://docs.google.com/a/vmware.com/document/d/1f2xokl9Tc47aV67KEua0PBRz4jsdSDLXth7dYe-jz6Q/edit Overall design doc]<br />
** [https://docs.google.com/document/d/1K9RkQuBSPN7Z2TmKfok7mw3E24otEGo8Pnsemxd5544/edit Data integration design doc]<br />
** [https://goo.gl/1E5MeY API design doc]<br />
** [http://ruleyourcloud.com/ Blog]<br />
<br />
*Meetings<br />
** [https://wiki.openstack.org/wiki/Meetings/Congress Meeting times]<br />
** [http://eavesdrop.openstack.org/meetings/congressteammeeting/2014/ Meeting history]<br />
** [https://www.openstack.org/assets/presentation-media/Congress-OpenStack-Atlanta-2014.pdf Slides from Atlanta summit]<br />
** [https://etherpad.openstack.org/p/juno-congress Notes from Atlanta summit]<br />
<br />
<br />
If you'd like more information, join #congress on freenode, or contact Tim Hinrichs (timothy.l.hinrichs@gmail.com).<br />
<br />
=== Mission ===<br />
Congress is an OpenStack project to provide policy as a service across any collection of cloud services in order to offer governance and compliance for dynamic infrastructures.<br />
<br />
=== Why Congress===<br />
IT services will always be governed and brought into compliance with business-level policies. <br />
<br />
In the past, policy was enforced manually, e.g. by someone sending an email asking for an application to be added to the network, secured by specific firewall entries, connected to an agreed-on storage, and so on. In the cloud era, IT has become more agile: users expect immediate delivery of services, a level of responsiveness that is unattainable by the team responsible for governance. Hence, manual enforcement is no longer feasible. <br />
<br />
Both enterprises and vendors have fielded engines for enforcing policy (semi)-automatically, creating a fragmented market where enterprises reinvent the wheel while maintaining their own code, and vendors fail to meet enterprise needs, either for technical reasons or because their solutions require vertical integration and lock-in. <br />
<br />
The Congress policy service enables IT services to extend their OpenStack footprint by onboarding new applications while keeping the strong compliance and governance dictated by their own business policies. All of this leverages a community-driven implementation where vendors can plug into a common interface.<br />
<br />
=== What is Congress ===<br />
Congress aims to provide an extensible open-source framework for governance and regulatory compliance across any cloud services (e.g. application, network, compute and storage) within a dynamic infrastructure. It is a cloud service whose sole responsibility is policy enforcement.<br />
<br />
Congress aims to include the following functionality:<br />
* Allow cloud administrators and tenants to use a high-level, general purpose, declarative language to describe business logic. The policy language does not include a fixed collection of policy types or built-in enforcement mechanisms; rather, a policy simply defines which states of the cloud are in compliance and which are not, where the state of the cloud is the collection of data provided by the cloud services available to Congress. Some examples:<br />
** Application A is only allowed to communicate with application B.<br />
** A virtual machine owned by tenant A should always have a public network connection if tenant A is part of group B.<br />
** Virtual machine A should never be provisioned in a different geographic region than storage B.<br />
* Offer a pluggable architecture that connects to any collection of cloud services<br />
* Enforce policy<br />
** Proactively: preventing violations before they occur<br />
** Reactively: correcting violations after they occur<br />
** Interactively: giving administrators insight into policy and its violations, e.g. identifying violations, explaining their causes, computing potential remediations, simulating a sequence of changes.<br />
<br />
=== Policy Language ===<br />
The policy language for Congress is Datalog, which is essentially SQL with a syntax closer to traditional programming languages. This declarative language was chosen because its semantics are well known to a broad range of DevOps practitioners, yet its syntax is more terse, making it better suited for expressing real-world policies. The grammar is given below.<br />
<br />
<policy> ::= <rule>*<br /><br />
<rule> ::= <atom> COLONMINUS <literal> (COMMA <literal>)*<br /><br />
<literal> ::= <atom> <br /><br />
<literal> ::= NOT <atom> <br /><br />
<atom> ::= TABLENAME LPAREN <term> (COMMA <term>)* RPAREN <br /><br />
<term> ::= INTEGER | FLOAT | STRING | VARIABLE <br /><br />
<br />
=== Use Cases ===<br />
<br />
Detailed use cases can be viewed in the Google doc here: [https://goo.gl/RM3W6W Congress Use Cases]. We are currently migrating these use cases to follow the [[#How_To_Create_a_Blueprint|Blueprint / Specs workflow]].<br />
<br />
Below we list use cases and the Congress release at which we hope to support that use case. Please feel free to add additional use cases that you would like Congress to support.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Use case !! Congress target release<br />
|-<br />
| Every network connected to a VM must either be public or owned by someone in the same group as the VM's owner || alpha<br />
|-<br />
| Define a group hierarchy for servers where base groups are defined in LDAP. When base-level group change occurs, update network ACLs. || alpha<br />
|-<br />
| Every time a vulnerability scanner reports a vulnerability, disconnect the VM from the network, patch the vulnerability, and reconnect. || beta<br />
|-<br />
| Express a rule to determine if [[Solum]] user may advance an assembly from one [[Solum/Environments|Environment]] to another. See adrian_otto on IRC for details. || TBD<br />
|}<br />
<br />
=== Future Features ===<br />
Here is a list of features that would be useful in future releases. We have not prioritized or assigned any of these, so if any of you developers think they sound fun, let us know and we can explain more.<br />
<br />
* Basic Policy language implementation<br />
** Multi-threaded Datalog implementation<br />
** Bottom-up Datalog evaluation<br />
** Query optimization<br />
** Materialized implementation: automated view selection/inlining<br />
<br />
* Enhanced Policy language<br />
** ActiveDirectory facade<br />
** Syntax improvements (modals like insert/delete)<br />
** Support for compound objects (safe, stratified recursion)<br />
** Richer support for describing actions in Enforcement policy<br />
** Modules<br />
<br />
* Policy structure<br />
** Multi-tenant<br />
** Multi-stakeholder (tenants, finance, operations, etc.)<br />
<br />
* Enforcement<br />
** Execution of (rich) actions<br />
** Carve out subpolicies and push to other components, e.g. Neutron<br />
** Add consultation with Congress to other OS components, e.g. Nova/Neutron<br />
** Proper remediation enumeration with Classification+Action policies<br />
** Find ways of automatically choosing the proper remediation strategy (e.g. priorities/monotonicity)<br />
** Give the cloud owner a way of configuring how proactive/reactive/etc. enforcement should be, based on information from separate policies.<br />
<br />
* Libraries<br />
** Data source drivers for common OS and non-OS components<br />
** HIPAA, etc. encoding<br />
** Ontologies for different sectors, e.g. finance<br />
<br />
* Policy Analysis<br />
** Look for infinite loops through Enforcement policy (using Action policy)<br />
** Compare Access control policy and Classification policy for redundancy<br />
** Change impact analysis<br />
<br />
* Dashboard<br />
** IDE for policy (different levels: raw-Datalog, AD, checkbox-based)<br />
** List violations<br />
** Explain violations (step-by-step tracing through policy)<br />
** Simulate state change and action execution<br />
** Enumerate remediations for a given violation<br />
<br />
* Architecture and API<br />
** Formalize and implement full introspection and query APIs<br />
** Distribute across multiple nodes<br />
** Ensure Congress can use another Congress instance as data<br />
<br />
* Authentication and Access Control<br />
** Add authentication and access controls on API/dashboard<br />
** When remediating, which user(s) are executing actions? Does Congress need admin credentials for all its cloud services or are user credentials part of actions? Need proper storage for those credentials. Etc.<br />
<br />
=== Relationship to Other Projects ===<br />
<br />
'''Related OpenStack components'''<br />
* [https://wiki.openstack.org/wiki/Keystone Keystone]: Keystone is an identity service providing authentication and high-level authorization for OpenStack. Congress can leverage Keystone as an input for policies. For example, an auditor might want to ensure that the running system is consistent with current Keystone authorization decisions.<br />
<br />
* [https://wiki.openstack.org/wiki/Heat Heat]: Heat is an orchestration service for application provisioning and lifecycle management. Congress can ensure that applications managed by Heat are consistent with business policy.<br />
<br />
* [https://wiki.openstack.org/wiki/Mistral Mistral]: Mistral is a task management service, or in other words Workflow as a service. Its primary use cases include Cron-for-the-cloud, execution of long-running tasks, and big data analysis. Congress could potentially utilize Mistral to execute actions that bring the cloud back into policy compliance.<br />
<br />
'''Policy Initiatives'''<br />
<br />
* [https://blueprints.launchpad.net/nova/+spec/solver-scheduler SolverScheduler (Nova blueprint)]: The SolverScheduler provides an interface for using different constraint solvers to solve placement problems for Nova. Depending on how it is applied, it could be used for initial provisioning, re-balancing loads, or both.<br />
<br />
* [https://github.com/openstack/gantt Gantt]: A scheduler framework for use by different OpenStack components. It used to be a [https://wiki.openstack.org/wiki/Meetings/Scheduler subgroup of Nova] focused on scheduling VMs based on resource utilization, and it includes a plugin framework for making arbitrary metrics available to the scheduler.<br />
<br />
* [https://docs.google.com/document/d/1ZbOFxAoibZbJmDWx1oOrOsDcov6Cuom5aaBIrupCD9E/edit?pli=1 Neutron policy group]: This group aims to add a policy API to Neutron, where tenants express policy between groups of networks, ports, etc., and that policy is enforced. Policy statements are of the form "for every network flow between groups A and B that satisfies these conditions, apply a constraint on that flow". The constraints that can be enforced on a flow will grow as the enforcement engine matures; currently, the constraints are Allow and Deny, but there are plans for quality-of-service constraints as well. <br />
<br />
* [https://wiki.openstack.org/wiki/OpenAttestation Open Attestation]: This project provides an SDK for verifying host integrity. It provides some policy-based management capabilities, though documentation is limited.<br />
<br />
* [https://blueprints.launchpad.net/nova/+spec/policy-based-scheduler Policy-based Scheduling Module (Nova blueprint)]: This effort aims to schedule Nova resources per client, per cluster of resources and per context (e.g. overload, time, etc.). A proof of concept was presented at the Demo Theater at OpenStack Juno Summit. <br />
<br />
* [https://docs.google.com/document/d/1DMsnGxQ3P-OwZCF3uxaUeEFaKX8LqUqmmgQ_7EVK7Y8/edit Tetris]: This effort provides condition-action policies (under certain conditions, execute these actions). It is intended to be a generic condition-action engine handling complex actions and optimization. This effort subsumes the [https://blueprints.launchpad.net/nova/+spec/resource-optimization-service Runtime Policies blueprint] within Nova. It also appears to subsume the [http://openstack-neat.org/ Neat] effort. Tetris and Congress have recently decided to merge because of their highly aligned goals and approaches.<br />
<br />
* [https://review.openstack.org/#/c/95907/6/specs/convergence.rst Convergence Engine (Heat spec)]: This effort separates the ideas of desired state and observed state for the objects Heat manages. The Convergence Engine will detect when the desired state and observed state differ and take action to eliminate those differences.<br />
<br />
* [http://docs.openstack.org/developer/swift/overview_policies.html Swift Storage Policies]: A Swift storage policy describes a virtual storage system that Swift implements with physical devices. Today each policy dictates how many partitions the storage system has, how many replicas of each object it should maintain, and the minimum amount of time before a partition can be moved to a different physical location since the last time it was moved.<br />
<br />
* [https://wiki.openstack.org/wiki/Graffiti Graffiti]: Graffiti aims to store and query (hierarchical) metadata about OpenStack objects, e.g. tagging a Glance image with the software installed on that image. Currently, the team is working within other OpenStack projects to add user interfaces for people to create and query metadata and to store that metadata within the project's database. This project is about creating metadata, which could be useful for writing business policies, not about policies over that metadata.<br />
<br />
<br />
Congress is complementary to all of these efforts. It is intended to be a general-purpose policy component and hence could (potentially) express any of the policies described above. However, to enforce those policies Congress would carve off subpolicies and send them to the appropriate enforcement point in the cloud. For example, Congress might carve off the compute Load Balancing policy and send it to the Runtime Policy engine and/or carve off the networking policy and send it to the Neutron policy engine. Part of the goal of Congress is to give administrators a single place to write and inspect the policy being enforced throughout the datacenter or cloud, distribute the relevant portions of that policy to all of the available enforcement points, and monitor the state of the cloud to let administrators know if the cloud is in compliance. (Congress also attempts to correct policy violations when they occur, but optimization policies such as many of those addressed above will not be enforceable by Congress directly.)<br />
<br />
<br />
[Thanks to Jay Lau, Gokul Kandiraju, and others on the mailing list for help compiling this list.]<br />
<br />
== How To Propose a New Feature ==<br />
Proposing a new feature requires you to create two descriptions of that feature: a blueprint and a spec. Before code that implements a new feature is merged, its blueprint must be approved. A blueprint is approved once its spec is approved. A spec is a proxy for a blueprint that everyone in the community can comment on and help refine. (If Launchpad, the software OpenStack uses for managing blueprints and bugs, made it possible for the community to add comments to a blueprint, we would not need specs at all.)<br />
# Create a [https://blueprints.launchpad.net/congress/+addspec blueprint] briefly describing the feature. Include just the name, title, and summary.<br />
# Create an additional description of your feature, called a ''spec'', and add it to the congress-specs repo. <br />
## Check out the [https://github.com/stackforge/congress-specs congress-specs repo].<br />
## Create a new file, whose name is similar to the blueprint name, in the current release folder, e.g. congress-specs/specs/kilo/your-spec-here.rst<br />
## Use reStructuredText (RST) (an [http://sphinx-doc.org/rest.html RST tutorial]) to describe your new feature by filling out the [https://github.com/stackforge/congress-specs/blob/master/specs/template.rst spec template]. <br />
## Push your changes to the congress-specs repo, a process that is explained [https://wiki.openstack.org/wiki/Gerrit_Workflow here]<br />
# Go back to your blueprint and add a link (e.g. https://github.com/stackforge/congress-specs/tree/master/specs/kilo/your-spec-here.rst) to your spec in the Specification Location field on the Blueprint details. <br />
# Participate in the discussion and refinement of your feature via [https://review.openstack.org/#/q/status:open+project:stackforge/congress-specs,n,z Gerrit reviews]<br />
# Eventually, core reviewers will either reject or approve your spec<br />
# If the spec is approved, a core reviewer will merge it into the repo and approve your blueprint. You then copy the finalized details from your spec into your blueprint.<br />
# If your blueprint is either not approved or not implemented during a release cycle, it must be resubmitted for the next cycle.<br />
<br />
== Incubation Plan ==<br />
The official TC governance is here https://wiki.openstack.org/wiki/Governance/Approved/Incubation<br />
Below are the incubation requirements from the TC, taken from https://github.com/openstack/governance/blob/master/reference/incubation-integration-requirements.rst as of 3 June 2014. This RST is still under development. Our incubation proposal will likely be debated within the TC in an etherpad.<br />
<br />
The TC will evaluate the project scope and its complementarity with existing integrated projects and other official programs, look into the project technical choices, and check a number of requirements, including (but not limited to):<br />
<br />
'''Scope'''<br />
<br />
Project must have a clear and defined scope.<br />
<br />
Project's scope should represent a measured progression for OpenStack as a whole.<br />
<br />
Project should not inadvertently duplicate functionality present in other OpenStack projects. If it does, it should have a clear plan and timeframe to prevent long-term scope duplication.<br />
<br />
Project should leverage existing functionality in other OpenStack projects as much as possible<br />
<br />
<br />
'''Maturity'''<br />
<br />
Project should have an active team of contributors<br />
<br />
Project should not have a major architectural rewrite planned<br />
<br />
<br />
'''Process'''<br />
<br />
Project must be hosted under stackforge (and therefore use git as its VCS)<br />
<br />
Project must obey OpenStack coordinated project interface (such as tox, pbr, global-requirements...)<br />
<br />
Project should use oslo libraries or oslo-incubator where appropriate<br />
<br />
If project is not part of an existing program, it needs to file for a new program concurrently with the Incubation request, and fill the corresponding requirements.<br />
<br />
Project must have a well-defined core review team, with reviews distributed amongst the team (and not being primarily done by one person)<br />
<br />
Reviews should follow the same criteria as OpenStack projects (2 +2s before +A)<br />
<br />
Project should use the official openstack lists for discussion<br />
<br />
<br />
<br />
'''API'''<br />
<br />
Project APIs should be reasonably stable<br />
<br />
Project must have a REST API with at least a JSON entity representation<br />
<br />
Project must have a Python client library API for its REST API<br />
<br />
<br />
'''QA'''<br />
<br />
Project must have a basic devstack-gate job set up<br />
<br />
'''Documentation / User support'''<br />
<br />
Project must have docs for developers who want to contribute to the project<br />
<br />
Project should have API documentation for devs who want to add to the API, updated when the code is updated<br />
<br />
<br />
'''Legal requirements'''<br />
<br />
Project must be licensed under the Apache License v2<br />
<br />
Project must have no library dependencies which effectively restrict how the project may be distributed or deployed [1]<br />
<br />
All contributors to the project must have signed the CLA<br />
<br />
Project must have no known trademark issues [2]</div>Thinrichshttps://wiki.openstack.org/w/index.php?title=Congress&diff=75157Congress2015-03-05T22:32:28Z<p>Thinrichs: /* Policy as a service ("Congress") */</p>
<hr />
<div> [https://etherpad.openstack.org/p/par-kilo-congress-design-session Kilo design session etherpad]<br />
<br />
== Policy as a service ("Congress") ==<br />
Here are some external resources with detailed information about Congress.<br />
<br />
* Code<br />
** Server<br />
*** [https://github.com/stackforge/congress Server source code (github)] <br />
*** [https://launchpad.net/congress Server bugs (Launchpad)]<br />
*** [https://review.openstack.org/#/q/status:open+project:stackforge/congress,n,z Outstanding server changes (Gerrit)]<br />
** Python client<br />
*** [https://github.com/stackforge/python-congressclient Python client source code (github)]<br />
*** [https://launchpad.net/python-congressclient Python client bugs (Launchpad)]<br />
*** [https://review.openstack.org/#/q/status:open+project:stackforge/python-congressclient,n,z Outstanding python client changes (Gerrit)]<br />
** [https://github.com/stackforge/congress-specs Specs: suggestions for additional features for both server and client (github)]<br />
<br />
* Docs<br />
** [https://docs.google.com/a/vmware.com/document/d/1f2xokl9Tc47aV67KEua0PBRz4jsdSDLXth7dYe-jz6Q/edit Overall design doc]<br />
** [https://docs.google.com/document/d/1K9RkQuBSPN7Z2TmKfok7mw3E24otEGo8Pnsemxd5544/edit Data integration design doc]<br />
** [https://goo.gl/1E5MeY API design doc]<br />
** [http://ruleyourcloud.com/ Blog]<br />
<br />
*Meetings<br />
** [https://wiki.openstack.org/wiki/Meetings/Congress Meeting times]<br />
** [http://eavesdrop.openstack.org/meetings/congressteammeeting/2014/ Meeting history]<br />
** [https://www.openstack.org/assets/presentation-media/Congress-OpenStack-Atlanta-2014.pdf Slides from Atlanta summit]<br />
** [https://etherpad.openstack.org/p/juno-congress Notes from Atlanta summit]<br />
<br />
<br />
If you'd like more information, join #congress on freenode, or contact Tim Hinrichs (thinrichs@vmware.com) and Peter Balland (pballand@vmware.com).<br />
<br />
=== Mission ===<br />
Congress is an OpenStack project to provide policy as a service across any collection of cloud services in order to offer governance and compliance for dynamic infrastructures.<br />
<br />
=== Why Congress===<br />
IT services will always be governed and brought into compliance with business-level policies. <br />
<br />
In the past, policy was enforced manually, e.g. by someone sending an email asking for an application to be added to the network, secured by specific firewall entries, connected to an agreed-on storage, and so on. In the cloud era, IT has become more agile: users expect immediate delivery of services, a level of responsiveness that is unattainable by the team responsible for governance. Hence, manual enforcement is no longer feasible. <br />
<br />
Both enterprises and vendors have fielded engines for enforcing policy (semi)-automatically, creating a fragmented market where enterprises reinvent the wheel while maintaining their own code, and vendors fail to meet enterprise needs, either for technical reasons or because their solutions require vertical integration and lock-in. <br />
<br />
The Congress policy service enables IT organizations to extend their OpenStack footprint by onboarding new applications while preserving the compliance and governance dictated by their own business policies. All of this leverages a community-driven implementation in which vendors can plug into a common interface.<br />
<br />
=== What is Congress ===<br />
Congress aims to provide an extensible open-source framework for governance and regulatory compliance across any cloud services (e.g. application, network, compute and storage) within a dynamic infrastructure. It is a cloud service whose sole responsibility is policy enforcement.<br />
<br />
Congress aims to include the following functionality:<br />
* Allow cloud administrators and tenants to use a high-level, general purpose, declarative language to describe business logic. The policy language does not include a fixed collection of policy types or built-in enforcement mechanisms; rather, a policy simply defines which states of the cloud are in compliance and which are not, where the state of the cloud is the collection of data provided by the cloud services available to Congress. Some examples:<br />
** Application A is only allowed to communicate with application B.<br />
** A virtual machine owned by tenant A should always have a public network connection if tenant A is part of group B.<br />
** Virtual machine A should never be provisioned in a different geographic region than storage B.<br />
* Offer a pluggable architecture that connects to any collection of cloud services<br />
* Enforce policy<br />
** Proactively: preventing violations before they occur<br />
** Reactively: correcting violations after they occur<br />
** Interactively: give administrators insight into policy and its violations, e.g. identifying violations, explaining their causes, computing potential remediations, simulating a sequence of changes.<br />
<br />
=== Policy Language ===<br />
The policy language for Congress is Datalog, which is essentially SQL with a syntax closer to that of traditional programming languages. This declarative language was chosen because its semantics are well known to a broad range of DevOps engineers, yet its syntax is terser, making it better suited for expressing real-world policies. The grammar is given below.<br />
<br />
<policy> ::= <rule>*<br /><br />
<rule> ::= <atom> COLONMINUS <literal> (COMMA <literal>)*<br /><br />
<literal> ::= <atom> <br /><br />
<literal> ::= NOT <atom> <br /><br />
<atom> ::= TABLENAME LPAREN <term> (COMMA <term>)* RPAREN <br /><br />
<term> ::= INTEGER | FLOAT | STRING | VARIABLE <br /><br />
<br />
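For illustration only, here is a sketch of a rule that this grammar permits, marking as an error any VM connected to a non-public network owned outside the VM owner's group. The table names (virtual_machine, vm_network, public_network, network_owner, group_member) are hypothetical placeholders chosen for this sketch, not the actual tables exposed by the Congress data source drivers.<br />
<br />
error(vm) :- virtual_machine(vm), vm_network(vm, net), NOT public_network(net), network_owner(net, owner), NOT same_group(owner, vm)<br /><br />
same_group(x, y) :- group_member(g, x), group_member(g, y)<br /><br />
<br />
Intuitively, the head of the first rule (error) holds whenever all literals in its body hold, which is how a Datalog policy designates a state of the cloud as non-compliant.<br />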
=== Use Cases ===<br />
<br />
Detailed use cases can be viewed in the Google doc here: [https://goo.gl/RM3W6W Congress Use Cases]. We are currently migrating these use cases to follow the [[#How_To_Propose_a_New_Feature|Blueprint / Specs workflow]].<br />
<br />
Below we list use cases along with the Congress release in which we hope to support each. Please feel free to add additional use cases that you would like Congress to support.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Use case !! Congress target release<br />
|-<br />
| Every network connected to a VM must either be public or owned by someone in the same group as the VM's owner || alpha<br />
|-<br />
| Define a group hierarchy for servers where base groups are defined in LDAP. When base-level group change occurs, update network ACLs. || alpha<br />
|-<br />
| Every time a vulnerability scanner reports a vulnerability, disconnect the VM from the network, patch the vulnerability, and reconnect. || beta<br />
|-<br />
| Express a rule to determine if [[Solum]] user may advance an assembly from one [[Solum/Environments|Environment]] to another. See adrian_otto on IRC for details. || TBD<br />
|}<br />
<br />
=== Future Features ===<br />
Here is a list of features that would be useful in future releases. We have not prioritized or assigned any of these, so if any of you developers think they sound fun, let us know and we can explain more.<br />
<br />
* Basic Policy language implementation<br />
** Multi-threaded Datalog implementation<br />
** Bottom-up Datalog evaluation<br />
** Query optimization<br />
** Materialized implementation: automated view selection/inlining<br />
<br />
* Enhanced Policy language<br />
** ActiveDirectory facade<br />
** Syntax improvements (modals like insert/delete)<br />
** Support for compound objects (safe, stratified recursion)<br />
** Richer support for describing actions in Enforcement policy<br />
** Modules<br />
<br />
* Policy structure<br />
** Multi-tenant<br />
** Multi-stakeholder (tenants, finance, operations, etc.)<br />
<br />
* Enforcement<br />
** Execution of (rich) actions<br />
** Carve out subpolicies and push to other components, e.g. Neutron<br />
** Add consultation with Congress to other OS components, e.g. Nova/Neutron<br />
** Proper remediation enumeration with Classification+Action policies<br />
** Find ways of automatically choosing the proper remediation strategy (e.g. priorities/monotonicity)<br />
** Give the cloud owner a way of configuring how proactive/reactive/etc. enforcement should be, based on information from separate policies.<br />
<br />
* Libraries<br />
** Data source drivers for common OS and non-OS components<br />
** HIPAA, etc. encoding<br />
** Ontologies for different sectors, e.g. finance<br />
<br />
* Policy Analysis<br />
** Look for infinite loops through Enforcement policy (using Action policy)<br />
** Compare Access control policy and Classification policy for redundancy<br />
** Change impact analysis<br />
<br />
* Dashboard<br />
** IDE for policy (different levels: raw-Datalog, AD, checkbox-based)<br />
** List violations<br />
** Explain violations (step-by-step tracing through policy)<br />
** Simulate state change and action execution<br />
** Enumerate remediations for a given violation<br />
<br />
* Architecture and API<br />
** Formalize and implement full introspection and query APIs<br />
** Distribute across multiple nodes<br />
** Ensure Congress can use another Congress instance as data<br />
<br />
* Authentication and Access Control<br />
** Add authentication and access controls on API/dashboard<br />
** When remediating, which user(s) are executing actions? Does Congress need admin credentials for all its cloud services or are user credentials part of actions? Need proper storage for those credentials. Etc.<br />
<br />
=== Relationship to Other Projects ===<br />
<br />
'''Related OpenStack components'''<br />
* [https://wiki.openstack.org/wiki/Keystone Keystone]: Keystone is an identity service providing authentication and high-level authorization for OpenStack. Congress can leverage Keystone as an input for policies. For example, an auditor might want to ensure that the running system is consistent with current Keystone authorization decisions.<br />
<br />
* [https://wiki.openstack.org/wiki/Heat Heat]: Heat is an orchestration service for application provisioning and lifecycle management. Congress can ensure that applications managed by Heat are consistent with business policy.<br />
<br />
* [https://wiki.openstack.org/wiki/Mistral Mistral]: Mistral is a task management service, or in other words Workflow as a service. Its primary use cases include Cron-for-the-cloud, execution of long-running tasks, and big data analysis. Congress could potentially utilize Mistral to execute actions that bring the cloud back into policy compliance.<br />
<br />
'''Policy Initiatives'''<br />
<br />
* [https://blueprints.launchpad.net/nova/+spec/solver-scheduler SolverScheduler (Nova blueprint)]: The SolverScheduler provides an interface for using different constraint solvers to solve placement problems for Nova. Depending on how it is applied, it could be used for initial provisioning, re-balancing loads, or both.<br />
<br />
* [https://github.com/openstack/gantt Gantt]: A scheduler framework for use by different OpenStack components. It used to be a [https://wiki.openstack.org/wiki/Meetings/Scheduler subgroup of Nova] focused on scheduling VMs based on resource utilization, and it includes a plugin framework for making arbitrary metrics available to the scheduler.<br />
<br />
* [https://docs.google.com/document/d/1ZbOFxAoibZbJmDWx1oOrOsDcov6Cuom5aaBIrupCD9E/edit?pli=1 Neutron policy group]: This group aims to add a policy API to Neutron, where tenants express policy between groups of networks, ports, etc., and that policy is enforced. Policy statements are of the form "for every network flow between groups A and B that satisfies these conditions, apply a constraint on that flow". The constraints that can be enforced on a flow will grow as the enforcement engine matures; currently, the constraints are Allow and Deny, but there are plans for quality-of-service constraints as well. <br />
<br />
* [https://wiki.openstack.org/wiki/OpenAttestation Open Attestation]: This project provides an SDK for verifying host integrity. It provides some policy-based management capabilities, though documentation is limited.<br />
<br />
* [https://blueprints.launchpad.net/nova/+spec/policy-based-scheduler Policy-based Scheduling Module (Nova blueprint)]: This effort aims to schedule Nova resources per client, per cluster of resources and per context (e.g. overload, time, etc.). A proof of concept was presented at the Demo Theater at OpenStack Juno Summit. <br />
<br />
* [https://docs.google.com/document/d/1DMsnGxQ3P-OwZCF3uxaUeEFaKX8LqUqmmgQ_7EVK7Y8/edit Tetris]: This effort provides condition-action policies (under certain conditions, execute these actions). It is intended to be a generic condition-action engine handling complex actions and optimization. This effort subsumes the [https://blueprints.launchpad.net/nova/+spec/resource-optimization-service Runtime Policies blueprint] within Nova. It also appears to subsume the [http://openstack-neat.org/ Neat] effort. Tetris and Congress have recently decided to merge because of their highly aligned goals and approaches.<br />
<br />
* [https://review.openstack.org/#/c/95907/6/specs/convergence.rst Convergence Engine (Heat spec)]: This effort separates the ideas of desired state and observed state for the objects Heat manages. The Convergence Engine will detect when the desired state and observed state differ and take action to eliminate those differences.<br />
<br />
* [http://docs.openstack.org/developer/swift/overview_policies.html Swift Storage Policies]: A Swift storage policy describes a virtual storage system that Swift implements with physical devices. Today each policy dictates how many partitions the storage system has, how many replicas of each object it should maintain, and the minimum amount of time before a partition can be moved to a different physical location since the last time it was moved.<br />
<br />
* [https://wiki.openstack.org/wiki/Graffiti Graffiti]: Graffiti aims to store and query (hierarchical) metadata about OpenStack objects, e.g. tagging a Glance image with the software installed on that image. Currently, the team is working within other OpenStack projects to add user interfaces for people to create and query metadata and to store that metadata within the project's database. This project is about creating metadata, which could be useful for writing business policies, not about policies over that metadata.<br />
<br />
<br />
Congress is complementary to all of these efforts. It is intended to be a general-purpose policy component and hence could (potentially) express any of the policies described above. However, to enforce those policies Congress would carve off subpolicies and send them to the appropriate enforcement point in the cloud. For example, Congress might carve off the compute Load Balancing policy and send it to the Runtime Policy engine and/or carve off the networking policy and send it to the Neutron policy engine. Part of the goal of Congress is to give administrators a single place to write and inspect the policy being enforced throughout the datacenter or cloud, distribute the relevant portions of that policy to all of the available enforcement points, and monitor the state of the cloud to let administrators know if the cloud is in compliance. (Congress also attempts to correct policy violations when they occur, but optimization policies such as many of those addressed above will not be enforceable by Congress directly.)<br />
<br />
<br />
[Thanks to Jay Lau, Gokul Kandiraju, and others on the mailing list for help compiling this list.]<br />
<br />
== How To Propose a New Feature ==<br />
Proposing a new feature requires you to create two descriptions of that feature: a blueprint and a spec. Before code that implements a new feature is merged, its blueprint must be approved. A blueprint is approved once its spec is approved. A spec is a proxy for a blueprint that everyone in the community can comment on and help refine. (If Launchpad, the software OpenStack uses for managing blueprints and bugs, made it possible for the community to add comments to a blueprint, we would not need specs at all.)<br />
# Create a [https://blueprints.launchpad.net/congress/+addspec blueprint] briefly describing the feature. Include just the name, title, and summary.<br />
# Create an additional description of your feature, called a ''spec'', and add it to the congress-specs repo. <br />
## Check out the [https://github.com/stackforge/congress-specs congress-specs repo].<br />
## Create a new file, whose name is similar to the blueprint name, in the current release folder, e.g. congress-specs/specs/kilo/your-spec-here.rst<br />
## Use reStructuredText (RST) (an [http://sphinx-doc.org/rest.html RST tutorial]) to describe your new feature by filling out the [https://github.com/stackforge/congress-specs/blob/master/specs/template.rst spec template]. <br />
## Push your changes to the congress-specs repo, a process that is explained [https://wiki.openstack.org/wiki/Gerrit_Workflow here]<br />
# Go back to your blueprint and add a link (e.g. https://github.com/stackforge/congress-specs/tree/master/specs/kilo/your-spec-here.rst) to your spec in the Specification Location field on the Blueprint details. <br />
# Participate in the discussion and refinement of your feature via [https://review.openstack.org/#/q/status:open+project:stackforge/congress-specs,n,z Gerrit reviews]<br />
# Eventually, core reviewers will either reject or approve your spec<br />
# If the spec is approved, a core reviewer will merge it into the repo and approve your blueprint. You then copy the finalized details from your spec into your blueprint.<br />
# If your blueprint is either not approved or not implemented during a release cycle, it must be resubmitted for the next cycle.<br />
<br />
== Incubation Plan ==<br />
The official TC governance is here https://wiki.openstack.org/wiki/Governance/Approved/Incubation<br />
Below are the incubation requirements from the TC, taken from https://github.com/openstack/governance/blob/master/reference/incubation-integration-requirements.rst as of 3 June 2014. This RST is still under development. Our incubation proposal will likely be debated within the TC in an etherpad.<br />
<br />
The TC will evaluate the project scope and its complementarity with existing integrated projects and other official programs, look into the project technical choices, and check a number of requirements, including (but not limited to):<br />
<br />
'''Scope'''<br />
<br />
Project must have a clear and defined scope.<br />
<br />
Project's scope should represent a measured progression for OpenStack as a whole.<br />
<br />
Project should not inadvertently duplicate functionality present in other OpenStack projects. If it does, it should have a clear plan and timeframe to prevent long-term scope duplication.<br />
<br />
Project should leverage existing functionality in other OpenStack projects as much as possible<br />
<br />
<br />
'''Maturity'''<br />
<br />
Project should have an active team of contributors<br />
<br />
Project should not have a major architectural rewrite planned<br />
<br />
<br />
'''Process'''<br />
<br />
Project must be hosted under stackforge (and therefore use git as its VCS)<br />
<br />
Project must obey OpenStack coordinated project interface (such as tox, pbr, global-requirements...)<br />
<br />
Project should use oslo libraries or oslo-incubator where appropriate<br />
<br />
If project is not part of an existing program, it needs to file for a new program concurrently with the Incubation request, and fill the corresponding requirements.<br />
<br />
Project must have a well-defined core review team, with reviews distributed amongst the team (and not being primarily done by one person)<br />
<br />
Reviews should follow the same criteria as OpenStack projects (2 +2s before +A)<br />
<br />
Project should use the official openstack lists for discussion<br />
<br />
<br />
<br />
'''API'''<br />
<br />
Project APIs should be reasonably stable<br />
<br />
Project must have a REST API with at least a JSON entity representation<br />
<br />
Project must have a Python client library API for its REST API<br />
<br />
<br />
'''QA'''<br />
<br />
Project must have a basic devstack-gate job set up<br />
<br />
'''Documentation / User support'''<br />
<br />
Project must have docs for developers who want to contribute to the project<br />
<br />
Project should have API documentation for devs who want to add to the API, updated when the code is updated<br />
<br />
<br />
'''Legal requirements'''<br />
<br />
Project must be licensed under the Apache License v2<br />
<br />
Project must have no library dependencies which effectively restrict how the project may be distributed or deployed [1]<br />
<br />
All contributors to the project must have signed the CLA<br />
<br />
Project must have no known trademark issues [2]</div>Thinrichshttps://wiki.openstack.org/w/index.php?title=PolicyGuidedFulfillmentMeetings&diff=70094PolicyGuidedFulfillmentMeetings2014-12-10T17:46:27Z<p>Thinrichs: /* Dec 10, 2014 */</p>
<hr />
<div>=Policy Guided Fulfillment Meetings=<br />
<br />
== Introduction ==<br />
This wiki page captures meetings info on [[PolicyGuidedFulfillment|Policy Guided Fulfillment]].<br />
<br />
== Meetings ==<br />
Meetings are held using Hangouts every Wednesday at 18:00 CET (9:00 AM PST, 19:00 IL).<br />
<br />
=== Dec 10, 2014 ===<br />
<br />
Agenda<br />
* Congress – ‘policy’ attachment<br />
** Clarification/refinement from last call: IMO we have to attach ‘a rule of Murano policy’ instead of ‘Murano policy’ – am I right? <br />
* Blueprints – assignment<br />
** Each assignee shall provide as much detail as possible in the blueprint/specification on what will be done and how.<br />
<br />
Blueprints<br />
* Policy Enforcement Point - [https://review.openstack.org/140395 #140395]<br />
* Policy Rules Attachment in Murano - [https://review.openstack.org/140396 #140396]<br />
* Congress Support in Murano - [https://review.openstack.org/140397 #140397]<br />
* Murano Data Schema in Congress - [https://review.openstack.org/140398 #140398]<br />
* Congress version of Murano integration - [https://review.openstack.org/#/c/134421/ #134421]</div>Thinrichshttps://wiki.openstack.org/w/index.php?title=Congress&diff=67577Congress2014-11-10T21:14:20Z<p>Thinrichs: /* Roadmap */</p>
<hr />
<div> [https://etherpad.openstack.org/p/par-kilo-congress-design-session Kilo design session etherpad]<br />
<br />
== Policy as a service ("Congress") ==<br />
Here are some external resources with detailed information about Congress.<br />
<br />
* Code<br />
** Server<br />
*** [https://github.com/stackforge/congress Server source code (github)] <br />
*** [https://launchpad.net/congress Server bugs (Launchpad)]<br />
*** [https://review.openstack.org/#/q/status:open+project:stackforge/congress,n,z Outstanding server changes (Gerrit)]<br />
** Python client<br />
*** [https://github.com/stackforge/python-congressclient Python client source code (github)]<br />
*** [https://launchpad.net/python-congressclient Python client bugs (Launchpad)]<br />
*** [https://review.openstack.org/#/q/status:open+project:stackforge/python-congressclient,n,z Outstanding python client changes (Gerrit)]<br />
** [https://github.com/stackforge/congress-specs Specs: suggestions for additional features for both server and client (github)]<br />
<br />
* Design docs<br />
** [https://docs.google.com/a/vmware.com/document/d/1f2xokl9Tc47aV67KEua0PBRz4jsdSDLXth7dYe-jz6Q/edit Overall design doc]<br />
** [https://docs.google.com/document/d/1K9RkQuBSPN7Z2TmKfok7mw3E24otEGo8Pnsemxd5544/edit Data integration design doc]<br />
** [https://goo.gl/1E5MeY API design doc]<br />
<br />
*Meetings<br />
** [https://wiki.openstack.org/wiki/Meetings/Congress Meeting times]<br />
** [http://eavesdrop.openstack.org/meetings/congressteammeeting/2014/ Meeting history]<br />
** [https://www.openstack.org/assets/presentation-media/Congress-OpenStack-Atlanta-2014.pdf Slides from Atlanta summit]<br />
** [https://etherpad.openstack.org/p/juno-congress Notes from Atlanta summit]<br />
<br />
<br />
If you'd like more information, join #congress on freenode, or contact Tim Hinrichs (thinrichs@vmware.com) and Peter Balland (pballand@vmware.com).<br />
<br />
=== Mission ===<br />
Congress is an OpenStack project to provide policy as a service across any collection of cloud services in order to offer governance and compliance for dynamic infrastructures.<br />
<br />
=== Why Congress===<br />
IT services will always be governed and brought into compliance with business-level policies. <br />
<br />
In the past, policy was enforced manually, e.g. by someone sending an email asking for an application to be added to the network, secured by specific firewall entries, connected to an agreed-on storage, and so on. In the cloud era, IT has become more agile: users expect immediate delivery of services, a level of responsiveness that is unattainable by the team responsible for governance. Hence, manual enforcement is no longer feasible. <br />
<br />
Both enterprises and vendors have fielded engines for enforcing policy (semi)-automatically, creating a fragmented market where enterprises reinvent the wheel while maintaining their own code, and vendors fail to meet enterprise needs, either for technical reasons or because their solutions require vertical integration and lock-in. <br />
<br />
The Congress policy service enables IT organizations to extend their OpenStack footprint by onboarding new applications while preserving the compliance and governance dictated by their own business policies. All of this leverages a community-driven implementation in which vendors can plug into a common interface.<br />
<br />
=== What is Congress ===<br />
Congress aims to provide an extensible open-source framework for governance and regulatory compliance across any cloud services (e.g. application, network, compute and storage) within a dynamic infrastructure. It is a cloud service whose sole responsibility is policy enforcement.<br />
<br />
Congress aims to include the following functionality:<br />
* Allow cloud administrators and tenants to use a high-level, general purpose, declarative language to describe business logic. The policy language does not include a fixed collection of policy types or built-in enforcement mechanisms; rather, a policy simply defines which states of the cloud are in compliance and which are not, where the state of the cloud is the collection of data provided by the cloud services available to Congress. Some examples:<br />
** Application A is only allowed to communicate with application B.<br />
** A virtual machine owned by tenant A should always have a public network connection if tenant A is part of group B.<br />
** Virtual machine A should never be provisioned in a different geographic region than storage B.<br />
* Offer a pluggable architecture that connects to any collection of cloud services<br />
* Enforce policy<br />
** Proactively: preventing violations before they occur<br />
** Reactively: correcting violations after they occur<br />
** Interactively: give administrators insight into policy and its violations, e.g. identifying violations, explaining their causes, computing potential remediations, simulating a sequence of changes.<br />
<br />
=== Policy Language ===<br />
The policy language for Congress is Datalog, which is essentially SQL with a syntax closer to that of traditional programming languages. This declarative language was chosen because its semantics are well known to a broad range of DevOps engineers, yet its syntax is terser, making it better suited for expressing real-world policies. The grammar is given below.<br />
<br />
<policy> ::= <rule>*<br /><br />
<rule> ::= <atom> COLONMINUS <literal> (COMMA <literal>)*<br /><br />
<literal> ::= <atom> <br /><br />
<literal> ::= NOT <atom> <br /><br />
<atom> ::= TABLENAME LPAREN <term> (COMMA <term>)* RPAREN <br /><br />
<term> ::= INTEGER | FLOAT | STRING | VARIABLE <br /><br />
<br />
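For illustration only, here is a sketch of a rule that this grammar permits, marking as an error any VM connected to a non-public network owned outside the VM owner's group. The table names (virtual_machine, vm_network, public_network, network_owner, group_member) are hypothetical placeholders chosen for this sketch, not the actual tables exposed by the Congress data source drivers.<br />
<br />
error(vm) :- virtual_machine(vm), vm_network(vm, net), NOT public_network(net), network_owner(net, owner), NOT same_group(owner, vm)<br /><br />
same_group(x, y) :- group_member(g, x), group_member(g, y)<br /><br />
<br />
Intuitively, the head of the first rule (error) holds whenever all literals in its body hold, which is how a Datalog policy designates a state of the cloud as non-compliant.<br />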
=== Use Cases ===<br />
<br />
Detailed use cases can be viewed in the Google doc here: [https://goo.gl/RM3W6W Congress Use Cases]. We are currently migrating these use cases to follow the [[#How_To_Propose_a_New_Feature|Blueprint / Specs workflow]].<br />
<br />
Below we list use cases along with the Congress release in which we hope to support each. Please feel free to add additional use cases that you would like Congress to support.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Use case !! Congress target release<br />
|-<br />
| Every network connected to a VM must either be public or owned by someone in the same group as the VM's owner || alpha<br />
|-<br />
| Define a group hierarchy for servers where base groups are defined in LDAP. When base-level group change occurs, update network ACLs. || alpha<br />
|-<br />
| Every time a vulnerability scanner reports a vulnerability, disconnect the VM from the network, patch the vulnerability, and reconnect. || beta<br />
|-<br />
| Express a rule to determine if [[Solum]] user may advance an assembly from one [[Solum/Environments|Environment]] to another. See adrian_otto on IRC for details. || TBD<br />
|}<br />
<br />
=== Future Features ===<br />
Here is a list of features that would be useful in future releases. We have not prioritized or assigned any of these, so if any of you developers think they sound fun, let us know and we can explain more.<br />
<br />
* Basic Policy language implementation<br />
** Multi-threaded Datalog implementation<br />
** Bottom-up Datalog evaluation<br />
** Query optimization<br />
** Materialized implementation: automated view selection/inlining<br />
<br />
* Enhanced Policy language<br />
** ActiveDirectory facade<br />
** Syntax improvements (modals like insert/delete)<br />
** Support for compound objects (safe, stratified recursion)<br />
** Richer support for describing actions in Enforcement policy<br />
** Modules<br />
<br />
* Policy structure<br />
** Multi-tenant<br />
** Multi-stakeholder (tenants, finance, operations, etc.)<br />
<br />
* Enforcement<br />
** Execution of (rich) actions<br />
** Carve out subpolicies and push to other components, e.g. Neutron<br />
** Add consultation with Congress to other OS components, e.g. Nova/Neutron<br />
** Proper remediation enumeration with Classification+Action policies<br />
** Find ways of automatically choosing the proper remediation strategy (e.g. priorities/monotonicity)<br />
** Give the cloud owner a way of configuring how proactive/reactive/etc. enforcement should be, based on information from separate policies.<br />
<br />
* Libraries<br />
** Data source drivers for common OS and non-OS components<br />
** HIPAA, etc. encoding<br />
** Ontologies for different sectors, e.g. finance<br />
<br />
* Policy Analysis<br />
** Look for infinite loops through Enforcement policy (using Action policy)<br />
** Compare Access control policy and Classification policy for redundancy<br />
** Change impact analysis<br />
<br />
* Dashboard<br />
** IDE for policy (different levels: raw-Datalog, AD, checkbox-based)<br />
** List violations<br />
** Explain violations (step-by-step tracing through policy)<br />
** Simulate state change and action execution<br />
** Enumerate remediations for a given violation<br />
<br />
* Architecture and API<br />
** Formalize and implement full introspection and query APIs<br />
** Distribute across multiple nodes<br />
** Ensure Congress can use another Congress instance as data<br />
<br />
* Authentication and Access Control<br />
** Add authentication and access controls on API/dashboard<br />
** When remediating, which user(s) are executing actions? Does Congress need admin credentials for all its cloud services or are user credentials part of actions? Need proper storage for those credentials. Etc.<br />
<br />
=== Relationship to Other Projects ===<br />
<br />
'''Related OpenStack components'''<br />
* [https://wiki.openstack.org/wiki/Keystone Keystone]: Keystone is an identity service providing authentication and high-level authorization for OpenStack. Congress can leverage Keystone as an input for policies. For example, an auditor might want to ensure that the running system is consistent with current Keystone authorization decisions.<br />
<br />
* [https://wiki.openstack.org/wiki/Heat Heat]: Heat is an orchestration service for application provisioning and lifecycle management. Congress can ensure that applications managed by Heat are consistent with business policy.<br />
<br />
* [https://wiki.openstack.org/wiki/Mistral Mistral]: Mistral is a task management service, or in other words Workflow as a service. Its primary use cases include Cron-for-the-cloud, execution of long-running tasks, and big data analysis. Congress could potentially utilize Mistral to execute actions that bring the cloud back into policy compliance.<br />
<br />
'''Policy Initiatives'''<br />
<br />
* [https://blueprints.launchpad.net/nova/+spec/solver-scheduler SolverScheduler (Nova blueprint)]: The SolverScheduler provides an interface for using different constraint solvers to solve placement problems for Nova. Depending on how it is applied, it could be used for initial provisioning, re-balancing loads, or both.<br />
<br />
* [https://github.com/openstack/gantt Gantt]: A scheduler framework for use by different OpenStack components. It used to be a [https://wiki.openstack.org/wiki/Meetings/Scheduler subgroup of Nova] focused on scheduling VMs based on resource utilization, and it includes a plugin framework for making arbitrary metrics available to the scheduler.<br />
<br />
* [https://docs.google.com/document/d/1ZbOFxAoibZbJmDWx1oOrOsDcov6Cuom5aaBIrupCD9E/edit?pli=1 Neutron policy group]: This group aims to add a policy API to Neutron, where tenants express policy between groups of networks, ports, etc., and that policy is enforced. Policy statements are of the form "for every network flow between groups A and B that satisfies these conditions, apply a constraint on that flow". The constraints that can be enforced on a flow will grow as the enforcement engine matures; currently, the constraints are Allow and Deny, but there are plans for quality-of-service constraints as well. <br />
<br />
* [https://wiki.openstack.org/wiki/OpenAttestation Open Attestation]: This project provides an SDK for verifying host integrity. It provides some policy-based management capabilities, though documentation is limited.<br />
<br />
* [https://blueprints.launchpad.net/nova/+spec/policy-based-scheduler Policy-based Scheduling Module (Nova blueprint)]: This effort aims to schedule Nova resources per client, per cluster of resources and per context (e.g. overload, time, etc.). A proof of concept was presented at the Demo Theater at OpenStack Juno Summit. <br />
<br />
* [https://docs.google.com/document/d/1DMsnGxQ3P-OwZCF3uxaUeEFaKX8LqUqmmgQ_7EVK7Y8/edit Tetris]: This effort provides condition-action policies (under certain conditions, execute these actions). It is intended to be a generic condition-action engine handling complex actions and optimization. This effort subsumes the [https://blueprints.launchpad.net/nova/+spec/resource-optimization-service Runtime Policies blueprint] within Nova. It also appears to subsume the [http://openstack-neat.org/ Neat] effort. Tetris and Congress have recently decided to merge because of their highly aligned goals and approaches.<br />
<br />
* [https://review.openstack.org/#/c/95907/6/specs/convergence.rst Convergence Engine (Heat spec)]: This effort separates the ideas of desired state and observed state for the objects Heat manages. The Convergence Engine will detect when the desired state and observed state differ and take action to eliminate those differences.<br />
<br />
* [http://docs.openstack.org/developer/swift/overview_policies.html Swift Storage Policies]: A Swift storage policy describes a virtual storage system that Swift implements with physical devices. Today each policy dictates how many partitions the storage system has, how many replicas of each object it should maintain, and the minimum amount of time before a partition can be moved to a different physical location since the last time it was moved.<br />
<br />
* [https://wiki.openstack.org/wiki/Graffiti Graffiti]: Graffiti aims to store and query (hierarchical) metadata about OpenStack objects, e.g. tagging a Glance image with the software installed on that image. Currently, the team is working within other OpenStack projects to add user interfaces for people to create and query metadata and to store that metadata within the project's database. This project is about creating metadata, which could be useful for writing business policies, not about policies over that metadata.<br />
<br />
<br />
Congress is complementary to all of these efforts. It is intended to be a general-purpose policy component and hence could (potentially) express any of the policies described above. However, to enforce those policies Congress would carve off subpolicies and send them to the appropriate enforcement point in the cloud. For example, Congress might carve off the compute Load Balancing policy and send it to the Runtime Policy engine and/or carve off the networking policy and send it to the Neutron policy engine. Part of the goal of Congress is to give administrators a single place to write and inspect the policy being enforced throughout the datacenter or cloud, distribute the relevant portions of that policy to all of the available enforcement points, and monitor the state of the cloud to let administrators know if the cloud is in compliance. (Congress also attempts to correct policy violations when they occur, but optimization policies such as many of those addressed above will not be enforceable by Congress directly.)<br />
<br />
<br />
[Thanks to Jay Lau, Gokul Kandiraju, and others on the mailing list for help compiling this list.]<br />
<br />
== How To Propose a New Feature ==<br />
Proposing a new feature requires you to create two descriptions of that feature: a blueprint and a spec. Before code that implements a new feature is merged, its blueprint must be approved. A blueprint is approved once its spec is approved. A spec is a proxy for a blueprint that everyone in the community can comment on and help refine. (If Launchpad, the software OpenStack uses for managing blueprints and bugs, made it possible for the community to add comments to a blueprint, we would not need specs at all.)<br />
# Create a [https://blueprints.launchpad.net/congress/+addspec blueprint] briefly describing the feature. Include just the name, title, and summary.<br />
# Create an additional description of your feature, called a ''spec'', and add it to the congress-specs repo. <br />
## Check out the [https://github.com/stackforge/congress-specs congress-specs repo].<br />
## Create a new file, whose name is similar to the blueprint name, in the current release folder, e.g. congress-specs/specs/kilo/your-spec-here.rst<br />
## Use reStructuredText (RST) (an [http://sphinx-doc.org/rest.html RST tutorial]) to describe your new feature by filling out the [https://github.com/stackforge/congress-specs/blob/master/specs/template.rst spec template]. <br />
## Push your changes to the congress-specs repo, a process that is explained [https://wiki.openstack.org/wiki/Gerrit_Workflow here]<br />
# Go back to your blueprint and add a link (e.g. https://github.com/stackforge/congress-specs/tree/master/specs/kilo/your-spec-here.rst) to your spec in the Specification Location field on the Blueprint details. <br />
# Participate in the discussion and refinement of your feature via [https://review.openstack.org/#/q/status:open+project:stackforge/congress-specs,n,z Gerrit reviews]<br />
# Eventually, core reviewers will either reject or approve your spec<br />
# If the spec is approved, a core reviewer will merge it into the repo and approve your blueprint. You then copy the finalized details from your spec into your blueprint.<br />
# If your blueprint is either not approved or not implemented during a release cycle, it must be resubmitted for the next cycle.<br />
<br />
== Incubation Plan ==<br />
The official TC governance is here https://wiki.openstack.org/wiki/Governance/Approved/Incubation<br />
Below are the incubation requirements from the TC, taken from https://github.com/openstack/governance/blob/master/reference/incubation-integration-requirements.rst as of 3 June 2014. This RST is still under development. Our incubation proposal will likely be debated within the TC in an etherpad.<br />
<br />
The TC will evaluate the project scope and its complementarity with existing integrated projects and other official programs, look into the project technical choices, and check a number of requirements, including (but not limited to):<br />
<br />
'''Scope'''<br />
<br />
Project must have a clear and defined scope.<br />
<br />
Project's scope should represent a measured progression for OpenStack as a whole.<br />
<br />
Project should not inadvertently duplicate functionality present in other OpenStack projects. If it does, it should have a clear plan and timeframe to prevent long-term scope duplication.<br />
<br />
Project should leverage existing functionality in other OpenStack projects as much as possible<br />
<br />
<br />
'''Maturity'''<br />
<br />
Project should have an active team of contributors<br />
<br />
Project should not have a major architectural rewrite planned<br />
<br />
<br />
'''Process'''<br />
<br />
Project must be hosted under stackforge (and therefore use git as its VCS)<br />
<br />
Project must obey OpenStack coordinated project interface (such as tox, pbr, global-requirements...)<br />
<br />
Project should use oslo libraries or oslo-incubator where appropriate<br />
<br />
If project is not part of an existing program, it needs to file for a new program concurrently with the Incubation request, and fill the corresponding requirements.<br />
<br />
Project must have a well-defined core review team, with reviews distributed amongst the team (and not being primarily done by one person)<br />
<br />
Reviews should follow the same criteria as OpenStack projects (2 +2s before +A)<br />
<br />
Project should use the official openstack lists for discussion<br />
<br />
<br />
<br />
'''API'''<br />
<br />
Project APIs should be reasonably stable<br />
<br />
Project must have a REST API with at least a JSON entity representation<br />
<br />
Project must have a Python client library API for its REST API<br />
<br />
<br />
'''QA'''<br />
<br />
Project must have a basic devstack-gate job set up<br />
<br />
'''Documentation / User support'''<br />
<br />
Project must have docs for developers who want to contribute to the project<br />
<br />
Project should have API documentation for devs who want to add to the API, updated when the code is updated<br />
<br />
<br />
'''Legal requirements'''<br />
<br />
Project must be licensed under the Apache License v2<br />
<br />
Project must have no library dependencies which effectively restrict how the project may be distributed or deployed [1]<br />
<br />
All contributors to the project must have signed the CLA<br />
<br />
Project must have no known trademark issues [2]</div>Thinrichshttps://wiki.openstack.org/w/index.php?title=Congress&diff=67576Congress2014-11-10T21:12:54Z<p>Thinrichs: /* Policy as a service ("Congress") */</p>
<hr />
<div> [https://etherpad.openstack.org/p/par-kilo-congress-design-session Kilo design session etherpad]<br />
<br />
== Policy as a service ("Congress") ==<br />
Here are some external resources with detailed information about Congress.<br />
<br />
* Code<br />
** Server<br />
*** [https://github.com/stackforge/congress Server source code (github)] <br />
*** [https://launchpad.net/congress Server bugs (Launchpad)]<br />
*** [https://review.openstack.org/#/q/status:open+project:stackforge/congress,n,z Outstanding server changes (Gerrit)]<br />
** Python client<br />
*** [https://github.com/stackforge/python-congressclient Python client source code (github)]<br />
*** [https://launchpad.net/python-congressclient Python client bugs (Launchpad)]<br />
*** [https://review.openstack.org/#/q/status:open+project:stackforge/python-congressclient,n,z Outstanding python client changes (Gerrit)]<br />
** [https://github.com/stackforge/congress-specs Specs: suggestions for additional features for both server and client (github)]<br />
<br />
* Design docs<br />
** [https://docs.google.com/a/vmware.com/document/d/1f2xokl9Tc47aV67KEua0PBRz4jsdSDLXth7dYe-jz6Q/edit Overall design doc]<br />
** [https://docs.google.com/document/d/1K9RkQuBSPN7Z2TmKfok7mw3E24otEGo8Pnsemxd5544/edit Data integration design doc]<br />
** [https://goo.gl/1E5MeY API design doc]<br />
<br />
*Meetings<br />
** [https://wiki.openstack.org/wiki/Meetings/Congress Meeting times]<br />
** [http://eavesdrop.openstack.org/meetings/congressteammeeting/2014/ Meeting history]<br />
** [https://www.openstack.org/assets/presentation-media/Congress-OpenStack-Atlanta-2014.pdf Slides from Atlanta summit]<br />
** [https://etherpad.openstack.org/p/juno-congress Notes from Atlanta summit]<br />
<br />
<br />
If you'd like more information, join #congress on freenode, or contact Tim Hinrichs (thinrichs@vmware.com) and Peter Balland (pballand@vmware.com).<br />
<br />
=== Mission ===<br />
Congress is an OpenStack project to provide policy as a service across any collection of cloud services in order to offer governance and compliance for dynamic infrastructures.<br />
<br />
=== Why Congress===<br />
IT services will always be governed and brought into compliance with business-level policies. <br />
<br />
In the past, policy was enforced manually, e.g. by someone sending an email asking for an application to be added to the network, secured by specific firewall entries, connected to an agreed-on storage, and so on. In the cloud era, IT has become more agile: users expect immediate delivery of services, a level of responsiveness that is unattainable by the team responsible for governance. Hence, manual enforcement is no longer feasible. <br />
<br />
Both enterprises and vendors have fielded engines for enforcing policy (semi)-automatically, creating a fragmented market where enterprises reinvent the wheel while maintaining their own code, and vendors fail to meet enterprise needs, either for technical reasons or because their solutions require vertical integration and lock-in. <br />
<br />
The Congress policy service enables IT services to extend their OpenStack footprint by onboarding new applications while keeping the strong compliance and governance dictated by their own business policies. All of this leverages a community-driven implementation into which vendors can plug via a common interface.<br />
<br />
=== What is Congress ===<br />
Congress aims to provide an extensible open-source framework for governance and regulatory compliance across any cloud services (e.g. application, network, compute and storage) within a dynamic infrastructure. It is a cloud service whose sole responsibility is policy enforcement.<br />
<br />
Congress aims to include the following functionality:<br />
* Allow cloud administrators and tenants to use a high-level, general purpose, declarative language to describe business logic. The policy language does not include a fixed collection of policy types or built-in enforcement mechanisms; rather, a policy simply defines which states of the cloud are in compliance and which are not, where the state of the cloud is the collection of data provided by the cloud services available to Congress. Some examples:<br />
** Application A is only allowed to communicate with application B.<br />
** Virtual machine owned by tenant A should always have a public network connection if tenant A is part of the group B.<br />
** Virtual machine A should never be provisioned in a different geographic region than storage B.<br />
* Offer a pluggable architecture that connects to any collection of cloud services<br />
* Enforce policy<br />
** Proactively: preventing violations before they occur<br />
** Reactively: correcting violations after they occur<br />
** Interactively: giving administrators insight into policy and its violations, e.g. identifying violations, explaining their causes, computing potential remediations, and simulating a sequence of changes.<br />
<br />
=== Policy Language ===<br />
The policy language for Congress is Datalog, which is essentially SQL with a syntax closer to that of traditional programming languages. This declarative language was chosen because its semantics are well known to a broad range of DevOps practitioners, yet its syntax is terser, making it better suited for expressing real-world policies. The grammar is given below, followed by a small example.<br />
<br />
<policy> ::= <rule>*<br /><br />
<rule> ::= <atom> COLONMINUS <literal> (COMMA <literal>)*<br /><br />
<literal> ::= <atom> <br /><br />
<literal> ::= NOT <atom> <br /><br />
<atom> ::= TABLENAME LPAREN <term> (COMMA <term>)* RPAREN <br /><br />
<term> ::= INTEGER | FLOAT | STRING | VARIABLE <br /><br />
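<br />
For example, here is a minimal sketch of a single rule in this grammar. The table names (server, connected_network, public_network) are hypothetical and are not taken from any actual Congress datasource schema; the rule simply flags every server connected to a non-public network:<br />
<br />
error(vm) :- server(vm), connected_network(vm, net), not public_network(net)<br />
<br />
Here error, server, connected_network, and public_network are TABLENAMEs, vm and net are VARIABLEs, and "not public_network(net)" is a negated literal.<br />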
<br />
=== Use Cases ===<br />
<br />
Detailed use cases can be viewed in the Google doc here: [https://goo.gl/RM3W6W Congress Use Cases]. We are currently migrating these use cases to follow the [[#How_To_Create_a_Blueprint|Blueprint / Specs workflow]].<br />
<br />
Below we list use cases and the Congress release at which we hope to support each one; a Datalog sketch of the first use case follows the table. Please feel free to add additional use cases that you would like Congress to support.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Use case !! Congress target release<br />
|-<br />
| Every network connected to a VM must either be public or owned by someone in the same group as the VM's owner || alpha<br />
|-<br />
| Define a group hierarchy for servers where base groups are defined in LDAP. When base-level group change occurs, update network ACLs. || alpha<br />
|-<br />
| Every time a vulnerability scanner reports a vulnerability, disconnect the VM from the network, patch the vulnerability, and reconnect. || beta<br />
|-<br />
| Express a rule to determine if [[Solum]] user may advance an assembly from one [[Solum/Environments|Environment]] to another. See adrian_otto on IRC for details. || TBD<br />
|}<br />
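<br />
As an illustration, the first use case above could be sketched in Datalog roughly as follows. This is only a sketch: the table names (vm_network, public_network, network_owner, vm_owner, group_member) are hypothetical and would in practice be supplied by Congress datasource drivers such as Neutron, Nova, and Keystone or LDAP:<br />
<br />
error(vm) :- vm_network(vm, net), not public_network(net), network_owner(net, owner1), vm_owner(vm, owner2), not same_group(owner1, owner2)<br />
same_group(x, y) :- group_member(x, g), group_member(y, g)<br />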
<br />
=== Roadmap ===<br />
Here is a list of features that would be useful in future releases. We have not prioritized or assigned any of these, so if any of you developers think they sound fun, let us know and we can explain more.<br />
<br />
* Basic Policy language implementation<br />
** Multi-threaded Datalog implementation<br />
** Bottom-up datalog evaluation<br />
** Query optimization<br />
** Materialized implementation: automated view selection/inlining<br />
<br />
* Enhanced Policy language<br />
** ActiveDirectory facade<br />
** Syntax improvements (modals like insert/delete)<br />
** Support for compound objects (safe, stratified recursion)<br />
** Richer support for describing actions in Enforcement policy<br />
** Modules<br />
<br />
* Policy structure<br />
** Multi-tenant<br />
** Multi-stakeholder (tenants, finance, operations, etc.)<br />
<br />
* Enforcement<br />
** Execution of (rich) actions<br />
** Carve out subpolicies and push to other components, e.g. Neutron<br />
** Add consultation with Congress to other OS components, e.g. Nova/Neutron<br />
** Proper remediation enumeration with Classification+Action policies<br />
** Find ways of automatically choosing the proper remediation strategy (e.g. priorities/monotonicity)<br />
** Give the cloud owner a way of configuring how proactive/reactive/etc. enforcement should be, based on information from separate policies.<br />
<br />
* Libraries<br />
** Data source drivers for common OS and non-OS components<br />
** HIPAA, etc. encoding<br />
** Ontologies for different sectors, e.g. finance<br />
<br />
* Policy Analysis<br />
** Look for infinite loops through Enforcement policy (using Action policy)<br />
** Compare Access control policy and Classification policy for redundancy<br />
** Change impact analysis<br />
<br />
* Dashboard<br />
** IDE for policy (different levels: raw-Datalog, AD, checkbox-based)<br />
** List violations<br />
** Explain violations (step-by-step tracing through policy)<br />
** Simulate state change and action execution<br />
** Enumerate remediations for a given violation<br />
<br />
* Architecture and API<br />
** Formalize and implement full introspection and query APIs<br />
** Distribute across multiple nodes<br />
** Ensure Congress can use another Congress instance as data<br />
<br />
* Authentication and Access Control<br />
** Add authentication and access controls on API/dashboard<br />
** When remediating, which user(s) are executing actions? Does Congress need admin credentials for all its cloud services or are user credentials part of actions? Need proper storage for those credentials. Etc.<br />
<br />
=== Relationship to Other Projects ===<br />
<br />
'''Related OpenStack components'''<br />
* [https://wiki.openstack.org/wiki/Keystone Keystone]: Keystone is an identity service providing authentication and high-level authorization for OpenStack. Congress can leverage Keystone as an input for policies. For example, an auditor might want to ensure that the running system is consistent with current Keystone authorization decisions.<br />
<br />
* [https://wiki.openstack.org/wiki/Heat Heat]: Heat is an orchestration service for application provisioning and lifecycle management. Congress can ensure that applications managed by Heat are consistent with business policy.<br />
<br />
* [https://wiki.openstack.org/wiki/Mistral Mistral]: Mistral is a task management service, or in other words Workflow as a service. Its primary use cases include Cron-for-the-cloud, execution of long-running tasks, and big data analysis. Congress could potentially utilize Mistral to execute actions that bring the cloud back into policy compliance.<br />
<br />
'''Policy Initiatives'''<br />
<br />
* [https://blueprints.launchpad.net/nova/+spec/solver-scheduler SolverScheduler (Nova blueprint)]: The SolverScheduler provides an interface for using different constraint solvers to solve placement problems for Nova. Depending on how it is applied, it could be used for initial provisioning, re-balancing loads, or both.<br />
<br />
* [https://github.com/openstack/gantt Gantt]: A scheduler framework for use by different OpenStack components. Used to be a [https://wiki.openstack.org/wiki/Meetings/Scheduler subgroup of Nova] and focused on scheduling VMs based on resource utilization. Includes plugin framework for making arbitrary metrics available to the scheduler.<br />
<br />
* [https://docs.google.com/document/d/1ZbOFxAoibZbJmDWx1oOrOsDcov6Cuom5aaBIrupCD9E/edit?pli=1 Neutron policy group]: This group aims to add a policy API to Neutron, where tenants express policy between groups of networks, ports, etc., and that policy is enforced. Policy statements are of the form "for every network flow between groups A and B that satisfies these conditions, apply a constraint on that flow". The constraints that can be enforced on a flow will grow as the enforcement engine matures; currently, the constraints are Allow and Deny, but there are plans for quality-of-service constraints as well. <br />
<br />
* [https://wiki.openstack.org/wiki/OpenAttestation Open Attestation]: This project provides an SDK for verifying host integrity. It provides some policy-based management capabilities, though documentation is limited.<br />
<br />
* [https://blueprints.launchpad.net/nova/+spec/policy-based-scheduler Policy-based Scheduling Module (Nova blueprint)]: This effort aims to schedule Nova resources per client, per cluster of resources and per context (e.g. overload, time, etc.). A proof of concept was presented at the Demo Theater at OpenStack Juno Summit. <br />
<br />
* [https://docs.google.com/document/d/1DMsnGxQ3P-OwZCF3uxaUeEFaKX8LqUqmmgQ_7EVK7Y8/edit Tetris]: This effort provides condition-action policies (under certain conditions, execute these actions). It is intended to be a generic condition-action engine handling complex actions and optimization. This effort subsumes the [https://blueprints.launchpad.net/nova/+spec/resource-optimization-service Runtime Policies blueprint] within Nova. It also appears to subsume the [http://openstack-neat.org/ Neat] effort. Tetris and Congress have recently decided to merge because of their highly aligned goals and approaches.<br />
<br />
* [https://review.openstack.org/#/c/95907/6/specs/convergence.rst Convergence Engine (Heat spec)]: This effort separates the ideas of desired state and observed state for the objects Heat manages. The Convergence Engine will detect when the desired state and observed state differ and take action to eliminate those differences.<br />
<br />
* [http://docs.openstack.org/developer/swift/overview_policies.html Swift Storage Policies]: A Swift storage policy describes a virtual storage system that Swift implements with physical devices. Today each policy dictates how many partitions the storage system has, how many replicas of each object it should maintain, and the minimum amount of time before a partition can be moved to a different physical location since the last time it was moved.<br />
<br />
* [https://wiki.openstack.org/wiki/Graffiti Graffiti]: Graffiti aims to store and query (hierarchical) metadata about OpenStack objects, e.g. tagging a Glance image with the software installed on that image. Currently, the team is working within other OpenStack projects to add user interfaces for people to create and query metadata and to store that metadata within the project's database. This project is about creating metadata, which could be useful for writing business policies, not about policies over that metadata.<br />
<br />
<br />
Congress is complementary to all of these efforts. It is intended to be a general-purpose policy component and hence could (potentially) express any of the policies described above. However, to enforce those policies Congress would carve off subpolicies and send them to the appropriate enforcement point in the cloud. For example, Congress might carve off the compute Load Balancing policy and send it to the Runtime Policy engine and/or carve off the networking policy and send it to the Neutron policy engine. Part of the goal of Congress is to give administrators a single place to write and inspect the policy being enforced throughout the datacenter or cloud, distribute the relevant portions of that policy to all of the available enforcement points, and monitor the state of the cloud to let administrators know if the cloud is in compliance. (Congress also attempts to correct policy violations when they occur, but optimization policies such as many of those addressed above will not be enforceable by Congress directly.)<br />
<br />
<br />
[Thanks to Jay Lau, Gokul Kandiraju, and others on the mailing list for help compiling this list.]<br />
<br />
== How To Propose a New Feature ==<br />
Proposing a new feature requires you to create two descriptions of that feature: a blueprint and a spec. Before code that implements a new feature is merged, its blueprint must be approved. A blueprint is approved once its spec is approved. A spec is a proxy for a blueprint that everyone in the community can comment on and help refine. (If Launchpad, the software OpenStack uses for managing blueprints and bugs, made it possible for the community to add comments to a blueprint, we would not need specs at all.)<br />
# Create a [https://blueprints.launchpad.net/congress/+addspec blueprint] briefly describing the feature. Include just the name, title, and summary.<br />
# Create an additional description of your feature, called a ''spec'', and add it to the congress-specs repo. <br />
## Checkout the [https://github.com/stackforge/congress-specs congress-specs repo].<br />
## Create a new file, whose name is similar to the blueprint name, in the current release folder, e.g. congress-specs/specs/kilo/your-spec-here.rst<br />
## Use reStructuredText (RST) (an [http://sphinx-doc.org/rest.html RST tutorial]) to describe your new feature by filling out the [https://github.com/stackforge/congress-specs/blob/master/specs/template.rst spec template]. <br />
## Push your changes to the congress-specs repo, a process that is explained [https://wiki.openstack.org/wiki/Gerrit_Workflow here]<br />
# Go back to your blueprint and add a link (e.g. https://github.com/stackforge/congress-specs/tree/master/specs/kilo/your-spec-here.rst) to your spec in the Specification Location field on the Blueprint details. <br />
# Participate in the discussion and refinement of your feature via [https://review.openstack.org/#/q/status:open+project:stackforge/congress-specs,n,z Gerrit reviews]<br />
# Eventually, core reviewers will either reject or approve your spec<br />
# If the spec is approved, a core reviewer will merge it into the repo and approve your blueprint. You then copy the finalized details from your spec into your blueprint.<br />
# If your blueprint is either not approved or not implemented during a release cycle, it must be resubmitted for the next cycle.<br />
<br />
== Incubation Plan ==<br />
The official TC governance process is described here: https://wiki.openstack.org/wiki/Governance/Approved/Incubation<br />
Below are the incubation requirements from the TC, taken from https://github.com/openstack/governance/blob/master/reference/incubation-integration-requirements.rst as of 03jun2014. That RST is still under development; our incubation proposal will likely be debated within the TC in an etherpad.<br />
<br />
The TC will evaluate the project scope and its complementarity with existing integrated projects and other official programs, look into the project technical choices, and check a number of requirements, including (but not limited to):<br />
<br />
'''Scope'''<br />
<br />
Project must have a clear and defined scope.<br />
<br />
Project's scope should represent a measured progression for OpenStack as a whole.<br />
<br />
Project should not inadvertently duplicate functionality present in other OpenStack projects. If they do, they should have a clear plan and timeframe to prevent long-term scope duplication.<br />
<br />
Project should leverage existing functionality in other OpenStack projects as much as possible<br />
<br />
<br />
'''Maturity'''<br />
<br />
Project should have an active team of contributors<br />
<br />
Project should not have a major architectural rewrite planned<br />
<br />
<br />
'''Process'''<br />
<br />
Project must be hosted under stackforge (and therefore use git as its VCS)<br />
<br />
Project must obey OpenStack coordinated project interface (such as tox, pbr, global-requirements...)<br />
<br />
Project should use oslo libraries or oslo-incubator where appropriate<br />
<br />
If project is not part of an existing program, it needs to file for a new program concurrently with the Incubation request, and fill the corresponding requirements.<br />
<br />
Project must have a well-defined core review team, with reviews distributed amongst the team (and not being primarily done by one person)<br />
<br />
Reviews should follow the same criteria as OpenStack projects (2 +2s before +A)<br />
<br />
Project should use the official openstack lists for discussion<br />
<br />
<br />
<br />
'''API'''<br />
<br />
Project APIs should be reasonably stable<br />
<br />
Project must have a REST API with at least a JSON entity representation<br />
<br />
Project must have a Python client library API for its REST API<br />
<br />
<br />
'''QA'''<br />
<br />
Project must have a basic devstack-gate job set up<br />
<br />
'''Documentation / User support'''<br />
<br />
Project must have docs for developers who want to contribute to the project<br />
<br />
Project should have API documentation for devs who want to add to the API, updated when the code is updated<br />
<br />
<br />
'''Legal requirements'''<br />
<br />
Project must be licensed under the Apache License v2<br />
<br />
Project must have no library dependencies which effectively restrict how the project may be distributed or deployed [1]<br />
<br />
All contributors to the project must have signed the CLA<br />
<br />
Project must have no known trademark issues [2]</div>Thinrichshttps://wiki.openstack.org/w/index.php?title=Congress&diff=67078Congress2014-10-30T20:27:37Z<p>Thinrichs: /* Policy as a service ("Congress") */</p>
<hr />
<div>== Policy as a service ("Congress") ==<br />
Join us at our design session at the OpenStack summit in Paris: Tuesday, 04 November, 16:40 - 18:10, room 124/125.<br />
<br />
Here are some external resources with detailed information about Congress.<br />
<br />
* Code<br />
** Server<br />
*** [https://github.com/stackforge/congress Server source code (github)] <br />
*** [https://launchpad.net/congress Server bugs (Launchpad)]<br />
*** [https://review.openstack.org/#/q/status:open+project:stackforge/congress,n,z Outstanding server changes (Gerrit)]<br />
** Python client<br />
*** [https://github.com/stackforge/python-congressclient Python client source code (github)]<br />
*** [https://launchpad.net/python-congressclient Python client bugs (Launchpad)]<br />
*** [https://review.openstack.org/#/q/status:open+project:stackforge/python-congressclient,n,z Outstanding python client changes (Gerrit)]<br />
** [https://github.com/stackforge/congress-specs Specs: suggestions for additional features for both server and client (github)]<br />
<br />
* Design docs<br />
** [https://docs.google.com/a/vmware.com/document/d/1f2xokl9Tc47aV67KEua0PBRz4jsdSDLXth7dYe-jz6Q/edit Overall design doc]<br />
** [https://docs.google.com/document/d/1K9RkQuBSPN7Z2TmKfok7mw3E24otEGo8Pnsemxd5544/edit Data integration design doc]<br />
** [https://goo.gl/1E5MeY API design doc]<br />
<br />
*Meetings<br />
** [https://wiki.openstack.org/wiki/Meetings/Congress Meeting times]<br />
** [http://eavesdrop.openstack.org/meetings/congressteammeeting/2014/ Meeting history]<br />
** [https://www.openstack.org/assets/presentation-media/Congress-OpenStack-Atlanta-2014.pdf Slides from Atlanta summit]<br />
** [https://etherpad.openstack.org/p/juno-congress Notes from Atlanta summit]<br />
<br />
<br />
If you'd like more information, join #congress on freenode, or contact Tim Hinrichs (thinrichs@vmware.com) and Peter Balland (pballand@vmware.com).<br />
<br />
=== Mission ===<br />
Congress is an OpenStack project to provide policy as a service across any collection of cloud services in order to offer governance and compliance for dynamic infrastructures.<br />
<br />
=== Why Congress===<br />
IT services will always be governed and brought into compliance with business-level policies. <br />
<br />
In the past, policy was enforced manually, e.g. by someone sending an email asking for an application to be added to the network, secured by specific firewall entries, connected to an agreed-on storage, and so on. In the cloud era, IT has become more agile: users expect immediate delivery of services, a level of responsiveness that is unattainable by the team responsible for governance. Hence, manual enforcement is no longer feasible. <br />
<br />
Both enterprises and vendors have fielded engines for enforcing policy (semi)-automatically, creating a fragmented market where enterprises reinvent the wheel while maintaining their own code, and vendors fail to meet enterprise needs, either for technical reasons or because their solutions require vertical integration and lock-in. <br />
<br />
The Congress policy service enables IT services to extend their OpenStack footprint by onboarding new applications while keeping the strong compliance and governance dictated by their own business policies. All of this leverages a community-driven implementation into which vendors can plug via a common interface.<br />
<br />
=== What is Congress ===<br />
Congress aims to provide an extensible open-source framework for governance and regulatory compliance across any cloud services (e.g. application, network, compute and storage) within a dynamic infrastructure. It is a cloud service whose sole responsibility is policy enforcement.<br />
<br />
Congress aims to include the following functionality:<br />
* Allow cloud administrators and tenants to use a high-level, general purpose, declarative language to describe business logic. The policy language does not include a fixed collection of policy types or built-in enforcement mechanisms; rather, a policy simply defines which states of the cloud are in compliance and which are not, where the state of the cloud is the collection of data provided by the cloud services available to Congress. Some examples:<br />
** Application A is only allowed to communicate with application B.<br />
** Virtual machine owned by tenant A should always have a public network connection if tenant A is part of the group B.<br />
** Virtual machine A should never be provisioned in a different geographic region than storage B.<br />
* Offer a pluggable architecture that connects to any collection of cloud services<br />
* Enforce policy<br />
** Proactively: preventing violations before they occur<br />
** Reactively: correcting violations after they occur<br />
** Interactively: giving administrators insight into policy and its violations, e.g. identifying violations, explaining their causes, computing potential remediations, and simulating a sequence of changes.<br />
<br />
=== Policy Language ===<br />
The policy language for Congress is Datalog, which is essentially SQL with a syntax closer to that of traditional programming languages. This declarative language was chosen because its semantics are well known to a broad range of DevOps practitioners, yet its syntax is terser, making it better suited for expressing real-world policies. The grammar is given below, followed by a small example.<br />
<br />
<policy> ::= <rule>*<br /><br />
<rule> ::= <atom> COLONMINUS <literal> (COMMA <literal>)*<br /><br />
<literal> ::= <atom> <br /><br />
<literal> ::= NOT <atom> <br /><br />
<atom> ::= TABLENAME LPAREN <term> (COMMA <term>)* RPAREN <br /><br />
<term> ::= INTEGER | FLOAT | STRING | VARIABLE <br /><br />
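<br />
For example, the policy "application A is only allowed to communicate with application B" might be sketched as follows. This is a rough illustration only; the table names (communicates_with, app) are hypothetical and not actual Congress datasource tables:<br />
<br />
error(x, y) :- communicates_with(x, y), app(x, "A"), not app(y, "B")<br />
<br />
Here the rule flags any flow from an instance of application A to something that is not an instance of application B.<br />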
<br />
=== Use Cases ===<br />
<br />
Detailed use cases can be viewed in the Google doc here: [https://goo.gl/RM3W6W Congress Use Cases]. We are currently migrating these use cases to follow the [[#How_To_Create_a_Blueprint|Blueprint / Specs workflow]].<br />
<br />
Below we list use cases and the Congress release at which we hope to support each one; a Datalog sketch of the vulnerability-scanner use case follows the table. Please feel free to add additional use cases that you would like Congress to support.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Use case !! Congress target release<br />
|-<br />
| Every network connected to a VM must either be public or owned by someone in the same group as the VM's owner || alpha<br />
|-<br />
| Define a group hierarchy for servers where base groups are defined in LDAP. When base-level group change occurs, update network ACLs. || alpha<br />
|-<br />
| Every time a vulnerability scanner reports a vulnerability, disconnect the VM from the network, patch the vulnerability, and reconnect. || beta<br />
|-<br />
| Express a rule to determine if [[Solum]] user may advance an assembly from one [[Solum/Environments|Environment]] to another. See adrian_otto on IRC for details. || TBD<br />
|}<br />
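<br />
For instance, the vulnerability-scanner use case above might begin with a rule like the following. This is a sketch only: scanner_vulnerability and vm_network are hypothetical table names, and the disconnect/patch/reconnect steps would be carried out by Congress's enforcement machinery rather than by the rule itself, which merely identifies the violation:<br />
<br />
error(vm) :- scanner_vulnerability(vm), vm_network(vm, net)<br />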
<br />
=== Roadmap ===<br />
Here is a list of features that would be useful in future releases. We have not prioritized or assigned any of these, so if any of you developers think they sound fun, let us know and we can explain more.<br />
<br />
* Basic Policy language implementation<br />
** Multi-threaded Datalog implementation<br />
** Bottom-up datalog evaluation<br />
** Query optimization<br />
** Materialized implementation: automated view selection/inlining<br />
<br />
* Enhanced Policy language<br />
** ActiveDirectory facade<br />
** Syntax improvements (modals like insert/delete)<br />
** Support for compound objects (safe, stratified recursion)<br />
** Richer support for describing actions in Enforcement policy<br />
** Modules<br />
<br />
* Policy structure<br />
** Multi-tenant<br />
** Multi-stakeholder (tenants, finance, operations, etc.)<br />
<br />
* Enforcement<br />
** Execution of (rich) actions<br />
** Carve out subpolicies and push to other components, e.g. Neutron<br />
** Add consultation with Congress to other OS components, e.g. Nova/Neutron<br />
** Proper remediation enumeration with Classification+Action policies<br />
** Find ways of automatically choosing the proper remediation strategy (e.g. priorities/monotonicity)<br />
** Give the cloud owner a way of configuring how proactive/reactive/etc. enforcement should be, based on information from separate policies.<br />
<br />
* Libraries<br />
** Data source drivers for common OS and non-OS components<br />
** HIPAA, etc. encoding<br />
** Ontologies for different sectors, e.g. finance<br />
<br />
* Policy Analysis<br />
** Look for infinite loops through Enforcement policy (using Action policy)<br />
** Compare Access control policy and Classification policy for redundancy<br />
** Change impact analysis<br />
<br />
* Dashboard<br />
** IDE for policy (different levels: raw-Datalog, AD, checkbox-based)<br />
** List violations<br />
** Explain violations (step-by-step tracing through policy)<br />
** Simulate state change and action execution<br />
** Enumerate remediations for a given violation<br />
<br />
* Architecture and API<br />
** Formalize and implement full introspection and query APIs<br />
** Distribute across multiple nodes<br />
** Ensure Congress can use another Congress instance as data<br />
<br />
* Authentication and Access Control<br />
** Add authentication and access controls on API/dashboard<br />
** When remediating, which user(s) are executing actions? Does Congress need admin credentials for all its cloud services or are user credentials part of actions? Need proper storage for those credentials. Etc.<br />
<br />
=== Relationship to Other Projects ===<br />
<br />
'''Related OpenStack components'''<br />
* [https://wiki.openstack.org/wiki/Keystone Keystone]: Keystone is an identity service providing authentication and high-level authorization for OpenStack. Congress can leverage Keystone as an input for policies. For example, an auditor might want to ensure that the running system is consistent with current Keystone authorization decisions.<br />
<br />
* [https://wiki.openstack.org/wiki/Heat Heat]: Heat is an orchestration service for application provisioning and lifecycle management. Congress can ensure that applications managed by Heat are consistent with business policy.<br />
<br />
* [https://wiki.openstack.org/wiki/Mistral Mistral]: Mistral is a task management service, or in other words Workflow as a service. Its primary use cases include Cron-for-the-cloud, execution of long-running tasks, and big data analysis. Congress could potentially utilize Mistral to execute actions that bring the cloud back into policy compliance.<br />
<br />
'''Policy Initiatives'''<br />
<br />
* [https://blueprints.launchpad.net/nova/+spec/solver-scheduler SolverScheduler (Nova blueprint)]: The SolverScheduler provides an interface for using different constraint solvers to solve placement problems for Nova. Depending on how it is applied, it could be used for initial provisioning, re-balancing loads, or both.<br />
<br />
* [https://github.com/openstack/gantt Gantt]: A scheduler framework for use by different OpenStack components. Used to be a [https://wiki.openstack.org/wiki/Meetings/Scheduler subgroup of Nova] and focused on scheduling VMs based on resource utilization. Includes plugin framework for making arbitrary metrics available to the scheduler.<br />
<br />
* [https://docs.google.com/document/d/1ZbOFxAoibZbJmDWx1oOrOsDcov6Cuom5aaBIrupCD9E/edit?pli=1 Neutron policy group]: This group aims to add a policy API to Neutron, where tenants express policy between groups of networks, ports, etc., and that policy is enforced. Policy statements are of the form "for every network flow between groups A and B that satisfies these conditions, apply a constraint on that flow". The constraints that can be enforced on a flow will grow as the enforcement engine matures; currently, the constraints are Allow and Deny, but there are plans for quality-of-service constraints as well. <br />
<br />
* [https://wiki.openstack.org/wiki/OpenAttestation Open Attestation]: This project provides an SDK for verifying host integrity. It provides some policy-based management capabilities, though documentation is limited.<br />
<br />
* [https://blueprints.launchpad.net/nova/+spec/policy-based-scheduler Policy-based Scheduling Module (Nova blueprint)]: This effort aims to schedule Nova resources per client, per cluster of resources and per context (e.g. overload, time, etc.). A proof of concept was presented at the Demo Theater at OpenStack Juno Summit. <br />
<br />
* [https://docs.google.com/document/d/1DMsnGxQ3P-OwZCF3uxaUeEFaKX8LqUqmmgQ_7EVK7Y8/edit Tetris]: This effort provides condition-action policies (under certain conditions, execute these actions). It is intended to be a generic condition-action engine handling complex actions and optimization. This effort subsumes the [https://blueprints.launchpad.net/nova/+spec/resource-optimization-service Runtime Policies blueprint] within Nova. It also appears to subsume the [http://openstack-neat.org/ Neat] effort. Tetris and Congress have recently decided to merge because of their highly aligned goals and approaches.<br />
<br />
* [https://review.openstack.org/#/c/95907/6/specs/convergence.rst Convergence Engine (Heat spec)]: This effort separates the ideas of desired state and observed state for the objects Heat manages. The Convergence Engine will detect when the desired state and observed state differ and take action to eliminate those differences.<br />
<br />
* [http://docs.openstack.org/developer/swift/overview_policies.html Swift Storage Policies]: A Swift storage policy describes a virtual storage system that Swift implements with physical devices. Today each policy dictates how many partitions the storage system has, how many replicas of each object it should maintain, and the minimum amount of time before a partition can be moved to a different physical location since the last time it was moved.<br />
<br />
* [https://wiki.openstack.org/wiki/Graffiti Graffiti]: Graffiti aims to store and query (hierarchical) metadata about OpenStack objects, e.g. tagging a Glance image with the software installed on that image. Currently, the team is working within other OpenStack projects to add user interfaces for people to create and query metadata and to store that metadata within the project's database. This project is about creating metadata, which could be useful for writing business policies, not about policies over that metadata.<br />
<br />
<br />
Congress is complementary to all of these efforts. It is intended to be a general-purpose policy component and hence could (potentially) express any of the policies described above. However, to enforce those policies Congress would carve off subpolicies and send them to the appropriate enforcement point in the cloud. For example, Congress might carve off the compute Load Balancing policy and send it to the Runtime Policy engine and/or carve off the networking policy and send it to the Neutron policy engine. Part of the goal of Congress is to give administrators a single place to write and inspect the policy being enforced throughout the datacenter or cloud, distribute the relevant portions of that policy to all of the available enforcement points, and monitor the state of the cloud to let administrators know if the cloud is in compliance. (Congress also attempts to correct policy violations when they occur, but optimization policies such as many of those addressed above will not be enforceable by Congress directly.)<br />
<br />
<br />
[Thanks to Jay Lau, Gokul Kandiraju, and others on the mailing list for help compiling this list.]<br />
<br />
== How To Propose a New Feature ==<br />
Proposing a new feature requires you to create two descriptions of that feature: a blueprint and a spec. Before code that implements a new feature is merged, its blueprint must be approved. A blueprint is approved once its spec is approved. A spec is a proxy for a blueprint that everyone in the community can comment on and help refine. (If Launchpad, the software OpenStack uses for managing blueprints and bugs, made it possible for the community to add comments to a blueprint, we would not need specs at all.)<br />
# Create a [https://blueprints.launchpad.net/congress/+addspec blueprint] briefly describing the feature. Include just the name, title, and summary.<br />
# Create an additional description of your feature, called a ''spec'', and add it to the congress-specs repo. <br />
## Checkout the [https://github.com/stackforge/congress-specs congress-specs repo].<br />
## Create a new file, whose name is similar to the blueprint name, in the current release folder, e.g. congress-specs/specs/kilo/your-spec-here.rst<br />
## Use reStructuredText (RST) (an [http://sphinx-doc.org/rest.html RST tutorial]) to describe your new feature by filling out the [https://github.com/stackforge/congress-specs/blob/master/specs/template.rst spec template]. <br />
## Push your changes to the congress-specs repo, a process that is explained [https://wiki.openstack.org/wiki/Gerrit_Workflow here]<br />
# Go back to your blueprint and add a link (e.g. https://github.com/stackforge/congress-specs/tree/master/specs/kilo/your-spec-here.rst) to your spec in the Specification Location field on the Blueprint details. <br />
# Participate in the discussion and refinement of your feature via [https://review.openstack.org/#/q/status:open+project:stackforge/congress-specs,n,z Gerrit reviews]<br />
# Eventually, core reviewers will either reject or approve your spec<br />
# If the spec is approved, a core reviewer will merge it into the repo and approve your blueprint. You then copy the finalized details from your spec into your blueprint.<br />
# If your blueprint is either not approved or not implemented during a release cycle, it must be resubmitted for the next cycle.<br />
<br />
== Incubation Plan ==<br />
The official TC governance process is described here: https://wiki.openstack.org/wiki/Governance/Approved/Incubation<br />
Below are the incubation requirements from the TC, taken from https://github.com/openstack/governance/blob/master/reference/incubation-integration-requirements.rst as of 03jun2014. That RST is still under development; our incubation proposal will likely be debated within the TC in an etherpad.<br />
<br />
The TC will evaluate the project scope and its complementarity with existing integrated projects and other official programs, look into the project technical choices, and check a number of requirements, including (but not limited to):<br />
<br />
'''Scope'''<br />
<br />
Project must have a clear and defined scope.<br />
<br />
Project's scope should represent a measured progression for OpenStack as a whole.<br />
<br />
Project should not inadvertently duplicate functionality present in other OpenStack projects. If they do, they should have a clear plan and timeframe to prevent long-term scope duplication.<br />
<br />
Project should leverage existing functionality in other OpenStack projects as much as possible<br />
<br />
<br />
'''Maturity'''<br />
<br />
Project should have an active team of contributors<br />
<br />
Project should not have a major architectural rewrite planned<br />
<br />
<br />
'''Process'''<br />
<br />
Project must be hosted under stackforge (and therefore use git as its VCS)<br />
<br />
Project must obey OpenStack coordinated project interface (such as tox, pbr, global-requirements...)<br />
<br />
Project should use oslo libraries or oslo-incubator where appropriate<br />
<br />
If project is not part of an existing program, it needs to file for a new program concurrently with the Incubation request, and fill the corresponding requirements.<br />
<br />
Project must have a well-defined core review team, with reviews distributed amongst the team (and not being primarily done by one person)<br />
<br />
Reviews should follow the same criteria as OpenStack projects (2 +2s before +A)<br />
<br />
Project should use the official openstack lists for discussion<br />
<br />
<br />
<br />
'''API'''<br />
<br />
Project APIs should be reasonably stable<br />
<br />
Project must have a REST API with at least a JSON entity representation<br />
<br />
Project must have a Python client library API for its REST API<br />
<br />
<br />
'''QA'''<br />
<br />
Project must have a basic devstack-gate job set up<br />
<br />
'''Documentation / User support'''<br />
<br />
Project must have docs for developers who want to contribute to the project<br />
<br />
Project should have API documentation for devs who want to add to the API, updated when the code is updated<br />
<br />
<br />
'''Legal requirements'''<br />
<br />
Project must be licensed under the Apache License v2<br />
<br />
Project must have no library dependencies which effectively restrict how the project may be distributed or deployed [1]<br />
<br />
All contributors to the project must have signed the CLA<br />
<br />
Project must have no known trademark issues [2]</div>Thinrichshttps://wiki.openstack.org/w/index.php?title=Congress&diff=64948Congress2014-10-14T14:52:25Z<p>Thinrichs: /* Relationship to Other Projects */</p>
<hr />
<div>== Policy as a service ("Congress") ==<br />
Here are some external resources with detailed information about Congress.<br />
<br />
* Code<br />
** Server<br />
*** [https://github.com/stackforge/congress Server source code (github)] <br />
*** [https://launchpad.net/congress Server bugs (Launchpad)]<br />
*** [https://review.openstack.org/#/q/status:open+project:stackforge/congress,n,z Outstanding server changes (Gerrit)]<br />
** Python client<br />
*** [https://github.com/stackforge/python-congressclient Python client source code (github)]<br />
*** [https://launchpad.net/python-congressclient Python client bugs (Launchpad)]<br />
*** [https://review.openstack.org/#/q/status:open+project:stackforge/python-congressclient,n,z Outstanding python client changes (Gerrit)]<br />
** [https://github.com/stackforge/congress-specs Specs: suggestions for additional features for both server and client (github)]<br />
<br />
* Design docs<br />
** [https://docs.google.com/a/vmware.com/document/d/1f2xokl9Tc47aV67KEua0PBRz4jsdSDLXth7dYe-jz6Q/edit Overall design doc]<br />
** [https://docs.google.com/document/d/1K9RkQuBSPN7Z2TmKfok7mw3E24otEGo8Pnsemxd5544/edit Data integration design doc]<br />
** [https://goo.gl/1E5MeY API design doc]<br />
<br />
*Meetings<br />
** [https://wiki.openstack.org/wiki/Meetings/Congress Meeting times]<br />
** [http://eavesdrop.openstack.org/meetings/congressteammeeting/2014/ Meeting history]<br />
** [https://www.openstack.org/assets/presentation-media/Congress-OpenStack-Atlanta-2014.pdf Slides from Atlanta summit]<br />
** [https://etherpad.openstack.org/p/juno-congress Notes from Atlanta summit]<br />
<br />
<br />
If you'd like more information, join #congress on freenode, or contact Tim Hinrichs (thinrichs@vmware.com) and Peter Balland (pballand@vmware.com).<br />
<br />
=== Mission ===<br />
Congress is an OpenStack project to provide policy as a service across any collection of cloud services in order to offer governance and compliance for dynamic infrastructures.<br />
<br />
=== Why Congress===<br />
IT services will always be governed and brought into compliance with business-level policies. <br />
<br />
In the past, policy was enforced manually, e.g. by someone sending an email asking for an application to be added to the network, secured by specific firewall entries, connected to an agreed-on storage, and so on. In the cloud era, IT has become more agile: users expect immediate delivery of services, a level of responsiveness that is unattainable by the team responsible for governance. Hence, manual enforcement is no longer feasible. <br />
<br />
Both enterprises and vendors have fielded engines for enforcing policy (semi)-automatically, creating a fragmented market where enterprises reinvent the wheel while maintaining their own code, and vendors fail to meet enterprise needs, either for technical reasons or because their solutions require vertical integration and lock-in. <br />
<br />
The Congress policy service enables IT services to extend their OpenStack footprint by onboarding new applications while keeping the strong compliance and governance dictated by their own business policies. All of this leverages a community-driven implementation into which vendors can plug via a common interface.<br />
<br />
=== What is Congress ===<br />
Congress aims to provide an extensible open-source framework for governance and regulatory compliance across any cloud services (e.g. application, network, compute and storage) within a dynamic infrastructure. It is a cloud service whose sole responsibility is policy enforcement.<br />
<br />
Congress aims to include the following functionality:<br />
* Allow cloud administrators and tenants to use a high-level, general purpose, declarative language to describe business logic. The policy language does not include a fixed collection of policy types or built-in enforcement mechanisms; rather, a policy simply defines which states of the cloud are in compliance and which are not, where the state of the cloud is the collection of data provided by the cloud services available to Congress. Some examples:<br />
** Application A is only allowed to communicate with application B.<br />
** Virtual machine owned by tenant A should always have a public network connection if tenant A is part of the group B.<br />
** Virtual machine A should never be provisioned in a different geographic region than storage B.<br />
* Offer a pluggable architecture that connects to any collection of cloud services<br />
* Enforce policy<br />
** Proactively: preventing violations before they occur<br />
** Reactively: correcting violations after they occur<br />
** Interactively: giving administrators insight into policy and its violations, e.g. identifying violations, explaining their causes, computing potential remediations, and simulating a sequence of changes.<br />
<br />
=== Policy Language ===<br />
The policy language for Congress is Datalog, which is essentially SQL with a syntax closer to that of traditional programming languages. This declarative language was chosen because its semantics are well known to a broad range of DevOps practitioners, yet its syntax is terser, making it better suited for expressing real-world policies. The grammar is given below, followed by a small example.<br />
<br />
<policy> ::= <rule>*<br /><br />
<rule> ::= <atom> COLONMINUS <literal> (COMMA <literal>)*<br /><br />
<literal> ::= <atom> <br /><br />
<literal> ::= NOT <atom> <br /><br />
<atom> ::= TABLENAME LPAREN <term> (COMMA <term>)* RPAREN <br /><br />
<term> ::= INTEGER | FLOAT | STRING | VARIABLE <br /><br />
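<br />
For example, the policy "virtual machine A should never be provisioned in a different geographic region than storage B" could be sketched as follows. This is a rough sketch only: vm_region and storage_region are hypothetical table names rather than actual Congress datasource tables, and equal is assumed here to be an equality built-in, which the minimal grammar above does not spell out:<br />
<br />
error(vm, storage) :- vm_region(vm, region1), storage_region(storage, region2), not equal(region1, region2)<br />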
<br />
=== Use Cases ===<br />
<br />
Detailed use cases can be viewed in the Google doc here: [https://goo.gl/RM3W6W Congress Use Cases]. We are currently migrating these use cases to follow the [[#How_To_Create_a_Blueprint|Blueprint / Specs workflow]].<br />
<br />
Below we list use cases and the Congress release at which we hope to support each one; a Datalog sketch of the group-hierarchy use case follows the table. Please feel free to add additional use cases that you would like Congress to support.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Use case !! Congress target release<br />
|-<br />
| Every network connected to a VM must either be public or owned by someone in the same group as the VM's owner || alpha<br />
|-<br />
| Define a group hierarchy for servers where base groups are defined in LDAP. When base-level group change occurs, update network ACLs. || alpha<br />
|-<br />
| Every time a vulnerability scanner reports a vulnerability, disconnect the VM from the network, patch the vulnerability, and reconnect. || beta<br />
|-<br />
| Express a rule to determine if [[Solum]] user may advance an assembly from one [[Solum/Environments|Environment]] to another. See adrian_otto on IRC for details. || TBD<br />
|}<br />
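<br />
As an illustration, the group-hierarchy use case above might define the hierarchy with rules like these. This is a sketch only: ldap_group is a hypothetical table populated from an LDAP datasource, the group names are made up, and the ACL update itself would be carried out by Congress's enforcement machinery rather than by these rules:<br />
<br />
group(x, g) :- ldap_group(x, g)<br />
group(x, "all-servers") :- group(x, "web-servers")<br />
group(x, "all-servers") :- group(x, "db-servers")<br />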
<br />
=== Roadmap ===<br />
Here is a list of features that would be useful in future releases. We have not prioritized or assigned any of these, so if any of you developers think they sound fun, let us know and we can explain more.<br />
<br />
* Basic Policy language implementation<br />
** Multi-threaded Datalog implementation<br />
** Bottom-up datalog evaluation<br />
** Query optimization<br />
** Materialized implementation: automated view selection/inlining<br />
<br />
* Enhanced Policy language<br />
** ActiveDirectory facade<br />
** Syntax improvements (modals like insert/delete)<br />
** Support for compound objects (safe, stratified recursion)<br />
** Richer support for describing actions in Enforcement policy<br />
** Modules<br />
<br />
* Policy structure<br />
** Multi-tenant<br />
** Multi-stakeholder (tenants, finance, operations, etc.)<br />
<br />
* Enforcement<br />
** Execution of (rich) actions<br />
** Carve out subpolicies and push to other components, e.g. Neutron<br />
** Add consultation with Congress to other OS components, e.g. Nova/Neutron<br />
** Proper remediation enumeration with Classification+Action policies<br />
** Find ways of automatically choosing the proper remediation strategy (e.g. priorities/monotonicity)<br />
** Give the cloud owner a way of configuring how proactive/reactive/etc. enforcement should be, based on information from separate policies.<br />
<br />
* Libraries<br />
** Data source drivers for common OS and non-OS components<br />
** HIPAA, etc. encoding<br />
** Ontologies for different sectors, e.g. finance<br />
<br />
* Policy Analysis<br />
** Look for infinite loops through Enforcement policy (using Action policy)<br />
** Compare Access control policy and Classification policy for redundancy<br />
** Change impact analysis<br />
<br />
* Dashboard<br />
** IDE for policy (different levels: raw-Datalog, AD, checkbox-based)<br />
** List violations<br />
** Explain violations (step-by-step tracing through policy)<br />
** Simulate state change and action execution<br />
** Enumerate remediations for a given violation<br />
<br />
* Architecture and API<br />
** Formalize and implement full introspection and query APIs<br />
** Distribute across multiple nodes<br />
** Ensure Congress can use another Congress instance as data<br />
<br />
* Authentication and Access Control<br />
** Add authentication and access controls on API/dashboard<br />
** When remediating, which user(s) are executing actions? Does Congress need admin credentials for all its cloud services or are user credentials part of actions? Need proper storage for those credentials. Etc.<br />
<br />
=== Relationship to Other Projects ===<br />
<br />
'''Related OpenStack components'''<br />
* [https://wiki.openstack.org/wiki/Keystone Keystone]: Keystone is an identity service providing authentication and high-level authorization for OpenStack. Congress can leverage Keystone as an input for policies. For example, an auditor might want to ensure that the running system is consistent with current Keystone authorization decisions.<br />
<br />
* [https://wiki.openstack.org/wiki/Heat Heat]: Heat is an orchestration service for application provisioning and lifecycle management. Congress can ensure that applications managed by Heat are consistent with business policy.<br />
<br />
* [https://wiki.openstack.org/wiki/Mistral Mistral]: Mistral is a task management service, or in other words Workflow as a service. Its primary use cases include Cron-for-the-cloud, execution of long-running tasks, and big data analysis. Congress could potentially utilize Mistral to execute actions that bring the cloud back into policy compliance.<br />
<br />
'''Policy Initiatives'''<br />
<br />
* [https://blueprints.launchpad.net/nova/+spec/solver-scheduler SolverScheduler (Nova blueprint)]: The SolverScheduler provides an interface for using different constraint solvers to solve placement problems for Nova. Depending on how it is applied, it could be used for initial provisioning, re-balancing loads, or both.<br />
<br />
* [https://github.com/openstack/gantt Gantt]: A scheduler framework for use by different OpenStack components. Used to be a [https://wiki.openstack.org/wiki/Meetings/Scheduler subgroup of Nova] and focused on scheduling VMs based on resource utilization. Includes plugin framework for making arbitrary metrics available to the scheduler.<br />
<br />
* [https://docs.google.com/document/d/1ZbOFxAoibZbJmDWx1oOrOsDcov6Cuom5aaBIrupCD9E/edit?pli=1 Neutron policy group]: This group aims to add a policy API to Neutron, where tenants express policy between groups of networks, ports, etc., and that policy is enforced. Policy statements are of the form "for every network flow between groups A and B that satisfies these conditions, apply a constraint on that flow". The constraints that can be enforced on a flow will grow as the enforcement engine matures; currently, the constraints are Allow and Deny, but there are plans for quality-of-service constraints as well. <br />
<br />
* [https://wiki.openstack.org/wiki/OpenAttestation Open Attestation]: This project provides an SDK for verifying host integrity. It provides some policy-based management capabilities, though documentation is limited.<br />
<br />
* [https://blueprints.launchpad.net/nova/+spec/policy-based-scheduler Policy-based Scheduling Module (Nova blueprint)]: This effort aims to schedule Nova resources per client, per cluster of resources and per context (e.g. overload, time, etc.). A proof of concept was presented at the Demo Theater at OpenStack Juno Summit. <br />
<br />
* [https://docs.google.com/document/d/1DMsnGxQ3P-OwZCF3uxaUeEFaKX8LqUqmmgQ_7EVK7Y8/edit Tetris]: This effort provides condition-action policies (under certain conditions, execute these actions). It is intended to be a generic condition-action engine handling complex actions and optimization. This effort subsumes the [https://blueprints.launchpad.net/nova/+spec/resource-optimization-service Runtime Policies blueprint] within Nova. It also appears to subsume the [http://openstack-neat.org/ Neat] effort. Tetris and Congress have recently decided to merge because of their highly aligned goals and approaches.<br />
<br />
* [https://review.openstack.org/#/c/95907/6/specs/convergence.rst Convergence Engine (Heat spec)]: This effort separates the ideas of desired state and observed state for the objects Heat manages. The Convergence Engine will detect when the desired state and observed state differ and take action to eliminate those differences.<br />
<br />
* [http://docs.openstack.org/developer/swift/overview_policies.html Swift Storage Policies]: A Swift storage policy describes a virtual storage system that Swift implements with physical devices. Today each policy dictates how many partitions the storage system has, how many replicas of each object it should maintain, and the minimum amount of time that must elapse after a partition is moved before it can be moved again to a different physical location.<br />
<br />
* [https://wiki.openstack.org/wiki/Graffiti Graffiti]: Graffiti aims to store and query (hierarchical) metadata about OpenStack objects, e.g. tagging a Glance image with the software installed on that image. Currently, the team is working within other OpenStack projects to add user interfaces for people to create and query metadata and to store that metadata within the project's database. This project is about creating metadata, which could be useful for writing business policies, not about policies over that metadata.<br />
<br />
<br />
Congress is complementary to all of these efforts. It is intended to be a general-purpose policy component and hence could (potentially) express any of the policies described above. However, to enforce those policies, Congress would carve off subpolicies and send them to the appropriate enforcement point in the cloud. For example, Congress might carve off the compute Load Balancing policy and send it to the Runtime Policy engine and/or carve off the networking policy and send it to the Neutron policy engine. Part of the goal of Congress is to give administrators a single place to write and inspect the policy being enforced throughout the datacenter or cloud, distribute the relevant portions of that policy to all of the available enforcement points, and monitor the state of the cloud to let administrators know if the cloud is in compliance. (Congress also attempts to correct policy violations when they occur, but optimization policies such as many of those addressed above will not be enforceable by Congress directly.)<br />
<br />
<br />
[Thanks to Jay Lau, Gokul Kandiraju, and others on the mailing list for help compiling this list.]<br />
<br />
== How To Propose a New Feature ==<br />
Proposing a new feature requires you to create two descriptions of that feature: a blueprint and a spec. Before code that implements a new feature is merged, its blueprint must be approved. A blueprint is approved once its spec is approved. A spec is a proxy for a blueprint that everyone in the community can comment on and help refine. (If Launchpad, the software OpenStack uses for managing blueprints and bugs, made it possible for the community to add comments to a blueprint, we would not need specs at all.)<br />
# Create a [https://blueprints.launchpad.net/congress/+addspec blueprint] briefly describing the feature. Include just the name, title, and summary.<br />
# Create an additional description of your feature, called a ''spec'', and add it to the congress-specs repo. <br />
## Check out the [https://github.com/stackforge/congress-specs congress-specs repo].<br />
## Create a new file, whose name is similar to the blueprint name, in the current release folder, e.g. congress-specs/specs/kilo/your-spec-here.rst<br />
## Use reStructuredText (RST) (an [http://sphinx-doc.org/rest.html RST tutorial]) to describe your new feature by filling out the [https://github.com/stackforge/congress-specs/blob/master/specs/template.rst spec template]. <br />
## Push your changes to the congress-specs repo, a process that is explained [https://wiki.openstack.org/wiki/Gerrit_Workflow here]<br />
# Go back to your blueprint and add a link (e.g. https://github.com/stackforge/congress-specs/tree/master/specs/kilo/your-spec-here.rst) to your spec in the Specification Location field on the Blueprint details. <br />
# Participate in the discussion and refinement of your feature via [https://review.openstack.org/#/q/status:open+project:stackforge/congress-specs,n,z Gerrit reviews]<br />
# Eventually, core reviewers will either reject or approve your spec<br />
# If the spec is approved, a core reviewer will merge it into the repo and approve your blueprint. You then copy the finalized details from your spec into your blueprint.<br />
# If your blueprint is either not approved or not implemented during a release cycle, it must be resubmitted for the next cycle.<br />
<br />
== Incubation Plan ==<br />
The official TC governance process is described here: https://wiki.openstack.org/wiki/Governance/Approved/Incubation<br />
Below are the incubation requirements from the TC, taken from https://github.com/openstack/governance/blob/master/reference/incubation-integration-requirements.rst as of 03 Jun 2014. This RST is still under development. Our incubation proposal will likely be debated within the TC in an etherpad.<br />
<br />
The TC will evaluate the project scope and its complementarity with existing integrated projects and other official programs, look into the project technical choices, and check a number of requirements, including (but not limited to):<br />
<br />
'''Scope'''<br />
<br />
Project must have a clear and defined scope.<br />
<br />
Project's scope should represent a measured progression for OpenStack as a whole.<br />
<br />
Project should not inadvertently duplicate functionality present in other OpenStack projects. If they do, they should have a clear plan and timeframe to prevent long-term scope duplication.<br />
<br />
Project should leverage existing functionality in other OpenStack projects as much as possible<br />
<br />
<br />
'''Maturity'''<br />
<br />
Project should have an active team of contributors<br />
<br />
Project should not have a major architectural rewrite planned<br />
<br />
<br />
'''Process'''<br />
<br />
Project must be hosted under stackforge (and therefore use git as its VCS)<br />
<br />
Project must obey OpenStack coordinated project interface (such as tox, pbr, global-requirements...)<br />
<br />
Project should use oslo libraries or oslo-incubator where appropriate<br />
<br />
If project is not part of an existing program, it needs to file for a new program concurrently with the Incubation request, and fill the corresponding requirements.<br />
<br />
Project must have a well-defined core review team, with reviews distributed amongst the team (and not being primarily done by one person)<br />
<br />
Reviews should follow the same criteria as OpenStack projects (2 +2s before +A)<br />
<br />
Project should use the official openstack lists for discussion<br />
<br />
<br />
<br />
'''API'''<br />
<br />
Project APIs should be reasonably stable<br />
<br />
Project must have a REST API with at least a JSON entity representation<br />
<br />
Project must have a Python client library API for its REST API<br />
<br />
<br />
'''QA'''<br />
<br />
Project must have a basic devstack-gate job set up<br />
<br />
'''Documentation / User support'''<br />
<br />
Project must have docs for developers who want to contribute to the project<br />
<br />
Project should have API documentation for devs who want to add to the API, updated when the code is updated<br />
<br />
<br />
'''Legal requirements'''<br />
<br />
Project must be licensed under the Apache License v2<br />
<br />
Project must have no library dependencies which effectively restrict how the project may be distributed or deployed [1]<br />
<br />
All contributors to the project must have signed the CLA<br />
<br />
Project must have no known trademark issues [2]</div>Thinrichshttps://wiki.openstack.org/w/index.php?title=Congress&diff=64364Congress2014-10-06T16:05:21Z<p>Thinrichs: /* Policy as a service ("Congress") */</p>
<hr />
<div>== Policy as a service ("Congress") ==<br />
Here are some external resources with detailed information about Congress.<br />
<br />
* Code<br />
** Server<br />
*** [https://github.com/stackforge/congress Server source code (github)] <br />
*** [https://launchpad.net/congress Server bugs (Launchpad)]<br />
*** [https://review.openstack.org/#/q/status:open+project:stackforge/congress,n,z Outstanding server changes (Gerrit)]<br />
** Python client<br />
*** [https://github.com/stackforge/python-congressclient Python client source code (github)]<br />
*** [https://launchpad.net/python-congressclient Python client bugs (Launchpad)]<br />
*** [https://review.openstack.org/#/q/status:open+project:stackforge/python-congressclient,n,z Outstanding python client changes (Gerrit)]<br />
** [https://github.com/stackforge/congress-specs Specs: suggestions for additional features for both server and client (github)]<br />
<br />
* Design docs<br />
** [https://docs.google.com/a/vmware.com/document/d/1f2xokl9Tc47aV67KEua0PBRz4jsdSDLXth7dYe-jz6Q/edit Overall design doc]<br />
** [https://docs.google.com/document/d/1K9RkQuBSPN7Z2TmKfok7mw3E24otEGo8Pnsemxd5544/edit Data integration design doc]<br />
** [https://goo.gl/1E5MeY API design doc]<br />
<br />
*Meetings<br />
** [https://wiki.openstack.org/wiki/Meetings/Congress Meeting times]<br />
** [http://eavesdrop.openstack.org/meetings/congressteammeeting/2014/ Meeting history]<br />
** [https://www.openstack.org/assets/presentation-media/Congress-OpenStack-Atlanta-2014.pdf Slides from Atlanta summit]<br />
** [https://etherpad.openstack.org/p/juno-congress Notes from Atlanta summit]<br />
<br />
<br />
If you'd like more information, join #congress on freenode, or contact Tim Hinrichs (thinrichs@vmware.com) and Peter Balland (pballand@vmware.com).<br />
<br />
=== Mission ===<br />
Congress is an OpenStack project to provide policy as a service across any collection of cloud services in order to offer governance and compliance for dynamic infrastructures.<br />
<br />
=== Why Congress===<br />
IT services will always be governed and brought into compliance with business-level policies. <br />
<br />
In the past, policy was enforced manually, e.g. by someone sending an email asking for an application to be added to the network, secured by specific firewall entries, connected to an agreed-on storage, and so on. In the cloud era, IT has become more agile: users expect immediate delivery of services, a level of responsiveness that is unattainable by the team responsible for governance. Hence, manual enforcement is no longer feasible. <br />
<br />
Both enterprises and vendors have fielded engines for enforcing policy (semi)-automatically, creating a fragmented market where enterprises reinvent the wheel while maintaining their own code, and vendors fail to meet enterprise needs, either for technical reasons or because their solutions require vertical integration and lock-in. <br />
<br />
The Congress policy service enables IT organizations to extend their OpenStack footprint by onboarding new applications while maintaining the strong compliance and governance dictated by their own business policies. All of this is built on a community-driven implementation into which vendors can plug via a common interface.<br />
<br />
=== What is Congress ===<br />
Congress aims to provide an extensible open-source framework for governance and regulatory compliance across any cloud services (e.g. application, network, compute and storage) within a dynamic infrastructure. It is a cloud service whose sole responsibility is policy enforcement.<br />
<br />
Congress aims to include the following functionality:<br />
* Allow cloud administrators and tenants to use a high-level, general purpose, declarative language to describe business logic. The policy language does not include a fixed collection of policy types or built-in enforcement mechanisms; rather, a policy simply defines which states of the cloud are in compliance and which are not, where the state of the cloud is the collection of data provided by the cloud services available to Congress. Some examples:<br />
** Application A is only allowed to communicate with application B.<br />
** Virtual machine owned by tenant A should always have a public network connection if tenant A is part of group B.<br />
** Virtual machine A should never be provisioned in a different geographic region than storage B.<br />
* Offer a pluggable architecture that connects to any collection of cloud services<br />
* Enforce policy<br />
** Proactively: preventing violations before they occur<br />
** Reactively: correcting violations after they occur<br />
** Interactively: giving administrators insight into policy and its violations, e.g. identifying violations, explaining their causes, computing potential remediations, and simulating a sequence of changes.<br />
<br />
=== Policy Language ===<br />
The policy language for Congress is Datalog, which is essentially SQL with a syntax closer to traditional programming languages. This declarative language was chosen because its semantics are well known to a broad range of DevOps engineers, yet its syntax is more terse, making it better suited for expressing real-world policies. The grammar is given below.<br />
<br />
<policy> ::= <rule>*<br /><br />
<rule> ::= <atom> COLONMINUS <literal> (COMMA <literal>)*<br /><br />
<literal> ::= <atom> <br /><br />
<literal> ::= NOT <atom> <br /><br />
<atom> ::= TABLENAME LPAREN <term> (COMMA <term>)* RPAREN <br /><br />
<term> ::= INTEGER | FLOAT | STRING | VARIABLE <br /><br />
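<br />
As an illustration, here is a small policy that conforms to this grammar and encodes the second example from the "What is Congress" section above (a virtual machine owned by a tenant in group B must have a public network connection). The table names nova_vm, group_member, connected, and public_network are hypothetical placeholders for tables that data sources would provide; they are not part of the grammar.<br />
<br />
 error(vm) :- nova_vm(vm, tenant), group_member(tenant, "B"), not has_public_connection(vm)<br />
 has_public_connection(vm) :- connected(vm, network), public_network(network)<br />
<br />
The first rule flags any such VM for which no public connection can be derived; the second rule derives has_public_connection from more basic connectivity tables.<br />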
<br />
=== Use Cases ===<br />
<br />
Detailed use cases can be viewed in the Google doc here: [https://goo.gl/RM3W6W Congress Use Cases]. We are currently migrating these use cases to follow the [[#How_To_Create_a_Blueprint|Blueprint / Specs workflow]].<br />
<br />
Below we list use cases and the Congress release in which we hope to support each; a Datalog sketch of the first use case appears after the table. Please feel free to add additional use cases that you would like Congress to support.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Use case !! Congress target release<br />
|-<br />
| Every network connected to a VM must either be public or owned by someone in the same group as the VM's owner || alpha<br />
|-<br />
| Define a group hierarchy for servers where base groups are defined in LDAP. When base-level group change occurs, update network ACLs. || alpha<br />
|-<br />
| Every time a vulnerability scanner reports a vulnerability, disconnect the VM from the network, patch the vulnerability, and reconnect. || beta<br />
|-<br />
| Express a rule to determine if [[Solum]] user may advance an assembly from one [[Solum/Environments|Environment]] to another. See adrian_otto on IRC for details. || TBD<br />
|}<br />
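<br />
As a sketch of how the first use case in the table might be expressed in Datalog, assuming hypothetical data source tables connected, public, owner, and member:<br />
<br />
 error(network) :- connected(vm, network), not public(network), owner(network, net_owner), owner(vm, vm_owner), not same_group(net_owner, vm_owner)<br />
 same_group(user1, user2) :- member(user1, grp), member(user2, grp)<br />
<br />
The first rule flags any non-public network attached to a VM whose owner cannot be shown to share a group with the network's owner; the second rule derives same_group from group membership.<br />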
<br />
=== Roadmap ===<br />
Here is a list of features that would be useful in future releases. We have not prioritized or assigned any of these, so if any of you developers think they sound fun, let us know and we can explain more.<br />
<br />
* Basic Policy language implementation<br />
** Multi-threaded Datalog implementation<br />
** Bottom-up datalog evaluation<br />
** Query optimization<br />
** Materialized implementation: automated view selection/inlining<br />
<br />
* Enhanced Policy language<br />
** ActiveDirectory facade<br />
** Syntax improvements (modals like insert/delete)<br />
** Support for compound objects (safe, stratified recursion)<br />
** Richer support for describing actions in Enforcement policy<br />
** Modules<br />
<br />
* Policy structure<br />
** Multi-tenant<br />
** Multi-stakeholder (tenants, finance, operations, etc.)<br />
<br />
* Enforcement<br />
** Execution of (rich) actions<br />
** Carve out subpolicies and push to other components, e.g. Neutron<br />
** Add consultation with Congress to other OS components, e.g. Nova/Neutron<br />
** Proper remediation enumeration with Classification+Action policies<br />
** Find ways of automatically choosing the proper remediation strategy (e.g. priorities/monotonicity)<br />
** Give cloud owner way of configuring how proactive/reactive/etc. based on information from separate policies.<br />
<br />
* Libraries<br />
** Data source drivers for common OS and non-OS components<br />
** HIPAA, etc. encoding<br />
** Ontologies for different sectors, e.g. finance<br />
<br />
* Policy Analysis<br />
** Look for infinite loops through Enforcement policy (using Action policy)<br />
** Compare Access control policy and Classification policy for redundancy<br />
** Change impact analysis<br />
<br />
* Dashboard<br />
** IDE for policy (different levels: raw-Datalog, AD, checkbox-based)<br />
** List violations<br />
** Explain violations (step-by-step tracing thru policy)<br />
** Simulate state change and action execution<br />
** Enumerate remediations for a given violation<br />
<br />
* Architecture and API<br />
** Formalize and implement full introspection and query APIs<br />
** Distribute across multiple nodes<br />
** Ensure Congress can use another Congress instance as data<br />
<br />
* Authentication and Access Control<br />
** Add authentication and access controls on API/dashboard<br />
** When remediating, which user(s) are executing actions? Does Congress need admin credentials for all its cloud services or are user credentials part of actions? Need proper storage for those credentials. Etc.<br />
<br />
=== Relationship to Other Projects ===<br />
<br />
'''Related OpenStack components'''<br />
* [https://wiki.openstack.org/wiki/Keystone Keystone]: Keystone is an identity service providing authentication and high-level authorization for OpenStack. Congress can leverage Keystone as an input for policies. For example, an auditor might want to ensure that the running system is consistent with current Keystone authorization decisions.<br />
<br />
* [https://wiki.openstack.org/wiki/Heat Heat]: Heat is an orchestration service for application provisioning and lifecycle management. Congress can ensure that applications managed by Heat are consistent with business policy.<br />
<br />
* [https://wiki.openstack.org/wiki/Mistral Mistral]: Mistral is a task management service, or, in other words, Workflow as a Service. Its primary use cases include Cron-for-the-cloud, execution of long-running tasks, and big data analysis. Congress could potentially utilize Mistral to execute actions that bring the cloud back into policy compliance.<br />
<br />
'''Policy Initiatives'''<br />
<br />
* [https://blueprints.launchpad.net/nova/+spec/solver-scheduler SolverScheduler (Nova blueprint)]: The SolverScheduler provides an interface for using different constraint solvers to solve placement problems for Nova. Depending on how it is applied, it could be used for initial provisioning, re-balancing loads, or both.<br />
<br />
* [https://github.com/openstack/gantt Gantt]: A scheduler framework for use by different OpenStack components. It used to be a [https://wiki.openstack.org/wiki/Meetings/Scheduler subgroup of Nova] focused on scheduling VMs based on resource utilization, and it includes a plugin framework for making arbitrary metrics available to the scheduler.<br />
<br />
* [https://docs.google.com/document/d/1ZbOFxAoibZbJmDWx1oOrOsDcov6Cuom5aaBIrupCD9E/edit?pli=1 Neutron policy group]: This group aims to add a policy API to Neutron, where tenants express policy between groups of networks, ports, etc., and that policy is enforced. Policy statements are of the form "for every network flow between groups A and B that satisfies these conditions, apply a constraint on that flow". The constraints that can be enforced on a flow will grow as the enforcement engine matures; currently, the constraints are Allow and Deny, but there are plans for quality-of-service constraints as well. <br />
<br />
* [https://wiki.openstack.org/wiki/OpenAttestation Open Attestation]: This project provides an SDK for verifying host integrity. It provides some policy-based management capabilities, though documentation is limited.<br />
<br />
* [https://blueprints.launchpad.net/nova/+spec/policy-based-scheduler Policy-based Scheduling Module (Nova blueprint)]: This effort aims to schedule Nova resources per client, per cluster of resources and per context (e.g. overload, time, etc.). A proof of concept was presented at the Demo Theater at OpenStack Juno Summit. <br />
<br />
* [https://docs.google.com/document/d/1DMsnGxQ3P-OwZCF3uxaUeEFaKX8LqUqmmgQ_7EVK7Y8/edit Tetris]: This effort provides condition-action policies (under certain conditions, execute these actions). It is intended to be a generic condition-action engine handling complex actions and optimization. This effort subsumes the [https://blueprints.launchpad.net/nova/+spec/resource-optimization-service Runtime Policies blueprint] within Nova. It also appears to subsume the [http://openstack-neat.org/ Neat] effort. Tetris and Congress have recently decided to merge because of their highly aligned goals and approaches.<br />
<br />
* [https://review.openstack.org/#/c/95907/6/specs/convergence.rst Convergence Engine (Heat spec)]: This effort separates the ideas of desired state and observed state for the objects Heat manages. The Convergence Engine will detect when the desired state and observed state differ and take action to eliminate those differences.<br />
<br />
* [http://docs.openstack.org/developer/swift/overview_policies.html Swift Storage Policies]: A Swift storage policy describes a virtual storage system that Swift implements with physical devices. Today each policy dictates how many partitions the storage system has, how many replicas of each object it should maintain, and the minimum amount of time that must elapse after a partition is moved before it can be moved again to a different physical location.<br />
<br />
<br />
Congress is complementary to all of these efforts. It is intended to be a general-purpose policy component and hence could (potentially) express any of the policies described above. However, to enforce those policies, Congress would carve off subpolicies and send them to the appropriate enforcement point in the cloud. For example, Congress might carve off the compute Load Balancing policy and send it to the Runtime Policy engine and/or carve off the networking policy and send it to the Neutron policy engine. Part of the goal of Congress is to give administrators a single place to write and inspect the policy being enforced throughout the datacenter or cloud, distribute the relevant portions of that policy to all of the available enforcement points, and monitor the state of the cloud to let administrators know if the cloud is in compliance. (Congress also attempts to correct policy violations when they occur, but optimization policies such as many of those addressed above will not be enforceable by Congress directly.)<br />
<br />
<br />
[Thanks to Jay Lau, Gokul Kandiraju, and others on the mailing list for help compiling this list.]<br />
<br />
== How To Propose a New Feature ==<br />
Proposing a new feature requires you to create two descriptions of that feature: a blueprint and a spec. Before code that implements a new feature is merged, its blueprint must be approved. A blueprint is approved once its spec is approved. A spec is a proxy for a blueprint that everyone in the community can comment on and help refine. (If Launchpad, the software OpenStack uses for managing blueprints and bugs, made it possible for the community to add comments to a blueprint, we would not need specs at all.)<br />
# Create a [https://blueprints.launchpad.net/congress/+addspec blueprint] briefly describing the feature. Include just the name, title, and summary.<br />
# Create an additional description of your feature, called a ''spec'', and add it to the congress-specs repo. <br />
## Check out the [https://github.com/stackforge/congress-specs congress-specs repo].<br />
## Create a new file, whose name is similar to the blueprint name, in the current release folder, e.g. congress-specs/specs/kilo/your-spec-here.rst<br />
## Use reStructuredText (RST) (an [http://sphinx-doc.org/rest.html RST tutorial]) to describe your new feature by filling out the [https://github.com/stackforge/congress-specs/blob/master/specs/template.rst spec template]. <br />
## Push your changes to the congress-specs repo, a process that is explained [https://wiki.openstack.org/wiki/Gerrit_Workflow here]<br />
# Go back to your blueprint and add a link (e.g. https://github.com/stackforge/congress-specs/tree/master/specs/kilo/your-spec-here.rst) to your spec in the Specification Location field on the Blueprint details. <br />
# Participate in the discussion and refinement of your feature via [https://review.openstack.org/#/q/status:open+project:stackforge/congress-specs,n,z Gerrit reviews]<br />
# Eventually, core reviewers will either reject or approve your spec<br />
# If the spec is approved, a core reviewer will merge it into the repo and approve your blueprint. You then copy the finalized details from your spec into your blueprint.<br />
# If your blueprint is either not approved or not implemented during a release cycle, it must be resubmitted for the next cycle.<br />
<br />
== Incubation Plan ==<br />
The official TC governance process is described here: https://wiki.openstack.org/wiki/Governance/Approved/Incubation<br />
Below are the incubation requirements from the TC, taken from https://github.com/openstack/governance/blob/master/reference/incubation-integration-requirements.rst as of 03 Jun 2014. This RST is still under development. Our incubation proposal will likely be debated within the TC in an etherpad.<br />
<br />
The TC will evaluate the project scope and its complementarity with existing integrated projects and other official programs, look into the project technical choices, and check a number of requirements, including (but not limited to):<br />
<br />
'''Scope'''<br />
<br />
Project must have a clear and defined scope.<br />
<br />
Project's scope should represent a measured progression for OpenStack as a whole.<br />
<br />
Project should not inadvertently duplicate functionality present in other OpenStack projects. If they do, they should have a clear plan and timeframe to prevent long-term scope duplication.<br />
<br />
Project should leverage existing functionality in other OpenStack projects as much as possible<br />
<br />
<br />
'''Maturity'''<br />
<br />
Project should have an active team of contributors<br />
<br />
Project should not have a major architectural rewrite planned<br />
<br />
<br />
'''Process'''<br />
<br />
Project must be hosted under stackforge (and therefore use git as its VCS)<br />
<br />
Project must obey OpenStack coordinated project interface (such as tox, pbr, global-requirements...)<br />
<br />
Project should use oslo libraries or oslo-incubator where appropriate<br />
<br />
If project is not part of an existing program, it needs to file for a new program concurrently with the Incubation request, and fill the corresponding requirements.<br />
<br />
Project must have a well-defined core review team, with reviews distributed amongst the team (and not being primarily done by one person)<br />
<br />
Reviews should follow the same criteria as OpenStack projects (2 +2s before +A)<br />
<br />
Project should use the official openstack lists for discussion<br />
<br />
<br />
<br />
'''API'''<br />
<br />
Project APIs should be reasonably stable<br />
<br />
Project must have a REST API with at least a JSON entity representation<br />
<br />
Project must have a Python client library API for its REST API<br />
<br />
<br />
'''QA'''<br />
<br />
Project must have a basic devstack-gate job set up<br />
<br />
'''Documentation / User support'''<br />
<br />
Project must have docs for developers who want to contribute to the project<br />
<br />
Project should have API documentation for devs who want to add to the API, updated when the code is updated<br />
<br />
<br />
'''Legal requirements'''<br />
<br />
Project must be licensed under the Apache License v2<br />
<br />
Project must have no library dependencies which effectively restrict how the project may be distributed or deployed [1]<br />
<br />
All contributors to the project must have signed the CLA<br />
<br />
Project must have no known trademark issues [2]</div>Thinrichshttps://wiki.openstack.org/w/index.php?title=Congress&diff=64188Congress2014-10-02T15:17:17Z<p>Thinrichs: /* How To Propose a New Feature */</p>
<hr />
<div>== Policy as a service ("Congress") ==<br />
Here are some external resources with detailed information about Congress.<br />
<br />
*Source code<br />
** [https://github.com/stackforge/congress Server source code ] <br />
** [https://github.com/stackforge/python-congressclient Python client source code]<br />
** [https://github.com/stackforge/congress-specs Specs (Suggestions for additional features) ]<br />
** [https://review.openstack.org/#/q/status:open+project:stackforge/congress,n,z Outstanding Gerrit server changes]<br />
** [https://review.openstack.org/#/q/status:open+project:stackforge/python-congressclient,n,z Outstanding Gerrit python client changes]<br />
** [https://review.openstack.org/#/q/status:open+project:stackforge/congress-specs,n,z Outstanding Gerrit specs changes]<br />
** [https://launchpad.net/congress Congress on Launchpad]<br />
<br />
* Design docs<br />
** [https://docs.google.com/a/vmware.com/document/d/1f2xokl9Tc47aV67KEua0PBRz4jsdSDLXth7dYe-jz6Q/edit Overall design doc]<br />
** [https://docs.google.com/document/d/1K9RkQuBSPN7Z2TmKfok7mw3E24otEGo8Pnsemxd5544/edit Data integration design doc]<br />
** [https://goo.gl/1E5MeY API design doc]<br />
<br />
*Meetings<br />
** [https://wiki.openstack.org/wiki/Meetings/Congress Meeting times]<br />
** [http://eavesdrop.openstack.org/meetings/congressteammeeting/2014/ Meeting history]<br />
** [https://www.openstack.org/assets/presentation-media/Congress-OpenStack-Atlanta-2014.pdf Slides from Atlanta summit]<br />
** [https://etherpad.openstack.org/p/juno-congress Notes from Atlanta summit]<br />
<br />
<br />
If you'd like more information, join #congress on freenode, or contact Tim Hinrichs (thinrichs@vmware.com) and Peter Balland (pballand@vmware.com).<br />
<br />
=== Mission ===<br />
Congress is an OpenStack project to provide policy as a service across any collection of cloud services in order to offer governance and compliance for dynamic infrastructures.<br />
<br />
=== Why Congress===<br />
IT services will always be governed and brought into compliance with business-level policies. <br />
<br />
In the past, policy was enforced manually, e.g. by someone sending an email asking for an application to be added to the network, secured by specific firewall entries, connected to an agreed-on storage, and so on. In the cloud era, IT has become more agile: users expect immediate delivery of services, a level of responsiveness that is unattainable by the team responsible for governance. Hence, manual enforcement is no longer feasible. <br />
<br />
Both enterprises and vendors have fielded engines for enforcing policy (semi)-automatically, creating a fragmented market where enterprises reinvent the wheel while maintaining their own code, and vendors fail to meet enterprise needs, either for technical reasons or because their solutions require vertical integration and lock-in. <br />
<br />
The Congress policy service enables IT organizations to extend their OpenStack footprint by onboarding new applications while maintaining the strong compliance and governance dictated by their own business policies. All of this is built on a community-driven implementation into which vendors can plug via a common interface.<br />
<br />
=== What is Congress ===<br />
Congress aims to provide an extensible open-source framework for governance and regulatory compliance across any cloud services (e.g. application, network, compute and storage) within a dynamic infrastructure. It is a cloud service whose sole responsibility is policy enforcement.<br />
<br />
Congress aims to include the following functionality:<br />
* Allow cloud administrators and tenants to use a high-level, general purpose, declarative language to describe business logic. The policy language does not include a fixed collection of policy types or built-in enforcement mechanisms; rather, a policy simply defines which states of the cloud are in compliance and which are not, where the state of the cloud is the collection of data provided by the cloud services available to Congress. Some examples:<br />
** Application A is only allowed to communicate with application B.<br />
** Virtual machine owned by tenant A should always have a public network connection if tenant A is part of group B.<br />
** Virtual machine A should never be provisioned in a different geographic region than storage B.<br />
* Offer a pluggable architecture that connects to any collection of cloud services<br />
* Enforce policy<br />
** Proactively: preventing violations before they occur<br />
** Reactively: correcting violations after they occur<br />
** Interactively: giving administrators insight into policy and its violations, e.g. identifying violations, explaining their causes, computing potential remediations, and simulating a sequence of changes.<br />
<br />
=== Policy Language ===<br />
The policy language for Congress is Datalog, which is essentially SQL with a syntax closer to traditional programming languages. This declarative language was chosen because its semantics are well known to a broad range of DevOps engineers, yet its syntax is more terse, making it better suited for expressing real-world policies. The grammar is given below.<br />
<br />
<policy> ::= <rule>*<br /><br />
<rule> ::= <atom> COLONMINUS <literal> (COMMA <literal>)*<br /><br />
<literal> ::= <atom> <br /><br />
<literal> ::= NOT <atom> <br /><br />
<atom> ::= TABLENAME LPAREN <term> (COMMA <term>)* RPAREN <br /><br />
<term> ::= INTEGER | FLOAT | STRING | VARIABLE <br /><br />
<br />
=== Use Cases ===<br />
<br />
Detailed use cases can be viewed in the Google doc here: [https://goo.gl/RM3W6W Congress Use Cases]. We are currently migrating these use cases to follow the [[#How_To_Create_a_Blueprint|Blueprint / Specs workflow]].<br />
<br />
Below we list use cases and the Congress release in which we hope to support each; a sketch of the group-hierarchy use case appears after the table. Please feel free to add additional use cases that you would like Congress to support.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Use case !! Congress target release<br />
|-<br />
| Every network connected to a VM must either be public or owned by someone in the same group as the VM's owner || alpha<br />
|-<br />
| Define a group hierarchy for servers where base groups are defined in LDAP. When base-level group change occurs, update network ACLs. || alpha<br />
|-<br />
| Every time a vulnerability scanner reports a vulnerability, disconnect the VM from the network, patch the vulnerability, and reconnect. || beta<br />
|-<br />
| Express a rule to determine if [[Solum]] user may advance an assembly from one [[Solum/Environments|Environment]] to another. See adrian_otto on IRC for details. || TBD<br />
|}<br />
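<br />
As a sketch of the group-hierarchy use case above, a derived table in_group could be defined over hypothetical tables ldap_member (base groups pulled from LDAP) and subgroup (the hierarchy itself). Note that the second rule is recursive, and safe, stratified recursion is still listed on the roadmap below.<br />
<br />
 in_group(user, grp) :- ldap_member(user, grp)<br />
 in_group(user, grp) :- subgroup(sub, grp), in_group(user, sub)<br />
<br />
Network ACL rules could then be written against in_group rather than against the raw LDAP data, so that changes to base groups propagate automatically.<br />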
<br />
=== Roadmap ===<br />
Here is a list of features that would be useful in future releases. We have not prioritized or assigned any of these, so if any of you developers think they sound fun, let us know and we can explain more.<br />
<br />
* Basic Policy language implementation<br />
** Multi-threaded Datalog implementation<br />
** Bottom-up datalog evaluation<br />
** Query optimization<br />
** Materialized implementation: automated view selection/inlining<br />
<br />
* Enhanced Policy language<br />
** ActiveDirectory facade<br />
** Syntax improvements (modals like insert/delete)<br />
** Support for compound objects (safe, stratified recursion)<br />
** Richer support for describing actions in Enforcement policy<br />
** Modules<br />
<br />
* Policy structure<br />
** Multi-tenant<br />
** Multi-stakeholder (tenants, finance, operations, etc.)<br />
<br />
* Enforcement<br />
** Execution of (rich) actions<br />
** Carve out subpolicies and push to other components, e.g. Neutron<br />
** Add consultation with Congress to other OS components, e.g. Nova/Neutron<br />
** Proper remediation enumeration with Classification+Action policies<br />
** Find ways of automatically choosing the proper remediation strategy (e.g. priorities/monotonicity)<br />
** Give cloud owner way of configuring how proactive/reactive/etc. based on information from separate policies.<br />
<br />
* Libraries<br />
** Data source drivers for common OS and non-OS components<br />
** HIPAA, etc. encoding<br />
** Ontologies for different sectors, e.g. finance<br />
<br />
* Policy Analysis<br />
** Look for infinite loops through Enforcement policy (using Action policy)<br />
** Compare Access control policy and Classification policy for redundancy<br />
** Change impact analysis<br />
<br />
* Dashboard<br />
** IDE for policy (different levels: raw-Datalog, AD, checkbox-based)<br />
** List violations<br />
** Explain violations (step-by-step tracing thru policy)<br />
** Simulate state change and action execution<br />
** Enumerate remediations for a given violation<br />
<br />
* Architecture and API<br />
** Formalize and implement full introspection and query APIs<br />
** Distribute across multiple nodes<br />
** Ensure Congress can use another Congress instance as data<br />
<br />
* Authentication and Access Control<br />
** Add authentication and access controls on API/dashboard<br />
** When remediating, which user(s) are executing actions? Does Congress need admin credentials for all its cloud services or are user credentials part of actions? Need proper storage for those credentials. Etc.<br />
<br />
=== Relationship to Other Projects ===<br />
<br />
'''Related OpenStack components'''<br />
* [https://wiki.openstack.org/wiki/Keystone Keystone]: Keystone is an identity service providing authentication and high-level authorization for OpenStack. Congress can leverage Keystone as an input for policies. For example, an auditor might want to ensure that the running system is consistent with current Keystone authorization decisions.<br />
<br />
* [https://wiki.openstack.org/wiki/Heat Heat]: Heat is an orchestration service for application provisioning and lifecycle management. Congress can ensure that applications managed by Heat are consistent with business policy.<br />
<br />
* [https://wiki.openstack.org/wiki/Mistral Mistral]: Mistral is a task management service, or, in other words, Workflow as a Service. Its primary use cases include Cron-for-the-cloud, execution of long-running tasks, and big data analysis. Congress could potentially utilize Mistral to execute actions that bring the cloud back into policy compliance.<br />
<br />
'''Policy Initiatives'''<br />
<br />
* [https://blueprints.launchpad.net/nova/+spec/solver-scheduler SolverScheduler (Nova blueprint)]: The SolverScheduler provides an interface for using different constraint solvers to solve placement problems for Nova. Depending on how it is applied, it could be used for initial provisioning, re-balancing loads, or both.<br />
<br />
* [https://github.com/openstack/gantt Gantt]: A scheduler framework for use by different OpenStack components. It used to be a [https://wiki.openstack.org/wiki/Meetings/Scheduler subgroup of Nova] focused on scheduling VMs based on resource utilization, and it includes a plugin framework for making arbitrary metrics available to the scheduler.<br />
<br />
* [https://docs.google.com/document/d/1ZbOFxAoibZbJmDWx1oOrOsDcov6Cuom5aaBIrupCD9E/edit?pli=1 Neutron policy group]: This group aims to add a policy API to Neutron, where tenants express policy between groups of networks, ports, etc., and that policy is enforced. Policy statements are of the form "for every network flow between groups A and B that satisfies these conditions, apply a constraint on that flow". The constraints that can be enforced on a flow will grow as the enforcement engine matures; currently, the constraints are Allow and Deny, but there are plans for quality-of-service constraints as well. <br />
<br />
* [https://wiki.openstack.org/wiki/OpenAttestation Open Attestation]: This project provides an SDK for verifying host integrity. It provides some policy-based management capabilities, though documentation is limited.<br />
<br />
* [https://blueprints.launchpad.net/nova/+spec/policy-based-scheduler Policy-based Scheduling Module (Nova blueprint)]: This effort aims to schedule Nova resources per client, per cluster of resources and per context (e.g. overload, time, etc.). A proof of concept was presented at the Demo Theater at OpenStack Juno Summit. <br />
<br />
* [https://docs.google.com/document/d/1DMsnGxQ3P-OwZCF3uxaUeEFaKX8LqUqmmgQ_7EVK7Y8/edit Tetris]: This effort provides condition-action policies (under certain conditions, execute these actions). It is intended to be a generic condition-action engine handling complex actions and optimization. This effort subsumes the [https://blueprints.launchpad.net/nova/+spec/resource-optimization-service Runtime Policies blueprint] within Nova. It also appears to subsume the [http://openstack-neat.org/ Neat] effort. Tetris and Congress have recently decided to merge because of their highly aligned goals and approaches.<br />
<br />
* [https://review.openstack.org/#/c/95907/6/specs/convergence.rst Convergence Engine (Heat spec)]: This effort separates the ideas of desired state and observed state for the objects Heat manages. The Convergence Engine will detect when the desired state and observed state differ and take action to eliminate those differences.<br />
<br />
* [http://docs.openstack.org/developer/swift/overview_policies.html Swift Storage Policies]: A Swift storage policy describes a virtual storage system that Swift implements with physical devices. Today each policy dictates how many partitions the storage system has, how many replicas of each object it should maintain, and the minimum amount of time that must elapse after a partition is moved before it can be moved again to a different physical location.<br />
<br />
<br />
Congress is complementary to all of these efforts. It is intended to be a general-purpose policy component and hence could (potentially) express any of the policies described above. However, to enforce those policies, Congress would carve off subpolicies and send them to the appropriate enforcement point in the cloud. For example, Congress might carve off the compute Load Balancing policy and send it to the Runtime Policy engine and/or carve off the networking policy and send it to the Neutron policy engine. Part of the goal of Congress is to give administrators a single place to write and inspect the policy being enforced throughout the datacenter or cloud, distribute the relevant portions of that policy to all of the available enforcement points, and monitor the state of the cloud to let administrators know if the cloud is in compliance. (Congress also attempts to correct policy violations when they occur, but optimization policies such as many of those addressed above will not be enforceable by Congress directly.)<br />
<br />
<br />
[Thanks to Jay Lau, Gokul Kandiraju, and others on the mailing list for help compiling this list.]<br />
<br />
== How To Propose a New Feature ==<br />
Proposing a new feature requires you to create two descriptions of that feature: a blueprint and a spec. Before code that implements a new feature is merged, its blueprint must be approved. A blueprint is approved once its spec is approved. A spec is a proxy for a blueprint that everyone in the community can comment on and help refine. (If Launchpad, the software OpenStack uses for managing blueprints and bugs, made it possible for the community to add comments to a blueprint, we would not need specs at all.)<br />
# Create a [https://blueprints.launchpad.net/congress/+addspec blueprint] briefly describing the feature. Include just the name, title, and summary.<br />
# Create an additional description of your feature, called a ''spec'', and add it to the congress-specs repo. <br />
## Check out the [https://github.com/stackforge/congress-specs congress-specs repo].<br />
## Create a new file, whose name is similar to the blueprint name, in the current release folder, e.g. congress-specs/specs/kilo/your-spec-here.rst<br />
## Use reStructuredText (RST) (an [http://sphinx-doc.org/rest.html RST tutorial]) to describe your new feature by filling out the [https://github.com/stackforge/congress-specs/blob/master/specs/template.rst spec template]. <br />
## Push your changes to the congress-specs repo, a process that is explained [https://wiki.openstack.org/wiki/Gerrit_Workflow here]<br />
# Go back to your blueprint and add a link (e.g. https://github.com/stackforge/congress-specs/tree/master/specs/kilo/your-spec-here.rst) to your spec in the Specification Location field on the Blueprint details. <br />
# Participate in the discussion and refinement of your feature via [https://review.openstack.org/#/q/status:open+project:stackforge/congress-specs,n,z Gerrit reviews]<br />
# Eventually, core reviewers will either reject or approve your spec<br />
# If the spec is approved, a core reviewer will merge it into the repo and approve your blueprint. You then copy the finalized details from your spec into your blueprint.<br />
# If your blueprint is either not approved or not implemented during a release cycle, it must be resubmitted for the next cycle.<br />
<br />
== Incubation Plan ==<br />
The official TC governance process is described here: https://wiki.openstack.org/wiki/Governance/Approved/Incubation<br />
Below are the incubation requirements from the TC, taken from https://github.com/openstack/governance/blob/master/reference/incubation-integration-requirements.rst as of 03 Jun 2014. This RST is still under development. Our incubation proposal will likely be debated within the TC in an etherpad.<br />
<br />
The TC will evaluate the project scope and its complementarity with existing integrated projects and other official programs, look into the project technical choices, and check a number of requirements, including (but not limited to):<br />
<br />
'''Scope'''<br />
<br />
Project must have a clear and defined scope.<br />
<br />
Project's scope should represent a measured progression for OpenStack as a whole.<br />
<br />
Project should not inadvertently duplicate functionality present in other OpenStack projects. If they do, they should have a clear plan and timeframe to prevent long-term scope duplication.<br />
<br />
Project should leverage existing functionality in other OpenStack projects as much as possible<br />
<br />
<br />
'''Maturity'''<br />
<br />
Project should have an active team of contributors<br />
<br />
Project should not have a major architectural rewrite planned<br />
<br />
<br />
'''Process'''<br />
<br />
Project must be hosted under stackforge (and therefore use git as its VCS)<br />
<br />
Project must obey OpenStack coordinated project interface (such as tox, pbr, global-requirements...)<br />
<br />
Project should use oslo libraries or oslo-incubator where appropriate<br />
<br />
If project is not part of an existing program, it needs to file for a new program concurrently with the Incubation request, and fill the corresponding requirements.<br />
<br />
Project must have a well-defined core review team, with reviews distributed amongst the team (and not being primarily done by one person)<br />
<br />
Reviews should follow the same criteria as OpenStack projects (2 +2s before +A)<br />
<br />
Project should use the official openstack lists for discussion<br />
<br />
<br />
<br />
'''API'''<br />
<br />
Project APIs should be reasonably stable<br />
<br />
Project must have a REST API with at least a JSON entity representation<br />
<br />
Project must have a Python client library API for its REST API<br />
<br />
<br />
'''QA'''<br />
<br />
Project must have a basic devstack-gate job set up<br />
<br />
'''Documentation / User support'''<br />
<br />
Project must have docs for developers who want to contribute to the project<br />
<br />
Project should have API documentation for devs who want to add to the API, updated when the code is updated<br />
<br />
<br />
'''Legal requirements'''<br />
<br />
Project must be licensed under the Apache License v2<br />
<br />
Project must have no library dependencies which effectively restrict how the project may be distributed or deployed [1]<br />
<br />
All contributors to the project must have signed the CLA<br />
<br />
Project must have no known trademark issues [2]</div>Thinrichshttps://wiki.openstack.org/w/index.php?title=Congress&diff=64184Congress2014-10-02T15:00:13Z<p>Thinrichs: /* How To Create a Blueprint */</p>
<hr />
<div>== Policy as a service ("Congress") ==<br />
Here are some external resources with detailed information about Congress.<br />
<br />
*Source code<br />
** [https://github.com/stackforge/congress Server source code ] <br />
** [https://github.com/stackforge/python-congressclient Python client source code]<br />
** [https://github.com/stackforge/congress-specs Specs (Suggestions for additional features) ]<br />
** [https://review.openstack.org/#/q/status:open+project:stackforge/congress,n,z Outstanding Gerrit server changes]<br />
** [https://review.openstack.org/#/q/status:open+project:stackforge/python-congressclient,n,z Outstanding Gerrit python client changes]<br />
** [https://review.openstack.org/#/q/status:open+project:stackforge/congress-specs,n,z Outstanding Gerrit specs changes]<br />
** [https://launchpad.net/congress Congress on Launchpad]<br />
<br />
* Design docs<br />
** [https://docs.google.com/a/vmware.com/document/d/1f2xokl9Tc47aV67KEua0PBRz4jsdSDLXth7dYe-jz6Q/edit Overall design doc]<br />
** [https://docs.google.com/document/d/1K9RkQuBSPN7Z2TmKfok7mw3E24otEGo8Pnsemxd5544/edit Data integration design doc]<br />
** [https://goo.gl/1E5MeY API design doc]<br />
<br />
*Meetings<br />
** [https://wiki.openstack.org/wiki/Meetings/Congress Meeting times]<br />
** [http://eavesdrop.openstack.org/meetings/congressteammeeting/2014/ Meeting history]<br />
** [https://www.openstack.org/assets/presentation-media/Congress-OpenStack-Atlanta-2014.pdf Slides from Atlanta summit]<br />
** [https://etherpad.openstack.org/p/juno-congress Notes from Atlanta summit]<br />
<br />
<br />
If you'd like more information, join #congress on freenode, or contact Tim Hinrichs (thinrichs@vmware.com) and Peter Balland (pballand@vmware.com).<br />
<br />
=== Mission ===<br />
Congress is an OpenStack project to provide policy as a service across any collection of cloud services in order to offer governance and compliance for dynamic infrastructures.<br />
<br />
=== Why Congress===<br />
IT services will always be governed and brought into compliance with business-level policies. <br />
<br />
In the past, policy was enforced manually, e.g. by someone sending an email asking for an application to be added to the network, secured by specific firewall entries, connected to an agreed-on storage, and so on. In the cloud era, IT has become more agile: users expect immediate delivery of services, a level of responsiveness that is unattainable by the team responsible for governance. Hence, manual enforcement is no longer feasible. <br />
<br />
Both enterprises and vendors have fielded engines for enforcing policy (semi)-automatically, creating a fragmented market where enterprises reinvent the wheel while maintaining their own code, and vendors fail to meet enterprise needs, either for technical reasons or because their solutions require vertical integration and lock-in. <br />
<br />
The Congress policy service enables IT organizations to extend their OpenStack footprint by onboarding new applications while maintaining the strong compliance and governance dictated by their own business policies. All of this is built on a community-driven implementation into which vendors can plug via a common interface.<br />
<br />
=== What is Congress ===<br />
Congress aims to provide an extensible open-source framework for governance and regulatory compliance across any cloud services (e.g. application, network, compute and storage) within a dynamic infrastructure. It is a cloud service whose sole responsibility is policy enforcement.<br />
<br />
Congress aims to include the following functionality:<br />
* Allow cloud administrators and tenants to use a high-level, general purpose, declarative language to describe business logic. The policy language does not include a fixed collection of policy types or built-in enforcement mechanisms; rather, a policy simply defines which states of the cloud are in compliance and which are not, where the state of the cloud is the collection of data provided by the cloud services available to Congress. Some examples:<br />
** Application A is only allowed to communicate with application B.<br />
** Virtual machine owned by tenant A should always have a public network connection if tenant A is part of group B.<br />
** Virtual machine A should never be provisioned in a different geographic region than storage B.<br />
* Offer a pluggable architecture that connects to any collection of cloud services<br />
* Enforce policy<br />
** Proactively: preventing violations before they occur<br />
** Reactively: correcting violations after they occur<br />
** Interactively: giving administrators insight into policy and its violations, e.g. identifying violations, explaining their causes, computing potential remediations, and simulating a sequence of changes.<br />
<br />
=== Policy Language ===<br />
The policy language for Congress is Datalog, which is essentially SQL with a syntax closer to traditional programming languages. This declarative language was chosen because its semantics are well known to a broad range of DevOps engineers, yet its syntax is more terse, making it better suited for expressing real-world policies. The grammar is given below.<br />
<br />
<policy> ::= <rule>*<br /><br />
<rule> ::= <atom> COLONMINUS <literal> (COMMA <literal>)*<br /><br />
<literal> ::= <atom> <br /><br />
<literal> ::= NOT <atom> <br /><br />
<atom> ::= TABLENAME LPAREN <term> (COMMA <term>)* RPAREN <br /><br />
<term> ::= INTEGER | FLOAT | STRING | VARIABLE <br /><br />
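<br />
As an illustration of this grammar, the geographic-region example from the "What is Congress" section above could be sketched as follows, using hypothetical tables uses_volume, vm_region, volume_region, and region:<br />
<br />
 error(vm, volume) :- uses_volume(vm, volume), vm_region(vm, reg1), volume_region(volume, reg2), not same_region(reg1, reg2)<br />
 same_region(reg, reg) :- region(reg)<br />
<br />
The first rule flags a VM and volume pair whose regions cannot be shown to be the same; the second rule makes same_region hold when both arguments are the identical region.<br />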
<br />
=== Use Cases ===<br />
<br />
Detailed use cases can be viewed in the Google doc here: [https://goo.gl/RM3W6W Congress Use Cases]. We are currently migrating these use cases to follow the [[#How_To_Create_a_Blueprint|Blueprint / Specs workflow]].<br />
<br />
Below we list use cases and the Congress release in which we hope to support each. Please feel free to add additional use cases that you would like Congress to support.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Use case !! Congress target release<br />
|-<br />
| Every network connected to a VM must either be public or owned by someone in the same group as the VM's owner || alpha<br />
|-<br />
| Define a group hierarchy for servers where base groups are defined in LDAP. When base-level group change occurs, update network ACLs. || alpha<br />
|-<br />
| Every time a vulnerability scanner reports a vulnerability, disconnect the VM from the network, patch the vulnerability, and reconnect. || beta<br />
|-<br />
| Express a rule to determine if [[Solum]] user may advance an assembly from one [[Solum/Environments|Environment]] to another. See adrian_otto on IRC for details. || TBD<br />
|}<br />
<br />
=== Roadmap ===<br />
Here is a list of features that would be useful in future releases. We have not prioritized or assigned any of these, so if any of you developers think they sound fun, let us know and we can explain more.<br />
<br />
* Basic Policy language implementation<br />
** Multi-threaded Datalog implementation<br />
** Bottom-up datalog evaluation<br />
** Query optimization<br />
** Materialized implementation: automated view selection/inlining<br />
<br />
* Enhanced Policy language<br />
** ActiveDirectory facade<br />
** Syntax improvements (modals like insert/delete)<br />
** Support for compound objects (safe, stratified recursion)<br />
** Richer support for describing actions in Enforcement policy<br />
** Modules<br />
<br />
* Policy structure<br />
** Multi-tenant<br />
** Multi-stakeholder (tenants, finance, operations, etc.)<br />
<br />
* Enforcement<br />
** Execution of (rich) actions<br />
** Carve out subpolicies and push to other components, e.g. Neutron<br />
** Add consultation with Congress to other OS components, e.g. Nova/Neutron<br />
** Proper remediation enumeration with Classification+Action policies<br />
** Find ways of automatically choosing the proper remediation strategy (e.g. priorities/monotonicity)<br />
** Give cloud owner way of configuring how proactive/reactive/etc. based on information from separate policies.<br />
<br />
* Libraries<br />
** Data source drivers for common OS and non-OS components<br />
** Encodings of HIPAA and similar regulations<br />
** Ontologies for different sectors, e.g. finance<br />
<br />
* Policy Analysis<br />
** Look for infinite loops through Enforcement policy (using Action policy)<br />
** Compare Access control policy and Classification policy for redundancy<br />
** Change impact analysis<br />
<br />
* Dashboard<br />
** IDE for policy (different levels: raw-Datalog, AD, checkbox-based)<br />
** List violations<br />
** Explain violations (step-by-step tracing through policy)<br />
** Simulate state change and action execution<br />
** Enumerate remediations for a given violation<br />
<br />
* Architecture and API<br />
** Formalize and implement full introspection and query APIs<br />
** Distribute across multiple nodes<br />
** Ensure Congress can use another Congress instance as data<br />
<br />
* Authentication and Access Control<br />
** Add authentication and access controls on API/dashboard<br />
** When remediating, which user(s) are executing actions? Does Congress need admin credentials for all its cloud services, or are user credentials part of actions? Proper storage for those credentials is also needed.<br />
<br />
=== Relationship to Other Projects ===<br />
<br />
'''Related OpenStack components'''<br />
* [https://wiki.openstack.org/wiki/Keystone Keystone]: Keystone is an identity service providing authentication and high-level authorization for OpenStack. Congress can leverage Keystone as an input for policies. For example, an auditor might want to ensure that the running system is consistent with current Keystone authorization decisions.<br />
<br />
* [https://wiki.openstack.org/wiki/Heat Heat]: Heat is an orchestration service for application provisioning and lifecycle management. Congress can ensure that applications managed by Heat are consistent with business policy.<br />
<br />
* [https://wiki.openstack.org/wiki/Mistral Mistral]: Mistral is a task management service, or in other words, Workflow as a Service. Its primary use cases include Cron-for-the-cloud, execution of long-running tasks, and big data analysis. Congress could potentially utilize Mistral to execute actions that bring the cloud back into policy compliance.<br />
<br />
'''Policy Initiatives'''<br />
<br />
* [https://blueprints.launchpad.net/nova/+spec/solver-scheduler SolverScheduler (Nova blueprint)]: The SolverScheduler provides an interface for using different constraint solvers to solve placement problems for Nova. Depending on how it is applied, it could be used for initial provisioning, re-balancing loads, or both.<br />
<br />
* [https://github.com/openstack/gantt Gantt]: A scheduler framework for use by different OpenStack components. Used to be a [https://wiki.openstack.org/wiki/Meetings/Scheduler subgroup of Nova] and focused on scheduling VMs based on resource utilization. Includes plugin framework for making arbitrary metrics available to the scheduler.<br />
<br />
* [https://docs.google.com/document/d/1ZbOFxAoibZbJmDWx1oOrOsDcov6Cuom5aaBIrupCD9E/edit?pli=1 Neutron policy group]: This group aims to add a policy API to Neutron, where tenants express policy between groups of networks, ports, etc., and that policy is enforced. Policy statements are of the form "for every network flow between groups A and B that satisfies these conditions, apply a constraint on that flow". The constraints that can be enforced on a flow will grow as the enforcement engine matures; currently, the constraints are Allow and Deny, but there are plans for quality-of-service constraints as well. <br />
<br />
* [https://wiki.openstack.org/wiki/OpenAttestation Open Attestation]: This project provides an SDK for verifying host integrity. It provides some policy-based management capabilities, though documentation is limited.<br />
<br />
* [https://blueprints.launchpad.net/nova/+spec/policy-based-scheduler Policy-based Scheduling Module (Nova blueprint)]: This effort aims to schedule Nova resources per client, per cluster of resources and per context (e.g. overload, time, etc.). A proof of concept was presented at the Demo Theater at OpenStack Juno Summit. <br />
<br />
* [https://docs.google.com/document/d/1DMsnGxQ3P-OwZCF3uxaUeEFaKX8LqUqmmgQ_7EVK7Y8/edit Tetris]: This effort provides condition-action policies (under certain conditions, execute these actions). It is intended to be a generic condition-action engine handling complex actions and optimization. This effort subsumes the [https://blueprints.launchpad.net/nova/+spec/resource-optimization-service Runtime Policies blueprint] within Nova. It also appears to subsume the [http://openstack-neat.org/ Neat] effort. Tetris and Congress have recently decided to merge because of their highly aligned goals and approaches.<br />
<br />
* [https://review.openstack.org/#/c/95907/6/specs/convergence.rst Convergence Engine (Heat spec)]: This effort separates the ideas of desired state and observed state for the objects Heat manages. The Convergence Engine will detect when the desired state and observed state differ and take action to eliminate those differences.<br />
<br />
* [http://docs.openstack.org/developer/swift/overview_policies.html Swift Storage Policies]: A Swift storage policy describes a virtual storage system that Swift implements with physical devices. Today each policy dictates how many partitions the storage system has, how many replicas of each object it should maintain, and the minimum amount of time that must pass after a partition was last moved before it can be moved again to a different physical location.<br />
<br />
<br />
Congress is complementary to all of these efforts. It is intended to be a general-purpose policy component and hence could (potentially) express any of the policies described above. However, to enforce those policies Congress would carve off subpolicies and send them to the appropriate enforcement point in the cloud. For example, Congress might carve off the compute load-balancing policy and send it to the Runtime Policy engine, and/or carve off the networking policy and send it to the Neutron policy engine. Part of the goal of Congress is to give administrators a single place to write and inspect the policy being enforced throughout the datacenter or cloud, distribute the relevant portions of that policy to all of the available enforcement points, and monitor the state of the cloud to let administrators know if the cloud is in compliance. (Congress also attempts to correct policy violations when they occur, but optimization policies such as many of those addressed above will not be enforceable by Congress directly.)<br />
<br />
<br />
[Thanks to Jay Lau, Gokul Kandiraju, and others on the mailing list for help compiling this list.]<br />
<br />
== How To Propose a New Feature ==<br />
# Create a [https://blueprints.launchpad.net/congress/+addspec blueprint] briefly describing the feature. Include just the name, title, and summary.<br />
# Create an additional description of your feature (called a ''spec'') and submit it to the [https://github.com/openstack/congress-specs congress-specs repo]. (The spec is used only because a blueprint has no mechanism for leaving comments and making suggestions. A blueprint must be approved before code implementing a new feature is merged, and the way to get a blueprint approved is to get its spec approved.)<br />
## To create the content, fill out the [https://github.com/openstack/congress-specs/blob/master/specs/template.rst spec template]. <br />
## The format of the spec is reStructuredText (RST). Here is a [http://sphinx-doc.org/rest.html tutorial].<br />
## The name of the file you create should be close to the blueprint name. <br />
## The file should be placed in the directory for the current release, e.g. ./specs/kilo/your-spec-here.rst<br />
# Go back to your blueprint and add a link to your spec (e.g. https://github.com/openstack/congress-specs/tree/master/specs/kilo/your-spec-here.rst) in the Specification Location field of the blueprint details. <br />
# Participate in the discussion and refinement of your feature via [https://review.openstack.org/#/q/status:open+project:openstack/congress-specs,n,z Gerrit reviews].<br />
# Eventually, core reviewers will either reject or approve your spec.<br />
# If the spec is approved, a core reviewer will merge it into the repo and approve your blueprint. You then copy the details from your spec into your blueprint.<br />
# Blueprints not approved or not implemented during a release cycle must be resubmitted for the next cycle.<br />
<br />
== Incubation Plan ==<br />
The official TC governance process is described at https://wiki.openstack.org/wiki/Governance/Approved/Incubation.<br />
Below are the incubation requirements from the TC, taken from https://github.com/openstack/governance/blob/master/reference/incubation-integration-requirements.rst as of 03jun2014. That RST is still under development; our incubation proposal will likely be debated within the TC in an etherpad.<br />
<br />
The TC will evaluate the project scope and its complementarity with existing integrated projects and other official programs, look into the project technical choices, and check a number of requirements, including (but not limited to):<br />
<br />
'''Scope'''<br />
<br />
Project must have a clear and defined scope.<br />
<br />
Project's scope should represent a measured progression for OpenStack as a whole.<br />
<br />
Project should not inadvertently duplicate functionality present in other OpenStack projects. If it does, it should have a clear plan and timeframe to prevent long-term scope duplication.<br />
<br />
Project should leverage existing functionality in other OpenStack projects as much as possible<br />
<br />
<br />
'''Maturity'''<br />
<br />
Project should have an active team of contributors<br />
<br />
Project should not have a major architectural rewrite planned<br />
<br />
<br />
'''Process'''<br />
<br />
Project must be hosted under stackforge (and therefore use git as its VCS)<br />
<br />
Project must obey the OpenStack coordinated project interface (such as tox, pbr, global-requirements...)<br />
<br />
Project should use oslo libraries or oslo-incubator where appropriate<br />
<br />
If the project is not part of an existing program, it needs to file for a new program concurrently with the Incubation request, and fulfill the corresponding requirements.<br />
<br />
Project must have a well-defined core review team, with reviews distributed amongst the team (and not being primarily done by one person)<br />
<br />
Reviews should follow the same criteria as OpenStack projects (2 +2s before +A)<br />
<br />
Project should use the official openstack lists for discussion<br />
<br />
<br />
<br />
'''API'''<br />
<br />
Project APIs should be reasonably stable<br />
<br />
Project must have a REST API with at least a JSON entity representation<br />
<br />
Project must have a Python client library API for its REST API<br />
<br />
<br />
'''QA'''<br />
<br />
Project must have a basic devstack-gate job set up<br />
<br />
'''Documentation / User support'''<br />
<br />
Project must have docs for developers who want to contribute to the project<br />
<br />
Project should have API documentation for devs who want to add to the API, updated when the code is updated<br />
<br />
<br />
'''Legal requirements'''<br />
<br />
Project must be licensed under the Apache License v2<br />
<br />
Project must have no library dependencies which effectively restrict how the project may be distributed or deployed [1]<br />
<br />
All contributors to the project must have signed the CLA<br />
<br />
Project must have no known trademark issues [2]</div>