

Revision as of 18:30, 10 March 2015 by Smaffulli (talk | contribs) (added to working groups)
Kilo design session etherpad

Policy as a service ("Congress")

Here are some external resources with detailed information about Congress.

If you'd like more information, join #congress on freenode, or contact Tim Hinrichs (timothy.l.hinrichs@gmail.com).


Congress is an OpenStack project to provide policy as a service across any collection of cloud services in order to offer governance and compliance for dynamic infrastructures.

Why Congress

IT services will always need to be governed and brought into compliance with business-level policies.

In the past, policy was enforced manually, e.g. by someone sending an email asking for an application to be added to the network, secured by specific firewall entries, connected to an agreed-on storage, and so on. In the cloud era, IT has become more agile: users expect immediate delivery of services, a level of responsiveness that is unattainable by the team responsible for governance. Hence, manual enforcement is no longer feasible.

Both enterprises and vendors have fielded engines for enforcing policy (semi-)automatically, creating a fragmented market in which enterprises reinvent the wheel while maintaining their own code, and vendors fail to meet enterprise needs, either for technical reasons or because their solutions require vertical integration and lock-in.

The Congress policy service enables IT departments to extend their OpenStack footprint by onboarding new applications while maintaining the strong compliance and governance dictated by their own business policies. All of this leverages a community-driven implementation into which vendors can plug via a common interface.

What is Congress

Congress aims to provide an extensible open-source framework for governance and regulatory compliance across any cloud services (e.g. application, network, compute and storage) within a dynamic infrastructure. It is a cloud service whose sole responsibility is policy enforcement.

Congress aims to include the following functionality:

  • Allow cloud administrators and tenants to use a high-level, general purpose, declarative language to describe business logic. The policy language does not include a fixed collection of policy types or built-in enforcement mechanisms; rather, a policy simply defines which states of the cloud are in compliance and which are not, where the state of the cloud is the collection of data provided by the cloud services available to Congress. Some examples:
    • Application A is only allowed to communicate with application B.
    • A virtual machine owned by tenant A should always have a public network connection if tenant A is part of group B.
    • Virtual machine A should never be provisioned in a different geographic region than storage B.
  • Offer a pluggable architecture that connects to any collection of cloud services
  • Enforce policy
    • Proactively: preventing violations before they occur
    • Reactively: correcting violations after they occur
    • Interactively: giving administrators insight into policy and its violations, e.g. identifying violations, explaining their causes, computing potential remediations, and simulating a sequence of changes.
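The first two enforcement modes can be sketched in miniature. In this toy Python example, the policy, the state model, and all names are invented purely for illustration; Congress's real engine evaluates Datalog policies against data drawn from cloud services.

```python
# Toy sketch of proactive vs. reactive enforcement. The policy here is:
# every VM attached to a non-public network must be owned by "admin".
# State model and policy are invented for illustration only.

def violations(state):
    """Reactive check: return the (vm, net) pairs that violate the policy."""
    return {
        (vm, net)
        for vm, net in state["connections"]
        if net not in state["public_networks"] and state["owner"][vm] != "admin"
    }

def try_connect(state, vm, net):
    """Proactive check: apply the change only if the result stays compliant."""
    proposed = {
        "connections": state["connections"] | {(vm, net)},
        "public_networks": state["public_networks"],
        "owner": state["owner"],
    }
    if violations(proposed):
        return state, False   # reject: the change would create a violation
    return proposed, True     # accept the change

state = {
    "connections": set(),
    "public_networks": {"net_pub"},
    "owner": {"vm1": "alice", "vm2": "admin"},
}

state, ok = try_connect(state, "vm1", "net_pub")   # public network: allowed
assert ok
state, ok = try_connect(state, "vm1", "net_priv")  # alice on a private net: rejected
assert not ok
print(sorted(violations(state)))                   # no violations slipped through
```

The reactive path scans existing state for violations to correct; the proactive path vets a proposed change before it happens, which is why no violation ever enters the state above.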

Policy Language

The policy language for Congress is Datalog, which is essentially SQL with a syntax closer to that of traditional programming languages. This declarative language was chosen because its semantics are well known to a broad range of DevOps engineers, yet its syntax is more terse, making it better suited to expressing real-world policies. The grammar is given below.

<policy> ::= <rule>*
<rule> ::= <atom> COLONMINUS <literal> (COMMA <literal>)*
<literal> ::= <atom>
<literal> ::= NOT <atom>
<atom> ::= TABLENAME LPAREN <term> (COMMA <term>)* RPAREN
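For illustration, here is a rule conforming to this grammar together with a naive bottom-up evaluation in Python. The table names, column layout, and data are invented for the example; Congress's actual engine and data-source tables differ.

```python
# A Datalog rule in the grammar above (table and column names invented):
#
#   error(vm) :- virtual_machine(vm), network(vm, net),
#                not public_network(net), owner(vm, user), not admin(user)
#
# i.e. a VM is in error if it is attached to a non-public network and its
# owner is not an admin. Naive bottom-up evaluation joins the positive body
# tables, filters with the negated ones, and collects the derived rows.

virtual_machine = {("vm1",), ("vm2",)}
network         = {("vm1", "net_priv"), ("vm2", "net_pub")}
public_network  = {("net_pub",)}
owner           = {("vm1", "alice"), ("vm2", "bob")}
admin           = {("bob",)}

def eval_error():
    errors = set()
    for (vm,) in virtual_machine:
        for (v, net) in network:
            if v != vm or (net,) in public_network:
                continue                       # join on vm; apply "not public_network"
            for (w, user) in owner:
                if w == vm and (user,) not in admin:
                    errors.add((vm,))          # head of the rule: error(vm)
    return errors

print(sorted(eval_error()))  # vm1 is on a private network and alice is not an admin
```

Note that the rule does not say how to fix anything; it only defines which states are non-compliant, which is exactly the separation of policy from enforcement described above.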

Use Cases

Detailed use cases can be viewed in the Google Doc here: Congress Use Cases. We are currently migrating these use cases to follow the Blueprint / Specs workflow.

Below we list use cases and the Congress release at which we hope to support that use case. Please feel free to add additional use cases that you would like Congress to support.

  • Every network connected to a VM must either be public or owned by someone in the same group as the VM's owner. (target: alpha)
  • Define a group hierarchy for servers where base groups are defined in LDAP; when a base-level group change occurs, update network ACLs. (target: alpha)
  • Every time a vulnerability scanner reports a vulnerability, disconnect the VM from the network, patch the vulnerability, and reconnect. (target: beta)
  • Express a rule to determine whether a Solum user may advance an assembly from one Environment to another; see adrian_otto on IRC for details. (target: TBD)

Future Features

Here is a list of features that would be useful in future releases. We have not prioritized or assigned any of these, so if any of you developers think they sound fun, let us know and we can explain more.

  • Basic Policy language implementation
    • Multi-threaded Datalog implementation
    • Bottom-up datalog evaluation
    • Query optimization
    • Materialized implementation: automated view selection/inlining
  • Enhanced Policy language
    • ActiveDirectory facade
    • Syntax improvements (modals like insert/delete)
    • Support for compound objects (safe, stratified recursion)
    • Richer support for describing actions in Enforcement policy
    • Modules
  • Policy structure
    • Multi-tenant
    • Multi-stakeholder (tenants, finance, operations, etc.)
  • Enforcement
    • Execution of (rich) actions
    • Carve out subpolicies and push to other components, e.g. Neutron
    • Add consultation with Congress to other OS components, e.g. Nova/Neutron
    • Proper remediation enumeration with Classification+Action policies
    • Find ways of automatically choosing the proper remediation strategy (e.g. priorities/monotonicity)
    • Give the cloud owner a way of configuring how proactive/reactive enforcement should be, based on information from separate policies.
  • Libraries
    • Data source drivers for common OS and non-OS components
    • HIPAA, etc. encoding
    • Ontologies for different sectors, e.g. finance
  • Policy Analysis
    • Look for infinite loops through Enforcement policy (using Action policy)
    • Compare Access control policy and Classification policy for redundancy
    • Change impact analysis
  • Dashboard
    • IDE for policy (different levels: raw-Datalog, AD, checkbox-based)
    • List violations
    • Explain violations (step-by-step tracing thru policy)
    • Simulate state change and action execution
    • Enumerate remediations for a given violation
  • Architecture and API
    • Formalize and implement full introspection and query APIs
    • Distribute across multiple nodes
    • Ensure Congress can use another Congress instance as data
  • Authentication and Access Control
    • Add authentication and access controls on API/dashboard
    • When remediating, which user(s) are executing actions? Does Congress need admin credentials for all its cloud services or are user credentials part of actions? Need proper storage for those credentials. Etc.

Relationship to Other Projects

Related OpenStack components

  • Keystone: Keystone is an identity service providing authentication and high-level authorization for OpenStack. Congress can leverage Keystone as an input for policies. For example, an auditor might want to ensure that the running system is consistent with current Keystone authorization decisions.
  • Heat: Heat is an orchestration service for application provisioning and lifecycle management. Congress can ensure that applications managed by Heat are consistent with business policy.
  • Mistral: Mistral is a task management service, or in other words Workflow as a service. Its primary use cases include Cron-for-the-cloud, execution of long-running tasks, and big data analysis. Congress could potentially utilize Mistral to execute actions that bring the cloud back into policy compliance.

Policy Initiatives

  • SolverScheduler (Nova blueprint): The SolverScheduler provides an interface for using different constraint solvers to solve placement problems for Nova. Depending on how it is applied, it could be used for initial provisioning, re-balancing loads, or both.
  • Gantt: A scheduler framework for use by different OpenStack components. Used to be a subgroup of Nova and focused on scheduling VMs based on resource utilization. Includes plugin framework for making arbitrary metrics available to the scheduler.
  • Neutron policy group: This group aims to add a policy API to Neutron, where tenants express policy between groups of networks, ports, etc., and that policy is enforced. Policy statements are of the form "for every network flow between groups A and B that satisfies these conditions, apply a constraint on that flow". The constraints that can be enforced on a flow will grow as the enforcement engine matures; currently, the constraints are Allow and Deny, but there are plans for quality-of-service constraints as well.
  • Open Attestation: This project provides an SDK for verifying host integrity. It provides some policy-based management capabilities, though documentation is limited.
  • Policy-based Scheduling Module (Nova blueprint): This effort aims to schedule Nova resources per client, per cluster of resources and per context (e.g. overload, time, etc.). A proof of concept was presented at the Demo Theater at OpenStack Juno Summit.
  • Tetris: This effort provides condition-action policies (under certain conditions, execute these actions). It is intended to be a generic condition-action engine handling complex actions and optimization. This effort subsumes the Runtime Policies blueprint within Nova. It also appears to subsume the Neat effort. Tetris and Congress have recently decided to merge because their goals and approaches are highly aligned.
  • Convergence Engine (Heat spec): This effort separates the ideas of desired state and observed state for the objects Heat manages. The Convergence Engine will detect when the desired state and observed state differ and take action to eliminate those differences.
  • Swift Storage Policies: A Swift storage policy describes a virtual storage system that Swift implements with physical devices. Today each policy dictates how many partitions the storage system has, how many replicas of each object it should maintain, and the minimum amount of time that must elapse after a partition is moved before it can be moved again.
  • Graffiti: Graffiti aims to store and query (hierarchical) metadata about OpenStack objects, e.g. tagging a Glance image with the software installed on that image. Currently, the team is working within other OpenStack projects to add user interfaces for people to create and query metadata and to store that metadata within the project's database. This project is about creating metadata, which could be useful for writing business policies, not about policies over that metadata.

Congress is complementary to all of these efforts. It is intended to be a general-purpose policy component and hence could (potentially) express any of the policies described above. However, to enforce those policies Congress would carve off subpolicies and send them to the appropriate enforcement point in the cloud. For example, Congress might carve off the compute load-balancing policy and send it to the Runtime Policy engine, and/or carve off the networking policy and send it to the Neutron policy engine. Part of the goal of Congress is to give administrators a single place to write and inspect the policy being enforced throughout the datacenter or cloud, distribute the relevant portions of that policy to all of the available enforcement points, and monitor the state of the cloud to let administrators know if the cloud is in compliance. (Congress also attempts to correct policy violations when they occur, but optimization policies such as many of those addressed above will not be enforceable by Congress directly.)

[Thanks to Jay Lau, Gokul Kandiraju, and others on the mailing list for help compiling this list.]

How To Propose a New Feature

Proposing a new feature requires you to create two descriptions of that feature: a blueprint and a spec. Before code that implements a new feature is merged, its blueprint must be approved. A blueprint is approved once its spec is approved. A spec is a proxy for a blueprint that everyone in the community can comment on and help refine. (If Launchpad, the software OpenStack uses for managing blueprints and bugs, made it possible for the community to add comments to a blueprint, we would not need specs at all.)

  1. Create a blueprint briefly describing the feature. Include just the name, title, and summary.
  2. Create an additional description of your feature, called a spec, and add it to the congress-specs repo.
    1. Check out the congress-specs repo.
    2. Create a new file, whose name is similar to the blueprint name, in the current release folder, e.g. congress-specs/specs/kilo/your-spec-here.rst
    3. Use reStructuredText (RST) (an RST tutorial) to describe your new feature by filling out the spec template.
    4. Push your changes to the congress-specs repo, a process that is explained here
  3. Go back to your blueprint and add a link (e.g. https://github.com/stackforge/congress-specs/tree/master/specs/kilo/your-spec-here.rst) to your spec in the Specification Location field on the Blueprint details.
  4. Participate in the discussion and refinement of your feature via Gerrit reviews
  5. Eventually, core reviewers will either reject or approve your spec
  6. If the spec is approved, a core reviewer will merge it into the repo and approve your blueprint. You then copy the finalized details from your spec into your blueprint.
  7. If your blueprint is either not approved or not implemented during a release cycle, it must be resubmitted for the next cycle.
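The spec-file part of step 2 can be sketched as follows. This is a local illustration only: the repository below is freshly initialized rather than a real checkout of congress-specs, the commit is made directly instead of being pushed through Gerrit, and the spec title and section heading are placeholders.

```shell
# Local sketch of steps 2.1-2.4; in practice you would clone the real
# congress-specs repo and submit the change through Gerrit (e.g. git-review).
git init -q congress-specs
cd congress-specs
mkdir -p specs/kilo

# Placeholder spec; the real template has many more required sections.
cat > specs/kilo/your-spec-here.rst <<'EOF'
==================
Your Feature Title
==================

Problem description
===================

Describe the problem the feature solves.
EOF

git add specs/kilo/your-spec-here.rst
git -c user.email=you@example.com -c user.name="You" \
    commit -qm "Add spec for your-spec-here"
git log --oneline
```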

Incubation Plan

The official TC governance is here: https://wiki.openstack.org/wiki/Governance/Approved/Incubation. Below are the incubation requirements from the TC, taken from https://github.com/openstack/governance/blob/master/reference/incubation-integration-requirements.rst as of 3 June 2014. This RST is still under development. Our incubation proposal will likely be debated within the TC in an etherpad.

The TC will evaluate the project scope and its complementarity with existing integrated projects and other official programs, look into the project technical choices, and check a number of requirements, including (but not limited to):


  • Project must have a clear and defined scope.
  • Project's scope should represent a measured progression for OpenStack as a whole.
  • Project should not inadvertently duplicate functionality present in other OpenStack projects. If it does, it should have a clear plan and timeframe to prevent long-term scope duplication.
  • Project should leverage existing functionality in other OpenStack projects as much as possible.
  • Project should have an active team of contributors.
  • Project should not have a major architectural rewrite planned.
  • Project must be hosted under stackforge (and therefore use git as its VCS).
  • Project must obey the OpenStack coordinated project interface (such as tox, pbr, global-requirements...).
  • Project should use oslo libraries or oslo-incubator where appropriate.
  • If the project is not part of an existing program, it needs to file for a new program concurrently with the incubation request and fulfill the corresponding requirements.
  • Project must have a well-defined core review team, with reviews distributed amongst the team (and not primarily done by one person).
  • Reviews should follow the same criteria as OpenStack projects (two +2s before +A).
  • Project should use the official OpenStack lists for discussion.
  • Project APIs should be reasonably stable.
  • Project must have a REST API with at least a JSON entity representation.
  • Project must have a Python client library API for its REST API.
  • Project must have a basic devstack-gate job set up.

Documentation / User support

  • Project must have docs for developers who want to contribute to the project.
  • Project should have API documentation for devs who want to add to the API, updated when the code is updated.

Legal requirements

  • Project must be licensed under the Apache License v2.
  • Project must have no library dependencies which effectively restrict how the project may be distributed or deployed [1].
  • All contributors to the project must have signed the CLA.
  • Project must have no known trademark issues [2].