Official Title: OpenStack Common Libraries
PTL: Doug Hellmann <firstname.lastname@example.org>
To produce a set of python libraries containing code shared by OpenStack projects. The APIs provided by these libraries should be high quality, stable, consistent, documented and generally applicable.
- 1 The Oslo Team
- 2 Libraries
- 3 Principles
- 4 Incubation
- 5 FAQs
- 6 Resources
The Oslo Team
The Oslo program brings together generalist code reviewers and specialist API maintainers. They share a common interest in tackling copy-and-paste technical debt across the OpenStack project.
Generalist Code Reviewers
Oslo's core reviewers take on a generalist role on the project. They are folks with good taste in Python code who provide constructive input in their reviews and make time to review any patch submitted to the project, irrespective of the area a given patch targets.
Specialist API Maintainers
Each library or incubating API has one or more specialist maintainers who have responsibility for evolving the API in question. They work to ensure the API meets the needs of all OpenStack projects and help to ensure the APIs are widely adopted across the project, wherever the APIs can reduce duplication of functionality.
Getting in Touch
We use the email@example.com mailing list for discussions and we all hang out on #openstack-dev.
Libraries
The following libraries are currently published by the Oslo program. Where we felt that a library had real potential for widespread use outside OpenStack, we chose not to include it in the oslo namespace.
oslo.config is a library for parsing configuration files and command line arguments. It is maintained by Mark McLoughlin.
Please file bugs in the oslo project in launchpad.
See this historical blueprint describing the initial requirements for the API.
pbr (or Python Build Reasonableness) is a set of sensible default setuptools behaviours. It is maintained by Monty Taylor.
Please file bugs in the pbr project in launchpad.
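As an illustration of what pbr buys you, a project adopting it typically reduces its setup.py to a short stub and keeps all of its metadata in setup.cfg, which pbr reads at build time. The snippet below is the canonical stub of that era; any project name or metadata would live in setup.cfg, not here:

```python
# setup.py -- with pbr, this file is boilerplate; the real metadata
# (name, author, classifiers, packages) lives in setup.cfg.
import setuptools

setuptools.setup(
    setup_requires=['pbr'],  # fetch pbr before the build starts
    pbr=True)                # hand control of setup() over to pbr
```

pbr then derives things like the version from git metadata rather than from hand-edited fields.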
hacking is a set of tools for enforcing coding style guidelines. It is maintained by Joe Gordon and Sean Dague.
Please file bugs in the hacking project in launchpad.
The oslo.messaging library provides a messaging API which supports RPC and notifications over a number of different messaging transports. It is maintained by Mark McLoughlin.
Bugs and blueprints should be filed using the oslo.messaging launchpad project.
This etherpad captures the latest status and background to this project.
oslo.sphinx provides theme and extension support for Sphinx documentation from the OpenStack project. It is maintained by Doug Hellmann.
Please file bugs in the oslo project in launchpad.
oslo.version handles getting the version for an installed piece of software from the python metadata that already exists. It is maintained by Monty Taylor.
Please file bugs in the oslo project in launchpad.
oslo.db is a proposed database handling library.
cookiecutter is a project that creates a skeleton OpenStack project from a set of templates.
Please file bugs in the oslo project in launchpad.
oslo.rootwrap allows fine-grained filtering of shell commands to run as root from OpenStack services.
Please file bugs in the oslo project in launchpad and tag them with "rootwrap".
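For illustration, a rootwrap filter definition file looks like the following; the file path and the specific command entries here are made-up examples, not shipped defaults:

```ini
# /etc/nova/rootwrap.d/compute.filters -- example path
[Filters]
# CommandFilter: allow exactly this command to be run as the given user
kpartx: CommandFilter, kpartx, root
# RegExpFilter: match the full command line against a regular expression
tunctl: RegExpFilter, tunctl, root, tunctl, -b, -t, .*
```

A service invokes privileged commands through the rootwrap wrapper, which executes only those command lines that match one of the configured filters.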
Principles
APIs included in Oslo should reflect a rough consensus across the project on the requirements and design for that use case. New OpenStack projects should be able to use an Oslo API safe in the knowledge that, by doing so, the project is being a good OpenStack citizen and building upon established best practice.
To that end, we keep a number of principles in mind when designing and evolving Oslo APIs:
- The API should be generally useful and a "good fit" - e.g. it shouldn't encode any assumptions specific to the project it originated from, it should follow a style consistent with other Oslo APIs and should fit generally in a theme like error handling, configuration options, time and date, notifications, WSGI, etc.
- The API should already be in use by a number of OpenStack projects
- There should be a commitment to adopt the API in all other OpenStack projects (where appropriate) and there should be no known major blockers to that adoption
- The API should represent the "rough consensus" across OpenStack projects
- There should be no other API in OpenStack competing for this "rough consensus"
- It should be possible for the API to evolve while continuing to maintain backwards compatibility with older versions for a reasonable period - e.g. compatibility with an API deprecated in release N may only be removed in release N+2
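The backwards-compatibility principle can be sketched in plain Python: a renamed entry point keeps the old name alive as a warning-emitting alias for the whole deprecation window. The function names below are hypothetical, purely for illustration:

```python
import warnings

# Hypothetical API rename, illustrating the rule that an API deprecated
# in release N may only be removed in release N+2.

def fetch_server(server_id):
    """The new, preferred entry point."""
    return {"id": server_id}

def get_server(server_id):
    """Old entry point: deprecated in release N, removed no earlier
    than release N+2, and kept working in the meantime."""
    warnings.warn(
        "get_server() is deprecated; use fetch_server() instead",
        DeprecationWarning, stacklevel=2)
    return fetch_server(server_id)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = get_server("abc123")

print(result)       # {'id': 'abc123'} -- old callers keep working
print(len(caught))  # 1 -- but they are told to migrate
```

Consumers get a full release cycle's warning before the old name disappears, which is what lets projects upgrade Oslo libraries independently of each other.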
Incubation
The process of developing a new Oslo API usually begins by taking code which is common to some OpenStack projects and moving it into the oslo-incubator repository. New APIs live in the incubator until they have matured to meet the criteria described above.
While incubating, all APIs should have a specialist API maintainer. The responsibility of these maintainers and the list of maintainers for each incubating API is documented in the MAINTAINERS file in oslo-incubator.
Developers making major changes to incubating APIs in oslo-incubator must be prepared to update the copies in the projects which have previously imported the code.
Incubation shouldn't be seen as a long term option for any API - it is merely a stepping stone to inclusion into a published Oslo library.
Please file bugs for incubating APIs in the oslo project in launchpad.
We track the graduation status of incubated code in Oslo/GraduationStatus.
Syncing Code from Incubator
APIs which are incubating can be copied into individual OpenStack projects from oslo-incubator using the update.py script provided. An openstack-common.conf configuration file in the project describes which modules to copy and where they should be copied to.
Usually the API maintainer or those making significant changes to an API take responsibility for syncing that specific module into the projects which use it by doing e.g.:
$> cd ../
$> git clone .../oslo-incubator
$> cd oslo-incubator
$> python update.py --nodeps --base nova --dest-dir ../nova --modules jsonutils,gettextutils
Alternatively, it can make sense for someone to batch sync more minor changes into a project. To sync all of the incubator code used by a specific project, you can do:
$> python update.py ../nova
Copying the config,exception,extensions,utils,wsgi modules under the nova module in ../nova
In this latter case, the update.py script uses the openstack-common.conf config file to determine which modules to copy. The format of that file is e.g.:
$> cd ../nova
$> cat openstack-common.conf
[DEFAULT]
# The list of modules to copy from oslo-incubator
modules=cfg,iniparser
# The base module to hold the copy of openstack.common
base=nova
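The file is plain INI, so its contents can be read with stock tooling. The snippet below is a modern-Python sketch of parsing it, inlining the sample contents shown above; it is not how update.py itself is implemented:

```python
import configparser

# The same openstack-common.conf contents shown above, inlined here
# so the sketch is self-contained.
SAMPLE = """\
[DEFAULT]
modules=cfg,iniparser
base=nova
"""

parser = configparser.ConfigParser()
parser.read_string(SAMPLE)

# 'modules' is a comma-separated list of incubator modules to copy;
# 'base' is the package the copies land under (here, nova.openstack.common).
modules = parser.get("DEFAULT", "modules").split(",")
base = parser.get("DEFAULT", "base")

print(modules)  # ['cfg', 'iniparser']
print(base)     # nova
```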
Projects which are using such incubating APIs must avoid ever modifying their copies of the code. All changes should be made in oslo-incubator itself and copied into the project.
Code in the incubator is expected to move out to its own repository to be packaged as a standalone library or project.
When that process starts, the MAINTAINERS file should be updated so the status of the module(s) is "Graduating". While the module is in the Graduating state, bug fixes and features will need to be maintained in the incubator and in the new library.
After the first release of the new library, the status of the module(s) should be updated to "Obsolete." During this phase, only critical bug fixes will be allowed in the incubator version of the code. New features and minor bugs should be fixed in the released library, and effort should be spent focusing on having downstream projects consume the library.
After all integrated projects that use the code are using the library instead of the incubator, the module(s) can be deleted from the incubator.
Graduating modules need to be made part of the integrated gate, and devstack needs to know how to install them. Copy how oslo.messaging did it at:
- config/blob/master/modules/openstack_project/files/zuul/layout.yaml (job definition)
- devstack-gate/blob/master/devstack-vm-gate-wrap.sh (PROJECTS=)
- devstack/stackrc (OSLOMSG_*=)
Try to push the changes in that order: devstack-gate => config => devstack (this might make the change self-testing by the last step).
FAQs
Why aren't alpha releases of oslo.config published to PyPI?
oslo.config is considered part of the OpenStack coordinated release and follows the same release cadence.
The thinking behind this is that any major development that happens in oslo.config is done in support of the projects in OpenStack's coordinated release. Keeping oslo.config on the same release schedule as those projects brings the same benefits as keeping, say, nova and glance on the same schedule. For example, we front-load the major, risky development towards the start of the release cycle, and towards the end of the cycle we restrict ourselves to bugfixes. This reduces the risk of major regressions blocking us from doing a release. That's not to say that oslo.config trunk should ever intentionally be broken, any more than nova trunk should. We are committed to supporting folks who wish to deploy from trunk but still recognize that (for the foreseeable future, at least) releases are less risky to consume than trunk.
The point is simple - oslo.config is part of the OpenStack release-based development process.
Users of OpenStack releases may just be using 'pip install' to install OpenStack dependencies. If we published oslo.config development releases to PyPI, we'd find ourselves inadvertently breaking working configurations for some users and having to scramble to release fixes for those issues. During our heavy development phase, we really want to avoid the pressure of knowing that any release we make may immediately break working installations of the previous OpenStack release.
You might think that with the level of test coverage we have and our commitment to API stability, the risk of breaking working setups should be minimal. Our experience with oslo.config in Havana teaches us otherwise. At the beginning of the Havana development cycle we expected to be making minimal changes to oslo.config but actually made pretty massive changes to fix some broken semantics. This, in turn, caused issues for Quantum like this and this. In one case, we subtly changed a public oslo.config API we really didn't consider public and in another case Quantum was using a clearly internal oslo.config API. One of the issues was caught by Quantum unit tests, the other issue wasn't.
Ok, so why not restrict the versions of oslo.config used by releases? e.g. why didn't Grizzly restrict itself to '>=1.1.0,<1.2'? It's quite simple - we need to be able to run a mixture of projects from different releases in the same environment. You may choose to upgrade Glance before Nova, for example. If Havana Glance only works with 1.2 and Grizzly Nova only works with 1.1, that doesn't work. This is also the reason why Oslo libraries need to make such a strong commitment to API stability.
So, our requirement is that during the development cycle, the development versions of OpenStack projects should be able to use the development versions of Oslo libraries. But that users of existing stable releases should not be exposed to development versions of Oslo libraries.
The best solution we've come up with to date is to publish oslo.config pre-releases (aka alphas) using the X.Y.ZaN numbering scheme to tarballs.openstack.org and reference them in requirements.txt using:
-f http://tarballs.openstack.org/oslo.config/oslo.config-1.2.0a3.tar.gz#egg=oslo.config-1.2.0a3
oslo.config>=1.2.0a3
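This works because, under Python's pre-release version ordering, 1.2.0a3 sorts before the final 1.2.0, so oslo.config>=1.2.0a3 is satisfied by that alpha, by later alphas, and by the eventual final release. A toy comparison function (not pip's real version logic) makes the ordering concrete:

```python
import re

def parse(version):
    """Toy parser for X.Y.Z and X.Y.ZaN version strings (illustration only).

    A final release gets an "alpha number" of infinity, so that
    1.2.0a3 < 1.2.0 < 1.2.1 -- the pre-release ordering installers rely on.
    """
    m = re.match(r"^(\d+)\.(\d+)\.(\d+)(?:a(\d+))?$", version)
    release = tuple(int(p) for p in m.groups()[:3])
    alpha = int(m.group(4)) if m.group(4) else float("inf")
    return (release, alpha)

# 'oslo.config>=1.2.0a3' admits the alpha itself, later alphas,
# and the eventual final release:
assert parse("1.2.0a3") >= parse("1.2.0a3")
assert parse("1.2.0a4") >= parse("1.2.0a3")
assert parse("1.2.0") >= parse("1.2.0a3")
# ...while the alpha still sorts before the final release:
assert parse("1.2.0a3") < parse("1.2.0")
```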
The rationale for using this exact method is explained in this review.
We're still looking for better solutions.
We've discussed only pushing alpha releases to OpenStack's PyPI mirror and including a --extra-index-url in our requirements.txt pointing to that mirror during the development cycle. While we're only talking about a very small number of libraries, that really isn't much different from the above though.
The new --pre feature in pip 1.4 looked like a very promising solution. However, we realized that there will be people using older versions of pip for a long time to come. If we push pre-releases to PyPI, pip 1.4 users wouldn't be exposed to them by default but users of older pip would.
We are currently scheming about the possibility of only publishing pre-releases in wheel format, which would mean they could only be consumed by pip 1.4 users. This solution does indeed look promising, but we're still thinking through the implications.
Why does oslo.config have a CONF object? Global objects SUCK!
Indeed. Well, it's a long story and well documented in mailing list archives if anyone cares to dig up some links.
Around the time of the Folsom Design Summit, an attempt was made to remove our dependence on a global object like this. There was massive debate and, in the end, the rough consensus was to stick with using this approach.
Nova, through its use of the gflags library, used this approach from commit zero. Some OpenStack projects didn't initially use this approach, but most now do. The idea is that having all projects use the same approach is more important than the objections to the approach. Sharing code between projects is great, but by also having projects use the same idioms for stuff like this it makes it much easier for people to work on multiple projects.
This debate will probably never completely go away, though.
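For readers unfamiliar with the idiom, its shape is a single module-level object that every module imports and registers its own options on. The class below is a toy mimic of that pattern for illustration; it is not the real oslo.config implementation, and the option name is a placeholder:

```python
# Toy sketch of the global-configuration idiom: one shared object,
# with each module registering the options it owns on it.

class Config(object):
    def __init__(self):
        self._opts = {}

    def register_opt(self, name, default=None):
        # setdefault keeps any value set before registration
        self._opts.setdefault(name, default)

    def set_override(self, name, value):
        self._opts[name] = value

    def __getattr__(self, name):
        try:
            return self._opts[name]
        except KeyError:
            raise AttributeError(name)

# The single, shared instance -- the "global object" the FAQ refers to.
CONF = Config()

# Each module registers the options it owns...
CONF.register_opt("bind_port", default=9292)
# ...and any module can then read them as attributes.
print(CONF.bind_port)  # 9292
```

The upside, as the FAQ says, is a shared idiom across every project; the downside is the usual set of problems with global state (test isolation, hidden coupling), which is what the recurring debate is about.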
Resources
We use launchpad blueprints to track design proposals.
The Icehouse blueprints detail the work currently underway.
Etherpads from sessions at the Icehouse Design summit.
- Creating REST services with Pecan/WSME
- OpenStack Client Update
- Updates to hacking, our code style enforcement tool
- I18n policies of messages
- oslo.messaging - API design, plans for Icehouse
- oslo.config enhancements, including removing import side-effects from consumers
- Rootwrap: Icehouse plans
- State of affairs in DB schema migrations
- Towards more structured & qualified notifications
- Merge logging and notifications
- Writing a service synchronisation library
- Oslo incubated libraries status
- Aggressively split oslo-incubator
Messaging Related Work in Havana
See this etherpad for yet more details.
Etherpads from sessions at the Havana Design summit.
- Oslo Status and Plans
- Pecan/WSME Status
- No-downtime DB migrations
- Rootwrap improvements for the Havana cycle
- Common packaging support and code analysis tools
- RPC API review
- ZeroMQ RPC for Ceilometer and Quantum
- Message queue access control
- RPC Message Signing and Encryption
- Zipkin tracing in OpenStack
- i18n strategy for OpenStack services
- Common XenAPI library
Etherpads from sessions at the Grizzly Design summit.
- Oslo status and plans
- Unified CLI, take 2
- Adding optional security to RPC
- Services framework for command and control
- Using the message bus for messaging
- Choosing a WSGI framework for API services
- XML request/response processing
- Entrypoints based plugins
- Unified rootwrap & password management
- A common database
- Instrumentation monitoring
Etherpads from sessions at the Folsom Design summit.