TripleO - OpenStack on OpenStack
TripleO is a program aimed at installing, upgrading and operating OpenStack clouds using OpenStack's own cloud facilities as the foundation - building on Nova, Neutron and Heat to automate fleet management at datacentre scale (while scaling down to as few as 2 machines).
We gave a presentation at the Portland 2013 summit about TripleO.
TripleO is raw but usable today - see our tripleo-incubator for deployment instructions.
Folks working on TripleO are contributing to Nova, Neutron, Heat and Ironic to ensure they have the facilities needed to deploy to bare metal at scale. We also have a small number of projects that we shepherd ourselves:
- tripleo-incubator (docs) - our incubator; new code lives here until we decide what the right long-term home for it is.
- os-collect-config - collect and cache metadata, run hooks on changes. See OsCollectConfig
- os-apply-config - small templating layer for writing out config files.
- os-refresh-config - react to Heat metadata changes and send Heat events
- os-cloud-config - common code for Tuskar and the seed initialisation logic - the initial configuration of a cloud after Heat completes
- diskimage-builder - build golden disk images
- tripleo-image-elements - rules for diskimage-builder for OpenStack golden images.
- tripleo-heat-templates - Heat templates for deploying OpenStack
- tripleo-ci - CI glue for TripleO
- Tuskar - Tuskar is a stateful API and UI for managing the deployment of OpenStack
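To make the os-collect-config / os-apply-config split concrete, here is a minimal Python sketch of the pattern those tools implement - gather metadata, then render a config file from it. This is illustrative only: the metadata keys and template below are invented for the example, and the real os-apply-config renders Mustache templates rather than using string.Template.

```python
import json
import string

# Metadata of the kind os-collect-config gathers and caches from Heat.
# The "db" structure here is made up for this sketch.
metadata = json.loads('{"db": {"host": "192.0.2.10", "port": "3306"}}')

# os-apply-config's job is to render config-file templates against that
# metadata; string.Template stands in for its Mustache templating here.
template = string.Template("connection = mysql://$host:$port/nova")

rendered = template.substitute(host=metadata["db"]["host"],
                               port=metadata["db"]["port"])
print(rendered)  # connection = mysql://192.0.2.10:3306/nova
```

os-refresh-config then completes the loop by re-running steps like this whenever the cached Heat metadata changes.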
Our overall story is to invest in robust, solid automation so that we can do continuous integration and deployment testing of a cloud at the bare-metal layer, then deploy the very same tested images to production clouds using nova baremetal (now Ironic) rather than a separate management stack. This leads to shared expertise both in deployments in the cloud and of the cloud itself. Because we can set up OpenStack in a fully HA configuration, we can host the bare-metal cloud used to deploy OpenStack in itself, giving a fully self-sustaining HA cluster. On top of that we intend to build out a solid operations story - baseline monitoring autoconfigured as the overcloud (the cloud we deploy on top of the bare-metal "under" cloud) scales up.
TripleO contributor cloud
In order to validate our design we have our own continuously deployed cloud that any TripleO ATC can use.
As a team we have responsibility for the design and quality of the code we're creating, and to respond to critical bugs and security issues in that, do reviews, triage bugs and generally support our users.
However, we also have things we haven't released yet that are in the same codebases, and we don't want to run ourselves ragged treating every bug as a regression, unless it's actually in something we've delivered and are maintaining.
Regressions in these things are firedrills which (as a team) we need to hop on and fix ASAP. If you find one, please report it to us as a Critical bug at https://bugs.launchpad.net/tripleo/+filebug. If you're a TripleO contributor and you find one, or see that one has been reported, please add a Firedrill card to the [TripleO kanban] (Kanban is an experiment at the moment, but so far we're finding it pretty useful).
If a particular TripleO endeavour isn't listed here, it's not yet supported. If you want it to be supported, add an item for it to the next TripleO meeting.
- The TripleO Cloud MVP2: ATCs should have usercodes, and the cloud resets entirely every hour.
- toci identifies devtest story issues *within the TripleO code*. We'll move to supporting everything once we're in the integrated gate.
Stable releases of OpenStack
[Stable branches etherpad] - like other OpenStack projects, if folks want to step up and maintain stable branches they can, but we won't create stable branches unless/until that happens.
A key goal of ours is to play nicely with folks who already have deep investment in operational areas - such as automation via Chef/Puppet/Salt, or monitoring via icinga/assimilator etc. We're ensuring we have clean interfaces that alternative implementations can plug into [e.g. you can use Chef/Puppet/Salt to do the in-instance configuration of a golden TripleO disk image].
All tripleo blueprints. When creating new blueprints please ensure you put 'tripleo' in the short name.
Anyone can do reviews, but only the 'tripleo-core' team can approve them to land. We operate with the standard OpenStack rule of two +2s, except in, well, exceptional circumstances. Where multiple people collaborate on a single patch, one of the +2s must come from someone who isn't an author of the patch.
The review team should look for reviews in all the following projects:
For simplicity, the following URL will show you reviews from the above projects that are open:
Triage for us consists of:
- assigning an importance
- putting any obvious tags (e.g. 'baremetal') on it
- setting status to 'triaged'.
(If you're a TripleO contributor and you're filing your own bug, you can skip 'confirm' and go straight to triaged - unless you believe the bug isn't real, in which case why are you filing it? :)
The bug triage team for TripleO is https://launchpad.net/~tripleo.
We're mostly using the process described at https://wiki.openstack.org/wiki/BugTriage, with two key differences:
- we don't use wishlist: things we'd like to do and things that we do wrong are both defects; except for regressions, the priority of the work is not affected by whether we've delivered the thing or not, and using wishlist just serves to flatten the priority of all unimplemented things into one bucket, which is not helpful.
- we use 'triaged', not just 'confirmed'.