

Revision as of 09:20, 7 October 2013 by Markmc (talk | contribs) (fix os-*-config URLs and add os-collect-config)

TripleO - OpenStack on OpenStack

TripleO is a program aimed at installing, upgrading and operating OpenStack clouds using OpenStack's own cloud facilities as the foundation - building on Nova, Neutron and Heat to automate fleet management at datacentre scale (while also scaling down to as few as 2 machines).

We gave a presentation about TripleO at the Portland 2013 summit.

TripleO is raw but usable today - see our tripleo-incubator for deployment instructions.

Folks working on TripleO are contributing to Nova, Neutron, Heat and Ironic to ensure they have the facilities needed to deploy to bare metal at scale. We also shepherd a small number of projects ourselves:

  • tuskar - stateful API for managing the deployment of OpenStack
  • tuskar-ui - UI and Horizon plugin for managing the definition of a cloud(s)
  • python-tuskarclient - API client for tuskar.


Our overall story is to invest in robust, solid automation so that we can do continuous integration and deployment testing of a cloud at the bare-metal layer, then deploy the very same tested images to production clouds using Nova baremetal (now Ironic) rather than a separate management stack - leading to shared expertise both in deployments in the cloud and of the cloud. Because we can set up OpenStack in a fully HA configuration, we can host the bare-metal cloud used to deploy OpenStack in itself, giving a fully self-sustaining HA cluster. On top of that we intend to build out a solid operations story: baseline monitoring that is autoconfigured as the overcloud - the cloud we deploy on top of the bare-metal "under" cloud - scales up.
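As a concrete illustration, the image-based deploy flow described above might look like the following from an operator's shell. This is a hedged sketch rather than the canonical tripleo-incubator procedure: the image element names, template filename and parameter name are illustrative assumptions, not the actual values used by the project.

```shell
# Illustrative sketch of the TripleO deploy flow (names are assumptions).

# 1. Build a golden disk image with diskimage-builder; "ubuntu" and
#    "boot-stack" stand in for whatever elements the deployment needs.
disk-image-create -o overcloud ubuntu boot-stack

# 2. Register the tested image with Glance in the bare-metal undercloud,
#    so the exact bits that passed CI are what production will boot.
glance image-create --name overcloud --disk-format qcow2 \
    --container-format bare < overcloud.qcow2

# 3. Ask Heat to deploy that same image onto bare metal via Nova;
#    "overcloud.yaml" and "OvercloudImage" are hypothetical names.
heat stack-create overcloud -f overcloud.yaml \
    -P "OvercloudImage=overcloud"
```

The point of the sketch is the shape of the pipeline - one tested image flows from CI through Glance to Heat/Nova - not the specific commands, which you should take from the tripleo-incubator instructions.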

TripleO contributor cloud

In order to validate our design, we run our own continuously deployed cloud that any TripleO ATC can use.


A key goal of ours is to play nicely with folks who already have a deep investment in operational areas - such as automation via Chef/Puppet/Salt, or monitoring via icinga/assimilator etc. We're ensuring we have clean interfaces into which alternative implementations can be plugged [e.g. you can use Chef/Puppet/Salt to do the in-instance configuration of a golden TripleO disk image].
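To make the pluggable in-instance configuration interface concrete, here is a hedged sketch of how the os-collect-config / os-apply-config pair consumes Heat metadata. All paths, the metadata key and the exact flags are illustrative assumptions (check each tool's --help); the same JSON metadata could just as easily be handed to Chef/Puppet/Salt instead.

```shell
# Hedged sketch: rendering config from instance metadata.
# os-collect-config fetches metadata from Heat; os-apply-config renders
# Mustache templates from it. Here we fake the metadata by hand.

# Example metadata, shaped like what os-collect-config might deliver
# (the "keystone.host" key is a made-up example):
cat > /tmp/metadata.json <<'EOF'
{"keystone": {"host": "192.0.2.5"}}
EOF

# A Mustache template consuming that metadata; the template tree
# mirrors the filesystem layout of the files it produces:
mkdir -p /tmp/templates/etc
echo 'auth_host = {{keystone.host}}' > /tmp/templates/etc/example.conf

# Render the templates into an output tree (flags are assumptions):
os-apply-config -m /tmp/metadata.json -t /tmp/templates -o /tmp/out
cat /tmp/out/etc/example.conf
```

The clean seam is the JSON metadata: anything that can read it can own in-instance configuration, which is what lets Chef/Puppet/Salt slot in as alternative implementations.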

Review team

Anyone can do reviews, but only the 'tripleo-core' team can approve them to land. We operate with the OpenStack-standard two +2s, except in, well, exceptional circumstances.

The review team should look for reviews in all the following projects:

For simplicity, the following URL will show you reviews from the above projects that:

  • are open
  • have been verified by Jenkins
  • do not have a current -1 or -2 review
       TripleO Reviews
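Those three criteria map naturally onto a Gerrit search query. The following is a hypothetical sketch of what such a query looks like - the project names here are examples, not the canonical TripleO project list:

```
status:open label:Verified>=1 -label:Code-Review<=-1
  (project:openstack/tripleo-incubator OR project:openstack/tuskar)
```

In words: open changes, with a passing Verified vote from Jenkins, and no outstanding -1 or -2 code review, restricted to the TripleO projects.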