TestingBrainstorm

Basic Testing Cluster Plan:

Jordan at Rackspace is setting up a cluster of machines, a portion of which will be allocated to our testing efforts.

We will be using Jenkins to spawn tests against those machines.

The tests will generally look like the following:

  • Reset a machine or set of machines to a specific configuration
  • On some machines we will run the unit tests (in a couple of ways) along with coverage tests
  • Some machines will be targeted by the smoketests (and other applicable integration tests)
  • Record the results (a rough sketch of such a job follows this list)
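
As a rough illustration, a Jenkins job for this flow could boil down to a small driver script. The sketch below is Python; the host names, the reset_node.sh helper, and the test entry points are placeholders standing in for whatever the cluster actually provides, not existing tools.

  #!/usr/bin/env python
  # Hypothetical Jenkins job driver: reset the target nodes, run the test
  # suites, and record the results.  All paths and host names are placeholders.
  import subprocess
  import sys

  NODES = ["test-node-01", "test-node-02"]          # assumed cluster hosts

  def run(cmd):
      """Run a shell command and return its exit code without raising."""
      return subprocess.call(cmd, shell=True)

  # 1. Reset each machine to a specific configuration (assumed helper script).
  for node in NODES:
      run("ssh %s sudo /usr/local/bin/reset_node.sh --config base" % node)

  results = {}
  # 2. Unit tests (and coverage) run locally on the Jenkins slave.
  results["unit"] = run("./run_tests.sh")           # assumed test entry point

  # 3. Smoke/integration tests aimed at the freshly reset nodes.
  results["smoke"] = run("nosetests smoketests")    # assumed smoketest runner

  # 4. Record results; a nonzero exit marks the Jenkins build as failed.
  print(results)
  sys.exit(max(results.values()))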

Repository of Stuff

We are building up an ad-hoc repository of existing and proposed tests. Please add links to your Jenkins configs (tarballs are fine) and add descriptions of deployment configurations that you expect to be using so that we can make sure they are tested.

Repository of Ideas

Please add any specific or meta thoughts on testing below so that we can compile a general document to work from (rather than spread it out over an email thread).

  • It would be nice to run the tests pre-emptively against merge proposals
  • It would be nice to set up a page cataloging test results at a finer granularity than the current Jenkins setups.
  • Test with Jenkins using packages
  • Test with Jenkins using pip installs inside a virtualenv (a sketch follows this list)
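
For the pip/virtualenv variant, the Jenkins build step might look roughly like the sketch below. It is Python driving the usual command-line tools; the requirements file name (tools/pip-requires) and the test entry point are assumptions about the tree being tested.

  #!/usr/bin/env python
  # Hypothetical build step: fresh virtualenv, pip install deps, run unit tests.
  import subprocess

  def sh(cmd):
      """Run a shell command and fail the build on a nonzero exit."""
      subprocess.check_call(cmd, shell=True)

  sh("virtualenv --clear .venv")                      # start from a clean env
  sh(".venv/bin/pip install -r tools/pip-requires")   # assumed requirements file
  sh(".venv/bin/python run_tests.py")                 # assumed test entry point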

Anso at NASA's Testing Deployment

We've taken a few approaches as the code has evolved (automatic deployment and testing have always been crucial).

While preparing for the Nebula beta release, our system was as follows.

We automatically deployed a fresh cluster every morning using a shell script that would:

  • PXE/preseed -> deploy and configure the base Ubuntu OS
  • Puppet -> deploy and configure Nova
  • Run a few shell commands on the head node to set up users and whatnot (the whole sequence is sketched below)
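
A minimal sketch of that morning redeploy sequence, assuming hypothetical helper commands for the PXE reinstall and head-node setup (none of these names come from the original script):

  #!/usr/bin/env python
  # Hypothetical nightly redeploy driver: PXE-reinstall every node, then
  # configure with Puppet and finish head-node setup.  Helper names, host
  # names, and timings are placeholders, not the original script.
  import subprocess
  import time

  NODES = ["compute-01", "compute-02", "head-01"]   # assumed cluster hosts

  def sh(cmd):
      subprocess.check_call(cmd, shell=True)

  # 1. PXE/preseed: flag each node for a network reinstall, then power-cycle it
  #    (IPMI credentials omitted from this sketch).
  for node in NODES:
      sh("pxe-mark-reinstall %s" % node)            # assumed helper
      sh("ipmitool -H %s-ipmi power cycle" % node)

  time.sleep(30 * 60)   # crude wait for the base Ubuntu install to finish

  # 2. Puppet: converge each freshly installed node to the Nova configuration.
  for node in NODES:
      sh("ssh %s sudo puppet agent --onetime --no-daemonize" % node)

  # 3. Head node: create users, projects, networks, and so on.
  sh("ssh head-01 sudo /usr/local/bin/setup_cloud_users.sh")   # assumed helper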

Then we would run the smoketests against the cluster, recording errors and exceptions (to sentry using nova-sentry).

Then, through the day, users and QA would test against the cluster and report bugs. If required, anyone could redeploy (PXE -> setup) by running a single shell command; it took about 30 minutes from reboot to ready-to-run.

Running a cloud is more than just Nova. We've had critical configuration issues (Intel 10G cards + LRO/MRO + Linux network bridging brought network performance for guests down to 20Kbps) that were completely external to Nova, but crucial for running an OpenStack deployment.
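
As one concrete illustration of that class of problem, a deployment check could verify that offload features such as LRO are disabled on interfaces that end up in a Linux bridge. The sketch below shells out to ethtool; the interface name and the decision to flag LRO specifically are assumptions for illustration, not the exact fix used at the time.

  #!/usr/bin/env python
  # Hypothetical pre-flight check: warn if LRO is still enabled on a NIC that
  # will be attached to the guest bridge.  Interface name is a placeholder.
  import subprocess

  IFACE = "eth1"   # assumed 10G interface behind the Linux bridge

  features = subprocess.check_output(["ethtool", "-k", IFACE])
  if b"large-receive-offload: on" in features:
      print("LRO is on for %s; guest network throughput will suffer." % IFACE)
      print("Fix (as root): ethtool -K %s lro off" % IFACE)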

Have a set of "golden systems": OS + configuration (network model + API + hypervisor) + relevant smoketests that run with every trunk merge. It would also be preferable to open source the system that runs these tests, so that others can set up labs to run tests against their own equipment and configurations.
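
To make the "golden systems" idea concrete, the matrix could be expressed as data and walked on every trunk merge. The sketch below is a hypothetical layout; the specific OS versions, network models, APIs, and hypervisors are only examples, and the deploy/smoke-test step is a stand-in for whatever actually drives the lab.

  #!/usr/bin/env python
  # Hypothetical golden-system matrix, exercised on every trunk merge.
  GOLDEN_SYSTEMS = [
      {"os": "ubuntu-10.04", "network": "flat",      "api": "ec2",       "hypervisor": "kvm"},
      {"os": "ubuntu-10.04", "network": "vlan",      "api": "openstack", "hypervisor": "kvm"},
      {"os": "ubuntu-10.10", "network": "flat-dhcp", "api": "openstack", "hypervisor": "xen"},
  ]

  for system in GOLDEN_SYSTEMS:
      # Stand-in for: deploy this combination in the lab and run its smoke
      # tests, e.g. via the same redeploy driver sketched earlier.
      print("would deploy and smoke-test: %r" % system)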

I also agree that "machines are cheaper than devs" and think a good phase two would be the ability to run this against arbitrary branches.