
TestingBrainstorm

Basic Testing Cluster Plan:

Jordan at Rackspace is setting up a cluster of machines, a portion of which will be allocated to our testing efforts.

We will be using Jenkins to spawn tests against those machines.

The tests will generally look like the following:

  • Reset a machine or set of machines to a specific configuration
  • Run the unit tests (in a couple of configurations) along with coverage tests on some machines
  • Target other machines with the smoketests (and other applicable integration tests)
  • Record results
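
As a very rough illustration, one of these Jenkins jobs might boil down to a small driver script along the following lines. This is only a sketch, assuming nose as the test runner and the smoketests living under Nova's smoketests/ directory; the machine-reset step and the exact commands are placeholders rather than an agreed interface.

  #!/usr/bin/env python
  # Hypothetical Jenkins job body: reset machines, run tests, record results.
  import subprocess
  import sys

  def run(cmd):
      # Run a command, echoing it so it shows up in the Jenkins console log.
      print(' '.join(cmd))
      return subprocess.call(cmd)

  def main():
      # 1. Reset a machine or set of machines to a specific configuration.
      #    (placeholder; in practice this would kick off pxe/puppet or similar)
      # reset_machines('libvirt-kvm-flatdhcp')

      failures = 0

      # 2. Unit tests plus coverage, emitting xunit XML that Jenkins can archive.
      failures += run(['nosetests', 'nova/tests',
                       '--with-xunit', '--xunit-file=unittests.xml',
                       '--with-coverage', '--cover-package=nova'])

      # 3. Smoketests (and other applicable integration tests) against the target machines.
      failures += run(['nosetests', 'smoketests',
                       '--with-xunit', '--xunit-file=smoketests.xml'])

      # 4. Record results: Jenkins archives the *.xml reports; exit non-zero on any failure.
      sys.exit(1 if failures else 0)

  if __name__ == '__main__':
      main()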

Repository of Stuff

We are building up an ad-hoc repository of existing and proposed tests. Please add links to your Jenkins configs (tarballs are fine) and descriptions of the deployment configurations you expect to use so that we can make sure they are tested.

Repository of Ideas

Please add any specific or meta thoughts on testing below so that we can compile a general document to work from (rather than spreading the discussion out over an email thread).

  • It would be nice to run the tests pre-emptively against merge proposals
  • It would be nice to set up a page cataloging test results at a finer granularity than the current Jenkins setups.
  • Test with Jenkins using distribution packages
  • Test with Jenkins using pip installs inside a virtualenv (both install paths are sketched below)
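
To make the last two ideas concrete, the two Jenkins job flavours might differ only in how Nova gets installed before the tests run. The sketch below assumes Debian/Ubuntu packages on one path and virtualenv + pip on the other; the package names and the requirements file location are guesses, not a fixed convention.

  # Hypothetical install step for the two Jenkins job flavours.
  import subprocess

  def install_from_packages():
      # Variant 1: test against distro packages (package names are assumptions).
      subprocess.check_call(['apt-get', 'install', '-y', 'python-nova', 'nova-common'])

  def install_into_virtualenv(venv='.venv'):
      # Variant 2: build an isolated virtualenv and pip-install the branch under test.
      subprocess.check_call(['virtualenv', venv])
      pip = '%s/bin/pip' % venv
      subprocess.check_call([pip, 'install', '-r', 'tools/pip-requires'])  # assumed requirements file
      subprocess.check_call([pip, 'install', '-e', '.'])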

Anso at NASA's Testing Deployment

We've taken a few approaches as the code has evolved (automatic deployment and testing have always been crucial).

While preparing for the Nebula beta release, our system was as follows.

We automatically deployed a fresh cluster every morning using a shell script that would:

  • pxe/preseed -> deploy and configure the base Ubuntu OS
  • puppet -> deploy and configure Nova
  • run a few shell commands on the head node to set up users and the like

Then we would run the smoketests against the cluster, recording errors and exceptions (to Sentry, using nova-sentry).

Then, through the day, users and QA would test against the cluster and report bugs. If required, anyone could redeploy (pxe -> setup) by running a single shell command (it took about 30 minutes from reboot to ready to run).
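
For illustration, the daily cycle described above could be wrapped in a driver roughly like the following. It is only a sketch, not the actual Anso script: it assumes IPMI-controllable nodes and a Puppet agent on each machine, and the hostnames, credentials, and head-node commands are placeholders.

  # Hypothetical wrapper for the daily pxe -> puppet -> setup -> smoketest cycle.
  import subprocess
  import time

  NODES = ['node1.example.com', 'node2.example.com']   # placeholder hostnames
  HEAD_NODE = 'head.example.com'                        # placeholder

  def ssh(host, command):
      # Run a command on a remote host over ssh.
      subprocess.check_call(['ssh', host, command])

  def ipmi(node, *args):
      # Placeholder IPMI credentials; real values would come from the lab inventory.
      subprocess.check_call(['ipmitool', '-H', node + '-ipmi',
                             '-U', 'admin', '-P', 'secret'] + list(args))

  def redeploy():
      # 1. pxe/preseed: network-boot each node and reimage it with the base Ubuntu OS.
      for node in NODES:
          ipmi(node, 'chassis', 'bootdev', 'pxe')
          ipmi(node, 'chassis', 'power', 'cycle')
      time.sleep(30 * 60)   # roughly 30 minutes from reboot to ready to run

      # 2. puppet: deploy and configure Nova on the freshly installed nodes.
      for node in NODES:
          ssh(node, 'puppet agent --test')

      # 3. A few shell commands on the head node to set up users etc. (placeholders).
      ssh(HEAD_NODE, 'nova-manage user admin qa')
      ssh(HEAD_NODE, 'nova-manage project create testproj qa')

  def run_smoketests():
      # Run the smoketests; in the real setup exceptions were also shipped to Sentry
      # via nova-sentry, which is not reproduced here.
      subprocess.call(['nosetests', 'smoketests',
                       '--with-xunit', '--xunit-file=smoketests.xml'])

  if __name__ == '__main__':
      redeploy()
      run_smoketests()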

Running a cloud is more than just Nova. We've had critical configuration issues (Intel 10G cards + LRO/MRO + Linux network bridging brought guest network performance down to 20Kbps) that were completely external to Nova but crucial for running an OpenStack deployment.

We should have a set of "golden systems": OS + configuration (network model + API + hypervisor) + relevant smoketests that run with every trunk merge. It would also be preferable for the system that runs these tests to be open source, so that others can set up labs to run tests against their own equipment and configurations.
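
One way to picture the "golden systems" idea is as a small matrix of configurations that every trunk merge has to pass; the entries below are plausible examples, not an agreed list.

  # Hypothetical "golden system" matrix: each entry is one OS + configuration combination
  # that gets the relevant smoketests run against it on every trunk merge.
  GOLDEN_SYSTEMS = [
      {'os': 'ubuntu-10.04', 'hypervisor': 'kvm',       'network': 'FlatDHCPManager', 'api': 'ec2'},
      {'os': 'ubuntu-10.04', 'hypervisor': 'kvm',       'network': 'VlanManager',     'api': 'openstack'},
      {'os': 'ubuntu-10.04', 'hypervisor': 'xenserver', 'network': 'FlatManager',     'api': 'openstack'},
  ]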

I also agree that "machines are cheaper than devs" and think a good phase two would be the ability to run this against arbitrary branches.

Jay's summary of known efforts

  • Anso has created some Vagrant scripts that test multi-node functionality of the EC2 API, libvirt + KVM, and nova-objectstore
  • Vishy/Devin have refactored Nova's existing smoketests/ and updated them to include netadmin tests. Still only testing the EC2 API
  • Trey has been "volunteered" to write an OpenStack API smoketest for XenServer functionality (https://bugs.launchpad.net/nova/+bug/720941)
  • Jordan Rinke has been working on a 10-machine test cluster for testing deployments and running the smoketests
  • Other Rackers (Pvo, Ant?) have been working on getting a much larger production-level test cluster for running longer, more complex tests

Steve's additions to Jay's summary

Stuff we need to do:

  • Create a staging/testing branch and have the OpenStack Hudson LP user own it
  • Get the test cluster machines entered into Hudson
  • For each merge proposal into trunk, have Tarmac pull the branch, automatically run the unit tests, fire off smoketests/ against the test machines, and comment on the merge proposal with whether the tests passed.
  • For merge proposals that merge cleanly into staging, pass all tests, and also have 2 Approves from core devs, have Tarmac merge them into trunk
  • Create long-running functional tests that essentially replay large Apache/nginx log files from existing Nebula and Cloud Servers API nodes against the Nova staging branch with various configurations (a rough replay sketch follows)
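
As a sketch of what the log-replay idea might look like: parse the request portion of each access-log line and re-issue the GET requests against a staging API endpoint. The endpoint URL is a placeholder, and restricting replay to GETs is an assumption (safely replaying writes would need more care).

  # Hypothetical replay of an Apache/nginx access log against a staging API node.
  import re
  import sys
  import urllib2   # Python 2 era; urllib.request on Python 3

  STAGING_ENDPOINT = 'http://staging-api.example.com:8774'   # placeholder URL

  # Matches the request portion of a common/combined log format line: "GET /path HTTP/1.1"
  REQUEST_RE = re.compile(r'"(?P<method>[A-Z]+) (?P<path>\S+) HTTP/[\d.]+"')

  def replay(log_path):
      replayed = failed = 0
      with open(log_path) as log:
          for line in log:
              match = REQUEST_RE.search(line)
              # Only replay idempotent GETs; POST/PUT/DELETE would mutate state.
              if not match or match.group('method') != 'GET':
                  continue
              try:
                  urllib2.urlopen(STAGING_ENDPOINT + match.group('path'), timeout=10)
                  replayed += 1
              except Exception:
                  failed += 1
      print('replayed %d GETs, %d failed' % (replayed, failed))

  if __name__ == '__main__':
      replay(sys.argv[1])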