GSoC2014/Rally/BenchmarksVirtualMachinesOpenStack
Revision as of 15:49, 8 July 2014

Introduction

Benchmarking a Virtual Machine is an important activity that more and more developers will need to perform as they host their SaaS applications in a cloud. The aim of this project is to give Rally the ability to run, easily and in an automated manner, various benchmarks for measuring the performance of the Virtual Machines deployed in an OpenStack cloud.

Description and Analysis

The goal of this project is to port existing popular benchmarks, used for measuring the performance of a computer system, to Rally, so that they can be run against the virtual machines of an OpenStack cloud. To accomplish that, we need an architecture flexible enough to adapt to any benchmark that someone is likely to need for measuring the performance of her Virtual Machine(s).

To achieve that, the architecture is modular and can be broken down into the following discrete steps:

1. Create/boot n VM(s), where n >= 1
2. Inject into the spawned VM(s) the setup.sh script of the specified benchmark. This script builds the benchmark in the VM(s).
3. Inject into the VM(s) the run.py script of the specified benchmark. This script executes the benchmark in the VM(s), processes its output, and returns the results to Rally in a form that can be stored in Rally's database.
4. From the processed results, we choose the kind of chart that best visualizes this benchmark, and produce it in the HTML task report.
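The run script of step 3 can be sketched as follows. This is a minimal illustration, not the actual Blogbench script: the "score:" output format, the parse_output helper, and the stand-in command are all invented for the example. The real point is only that the script runs the benchmark, processes its raw output, and prints the result in a machine-readable form (JSON here) that Rally can store.

```python
# Minimal sketch of a run.py for a ported VM benchmark.
# BENCH_CMD and the "score: <number>" output format are hypothetical.
import json
import subprocess

BENCH_CMD = ["echo", "score: 42"]  # stand-in for the real benchmark binary


def parse_output(raw):
    """Extract numeric scores from the benchmark's raw stdout."""
    scores = []
    for line in raw.splitlines():
        if line.startswith("score:"):
            scores.append(float(line.split(":", 1)[1]))
    return scores


def main():
    proc = subprocess.Popen(BENCH_CMD, stdout=subprocess.PIPE)
    raw, _ = proc.communicate()
    scores = parse_output(raw.decode())
    # Rally reads the script's stdout, so the processed results are
    # emitted as a single JSON object.
    print(json.dumps({"scores": scores, "errors": proc.returncode != 0}))


if __name__ == "__main__":
    main()
```

With this shape, porting a new benchmark only means writing a setup script that installs it and a run script that knows how to parse its particular output.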

The above procedure reveals that a VM benchmark for Rally is only two scripts: one that installs the benchmark, and a second one that executes it. Also, if a benchmark requires more than one VM to run properly (e.g. iperf3, which needs two machines), this method lets us easily isolate the steps: first create the VMs and install the benchmark into them, and then execute it. In this case the run script will be a bit more complex (compared with a benchmark that needs only one machine), but this is inevitable, and we still keep all the execution logic in a separate file, without polluting Rally's codebase with functions for this particular benchmark.

List of Benchmarks to Port

* Blogbench (Disk/io)

Status

Week 01 (May 19 - May 25) Set up the development/testing environment. Testing environment: single-node architecture (services: keystone, glance, nova, nova-network) on a separate physical machine. Development environment: Vim with appropriate plugins for Python development, on a FreeBSD desktop, with Rally installed on it (https://review.openstack.org/#/c/95341/). Researched available benchmarks; selected primary sources of information: Phoronix Test Suite, OpenBenchmarking.org. Tried the PTS Desktop Live.

Week 02 (May 26 - June 01) Designed an initial architecture for the project with modularity and extensibility in mind; the "Description and Analysis" section is a product of this. Searched for a good (in terms of popularity and reliability of results) disk/io benchmark; tried bonnie++, dbench, and blogbench locally. Implemented and tested the setup script for blogbench (https://review.openstack.org/#/c/97030/).

Week 03 (June 02 - June 08) Started developing (patch set 1) the base of the project: a new Rally benchmark scenario that will be used when a user wants to run one of the ported VM benchmarks (https://review.openstack.org/#/c/98172/). Updates (patch sets 3 and 4) for "Add benchmark 'Blogbench' for VMs". Worked on an in-house script, 'deploy-rally', to help test the changes I make. Started writing this project's wiki page.

Week 04 (June 09 - June 15) Updates (patch sets 5 and 6) for "Add benchmark 'Blogbench' in VMs". These changes reflect the decision to use Python instead of Bash for the run part of the benchmarks. From this point, everything seems to be ready (base: there is a benchmark scenario; benchmark: one benchmark with setup and run parts). The testing/unification part will begin, in order to demonstrate that there is something usable.

Week 05 (June 16 - June 22) Under construction :-)

Week 06 (June 23 - 29) Under construction :-)

Week 07 (June 30 - July 06) Updates (patch sets 10, 11 and 12) for "Add the benchmark 'Blogbench' for the Virtual Machines" (https://review.openstack.org/#/c/97030/). These are mainly unit tests and the replacement of the subprocess.check_output function with the subprocess.Popen class, because I didn't want to pull in Python 2.7 as a requirement for the benchmark. Two small, not directly related commits merged into master (https://review.openstack.org/#/c/104180/ and https://review.openstack.org/#/c/104924/). Started developing (patch set 1) "Add the context 'benchmark_image'" (https://review.openstack.org/#/c/104564/).
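The check_output replacement mentioned above can be sketched like this: subprocess.check_output only exists from Python 2.7 onwards, while subprocess.Popen is available on 2.6 as well. This is a minimal sketch of the idea, not the exact code from the review, and check_output_compat is an illustrative name, not a function from the patch sets.

```python
# Reproduce the behaviour of subprocess.check_output (capture stdout,
# raise on non-zero exit) using only subprocess.Popen, which is
# available on Python versions older than 2.7.
import subprocess


def check_output_compat(cmd):
    """Run cmd and return its stdout, raising on a non-zero exit code."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    out, _ = proc.communicate()
    if proc.returncode != 0:
        raise subprocess.CalledProcessError(proc.returncode, cmd)
    return out
```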

Week 08 (July 07 - July 13) Under construction :-)

Source Code

Any code written in the development of this project.

Work In Progress

Merged

Links

[1] Blogbench, http://www.pureftpd.org/project/blogbench