GSoC2014/Rally/BenchmarksVirtualMachinesOpenStack
== Status ==
 
 
=== Week 01 (May 19 - May 25) ===

* Set up the development and testing environments. Testing environment: a single-node OpenStack deployment (services: keystone, glance, nova, nova-network) on a separate physical machine. Development environment: Vim with appropriate plugins for Python development, on a FreeBSD desktop with Rally installed on it (https://review.openstack.org/#/c/95341/).
* Researched available benchmarks; selected the Phoronix Test Suite and OpenBenchmarking.org as the primary sources of information. Tried PTS Desktop Live.
 
 
=== Week 02 (May 26 - June 01) ===

* Designed an initial architecture for the project with modularity and extensibility in mind; the "Description and Analysis" section below is a product of this work.
* Searched for a good (with respect to popularity and reliability of results) disk/IO benchmark. Tried bonnie++, dbench and blogbench locally. Implemented and tested the setup script for blogbench (https://review.openstack.org/#/c/97030/).
 
 
=== Week 03 (June 02 - June 08) ===

* Developed (patch set 1) the base of the project: a new Rally benchmark scenario that is used when a user wants to run one of the ported VM benchmarks (https://review.openstack.org/#/c/98172/).
* Updates (patch sets 3 and 4) for "Add benchmark 'Blogbench' for VMs".
* Worked on an in-house script, 'deploy-rally', to help with testing the changes that I make.
* Started writing this project's wiki page.
 
 
=== Week 04 (June 09 - June 15) ===

* Updates (patch sets 5 and 6) for "Add benchmark 'Blogbench' for VMs". These changes reflect the decision to use Python instead of Bash for the run part of the benchmarks.
 
 
=== Week 05 (June 16 - June 22) ===

Under construction :-)
 
 
=== Week 06 (June 23 - June 29) ===

Under construction :-)
 
 
=== Week 07 (June 30 - July 06) ===

* Updates (patch sets 10, 11 and 12) for "Add the benchmark 'Blogbench' for the Virtual Machines" (https://review.openstack.org/#/c/97030/). These consist mainly of unit tests and the replacement of the subprocess.check_output function with the subprocess.Popen class, because check_output would pull in Python 2.7 as a requirement for the benchmark.
* Two small, not directly related, commits were merged into master (https://review.openstack.org/#/c/104180/ and https://review.openstack.org/#/c/104924/).
* Started developing (patch set 1) "Add the context 'benchmark_image'" (https://review.openstack.org/#/c/104564/).
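The check_output replacement can be sketched as follows: subprocess.check_output only appeared in Python 2.7, so the same behaviour can be rebuilt on top of subprocess.Popen, which exists in 2.6 as well. The helper name below is illustrative, not the actual code from the patches:

```python
import subprocess


def check_output_compat(cmd):
    """Run cmd and return its stdout, like subprocess.check_output,
    using only the subprocess.Popen API available before Python 2.7."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    out, _ = proc.communicate()
    if proc.returncode != 0:
        # check_output raises on a non-zero exit status; mirror that here.
        raise subprocess.CalledProcessError(proc.returncode, cmd)
    return out


print(check_output_compat(["echo", "blogbench"]))
```

The only behavioural difference a caller sees is the absence of the `output` attribute on the raised exception, which check_output would have populated.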
 
 
=== Week 08 (July 07 - July 13) ===

Under construction :-)
 
  
 

Revision as of 22:05, 18 August 2014

== Introduction ==

Benchmarking a virtual machine is an important activity that more and more developers need to perform as they host their SaaS applications in a cloud. The aim of this project is to integrate into the Rally project the ability to run, easily and in an automated manner, various benchmarks for measuring the performance of the virtual machines deployed on an OpenStack cloud.

== Description and Analysis ==

The goal of this project is twofold: first, to develop in Rally the necessary code base for executing commands inside the virtual machines of an OpenStack cloud; and second, using this base, to port existing popular benchmarks, commonly used to measure the performance of a computer system, so that they run inside those virtual machines.

For the first part, a context is being developed (https://review.openstack.org/#/c/104564/) that produces an image with the programs required to run a specific benchmark already installed. This context takes an image, a flavor and some other necessary information from the task configuration file of the benchmark scenario (https://review.openstack.org/#/c/98172/), boots a virtual machine, accesses it over SSH using a newly generated keypair and security group, and executes the setup script (https://review.openstack.org/#/c/97030/) of the specified benchmark. The context is used by the benchmark scenario "boot_benchmark_delete" so that the setup part does not have to be repeated every time the benchmark scenario is run. The benchmark scenario boots a virtual machine from the image produced by the context "benchmark_image", again generates a keypair and security group, accesses the virtual machine via SSH, and this time executes the run script (https://review.openstack.org/#/c/97030/) of the specified benchmark. If the execution of the run script succeeds, the scenario returns some of the results of the benchmark in JSON format.
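As a rough illustration of the run-script half of this flow, a benchmark's run part can execute the tool, pick a few figures out of its textual output, and print them as JSON for the scenario to collect. The "name: value" output format parsed below is made up for the example; it is not Blogbench's real output, and the function name is not taken from the actual patches:

```python
import json
import re


def collect_results(benchmark_output):
    """Extract assumed "name: value" pairs from a benchmark's textual
    output and serialize them as JSON, as the scenario expects."""
    pairs = re.findall(r"(\w+):\s*(\d+)", benchmark_output)
    return json.dumps({name: int(value) for name, value in pairs})


# Stand-in for output captured from the benchmark process over SSH.
sample = "reads: 120\nwrites: 45"
print(collect_results(sample))
```

Keeping the parsing inside the per-benchmark run script is what lets the scenario stay generic: it only ever sees a JSON document on stdout, whatever the benchmark's native output looks like.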

For the second part, the only benchmark ported so far is Blogbench (https://review.openstack.org/#/c/97030/). However, as soon as the base is stable enough, it will be quite easy to port new benchmarks: with the current architecture, all that is needed is a setup script and a run script for the benchmark being ported, and nothing more.
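For illustration only, a task configuration for running such a ported benchmark might look roughly like the fragment below. The scenario and context names come from the patches above, but every other key and value is an assumption made for the sake of the example, not Rally's exact task schema:

```json
{
    "boot_benchmark_delete": [
        {
            "args": {
                "benchmark": "blogbench",
                "flavor": {"name": "m1.small"}
            },
            "context": {
                "benchmark_image": {
                    "image": {"name": "ubuntu-12.04"},
                    "flavor": {"name": "m1.small"}
                }
            },
            "runner": {"type": "constant", "times": 3, "concurrency": 1}
        }
    ]
}
```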

== Source Code ==

All the code that was written during the official GSoC period for the development of this project.

=== Under Review ===

=== Merged ===

== Links ==