
GSoC2014/Rally/BenchmarksVirtualMachinesOpenStack

Revision as of 12:04, 30 September 2014

Introduction

The ability to benchmark a Virtual Machine is an important activity that more and more developers will need to perform as they host their SaaS applications in a cloud. The aim of this project is to integrate into the Rally project the ability to run, easily and in an automated manner, various benchmarks that measure the performance of the Virtual Machines deployed in an OpenStack cloud.

Description

The project can be divided into two parts. The first part is the development of an architecture (a kind of framework) that defines a standard and easy way of porting different benchmarks to Rally; the second part uses this framework to port existing popular benchmarks that measure the performance of different aspects of a computer system.

For the first part, a new benchmark context (benchmark_image, https://review.openstack.org/#/c/104564/) generates an image that has all the programs required to run the specified benchmark installed. The context takes an image, a flavor and some other necessary information from the task configuration file of the benchmark scenario (https://review.openstack.org/#/c/98172/) and boots a virtual machine. Then, using the already created users with their keypairs and security groups, it gains access to the virtual machine over SSH and executes the setup script of the specified benchmark (https://review.openstack.org/#/c/97030/). The setup script is a Bash script that installs the benchmark (and its dependencies) in the virtual machine. Finally, the context takes a snapshot of that virtual machine and returns the name of the newly created, benchmark-ready image.

The benchmark scenario (boot_benchmark_delete) then boots a virtual machine from the image that the context returned, gains access to it over SSH, and executes the run script of the specified benchmark. The run script is a Python script that executes the benchmark and returns its results in JSON format.
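A Rally task configuration for such a scenario might look roughly like the following. This is a sketch: the scenario and context names follow the description above, but the scenario class name (VMTasks), argument names, and all values are illustrative assumptions, not the project's actual configuration.

```json
{
    "VMTasks.boot_benchmark_delete": [
        {
            "args": {
                "image": {"name": "ubuntu-14.04"},
                "flavor": {"name": "m1.small"},
                "benchmark": "blogbench"
            },
            "runner": {
                "type": "constant",
                "times": 3,
                "concurrency": 1
            },
            "context": {
                "users": {"tenants": 1, "users_per_tenant": 1},
                "benchmark_image": {
                    "image": {"name": "ubuntu-14.04"},
                    "flavor": {"name": "m1.small"},
                    "benchmark": "blogbench"
                }
            }
        }
    ]
}
```

The benchmark_image context would run once per task to produce the snapshot, and the runner would then boot the scenario's virtual machines from that snapshot the configured number of times.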

For the second part, thanks to the architecture defined in the first part, porting a benchmark only requires developing the setup script that installs it and the run script that executes it and returns the results in JSON format. This is done for every benchmark that is to be ported to Rally and executed in a virtual machine of an OpenStack cloud.
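As an illustration, a run script for Blogbench (the first ported benchmark) might be structured roughly as follows. This is a sketch, not the project's actual script: the command-line flags and the exact format of Blogbench's summary lines are assumptions.

```python
"""Illustrative run script for a VM benchmark.

A sketch only: the blogbench flags and output format below are
assumptions, not the project's actual code.
"""
import json
import re
import subprocess


def parse_blogbench_output(output):
    """Pull the final scores out of Blogbench's text output.

    Assumes summary lines of the form:
        Final score for writes:          123
        Final score for reads:           4567
    """
    results = {}
    for kind in ("writes", "reads"):
        match = re.search(r"Final score for %s:\s+(\d+)" % kind, output)
        if match:
            results[kind] = int(match.group(1))
    return results


def main():
    # Run the benchmark inside the VM; the flags are illustrative.
    output = subprocess.check_output(
        ["blogbench", "-d", "/tmp/blogbench"]).decode()
    # Rally collects the script's stdout, so emit the results as JSON.
    print(json.dumps(parse_blogbench_output(output)))
```

On the virtual machine the script would end with the usual `if __name__ == "__main__": main()` guard; the scenario captures its standard output and treats it as the benchmark's result.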

Source Code

This section lists all the code that was written for this project during the official GSoC period.

Under Review

Merged

Links