
Revision as of 08:57, 25 February 2014

Weekly updates

February 24, 2014

Hello stackers,

this week, several important contributions have been made to Rally, improving both the overall system stability and the user interface. To name a few:

  • Vast refactoring of the ScenarioRunner class has made it possible to stop sharing OpenStack client objects between processes in the core of the system, which occasionally caused bugs in Rally (https://review.openstack.org/#/c/74769/);
  • Another important refactoring step resulted in the replacement of OpenStack endpoint dictionaries with special objects throughout the system, which has made the code more reliable and extensible (https://review.openstack.org/#/c/74425/);
  • Perhaps the prettiest patch of the week was the introduction of a benchmark result visualization tool, implemented with the nvd3 plugin for d3.js (so that the actual charts are rendered to an HTML file). The graphs look great and will be very useful for those who are keen on sharing their benchmarking results (https://review.openstack.org/#/c/72970/);
  • Several nice improvements in the CLI include the display of 90th and 95th percentile results in the benchmark summary (https://review.openstack.org/#/c/73522/) and a new show command which allows the user to quickly retrieve information on the images/flavors/networks/etc. available in the current deployment (https://review.openstack.org/#/c/75699/).
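The percentile columns in the benchmark summary can be understood with a small sketch. This is an illustrative implementation, not Rally's actual code: a linear-interpolation percentile over the raw per-iteration durations of a scenario.

```python
# Hypothetical sketch: computing the percentile columns of the benchmark
# summary from a list of raw scenario durations (in seconds).
def percentile(durations, percent):
    """Linear-interpolation percentile over a list of numbers."""
    data = sorted(durations)
    k = (len(data) - 1) * percent / 100.0
    lower = int(k)
    upper = min(lower + 1, len(data) - 1)
    return data[lower] + (data[upper] - data[lower]) * (k - lower)

durations = [1.2, 1.4, 1.1, 2.8, 1.3, 1.5, 1.2, 3.1, 1.4, 1.6]
summary = {
    "min": min(durations),
    "max": max(durations),
    "avg": sum(durations) / len(durations),
    "90 percentile": percentile(durations, 90),
    "95 percentile": percentile(durations, 95),
}
```

The 90th/95th percentiles are much more informative than the average when a few iterations are pathologically slow, which is exactly the case benchmark summaries need to surface.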


The ongoing work includes:

  • An extension of the use command, which will soon be applicable not only to deployments but also to tasks (https://review.openstack.org/#/c/75936/);
  • Further refactoring of the core benchmark engine, including the work on input configuration parameter validation (for a detailed description of what's going to be done, see the special document);
  • After finishing some major refactoring procedures, we have also resumed the work on passing pre-created user endpoints to the DummyEngine (https://review.openstack.org/#/c/67720/) and generating the "stress" load on the cloud.


We encourage you to take a look at the new patches in Rally pending review and to help us make Rally better.

Source code for Rally is hosted at GitHub: https://github.com/stackforge/rally
You can track the overall progress in Rally via Stackalytics: http://stackalytics.com/?release=icehouse&metric=commits&project_type=all&module=rally
Open reviews for Rally: https://review.openstack.org/#/q/status:open+rally,n,z


Stay tuned.


Regards,
The Rally team


February 17, 2014

Hello stackers,

the first thing we would like to mention today is an extensive piece of work that has been done recently on our Wiki. We have updated both the main page and the basic tutorials: how to install Rally and how to use it. These tutorials have been simplified quite a lot and have been updated with new features that have been merged in Rally recently.

As for the actual updates in the Rally code, the main ones are as follows:

  • The refactoring of the ScenarioRunner class has been successfully continued by reimplementing the different benchmark execution strategies (continuous/periodic execution), which previously resided inside the original class, via subclassing. The new ContinuousScenarioRunner and PeriodicScenarioRunner classes make the code much more readable (no complicated if...else logic for choosing the appropriate execution strategy remains) and extensible, so that it is now very easy to add your own ScenarioRunner (https://review.openstack.org/#/c/70771/);
  • We've successfully started the work on Tempest & Rally integration in order for the latter to be able to perform OpenStack deployment verification procedures as well as to possibly add new Tempest-based benchmark scenarios (https://review.openstack.org/#/c/70131/);
  • On the way to adding full support for benchmarking with predefined OpenStack users (instead of using the generated ones), we've refactored the Endpoint entity, making it able to distinguish between administrator/user permissions. Besides, each deployment is now stored in the database with a list of endpoints instead of only one, since we are going to enable the DummyEngine to take several endpoints as its input (https://review.openstack.org/#/c/67154/);
  • One of the future features of Rally will be Heat-based benchmark scenarios, which will make it possible to test VM performance. This week, we have started contributing to this as well (https://review.openstack.org/#/c/72749/);
  • Among the many simpler refactoring patches finished this week, we'd like to mention the one that improves the CLI code by structuring it into submodules (https://review.openstack.org/#/c/73059/).
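The strategy-by-subclassing idea behind the ScenarioRunner refactoring can be sketched as follows. This is a simplified illustration: only the class names come from the patch description, while the real runners deal with processes, client setup and result collection.

```python
import abc
import time

class ScenarioRunner(abc.ABC):
    """Base class: each subclass implements one launching strategy,
    replacing if...else dispatch inside a single class (sketch only)."""

    def __init__(self, scenario):
        self.scenario = scenario  # a callable benchmark scenario

    @abc.abstractmethod
    def run(self, config):
        """Execute the scenario according to the strategy's config."""

class ContinuousScenarioRunner(ScenarioRunner):
    def run(self, config):
        # Launch the scenario a fixed number of times, back to back.
        return [self.scenario() for _ in range(config["times"])]

class PeriodicScenarioRunner(ScenarioRunner):
    def run(self, config):
        # Launch the scenario once every `period` seconds, `times` times.
        results = []
        for i in range(config["times"]):
            if i:
                time.sleep(config["period"])
            results.append(self.scenario())
        return results
```

Adding your own runner then amounts to writing one more subclass with a run() method, which is exactly the extensibility the refactoring was after.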


The ongoing work includes:


We encourage you to take a look at the new patches in Rally pending review and to help us make Rally better.

Source code for Rally is hosted at GitHub: https://github.com/stackforge/rally
You can track the overall progress in Rally via Stackalytics: http://stackalytics.com/?release=icehouse&metric=commits&project_type=all&module=rally
Open reviews for Rally: https://review.openstack.org/#/q/status:open+rally,n,z


Stay tuned.


Regards,
The Rally team


February 10, 2014

Hello stackers,

the past week has been extremely successful for us in terms of overall Rally code improvement, bugfixing and implementation of new features. Rally is about to become an easy-to-understand and easy-to-use piece of software that can be used by everyone interested in it.

The most important contributions to Rally made during the past week are as follows:

  • Code refactoring has been quite extensive:
  • We have carried out a drastic rearrangement of the ScenarioRunner class (which is responsible for the actual benchmark method calls using a particular benchmarking strategy) by moving some code out of this class into new context classes. This change also enabled Rally to correctly process all errors occurring on the cloud during benchmarking/cleanup (https://review.openstack.org/#/c/69886/);
  • Another significant contribution is the refactoring of the sshutils module, which involves an improved API as well as the new ability to process stdin data (https://review.openstack.org/#/c/68063/);
  • Finally, very nice work has been done on benchmark scenario refactoring by moving the hardcoded timeout and cloud poll interval values to rally.conf (https://review.openstack.org/#/c/71272/).
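The idea of moving hardcoded timeouts into rally.conf can be sketched with the standard library. This is an illustration of the concept only: Rally itself uses oslo.config, and the option names below are made up.

```python
import configparser
import time

# Simplified stand-in for rally.conf: timeouts and poll intervals live
# in configuration rather than being hardcoded in each scenario.
# (Illustrative; Rally uses oslo.config, and these option names are
# invented for the sketch.)
SAMPLE_CONF = """
[benchmark]
nova_server_boot_timeout = 300.0
nova_server_boot_poll_interval = 1.0
"""

conf = configparser.ConfigParser()
conf.read_string(SAMPLE_CONF)

def wait_for_status(check, timeout, poll_interval):
    """Poll check() until it returns True or `timeout` seconds expire."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if check():
            return True
        time.sleep(poll_interval)
    return False

BOOT_TIMEOUT = conf.getfloat("benchmark", "nova_server_boot_timeout")
BOOT_POLL = conf.getfloat("benchmark", "nova_server_boot_poll_interval")
```

The benefit is that operators can tune timeouts per deployment (slow labs vs. fast production clouds) without touching scenario code.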


The current work encompasses the following directions:


We encourage you to take a look at the new patches in Rally pending review and to help us make Rally better.

Source code for Rally is hosted at GitHub: https://github.com/stackforge/rally
You can track the overall progress in Rally via Stackalytics: http://stackalytics.com/?release=all&metric=commits&project_type=All&module=rally
Open reviews for Rally: https://review.openstack.org/#/q/status:open+rally,n,z


Stay tuned.


Regards,
The Rally team


February 03, 2014

Hello stackers,

our efforts during the past week were heavily focused on code refactoring and bugfixing. Among the most significant contributions are:


Several novelties have been introduced to Rally:

  • After having developed the abstract validators mechanism, we have implemented a couple of useful concrete validators as well: one that checks that the image indicated in the config for, say, the NovaServers.boot_and_delete_server benchmark scenario really exists and can be used (https://review.openstack.org/#/c/68055/) and another that does the same for flavors (https://review.openstack.org/#/c/70082/). Both validators have been attached to the benchmark scenarios where they are of great use.
  • We've implemented a mechanism for measuring the time taken by atomic actions in our benchmark scenarios (https://review.openstack.org/#/c/69828/): e.g. Rally now outputs not only how long it took the cloud to boot and delete a single server (in the NovaServers.boot_and_delete_server scenario), but also how much time it took to boot the server and to delete it separately.
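The atomic-action timing can be pictured with a small decorator sketch. Everything here is hypothetical (the decorator, attribute and action names are invented, not Rally's actual API); it only illustrates how per-step durations can be recorded alongside the total scenario runtime.

```python
import functools
import time

def atomic_action(name):
    """Record the wall-clock duration of one scenario step under `name`
    (hypothetical sketch of the atomic-actions idea)."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(self, *args, **kwargs):
            start = time.time()
            try:
                return func(self, *args, **kwargs)
            finally:
                self.atomic_actions[name] = time.time() - start
        return wrapper
    return decorator

class BootAndDeleteServer:
    """Toy scenario: each step's runtime is reported separately."""
    def __init__(self):
        self.atomic_actions = {}

    @atomic_action("nova.boot_server")
    def boot_server(self):
        time.sleep(0.01)  # would call the Nova API here

    @atomic_action("nova.delete_server")
    def delete_server(self):
        time.sleep(0.01)  # would call the Nova API here

    def run(self):
        self.boot_server()
        self.delete_server()

scenario = BootAndDeleteServer()
scenario.run()
```

After run(), scenario.atomic_actions holds one duration per step, which is what lets a summary break the total time down into "boot" and "delete".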


This week there is still a huge amount of work to be done on refactoring the most fundamental code in Rally. Among other things, we are now rewriting the ScenarioRunner class, which is the tool for launching benchmark scenarios (https://review.openstack.org/#/c/69886/), so that its functionality gets split into several context classes (responsible for temporary user management and resource cleanup after benchmarking), and we are also implementing different scenario launching strategies via inheritance (https://review.openstack.org/#/c/70771/).

We continue implementing new features in Rally as well. One example is the ongoing work on atomic actions runtime measurement: it is about to be supported by the CLI which will now display this detailed runtime information in a user-friendly way (https://review.openstack.org/#/c/70362/).


We encourage you to take a look at the new patches in Rally pending review and to help us make Rally better.

Source code for Rally is hosted at GitHub: https://github.com/stackforge/rally
You can track the overall progress in Rally via Stackalytics: http://stackalytics.com/?release=all&metric=commits&project_type=All&module=rally
Open reviews for Rally: https://review.openstack.org/#/q/status:open+rally,n,z


Stay tuned.


Regards,
The Rally team


January 27, 2014

Hello stackers,

we are happy to share our recent updates in Rally:

  • We've done a very nice job on improving the command line interface for Rally. This has been done through several relatively small separate enhancements, which together have resulted in a much more positive overall user experience when using Rally. These improvements comprise:
  • A new feature, namely the validators, has been added to Rally. The validators are essentially checker methods that can be bound to different benchmark scenarios and are called by Rally before the actual benchmarking starts to check whether the resources needed by that benchmark are available etc. (https://review.openstack.org/#/c/67157/);
  • A huge amount of work has gone into refactoring the benchmark engine code to improve its quality and make it more object-oriented (https://review.openstack.org/#/c/68593/). This work is going to be continued during the next weeks;
  • The FUEL client for Rally is now ready to use (https://review.openstack.org/#/c/59943/). This work is going to be followed by the FUEL deploy engine, which is currently in progress (https://review.openstack.org/#/c/61963/).
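The validators mechanism described above can be sketched as a decorator that binds checker functions to a scenario and runs them before any load is generated. This is an illustrative sketch only: the decorator, attribute and function names are invented, not Rally's actual API.

```python
class ValidationError(Exception):
    pass

def validator(check):
    """Bind a checker to a benchmark scenario; checkers run before the
    actual benchmarking starts (invented names, illustrative only)."""
    def decorator(scenario):
        scenario._validators = getattr(scenario, "_validators", []) + [check]
        return scenario
    return decorator

def image_exists(config, deployment):
    """Fail fast if the image named in the config is not available."""
    if config.get("image") not in deployment.get("images", []):
        raise ValidationError("image %r not found" % config.get("image"))

@validator(image_exists)
def boot_and_delete_server(config):
    return "booted %s" % config["image"]

def run_scenario(scenario, config, deployment):
    for check in getattr(scenario, "_validators", []):
        check(config, deployment)  # raises before any load is generated
    return scenario(config)
```

Failing fast here is the whole point: a typo'd image name is reported immediately instead of after minutes of half-run benchmarking.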


The basic plan for this week consists of the following tasks:

  • We are about to finish the implementation of benchmark launching with predefined users (instead of the temporary ones generated automatically by Rally). This work also includes a set of changes in the Dummy deploy engine, which is now going to accept as its input not a single set of endpoints with administrator permissions, but a list of endpoints that may all have ordinary user permissions - in that case, these endpoints will be used instead of temporary users during benchmarking (https://review.openstack.org/#/c/67154/, https://review.openstack.org/#/c/67643/, https://review.openstack.org/#/c/67710/, https://review.openstack.org/#/c/67720/);
  • The work on the stress execution of benchmarks is going to be completed as well. It will, however, differ a little from the originally planned one: instead of creating a separate execution option in the input config, we will just extend the already existing continuous and periodic ones (https://review.openstack.org/#/c/63055/).


We encourage you to take a look at the new patches in Rally pending review and to help us make Rally better.

Source code for Rally is hosted at GitHub: https://github.com/stackforge/rally
You can track the overall progress in Rally via Stackalytics: http://stackalytics.com/?release=all&metric=commits&project_type=All&module=rally
Open reviews for Rally: https://review.openstack.org/#/q/status:open+rally,n,z


Stay tuned.


Regards,
The Rally team


January 20, 2014

Hello stackers,

here are the updates in our project that deserve to be mentioned in the first place:


The main directions of our current work are as follows:


We encourage you to take a look at the new patches in Rally pending review and to help us make Rally better.

Source code for Rally is hosted at GitHub: https://github.com/stackforge/rally
You can track the overall progress in Rally via Stackalytics: http://stackalytics.com/?release=all&metric=commits&project_type=All&module=rally
Open reviews for Rally: https://review.openstack.org/#/q/status:open+rally,n,z


Stay tuned.


Regards,
The Rally team


January 13, 2014

Hello stackers,

we've recovered from the New Year holidays and have already accomplished a range of tasks. Here are some recent updates in Rally:


This week we will start implementing the REST API for Rally, as well as a Python client for it. Besides, there is still much work to do regarding the new deploy engines and server providers mentioned in previous reports. We encourage you to take a look at the new patches in Rally pending review and to help us make Rally better.

Source code for Rally is hosted at GitHub: https://github.com/stackforge/rally
You can track the overall progress in Rally via Stackalytics: http://stackalytics.com/?release=all&metric=commits&project_type=All&module=rally
Open reviews for Rally: https://review.openstack.org/#/q/status:open+rally,n,z


Stay tuned.


Regards,
The Rally team


December 23, 2013

Hello stackers,

here is the update for the last week. From all the work we've completed this week we would like to highlight the following:

  • A new execution type, namely the periodic execution type, has been added to the benchmark engine (https://review.openstack.org/#/c/57628/). The benchmark engine can now launch a given benchmark scenario once in a specified period of time, thus creating a load which closely resembles real-world load scenarios. For example, you can now ask Rally to launch the "boot and delete server" scenario 50 times, booting a new server every 3 minutes. This requires only slight changes in the input configuration file;
  • We've started the work on replacing the old FUEL/OSTF-based cloud verification mechanism with a new one based on Tempest. While patches involving Tempest integration are still in progress, we've already gotten rid of all the FUEL/OSTF stuff in Rally (https://review.openstack.org/#/c/63653/), which has been a great code cleanup for our project and has also reduced the number of requirements for Rally.
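For the "50 launches, one every 3 minutes" example above, the input configuration change could look roughly like the fragment below. This is a hypothetical sketch: the exact field names and layout may differ from the merged config format.

```json
{
    "NovaServers.boot_and_delete_server": {
        "args": {"flavor_id": 1, "image_id": "cirros-0.3.1"},
        "execution": "periodic",
        "config": {"times": 50, "period": 180}
    }
}
```

That is, launch the scenario 50 times with one launch every 180 seconds (3 minutes).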


Our current work is concentrated on:

  • Adding another execution type to the benchmark engine: the stress execution type, which enables the user to easily specify a benchmark scenario with an automatically increasing load of active users (say, from 10 to 100 with a step of 5). Such a benchmark scenario will also automatically halt as soon as the cloud starts failing too frequently: the corresponding maximum failure rate can also be set in the input configuration file (https://review.openstack.org/#/c/63055/);
  • Further creation of new benchmark scenarios for Rally. The most interesting scenario during the past week was presumably the one that boots a server and then allows the user to run a custom script on that server via SSH (https://review.openstack.org/#/c/63138/).
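The auto-halt behaviour of the planned stress execution can be sketched in a few lines. This is an illustration of the idea only, not Rally's implementation: launch until done or until the observed failure rate exceeds the configured maximum.

```python
def run_with_abort(scenario, times, max_failure_rate):
    """Launch `scenario` up to `times` times, halting early once the
    observed failure rate exceeds `max_failure_rate` (a 0..1 fraction).
    Sketch of the auto-halt idea, not Rally's actual code."""
    results, failures = [], 0
    for launched in range(1, times + 1):
        try:
            results.append(scenario())
        except Exception as exc:
            failures += 1
            results.append(exc)
        if failures / launched > max_failure_rate:
            break  # the cloud fails too frequently: stop the benchmark
    return results, failures

# A toy scenario that starts failing after 5 successful runs:
_calls = iter(range(1000))
def flaky_scenario():
    if next(_calls) >= 5:
        raise RuntimeError("cloud overloaded")
    return "ok"

results, failures = run_with_abort(flaky_scenario, 50, 0.2)
```

With a 20% threshold, the run above stops on the second failure (2 failures out of 7 launches) instead of hammering an already failing cloud 43 more times.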


We encourage you to take a look at the new patches in Rally pending review and to help us make Rally better.

Source code for Rally is hosted at GitHub: https://github.com/stackforge/rally
You can track the overall progress in Rally via Stackalytics: http://stackalytics.com/?release=all&metric=commits&project_type=All&module=rally
Open reviews for Rally: https://review.openstack.org/#/q/status:open+rally,n,z


Stay tuned.


Regards,
The Rally team


December 16, 2013

Hello stackers,

this week has been very fruitful for Rally and below we share with you some of our most important recent results:

  • Deployment & Benchmark Workflows have now become completely separate things. While previously you created/specified an OpenStack deployment together with the benchmarks to run on it in one bulk, Rally now requires you to first create a deployment and then reference this deployment while launching benchmark scenarios (https://review.openstack.org/#/c/57455/). This, among other things, allows you to re-use a single deployment for many benchmarking tasks. You are highly encouraged to check out our updated "How-to" page, where the process of managing deployments and using them in benchmarking tasks is explained in more detail;
  • Support for resource tracking has been added to the LXC server provider. This was the last server provider that didn't support the resource tracking functionality implemented during the previous week, and with this patch (https://review.openstack.org/#/c/60930/) we finish the integration of that functionality into the server providers;
  • Adding input config validation to several deployment engines and server providers: after we implemented the common config validation procedure last week, processing the input configuration for deployment engines and server providers has mostly become a matter of writing correct JSON schemas that reflect engine-specific things. Recently, we have merged such schemas for the devstack engine (https://review.openstack.org/#/c/57226/), the dummy engine (https://review.openstack.org/#/c/57239/) and the OpenStack server provider (https://review.openstack.org/#/c/60275/);
  • New benchmark scenarios for Nova API. We are proud to see that our community is starting to grow faster and new interested people are coming in. A contribution from one of our newcomers (QianLin from Huawei) is a benchmark scenario that exercises the Nova server rescue/unrescue API (https://review.openstack.org/#/c/61688/).
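The "validation becomes a matter of writing the right JSON schema" idea can be shown with a tiny checker. Rally performs this validation with the jsonschema library; the minimal pure-Python validator and the schema keys below are invented stand-ins that only illustrate the concept.

```python
# Hypothetical schema for a deploy engine config (the keys are made up
# for illustration; real schemas are engine-specific).
DEVSTACK_SCHEMA = {
    "type": "object",
    "required": ["vm_provider"],
    "properties": {
        "vm_provider": {"type": "object"},
        "localrc": {"type": "object"},
    },
}

TYPES = {"object": dict, "string": str}

def validate(config, schema):
    """A minimal required/type checker in the spirit of JSON Schema."""
    if not isinstance(config, TYPES[schema["type"]]):
        return ["config is not an object"]
    errors = []
    for key in schema.get("required", []):
        if key not in config:
            errors.append("missing required key: %s" % key)
    for key, sub in schema.get("properties", {}).items():
        if key in config and not isinstance(config[key], TYPES[sub["type"]]):
            errors.append("%s has wrong type" % key)
    return errors
```

Once the common validation machinery exists, each engine or provider only contributes a declarative schema like DEVSTACK_SCHEMA rather than hand-written checking code.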


The working plan for this week encompasses:

  • Adding more diverse benchmark scenarios to Rally:
  • Adding out-of-the-box support for stress testing: enhancing the benchmarking engine of Rally with the ability to automatically stop when too many benchmarks start to fail. This is often the case when a significant number of benchmark scenarios (i.e. stress testing) is launched on one cloud. This will also require slight changes in the input config format;
  • Further work on deploy engines: the high-priority work is to finish the implementation of FUEL (https://review.openstack.org/#/c/61963/) and multihost (https://review.openstack.org/#/c/57240/) deploy engines.
  • Code refactoring, which this week is concentrated on unit tests: the goal is to move certain "Fake" classes commonly used for testing into a special utils-like module (https://review.openstack.org/#/c/62191/), to avoid code duplication when using these Fake objects (https://review.openstack.org/#/c/62193/), and also to ensure that the correct decorator syntax is used for mocking, which is still not the case for many unit tests.


We encourage you to take a look at the new patches in Rally pending review and to help us make Rally better.

Source code for Rally is hosted at GitHub: https://github.com/stackforge/rally
You can track the overall progress in Rally via Stackalytics: http://stackalytics.com/?release=all&metric=commits&project_type=All&module=rally
Open reviews for Rally: https://review.openstack.org/#/q/status:open+rally,n,z


Stay tuned.


Regards,
The Rally team



December 9, 2013

Hello stackers,

There has been much activity during the past week in Rally, and several significant patches have been merged recently:

  • Splitting the Deploy & Benchmark Workflows is coming to an end!

The last blocker was that we were storing the resources allocated by server providers in the memory of the Rally process instead of in permanent storage (e.g. the DB). During this week we added a new Resource table to the DB and switched almost all server providers (except LXC) to use the DB instead of in-memory storage. Now we need to switch the LXC provider, and then we will be able to merge the final patch addressing the splitting task (https://review.openstack.org/#/c/57455/).

  • Generic cloud cleanup after benchmark scenarios.

Performing a generic cleanup after launching benchmark scenarios is essential for guaranteeing that the cloud stays clean after benchmarking and, besides, frees benchmark scenario writers from worrying about deleting all the resources they create in init() methods or specific benchmark procedures (https://review.openstack.org/#/c/55695/). "Generic" means that Rally should free all possible kinds of allocated resources: servers, images etc.
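The generic-cleanup idea can be sketched as one loop over every resource manager a temporary user could have touched. The client layout below mimics novaclient/glanceclient shapes with fakes; it is an illustration, not Rally's actual cleanup code.

```python
from types import SimpleNamespace

class FakeManager:
    """Stand-in for a python-*client resource manager (servers, images,
    keypairs...), used here so the sketch runs without a cloud."""
    def __init__(self, count):
        self.items = [SimpleNamespace() for _ in range(count)]
        for res in self.items:
            res.delete = lambda res=res: self.items.remove(res)

    def list(self):
        return list(self.items)

def cleanup(clients):
    """Generic cleanup: free every kind of resource a temporary user may
    have allocated, so scenario writers need not delete them manually."""
    managers = (clients.nova.servers, clients.nova.keypairs,
                clients.glance.images)
    for manager in managers:
        for resource in manager.list():
            resource.delete()

clients = SimpleNamespace(
    nova=SimpleNamespace(servers=FakeManager(2), keypairs=FakeManager(1)),
    glance=SimpleNamespace(images=FakeManager(3)),
)
cleanup(clients)
```

The design point is that the cleanup code, not each scenario, knows the full catalogue of resource kinds to sweep.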

  • Code refactoring:
  1. Fixing a structure issue in the folder with Rally configuration samples (https://review.openstack.org/#/c/59259/);
  2. Renaming the ServerDTO -> Server entity (used by server providers) to improve code readability (https://review.openstack.org/#/c/59749/).


A wide variety of new contributions to Rally is still under development and pending review:

  • Enriching the benchmark engine with a mechanism for cloud initialization before launching benchmark scenarios. Support for init() methods in benchmark scenario classes was actually already implemented in Rally 0.1, but it has been broken since the creation of multiple temporary tenants/users for benchmarking was introduced to Rally (because resources - servers, floating IPs etc. - created in init() no longer belonged to the appropriate temporary tenants/users and thus could not be used in benchmark scenarios). There is now a patch (https://review.openstack.org/#/c/59782/) that fixes this issue by calling init() once for each temporary user, thus creating the appropriate resources (servers, floating IPs etc.) for every temporary OpenStack user that may be involved in benchmarking. This patch has also entailed a couple of smaller patches that improve the performance of the OpenStack Python clients (https://review.openstack.org/#/c/59780/, https://review.openstack.org/#/c/59781/) and thus optimize the procedure of creating resources for temporary users in init() methods;
  • Glance benchmark scenarios: while all the previous benchmark scenarios focused on capabilities provided by the Nova API, this patch makes the first contribution toward benchmarking other core OpenStack projects. The patch (https://review.openstack.org/#/c/60469/) currently implements two basic scenarios: creating/deleting images, and creating images and using them to boot servers;
  • Code refactoring: there are currently two patches dedicated to deploy engine and server provider code unification; both implement input configuration file validation logic common to all the deploy engines and server providers respectively (https://review.openstack.org/#/c/57222/, https://review.openstack.org/#/c/60030/). These patches are followed by ones concentrated on deploy engine- and server provider-specific config validation (https://review.openstack.org/#/c/57239/, https://review.openstack.org/#/c/57226/, https://review.openstack.org/#/c/60275/).


We encourage you to take a look at the new patches in Rally pending review and to help us make Rally better.

Source code for Rally is hosted at GitHub: https://github.com/stackforge/rally
You can track the overall progress in Rally via Stackalytics: http://stackalytics.com/?release=all&metric=commits&project_type=All&module=rally
Open reviews for Rally: https://review.openstack.org/#/q/status:open+rally,n,z


Stay tuned.


Regards,
The Rally team



December 2, 2013

Hello stackers,

below you will find the latest review of our activities in Rally for the past week.

Our achievements for the end of November comprise:

  • Numerous changes in the benchmark engine, the most important among which are:
  • Rally is now able not only to perform a specified number of benchmark scenario launches, but also to create a continuous load on the cloud by running any scenario for a given period of time. For example, you can now boot and delete servers in the cloud continuously from a number of temporary users, say, for 10 minutes, thus simulating a stress load on the cloud. To do so, the only thing you should change in your configuration file is the "times" field for the benchmark scenario you are going to launch, which should now be replaced with a "duration" field initialized to the number of minutes the benchmark is expected to run (https://review.openstack.org/#/c/56036/);
  • Access to OpenStack clients with administrator permissions is now enabled for all scenarios through the admin_clients() method of the base Scenario class. Before this update, this class provided only the clients() method, which returned a reference to a non-admin OpenStack client. This, however, turned out to be insufficient for the Keystone-based benchmark scenarios that are to come in future releases (https://review.openstack.org/#/c/58381/);
  • A bugfix for the init() methods of benchmark scenarios now enables benchmark scenario writers to pass through the context dictionary (the dictionary that init() returns) not only primitive objects like strings or numbers but also complex ones like references to specially prepared servers or floating IPs (https://review.openstack.org/#/c/55465/).
  • The work on separating the deployment and task entities mentioned in previous updates has now come close to its successful conclusion. The main results here include:
  • Server provider for OpenStack: another ServerProvider class that wraps the functionality of python-novaclient in the default ServerProvider interface (the create_vms() and destroy_vms() methods). Along with the lxc and virsh server providers (already present in the system), it constitutes the essential basis for working with different virtualization technologies (https://review.openstack.org/#/c/48811);
  • The first contribution to data processing and visualization in Rally: a new CLI command for tasks has been added, namely plot aggregated, which draws plots illustrating the cloud's performance on one of the finished benchmark tasks. The CLI command requires the user to indicate the parameter for which the plots will be drawn. For example, if one specifies active_users as the aggregating parameter, Rally will draw a plot showing how the number of active users making requests to the cloud affects the runtime of benchmark scenarios. The code uses the matplotlib library to draw the plots (https://review.openstack.org/#/c/52712/).
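The "duration" option described above boils down to a deadline loop rather than a counted loop. A minimal sketch of the idea (not Rally's code; the real config field is specified in minutes, while seconds are used here to keep the demo fast):

```python
import time

def run_for_duration(scenario, duration_seconds):
    """Keep launching `scenario` back to back until `duration_seconds`
    of wall-clock time have elapsed: the "duration" idea that replaces
    the fixed "times" count in the config (illustrative sketch)."""
    results = []
    deadline = time.time() + duration_seconds
    while time.time() < deadline:
        results.append(scenario())
    return results

def fake_boot_and_delete():
    time.sleep(0.002)  # stands in for the real API calls
    return "ok"

results = run_for_duration(fake_boot_and_delete, 0.02)
```

Unlike a fixed launch count, a time budget yields a comparable load window across clouds of very different speeds.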


This week, our work will be concentrated on the following:


We encourage you to take a look at the new patches in Rally pending review and to help us make Rally better.

Source code for Rally is hosted at GitHub: https://github.com/stackforge/rally
You can track the overall progress in Rally via Stackalytics: http://stackalytics.com/?release=all&metric=commits&project_type=All&module=rally
Open reviews for Rally: https://review.openstack.org/#/q/status:open+rally,n,z


Stay tuned.


Regards,
The Rally team



November 25, 2013

Hello stackers,

here is the second report on our activities in Rally development for the past week.

The main results that have been recently merged with master are as follows:

  • Changes in the benchmark engine: we have significantly restructured the format of the input benchmark config (https://review.openstack.org/#/c/56035/). The changes make it more transparent to the end-user as well as more flexible. This will enable us to implement new features in the benchmark engine like running tests periodically or for a given amount of time. We have also refactored the test code related to benchmark scenarios by replacing ugly-looking nested with-blocks for mocks with a more readable decorator syntax (https://review.openstack.org/#/c/57732/);
  • Further work on splitting the system logic between the two basic entities, namely the deployment and the benchmark task. While still having the legacy combined config that contains information both on the deployment and on the benchmarks, we have come close to the point where we can completely split everything related to these two entities. To be more precise, during the last week we have done the following:
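The nested-with-blocks-to-decorators change in the test code can be illustrated side by side. The module and class names below (osclients, list_flavors) are invented stand-ins for illustration; the pattern itself is the standard unittest.mock one.

```python
import unittest
from unittest import mock

class osclients:
    """Stand-in for a module whose Clients() the tests patch."""
    @staticmethod
    def Clients():
        raise RuntimeError("would reach a real cloud")

def list_flavors():
    return [f.name for f in osclients.Clients().nova.flavors.list()]

class TestOldStyle(unittest.TestCase):
    def test_list_flavors(self):
        # The ugly-looking nested with-block style being replaced:
        with mock.patch.object(osclients, "Clients") as clients:
            flavor = mock.Mock()
            flavor.name = "m1.tiny"
            clients.return_value.nova.flavors.list.return_value = [flavor]
            self.assertEqual(["m1.tiny"], list_flavors())

class TestDecoratorStyle(unittest.TestCase):
    # The same test in the more readable decorator syntax: the mock is
    # declared once and injected as an argument.
    @mock.patch.object(osclients, "Clients")
    def test_list_flavors(self, clients):
        flavor = mock.Mock()
        flavor.name = "m1.tiny"
        clients.return_value.nova.flavors.list.return_value = [flavor]
        self.assertEqual(["m1.tiny"], list_flavors())
```

With several mocks per test, the decorator form avoids a pyramid of indentation, which is exactly the readability gain the refactoring was after.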


Our plan for the current week comprises:


We encourage you to take a look at the new patches in Rally pending review and to help us make Rally better.

Source code for Rally is hosted at GitHub: https://github.com/stackforge/rally
You can track the overall progress in Rally via Stackalytics: http://stackalytics.com/?release=all&metric=commits&project_type=All&module=rally
Open reviews for Rally: https://review.openstack.org/#/q/status:open+rally,n,z


Stay tuned.


Regards,
The Rally team



November 18, 2013

Hello stackers,

here is the first issue of our weekly update notes on Rally, a Benchmark-as-a-Service project for OpenStack. Once a week we are going to post a few remarks on what we have done and what we plan to implement in Rally during the next week.


During the past week we have been focusing our efforts on two main aspects of Rally development:

  • Splitting the Rally workflow into 2 parts: the OpenStack deployment part and the benchmark tasks running part. Both have been previously treated by the system as a single process configured once by the end user. Separation of deployment from benchmark tasks, however, allows one to reuse existing deployments. The current results here are:


We have also recently received several e-mails notifying us of a possible issue in the soft/hard server reboot benchmark scenario. We would like to thank all of you who reported the problem. We will try to fix it as soon as possible.


The Rally roadmap for the next week goes as follows:

  • Continue the work on separating OpenStack deployments from benchmark tasks: introduce the necessary CLI commands, integrate the Deployment class with deploy engines, rewrite the orchestrator part to support the separated deployments and benchmarks;
  • Implement multihost OpenStack deployment engine using LXC;
  • Add two new capabilities to the benchmark runner:
  • Benchmark launching for a given period of time (rather than a fixed number of times);
  • Launching several benchmarks with configured intervals.
  • Improve the benchmark config format to make it both more flexible and more clear for the end user;
  • Implement generic cleanup for our benchmark scenarios;
  • Work on automated output data processing and drawing plots for benchmark results.


Several patches addressing the above tasks are already available on Gerrit code review. You are welcome to take a look at them.

Source code for Rally is hosted at GitHub: https://github.com/stackforge/rally
Open reviews for Rally: https://review.openstack.org/#/q/status:open+rally,n,z


Stay tuned.


Regards,
The Rally team