

Weekly updates - December 2013

December 23, 2013

Hello stackers,

here is the update for the past week. From all the work we've completed this week, we would like to highlight the following:

  • A new execution type, namely the periodic execution type, has been added to the benchmark engine (https://review.openstack.org/#/c/57628/). The benchmark engine can now launch a given benchmark scenario once in a specified period of time, thus creating a load which closely resembles real-world load patterns. For example, you can now ask Rally to launch the "boot and delete server" scenario 50 times, booting a new server every 3 minutes (see the sketch after this list). This requires only slight changes in the input configuration file;
  • We've started working on replacing the old FUEL/OSTF-based cloud verification mechanism with a new one based on Tempest. While the patches implementing the Tempest integration are still in progress, we've already gotten rid of all the FUEL/OSTF code in Rally (https://review.openstack.org/#/c/63653/), which has been a great code cleanup for our project and has also reduced the number of Rally's requirements.
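
To make the 3-minute example above concrete, below is a rough sketch of what the relevant part of a task configuration could look like. The "execution", "times" and "period" keys are assumptions drawn from the description above rather than the exact Rally schema, and the real input file is plain JSON (shown here as a Python dict):

    # Hypothetical task snippet; key names are assumptions, not the exact
    # Rally input format.
    periodic_task = {
        "NovaServers.boot_and_delete_server": {
            "execution": "periodic",  # use the new periodic execution type
            "times": 50,              # launch the scenario 50 times in total
            "period": 180,            # start a new iteration every 3 minutes (in seconds)
        }
    }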


Our current work is concentrated on:

  • Adding another execution type to the benchmark engine: the stress execution type, which lets the user easily specify a benchmark scenario with an automatically increasing load of active users (say, from 10 to 100 in steps of 5). Such a benchmark run will also halt automatically as soon as the cloud starts to fail too frequently: the corresponding maximal failure rate can also be set in the input configuration file (https://review.openstack.org/#/c/63055/); a rough sketch follows this list;
  • Further creation of new benchmark scenarios for Rally. The most interesting scenario of the past week was arguably the one that boots a server and then lets the user run a custom script on that server via SSH (https://review.openstack.org/#/c/63138/).
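
As a rough illustration of the stress execution type described above, the configuration could express the growing load and the failure threshold roughly as follows; all key names here are assumptions based on the description, not the final Rally schema:

    # Hypothetical task snippet for the stress execution type.
    stress_task = {
        "NovaServers.boot_and_delete_server": {
            "execution": "stress",                            # assumed name of the new type
            "active_users": {"start": 10, "end": 100, "step": 5},
            "max_failure_rate": 0.1,                          # halt once >10% of runs fail
        }
    }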


We encourage you to take a look at the new patches in Rally pending review and to help us make Rally better.

Source code for Rally is hosted at GitHub: https://github.com/stackforge/rally
You can track the overall progress in Rally via Stackalytics: http://stackalytics.com/?release=all&metric=commits&project_type=All&module=rally
Open reviews for Rally: https://review.openstack.org/#/q/status:open+rally,n,z


Stay tuned.


Regards,
The Rally team


December 16, 2013

Hello stackers,

this week has been very fruitful for Rally, and below we share with you some of our most important recent results:

  • Deployment & benchmark workflows have now become completely separate. While previously you specified an OpenStack deployment and the benchmarks to run on it in one bulk configuration, Rally now requires you to first create a deployment and then reference that deployment when launching benchmark scenarios (https://review.openstack.org/#/c/57455/). This, among other things, allows you to re-use a single deployment for many benchmarking tasks. You are highly encouraged to check out our updated "How-to" page, where the process of managing deployments and using them in benchmarking tasks is explained in more detail;
  • Support for resource tracking has been added to the LXC server provider. This was the last server provider that didn't support the resource tracking functionality implemented during the previous week, and with this patch (https://review.openstack.org/#/c/60930/) we finish integrating that functionality into the server providers;
  • Adding input config validation to several deployment engines and server providers: now that we have implemented the common config validation procedure last week, processing the input configuration for deployment engines and server providers has mostly become a matter of writing correct JSON schemas that reflect engine-specific details. Recently we have merged such schemas for the devstack engine (https://review.openstack.org/#/c/57226/), the dummy engine (https://review.openstack.org/#/c/57239/) and the OpenStack server provider (https://review.openstack.org/#/c/60275/); a minimal illustration follows this list;
  • New benchmark scenarios for the Nova API. We are proud to see that our community is starting to grow faster and that new contributors are joining. A contribution from one of our newcomers (QianLin from Huawei) is a benchmark scenario that exercises the Nova server rescue/unrescue API (https://review.openstack.org/#/c/61688/).
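
The schema-based validation mentioned above essentially boils down to declaring a JSON schema per engine or provider and checking the input configuration against it. Below is a minimal illustration using the jsonschema library; the schema itself is a made-up example, not the actual devstack engine schema:

    import jsonschema

    # Made-up example schema for a deploy engine that needs an "auth_url"
    # string and optionally an integer number of controllers.
    CONFIG_SCHEMA = {
        "type": "object",
        "properties": {
            "auth_url": {"type": "string"},
            "controllers": {"type": "integer", "minimum": 1},
        },
        "required": ["auth_url"],
    }

    def validate_config(config):
        # Raises jsonschema.ValidationError if the input config does not
        # conform to the engine-specific schema.
        jsonschema.validate(config, CONFIG_SCHEMA)

    validate_config({"auth_url": "http://example.com:5000/v2.0", "controllers": 2})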


The working plan for this week encompasses:

  • Adding more diverse benchmark scenarios to Rally;
  • Adding out-of-the-box support for stress testing: enhancing Rally's benchmark engine with the ability to stop automatically when too many benchmarks start to fail, which is often the case when a significant number of benchmark scenarios is launched against one cloud (i.e. stress testing). This will also require slight changes in the input config format;
  • Further work on deploy engines: the high-priority work is to finish the implementation of the FUEL (https://review.openstack.org/#/c/61963/) and multihost (https://review.openstack.org/#/c/57240/) deploy engines;
  • Code refactoring, which this week is concentrated on unit tests: the goal is to move certain "Fake" classes commonly used for testing into a special utils-like module (https://review.openstack.org/#/c/62191/), to avoid code duplication when using these Fake objects (https://review.openstack.org/#/c/62193/), and also to ensure that the correct decorator syntax is used for mocking, which is still not the case in many unit tests (see the sketch after this list).
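
Regarding the mocking point above, "correct decorator syntax" refers to applying mock.patch as a decorator, so that the patch is set up before the test and torn down afterwards automatically. The example below patches a standard-library function rather than Rally code, purely to show the pattern:

    import os
    import unittest
    from unittest import mock  # the standalone "mock" package on Python 2

    class UtilsTestCase(unittest.TestCase):

        # The patch is applied as a decorator and the mock object is passed
        # into the test method as an argument.
        @mock.patch("os.listdir")
        def test_listing_is_mocked(self, mock_listdir):
            mock_listdir.return_value = ["fake_file"]
            self.assertEqual(["fake_file"], os.listdir("/nonexistent"))

    if __name__ == "__main__":
        unittest.main()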


We encourage you to take a look at the new patches in Rally pending review and to help us make Rally better.

Source code for Rally is hosted at GitHub: https://github.com/stackforge/rally
You can track the overall progress in Rally via Stackalytics: http://stackalytics.com/?release=all&metric=commits&project_type=All&module=rally
Open reviews for Rally: https://review.openstack.org/#/q/status:open+rally,n,z


Stay tuned.


Regards,
The Rally team



December 9, 2013

Hello stackers,

There has been much activity during the past week in Rally, and several significant patches have been merged recently:

  • Splitting the deployment & benchmark workflows is coming to an end!

The last blocker was that resources allocated by server providers were stored in the memory of the Rally process instead of in permanent storage (e.g. the DB). During this week we added a new Resource table to the DB and switched almost all server providers (except LXC) to use the DB instead of in-memory storage. Now we need to switch the LXC provider, and then we will be able to merge the final patch addressing the splitting task (https://review.openstack.org/#/c/57455/). A rough sketch of what such a table could look like is shown below.
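
As a hedged illustration only (not Rally's actual model), a Resource table of the kind described above could be declared with SQLAlchemy roughly like this; all column names are assumptions:

    from sqlalchemy import Column, Integer, String, Text
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Resource(Base):
        """Hypothetical sketch of a table tracking provider-allocated resources."""
        __tablename__ = "resources"

        id = Column(Integer, primary_key=True)
        provider_name = Column(String(255))  # which server provider allocated it
        type = Column(String(255))           # e.g. "server" or "floating_ip"
        info = Column(Text)                  # provider-specific details, e.g. a JSON blob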

  • Generic cloud cleanup after benchmark scenarios.

Performing a generic cleanup after launching benchmark scenarios is essential for guaranteeing that the cloud stays clean after benchmarking and, besides, frees benchmark scenario writers from having to worry about deleting all the resources they create in init() methods or in specific benchmark procedures (https://review.openstack.org/#/c/55695/). "Generic" means that Rally should free all possible kinds of allocated resources: servers, images etc. (a purely illustrative sketch of the idea follows).
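
Conceptually, a generic cleanup walks over the resource managers reachable through the OpenStack clients and deletes whatever is left. The following is only an illustrative sketch under that assumption, not Rally's actual implementation:

    def generic_cleanup(resource_managers):
        """Best-effort deletion of all leftover resources.

        ``resource_managers`` is assumed to be an iterable of managers that
        expose the usual ``list()`` call and return objects with ``delete()``
        (e.g. ``nova.servers``, ``glance.images``); purely illustrative.
        """
        for manager in resource_managers:
            for resource in manager.list():
                try:
                    resource.delete()
                except Exception:
                    # A resource may already be gone or still transitioning;
                    # cleanup should not abort because of a single failure.
                    pass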

  • Code refactoring:
  1. Fixing a structure issue in the folder with Rally configuration samples (https://review.openstack.org/#/c/59259/);
  2. Renaming the ServerDTO -> Server entity (used by server providers) to improve code readability (https://review.openstack.org/#/c/59749/).


A wide variety of new contributions to Rally is still under development and pending review:

  • Enriching the benchmark engine with a mechanism for cloud initialization before launching benchmark scenarios. Support for init() methods in benchmark scenario classes was actually already implemented in Rally 0.1, but it has been broken since the creation of multiple temporary tenants/users for benchmarking was introduced to Rally: resources (servers, floating IPs etc.) created in init() no longer belonged to the appropriate temporary tenants/users and thus could not be used in benchmark scenarios. There is now a patch (https://review.openstack.org/#/c/59782/) that fixes this issue by calling init() once for each temporary user, thus creating the appropriate resources (servers, floating IPs etc.) for every temporary OpenStack user that may be involved in benchmarking (a purely illustrative sketch follows this list). This patch also brings with it a couple of smaller patches that improve the performance of the OpenStack Python clients (https://review.openstack.org/#/c/59780/, https://review.openstack.org/#/c/59781/) and thus optimize the creation of resources for temporary users in init() methods;
  • Glance benchmark scenarios: while all previous benchmark scenarios were focused on capabilities provided by the Nova API, this patch is the first contribution towards benchmarking other core OpenStack projects. The patch (https://review.openstack.org/#/c/60469/) currently implements 2 basic scenarios: creating/deleting images, and creating images and using them to boot servers;
  • Code refactoring: there are currently two patches dedicated to unifying the deploy engine and server provider code; they implement the input configuration validation logic common to all deploy engines and server providers respectively (https://review.openstack.org/#/c/57222/, https://review.openstack.org/#/c/60030/). These patches are followed by ones concentrated on the deploy engine- and server provider-specific parts of config validation (https://review.openstack.org/#/c/57239/, https://review.openstack.org/#/c/57226/, https://review.openstack.org/#/c/60275/).
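
To make the init() mechanics above more tangible, here is a purely illustrative sketch of a scenario class whose init() prepares resources for one temporary user and hands them over through the context dictionary; the class, method and argument names follow the description above, not necessarily the actual Rally code:

    class BootFromPreparedServer(object):
        """Illustrative sketch only, not actual Rally scenario code."""

        def __init__(self, clients):
            # ``clients`` stands for the OpenStack clients of one temporary
            # user; init() is assumed to be called once per such user.
            self.clients = clients

        def init(self, config):
            # Create the resources this user's runs will need and return
            # them through the context dictionary (complex objects allowed).
            server = self.clients.servers.create(
                "prepared-server", config["image"], config["flavor"])
            return {"prepared_server": server}

        def run(self, context):
            # Every iteration for this user can now reuse the prepared server.
            context["prepared_server"].reboot()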


We encourage you to take a look at the new patches in Rally pending review and to help us make Rally better.

Source code for Rally is hosted at GitHub: https://github.com/stackforge/rally
You can track the overall progress in Rally via Stackalytics: http://stackalytics.com/?release=all&metric=commits&project_type=All&module=rally
Open reviews for Rally: https://review.openstack.org/#/q/status:open+rally,n,z


Stay tuned.


Regards,
The Rally team



December 2, 2013

Hello stackers,

below you will find the latest overview of our activities in Rally over the past week.

Our achievements for the end of November comprise:

  • Numerous changes in the benchmark engine, the most important among which are:
  • Rally is now able not only to perform a specified number of benchmark scenario launches, but also to create a continuous load on the cloud by running any scenario for a given period of time. For example, you can now boot and delete servers in the cloud continuously from a number of temporary users, say, for 10 minutes, thus simulating a stress load on the cloud. To do so, the only thing you need to change in your configuration file is the "times" field of the benchmark scenario you are going to launch, which should be replaced with a "duration" field set to the number of minutes the benchmark is expected to run (https://review.openstack.org/#/c/56036/); a small sketch of this change follows the list;
  • Access to OpenStack clients with administrator permissions is now available to all scenarios through the admin_clients() method of the base Scenario class. Before this update, this class provided only the clients() method, which returned a reference to a non-admin OpenStack client. This, however, turned out not to be enough for the Keystone-based benchmark scenarios that are to come in future releases (https://review.openstack.org/#/c/58381/);
  • A bugfix for the init() methods of benchmark scenarios, which now enables benchmark scenario writers to pass through the context dictionary (the dictionary that init() returns) not only primitive objects like strings or numbers but also complex ones such as references to specially prepared servers or floating IPs (https://review.openstack.org/#/c/55465/).
  • The work on separating the deployment and task entities mentioned in previous updates has now come close to its successful conclusion. The main results here include:
  • Server provider for OpenStack: another ServerProvider class that wraps the functionality of python-novaclient behind the default ServerProvider interface (the create_vms() and destroy_vms() methods). Along with the lxc and virsh server providers already present in the system, it constitutes the essential basis for working with different virtualization technologies (https://review.openstack.org/#/c/48811);
  • The first contribution to data processing and visualization in Rally: a new CLI command for tasks has been added, namely "plot aggregated", which draws plots illustrating the cloud's performance in one of the finished benchmark tasks. The command requires the user to indicate the parameter for which the plots will be drawn. For example, if one specifies active_users as the aggregating parameter, Rally will draw a plot showing how the number of active users making requests to the cloud affects the runtime of benchmark scenarios. The code uses the matplotlib library to draw the plots (https://review.openstack.org/#/c/52712/); an illustrative matplotlib sketch also follows the list.
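
As a small sketch of the "times" to "duration" change described above: the field names come from the description, while the scenario name and surrounding structure are assumptions, and the real input file is JSON (shown here as Python dicts):

    # Before: run the scenario a fixed number of times.
    config_by_count = {"NovaServers.boot_and_delete_server": {"times": 50}}

    # After: keep launching the scenario continuously for 10 minutes.
    config_by_duration = {"NovaServers.boot_and_delete_server": {"duration": 10}}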
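
The aggregated plots mentioned in the last point are easy to picture: scenario runtime plotted against the aggregating parameter. A minimal matplotlib sketch with made-up sample numbers:

    import matplotlib.pyplot as plt

    # Made-up sample data: average scenario runtime (in seconds) observed
    # for different numbers of active users.
    active_users = [1, 5, 10, 20, 50]
    avg_runtime = [2.1, 2.4, 3.0, 4.2, 7.8]

    plt.plot(active_users, avg_runtime, marker="o")
    plt.xlabel("active_users")
    plt.ylabel("average scenario runtime, s")
    plt.title("boot_and_delete_server (illustrative data)")
    plt.show()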


This week, our work will be concentrated on the following:


We encourage you to take a look at the new patches in Rally pending review and to help us make Rally better.

Source code for Rally is hosted at GitHub: https://github.com/stackforge/rally
You can track the overall progress in Rally via Stackalytics: http://stackalytics.com/?release=all&metric=commits&project_type=All&module=rally
Open reviews for Rally: https://review.openstack.org/#/q/status:open+rally,n,z


Stay tuned.


Regards,
The Rally team