Rally/Updates
Contents
- 1 Weekly updates
- 1.1 March 24, 2014
- 1.2 March 17, 2014
- 1.3 March 10, 2014
- 1.4 March 3, 2014
- 1.5 February 24, 2014
- 1.6 February 17, 2014
- 1.7 February 10, 2014
- 1.8 February 03, 2014
- 1.9 January 27, 2014
- 1.10 January 20, 2014
- 1.11 January 13, 2014
- 1.12 December 23, 2013
- 1.13 December 16, 2013
- 1.14 December 9, 2013
- 1.15 December 2, 2013
- 1.16 November 25, 2013
- 1.17 November 18, 2013
Weekly updates
March 24, 2014
Hello stackers,
with great pleasure we've been watching the number of patches pending for review in Rally grow rapidly over the past week. Indeed, our community keeps getting larger (including one new core developer this week) and much more active as well. The highlights of our recent efforts are as follows:
- New benchmark scenarios, including those that test creation performance in Keystone, and one that boots a server and then issues the "servers list" command;
- Introduction of the REST API basics which ultimately will make it possible to use Rally as a Service;
- A wide range of nice local code improvements that bring more consistency and simplicity to the code; among others, let's mention:
- a patch that removes quite a lot of unused code in server providers;
- elimination of duplicated code in logging wrappers via their unification;
- improvement of the scenario runner output by switching from plain dictionaries to a slightly more involved ScenarioRunnerResult class which automatically checks the format correctness of results, thus making the code much more reliable.
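For the curious, the idea behind such a result class can be sketched roughly as follows; the field names and checks here are purely illustrative assumptions, not Rally's actual result schema:

```python
# Illustrative sketch of a result-validating class in the spirit of
# ScenarioRunnerResult. The field names and checks below are assumptions,
# not Rally's actual schema.

class ValidationError(Exception):
    pass

class ScenarioRunnerResult(dict):
    """A dict wrapper that checks the result format on construction."""

    REQUIRED = {"duration": float, "idle_duration": float,
                "error": list, "atomic_actions": dict}

    def __init__(self, result):
        for key, expected_type in self.REQUIRED.items():
            if key not in result:
                raise ValidationError("missing field: %s" % key)
            if not isinstance(result[key], expected_type):
                raise ValidationError("bad type for field: %s" % key)
        super().__init__(result)

# Well-formed results pass through unchanged; malformed ones fail fast
# instead of silently propagating into the result-aggregation code.
ok = ScenarioRunnerResult({"duration": 1.2, "idle_duration": 0.0,
                           "error": [], "atomic_actions": {}})
```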
This week, we are going to invest a lot of time in further code refactoring (in a range of areas), as well as in implementing new benchmark scenarios to make Rally applicable to more and more testing use cases.
We encourage you to take a look at new patches in Rally pending for review and to help us make Rally better.
Source code for Rally is hosted at GitHub: https://github.com/stackforge/rally
You can track the overall progress in Rally via Stackalytics: http://stackalytics.com/?release=icehouse&metric=commits&project_type=all&module=rally
Open reviews for Rally: https://review.openstack.org/#/q/status:open+rally,n,z
Stay tuned.
Regards,
The Rally team
March 17, 2014
Hello stackers,
the past week has resulted in the further refinement of different parts of the Rally code, of its CLI, as well as of the configuration file formats. Several of these changes have been driven by the ongoing integration of contexts into Rally (let us remind you that the notion of contexts is used to define different environments in which benchmark scenarios can be launched, e.g. environments with temporarily generated OpenStack users and/or with generic cleanup facilities). Some interesting changes include:
- An extensive and extremely important refactoring patch that brings different optimizations to the config validation step and to the CLI output/logging, and also (NB!) changes the input task configuration file format (take a look at the updated task configuration samples);
- An important step on the way to a complete Rally-Tempest integration is adding the ability to launch Tempest tests without "sudo";
- Refactoring of the OpenStack clients helper module, concentrated on reimplementing the "lazy" client handles in a more elegant way and also making them directly accessible, without auxiliary methods like rally.benchmark.utils.create_openstack_clients() that made the code unnecessarily complicated;
- We've also added several missing unit tests: the ones for the "deployment list" command as well as for the Authenticate benchmark scenario group.
This week, the directions of our efforts are going to be mostly defined by the changes from the refactoring patch mentioned earlier. A lot of stuff has to be rebased, while a couple of deferred important patches (including those implementing support for pre-created users in Rally or introducing the new "stress" scenario running strategy) will be brought to life again.
We encourage you to take a look at new patches in Rally pending for review and to help us make Rally better.
Source code for Rally is hosted at GitHub: https://github.com/stackforge/rally
You can track the overall progress in Rally via Stackalytics: http://stackalytics.com/?release=icehouse&metric=commits&project_type=all&module=rally
Open reviews for Rally: https://review.openstack.org/#/q/status:open+rally,n,z
Stay tuned.
Regards,
The Rally team
March 10, 2014
Hello stackers,
over the past week the direction of our efforts hasn't changed significantly: we are still working hard on further logical organization of the core parts of Rally which will enable the system to be even more extendable than it is now. Some important changes include:
- Further work on integrating the Context classes into Rally. Let us remind you that the notion of contexts is used to define different environments in which benchmark scenarios can be launched by Rally, e.g. an environment with temporarily generated OpenStack users and/or a context that enables generic cleanup for the benchmark scenarios. This week, we have added the base Context class with a unified interface and we have also rewritten some already existing context classes according to the base class API (https://review.openstack.org/#/c/78193/);
- Various fixes in the Devstack deploy engine, including the support for connecting to the VM with a user-password combination instead of a key-pair (https://review.openstack.org/#/c/77540/), minor bugfix in the cleanup procedure (https://review.openstack.org/#/c/70727/) and adding support for git branching (https://review.openstack.org/#/c/78225/);
- Many small but important improvements that make the code overall more readable, e.g. using the configuration files in appropriate places (https://review.openstack.org/#/c/78325/), moving a couple of helper methods for the benchmark engine to the correct modules (https://review.openstack.org/#/c/78524/), replacing the incorrect mocking syntax with the decorator-based one (https://review.openstack.org/#/c/78589/) and so on.
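To give a rough idea of what a unified context interface looks like, here is a minimal sketch: setup() prepares the environment, cleanup() tears it down, and the "with" protocol guarantees cleanup even when a benchmark fails. The method names and the toy UserContext below are our illustrative assumptions, not the actual Rally API:

```python
# Minimal sketch of a unified context interface; names are illustrative.
import abc

class Context(abc.ABC):
    def __init__(self, config):
        self.config = config

    @abc.abstractmethod
    def setup(self):
        """Create whatever the scenarios need (users, quotas, ...)."""

    @abc.abstractmethod
    def cleanup(self):
        """Undo everything setup() created."""

    def __enter__(self):
        self.setup()
        return self

    def __exit__(self, exc_type, exc, tb):
        self.cleanup()  # runs even if the benchmark raised

class UserContext(Context):
    """Toy stand-in for a temporary-users context."""

    def setup(self):
        self.users = ["user-%d" % i for i in range(self.config.get("users", 2))]

    def cleanup(self):
        self.users = []

with UserContext({"users": 3}) as ctx:
    created = list(ctx.users)  # the benchmark would run against these users
```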
This week, we are going to continue the work on the context classes for benchmark scenarios since this is going to be a tool which will make Rally really pluggable. Current tasks include:
- Changing the benchmark scenario input config format;
- Splitting the already existing validation procedures to different context classes in a logical way;
- Implementing the Context class factory (like we did with deploy engines or scenario runners);
and many others.
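The factory pattern mentioned above can be sketched in a few lines; the registration decorator and class names here are illustrative assumptions (Rally's actual factories may discover subclasses differently):

```python
# Sketch of a name-based class factory: subclasses register under a string
# key and the engine looks them up from the task config. Names are
# illustrative, not Rally's actual implementation.

class ContextBase:
    _registry = {}

    @classmethod
    def register(cls, name):
        def decorator(subclass):
            cls._registry[name] = subclass
            return subclass
        return decorator

    @classmethod
    def get(cls, name):
        try:
            return cls._registry[name]
        except KeyError:
            raise ValueError("no context registered under %r" % name)

@ContextBase.register("users")
class UsersContext(ContextBase):
    pass

@ContextBase.register("cleanup")
class CleanupContext(ContextBase):
    pass

# The benchmark engine can now instantiate contexts purely from config keys.
```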
We are also going to introduce several enhancement both to the task result output (in its HTML form) and to the code (by moving some common code to a special utils module).
We encourage you to take a look at new patches in Rally pending for review and to help us make Rally better.
Source code for Rally is hosted at GitHub: https://github.com/stackforge/rally
You can track the overall progress in Rally via Stackalytics: http://stackalytics.com/?release=icehouse&metric=commits&project_type=all&module=rally
Open reviews for Rally: https://review.openstack.org/#/q/status:open+rally,n,z
Stay tuned.
Regards,
The Rally team
March 3, 2014
Hello stackers,
the most important changes during the past week have been concentrated on further logical structuring of the core part of Rally, namely the benchmark engine, and include:
- Benchmark scenario arguments validation refactoring: we've moved the whole process from the ScenarioRunner to the BenchmarkEngine class (which is a much more logical place for that) and also added the support for admin-based and user-based validation differentiation (https://review.openstack.org/#/c/76162/);
- Context introduction, which is a very important novelty for Rally: from now on, we are going to use the notion of context to define different environments in which benchmark scenarios can be launched by Rally. The already existing UserGenerator (which creates temporary users) and ResourceCleanuper classes are in fact also contexts, so a natural first step was to move them to a special context module (https://review.openstack.org/#/c/77322/);
- Much work has been done around small fixes in the code (https://review.openstack.org/#/c/77170/, https://review.openstack.org/#/c/77192/) and in the unit tests (https://review.openstack.org/#/c/75877/, https://review.openstack.org/#/c/76884/), as well as around user experience improvements touching the CLI (https://review.openstack.org/#/c/76226/, https://review.openstack.org/#/c/76221/).
The current direction of our work is the further development of contexts, which will involve introducing new context classes as well.
We encourage you to take a look at new patches in Rally pending for review and to help us make Rally better.
Source code for Rally is hosted at GitHub: https://github.com/stackforge/rally
You can track the overall progress in Rally via Stackalytics: http://stackalytics.com/?release=icehouse&metric=commits&project_type=all&module=rally
Open reviews for Rally: https://review.openstack.org/#/q/status:open+rally,n,z
Stay tuned.
Regards,
The Rally team
February 24, 2014
Hello stackers,
this week, several important contributions have been made to Rally, considering both the overall system stability and the improvements of the user interface. To name a few:
- Vast refactoring of the ScenarioRunner class has made it possible to stop sharing OpenStack client objects between processes in the core of the system, which had occasionally caused bugs in Rally (https://review.openstack.org/#/c/74769/);
- Another important refactoring step resulted in the replacement of OpenStack endpoint dictionaries with special objects throughout the system, which has made the code more reliable and extendable (https://review.openstack.org/#/c/74425/);
- Perhaps the prettiest patch of the week was the introduction of a benchmark result visualization tool, implemented with the nvd3 plugin to d3.js (the actual charts are rendered to an HTML file). The graphs look really nice and will be of great use to those who want to share their benchmarking results (https://review.openstack.org/#/c/72970/);
- Several nice improvements in the CLI include the display of 90th and 95th percentile results in the benchmark summary (https://review.openstack.org/#/c/73522/) and a new show command which allows the user to quickly get information on the images/flavors/networks/etc. available in the current deployment (https://review.openstack.org/#/c/75699/).
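As a reminder of what those percentile values mean, here is a generic sketch of how a 90th/95th percentile can be computed from a list of scenario runtimes (this is textbook linear interpolation, not necessarily Rally's exact code):

```python
# Generic percentile computation over scenario runtimes (linear
# interpolation); a sketch, not necessarily Rally's exact implementation.

def percentile(values, percent):
    """Return the given percentile (0-100) of a list of numbers."""
    ordered = sorted(values)
    k = (len(ordered) - 1) * percent / 100.0
    lower = int(k)
    upper = min(lower + 1, len(ordered) - 1)
    return ordered[lower] + (ordered[upper] - ordered[lower]) * (k - lower)

runtimes = [1.2, 0.9, 1.5, 1.1, 2.4, 1.3, 1.0, 1.8, 1.4, 1.6]
p90 = percentile(runtimes, 90)  # 90% of the runs finished within this time
p95 = percentile(runtimes, 95)
```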
The ongoing work includes:
- An extension of the use command, which will soon be applicable not only to deployments but also to tasks (https://review.openstack.org/#/c/75936/);
- Further refactoring of the core benchmark engine, including the work around input configuration parameters validation (for a detailed description of what's going to be done, see this special document);
- After finishing some major refactoring procedures, we have also resumed the work on passing pre-created user endpoints to the DummyEngine (https://review.openstack.org/#/c/67720/) and on generating "stress" load on the cloud.
We encourage you to take a look at new patches in Rally pending for review and to help us make Rally better.
Source code for Rally is hosted at GitHub: https://github.com/stackforge/rally
You can track the overall progress in Rally via Stackalytics: http://stackalytics.com/?release=icehouse&metric=commits&project_type=all&module=rally
Open reviews for Rally: https://review.openstack.org/#/q/status:open+rally,n,z
Stay tuned.
Regards,
The Rally team
February 17, 2014
Hello stackers,
the first thing we would like to mention today is an extensive piece of work that has been done recently on our Wiki. We have updated both the main page and the basic tutorials: how to install Rally and how to use it. These tutorials have been simplified quite a lot and have been updated with new features that have been merged in Rally recently.
As for the actual updates in the Rally code, the main ones are as follows:
- The refactoring of the ScenarioRunner class has continued: the different benchmark execution strategies (continuous/periodic), which previously resided inside the original class, have been reimplemented via subclassing. The new ContinuousScenarioRunner and PeriodicScenarioRunner classes make the code much more readable (no more complicated if...else logic for choosing the appropriate execution strategy) and extendable, so that it is now very easy to add your own ScenarioRunner (https://review.openstack.org/#/c/70771/);
- We've successfully started the work on Tempest & Rally integration in order for the latter to be able to perform OpenStack deployment verification procedures as well as to possibly add new Tempest-based benchmark scenarios (https://review.openstack.org/#/c/70131/);
- On the way to adding full support for benchmarking with predefined OpenStack users (instead of using generated ones), we've refactored the Endpoint entity, making it able to distinguish between administrator/user permissions. Besides, each deployment is now stored in the database with a list of endpoints instead of only one endpoint, since we are going to enable the DummyEngine to take several endpoints as its input (https://review.openstack.org/#/c/67154/);
- One of the future features of Rally will be Heat-based benchmark scenarios which will make it possible to test VM performance. This week, we have started contributing to this as well (https://review.openstack.org/#/c/72749/);
- Among the many other simpler refactoring patches finished this week, we'd like to mention the one that improves the CLI code by structuring it into submodules (https://review.openstack.org/#/c/73059/).
The ongoing work includes:
- Further changes needed for the DummyEngine in order to be able to accept predefined user endpoints (instead of a single admin endpoint) and for the ScenarioRunner to use them in benchmarks (https://review.openstack.org/#/c/67643/, https://review.openstack.org/#/c/67710/, https://review.openstack.org/#/c/67720/);
- A new benchmark result visualization tool based on the nvd3 plugin to d3.js (the actual charts are rendered to an HTML file). This also includes a new CLI command rally task plot2html <task_uuid> (https://review.openstack.org/#/c/72970/);
- Adding 90th and 95th percentile results to the CLI output for benchmark runtimes, i.e. printing the runtime ranges that encompass 90% and 95% of benchmark runs respectively (https://review.openstack.org/#/c/73522/);
- Continuing the work on Rally & Tempest integration, new deployment engine types etc.
We encourage you to take a look at new patches in Rally pending for review and to help us make Rally better.
Source code for Rally is hosted at GitHub: https://github.com/stackforge/rally
You can track the overall progress in Rally via Stackalytics: http://stackalytics.com/?release=icehouse&metric=commits&project_type=all&module=rally
Open reviews for Rally: https://review.openstack.org/#/q/status:open+rally,n,z
Stay tuned.
Regards,
The Rally team
February 10, 2014
Hello stackers,
the past week has been extremely successful for us in terms of overall Rally code improvement, bugfixing and new feature implementation. Rally is well on its way to becoming an easy-to-understand and easy-to-use piece of software that can be used by everyone interested in it.
The most important contributions to Rally made during the past week are as follows:
- The code refactoring work has been quite extensive:
- We have carried out a drastic rearrangement of the ScenarioRunner class (which is responsible for the actual benchmark method calls using a particular benchmarking strategy) by moving some code out of this class into new context classes. This change also enabled Rally to correctly process all errors occurring on the cloud during benchmarking/cleanup (https://review.openstack.org/#/c/69886/);
- Another significant contribution is the sshutils module refactoring, which involves the API improvement as well as the new ability to process the stdin data (https://review.openstack.org/#/c/68063/);
- Finally, very nice work has been done on refactoring the benchmark scenarios by moving the hardcoded timeout and cloud poll interval values to rally.conf (https://review.openstack.org/#/c/71272/).
- Very important bugfixes addressing the improper implementation of OpenStack resource deletion (https://review.openstack.org/#/c/66856/) and benchmark timeout handling (https://review.openstack.org/#/c/72103/) have been merged this week as well;
- Our set of available benchmark scenarios has been expanded with benchmark scenarios for Glance: they include a scenario for adding and deleting an image and a scenario for booting several instances from a previously added image (https://review.openstack.org/#/c/60469/).
The current work encompasses the following directions:
- Further scenario runners refactoring: we are now reimplementing the different benchmark execution strategies (continuous/periodic executions) via subclasses of the base ScenarioRunner class thus making the code much more readable and extendable (https://review.openstack.org/#/c/70771/);
- Reimplementing the patches for DummyEngine refactoring (making it able to work with a predefined set of users instead of the generated ones) based on the updated scenario runners (https://review.openstack.org/#/c/67154/, https://review.openstack.org/#/c/67643/, https://review.openstack.org/#/c/67710/, https://review.openstack.org/#/c/67720/);
- Rally & Tempest integration is a new ambitious piece of work we are conducting now. Tempest is going to be used inside Rally as a base for the cloud verification functionality in Rally as well as for new benchmark scenarios that use Tempest (https://review.openstack.org/#/c/70131/).
- We are now putting much effort into finishing the work on new deploy engines: the MultihostEngine (https://review.openstack.org/#/c/57240/), the LxcEngine (https://review.openstack.org/#/c/56222/) and the FuelEngine (https://review.openstack.org/#/c/61963/).
We encourage you to take a look at new patches in Rally pending for review and to help us make Rally better.
Source code for Rally is hosted at GitHub: https://github.com/stackforge/rally
You can track the overall progress in Rally via Stackalytics: http://stackalytics.com/?release=all&metric=commits&project_type=All&module=rally
Open reviews for Rally: https://review.openstack.org/#/q/status:open+rally,n,z
Stay tuned.
Regards,
The Rally team
February 03, 2014
Hello stackers,
our efforts during the past week were heavily focused on code refactoring and bugfixing. Among the most significant contributions are:
- A fix for certain inconsistencies in the code that checks the availability of OpenStack resources, e.g. whether a particular resource has been deleted or not (https://review.openstack.org/#/c/66856/);
- The work on refactoring the scenario runner to make its code cleaner (https://review.openstack.org/#/c/69846/).
Several novelties have been introduced to Rally:
- After having developed the abstract validators mechanism, we have developed a couple of useful concrete validators as well: the one that checks that the image indicated in the config for, say, the NovaServers.boot_and_delete_server benchmark scenario really exists and can be used (https://review.openstack.org/#/c/68055/) and another validator that does the same for flavors (https://review.openstack.org/#/c/70082/). Both validators have been attached to benchmark scenarios where they are of great use.
- We've implemented a mechanism for measuring the time taken by atomic actions in our benchmark scenarios (https://review.openstack.org/#/c/69828/): e.g. Rally now outputs not only how long it took the cloud to boot and delete a single server (in the NovaServers.boot_and_delete_server scenario), but also, separately, how much time it took to boot the server and how much to delete it.
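The mechanism boils down to timing named sub-steps of a scenario; a minimal sketch (with illustrative names, not Rally's exact API) could look like this:

```python
# Timing named sub-steps ("atomic actions") of a scenario with a context
# manager; names here are illustrative, not Rally's exact API.
import time
from contextlib import contextmanager

class Scenario:
    def __init__(self):
        self.atomic_actions = {}

    @contextmanager
    def atomic_action_timer(self, name):
        start = time.time()
        try:
            yield
        finally:
            # record the duration even if the action raised
            self.atomic_actions[name] = time.time() - start

scenario = Scenario()
with scenario.atomic_action_timer("nova.boot_server"):
    time.sleep(0.01)  # stands in for the real boot call
with scenario.atomic_action_timer("nova.delete_server"):
    time.sleep(0.01)  # stands in for the real delete call
```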
This week there is still a huge amount of work to be done on refactoring the fundamental code in Rally. Among other things, we are now rewriting the ScenarioRunner class, which is the tool for launching benchmark scenarios (https://review.openstack.org/#/c/69886/), so that its functionality is split into several context classes (responsible for temporary user management and resource cleanup after benchmarking), and we are also implementing different scenario launching strategies via inheritance (https://review.openstack.org/#/c/70771/).
We continue implementing new features in Rally as well. One example is the ongoing work on atomic action runtime measurement: it is about to be supported by the CLI, which will display this detailed runtime information in a user-friendly way (https://review.openstack.org/#/c/70362/).
We encourage you to take a look at new patches in Rally pending for review and to help us make Rally better.
Source code for Rally is hosted at GitHub: https://github.com/stackforge/rally
You can track the overall progress in Rally via Stackalytics: http://stackalytics.com/?release=all&metric=commits&project_type=All&module=rally
Open reviews for Rally: https://review.openstack.org/#/q/status:open+rally,n,z
Stay tuned.
Regards,
The Rally team
January 27, 2014
Hello stackers,
we are happy to share our recent updates in Rally:
- We've done a very nice job of improving the command line interface for Rally. This has been done through several separate, relatively small enhancements which together result in a much more positive user experience when using Rally. These improvements comprise:
- The rally use deploy command, which allows you to specify the deployment you are working with only once, so that there is no need to write down the long deployment id string every time you want to launch a benchmarking task (https://review.openstack.org/#/c/68395/);
- The ability to create a deployment from the environment variables, if they are specified (https://review.openstack.org/#/c/68347/);
- More useful deployment-related output from CLI after a deployment has been created (https://review.openstack.org/#/c/68766/);
- The deployment check subcommand which verifies that keystone endpoints are reachable and prints user-friendly information on that (https://review.openstack.org/#/c/68901/).
- A new feature, namely the validators, has been added to Rally. The validators are essentially checker methods that can be bound to different benchmark scenarios and are called by Rally before the actual benchmarking starts to check whether the resources needed by that benchmark are available etc. (https://review.openstack.org/#/c/67157/);
- There has been a huge work on refactoring the benchmark engine code to improve its quality and make it more object-oriented (https://review.openstack.org/#/c/68593/). This work is going to be continued during the next weeks;
- The FUEL client for Rally is now ready to use (https://review.openstack.org/#/c/59943/). This work is going to be followed by the FUEL deploy engine, which is currently in progress (https://review.openstack.org/#/c/61963/).
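To illustrate how such validators can be bound to scenarios and run before benchmarking starts, here is a minimal sketch; the decorator, signatures and the image check below are assumptions for illustration only:

```python
# Binding validators to a benchmark scenario with a decorator and running
# them before the actual benchmarking. All names are illustrative.

class ValidationResult:
    def __init__(self, is_valid, msg=""):
        self.is_valid = is_valid
        self.msg = msg

def validate(*validators):
    """Attach validator callables to a benchmark scenario function."""
    def decorator(func):
        func.validators = list(validators)
        return func
    return decorator

def image_exists(config, available_images=("cirros-0.3.1",)):
    image = config.get("image")
    if image in available_images:
        return ValidationResult(True)
    return ValidationResult(False, "image %r not found" % image)

@validate(image_exists)
def boot_and_delete_server(config):
    pass  # the actual benchmark logic would live here

def run_validators(scenario_func, config):
    return [v(config) for v in getattr(scenario_func, "validators", [])]

results = run_validators(boot_and_delete_server, {"image": "cirros-0.3.1"})
```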
The basic plan for this week consists of the following tasks:
- We are about to finish the implementation of benchmark launching with predefined users (instead of the temporary ones generated automatically by Rally). This work also includes a set of changes in the Dummy deploy engine, which is now going to accept as its input not a single set of endpoints with administrator permissions, but a list of endpoints that can all have ordinary user permissions; in that case, these endpoints will be used instead of temporary users during benchmarking (https://review.openstack.org/#/c/67154/, https://review.openstack.org/#/c/67643/, https://review.openstack.org/#/c/67710/, https://review.openstack.org/#/c/67720/);
- The work on the stress execution of benchmarks is going to be completed as well. It will, however, differ a little from the originally planned one: instead of creating a separate execution option in the input config, we will just extend the already existing continuous and periodic ones (https://review.openstack.org/#/c/63055/).
We encourage you to take a look at new patches in Rally pending for review and to help us make Rally better.
Source code for Rally is hosted at GitHub: https://github.com/stackforge/rally
You can track the overall progress in Rally via Stackalytics: http://stackalytics.com/?release=all&metric=commits&project_type=All&module=rally
Open reviews for Rally: https://review.openstack.org/#/q/status:open+rally,n,z
Stay tuned.
Regards,
The Rally team
January 20, 2014
Hello stackers,
here are the updates in our project that deserve to be mentioned in the first place:
- Rally now supports automated documentation generation using Sphinx. The docs are generated from the docstrings in the code (https://review.openstack.org/#/c/66092/);
- It is now possible to perform Rally installation from DevStack (https://review.openstack.org/#/c/65765/). This has been implemented using the so-called devstack extras;
- We continue to put lots of effort into refactoring both the source code of Rally and its unit tests. During the past week, we have improved the handling of the cloud endpoints returned by deploy engines in our system (https://review.openstack.org/#/c/66277/, https://review.openstack.org/#/c/67276/) and improved the code readability of the deploy engine unit tests as well (https://review.openstack.org/#/c/67275/, https://review.openstack.org/#/c/66838/).
- Last but not least, certain work has been accomplished concerning the Rally CLI: we've enhanced the CLI output while working with cloud deployments (https://review.openstack.org/#/c/66314/) and fixed a bug related to the "task detailed" command (https://review.openstack.org/#/c/67830/).
The main directions of our current work are as follows:
- Providing Rally with a REST API: an extensive work is being conducted both on the server side (https://review.openstack.org/#/c/66788/, https://review.openstack.org/#/c/67346/) and on the python-rallyclient (https://review.openstack.org/#/c/66919/);
- Making Rally able to work with a predefined set of users when launching benchmark scenarios; this is particularly important when the user does not want to pass admin credentials to Rally but has a set of predefined users which he or she would like to use for benchmarking. Several patches that implement this are ready for review (for reference, see https://blueprints.launchpad.net/rally/+spec/benchmarking-with-predefined-users);
- Enhancing Rally with so-called validators: special methods that check that certain conditions hold for the cloud before benchmarking starts. These validations can include checks for the availability of certain resources in the cloud, making sure that volumes are of the desired size, etc. We aim to support fully customizable user-defined validators (https://review.openstack.org/#/c/67157/).
We encourage you to take a look at new patches in Rally pending for review and to help us make Rally better.
Source code for Rally is hosted at GitHub: https://github.com/stackforge/rally
You can track the overall progress in Rally via Stackalytics: http://stackalytics.com/?release=all&metric=commits&project_type=All&module=rally
Open reviews for Rally: https://review.openstack.org/#/q/status:open+rally,n,z
Stay tuned.
Regards,
The Rally team
January 13, 2014
Hello stackers,
we've recovered from the New Year holidays and have already accomplished a range of tasks. Here are some recent updates in Rally:
- Benchmark scenarios for Cinder have been added to Rally. Those include creating/deleting volumes, as well as just volume creation: recall that those volumes will be deleted anyway at the end of benchmarking by the benchmark engine cleanup mechanism (https://review.openstack.org/#/c/61833/);
- Benchmark scenarios for Keystone have been added to Rally. Those include creating/deleting users, as well as just user creation (https://review.openstack.org/#/c/64329/). The Keystone cleanup mechanism has been implemented as well (https://review.openstack.org/#/c/64220/);
- Refactoring of both the source code (DummyProvider: https://review.openstack.org/#/c/62934/, Benchmark engine: https://review.openstack.org/#/c/64131/) and the unit tests (tests for Nova benchmark scenarios: https://review.openstack.org/#/c/64294/, removing duplicate tests for benchmark engine: https://review.openstack.org/#/c/64128/);
- Improving the CLI output: modifying the appearance of tables (https://review.openstack.org/#/c/62840/) and making logging somewhat more consistent (https://review.openstack.org/#/c/64446/).
This week we will start implementing the REST API for Rally, as well as a Python client for it. Besides, there is still much work to do on the new deploy engines and server providers mentioned in previous reports. We encourage you to take a look at new patches in Rally pending for review and to help us make Rally better.
Source code for Rally is hosted at GitHub: https://github.com/stackforge/rally
You can track the overall progress in Rally via Stackalytics: http://stackalytics.com/?release=all&metric=commits&project_type=All&module=rally
Open reviews for Rally: https://review.openstack.org/#/q/status:open+rally,n,z
Stay tuned.
Regards,
The Rally team
December 23, 2013
Hello stackers,
here is the update for the last week. From all the work we've completed this week we would like to highlight the following:
- A new execution type, namely the periodic execution type, has been added to the benchmark engine (https://review.openstack.org/#/c/57628/). The benchmark engine can now launch a given benchmark scenario once per specified period of time, thus creating a load which closely resembles real-world load patterns. For example, you can now ask Rally to launch the "boot and delete server" scenario 50 times, booting a new server every 3 minutes. This requires only slight changes in the input configuration file;
- We've started the work on replacing the old FUEL/OSTF-based cloud verification mechanism with a new one based on Tempest. While patches involving Tempest integration are still in progress, we've already gotten rid of all the FUEL/OSTF code in Rally (https://review.openstack.org/#/c/63653/), which has been both a great code cleanup for our project and has also reduced the number of requirements for Rally.
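To illustrate the periodic execution described above, here is roughly what the corresponding piece of an input task configuration could look like, written as a Python dict; the field names are approximations of the format of that era and are an assumption, not an authoritative sample:

```python
# Approximate shape of a periodic-execution task config; field names are
# an assumption for illustration, not an authoritative Rally sample.

task_config = {
    "NovaServers.boot_and_delete_server": [{
        "args": {"flavor_id": 1, "image_id": "<image-uuid>"},
        "execution": "periodic",
        "config": {
            "times": 50,    # launch the scenario 50 times ...
            "period": 180,  # ... starting a new run every 180 s (3 minutes)
        },
    }]
}

cfg = task_config["NovaServers.boot_and_delete_server"][0]["config"]
# wall-clock span between the first and the last launch, in minutes:
span_minutes = (cfg["times"] - 1) * cfg["period"] / 60
```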
Our current work is concentrated on:
- Adding another execution type to the benchmark engine: the stress execution type, which enables the user to easily specify a benchmark scenario with an automatically increasing load of active users (say, from 10 to 100 with step 5). This benchmark scenario will also automatically halt as soon as the cloud starts to fail too frequently: the corresponding maximal failure rate can also be set in the input configuration file (https://review.openstack.org/#/c/63055/);
- Further creation of new benchmark scenarios for Rally. The most interesting scenario during the past week was presumably the one that boots a server and then allows the user to run a custom script on that server via SSH (https://review.openstack.org/#/c/63138/).
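The stress idea, i.e. ramping up the load until the cloud fails too often, can be sketched as follows; the step function below is a stand-in for a real benchmark run, and all names are illustrative:

```python
# Ramping load from 10 to 100 concurrent users in steps of 5 and halting
# once a step's failure rate exceeds the configured maximum. run_step() is
# a stand-in for a real benchmark run; all names are illustrative.
import random

def run_step(concurrency, rng):
    """Pretend benchmark step whose failure odds grow with concurrency."""
    failures = sum(rng.random() < concurrency / 120.0 for _ in range(concurrency))
    return failures / concurrency

def stress_run(start=10, stop=100, step=5, max_failure_rate=0.5, seed=42):
    rng = random.Random(seed)
    completed = []
    for concurrency in range(start, stop + 1, step):
        rate = run_step(concurrency, rng)
        completed.append((concurrency, rate))
        if rate > max_failure_rate:
            break  # the cloud is failing too often: stop increasing the load
    return completed

steps = stress_run()
```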
We encourage you to take a look at new patches in Rally pending for review and to help us make Rally better.
Source code for Rally is hosted at GitHub: https://github.com/stackforge/rally
You can track the overall progress in Rally via Stackalytics: http://stackalytics.com/?release=all&metric=commits&project_type=All&module=rally
Open reviews for Rally: https://review.openstack.org/#/q/status:open+rally,n,z
Stay tuned.
Regards,
The Rally team
December 16, 2013
Hello stackers,
this week has been very fruitful for Rally and below we share with you some of our most important recent results:
- Deployment & Benchmark Workflows have now become completely separate things. While previously you created/specified an OpenStack deployment and the benchmarks to run on it all at once, Rally now requires you first to create a deployment and then reference it when launching benchmark scenarios (https://review.openstack.org/#/c/57455/). This, among other things, allows you to re-use a single deployment for many benchmarking tasks. You are highly encouraged to check out our updated "How-to" page, where the process of managing deployments and using them in benchmarking tasks is explained in more detail;
- Support for resource tracking has been added to the LXC server provider. This was the last server provider that did not support the resource tracking functionality implemented during the previous week; with this patch (https://review.openstack.org/#/c/60930/) we have finished integrating that functionality into the server providers;
- Adding input config validation to several deployment engines and server providers: now that we have implemented the common config validation procedure last week, processing the input configuration for deployment engines and server providers has mostly become a matter of writing correct JSON schemas that reflect engine-specific settings. Recently, we have merged such schemas for the devstack engine (https://review.openstack.org/#/c/57226/), the dummy engine (https://review.openstack.org/#/c/57239/) and the OpenStack server provider (https://review.openstack.org/#/c/60275/);
- New benchmark scenarios for the Nova API. We are proud to see that our community is starting to grow faster and that new interested people are coming in. A contribution from one of our newcomers (QianLin from Huawei) is a benchmark scenario that exercises the Nova server rescue/unrescue API (https://review.openstack.org/#/c/61688/).
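The config-validation idea mentioned above can be illustrated with a small, dependency-free sketch. The merged patches use JSON schemas; this hand-rolled version only demonstrates the principle, and the field names are illustrative rather than the actual devstack-engine schema.

```python
# Hand-rolled sketch of engine-specific input-config validation; Rally's
# patches express this as JSON schemas instead. Field names are illustrative.

DEVSTACK_SCHEMA = {
    "required": ["name", "localrc"],
    "types": {"name": str, "localrc": dict},
}

def validate_config(config, schema):
    """Return a list of validation errors; an empty list means valid."""
    errors = []
    for field in schema["required"]:
        if field not in config:
            errors.append("missing required field: %s" % field)
    for field, expected in schema["types"].items():
        if field in config and not isinstance(config[field], expected):
            errors.append("field '%s' must be of type %s"
                          % (field, expected.__name__))
    return errors

print(validate_config({"name": "DevstackEngine"}, DEVSTACK_SCHEMA))
# -> ['missing required field: localrc']
```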
The working plan for this week encompasses:
- Adding more diverse benchmark scenarios to Rally:
- Benchmark scenarios for Nova servers metadata (https://review.openstack.org/#/c/50588/);
- Benchmark scenarios for Cinder (https://review.openstack.org/#/c/61833/);
- Benchmark scenarios for Glance (https://review.openstack.org/#/c/60469/).
- Adding out-of-the-box support for stress testing: enhancing the benchmarking engine of Rally with the ability to stop automatically when too many benchmarks start to fail. This is often the case when a significant number of benchmark scenarios are launched on one cloud (i.e. stress testing). This will also require slight changes in the input config format;
- Further work on deploy engines: the high-priority work is to finish the implementation of FUEL (https://review.openstack.org/#/c/61963/) and multihost (https://review.openstack.org/#/c/57240/) deploy engines.
- Code refactoring, which this week is concentrated on unit tests: the goal is to move certain "Fake" classes commonly used for testing to a special utils-like module (https://review.openstack.org/#/c/62191/), to avoid code duplication when using these Fake objects (https://review.openstack.org/#/c/62193/) and also to ensure the correct decorator syntax is used for mocking, which is still not the case for many unit tests.
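The decorator-style mocking the refactoring aims for can be sketched as follows: instead of nesting with-blocks for every mock, patches are stacked as decorators and injected as test arguments (bottom-up). The wait_and_list() helper is illustrative, not real Rally code, and modern stdlib unittest.mock stands in for the mock library used at the time.

```python
# Sketch of decorator-syntax mocking; wait_and_list() is a hypothetical
# helper standing in for code that polls an OpenStack client.
import time
import unittest
from unittest import mock

def wait_and_list(client, pause=1):
    time.sleep(pause)              # e.g. a polling delay
    return client.servers.list()

class WaitAndListTest(unittest.TestCase):
    # One decorator per patch, instead of a pyramid of with-blocks.
    @mock.patch("time.sleep")
    def test_wait_and_list(self, mock_sleep):
        client = mock.Mock()
        client.servers.list.return_value = ["srv-1"]
        self.assertEqual(wait_and_list(client), ["srv-1"])
        mock_sleep.assert_called_once_with(1)
```

Run with `python -m unittest` in the usual way.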
We encourage you to take a look at the new patches in Rally pending for review and to help us make Rally better.
Source code for Rally is hosted at GitHub: https://github.com/stackforge/rally
You can track the overall progress in Rally via Stackalytics: http://stackalytics.com/?release=all&metric=commits&project_type=All&module=rally
Open reviews for Rally: https://review.openstack.org/#/q/status:open+rally,n,z
Stay tuned.
Regards,
The Rally team
December 9, 2013
Hello stackers,
There has been much activity during the past week in Rally, and several significant patches have been merged recently:
- Splitting the Deployment & Benchmark workflows is coming to an end!
The last blocker was that resources allocated by server providers were stored in the memory of the Rally process instead of permanent storage (e.g. the DB). During this week we added a new Resource table to the DB and switched almost all server providers (except LXC) to use the DB instead of in-memory storage. Once we switch the LXC provider, we will be able to merge the final patch addressing the splitting task (https://review.openstack.org/#/c/57455/).
- Generic cloud cleanup after benchmark scenarios.
Performing a generic cleanup after launching benchmark scenarios is essential for guaranteeing that the cloud stays clean after benchmarking and, besides, frees benchmark scenario writers from worrying about deleting the resources they create in init() methods or in specific benchmark procedures (https://review.openstack.org/#/c/55695/). "Generic" means that Rally should free all possible kinds of allocated resources: servers, images, etc.
- Code refactoring:
- Fixing a structure issue in the folder with Rally configuration samples (https://review.openstack.org/#/c/59259/);
- Renaming the ServerDTO -> Server entity (used by server providers) to improve code readability (https://review.openstack.org/#/c/59749/).
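The in-memory-to-DB switch described above can be sketched with stdlib sqlite3: each resource a server provider allocates is persisted to a table instead of living only in process memory, so it survives across Rally invocations. Rally itself uses SQLAlchemy models; the table and column names here are illustrative.

```python
# Toy persistence layer for provider-allocated resources; illustrative
# names only, not Rally's actual DB schema.
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE resources (
                    id INTEGER PRIMARY KEY,
                    provider_name TEXT NOT NULL,
                    info TEXT NOT NULL)""")

def save_resource(provider_name, info):
    """Persist one allocated resource for a given server provider."""
    conn.execute("INSERT INTO resources (provider_name, info) VALUES (?, ?)",
                 (provider_name, json.dumps(info)))
    conn.commit()

def list_resources(provider_name):
    """Load all resources previously allocated by this provider."""
    rows = conn.execute("SELECT info FROM resources WHERE provider_name = ?",
                        (provider_name,))
    return [json.loads(row[0]) for row in rows]

save_resource("virsh", {"host": "10.0.0.5", "vm": "rally-vm-1"})
print(list_resources("virsh"))
# -> [{'host': '10.0.0.5', 'vm': 'rally-vm-1'}]
```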
A wide variety of new contributions to Rally is still under development and pending for review:
- Enriching the benchmark engine with a mechanism for cloud initialization before launching benchmark scenarios. Support for init() methods in benchmark scenario classes was actually already implemented in Rally 0.1, but it has been broken since the creation of multiple temporary tenants/users for benchmarking was introduced to Rally (because resources - servers, floating IPs, etc. - created in init() no longer belonged to the appropriate temporary tenants/users and thus could not be used in benchmark scenarios). There is now a patch (https://review.openstack.org/#/c/59782/) that fixes this issue by calling init() once for each temporary user, thus creating the appropriate resources (servers, floating IPs, etc.) for every temporary OpenStack user that may be involved in benchmarking. This patch also entails a couple of smaller patches that improve the performance of the OpenStack Python clients (https://review.openstack.org/#/c/59780/, https://review.openstack.org/#/c/59781/) and thus optimize the procedure of creating resources for temporary users in init() methods;
- Glance benchmark scenarios: while all the previous benchmark scenarios focused on capabilities provided by the Nova API, this patch makes the first contribution toward benchmarking other core OpenStack projects. The patch (https://review.openstack.org/#/c/60469/) currently implements 2 basic scenarios: creating/deleting images, and creating images and using them to boot servers;
- Code refactoring: there are currently two patches dedicated to deploy engine and server provider code unification; both implement logic for input configuration file validation common to all the deploy engines and server providers, respectively (https://review.openstack.org/#/c/57222/, https://review.openstack.org/#/c/60030/). These patches are followed by ones concentrated on the deploy engine- and server provider-specific parts of config validation (https://review.openstack.org/#/c/57239/, https://review.openstack.org/#/c/57226/, https://review.openstack.org/#/c/60275/).
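The per-user init() fix can be pictured with a toy sketch: the context-building step runs once for each temporary user, so the resources it "creates" belong to that user and are visible to the benchmark body. All names here are illustrative, not Rally's actual scenario API.

```python
# Toy model of per-user scenario initialization; illustrative only.
class UserScenario:
    def __init__(self, user):
        self.user = user

    def init(self):
        # Stand-in for creating real per-user resources (servers,
        # floating IPs, ...) via that user's OpenStack clients.
        return {"server": "server-of-%s" % self.user}

    def benchmark(self, context):
        # The benchmark body relies on the per-user context from init().
        return "listing %s" % context["server"]

# init() is called once for each temporary user:
contexts = {user: UserScenario(user).init() for user in ("user-1", "user-2")}
print(UserScenario("user-1").benchmark(contexts["user-1"]))
# -> listing server-of-user-1
```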
We encourage you to take a look at the new patches in Rally pending for review and to help us make Rally better.
Source code for Rally is hosted at GitHub: https://github.com/stackforge/rally
You can track the overall progress in Rally via Stackalytics: http://stackalytics.com/?release=all&metric=commits&project_type=All&module=rally
Open reviews for Rally: https://review.openstack.org/#/q/status:open+rally,n,z
Stay tuned.
Regards,
The Rally team
December 2, 2013
Hello stackers,
below you will find the latest review of our activities in Rally for the past week.
Our achievements for the end of November comprise:
- Numerous changes in the benchmark engine, the most important among which are:
- Rally is now able not only to perform a specified number of benchmark scenario launches, but also to create a continuous load on the cloud by running any scenario for a given period of time. For example, you can now boot and delete servers in the cloud continuously from a number of temporary users, say, for 10 minutes, thus simulating a stress load on the cloud. To do so, the only thing you should change in your configuration file is the "times" field for the benchmark scenario you are going to launch, which should now be replaced with a "duration" field set to the number of minutes the benchmark is expected to run (https://review.openstack.org/#/c/56036/);
- Access to OpenStack clients with administrator permissions is now enabled for all scenarios through the admin_clients() method of the base Scenario class. Before this update, this class provided only the clients() method, which returned a reference to a non-admin OpenStack client. This, however, turned out to be insufficient for the keystone-based benchmark scenarios that are to come in future releases (https://review.openstack.org/#/c/58381/);
- A bugfix for the init() methods of benchmark scenarios, which now enables benchmark scenario writers to pass through the context dictionary (the dictionary that init() returns) not only primitive objects like strings or numbers but also complex ones like references to specially prepared servers or floating IPs (https://review.openstack.org/#/c/55465/).
- The work on separating the deployment and task entities mentioned in previous updates is now close to a successful conclusion. The main results here include:
- The restructuring of the orchestrator workflow, which now makes use of the deployment make_deploy() and make_cleanup() functions (https://review.openstack.org/#/c/57057/);
- Adding CLI commands for the deployment entity: create, destroy, list, recreate, etc. (https://review.openstack.org/#/c/56226/);
- A server provider for OpenStack: another ServerProvider class that wraps the functionality of python-novaclient behind the default ServerProvider interface (the create_vms() and destroy_vms() methods). Along with the lxc and virsh server providers (already present in the system), it constitutes the essential basis for working with different virtualization technologies (https://review.openstack.org/#/c/48811);
- The first contribution to data processing and visualization in Rally: a new CLI command for tasks has been added, namely "plot aggregated", which draws plots illustrating the cloud's performance on one of the finished benchmark tasks. The CLI command requires the user to indicate the parameter for which the plots will be drawn. For example, if one specifies active_users as the aggregating parameter, Rally will draw a plot that shows how the number of active users making requests to the cloud affects the runtime of benchmark scenarios. The code uses the matplotlib library to draw the plots (https://review.openstack.org/#/c/52712/).
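The "times" to "duration" change described above amounts to swapping one field in a scenario's config entry. The surrounding structure shown here is illustrative (the scenario name and argument keys are hypothetical, not copied from the patch):

```python
# Before: run the scenario a fixed number of times.
old_entry = {
    "args": {"flavor_id": 1},
    "times": 100,
    "active_users": 10,
}

# After: run the scenario continuously for a number of minutes.
new_entry = {
    "args": {"flavor_id": 1},
    "duration": 10,        # minutes of continuous load
    "active_users": 10,
}
```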
This week, our work will be concentrated on the following:
- Further enhancements in the benchmark engine: adding facilities for periodic benchmark execution (https://review.openstack.org/#/c/57628/) and enabling the init() methods of benchmark scenarios to allocate resources like servers or floating IPs using OpenStack clients for temporary users (so that these resources can be used further in the bodies of benchmark methods). We are also about to start the work on parallel benchmark execution, which will enable Rally to simulate "noise" load on the cloud;
- Implementing LXC and multihost OpenStack deployment facilities (https://review.openstack.org/#/c/56222/, https://review.openstack.org/#/c/57240/). Another improvement related to Rally deployment engines is the effort to unify the input configuration file validation for all the engine types (https://review.openstack.org/#/c/57222/);
- Finishing the work on separating deployments and tasks (https://review.openstack.org/#/c/57455).
We encourage you to take a look at the new patches in Rally pending for review and to help us make Rally better.
Source code for Rally is hosted at GitHub: https://github.com/stackforge/rally
You can track the overall progress in Rally via Stackalytics: http://stackalytics.com/?release=all&metric=commits&project_type=All&module=rally
Open reviews for Rally: https://review.openstack.org/#/q/status:open+rally,n,z
Stay tuned.
Regards,
The Rally team
November 25, 2013
Hello stackers,
here is the second report on our activities in Rally development for the past week.
The main results that have been recently merged with master are as follows:
- Changes in the benchmark engine: we have significantly restructured the format of the input benchmark config (https://review.openstack.org/#/c/56035/). The changes make it more transparent to the end-user as well as more flexible. This will enable us to implement new features in the benchmark engine like running tests periodically or for a given amount of time. We have also refactored the test code related to benchmark scenarios by replacing ugly-looking nested with-blocks for mocks with a more readable decorator syntax (https://review.openstack.org/#/c/57732/);
- Further work on splitting the system logic between the two basic entities, namely the deployment and the benchmark task. While we still have the legacy combined config that contains information both on the deployment and on the benchmarks, we have come close to the point where we can completely split everything related to these two entities. To be more precise, during the last week we have completed:
- the integration of the deployment entity with DeploymentEngine classes (https://review.openstack.org/#/c/56481/);
- code refactoring for Task and Deployment classes: making them similarly structured (https://review.openstack.org/#/c/56727) and moving them to a special rally.objects module (https://review.openstack.org/#/c/56480/);
- test coverage improvement for the Task class (https://review.openstack.org/#/c/57055/) and for the Orchestrator API (https://review.openstack.org/#/c/57054/).
- Minor updates related to deploy engines and server providers: better support for Debian/Ubuntu in the DevStack engine (https://review.openstack.org/#/c/57181/) and removing legacy code for SSH support (https://review.openstack.org/#/c/57266/).
Our plan for the current week comprises:
- Finishing the work on separating deployments and benchmark tasks in Rally (https://review.openstack.org/#/c/57455/, https://review.openstack.org/#/c/56226/ etc.);
- Adding new features to the benchmark engine, like executing benchmarks periodically and for a user-specified period of time (https://review.openstack.org/#/c/57628/, https://review.openstack.org/#/c/56036/);
- Implementing LXC and multihost OpenStack deployment facilities (https://review.openstack.org/#/c/56222/, https://review.openstack.org/#/c/57240/).
We encourage you to take a look at the new patches in Rally pending for review and to help us make Rally better.
Source code for Rally is hosted at GitHub: https://github.com/stackforge/rally
You can track the overall progress in Rally via Stackalytics: http://stackalytics.com/?release=all&metric=commits&project_type=All&module=rally
Open reviews for Rally: https://review.openstack.org/#/q/status:open+rally,n,z
Stay tuned.
Regards,
The Rally team
November 18, 2013
Hello stackers,
here is the first issue of our weekly update notes on Rally, a Benchmark-as-a-Service project for OpenStack. Once a week we are going to post a few remarks on what we have done and what we plan to implement in Rally during the next week.
During the past week we have been focusing our efforts on two main aspects of Rally development:
- Splitting the Rally workflow into 2 parts: the OpenStack deployment part and the benchmark task running part. Both were previously treated by the system as a single process configured once by the end user. Separating deployment from benchmark tasks, however, allows one to reuse existing deployments. The current results here are:
- Deployment model for SQLAlchemy (https://review.openstack.org/#/c/56185/);
- A separate wrapper class for the deployment model (https://review.openstack.org/#/c/56267/).
- Improving the LXC server provider: code refactoring and support of some useful Btrfs features. This will be needed soon for multihost OpenStack deployment implementation. (https://review.openstack.org/#/c/55534/, https://review.openstack.org/#/c/56221/)
We have also recently received several e-mails notifying us of a possible issue in the soft/hard server reboot benchmark scenario. We would like to thank all of you who reported the problem. We will try to fix it as soon as possible.
The Rally roadmap for the next week goes as follows:
- Continue the work on separating OpenStack deployments from benchmark tasks: introduce the necessary CLI commands, integrate the Deployment class with deploy engines, rewrite the orchestrator part to support the separated deployments and benchmarks;
- Implement multihost OpenStack deployment engine using LXC;
- Add two new capabilities to the benchmark runner:
- Benchmark launching for a given period of time (rather than a strict number of times);
- Launching several benchmarks with configured intervals.
- Improve the benchmark config format to make it both more flexible and more clear for the end user;
- Implement generic cleanup for our benchmark scenarios;
- Work on automated output data processing and drawing plots for benchmark results.
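The "launch for a given period of time" capability in the roadmap can be sketched as a simple loop that re-runs a scenario until a deadline passes. The function name and return shape here are illustrative, not Rally's eventual implementation.

```python
# Minimal sketch of duration-based benchmark launching; illustrative only.
import time

def run_for_duration(scenario, duration_sec):
    """Invoke scenario() repeatedly until duration_sec elapses; return
    the list of per-iteration runtimes in seconds."""
    runtimes = []
    deadline = time.monotonic() + duration_sec
    while time.monotonic() < deadline:
        start = time.monotonic()
        scenario()
        runtimes.append(time.monotonic() - start)
    return runtimes

# E.g. a scenario that takes ~10 ms, run for 0.1 s of wall-clock time:
runtimes = run_for_duration(lambda: time.sleep(0.01), duration_sec=0.1)
print(len(runtimes), "iterations")
```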
Several patches addressing the above tasks are already available on Gerrit code review. You are welcome to take a look at them.
Source code for Rally is hosted at GitHub: https://github.com/stackforge/rally
Open reviews for Rally: https://review.openstack.org/#/q/status:open+rally,n,z
Stay tuned.
Regards,
The Rally team