== What is Rally? ==
 
 
 
If you are here, you are probably familiar with OpenStack and you also know that it's a really huge ecosystem of cooperating services. When something fails, performs slowly or doesn't scale, it's really hard to answer the questions of ''"what"'', ''"why"'' and ''"where"'' it happened. Another reason why you could be here is that you would like to build an OpenStack CI/CD system that will allow you to continuously improve the SLA, performance and stability of OpenStack.
 
 
 
The OpenStack QA team mostly works on CI/CD that ensures that new patches don't break some specific single-node installation of OpenStack. On the other hand, it's clear that such CI/CD is only an indication and does not cover all cases (e.g. if a cloud works well on a single-node installation, it doesn't mean that it will continue to do so on a 1k-server installation under high load). Rally aims to fix this and help us answer the question "How does OpenStack work at scale?". To make this possible, we are going to automate and unify all the steps that are required for benchmarking OpenStack at scale: multi-node OS deployment, verification, benchmarking & profiling.
 
 
 
<center>[[File:Rally-Actions.png|850px]]</center>
 
 
 
* '''''Deploy engine''''' is not ''yet another deployer of OpenStack'', but just a pluggable mechanism that allows you to unify & simplify work with different deployers like DevStack, Fuel or Anvil on the hardware/VMs that you have.
 
* '''''Verification''''' - ''(work in progress)'' uses Tempest to verify the functionality of a deployed OpenStack cloud. In the future, Rally will support other OpenStack verifiers.
 
* '''''Benchmark engine''''' - allows you to create parameterized load on the cloud, based on a big repository of benchmarks (the task sketch below gives a concrete example).
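
To make "parameterized load" more concrete, below is a minimal sketch of a benchmark task, built as a Python dict and dumped to the JSON format that Rally task files use. The scenario name and the ''runner''/''context'' fields are modeled on the public task samples (linked further down this page) and should be treated as illustrative rather than authoritative.

<pre>
# Minimal sketch of a Rally-style benchmark task, expressed as a Python dict.
# Field names are modeled on the public task samples; consult those samples
# for the authoritative format.
import json

task = {
    "NovaServers.boot_and_delete_server": [
        {
            # Scenario parameters: which flavor/image to boot.
            "args": {
                "flavor": {"name": "m1.tiny"},
                "image": {"name": "cirros-0.3.1-x86_64-uec"},
            },
            # Load generation: 200 iterations, 10 running concurrently.
            "runner": {"type": "constant", "times": 200, "concurrency": 10},
            # Environment: temporary users/tenants created for the run.
            "context": {"users": {"tenants": 2, "users_per_tenant": 3}},
        }
    ]
}

print(json.dumps(task, indent=4))
</pre>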
 
 
 
For more information about how it works, take a look at [[Rally#Architecture|Rally Architecture]].
 
 
 
 
 
== Use Cases ==
 
 
Before diving deep into the Rally architecture, let's take a look at 3 major high-level Rally use cases:
 
 
 
<center>[[File:Rally-UseCases.png|850px]] </center>
 
 
 
 
 
Typical cases where Rally aims to help are:
 
 
 
# Automate measuring & profiling focused on how new code changes affect the OS performance;
 
# Use the Rally profiler to detect scaling & performance issues;
 
# Investigate how different deployments affect the OS performance:
 
#* Find the set of suitable OpenStack deployment architectures;
 
#* Create deployment specifications for different loads (amount of controllers, swift nodes, etc.);
 
# Automate the search for hardware best suited for a particular OpenStack cloud;
 
# Automate the production cloud specification generation:
 
#* Determine terminal loads for basic cloud operations: VM start & stop, Block Device create/destroy & various OpenStack API methods;
 
#* Check the performance of basic cloud operations under different loads.
 
 
 
 
 
== Architecture ==
 
 
 
OpenStack projects are usually delivered as-a-Service, so Rally provides this approach as well as a CLI-driven approach that does not require a daemon:
 
# Rally as-a-Service: run Rally as a set of daemons that present a Web UI ''(work in progress)'', so one RaaS installation can be used by a whole team.
 
# Rally as-an-App: Rally as just a lightweight CLI app (without any daemons), which makes it simpler to develop & much more portable.
 
 
 
 
 
How is this possible? Take a look at the diagram below:
 
<center>
 
[[File:Rally_Architecture.png|750px]]
 
</center>
 
 
 
So what is behind Rally?
 
 
 
 
 
=== Rally Components ===
 
 
 
Rally consists of 4 main components:
 
 
 
# '''''Server Providers''''' - provide servers (virtual servers) with SSH access, in one L3 network.
 
# '''''Deploy Engines''''' - deploy an OpenStack cloud on servers that are provided by '''''Server Providers'''''.
 
# '''''Verification''''' - component that runs Tempest (or another specific set of tests) against a deployed cloud, collects the results & presents them in a human-readable form.
 
# '''''Benchmark engine''''' - allows you to write parameterized benchmark scenarios & run them against the cloud (see the conceptual sketch right after this list).
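
To make the last two components less abstract, here is a self-contained conceptual sketch in plain Python (deliberately ''not'' Rally's actual plugin API): a scenario is just a parameterized callable, and a runner repeats it under a given concurrency while recording per-iteration timings.

<pre>
# Conceptual sketch only: a "scenario" is a parameterized callable and a
# "runner" executes it N times with a fixed concurrency, collecting timings.
# This is plain Python, not Rally's plugin API.
import time
from concurrent.futures import ThreadPoolExecutor


def boot_and_delete_server(flavor, image):
    """Hypothetical scenario body: boot a VM, then delete it."""
    time.sleep(0.01)  # stand-in for the real OpenStack API calls


def constant_runner(scenario, kwargs, times=200, concurrency=10):
    """Run `scenario` `times` times with `concurrency` parallel workers."""
    def one_iteration(_):
        start = time.time()
        scenario(**kwargs)
        return time.time() - start

    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        durations = list(pool.map(one_iteration, range(times)))
    return durations


durations = constant_runner(boot_and_delete_server,
                            {"flavor": "m1.tiny", "image": "cirros"},
                            times=20, concurrency=5)
print("avg iteration time: %.3fs" % (sum(durations) / len(durations)))
</pre>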
 
 
 
 
 
But '''why''' does Rally need these components?<br />
 
It becomes really clear if we try to imagine ''how we would benchmark a cloud at scale if ...''<br />
 
<center>
 
[[File:Rally QA.png|750px|center]]
 
</center>
 
 
 
 
 
== Rally in action ==
 
 
 
=== How amqp_rpc_single_reply_queue affects performance ===
 
 
 
To show Rally's capabilities and potential, we used the ''NovaServers.boot_and_destroy'' scenario to see how the ''amqp_rpc_single_reply_queue'' option affects VM boot-up time. Some time ago it was [https://docs.google.com/file/d/0B-droFdkDaVhVzhsN3RKRlFLODQ/edit?pli=1 shown] that cloud performance can be boosted by turning it on, so naturally we decided to check this result. For this test we issued requests for booting up and deleting VMs for different numbers of concurrent users, ranging from 1 to 30, with and without this option set. For each group of users a total of 200 requests was issued. The averaged time per request is shown below:
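
The shape of this experiment can be sketched as a series of Rally-style tasks, one per concurrency level. The field names follow the public task samples, the concrete concurrency levels below are only a sample of the 1..30 range, and the ''amqp_rpc_single_reply_queue'' option itself is toggled in the Nova configuration outside of Rally.

<pre>
# Sketch: generate one task file per concurrency level for the scenario
# named above. Field names follow the public task samples and are
# illustrative, not authoritative.
import json


def make_task(concurrency):
    return {
        "NovaServers.boot_and_destroy": [{
            "args": {"flavor": {"name": "m1.tiny"},
                     "image": {"name": "cirros-0.3.1-x86_64-uec"}},
            # 200 requests per group of concurrent users.
            "runner": {"type": "constant",
                       "times": 200,
                       "concurrency": concurrency},
        }]
    }


# A handful of concurrency levels from the 1..30 range used in the test.
for concurrency in (1, 5, 10, 15, 20, 25, 30):
    with open("boot-destroy-%02d-users.json" % concurrency, "w") as f:
        json.dump(make_task(concurrency), f, indent=4)
</pre>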
 
 
 
<center>[[File:Amqp rpc single reply queue.png| amqp_rpc_single_replya_queue]]</center>
 
 
 
So apparently this option affects cloud performance, but not in the way it was thought before.
 
 
 
 
 
=== Performance of Nova instance list command ===
 
 
 
'''Context:'''
 
1 OpenStack user
 
 
 
'''Scenario:'''
 
1) boot a VM as this user

2) list VMs
 
 
 
'''Runner:'''
 
Repeat 200 times.
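
Put together as a task, this setup might look like the following sketch; the scenario name ''NovaServers.boot_and_list_server'' is an assumption on our part, and the structure mirrors the earlier task sketches.

<pre>
# Sketch of the "boot one VM, then list VMs" benchmark: a single OpenStack
# user, 200 serial iterations. The scenario name is a guess, not taken from
# the page above.
import json

task = {
    "NovaServers.boot_and_list_server": [{
        "args": {"flavor": {"name": "m1.tiny"},
                 "image": {"name": "cirros-0.3.1-x86_64-uec"}},
        "runner": {"type": "constant", "times": 200, "concurrency": 1},
        "context": {"users": {"tenants": 1, "users_per_tenant": 1}},
    }]
}

print(json.dumps(task, indent=4))
</pre>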
 
 
 
As a result, on every next iteration the user has more and more VMs, and the performance of the VM list operation degrades quite fast:
 
<center> [[File:Rally_VM_list.png| nova vm list performance]]</center>
 
 
 
== How To ==
 
 
 
There are only 3 steps that should be of interest to you:
 
 
 
<big>
 
# [[Rally/installation|Install Rally]]
 
# [[Rally/HowTo|Use Rally]]
 
# [[Rally/Develop|Improve Rally]]
 
</big>
 
 
 
 
 
== Weekly updates ==
 
 
 
'''Each week we write up on a special [[Rally/Updates|weekly updates page]] what sort of things have been accomplished in Rally during the past week and what are our plans for the next one. Below you can find the most recent report.'''
 
 
 
The past week has resulted in the further refinement of different parts of the Rally code, of its CLI, as well as of the configuration file formats. Several of these changes have been caused by the ongoing integration of '''''contexts''''' into Rally (let us remind you that the notion of ''contexts'' is used to define different environments in which benchmark scenarios can be launched, e.g. environments with temporarily generated OpenStack users and/or with generic cleanup facilities). Some interesting changes include:
 
* An enormously important overall [https://review.openstack.org/#/c/80151/ refactoring patch] that brings different optimizations to the '''''config validation''''' step and to '''''the CLI output/logging''''', and also ('''NB!''') changes '''''the input task configuration file format''''' (take a look at the updated [https://github.com/stackforge/rally/tree/master/doc/samples/tasks task configuration samples]);
 
* An important step on the way to a complete '''''Rally-Tempest integration''''' is adding the ability to [https://review.openstack.org/#/c/79664/ launch Tempest tests without ''"sudo"''];
 
* [https://review.openstack.org/#/c/79372 OpenStack clients helper module refactoring], which focuses on reimplementing the "lazy" client handles in a more elegant way and making them directly accessible, without auxiliary methods like ''rally.benchmark.utils.create_openstack_clients()'', which made the code unreasonably complicated;
 
* We've also added several missing unit tests: the ones for [https://review.openstack.org/#/c/80772/ the "deployment list" command] as well as for [https://review.openstack.org/#/c/80342/ the Authenticate benchmark scenario group].
 
 
 
 
 
This week, the directions of our efforts are going to be mostly defined by the changes from the refactoring patch mentioned earlier. A lot of stuff has to be rebased, while a couple of deferred important patches (including those implementing support for '''''pre-created users in Rally''''' or introducing '''''the new "stress" scenario running strategy''''') will be brought to life again.
 
 
 
 
 
We encourage you to take a look at the new patches in Rally pending review and to help us make Rally better.
 
 
 
Source code for Rally is hosted at GitHub: https://github.com/stackforge/rally<br />
 
You can track the overall progress in Rally via Stackalytics: http://stackalytics.com/?release=icehouse&metric=commits&project_type=all&module=rally <br/>
 
Open reviews for Rally: https://review.openstack.org/#/q/status:open+rally,n,z
 
 
 
 
 
Stay tuned.
 
 
 
 
 
Regards,<br />
 
The Rally team
 
 
 
 
 
'''[[Rally/Updates|Weekly Updates Archives]] '''
 
 
 
== Rally in the World ==
 
 
 
{| class="wikitable sortable"
 
|-
 
! Date !! Authors !! Title !! Location
 
|-
 
| 01/Mar/2014 ||  <ol><li> Bangalore C.B. Ananth (cbpadman at cisco.com)</li><li> Rahul Upadhyaya (rahuupad at cisco.com)</li></ol> ||  [http://www.slideshare.net/sliderakrup/rally-baa-s-os-meetup-31864829 Benchmark as a Service OpenStack-Rally] || OpenStack Meetup Bangalore
 
|-
 
| 28/Feb/2014 || <ol><li> Peeyush Gupta </li></ol> || [http://www.thegeekyway.com/benchmarking-openstack-rally/ Benchmarking OpenStack With Rally] || http://www.thegeekyway.com/
 
|-
 
| 26/Feb/2014 || <ol><li> Oleg Gelbukh </li></ol> || [http://www.mirantis.com/blog/benchmarking-openstack-megascale-tested-mirantis-openstack-softlayer/ Benchmarking OpenStack at megascale: How we tested Mirantis OpenStack at SoftLayer] || http://www.mirantis.com/blog/
 
|-
 
| 07/Nov/2013 || <ol><li> Boris Pavlovic </li></ol> ||  [http://www.slideshare.net/mirantis/rally-benchmarkingatscale Benchmark OpenStack at Scale] || OpenStack Summit Hong Kong
 
|}
 
 
 
 
 
== Join Rally team ==
 
 
 
==== Open and assigned tasks ====
 
https://trello.com/b/DoD8aeZy/rally
 
 
 
To get an account, ping Boris on IRC (boris-42) or email me (boris(at)pavlovic.me)
 
 
 
==== IRC chat ====
 
server:  '''freenode.net'''
 
 
 
channel: '''#openstack-rally'''
 
 
 
==== Weekly Meetings ====
 
The Rally project team holds weekly meetings on Tuesdays at [http://www.timeanddate.com/worldclock/fixedtime.html?msg=OpenStack+Rally+meeting&iso=20131029T21&p1=166&am=30 1700 UTC] in IRC, at the <code><nowiki>#openstack-meeting</nowiki></code> channel.
 
 
 
==== Source ====
 
https://github.com/stackforge/rally
 
 
 
==== Project space ====
 
http://launchpad.net/rally
 
 
 
==== Blueprints ====
 
active:    http://blueprints.launchpad.net/rally
 
 
 
v1 base: https://blueprints.launchpad.net/rally/+spec/init
 
 
 
==== Bugs ====
 
https://bugs.launchpad.net/rally
 
 
 
==== Pending Code Reviews ====
 
https://review.openstack.org/#/q/status:open+rally,n,z
 
