== Introduction ==
 
Rally is a Benchmark-as-a-Service project for OpenStack.
 
  
Rally is intended to provide the community with a benchmarking tool capable of performing '''specific''', '''complex''' and '''reproducible''' tests for '''real deployment''' scenarios.
 
 
[[File:rally_flow_diagram.png|650px|right]]
 
 
 
In the OpenStack ecosystem there are currently several tools that help with benchmarking an OpenStack deployment. To name a few, there are ''DevStack'' and ''FUEL'', which are intended for deploying and managing OpenStack clouds, the ''Tempest testing framework'', which validates OpenStack APIs, tracing facilities like ''Tomograph'' with ''Zipkin'', and others. The challenge, however, is to combine all these tools on a reproducible basis. That can be a rather difficult task, since the number of compute nodes in a practical deployment can be very large, and since one may want to use many different deployment strategies that pursue different goals (e.g., while benchmarking the Nova Scheduler, one usually does not care about virtualization details but is more concerned with the infrastructure topology, while in other cases it may be the virtualization technology that matters). Compiling a set of already existing benchmarking facilities into one project, making it flexible to user requirements and ensuring the reproducibility of test results, is exactly what Rally does.
 
 
 
 
 
 
 
== Use Cases ==
 
 
 
# Investigate how different deployments affect OS performance:
 
#* Find the set of good OpenStack deployment architectures,
 
#* Create deployment specification for different loads (amount of controllers, swift nodes, etc.).
 
# Automate the search for hardware best suited for a particular OpenStack cloud.
 
# Automate production cloud specification generation:
 
#* Determine the maximum (terminal) loads for basic cloud operations: VM start & stop, Block Device create/destroy, and various OpenStack API methods.
 
#* Check performance of basic cloud operations in case of different loads.
 
# Automate the collection of measurement & profiling data about how new code changes affect OS performance.
 
# Use the Rally profiler to detect scaling & performance issues. ''E.g., when we delete 3 VMs with one request, they are deleted one by one because of a [http://37.58.79.43:8080/traces/0011f252c9d98e31 DB lock on the quotas table].''
 
 
 
 
 
 
 
== Architecture ==
 
 
 
Rally is split into 4 main components:
 
 
 
# Deploy Engine, which is responsible for processing and deploying VM images (using DevStack or FUEL, according to the user's preferences). The engine can do one of the following:
 
#* deploying OpenStack on already existing VMs;
 
#* starting VMs from a VM image with pre-installed OS and OpenStack;
 
#* deploying multiple VMs from a VM image, each of which runs an OpenStack compute node.
 
# Server Provider, which provides (virtual) servers to deploy OpenStack on.
 
# Benchmarking Tool, which carries out the benchmarking process in several stages:
 
#* runs Tempest tests, reduced to a 5-minute run (to save usually expensive compute time);
 
#* runs a set of benchmark scenarios (using the Rally testing framework; see the sketch after this list);
 
#* collects all the test results and processes them with the Zipkin tracer;
 
#* puts together a benchmarking report and stores it on the machine Rally was launched on.
 
# Orchestrator, which is the central component of the system. It uses the Deploy Engine to run control and compute nodes and to launch an OpenStack distribution and, after that, calls the Benchmarking Tool to start the benchmarking process.
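
As a rough illustration of how the Benchmarking Tool is driven in practice, the sketch below builds a minimal task configuration for the ''NovaServers.boot_and_destroy'' scenario mentioned later on this page and hands it to the Rally command line. The JSON task schema, the ''rally task start'' sub-command, and the flavor/image names are assumptions based on the documentation at https://rally.readthedocs.io, not something this page prescribes.

<pre>
#!/usr/bin/env python
"""Illustrative sketch only: build a minimal Rally task file and run it.

Assumptions (not taken from this page): the JSON task schema with
"args"/"runner" keys and the "rally task start" sub-command; the flavor
and image names are placeholders for whatever exists in the target cloud.
"""
import json
import subprocess
import tempfile

task = {
    # Scenario name as used in the "Rally in action" section below.
    "NovaServers.boot_and_destroy": [
        {
            "args": {
                "flavor": {"name": "m1.tiny"},   # placeholder flavor
                "image": {"name": "cirros"},     # placeholder image
            },
            # 200 boot/destroy requests, 10 of them running concurrently.
            "runner": {"type": "constant", "times": 200, "concurrency": 10},
        }
    ]
}

with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(task, f, indent=4)
    task_file = f.name

# Hand the task file over to the Rally CLI (sub-command assumed above).
subprocess.check_call(["rally", "task", "start", task_file])
</pre>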
 
 
 
 
 
To dive deeper into Rally architecture, see <big>[[Rally/ArchitectureForDevelopers|Rally architecture for developers]]</big>.
 
 
 
== Rally in action ==
 
 
 
=== How amqp_rpc_single_reply_queue affects performance ===
 
 
 
To show Rally's capabilities and potential, we used the ''NovaServers.boot_and_destroy'' scenario to see how the ''amqp_rpc_single_reply_queue'' option affects VM boot-up time. Some time ago it was [https://docs.google.com/file/d/0B-droFdkDaVhVzhsN3RKRlFLODQ/edit?pli=1 shown] that cloud performance can be boosted by turning this option on, so naturally we decided to check this result. For this test we issued requests to boot and delete VMs for different numbers of concurrent users, ranging from 1 to 30, with and without this option set. For each group of users a total of 200 requests was issued. The averaged time per request is shown below:
 
 
 
[[File:Amqp rpc single reply queue.png|amqp_rpc_single_reply_queue]]
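
As a sketch of the kind of post-processing behind the plot above, the snippet below averages per-request durations for each concurrency level and for both settings of the option. The input structure (a mapping from concurrency level to a list of request durations in seconds) is made up for the example; real numbers would come from Rally's collected results.

<pre>
from statistics import mean

# Hypothetical raw data: per-request durations (seconds), keyed first by
# whether amqp_rpc_single_reply_queue was enabled, then by the number of
# concurrent users (1..30 in the real experiment, two levels shown here).
durations = {
    "option on":  {1: [4.1, 4.3, 4.0], 10: [6.8, 7.1, 6.9]},
    "option off": {1: [4.2, 4.4, 4.1], 10: [7.9, 8.2, 8.0]},
}

for setting, groups in durations.items():
    for concurrency, times in sorted(groups.items()):
        # In the real experiment each group contains 200 requests.
        print("%s, %2d concurrent users: average %.2f s over %d requests"
              % (setting, concurrency, mean(times), len(times)))
</pre>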
 
 
 
So this option apparently does affect cloud performance, but not in the way it was previously thought to.
 
 
 
== How To ==
 
 
 
<big>
 
# [[Rally/installation|Rally installation]]
 
# [[Rally/HowTo|How to use Rally]]
 
# [[Rally/DeployEngines|Available Deploy engines]]
 
# [[Rally/ServerProviders|Available Server providers]]
 
# [[Rally/BenchmarkScenarios|Available Benchmark scenarios]]
 
# [[Rally/HowToExtendRally|Extend Rally functionality]]
 
# [[Rally/RoadMap|Rally Road Map]]
 
</big>
 
 
 
 
 
 
 
 
 
== Links ==
 
 
 
==== Source ====
 
https://github.com/stackforge/rally
 
==== Pending Code Reviews ====
 
https://review.openstack.org/#/q/status:open+rally,n,z
 
 
 
==== Project space ====
 
http://launchpad.net/rally
 
==== Blueprints ====
 
 
 
active:    http://blueprints.launchpad.net/rally
 
 
 
v1 base: https://blueprints.launchpad.net/rally/+spec/init
 
 
 
==== Bugs ====
 
https://bugs.launchpad.net/rally
 
 
 
==== IRC chat ====
 
server:  '''freenode.net'''
 
 
 
channel: '''#openstack-rally'''
 


The page has been moved to https://rally.readthedocs.io