1. What is Rally?
2. Documentation
3. Use Cases
4. Architecture
5. Rally in action
6. How To
7. Updates
8. Rally in the World
9. Project Info
What is Rally?
OpenStack is a huge ecosystem of cooperating services. Rally is a benchmarking tool that answers the question: "How does OpenStack work at scale?". To make this possible, Rally automates and unifies multi-node OpenStack deployment, cloud verification, benchmarking & profiling. It does this in a pluggable way, making it possible to check whether OpenStack will work well on, say, a 1000-server installation under high load. Rally can thus serve as a basic tool for an OpenStack CI/CD system that continuously improves its SLA, performance and stability.
- Deploy engine - not yet another OpenStack installer, but a pluggable mechanism that unifies & simplifies working with different deployers such as DevStack, Fuel or Anvil on the hardware/VMs you have.
- Verification - (work in progress) uses Tempest to verify the functionality of a deployed OpenStack cloud. In the future, Rally will support other OpenStack verifiers.
- Benchmark engine - lets you create parameterized load on the cloud, based on a large repository of benchmark scenarios.
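A "parameterized load" is described by an input task file. The sketch below shows the general shape of such a task (the scenario name, argument names and values here are illustrative assumptions; the samples directory in the repository contains real, up-to-date task files):

```json
{
    "NovaServers.boot_and_delete_server": [
        {
            "args": {
                "flavor": {"name": "m1.tiny"},
                "image": {"name": "cirros"}
            },
            "runner": {
                "type": "constant",
                "times": 200,
                "concurrency": 10
            },
            "context": {
                "users": {"tenants": 1, "users_per_tenant": 1}
            }
        }
    ]
}
```

The runner section defines how much load to generate (here, 200 iterations with 10 running at a time), while the context section describes the environment the scenario runs in.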
The Rally documentation on ReadTheDocs is a perfect place to start learning about Rally. It provides easy, illustrative guidance through this benchmarking tool. For example, check out the Rally step-by-step tutorial, which explains, in a series of lessons, how to explore the power of Rally in benchmarking your OpenStack clouds.
Before diving deep into Rally's architecture, let's take a look at three major high-level Rally use cases:
Typical cases where Rally aims to help are:
- Automate measuring & profiling focused on how new code changes affect OpenStack performance;
- Use the Rally profiler to detect scaling & performance issues;
- Investigate how different deployments affect OpenStack performance:
  - Find the set of suitable OpenStack deployment architectures;
  - Create deployment specifications for different loads (number of controllers, Swift nodes, etc.);
- Automate the search for the hardware best suited for a particular OpenStack cloud;
- Automate production cloud specification generation:
  - Determine terminal loads for basic cloud operations: VM start & stop, block device create/destroy & various OpenStack API methods;
  - Check the performance of basic cloud operations under different loads.
OpenStack projects are usually delivered as-a-Service, so Rally provides that approach as well as a CLI-driven approach that does not require a daemon:
- Rally as-a-Service: run Rally as a set of daemons that present a Web UI (work in progress), so that one RaaS instance can be used by a whole team.
- Rally as-an-App: Rally as a lightweight CLI application (without any daemons), which keeps it simple to develop & much more portable.
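In the as-an-App mode, a typical session looks roughly like the transcript below (the exact flags and file names are assumptions; check `rally --help` for the syntax your version supports):

```
$ rally deployment create --file=existing.json --name=mycloud
$ rally task start my-task.json
$ rally task report --out=report.html
```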
How is this possible? Take a look at the diagram below:
So what is behind Rally?
Rally consists of 4 main components:
- Server Providers - provide servers (e.g. virtual servers) with SSH access, in one L3 network.
- Deploy Engines - deploy an OpenStack cloud on the servers supplied by Server Providers.
- Verification - runs Tempest (or another specific set of tests) against the deployed cloud, collects the results & presents them in a human-readable form.
- Benchmark engine - lets you write parameterized benchmark scenarios & run them against the cloud.
But why does Rally need these components?
It becomes really clear if we try to imagine how we would benchmark a cloud at scale if ...
TO BE CONTINUED
Rally in action
How amqp_rpc_single_reply_queue affects performance
To show Rally's capabilities and potential, we used the NovaServers.boot_and_destroy scenario to see how the amqp_rpc_single_reply_queue option affects VM boot-up time. Some time ago it was shown that cloud performance can be boosted by turning this option on, so naturally we decided to check that result. For this test we issued requests to boot and delete VMs for different numbers of concurrent users, ranging from 1 to 30, with and without the option set. For each group of users a total of 200 requests was issued. The averaged time per request is shown below:
So this option apparently does affect cloud performance, though not in the way it was previously thought.
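The load described above can be expressed as a single input task that runs the same scenario under different concurrency levels. A sketch (the scenario and argument names here are illustrative assumptions, not the exact file used for this experiment):

```json
{
    "NovaServers.boot_and_delete_server": [
        {
            "args": {"flavor": {"name": "m1.tiny"}, "image": {"name": "cirros"}},
            "runner": {"type": "constant", "times": 200, "concurrency": 1}
        },
        {
            "args": {"flavor": {"name": "m1.tiny"}, "image": {"name": "cirros"}},
            "runner": {"type": "constant", "times": 200, "concurrency": 30}
        }
    ]
}
```

Each entry issues 200 requests in total; only the number of concurrent users changes between entries.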
Performance of Nova instance list command
Context: 1 OpenStack user.
Scenario: 1) boot a VM as this user; 2) list the VMs.
Runner: repeat 200 times.
As a result, on each iteration the user has more and more VMs, and the performance of the VM list operation degrades quite quickly:
Complex scenarios & detailed information
For example, the NovaServers.snapshot scenario consists of a number of "atomic" actions:
- boot VM
- snapshot VM
- delete VM
- boot VM from snapshot
- delete VM
- delete snapshot
Fortunately, Rally collects the duration of every one of these operations for each iteration.
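Conceptually, timing each atomic action amounts to wrapping it in a timer. The sketch below illustrates the idea with a minimal context manager (this is an illustration of the concept only, not Rally's actual implementation):

```python
import time
from contextlib import contextmanager


class AtomicTimer:
    """Records the duration of each named "atomic" action."""

    def __init__(self):
        self.durations = {}  # action name -> duration in seconds

    @contextmanager
    def atomic(self, name):
        start = time.time()
        try:
            yield
        finally:
            # Record how long the wrapped action took, even if it failed.
            self.durations[name] = time.time() - start


timer = AtomicTimer()
with timer.atomic("boot_vm"):
    time.sleep(0.01)  # stand-in for the real "boot VM" call
with timer.atomic("snapshot_vm"):
    time.sleep(0.01)  # stand-in for the real "snapshot VM" call

print(sorted(timer.durations))  # the recorded action names
```

Rally aggregates exactly this kind of per-action data across iterations, which is what makes per-operation graphs possible.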
As a result, we can generate beautiful graphs:
Actually, there are only a few steps that should be interesting for you:
- Install Rally
- Rally step by step guide
- Add rally performance jobs to your project
- Main concepts of Rally
- Improve Rally
Periodically we write up, on a special updates page, what has been accomplished in Rally recently and what our plans for the future are. Below you can find the most recent report (January 22, 2015).
We are happy to announce that we have completely redesigned our Rally documentation on ReadTheDocs. The docs now have a simpler structure and have become much easier to get through!
One of the nicest new things is the Rally step-by-step tutorial that explains, in a series of lessons, how to explore the power of Rally in benchmarking your OpenStack clouds.
Since our previous update, there have been many interesting updates in Rally:
- Rally now has a Network Context class that enables easy Neutron network management.
- Input task files can now be written using jinja2-based template syntax. Very useful if you want to, say, parameterize the image name used throughout a complex input task file.
- Rally scenarios are now 100% covered with docstrings. That means that the rally info find <query> command will always output complete information about whatever you ask it about.
- New benchmark scenarios include those for Cinder (list_volumes, list_snapshots, extend_volume) and Nova (cold_migrate).
- We have moved the directory with samples in our repository to the root level: now it is rally/samples instead of rally/doc/samples (and so much quicker to get to).
- Rally is on its way to being Python 3 compatible. We have added a gate job that checks Rally under Python 3 and have produced lots of patches that fix incompatibility issues. Only a few changes are left to make Rally fully Python 3 compatible.
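As an illustration of the jinja2-based task templates mentioned above, an input task can reference template variables that are substituted when the task starts (the scenario and variable names here are illustrative assumptions):

```json
{
    "NovaServers.boot_and_delete_server": [
        {
            "args": {
                "image": {"name": "{{ image_name }}"},
                "flavor": {"name": "{{ flavor_name }}"}
            },
            "runner": {"type": "constant", "times": 10, "concurrency": 2}
        }
    ]
}
```

Changing the image for the whole task then means changing a single template variable instead of editing every scenario entry.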
We encourage you to take a look at the new Rally patches pending review and to help us make Rally better!
Source code for Rally is hosted at GitHub: https://github.com/stackforge/rally
You can track the overall progress in Rally via Stackalytics: http://stackalytics.com/?release=kilo&metric=commits&project_type=all&module=rally
Open reviews for Rally: https://review.openstack.org/#/q/status:open+rally,n,z
The Rally team
Rally in the World
| Title | Venue / Source |
|-------|----------------|
| Extreme OpenStack: Scale Testing OpenStack Messaging | Ubuntu Server Team |
| Rally: Testing & Benchmarking OpenStack | India OpenStack Meetup Noida |
| Rally: OpenStack Tempest Testing Made Simple(r) | https://www.mirantis.com/blog |
| KVM and Docker LXC Benchmarking with OpenStack | http://bodenr.blogspot.ru/ |
| Benchmark as a Service OpenStack-Rally | OpenStack Meetup Bangalore |
| Benchmarking OpenStack With Rally | http://www.thegeekyway.com/ |
| Benchmarking OpenStack at megascale: How we tested Mirantis OpenStack at SoftLayer | http://www.mirantis.com/blog/ |
| Benchmark OpenStack at Scale | OpenStack Summit Hong Kong |
- Source code
- Rally road map
- Project space
- Patches on review
- Meeting logs
- IRC logs (server: irc.freenode.net, channel: #openstack-rally)
Where can I discuss & propose changes?
- Our IRC channel: IRC server
- Weekly Rally team meeting: held on Tuesdays at 17:00 UTC in IRC
- OpenStack mailing list: firstname.lastname@example.org (see subscription and usage instructions);
- Rally team on Launchpad: Answers/Bugs/Blueprints.