Rally/RoadMap


Benchmarking Engine

Add support for Users & Tenants out of the box

At this moment we support the following 3 parameters:

  1. timeout - the timeout of a single scenario loop
  2. times - how many loops of the scenario to run
  3. concurrent - how many loops should run simultaneously

All tests are run as a single user, so this does not reflect real-world situations.

We are going to add two new parameters:

  1. tenants - how many tenants to create
  2. users_per_tenant - how many users should be created in each tenant


The Benchmark Engine will create all tenants & users and prepare OpenStack Python clients before starting the benchmark.
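For illustration, a benchmark task configuration with the new parameters could look roughly like the sketch below (apart from the scenario name, the key names and their placement are assumptions, not a final format):

{
    "NovaServers.boot_and_destroy": [
        {
            "timeout": 600,         # timeout of one scenario loop, in seconds
            "times": 100,           # total number of scenario loops
            "concurrent": 10,       # loops running simultaneously
            "tenants": 2,           # new: how many tenants to create
            "users_per_tenant": 5   # new: how many users in each tenant
        }
    ]
}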


Add generic cleanup mechanism

In benchmarks we are creating a lot of different resources: Tenants, Users, VMs, Snapshots, Block devices.

If something goes wrong, or a test is not well written, we will end up with a lot of allocated resources that could influence the next benchmarks. So we should clean up our OpenStack.

Such a generic cleanup can be implemented easily, because all resources are created by the users and tenants that were set up before running the benchmark scenario.

We only need 2 steps:

  1. Purge resources of each user
  2. Destroy all users & tenants
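A minimal sketch of such a cleanup, assuming the Benchmark Engine keeps the tenants and users it created and can build OpenStack clients for each user (get_nova_client is a hypothetical helper, not existing Rally code; only Nova servers are purged in this example):

def cleanup(keystone_admin, created_tenants, created_users):
    """Remove everything created for the benchmark run."""
    # Step 1: purge resources of each user (only servers shown here).
    for user in created_users:
        nova = get_nova_client(user)  # hypothetical helper returning a novaclient
        for server in nova.servers.list():
            server.delete()

    # Step 2: destroy all users & tenants via the admin keystone client.
    for user in created_users:
        keystone_admin.users.delete(user.id)
    for tenant in created_tenants:
        keystone_admin.tenants.delete(tenant.id)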


Run multiple scenarios simultaneously

Okay, we now know how to put load on Nova with the Boot & Destroy VM scenario. But how will a huge load from another scenario, e.g. Create & Destroy Block Device, influence the Nova scenario?

This could also be supported quite easily. For example, we could use a special name for such composed benchmarks:

benchmark: {
  "@composed" : {
      "NovaServers.boot_and_destroy": [...],
      ....
  }
}


More scenarios

We should add more diverse benchmark scenarios:

  1. E.g. restarting VMs
  2. Associating Floating IPs
  3. Creating & Destroying Block Devices, Snapshots and so on
  4. .....
  5. PROFIT!!

Data processing

At this moment the only thing we have is tables with min, max and avg values. Good as a first step =) But we need more!

Graphics & Plots

Simple plot Time of Loop / Iteration



Histogram of loop times:

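As an illustration, both plots could be produced with matplotlib from the raw per-loop durations collected by the Benchmark Engine (loop_times below is an assumed input, not an existing Rally structure):

import matplotlib.pyplot as plt

# Assumed input: duration of every scenario loop, in seconds.
loop_times = [2.1, 2.3, 1.9, 2.8, 2.2, 3.1, 2.0]

# Plot of loop time per iteration.
plt.figure()
plt.plot(range(1, len(loop_times) + 1), loop_times)
plt.xlabel("Iteration")
plt.ylabel("Loop time, s")

# Histogram of loop times.
plt.figure()
plt.hist(loop_times, bins=10)
plt.xlabel("Loop time, s")
plt.ylabel("Number of loops")

plt.show()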


Profiling

Improve & Merge Tomograph into upstream

To collect profiling data we use a small library, Tomograph, that was created to be used with OpenStack (it needs to be improved as well). Profiling data is collected by inserting profiling/log points into OpenStack source code and by adding event listeners/hooks on events from 3rd party libraries that support them (e.g. sqlalchemy). Currently our patches are applied to OpenStack code during cloud deployment. For easier maintenance and better profiling results the profiler could be integrated as an oslo component, and in that case its patches could be merged upstream. Profiling itself would be managed by configuration options.
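For example, the sqlalchemy hook mentioned above could look roughly like this; record_span is a placeholder standing in for the actual Tomograph call:

import time

from sqlalchemy import event
from sqlalchemy.engine import Engine


def record_span(name, statement, duration):
    # Placeholder: in reality this would send the span to the profiler/collector.
    print(name, duration, statement[:60])


@event.listens_for(Engine, "before_cursor_execute")
def before_cursor_execute(conn, cursor, statement, parameters, context, executemany):
    # Remember when the query started.
    conn.info.setdefault("query_start_time", []).append(time.time())


@event.listens_for(Engine, "after_cursor_execute")
def after_cursor_execute(conn, cursor, statement, parameters, context, executemany):
    # Report the query and its duration as a profiling span.
    duration = time.time() - conn.info["query_start_time"].pop()
    record_span("db.query", statement, duration)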


Improve Zipkin or use something else

Currently we use Zipkin as a collector and visualization service, but in the future we plan to replace it with something more suitable in terms of load (possibly Ceilometer?) and to improve the visualization format (we need better charts).

You can see some early results here:


Make it work out of the box

A few things should be done:

  1. Merge Tomograph (which will send logs to Zipkin) into upstream
  2. Bind Tomograph to the Benchmark Engine
  3. Automate installation of Zipkin from Rally

Server providing

Improve VirshProvider

  1. Implement network installation of Linux on a VM (currently only cloning of an existing VM is implemented)
  2. Add zfs/lvm2 support for fast cloning

Implement LxcProvider

This provider is intended for fast deployment of a large number of instances. It should support zfs clones for fast deployment.

Implement AmazonProvider

Get your VMs from Amazon
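As an illustration, such a provider could obtain servers roughly like this with boto (the function and its arguments are an assumption for this sketch, not existing Rally code):

import time

import boto.ec2


def create_amazon_servers(region, ami, count, flavor="m1.small"):
    """Boot `count` EC2 instances and return their public addresses."""
    # Credentials are taken from the environment / boto config.
    conn = boto.ec2.connect_to_region(region)
    reservation = conn.run_instances(ami, min_count=count, max_count=count,
                                     instance_type=flavor)
    instances = reservation.instances
    # Wait until every instance is running.
    for instance in instances:
        while instance.state != "running":
            time.sleep(5)
            instance.update()
    return [instance.public_dns_name for instance in instances]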

Deployers

Implement MultihostEngine

This engine will deploy a multihost configuration using existing engines.
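A deployment config for such an engine might look roughly like the sketch below, with one nested engine for the controller and another for the compute nodes (all names, including DevstackEngine, and the overall structure are purely illustrative):

{
    "name": "MultihostEngine",
    "controller": {
        "name": "DevstackEngine"
    },
    "nodes": [
        {"name": "DevstackEngine", "count": 2}
    ]
}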


Implement Dev Fuel based engine

Deploy OpenStack using Fuel on existing servers or VMs.


Implement Full Fuel based engine

Deploy OpenStack with Fuel on bare metal nodes.


Implement TripleO based engine

Deploy OpenStack on bare metal nodes using TripleO.