Difference between revisions of "Rally/Concepts"
== Main concepts ==
 
  
This article describes in detail several main design concepts used throughout Rally (such as '''benchmark scenarios''', '''scenario runners''' and '''contexts'''). A good understanding of these concepts is essential for a successful contribution to Rally.
The page has been moved to https://rally.readthedocs.io
 
 
 
 
=== Benchmark scenarios ===
 
==== Concept ====
 
The concept of '''benchmark scenarios''' is a central one in Rally. Benchmark scenarios are what Rally actually uses to '''test the performance of an OpenStack deployment'''. They also play the role of main building blocks in '''benchmark task configuration files'''. Each benchmark scenario performs a small '''set of atomic operations''', thus testing some '''simple use case''', usually that of a specific OpenStack project. For example, the '''''"NovaServers"''''' scenario group contains scenarios that employ several basic operations available in '''''nova'''''. In particular, the '''''"boot_and_delete_server"''''' benchmark scenario from that group benchmarks the performance of a sequence of only '''two simple operations''': it first '''boots''' a server (with customizable parameters) and then '''deletes''' it.
 
 
 
 
 
==== User's view ====
 
From the user's point of view, Rally launches different benchmark scenarios while performing a benchmark task. A '''benchmark task''' is essentially a set of benchmark scenarios run against some OpenStack deployment in a specific (and customizable) manner, launched by the CLI command:
 
 
 
'''''rally task start --task=<task_config.json>'''''
 
 
 
Accordingly, the user may specify the names and parameters of benchmark scenarios to be run in '''benchmark task configuration files'''. A typical configuration file would have the following contents:
 
 
 
{
    '''"NovaServers.boot_server"''': [
        {
            '''"args": {'''
                '''"flavor_id": 42,'''
                '''"image_id": "73257560-c59b-4275-a1ec-ab140e5b9979"'''
            '''},'''
            "runner": {"times": 3},
            "context": {...}
        },
        {
            '''"args": {'''
                '''"flavor_id": 1,'''
                '''"image_id": "3ba2b5f6-8d8d-4bbe-9ce5-4be01d912679"'''
            '''},'''
            "runner": {"times": 3},
            "context": {...}
        }
    ],
    '''"CinderVolumes.create_volume"''': [
        {
            '''"args": {'''
                '''"size": 42'''
            '''},'''
            "runner": {"times": 3},
            "context": {...}
        }
    ]
}
 
 
 
 
 
In this example, the task configuration file specifies two benchmarks to be run, namely '''"NovaServers.boot_server"''' and '''"CinderVolumes.create_volume"''' (benchmark name = ''ScenarioClassName.method_name''). Each benchmark scenario may be started several times with different parameters; in our example, that's the case with '''"NovaServers.boot_server"''', which is used to test booting servers from different images and flavors.
 
 
 
Note that inside each scenario configuration, the benchmark scenario is actually launched '''3 times''' (as specified in the '''"runner"''' field). The '''"runner"''' field can also specify in more detail how exactly the benchmark scenario should be launched; we elaborate on that in the ''"Scenario Runners"'' section below.
 
 
 
 
 
==== Developer's view ====
 
 
 
From the developer's perspective, a benchmark scenario is a method marked with a '''@scenario''' decorator and placed in a class that inherits from the base [https://github.com/stackforge/rally/blob/master/rally/benchmark/scenarios/base.py#L40 '''Scenario'''] class and is located in some subpackage of [https://github.com/stackforge/rally/tree/master/rally/benchmark/scenarios ''rally.benchmark.scenarios'']. There may be arbitrarily many benchmark scenarios in a scenario class; each of them should be referenced (in the task configuration file) as ''ScenarioClassName.method_name''.
 
 
 
In the toy example below, we define a scenario class ''MyScenario'' with one benchmark scenario ''MyScenario.scenario''. This benchmark scenario tests the performance of a sequence of two actions, implemented via private methods in the same class. Both methods are marked with the '''@atomic_action_timer''' decorator. This allows Rally to handle those actions in a special way and, after benchmarks complete, to show runtime statistics not only for whole scenarios, but for separate actions as well.
 
 
 
<pre>
from rally.benchmark.scenarios import base
from rally.benchmark.scenarios import utils


class MyScenario(base.Scenario):
    """My class that contains benchmark scenarios."""

    @utils.atomic_action_timer("action_1")
    def _action_1(self, **kwargs):
        """Do something with the cloud."""

    @utils.atomic_action_timer("action_2")
    def _action_2(self, **kwargs):
        """Do something with the cloud."""

    @base.scenario()
    def scenario(self, **kwargs):
        self._action_1()
        self._action_2()
</pre>
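Assuming the toy class above is placed in a subpackage of ''rally.benchmark.scenarios'', it could then be referenced in a task configuration file by the usual ''ScenarioClassName.method_name'' convention (a hypothetical sketch; the runner and context values are placeholders):

<pre>
{
    "MyScenario.scenario": [
        {
            "args": {},
            "runner": {"times": 1},
            "context": {...}
        }
    ]
}
</pre>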
 
 
 
 
 
 
 
 
 
=== Scenario runners ===
 
==== Concept ====
 
'''Scenario Runners''' in Rally are entities that control the execution type and order of benchmark scenarios. They support different running '''strategies for creating load on the cloud''', including simulating ''concurrent requests'' from different users, periodic load, gradually growing load and so on.
 
 
 
 
 
==== User's view ====
 
The user can specify the desired type of load on the cloud through the '''"runner"''' section in the '''task configuration file''':
 
 
 
{
    "NovaServers.boot_server": [
        {
            "args": {
                "flavor_id": 42,
                "image_id": "73257560-c59b-4275-a1ec-ab140e5b9979"
            },
            '''"runner": {'''
                '''"type": "constant",'''
                '''"times": 15,'''
                '''"concurrency": 2'''
            '''},'''
            "context": {
                "users": {
                    "tenants": 1,
                    "users_per_tenant": 3
                },
                "quotas": {
                    "nova": {
                        "instances": 20
                    }
                }
            }
        }
    ]
}
 
 
 
 
 
The scenario running strategy is specified by its '''type''' and also by some type-specific parameters. Available types include:
 
* '''''constant''''' creates a constant load by running the scenario a fixed number of '''''times''''', possibly in parallel (controlled by the '''''"concurrency"''''' parameter).
* '''''constant_for_duration''''' works exactly like '''''constant''''', but runs the benchmark scenario until a specified number of seconds elapses (the '''''"duration"''''' parameter).
* '''''periodic''''' executes benchmark scenarios with intervals between two consecutive runs, specified in seconds in the '''''"period"''''' field.
* '''''serial''''' is very useful for testing new scenarios, since it simply runs the benchmark scenario a fixed number of '''''times''''' in a single thread.
 
 
 
 
 
Also, all scenario runners accept (again, through the '''"runner"''' section in the config file) an optional '''''"timeout"''''' parameter, which specifies the timeout for each single benchmark scenario run (in seconds).
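For instance, the other runner types and the '''''"timeout"''''' option could be configured as follows (hypothetical parameter values, shown only for illustration):

<pre>
"runner": {"type": "constant_for_duration", "duration": 60, "concurrency": 2, "timeout": 30}
"runner": {"type": "periodic", "times": 10, "period": 5}
"runner": {"type": "serial", "times": 3}
</pre>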
 
 
 
 
 
==== Developer's view ====
 
It is possible to extend Rally with new scenario runner types, if needed. Basically, each scenario runner should be implemented as a subclass of the base [https://github.com/stackforge/rally/blob/master/rally/benchmark/runners/base.py#L137 '''ScenarioRunner'''] class and located in the [https://github.com/stackforge/rally/tree/master/rally/benchmark/runners rally.benchmark.runners package]. The interface each scenario runner class should support is fairly simple:
 
 
 
<pre>
from rally.benchmark.runners import base
from rally import utils


class MyScenarioRunner(base.ScenarioRunner):
    """My scenario runner."""

    # This string is what the user will have to specify in the task
    # configuration file (in "runner": {"type": ...})
    __execution_type__ = "my_scenario_runner"

    # CONFIG_SCHEMA is used to automatically validate the input
    # config of the scenario runner, passed by the user in the task
    # configuration file.
    CONFIG_SCHEMA = {
        "type": "object",
        "$schema": utils.JSON_SCHEMA,
        "properties": {
            "type": {
                "type": "string"
            },
            "some_specific_property": {...}
        }
    }

    def _run_scenario(self, cls, method_name, ctx, args):
        """Run the scenario 'method_name' from scenario class 'cls'
        with arguments 'args', given a context 'ctx'.

        This method should return the results dictionary wrapped in
        a base.ScenarioRunnerResult object (not plain JSON).
        """
        results = ...
        return base.ScenarioRunnerResult(results)
</pre>
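To illustrate what a runner conceptually does, here is a simplified, standalone sketch of a serial-style runner loop that times each scenario run (this is not the real Rally code; the function name and result fields are assumptions made for the illustration):

<pre>
import time


def run_serial(scenario, times, args=None):
    """Run 'scenario' sequentially 'times' times, timing each run.

    Returns a list of per-run result dicts, roughly mirroring the
    shape of data a real scenario runner collects.
    """
    args = args or {}
    results = []
    for _ in range(times):
        start = time.time()
        error = None
        try:
            scenario(**args)
        except Exception as e:  # a real runner records failures, too
            error = str(e)
        results.append({"duration": time.time() - start, "error": error})
    return results
</pre>

A real runner additionally validates its config against CONFIG_SCHEMA and wraps the collected data in a ''ScenarioRunnerResult''; the loop above only shows the core run-and-measure idea.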
 
 
 
 
 
 
 
 
 
=== Benchmark contexts ===
 
==== Concept ====
 
The notion of '''contexts''' in Rally is essentially used to define different types of '''environments''' in which benchmark scenarios can be launched. Those environments are usually specified by such parameters as the number of '''tenants and users''' that should be present in an OpenStack project, the '''roles''' granted to those users, extended or narrowed '''quotas''' and so on.
 
 
 
 
 
==== User's view ====
 
From the user's perspective, contexts in Rally are managed via the '''task configuration files'''. In a typical configuration file, each benchmark scenario to be run is supplied not only with information on the arguments it should be launched with and how many times, but also with a special '''"context"''' section. In this section, the user may configure the contexts the scenarios should be run within.
 
 
 
In the example below, the '''"users" context''' specifies that the ''"NovaServers.boot_server"'' scenario should be run from '''1 tenant''' having '''3 users''' in it. Bearing in mind that the default quota is 10 instances per tenant, it is also reasonable to extend it to, say, '''20 instances''' in the '''"quotas" context'''. Otherwise the scenario would eventually fail, since it tries to boot a server 15 times from a single tenant.
 
 
 
{
    "NovaServers.boot_server": [
        {
            "args": {
                "flavor_id": 42,
                "image_id": "73257560-c59b-4275-a1ec-ab140e5b9979"
            },
            "runner": {
                "type": "constant",
                "times": 15,
                "concurrency": 2
            },
            '''"context": {'''
                '''"users": {'''
                    '''"tenants": 1,'''
                    '''"users_per_tenant": 3'''
                '''},'''
                '''"quotas": {'''
                    '''"nova": {'''
                        '''"instances": 20'''
                    '''}'''
                '''}'''
            '''}'''
        }
    ]
}
 
 
 
 
 
==== Developer's view ====
 
 
 
From the developer's point of view, context management is implemented via '''Context classes'''. Each context type that can be specified in the task configuration file corresponds to a certain subclass of the base [https://github.com/stackforge/rally/blob/master/rally/benchmark/context/base.py '''Context'''] class, located in the [https://github.com/stackforge/rally/tree/master/rally/benchmark/context '''rally.benchmark.context'''] package. Every context class should implement a fairly simple '''interface''':
 
 
 
<pre>
from rally.benchmark.context import base
from rally import utils


class YourContext(base.Context):
    """Yet another context class."""

    __ctx_name__ = "your_context"  # Corresponds to the context field name in task configuration files
    __ctx_order__ = xxx            # A 3-digit number specifying the priority with which the context should be set up
    __ctx_hidden__ = False         # True if the context cannot be configured through the task configuration file

    # The schema of the context configuration format
    CONFIG_SCHEMA = {
        "type": "object",
        "$schema": utils.JSON_SCHEMA,
        "additionalProperties": False,
        "properties": {
            "property_1": <SCHEMA>,
            "property_2": <SCHEMA>
        }
    }

    def __init__(self, context):
        super(YourContext, self).__init__(context)
        # Initialize the necessary stuff

    def setup(self):
        # Prepare the environment in the desired way
        pass

    def cleanup(self):
        # Clean up the environment properly
        pass
</pre>
 
 
 
Consequently, the algorithm of initiating the contexts can be roughly seen as follows:
 
<pre>
context1 = Context1(ctx)
context2 = Context2(ctx)
context3 = Context3(ctx)

context1.setup()
context2.setup()
context3.setup()

<Run benchmark scenarios in the prepared environment>

context3.cleanup()
context2.cleanup()
context1.cleanup()
</pre>
 
 
 
The order in which contexts are set up depends on the value of their ''__ctx_order__'' attribute: contexts with lower ''__ctx_order__'' have higher priority. ''1xx'' contexts are reserved for users-related stuff (e.g. users/tenants creation, roles assignment etc.), ''2xx'' for quotas etc.
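This ordering can be sketched in plain, standalone Python (a simplified illustration with hypothetical context names and order values, not the real Context classes):

<pre>
class Ctx:
    """Minimal stand-in for a Rally Context subclass."""

    def __init__(self, name, order, log):
        self.__ctx_order__ = order
        self.name = name
        self.log = log

    def setup(self):
        self.log.append("setup:" + self.name)

    def cleanup(self):
        self.log.append("cleanup:" + self.name)


log = []
contexts = [Ctx("quotas", 200, log), Ctx("users", 100, log)]

# Set up in ascending __ctx_order__ (lower order = higher priority) ...
ordered = sorted(contexts, key=lambda c: c.__ctx_order__)
for c in ordered:
    c.setup()

# ... <run benchmark scenarios here> ...

# ... then clean up in the reverse order, LIFO-style.
for c in reversed(ordered):
    c.cleanup()
</pre>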
 
 
 
The ''__ctx_hidden__'' attribute defines whether the context should be a ''hidden'' one. '''Hidden contexts''' cannot be configured by end-users through the task configuration file as shown above; instead, they are specified by a benchmark scenario developer through a special ''@base.scenario(context={...})'' decorator. Hidden contexts are typically needed to satisfy some scenario-specific needs that don't require the end-user's attention. For example, the hidden [https://github.com/stackforge/rally/blob/master/rally/benchmark/context/secgroup.py#L80-L109 '''"allow_ssh" context'''] is used in the [https://github.com/stackforge/rally/blob/master/rally/benchmark/scenarios/vm/vmtasks.py#L38-L43 '''VMTasks.boot_runcommand_delete benchmark scenario'''] to enable SSH access to the servers. The fact that end-users do not have to worry about such SSH details while launching this benchmark scenario obviously makes their life easier and shows why hidden contexts are of great importance in Rally.
 
 
 
If you want to dive deeper, see also the [https://github.com/stackforge/rally/blob/master/rally/benchmark/context/base.py#L78-L117 context manager] class that actually implements the algorithm described above.
 

Latest revision as of 13:00, 30 October 2017
