== Usage demo ==

<big>'''Rally wiki documentation is obsolete.'''</big>

''Everything moved to https://rally.readthedocs.org''

Here is the [https://rally.readthedocs.org/en/latest/tutorial.html Rally Step by Step Guide].

'''''NOTE:''''' Throughout this demo, we assume that you have a configured [[Rally/installation|Rally installation]] and that an already existing OpenStack deployment has keystone available at ''<KEYSTONE_AUTH_URL>''.
=== Step 1. Deployment initialization ===
 
First, you have to provide Rally with the OpenStack deployment it is going to benchmark. This is done through ''deployment configuration files''. The deployment can either be created by Rally itself (see ''/doc/samples'' for configuration examples) or, as in our example, be an already existing one. The configuration file (let's call it '''dummy_deployment.json''') should specify the deployment strategy (in our case, the so-called ''"DummyEngine"'', since the deployment is already up and running) and some engine-specific parameters (for the ''DummyEngine'', an endpoint with administrator permissions):
 
 
 
<pre>
{
    "name": "DummyEngine",
    "endpoint": {
        "auth_url": <KEYSTONE_AUTH_URL>,
        "username": <ADMIN_USER_NAME>,
        "password": <ADMIN_PASSWORD>,
        "tenant_name": <ADMIN_TENANT>
    }
}
</pre>
 
 
 
 
 
To register this deployment in Rally, use the '''deployment create''' command:
 
 
 
<pre>
$ rally deployment create --filename=dummy_deployment.json --name=dummy
+-------------------+----------------------------+-------+------------------+
|        uuid       |         created_at         |  name |      status      |
+-------------------+----------------------------+-------+------------------+
| <Deployment UUID> | 2014-02-15 22:00:28.270941 | dummy | deploy->finished |
+-------------------+----------------------------+-------+------------------+
Using deployment : <Deployment UUID>
</pre>
 
 
 
 
 
Note the last line of the output. It says that the newly created deployment is now in use by Rally, which means that all benchmarking operations from now on will be performed on this deployment. If you want to switch to another deployment, execute the '''use deployment''' command:
 
 
 
<pre>
$ rally use deployment --deploy_id=<Another deployment UUID>
Using deployment : <Another deployment UUID>
</pre>
 
 
 
 
 
Finally, the '''deployment check''' command enables you to verify that your current deployment is healthy and ready to be benchmarked:
 
 
 
<pre>
$ rally deployment check
+----------+-----------+-----------+
| services |    type   |   status  |
+----------+-----------+-----------+
|   nova   |  compute  | Available |
| cinderv2 |  volumev2 | Available |
|  novav3  | computev3 | Available |
|    s3    |     s3    | Available |
|  glance  |   image   | Available |
|  cinder  |   volume  | Available |
|   ec2    |    ec2    | Available |
| keystone |  identity | Available |
+----------+-----------+-----------+
</pre>
 
 
 
 
 
=== Step 2. Benchmarking ===
 
Now that we have a working and registered deployment, we can start benchmarking it. Again, the sequence of benchmark scenarios to be launched by Rally should be specified in a ''benchmark task configuration file''. Note that there is already a set of nice benchmark task examples in ''doc/samples/tasks/'' (assuming that you are in the Rally root directory). The natural first step is to try one of these sample tasks, say, the one that boots and deletes multiple servers (''doc/samples/tasks/nova/boot-and-delete.json''). To launch a benchmark task, run the '''task start''' command:
 
 
 
<pre>
$ rally task start --task=doc/samples/tasks/nova/boot-and-delete.json
+--------------------------------------+----------------------------+--------+--------+
|                 uuid                 |         created_at         | status | failed |
+--------------------------------------+----------------------------+--------+--------+
| 269297e1-1e25-4e08-98f4-8018d6df9adb | 2014-02-16 08:52:16.722415 |  init  | False  |
+--------------------------------------+----------------------------+--------+--------+
2014-02-16 12:52:28.576 31995 ERROR glanceclient.common.http [-] Request returned failure status.
2014-02-16 12:52:39.524 31995 ERROR rally.benchmark.engine [-] Scenario (0, NovaServers.boot_and_delete_server) input arguments validation error: Image with id '73257560-c59b-4275-a1ec-ab140e5b9979' not found

================================================================================
Task 269297e1-1e25-4e08-98f4-8018d6df9adb is finished. Failed: True
--------------------------------------------------------------------------------
...
</pre>
 
 
 
 
 
This attempt, however, will most likely fail with an ''input arguments validation error'' (caused by a non-existing image id). The benchmark scenario that boots a server needs a concrete image available in your OpenStack deployment, and image ids differ from one deployment to another. That's why you should first make a copy of the sample benchmark task:
 
 
 
<pre>
cp doc/samples/tasks/nova/boot-and-delete.json my-task.json
</pre>
 
 
 
 
 
and then edit it with the resource uuids from your OpenStack installation:
 
 
 
<pre>
{
  "NovaServers.boot_and_delete_server": [
    {"args": {"flavor_id": <NOVA_FLAVOR_ID>, "image_id": <GLANCE_IMAGE_UUID>},
     "config": {"times": 2, "active_users": 1}},
    {"args": {"flavor_id": <NOVA_FLAVOR_ID>, "image_id": <GLANCE_IMAGE_UUID>},
     "config": {"times": 4, "active_users": 2}}
  ]
}
</pre>
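If you'd rather script this substitution than edit the file by hand, a minimal sketch like the following works (the flavor id and image uuid below are made-up placeholders; use the values from your own cloud):

```python
import json

# Made-up ids -- replace them with real values obtained from
# `nova flavor-list` and `glance image-list` on your own cloud.
flavor_id = 1
image_uuid = "73257560-c59b-4275-a1ec-ab140e5b9979"

# Build the same task structure as my-task.json above.
task = {
    "NovaServers.boot_and_delete_server": [
        {"args": {"flavor_id": flavor_id, "image_id": image_uuid},
         "config": {"times": 2, "active_users": 1}},
        {"args": {"flavor_id": flavor_id, "image_id": image_uuid},
         "config": {"times": 4, "active_users": 2}},
    ]
}

with open("my-task.json", "w") as f:
    json.dump(task, f, indent=2)
```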
 
 
 
 
 
To obtain a proper image_id and flavor_id, you can use the standard Python clients. First of all, source a suitable ''openrc'' file for your cloud; the following command will do:
 
 
 
  $ . ~/.rally/openrc
 
 
 
 
 
Now let's get a proper image uuid:
 
 
 
<pre>
$ glance image-list
+--------------------------------------+---------------------------------+-------------+------------------+----------+
| ID                                   | Name                            | Disk Format | Container Format | Size     |
+--------------------------------------+---------------------------------+-------------+------------------+----------+
| <UUID_THAT_YOU_NEED>                 | cirros-0.3.1-x86_64-uec         | ami         | ami              | 25165824 |
| b420cefb-eae6-4738-9ee7-6ad8d36b125d | cirros-0.3.1-x86_64-uec-kernel  | aki         | aki              | 4955792  |
| 8b217e33-4557-4826-aa1a-983149a27ed7 | cirros-0.3.1-x86_64-uec-ramdisk | ari         | ari              | 3714968  |
+--------------------------------------+---------------------------------+-------------+------------------+----------+
</pre>
 
 
 
 
 
and a proper flavor id:
 
 
 
<pre>
$ nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 42 | m1.nano   | 64        | 0    | 0         |      | 1     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
| 84 | m1.micro  | 128       | 0    | 0         |      | 1     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
</pre>
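If you'd rather extract these ids programmatically than copy them by hand, the ASCII tables printed by the clients are easy to parse. A small sketch (the sample table below is an abbreviated version of the flavor listing above):

```python
def table_ids_by_name(table_text):
    """Parse an OpenStack CLI ASCII table into a {Name: ID} dict."""
    # Data/header rows start with '|'; border rows start with '+'.
    rows = [line for line in table_text.splitlines()
            if line.startswith("|")]
    header = [c.strip() for c in rows[0].strip("|").split("|")]
    id_col, name_col = header.index("ID"), header.index("Name")
    result = {}
    for row in rows[1:]:
        cells = [c.strip() for c in row.strip("|").split("|")]
        result[cells[name_col]] = cells[id_col]
    return result

# Abbreviated sample of the `nova flavor-list` output shown above.
sample = """\
+----+-----------+-----------+
| ID | Name      | Memory_MB |
+----+-----------+-----------+
| 1  | m1.tiny   | 512       |
| 42 | m1.nano   | 64        |
+----+-----------+-----------+"""

flavors = table_ids_by_name(sample)
print(flavors["m1.tiny"])  # prints: 1
```

The same helper works for the `glance image-list` output, since it uses the same `ID`/`Name` column headers.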
 
 
 
 
 
After you've edited the '''''my-task.json''''' file, you can run the benchmark task again. This time, let's also use the ''--verbose'' parameter to retrieve more logging from Rally while it performs the benchmarking:
 
 
 
<pre>
$ rally --verbose task start --task=my-task.json
</pre>
 
 
 
 
 
Using another terminal (or an ssh connection to the machine running Rally), you can watch the current task status with the '''task list''' command:
 
 
 
<pre>
$ rally task list
+------------------------------+----------------------------+-------------------------+--------+
|             uuid             |         created_at         |          status         | failed |
+------------------------------+----------------------------+-------------------------+--------+
|         <Task UUID>          | 2013-09-16 05:28:57.241456 | test_tool->benchmarking | False  |
+------------------------------+----------------------------+-------------------------+--------+
</pre>
 
 
 
 
 
Once the benchmark task has finished, you can get detailed results with the '''task detailed''' command:
 
 
 
<pre>
$ rally task detailed <Task UUID>

================================================================================
Task <Task UUID> is finished.
--------------------------------------------------------------------------------
test scenario NovaServers.boot_and_delete_server
args position 0
args values:
{u'args': {u'flavor_id': <Flavor ID>,
           u'image_id': u'<Image ID>'},
 u'concurrent': 1,
 u'times': 2}
+---------------+---------------+---------------+-------+
|      max      |      avg      |      min      | ratio |
+---------------+---------------+---------------+-------+
| 13.4224121571 | 13.2850991488 | 13.1477861404 |  1.0  |
+---------------+---------------+---------------+-------+
--------------------------------------------------------------------------------
test scenario NovaServers.boot_and_delete_server
args position 1
args values:
{u'args': {u'flavor_id': <Flavor ID>,
           u'image_id': u'<Image ID>'},
 u'concurrent': 2,
 u'times': 6}
+---------------+---------------+---------------+-------+
|      max      |      avg      |      min      | ratio |
+---------------+---------------+---------------+-------+
| 19.802423954  | 16.9980401595 | 16.3908159733 |  1.0  |
+---------------+---------------+---------------+-------+
</pre>
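The max/avg/min columns are simple statistics over the durations (in seconds) of the individual scenario runs. As a rough sketch of the computation (the durations below are made-up numbers, not taken from the run above):

```python
# Made-up per-run durations, in seconds; with "times": 2 there
# are two runs of the scenario to aggregate.
durations = [13.1477861404, 13.4224121571]

stats = {
    "max": max(durations),
    "min": min(durations),
    "avg": sum(durations) / len(durations),
}
print(stats)
```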
 
 
 
 
 
== Available Rally facilities ==
 
 
 
To run more complex benchmark scenarios against more sophisticated OpenStack deployment types, you should familiarize yourself with the other '''''deploy engines''''', '''''server providers''''' and '''''benchmark scenarios''''' available in Rally.
 
 
 
List of available Deploy engines (including their description and usage examples):
 
[[Rally/DeployEngines|Deploy engines]]
 
 
 
List of available Server providers (including their description and usage examples):
 
[[Rally/ServerProviders|Server providers]]
 
 
 
List of available Benchmark scenarios (including their description and usage examples):
 
[[Rally/BenchmarkScenarios|Benchmark scenarios]]
 

Latest revision as of 00:44, 27 February 2015
