Rally/HowTo

Revision as of 13:47, 27 February 2014

Usage demo

NOTE: Throughout this demo, we assume that you have a configured Rally installation and an already existing OpenStack deployment with Keystone available at <KEYSTONE_AUTH_URL>.


Step 1. Deployment initialization

First, you have to provide Rally with the OpenStack deployment it is going to benchmark. This is done through a deployment configuration file. The deployment itself can either be created by Rally (see doc/samples for configuration examples) or, as in our example, already exist. The configuration file (let's call it dummy_deployment.json) should specify the deployment strategy (in our case, the so-called "DummyEngine", since the deployment is already up and running) and some engine-specific parameters (for the DummyEngine, an endpoint with administrator permissions):

{
    "name": "DummyEngine",
    "endpoint": {
        "auth_url": <KEYSTONE_AUTH_URL>,
        "username": <ADMIN_USER_NAME>,
        "password": <ADMIN_PASSWORD>,
        "tenant_name": <ADMIN_TENANT>
    }
}
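If you script your setup, the same file can be generated programmatically. Below is a minimal sketch that writes dummy_deployment.json using Python's standard json module; the endpoint values are placeholders you must replace with your cloud's actual credentials:

```python
import json

# Placeholder credentials -- replace these with your cloud's real values.
deployment = {
    "name": "DummyEngine",
    "endpoint": {
        "auth_url": "http://<KEYSTONE_HOST>:5000/v2.0/",
        "username": "admin",
        "password": "secret",
        "tenant_name": "admin",
    },
}

# Write the config in the format expected by `rally deployment create`.
with open("dummy_deployment.json", "w") as f:
    json.dump(deployment, f, indent=4)
```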


To register this deployment in Rally, use the deployment create command:

$ rally deployment create --filename=dummy_deployment.json --name=dummy
+---------------------------+----------------------------+-------+------------------+
|            uuid           |         created_at         |  name |      status      |
+---------------------------+----------------------------+-------+------------------+
|     <Deployment UUID>     | 2014-02-15 22:00:28.270941 | dummy | deploy->finished |
+---------------------------+----------------------------+-------+------------------+
Using deployment : <Deployment UUID>


Note the last line in the output. It says that the deployment you have just created is now in use by Rally, which means that all benchmarking operations from now on will be performed on this deployment. If you want to switch to another deployment, execute the use deployment command:

$ rally use deployment --deploy-id=<Another deployment UUID>
Using deployment : <Another deployment UUID>


Finally, the deployment check command enables you to verify that your current deployment is healthy and ready to be benchmarked:

$ rally deployment check
+----------+-----------+-----------+
| services |    type   |   status  |
+----------+-----------+-----------+
|   nova   |  compute  | Available |
| cinderv2 |  volumev2 | Available |
|  novav3  | computev3 | Available |
|    s3    |     s3    | Available |
|  glance  |   image   | Available |
|  cinder  |   volume  | Available |
|   ec2    |    ec2    | Available |
| keystone |  identity | Available |
+----------+-----------+-----------+


Step 2. Benchmarking

Now that we have a working and registered deployment, we can start benchmarking it. The sequence of benchmark scenarios to be launched by Rally is specified in a benchmark task configuration file. There is already a set of sample benchmark tasks in doc/samples/tasks/ (assuming that you are in the Rally root directory), so a natural first step is to try one of them, say, the one that boots and deletes multiple servers (doc/samples/tasks/nova/boot-and-delete.json). To start a benchmark task, run the task start command:

$ rally task start --task=doc/samples/tasks/nova/boot-and-delete.json
+--------------------------------------+----------------------------+--------+--------+
|                 uuid                 |         created_at         | status | failed |
+--------------------------------------+----------------------------+--------+--------+
| 269297e1-1e25-4e08-98f4-8018d6df9adb | 2014-02-16 08:52:16.722415 |  init  | False  |
+--------------------------------------+----------------------------+--------+--------+
2014-02-16 12:52:28.576 31995 ERROR glanceclient.common.http [-] Request returned failure status.
2014-02-16 12:52:39.524 31995 ERROR rally.benchmark.engine [-] Scenario (0, NovaServers.boot_and_delete_server) input arguments validation error: Image with id '73257560-c59b-4275-a1ec-ab140e5b9979' not found

================================================================================
Task 269297e1-1e25-4e08-98f4-8018d6df9adb is finished. Failed: True
--------------------------------------------------------------------------------
...


This attempt, however, will most likely fail with an input arguments validation error (due to a non-existing image id). The reason is that the benchmark scenario that boots a server has to use a concrete image available in your OpenStack deployment, and image ids differ between installations. That's why you should first make a copy of the sample benchmark task:

cp doc/samples/tasks/nova/boot-and-delete.json my-task.json


and then edit it with the resource uuids from your OpenStack installation:

{
  "NovaServers.boot_and_delete_server": [
    {"args": {"flavor_id": <NOVA_FLAVOR_ID>, "image_id": <GLANCE_IMAGE_UUID>},
     "config": {"times": 2, "active_users": 1}},
    {"args": {"flavor_id": <NOVA_FLAVOR_ID>, "image_id": <GLANCE_IMAGE_UUID>},
     "config": {"times": 4, "active_users": 2}}
  ]
}
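Instead of editing the copy by hand, you can also generate my-task.json from a script. A minimal sketch, assuming you have already looked up your image and flavor ids (the values below are placeholders, not real ids from any cloud):

```python
import json

# Placeholder ids -- substitute the ones from your glance/nova listings.
flavor_id = "1"
image_id = "73257560-c59b-4275-a1ec-ab140e5b9979"

task = {
    "NovaServers.boot_and_delete_server": [
        {"args": {"flavor_id": flavor_id, "image_id": image_id},
         "config": {"times": 2, "active_users": 1}},
        {"args": {"flavor_id": flavor_id, "image_id": image_id},
         "config": {"times": 4, "active_users": 2}},
    ],
}

# Write the task file that `rally task start --task=my-task.json` will consume.
with open("my-task.json", "w") as f:
    json.dump(task, f, indent=2)
```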


To obtain a proper image_id and flavor_id, you can use the standard Python clients. First of all, source an openrc file for your cloud; the following command should work:

 $ . ~/.rally/openrc


Now let's get a proper image uuid:

$ glance image-list 
+--------------------------------------+---------------------------------+-------------+------------------+----------+
| ID                                   | Name                            | Disk Format | Container Format | Size     |
+--------------------------------------+---------------------------------+-------------+------------------+----------+
| <UUID_THAT_YOU_NEED>                 | cirros-0.3.1-x86_64-uec         | ami         | ami              | 25165824 |
| b420cefb-eae6-4738-9ee7-6ad8d36b125d | cirros-0.3.1-x86_64-uec-kernel  | aki         | aki              | 4955792  |
| 8b217e33-4557-4826-aa1a-983149a27ed7 | cirros-0.3.1-x86_64-uec-ramdisk | ari         | ari              | 3714968  |
+--------------------------------------+---------------------------------+-------------+------------------+----------+


and a proper flavor id:

$ nova flavor-list
+----------------------+-----------+-----------+------+-----------+------+-------+-------------+-----------+
|          ID          | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----------------------+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1                    | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2                    | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3                    | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4                    | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| <UUID_THAT_YOU_NEED> | m1.nano   | 64        | 0    | 0         |      | 1     | 1.0         | True      |
| 5                    | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
| 84                   | m1.micro  | 128       | 0    | 0         |      | 1     | 1.0         | True      |
+----------------------+-----------+-----------+------+-----------+------+-------+-------------+-----------+


After you've edited the my-task.json file, you can run the benchmark task again. This time, let's also use the --verbose parameter, which makes Rally emit more detailed logging while it performs the benchmark:

$ rally --verbose task start --task=my-task.json


From another terminal (or ssh connection to the VM with Rally), you can watch the current task status with the task list command:

$ rally task list 
+------------------------------+----------------------------+-------------------------+--------+
|             uuid             |         created_at         |          status         | failed |
+------------------------------+----------------------------+-------------------------+--------+
|         <Task UUID>          | 2013-09-16 05:28:57.241456 | test_tool->benchmarking | False  |
+------------------------------+----------------------------+-------------------------+--------+


Once the benchmark task has finished, you can get detailed information about the results with the task detailed command. Note that Rally not only prints out aggregated information (min/max/avg running time for each task, as well as the fraction of successfully finished iterations), but also presents the load created by the separate atomic actions inside each benchmark scenario (in the case of the NovaServers.boot_and_delete_server scenario, the time taken by booting and by deleting servers separately).

$ rally task detailed <Task UUID>

================================================================================
Task <Task UUID> is finished. Failed: False
--------------------------------------------------------------------------------

test scenario NovaServers.boot_and_delete_server
args position 0
args values:
{u'args': {u'flavor_id': <Flavor UUID>,
           u'image_id': u'<Image UUID>'},
 u'config': {u'active_users': 1, u'times': 2}}
+--------------------+---------------+---------------+---------------+
|       action       |   max (sec)   |   avg (sec)   |   min (sec)   |
+--------------------+---------------+---------------+---------------+
|  nova.boot_server  | 9.22798299789 | 8.90022659302 | 8.57247018814 |
| nova.delete_server | 4.24928498268 | 3.26377093792 | 2.27825689316 |
+--------------------+---------------+---------------+---------------+

+---------------+---------------+---------------+---------------+-------------+
|   max (sec)   |   avg (sec)   |   min (sec)   | success/total | total times |
+---------------+---------------+---------------+---------------+-------------+
| 13.4775559902 | 12.1641695499 | 10.8507831097 |      1.0      |      2      |
+---------------+---------------+---------------+---------------+-------------+
--------------------------------------------------------------------------------

test scenario NovaServers.boot_and_delete_server
args position 1
args values:
{u'args': {u'flavor_id': <Flavor UUID>,
           u'image_id': u'<Image UUID>'},
 u'config': {u'active_users': 2, u'times': 4}}
+--------------------+---------------+---------------+---------------+
|       action       |   max (sec)   |   avg (sec)   |   min (sec)   |
+--------------------+---------------+---------------+---------------+
|  nova.boot_server  | 9.64801907539 | 8.30236756802 | 6.95671606064 |
| nova.delete_server | 4.46917510033 | 4.45528066158 | 4.44138622284 |
+--------------------+---------------+---------------+---------------+

+---------------+---------------+--------------+---------------+-------------+
|   max (sec)   |   avg (sec)   |  min (sec)   | success/total | total times |
+---------------+---------------+--------------+---------------+-------------+
| 14.0895900726 | 12.7578320503 | 11.426074028 |      0.5      |      4      |
+---------------+---------------+--------------+---------------+-------------+
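To see how the summary rows relate to the raw measurements: with only two iterations (the times: 2 run), the per-iteration total durations are exactly the min and max columns, and avg is their mean. A small illustrative sketch (not Rally's actual code) that reproduces those aggregates:

```python
# Per-iteration total durations (sec) for the first run (times=2, active_users=1),
# recovered from the min/max columns of the summary table above.
durations = [10.8507831097, 13.4775559902]
succeeded = 2  # both iterations finished successfully (success/total = 1.0)

print("max (sec):", max(durations))
print("avg (sec):", sum(durations) / len(durations))
print("min (sec):", min(durations))
print("success/total:", float(succeeded) / len(durations))
```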


Available Rally facilities

To run complex benchmark scenarios against more sophisticated OpenStack deployments, you should familiarize yourself with the other deploy engines, server providers and benchmark scenarios available in Rally.

List of available Deploy engines (including their description and usage examples): Deploy engines

List of available Server providers (including their description and usage examples): Server providers

List of available Benchmark scenarios (including their description and usage examples): Benchmark scenarios