Rally/HowTo

Revision as of 09:11, 16 February 2014

Usage demo

NOTE: Throughout this demo, we assume that you have a configured Rally installation and an already existing OpenStack deployment with Keystone available at <KEYSTONE_AUTH_URL>.


Step 1. Deployment initialization

First, you have to provide Rally with an OpenStack deployment it is going to benchmark. This is done through deployment configuration files. The actual deployment can be either created by Rally (see /doc/samples for configuration examples) or, as in our example, an already existing one. The configuration file (let's call it dummy_deployment.json) should contain the deployment strategy (in our case, the deployment will be performed by the so-called "DummyEngine", since the deployment is ready to use) and some specific parameters (for the DummyEngine, an endpoint with administrator permissions):

{
    "name": "DummyEngine",
    "endpoint": {
        "auth_url": <KEYSTONE_AUTH_URL>,
        "username": <ADMIN_USER_NAME>,
        "password": <ADMIN_PASSWORD>,
        "tenant_name": <ADMIN_TENANT>
    }
}
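If you prefer to generate this file rather than edit placeholders by hand, it can be written with a few lines of Python. This is only an illustrative sketch: the OS_* variable names follow the usual openrc convention and are an assumption about your environment; unset variables fall back to the literal placeholders above.

```python
# Sketch: build the DummyEngine deployment config from environment
# variables (as set by a sourced openrc file) instead of hand-editing
# the placeholders. The OS_* names are the conventional openrc ones.
import json
import os

config = {
    "name": "DummyEngine",
    "endpoint": {
        "auth_url": os.environ.get("OS_AUTH_URL", "<KEYSTONE_AUTH_URL>"),
        "username": os.environ.get("OS_USERNAME", "<ADMIN_USER_NAME>"),
        "password": os.environ.get("OS_PASSWORD", "<ADMIN_PASSWORD>"),
        "tenant_name": os.environ.get("OS_TENANT_NAME", "<ADMIN_TENANT>"),
    },
}

# Write the config where the `rally deployment create` command expects it.
with open("dummy_deployment.json", "w") as f:
    json.dump(config, f, indent=4)

print(json.dumps(config, indent=4))
```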


To register this deployment in Rally, use the deployment create command:

$ rally deployment create --filename=dummy_deployment.json --name=dummy
+---------------------------+----------------------------+-------+------------------+
|            uuid           |         created_at         |  name |      status      |
+---------------------------+----------------------------+-------+------------------+
|     <Deployment UUID>     | 2014-02-15 22:00:28.270941 | dummy | deploy->finished |
+---------------------------+----------------------------+-------+------------------+
Using deployment : <Deployment UUID>


Note the last line in the output: it says that the deployment you just created is now in use by Rally, which means that all benchmarking operations from now on will be performed on this deployment. If you want to switch to another deployment, execute the use deployment command:

$ rally use deployment --deploy_id=<Another deployment UUID>
Using deployment : <Another deployment UUID>


Finally, the deployment check command enables you to verify that your current deployment is healthy and ready to be benchmarked:

$ rally deployment check
+----------+-----------+-----------+
| services |    type   |   status  |
+----------+-----------+-----------+
|   nova   |  compute  | Available |
| cinderv2 |  volumev2 | Available |
|  novav3  | computev3 | Available |
|    s3    |     s3    | Available |
|  glance  |   image   | Available |
|  cinder  |   volume  | Available |
|   ec2    |    ec2    | Available |
| keystone |  identity | Available |
+----------+-----------+-----------+


Step 2. Benchmarking

Now that we have a working and registered deployment, we can start benchmarking it. Again, the sequence of benchmark scenarios to be launched by Rally should be specified in a benchmark task configuration file. There is already a set of sample benchmark tasks in doc/samples/tasks/ (assuming that you are in the Rally root directory). The natural first step is to try one of these samples, say, the one that boots and deletes multiple servers (doc/samples/tasks/nova/boot-and-delete.json). To start a benchmark task, run the task start command:

$ rally task start --task=doc/samples/tasks/nova/boot-and-delete.json
+--------------------------------------+----------------------------+--------+--------+
|                 uuid                 |         created_at         | status | failed |
+--------------------------------------+----------------------------+--------+--------+
| 269297e1-1e25-4e08-98f4-8018d6df9adb | 2014-02-16 08:52:16.722415 |  init  | False  |
+--------------------------------------+----------------------------+--------+--------+
2014-02-16 12:52:28.576 31995 ERROR glanceclient.common.http [-] Request returned failure status.
2014-02-16 12:52:39.524 31995 ERROR rally.benchmark.engine [-] Scenario (0, NovaServers.boot_and_delete_server) input arguments validation error: Image with id '73257560-c59b-4275-a1ec-ab140e5b9979' not found

================================================================================
Task 269297e1-1e25-4e08-98f4-8018d6df9adb is finished. Failed: True
--------------------------------------------------------------------------------
...


This attempt, however, will most likely fail with an input arguments validation error (due to a non-existing image id). The scenario that boots a server has to use a concrete image available in your OpenStack deployment, and image ids differ from one deployment to another. That's why you should first make a copy of the sample benchmark task:

cp doc/samples/tasks/nova/boot-and-delete.json my-task.json


and then edit it with the resource uuids from your OpenStack installation:

{
  "NovaServers.boot_and_delete_server": [
    {"args": {"flavor_id": <NOVA_FLAVOR_ID>, "image_id": <GLANCE_IMAGE_UUID>},
     "config": {"times": 2, "active_users": 1}},
    {"args": {"flavor_id": <NOVA_FLAVOR_ID>, "image_id": <GLANCE_IMAGE_UUID>},
     "config": {"times": 4, "active_users": 2}}
  ]
}
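Since the unquoted placeholders above make the file invalid JSON until they are replaced, one option is to generate my-task.json from a short script instead. The sketch below is illustrative; the flavor_id and image_uuid values are stand-in placeholders that you must replace with real ids from your own cloud (obtained as described next).

```python
# Sketch: generate my-task.json programmatically so the flavor id and
# image uuid come from variables rather than manual editing.
import json

flavor_id = "<NOVA_FLAVOR_ID>"      # placeholder; e.g. 1 (m1.tiny) from `nova flavor-list`
image_uuid = "<GLANCE_IMAGE_UUID>"  # placeholder; take a real uuid from `glance image-list`

task = {
    "NovaServers.boot_and_delete_server": [
        {"args": {"flavor_id": flavor_id, "image_id": image_uuid},
         "config": {"times": 2, "active_users": 1}},
        {"args": {"flavor_id": flavor_id, "image_id": image_uuid},
         "config": {"times": 4, "active_users": 2}},
    ]
}

with open("my-task.json", "w") as f:
    json.dump(task, f, indent=2)
```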


To obtain a proper image_id and flavor_id, you can use the standard Python clients. First, source a proper openrc file for your cloud; the following command should work:

 $ . ~/.rally/openrc


Now let's get a proper image uuid:

$ glance image-list 
+--------------------------------------+---------------------------------+-------------+------------------+----------+
| ID                                   | Name                            | Disk Format | Container Format | Size     |
+--------------------------------------+---------------------------------+-------------+------------------+----------+
| <UUID_THAT_YOU_NEED>                 | cirros-0.3.1-x86_64-uec         | ami         | ami              | 25165824 |
| b420cefb-eae6-4738-9ee7-6ad8d36b125d | cirros-0.3.1-x86_64-uec-kernel  | aki         | aki              | 4955792  |
| 8b217e33-4557-4826-aa1a-983149a27ed7 | cirros-0.3.1-x86_64-uec-ramdisk | ari         | ari              | 3714968  |
+--------------------------------------+---------------------------------+-------------+------------------+----------+


and a proper flavor id:

$ nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 42 | m1.nano   | 64        | 0    | 0         |      | 1     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
| 84 | m1.micro  | 128       | 0    | 0         |      | 1     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
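Besides the Python clients, the bordered tables that the CLIs print can also be parsed directly in a script. The parse_cli_table helper below is hypothetical (it is not part of Rally or the OpenStack clients), and it is demonstrated against a shortened copy of the flavor table above rather than live CLI output.

```python
# Sketch: a tiny parser for the +---+ bordered ASCII tables printed by
# the nova/glance CLIs, so ids can be extracted in a script.

def parse_cli_table(text):
    """Return a list of dicts, one per data row of a bordered CLI table."""
    # Data and header rows start with '|'; border rows start with '+'.
    lines = [l for l in text.strip().splitlines() if l.startswith("|")]
    rows = [[cell.strip() for cell in l.strip("|").split("|")] for l in lines]
    header, body = rows[0], rows[1:]
    return [dict(zip(header, row)) for row in body]

# Shortened copy of the `nova flavor-list` output shown above.
flavor_table = """
+----+----------+-----------+
| ID | Name     | Memory_MB |
+----+----------+-----------+
| 1  | m1.tiny  | 512       |
| 2  | m1.small | 2048      |
+----+----------+-----------+
"""

flavors = parse_cli_table(flavor_table)
tiny_id = next(f["ID"] for f in flavors if f["Name"] == "m1.tiny")
print(tiny_id)  # -> 1
```

In practice you would feed the helper the captured stdout of `nova flavor-list` or `glance image-list` instead of a hard-coded string.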

After you've edited the my-task.json file, you can run this benchmark task again. This time, let's also pass the --verbose flag, which makes Rally emit more detailed logging while it performs the benchmark:

$ rally --verbose task start --task=my-task.json


Using another terminal (or another ssh connection to the VM running Rally), you can watch the current task status with the task list command:

$ rally task list
+------------------------------+----------------------------+-------------------------+--------+
|             uuid             |         created_at         |          status         | failed |
+------------------------------+----------------------------+-------------------------+--------+
|         <Task UUID>          | 2013-09-16 05:28:57.241456 | test_tool->benchmarking | False  |
+------------------------------+----------------------------+-------------------------+--------+


To get detailed results for the task with uuid 83d9e08c-4f2b-4c1d-9c83-f36bcc6b5a68, run:

   $ rally task detailed 83d9e08c-4f2b-4c1d-9c83-f36bcc6b5a68
   ================================================================================
   Task 83d9e08c-4f2b-4c1d-9c83-f36bcc6b5a68 is finished.
   --------------------------------------------------------------------------------
   test scenario NovaServers.boot_and_delete_server
   args position 0
   args values:
   {u'args': {u'flavor_id': 2,
              u'image_id': u'0d7cfe07-f684-4afa-813d-ca2611373c59'},
    u'concurrent': 1,
    u'times': 2}
   +---------------+---------------+---------------+-------+
   |      max      |      avg      |      min      | ratio |
   +---------------+---------------+---------------+-------+
   | 13.4224121571 | 13.2850991488 | 13.1477861404 |  1.0  |
   +---------------+---------------+---------------+-------+
   --------------------------------------------------------------------------------
   test scenario NovaServers.boot_and_delete_server
   args position 1
   args values:
   {u'args': {u'flavor_id': 2,
            u'image_id': u'0d7cfe07-f684-4afa-813d-ca2611373c59'},
    u'concurrent': 2,
    u'times': 6}
   +--------------+---------------+---------------+-------+
   |     max      |      avg      |      min      | ratio |
   +--------------+---------------+---------------+-------+
   | 19.802423954 | 16.9980401595 | 16.3908159733 |  1.0  |
   +--------------+---------------+---------------+-------+
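For reference, the max/avg/min figures in these tables are plain statistics over the per-iteration scenario durations. A minimal sketch, assuming the first run's two iterations (times = 2) took exactly the min and max durations shown in its table:

```python
# Sketch: reproduce the max/avg/min summary Rally prints from raw
# per-iteration durations (seconds). The two values are taken from the
# first results table above, assuming they are the run's two samples.
durations = [13.1477861404, 13.4224121571]

stats = {
    "max": max(durations),
    "avg": sum(durations) / len(durations),
    "min": min(durations),
}
print("max=%.10f avg=%.10f min=%.10f" % (stats["max"], stats["avg"], stats["min"]))
```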

Available Rally facilities

List of available Deploy engines (including their description and usage examples): Deploy engines

List of available Server providers (including their description and usage examples): Server providers

List of available Benchmark scenarios (including their description and usage examples): Benchmark scenarios