Mistral/Testing

Different types of tests

On the Mistral project we have two separate test suites:

  • Unit tests - executed by Jenkins CI job in OpenStack gerrit (python-style checks and execution of all unit tests)
  • Integration tests - executed by Devstack Gate job in OpenStack gerrit (integration tests for Mistral after the OpenStack deployment with devstack)

Where we can find automated tests

Mistral:

Python-mistralclient:

How to execute tests manually

Almost all existing automated tests can be executed manually on the developer's desktop (except those which check OpenStack actions). To do this, clone the mistral repository (or python-mistralclient) and run the following shell commands:

git clone https://git.openstack.org/openstack/mistral.git
cd mistral
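
If you are working on the client instead, clone python-mistralclient the same way (assuming the repository follows the same naming pattern on git.openstack.org):

git clone https://git.openstack.org/openstack/python-mistralclient.git
cd python-mistralclient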

Unit tests

To run the unit test suite:

tox

To run the unit test suite against a specific Python version:

tox -e py27
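
The python-style checks mentioned earlier have their own tox environment; assuming the standard OpenStack tox.ini layout, they can be run locally with:

tox -e pep8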

To run tests from a specific test class (using a specific Python version):

tox -e py27 -- 'DataFlowEngineTest'

Integration tests

There are several test suites of integration tests in both repositories (an example of running one of them directly follows the lists below):

Mistral:

  • mistral/tests/functional/api/v1/test_mistral_basic.py - contains tests which check Mistral API v1 (workbooks, executions, tasks endpoints)
  • mistral/tests/functional/api/v1/test_workflow_execution.py - contains tests which check execution of standard scenarios, task results and dependencies in v1
  • mistral/tests/functional/api/v2/test_mistral_basic.py - contains tests which check Mistral API v2 (workbooks, workflows, executions, tasks, actions and cron triggers endpoints)
  • mistral/tests/functional/engine/actions - contains tests which check Mistral integration with OpenStack components (Nova, Glance, Keystone)


Python-mistralclient:

  • mistralclient/tests/functional/cli/ - contains test suites for v1 and v2 which check interaction with Mistral using CLI
  • mistralclient/tests/functional/client/ - contains test suites which check integration and interaction of Mistral client and API
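
For example, to run just one of these suites directly with nosetests (a sketch; it assumes the prerequisites described below are already installed):

   nosetests mistral/tests/functional/api/v2/test_mistral_basic.py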


To run the integration test suite:

  • in OpenStack mode (when auth in Mistral is enabled and Mistral integrates with OpenStack components):
   pip install git+http://git.openstack.org/openstack/tempest.git
   nosetests <mistral or mistralclient>/tests/functional
  • in non-OpenStack mode (see the sketch after this list):
   set auth_enable=false in mistral.conf
   restart the Mistral server
   execute: ./run_functional_tests
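
A minimal sketch of the non-OpenStack flow, assuming Mistral was installed from source with its configuration in /etc/mistral/mistral.conf and started via mistral/cmd/launch.py (adjust the paths and the restart step to your installation):

   # 1. disable authentication in Mistral's config file
   #    (auth_enable may live under a specific section, depending on the release)
   vi /etc/mistral/mistral.conf        # set: auth_enable = false
   # 2. restart the Mistral server so the new setting takes effect
   python mistral/cmd/launch.py --config-file /etc/mistral/mistral.conf
   # 3. run the functional test suite from the repository root
   ./run_functional_tests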

Test Plans

Discussion of the general direction of testing can be found here: https://etherpad.openstack.org/p/MistralTests

A more detailed test plan will be available later.

Load and Performance Testing

The Mistral server consists of three main components: api, engine and executor. They can all run in one process, or each component can run in its own process. In a production environment (an OpenStack cloud or not) the natural situation is that many users connect to the Mistral server with their Mistral clients, and here a few deployment variants are possible. The first variant is a single Mistral server with 1 api, 1 engine and 1 executor; it is not a good choice for a big environment and complex Mistral scenarios, but why not? Another variant is several instances of api servers, engines and executors. Mistral has to work in both situations, and we need to know what its limits are.


Rally is a tool that will help us do stress and performance testing of Mistral.

Load/performance test plan:

  • Prepare simple Rally scenarios which will execute the main Mistral actions: get the list of workbooks, create and delete a workbook and a workflow, create an execution and wait for its success.

Status: Done

  • Measure the time of one request (the time of getting the list of objects, of creating/deleting different objects, of the simplest workflow execution) to have a baseline value for the next experiments.

Status: Good progress

  • Make a series of measurements for all the scenarios mentioned above, increasing the number of requests (1, 10, 100, 1000, 10000 times for 1 tenant).

Status: Good progress

  • Repeat the steps from 3), but now with an increasing number of tenants and, accordingly, increasing concurrency.

Status: Not started

  • The next step is to measure workflow execution time while increasing the number of Mistral engines and executors, and to analyze how the time changes.

Status: Not started

  • Prepare a complex workflow and check Mistral performance under conditions close to reality.

Status: Not started

Results

Results will be here: https://etherpad.openstack.org/p/mistral-rally-testing-results

(if this format turns out not to be suitable, the results can be published in another way)

Rally gate for Mistral:

We now have a special gate in the 'mistral' repository that runs Mistral Rally scenarios against an OpenStack cloud deployed by DevStack with Rally and Mistral installed.

How To Benchmark Mistral With Rally

Since we have dedicated scenarios in Rally, it is possible to collect various metrics, for example how long it takes Mistral to create a simple workbook when 100 such requests arrive in parallel from different users. All you need to do is run the Mistral Rally scenarios with different parameters: number of users, concurrency and so on.

Step 1: Rally Installation

Clone the Rally repository:

   git clone https://git.openstack.org/openstack/rally.git

Install Rally:

  • system-wide:
   ./rally/install_rally.sh 
  • in a virtual environment:
   ./rally/install_rally.sh -v

Step 2: Rally deployment initialization

Rally needs to know the OpenStack credentials of the cloud (with Mistral preinstalled) in order to benchmark it. There are two ways to provide credentials to Rally.

Using local environment

Export auth information to the environment:

   export OS_USERNAME=<ADMIN_USER_NAME>
   export OS_TENANT_NAME=<ADMIN_TENANT>
   export OS_PASSWORD=<ADMIN_PASSWORD>
   export OS_AUTH_URL=<KEYSTONE_AUTH_URL>

Pass it to Rally:

   $ rally deployment create --name <deployment_name> --fromenv
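
For example, with purely illustrative values (these are hypothetical; substitute your own cloud's admin credentials and Keystone URL):

   export OS_USERNAME=admin
   export OS_TENANT_NAME=admin
   export OS_PASSWORD=secret
   export OS_AUTH_URL=http://<KEYSTONE_HOST>:5000/v2.0/
   rally deployment create --name mistral-cloud --fromenv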

Using deployment configuration file

Create an input file in JSON format:

   {
       "type": "ExistingCloud",
       "auth_url": "<KEYSTONE_AUTH_URL>",
       "admin": {
           "username": "<ADMIN_USER_NAME>",
           "password": "<ADMIN_PASSWORD>",
           "tenant_name": "<ADMIN_TENANT>"
       }
   }

Register this deployment in Rally:

   $ rally deployment create --filename=<file_name>.json --name=<deployment_name>
   +---------------------------+----------------------------+----------+------------------+
   |            uuid           |         created_at         |   name   |      status      |
   +---------------------------+----------------------------+----------+------------------+
   |     <Deployment UUID>     | 2014-02-15 22:00:28.270941 | existing | deploy->finished |
   +---------------------------+----------------------------+----------+------------------+
   Using deployment : <deployment UUID>

Note: all the benchmarking operations from now on are going to be performed on this deployment. To switch to another deployment, execute:

   $ rally use deployment --uuid=<another_deployment_UUID>
   Using deployment : <another_deployment_UUID>

After registering the OpenStack cloud, you need to make sure that everything is working properly, that there are no authentication problems, and that all needed services are registered (including Mistral, of course). To check this, execute the following command:

   $ rally deployment check
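
Rally also provides a command to list the registered deployments, which is handy when switching between clouds (assuming the Rally CLI of the version in use):

   $ rally deployment list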

Step 3: Benchmarking

Now everything is ready for load and performance testing. All available scenarios are placed in the rally/samples/tasks/scenarios folder, which contains a set of subfolders for specific components, including Mistral.

A short look at the Mistral scenarios

At the moment there are three Mistral Rally scenarios available, which allow testing basic Mistral operations: list-workbook, create-workbook and create-delete-workbook. Let's choose the create-delete-workbook scenario, which you can find in the mistral directory. It contains the following definition:

   ---
   MistralWorkbooks.create_workbook:
   -
     args:
       definition: rally-jobs/extra/mistral_wb.yaml
       do_delete: true
     runner:
       type: "constant"
       times: 50
       concurrency: 10
     context:
       users:
         tenants: 1
         users_per_tenant: 1


Here is the code of the MistralWorkbooks.create_workbook method:

   def create_workbook(self, definition, do_delete=False):
       """Scenario tests workbook creation and deletion.
       This scenario is a very useful tool to measure the
       "mistral workbook-create" and "mistral workbook-delete"
       commands performance.
       :param definition: string (yaml string) representation of given
                          file content (Mistral workbook definition)
       :param do_delete: if False then it allows checking performance
                         in "create only" mode.
       """
       wb = self._create_workbook(definition)
       if do_delete:
           self._delete_workbook(wb.name)

This benchmark scenario consists of a sequence of basic actions:

  • create workbook – the action sends a request to create a workbook with the given definition (it uses the definition whose path is provided in the args section)
  • delete workbook – the action sends a request to delete the workbook (execution of this step depends on the 'do_delete' flag)


This benchmark scenario will be run by 1 temporarily created user from 1 tenant, 50 times in total. At any given moment, only 10 requests will be running simultaneously. It is possible to change all these parameters or to use another type of load (see the example below). To learn more about the structure of a task file or about load types, please visit: https://wiki.openstack.org/wiki/Rally/Concepts
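
For example, to put a heavier constant load on Mistral, it is enough to change only the runner block of the task file (the numbers below are purely illustrative):

   runner:
     type: "constant"
     times: 500
     concurrency: 25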

Running the Task

To run this benchmark, execute:

   $ rally task start rally/samples/tasks/scenarios/mistral/create-delete-workbook.yaml
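
When the task finishes, Rally prints a summary of the measured durations for each iteration. An HTML report of the last task can also be generated; a sketch, assuming the rally task report command of the Rally version in use:

   $ rally task report --out=mistral_report.html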