Mistral/Testing

= Different types of tests =

On the Mistral project we have two separate test suites:
 * Unit tests - executed by Jenkins CI job in OpenStack gerrit (python-style checks and execution of all unit tests)
 * Integration tests - executed by Devstack Gate job in OpenStack gerrit (integration tests for Mistral after the OpenStack deployment with devstack)

= Where we can find automated tests =

Mistral:
 * Unit tests can be found here: https://github.com/stackforge/mistral/tree/master/mistral/tests/unit
 * Integration tests can be found here: https://github.com/stackforge/mistral/tree/master/mistral/tests/functional

Python-mistralclient:
 * Unit tests can be found here: https://github.com/stackforge/python-mistralclient/tree/master/mistralclient/tests/unit
 * Integration tests can be found here: https://github.com/stackforge/python-mistralclient/tree/master/mistralclient/tests/functional

= How to execute tests manually =

Almost all existing automated tests can be executed manually on the developer's desktop (except those which check OpenStack actions). To do this, clone the mistral repository (or python-mistralclient) and run the following shell commands:

 git clone https://git.openstack.org/openstack/mistral.git
 cd mistral

== Unit tests ==

To run the unit test suite:

 tox

To run the unit test suite against a specific Python version:

 tox -e py27

To run tests from a specific test class (using a specific Python version):

 tox -e py27 -- 'DataFlowEngineTest'

== Integration tests ==

There are several integration test suites in both repositories:

Mistral:
 * mistral/tests/functional/api/v1/test_mistral_basic.py - contains tests which check Mistral API v1 (workbooks, executions, tasks endpoints)
 * mistral/tests/functional/api/v1/test_workflow_execution.py - contains tests which check execution of standard scenarios, task results and dependencies in v1
 * mistral/tests/functional/api/v2/test_mistral_basic.py - contains tests which check Mistral API v2 (workbooks, workflows, executions, tasks, actions and cron triggers endpoints)
 * mistral/tests/functional/engine/actions - contains tests which check Mistral integration with OpenStack components (Nova, Glance, Keystone)

Python-mistralclient:
 * mistralclient/tests/functional/cli/ - contains test suites for v1 and v2 which check interaction with Mistral using CLI
 * mistralclient/tests/functional/client/ - contains test suites which check integration and interaction of Mistral client and API

To run the integration test suite:
 * in OpenStack mode (when auth in Mistral is enabled and Mistral integrates with OpenStack components), install Tempest and run the tests:

  pip install git+http://git.openstack.org/openstack/tempest.git
  nosetests /tests/functional

 * in Non-OpenStack mode, set auth_enable=false in mistral.conf, restart the Mistral server, and execute:

  ./run_functional_tests

= Test Plans =

Discussion of the general direction of testing can be found here: https://etherpad.openstack.org/p/MistralTests

A more detailed test plan will be available later.

= Load and Performance Testing =

The Mistral server consists of three main components: api, engine and executor. They can all run in one process, or each component can run in a process of its own. In a production environment (an OpenStack cloud or not), the natural situation is that many users connect to the Mistral server with their Mistral clients, and a few deployment variants are possible. The first variant is a single Mistral server with 1 api, 1 engine and 1 executor; this is not a good choice for a big environment and complex Mistral scenarios, but it is still a valid setup. Another variant is several instances of api servers, engines and executors. Mistral has to work in both situations, and we need to know its limits.

Rally is a tool that will help us perform stress and performance testing of Mistral.

Load/performance test plan:
 * (Done) Prepare simple Rally scenarios which execute the main Mistral actions: get the list of workbooks; create and delete a workbook and a workflow; create an execution and wait for its success.
 * (Good progress) Measure the time of one request (the time to get the list of objects, to create/delete different objects, and to run the simplest workflow execution) to have an initial value for the next experiments.
 * (Good progress) Make a series of measurements for all the scenarios mentioned above, increasing the number of requests (1, 10, 100, 1000, 10000 times for 1 tenant).
 * (Not started) Make the same steps as in the previous item, but now increasing the number of tenants and using concurrency accordingly.
 * (Not started) Measure the workflow execution time while increasing the number of Mistral engines and executors, and analyze how the time changes.
 * (Not started) Prepare a complex workflow and check Mistral performance in conditions close to reality.
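The measurement step in the plan above can be sketched in a few lines of Python. This is a minimal illustration, not part of the Mistral or Rally tooling: `send_request` is a hypothetical stub standing in for a single Mistral API call, and the sketch simply times N sequential invocations and reports the mean and maximum duration.

```python
import time
import statistics


def send_request():
    """Hypothetical stub for one Mistral API call (e.g. list workbooks)."""
    time.sleep(0.001)  # simulate network/server latency


def measure(n):
    """Time n sequential requests; return (mean, max) duration in seconds."""
    durations = []
    for _ in range(n):
        start = time.perf_counter()
        send_request()
        durations.append(time.perf_counter() - start)
    return statistics.mean(durations), max(durations)


if __name__ == "__main__":
    mean_s, max_s = measure(10)
    print("mean: %.4fs, max: %.4fs" % (mean_s, max_s))
```

The numbers obtained this way give the baseline ("initial value") against which the later multi-request and multi-tenant experiments can be compared.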

== Results ==
Results will be here: https://etherpad.openstack.org/p/mistral-rally-testing-results

(if this format is not suitable, the results can be published in another way)

== Rally gate for Mistral ==

There is now a special gate in the 'mistral' repository that runs Mistral Rally scenarios against an OpenStack deployed by DevStack with Rally and Mistral installed.

= How To Benchmark Mistral With Rally =

Since we have special scenarios in Rally, it is possible to collect different metrics, for example: how long does it take Mistral to create a simple workbook when there are 100 such parallel requests from different users? All you need to do is run the Mistral Rally scenarios with different parameters: number of users, concurrency and so on.

== Step 1: Rally installation ==

Clone the Rally repository:

 git clone https://github.com/stackforge/rally.git

Install Rally:
 * in the system:

  ./rally/install_rally.sh

 * in a virtual environment:

  ./rally/install_rally.sh -v

== Step 2: Rally deployment initialization ==

Rally needs to know the OpenStack credentials of the cloud (with Mistral preinstalled) in order to benchmark it. There are two ways to provide credentials to Rally.

=== Using local environment ===

Export auth information to the environment:

 export OS_USERNAME=<username>
 export OS_TENANT_NAME=<tenant_name>
 export OS_PASSWORD=<password>
 export OS_AUTH_URL=<auth_url>

Pass it to Rally:

 $ rally deployment create --name <deployment_name> --fromenv

=== Using deployment configuration file ===

Create an input file in JSON format:

 {
     "type": "ExistingCloud",
     "auth_url": "<auth_url>",
     "admin": {
         "username": "<username>",
         "password": "<password>",
         "tenant_name": "<tenant_name>"
     }
 }
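If you prefer to generate this file from a script, here is a minimal Python sketch. The credential values are hypothetical placeholders; substitute the ones for your own cloud.

```python
import json

# Hypothetical placeholder credentials -- substitute your cloud's values.
deployment = {
    "type": "ExistingCloud",
    "auth_url": "http://keystone.example.com:5000/v2.0",
    "admin": {
        "username": "admin",
        "password": "secret",
        "tenant_name": "demo",
    },
}

# Write the Rally deployment input file.
with open("existing.json", "w") as f:
    json.dump(deployment, f, indent=4)
```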

Register this deployment in Rally:

 $ rally deployment create --filename=<file_name>.json --name=<deployment_name>
 +--------------------+----------------------------+----------+------------------+
 |        uuid        |         created_at         |   name   |      status      |
 +--------------------+----------------------------+----------+------------------+
 | <deployment_uuid>  | 2014-02-15 22:00:28.270941 | existing | deploy->finished |
 +--------------------+----------------------------+----------+------------------+
 Using deployment: <deployment_uuid>

Note: all the benchmarking operations from now on will be performed on this deployment. To switch to another deployment, execute:

 $ rally use deployment --uuid=<deployment_uuid>
 Using deployment: <deployment_uuid>

After registering the OpenStack cloud, you need to make sure that everything is working properly: there are no authentication problems and all needed services are registered (including Mistral, of course). To check it, execute:

 $ rally deployment check

== Step 3: Benchmarking ==

Now everything is ready for load and performance testing. All available scenarios are placed in the rally/samples/tasks/scenarios folder, which contains a set of subfolders for specific components, including Mistral.

=== A short look at the Mistral scenarios ===

At the moment there are three Mistral Rally scenarios available, which allow testing basic Mistral operations: list-workbook, create-workbook and create-delete-workbook. Let's choose the create-delete-workbook scenario, which you can find in the mistral directory. It contains the following definition:

 ---
   MistralWorkbooks.create_workbook:
     -
       args:
         definition: rally-jobs/extra/mistral_wb.yaml
         do_delete: true
       runner:
         type: "constant"
         times: 50
         concurrency: 10
       context:
         users:
           tenants: 1
           users_per_tenant: 1

Here is the code of the MistralWorkbooks.create_workbook method:

 def create_workbook(self, definition, do_delete=False):
     """Scenario tests workbook creation and deletion.

     This scenario is a very useful tool to measure the
     "mistral workbook-create" and "mistral workbook-delete"
     commands performance.

     :param definition: string (yaml string) representation of given
                        file content (Mistral workbook definition)
     :param do_delete: if False, it allows checking performance
                       in "create only" mode.
     """
     wb = self._create_workbook(definition)

     if do_delete:
         self._delete_workbook(wb.name)

This benchmark scenario consists of a sequence of basic actions:
 * create workbook – the action sends a request to create a workbook with the given definition (it uses the definition whose path is provided in the args section)
 * delete workbook – the action sends a request to delete the workbook (execution of this step depends on the 'do_delete' flag)

This benchmark scenario will be run by 1 temporarily created user from 1 tenant, 50 times in total. At any given moment, only 10 requests will be running simultaneously. It is possible to change all these parameters or to use another type of load. To learn more about the structure of the task file or the load types, please visit: https://wiki.openstack.org/wiki/Rally/Concepts
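The semantics of the "constant" runner (times: 50, concurrency: 10) can be illustrated with a small Python sketch that is independent of Rally: a stub iteration is executed 50 times with at most 10 running at once, which is essentially what the runner does with the scenario method. The `scenario_iteration` function here is a hypothetical stand-in for one create/delete-workbook iteration.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

TIMES = 50        # total number of scenario iterations ("times")
CONCURRENCY = 10  # max iterations in flight at once ("concurrency")

_lock = threading.Lock()
in_flight = 0
max_in_flight = 0


def scenario_iteration(_):
    """Hypothetical stand-in for one create/delete-workbook iteration."""
    global in_flight, max_in_flight
    with _lock:
        in_flight += 1
        max_in_flight = max(max_in_flight, in_flight)
    # ... the real scenario would call the Mistral API here ...
    with _lock:
        in_flight -= 1


with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    list(pool.map(scenario_iteration, range(TIMES)))

print("iterations:", TIMES, "peak concurrency capped at", CONCURRENCY)
```

Raising `times` increases the total load, while raising `concurrency` increases the number of simultaneous requests the server must handle.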

== Running the task ==

To run this benchmark, execute:

 $ rally task start rally/samples/tasks/scenarios/mistral/create-delete-workbook.yaml