Revision as of 08:56, 18 June 2014
Murano Automated Tests Description
This page describes the automated tests for the OpenStack Murano project: how to download and run the tests, how to find the root cause of failed tests, and a detailed description of each test that is executed per commit.
Murano Continuous Integration Service
The Murano project has a CI server that tests all commits to the Murano components and verifies that new code does not break existing functionality.
Murano CI uses the OpenStack QA cloud for its testing infrastructure.
Murano CI url: murano-ci.mirantis.com
There are two jobs for each repository: one of them runs on Ubuntu, the other on CentOS.
Here you can see several Jenkins jobs with different targets:
- Jobs 'murano-dashboard-integration-tests-*' verify each commit to the murano-dashboard repository on the different distributions.
- Jobs 'murano-engine-app-deployment-tests-*' verify each commit to the murano repository on the different distributions.
Other jobs build and test the Murano documentation and perform other useful work to support the Murano CI infrastructure.
Murano Automated Tests: UI Tests
The Murano project has a web user interface, and there is a test suite for the Murano Web UI. All UI tests are located in the murano-tests repository, which holds several test suites for Murano Web UI testing.
All automated tests for the Murano Web UI are written in Python using the Selenium library, which allows locating web elements by field captions and other information, with or without XPath.
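As a hedged sketch of the caption-based lookup idea (the helper name and XPath pattern below are illustrative, not the actual murano-tests API), a test can build an XPath from the visible label text so that raw element paths are never hard-coded:

```python
def locator_for_caption(caption):
    """Build an XPath that finds the input field following a visible caption.

    Hypothetical helper for illustration only; the real murano-tests
    suites may structure this differently.
    """
    return "//label[contains(text(), '%s')]/following::input[1]" % caption

# With a live Selenium driver this could be used as, for example:
#   driver.find_element_by_xpath(locator_for_caption('Environment Name'))
```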
How To Run
Prerequisites:
Install nose using either:
- easy_install nose
- pip install nose
This will install the nose libraries, as well as the nosetests script, which you can use to automatically discover and run tests.
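For reference, nose discovers modules, classes, and functions whose names look like tests; a minimal module it would collect (hypothetical file name test_example.py) looks like this:

```python
# test_example.py -- nose auto-discovers names that match its test
# pattern, so both the function and the class below would be collected
# and run by the nosetests script.
def test_addition():
    assert 1 + 1 == 2

class TestStrings(object):
    def test_upper(self):
        assert "murano".upper() == "MURANO"
```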
These external Python libraries are required for the Murano Web UI tests:
- testtools
- selenium
Download and run
Make sure that additional components are installed.
- Clone the murano-tests git repository.
- Go to the muranodashboard-tests directory, where the tests for the dashboard are stored.
- First, change the default settings in config/config_file.conf: set the appropriate URLs and credentials.
- If you don't have a remote server with Selenium installed and want to run the tests on your local machine, then change the following command in base.py:
self.driver = webdriver.Remote(
    command_executor=cfg.common.selenium_server,
    desired_capabilities=DesiredCapabilities.FIREFOX)
to:
self.driver = webdriver.Firefox()
- All tests are grouped into a few suites. To specify which tests or suites to run, pass the test/suite names on the command line:
- nosetests <test/suite_name>
You should see output similar to this:
..................................
Ran 34 tests in 1.440s
OK
There are also a number of command-line options that can be used to control test execution and generated output. For help with the many command-line options of nosetests, try:
- nosetests -h
Tests Structure
Functional tests for the Web UI are written in Python with the Selenium library. The Web UI tests perform complex integration testing of the REST API service, the REST API client, the orchestrator component, and the Murano dashboard component.
Tests are divided into groups called suites according to the functionality that they check.
Test suites:
- sanity_check.py contains tests that perform primary checks (can we log in to murano-dashboard, create an environment, mark an image, create and delete Murano services) as well as tests that act on Murano's metadata repository (compose a new service, modify an existing service, download/upload a service)
- deploy.py contains tests that deploy some basic services
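The suite layout above can be sketched with the stdlib unittest module. This is an illustrative skeleton, not the actual contents of sanity_check.py; the real tests drive the dashboard through Selenium, which is stubbed out here:

```python
import unittest

class SanityCheck(unittest.TestCase):
    """Primary checks grouped into one suite (hypothetical skeleton)."""

    def setUp(self):
        # The real suite would open a WebDriver session here.
        self.environments = []

    def test_create_environment(self):
        # The real test would click through the dashboard instead.
        self.environments.append("test-env")
        self.assertIn("test-env", self.environments)
```

Because nose discovers unittest-style classes, such a suite can be run either standalone or via nosetests sanity_check.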
Murano Automated Tests: Tempest Tests
All Murano services have Tempest-based automated tests, which verify the API interfaces and deployment scenarios.
Tempest tests for Murano are located in two different repositories:
- Basic Repository with full test suites for different Murano services
- Custom Tests with advanced tests for specific cases.
The following Python files contain the basic test suites for the different Murano components:
- test_murano_envs.py contains a test suite with actions on Murano environments (create, delete, get, etc.)
- test_murano_sessions.py contains a test suite with actions on Murano sessions (create, delete, get, etc.)
- test_murano_services.py contains a test suite with actions on Murano services (create, delete, get, etc.)
- test_murano_deploy.py contains a test suite that deploys some basic services
- test_murano_metadata.py contains a test suite with actions on Murano's metadata repository
To see which steps a test performs, read the scenario in the test's docstring. For example:
@attr(type='smoke')
def test_get_environment(self):
    """
    Get environment by id
    The test creates an environment, then tries to get the environment's
    info using the environment's id, and finally deletes this environment
    Target component: Murano

    Scenario:
    1. Send request to create environment.
    2. Send request to get environment.
    3. Send request to delete environment.
    """
    resp, env = self.create_environment('test')
    self.environments.append(env)
    resp, infa = self.get_environment_by_id(env['id'])
    assert resp['status'] == '200'
    assert infa['name'] == 'test'
    self.delete_environment(env['id'])
    self.environments.pop(self.environments.index(env))