Horizon/Testing/UI

During the Icehouse cycle we set up the foundations for a robust, UI-driven integration testing framework for Horizon. The tests follow the Page Object Pattern.

Related blueprint: https://blueprints.launchpad.net/horizon/+spec/selenium-integration-testing

Integration tests in Horizon

The integration tests currently live in the Horizon repository, see https://github.com/openstack/horizon/tree/master/openstack_dashboard/test/integration_tests which also contains instructions on how to run the tests. In the future, the tests themselves may move to the Tempest repository to live together with the other OpenStack integration tests (this will be discussed once we have a larger test suite that we know is stable).

Writing a test

If you're interested in contributing to the tests, thank you!

  • Please open a bug and tag it with "integration-tests" (https://bugs.launchpad.net/horizon/+bugs?field.tag=integration-tests).
  • Look at the existing tests (https://github.com/openstack/horizon/tree/master/openstack_dashboard/test/integration_tests) and familiarise yourself with the page object pattern used for the tests.
  • In general if you're not sure how to address something, have a look at how Tempest does things. We want to keep consistent with the way integration tests are written in the wider project.

Reviewing the tests

  • Gerrit query showing the currently open reviews that touch the integration test files: https://review.openstack.org/#/q/project:openstack/horizon+file:%255E.*/integration_tests/.*+status:open,n,z
  • Subscribe to bugs tagged with the integration-tests keyword (https://bugs.launchpad.net/horizon/+bugs?field.tag=integration-tests) to keep track of new and missing tests (*).

(*) How to subscribe: go to the main bug page (https://bugs.launchpad.net/horizon) and click on "Subscribe to bug mail" on the right-hand side, then "Receive mail for bugs affecting OpenStack Dashboard (Horizon) that" -> "are added or changed in any way" -> "Bugs must match this filter" -> "Tags" -> integration-tests. You can choose whether to receive mail for individual comments too, or only for status changes.

Page Object Pattern (Selected Approach)

Description

The Page Object Pattern offers a level of indirection: it segregates the tests from the UI so that changes to the UI model do not affect them.

You build up regions that become reusable components, starting from a base page. Their properties (e.g. selectors) can then be redefined or overridden in the actual pages (subclasses).

The page objects define the read-only and clickable elements of a page, which shields the tests from UI changes. For instance, if "Logout" used to be a link but suddenly becomes an option in a drop-down menu, nothing changes from the test's perspective: it still simply calls the "click_on_logout" action method (a minimal sketch follows the list below).

This approach has two main aspects:

  • The classes with the actual tests should be as readable as possible
  • The other parts of the testing framework should be as much about data as possible, so that if the CSS etc. changes you only need to change that one property. If the flow changes, only the action method should need to change.
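
As an illustration, here is a minimal, hypothetical sketch of the pattern using Selenium's Python bindings. The class names, selectors and URL are assumptions made for the example, not Horizon's actual page objects:

    from selenium import webdriver
    from selenium.webdriver.common.by import By


    class BasePage(object):
        """Shared behaviour for all page objects."""

        def __init__(self, driver):
            self.driver = driver

        @property
        def logout_element(self):
            # Only this locator needs updating if "Logout" moves,
            # e.g. from a plain link into a drop-down menu.
            return self.driver.find_element(By.CSS_SELECTOR, ".logout")

        def click_on_logout(self):
            self.logout_element.click()


    class LoginPage(BasePage):
        """A concrete page that redefines only what differs."""

        def login(self, username, password):
            self.driver.find_element(By.ID, "id_username").send_keys(username)
            self.driver.find_element(By.ID, "id_password").send_keys(password)
            self.driver.find_element(By.CSS_SELECTOR, "[type=submit]").click()


    def test_login_and_logout():
        # The test reads as a list of steps and knows nothing about selectors.
        driver = webdriver.Firefox()
        try:
            driver.get("http://localhost/horizon")  # assumed dashboard URL
            LoginPage(driver).login("demo", "secret")
            BasePage(driver).click_on_logout()
        finally:
            driver.quit()

If the logout control later moves, only the logout_element property changes; the test body stays identical.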


There is little that is Selenium-specific in the Pages, except for the properties.

There is little coupling between the tests and the pages. Writing the tests becomes like writing out a list of steps (by using the previously mentioned action methods).

One of the key points, particularly important for this kind of UI-driven testing, is to isolate the tests from what's behind them.

Another interesting aspect to explore: pytest improves on the concept of fixtures, which can be used to create utility functions that can easily be injected into the tests (e.g. maximizing a window). More information on fixtures from the pytest website.
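
For example, a fixture along these lines could hand every test a maximized browser window. This is only a sketch: the choice of Firefox, the URL and the expected title are assumptions, not project conventions:

    import pytest
    from selenium import webdriver


    @pytest.fixture
    def driver():
        # Create a browser, maximize it, and clean up after the test.
        d = webdriver.Firefox()
        d.maximize_window()
        yield d
        d.quit()


    def test_dashboard_title(driver):
        driver.get("http://localhost/horizon")  # assumed dashboard URL
        assert "Dashboard" in driver.title      # assumed page title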

Python

On the Python side, the recommendation is to use pytest as the test runner - it makes it easy to swap things in and out based on the data available - together with the mozwebqa plugin in the tests themselves.

Next steps

Focus on page/base.py and page/page.py in order to create a good foundation (there are examples available - see the example projects below - and they are probably abstract enough to be mostly reusable).

Implement the navigation first, because it will be on every page - implement it as a region. Regions can then be built manually.

Additionally, let's consider writing unit tests for the page objects themselves, in order to validate the selectors (this can help to catch bugs that would happen on only one kind of browser).
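
A sketch of what such a unit test might look like; the page class and locators here are hypothetical:

    from selenium.webdriver.common.by import By


    class LoginPage(object):
        # Hypothetical locators, declared as data on the page object.
        username_locator = (By.ID, "id_username")
        password_locator = (By.ID, "id_password")


    def test_locators_are_well_formed():
        # Catch malformed (strategy, value) pairs without starting a browser.
        for locator in (LoginPage.username_locator, LoginPage.password_locator):
            strategy, value = locator
            assert strategy in (By.ID, By.CSS_SELECTOR, By.XPATH)
            assert isinstance(value, str) and value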

Summary and links

Pros: Tests are readable and robust - they should not need to change when the codebase does

Cons: A new (small) framework to learn and maintain

More information:


Note: the Mozilla guidelines in the first link sound very sensible and I think we should stick to them as well.

Dependencies of interest:


Example projects that use Page Objects:

Test areas breakdown

Multiple people have come forward to help out with writing the tests (which is great, thank you!). To avoid accidentally duplicating work and wasting effort, it initially made sense to divide Horizon into different areas. However, as more people came forward to help, it became more difficult to figure out who was still involved and who had forgotten to remove their name before disappearing. At this point, rather than updating this wiki page, it is better to create new bugs with the appropriate tag, as described in the instructions on how to write a test.

Other Implementation ideas (Dismissed approaches)

Directly using Selenium

Point Selenium at a (probably devstack-based) OpenStack installation.

Advantages: Simplest solution.

Disadvantages: Contributors must know Python, and must know how to write robust UI-driven tests using Selenium.

Translator middleware

A translator framework that makes it possible to write tests in a high-level language (similar to BDD, http://en.wikipedia.org/wiki/Behavior-driven_development) and makes the tests more resilient to changes in the software implementation details. See also: JBehave (http://jbehave.org/).

Advantages: Simplifies the writing of test scenarios by experienced QA/QE/testing teams who may not know Python, by enabling them to write the scenarios in English.

Disadvantages: The load is shifted onto the development team, who must spend time keeping the translator layer up to date and enabling new keywords/scenarios.

Appendix: Run the tests with IE, Firefox or Chrome

  • Install virtualenv.
  • Clone the Horizon repository.
  • Create the virtual environment: virtualenv horizon/.venv
  • Activate it: .venv\Scripts\activate on Windows (. ./.venv/bin/activate on Linux).
  • pip install -r requirements.txt
  • pip install -r test-requirements.txt
  • Modify openstack_dashboard/test/integration_tests/horizon.conf to point at the remote devstack.
  • set INTEGRATION_TESTS=1 (use export instead of set on Linux).
  • set BROWSER=IE
  • set FIREFOX_PROFILE=/home/user/.mozilla/firefox/horizon.default (to reuse a working profile).
  • nosetests openstack_dashboard/test/integration_tests/tests
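
Once the environment variables are set, nosetests will collect anything under the tests directory. For orientation only, a new test module might look roughly like the sketch below; the helper base class and the title assertion are assumptions, so check the existing tests in the repository for the real conventions:

    # openstack_dashboard/test/integration_tests/tests/test_example.py
    # A hypothetical minimal test, for orientation only.
    from openstack_dashboard.test.integration_tests import helpers


    class TestDashboardLoads(helpers.BaseTestCase):  # base class name assumed
        def test_title(self):
            # self.driver is assumed to be provided by the helper base class.
            self.assertIn("Dashboard", self.driver.title)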