
Horizon/Testing/UI

During the Icehouse cycle we set up the foundations for a robust, UI-driven integration testing framework for Horizon. The tests follow the Page Object Pattern.

Related blueprint: https://blueprints.launchpad.net/horizon/+spec/selenium-integration-testing

Integration tests in Horizon

The integration tests currently live in the Horizon repository (see https://github.com/openstack/horizon/tree/master/openstack_dashboard/test/integration_tests, which also contains instructions on how to run them). In the future, the tests themselves may move to the Tempest repository to live together with the other OpenStack integration tests (this will be discussed once we have a larger test suite that we know is stable).

Writing a test

If you're interested in contributing to the tests, thank you!

  • Please open a bug and tag it with "integration-tests".
  • Look at the existing tests and familiarise yourself with the page object pattern used for the tests.
  • In general, if you're not sure how to address something, have a look at how Tempest does things. We want to stay consistent with the way integration tests are written in the wider project.

Reviewing the tests

To keep track of test-related changes, subscribe to bug mail for the "integration-tests" tag: go to the main bug page and click on "Subscribe to bug mail" on the right-hand side, then choose "Receive mail for bugs affecting OpenStack Dashboard (Horizon) that" -> "are added or changed in any way" -> "Bugs must match this filter" -> "Tags" -> integration-tests. You can choose whether to also receive mail for individual comments, or only for status changes.

Page Object Pattern (Selected Approach)

Description

The Page Object Pattern introduces a level of indirection that separates the tests from the UI, so that changes to the page structure do not affect them.

You build up regions that become reusable components, typically starting from a base page. Its properties (e.g. selectors) can then be redefined or overridden in the actual pages (subclasses).

The page objects define the readable and clickable elements of a page, and they shield the tests from the details. For instance, from the test's perspective, if "Logout" used to be a link but suddenly becomes an option in a drop-down menu, nothing changes: the test still simply calls the "click_on_logout" action method.
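
As a rough illustration (the class name and locator below are invented for this sketch, not taken from Horizon's code), a base page can keep its locators as plain data, expose the elements through properties, and offer action methods for the tests to call:

    from selenium.webdriver.common.by import By


    class BasePage(object):
        # Locators are plain data: if the markup changes, only this
        # tuple needs to be updated (subclasses can also override it).
        _logout_locator = (By.CSS_SELECTOR, 'a.logout')

        def __init__(self, driver):
            self.driver = driver

        @property
        def logout_link(self):
            return self.driver.find_element(*self._logout_locator)

        def click_on_logout(self):
            self.logout_link.click()

A subclass that renders "Logout" differently only needs to override _logout_locator; the tests keep calling click_on_logout.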

This approach has two main aspects:

  • The classes with the actual tests should be as readable as possible
  • The other parts of the testing framework should be as much about data as possible, so that if the CSS etc. changes you only need to change that one property. If the flow changes, only the action method should need to change.


There is little that is Selenium-specific in the Pages, except for the properties.

There is little coupling between the tests and the pages. Writing the tests becomes like writing out a list of steps (by using the previously mentioned action methods).
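
For example, a test might read as little more than a sequence of such steps (the page classes, fixture, and credentials here are hypothetical, not Horizon's real names):

    def test_user_can_log_out(login_page):
        # Each step is an action method on a page object; the page
        # objects are assumed to exist, as in the sketch above.
        home_page = login_page.login('demo', 'secret')
        home_page.click_on_logout()
        assert login_page.is_displayed()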

One of the key points, particularly important for this kind of UI-driven testing, is to isolate the tests from what lies behind them.

Another interesting aspect to explore: pytest improves on the concept of fixtures, which can be used to create utility functions that are easily injected into the tests (e.g. maximizing a window). More information on fixtures is available on the pytest website.
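
As a minimal sketch (assuming a recent pytest and Selenium; the fixture name is made up), a fixture could hand each test a maximized browser window and clean up afterwards:

    import pytest
    from selenium import webdriver


    @pytest.fixture
    def maximized_driver():
        # Hypothetical fixture: starts Firefox, maximizes the window,
        # and quits the browser once the test has finished.
        driver = webdriver.Firefox()
        driver.maximize_window()
        yield driver
        driver.quit()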

Python

On the Python side, the recommendation is to use pytest as the test runner (it makes it easy to swap things in and out based on the data available) and the mozwebqa plugin in the tests themselves.

Next steps

Focus on page/base.py and page/page.py in order to create a good foundation (there are examples available - see the example projects below - and they are probably abstract enough to be mostly reusable).

Implement the navigation first, because it will appear on every page, and implement it as a region. Other regions can then be built manually.
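
A navigation region might look roughly like this (hypothetical names; a region wraps a root element and keeps its own locators, so every page can embed it):

    from selenium.webdriver.common.by import By


    class NavigationRegion(object):
        _instances_link = (By.LINK_TEXT, 'Instances')

        def __init__(self, driver, root):
            self.driver = driver
            self.root = root  # the element containing the navigation menu

        def go_to_instances(self):
            # Search within the region's root rather than the whole page.
            self.root.find_element(*self._instances_link).click()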

Additionally, let's consider writing unit tests for the page objects themselves, in order to validate the selectors (this can help to catch bugs that would only happen in one kind of browser).
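
As one possible sketch (reusing the hypothetical BasePage from the earlier example), such a unit test could at least check, without starting a browser, that every declared locator uses a strategy Selenium actually knows about:

    from selenium.webdriver.common.by import By


    def test_locators_use_known_strategies():
        # Assumes the hypothetical BasePage from the sketch above is
        # importable; By's public attributes are the known strategies.
        strategies = {value for name, value in vars(By).items()
                      if not name.startswith('_')}
        for name, value in vars(BasePage).items():
            if name.endswith('_locator'):
                strategy, _selector = value
                assert strategy in strategies, name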

Summary and links

Pros: Tests are readable and robust - they should not need to change when the codebase does

Cons: A new (small) framework to learn and maintain

More information:


Note: the Mozilla guidelines in the first link sound very sensible and I think we should stick to them as well.

Dependencies of interest:


Example projects that use Page Objects:

Test areas breakdown

Multiple people have come forward to help write the tests (which is great, thank you!). To avoid accidentally duplicating work and wasting effort, it initially made sense to divide Horizon into different areas. However, as more people joined, it became harder to track who was still involved and who had forgotten to remove their name before disappearing. At this point, rather than updating this wiki page, it is better to create new bugs with the appropriate tag, as described in the instructions on how to write a test above.

Appendix: Running the tests with IE, Firefox, or Chrome

  • Install virtualenv
  • Clone the Horizon repository
  • Create a virtual environment: virtualenv horizon/.venv
  • Activate it: .venv\Scripts\activate on Windows (. ./.venv/bin/activate on Linux)
  • pip install -r requirements.txt
  • pip install -r test-requirements.txt
  • Modify openstack_dashboard/test/integration_tests/horizon.conf to point at the remote DevStack
  • set INTEGRATION_TESTS=1 (use export instead of set on Linux)
  • set BROWSER=IE
  • set FIREFOX_PROFILE=/home/user/.mozilla/firefox/horizon.default (to reuse a working profile)
  • nosetests openstack_dashboard/test/integration_tests/tests