MagnetoDB/QA/Test process

Test Process

Overview

When participating in community-style OpenStack projects, we have to adjust to the standards that were established there from the very beginning. Traditionally, such testing activities rely heavily on automation based on the Tempest framework and are therefore limited mostly to functional test coverage. There is often no need to gather statistics or to analyse the results in depth, as functional test results are self-evident and easy to understand.

On MagnetoDB we not only test the system from the functional perspective but also run a number of non-functional tests on a regular basis, and often enough the analysis of the results has to be performed manually.

Goals

After each run of a large number of time-consuming functional and non-functional tests, we become “proud owners” of tons of test logs and measured characteristics. As the volume of such data is overwhelming, the original idea was to automate:

  • Gathering and processing of test results
  • Comparing actual test results with the required ones

to:

  1. Mark properly working functionality and separate it from the erroneous one
  2. Assess test coverage
  3. Locate incorrectly implemented requirements
  4. Find the exact areas of component/system implementation that contain defects to fix
  5. etc.

Test process

To cope with all the difficulties mentioned above, first of all, it was decided to apply a Google-like approach to testing: make each developer test his/her own implemented functionality. Certainly, not every developer has a decent level of knowledge in testing. In view of this, a separate QA Lead was introduced into the team and made responsible for overall and feature-level test design, helping developers with test ideas, descriptions, methodologies, documentation, etc.

Secondly, we have to take into account the quite lengthy process of creating automated tests:

  • Some of them are ready to use and are applied to already implemented functionality
  • Some are run against functionality that is not yet complete and may have defects
  • And some just cannot be run properly, as their functionality is not completed yet or the appropriate system feature has not been finalised.

So we divide the tests into 3 folders (by category): Ready, In progress, Not implemented yet. The first category was made “voting” to allow a decision on the readiness of the build and thus influence further steps of the CI process. The other 2 groups (“In progress” and “Not implemented yet”) are “not voting”, but the results of their test runs are closely monitored to trace the system’s dynamics. Once a test script or the corresponding system functionality is completely implemented, the script (or the appropriate test scripts) is moved to the “Ready” category, becoming part of the stable set used for regression testing.
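
To illustrate the voting concept, the sketch below shows one possible way a CI job could treat the folders differently: only failures in the voting folder break the build, while the others are reported but do not affect the result. This is a minimal sketch assuming a unittest-style layout and the folder names used in our test repository (stable, in_progress, not_ready); it is not the actual MagnetoDB CI configuration.

  # ci_gate_sketch.py -- hypothetical, not the actual MagnetoDB gate job
  # Runs the three test folders separately; only the voting folder can fail the build.
  import sys
  import unittest

  # Folder name -> voting flag (assumed repository layout)
  CATEGORIES = {"stable": True, "in_progress": False, "not_ready": False}

  def run_category(folder):
      """Discover and run all tests under the given folder."""
      suite = unittest.defaultTestLoader.discover(start_dir=folder)
      result = unittest.TextTestRunner(verbosity=1).run(suite)
      return result.wasSuccessful()

  exit_code = 0
  for folder, voting in CATEGORIES.items():
      ok = run_category(folder)
      print("%s (%s): %s" % (folder,
                             "voting" if voting else "non-voting",
                             "PASS" if ok else "FAIL"))
      if voting and not ok:
          exit_code = 1  # only voting failures mark the build as not ready

  sys.exit(exit_code)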

The third aspect is creating a “feedback loop” for automated parsing of the freshly gathered and structured test results. All the results are put into the appropriate forms and visualized as HTML pages. A special parser mechanism compares actual test results with the expected ones, checks them against the requirements, calculates positive and negative test coverage, and measures the other characteristics listed in the Goals section above.
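
A minimal sketch of such a feedback step is given below. It assumes the gathered results have already been converted into a simple JSON file (results.json) mapping each test name to its actual status and to the status expected by the requirements; the file name, its structure, and the generated page are illustrative only, not the actual parser.

  # results_report_sketch.py -- hypothetical parser, not the actual MagnetoDB tool
  # Compares actual test results with the expected ones and writes an HTML summary.
  import json

  # Assumed input format: {"test_name": {"actual": "pass", "expected": "pass"}, ...}
  with open("results.json") as f:
      results = json.load(f)

  matched = [n for n, r in results.items() if r["actual"] == r["expected"]]
  mismatched = [n for n, r in results.items() if r["actual"] != r["expected"]]
  coverage = 100.0 * len(matched) / len(results) if results else 0.0

  rows = "".join(
      "<tr><td>%s</td><td>%s</td><td>%s</td></tr>"
      % (name, r["actual"], r["expected"])
      for name, r in sorted(results.items())
  )
  html = (
      "<html><body><h1>Test results</h1>"
      "<p>Matching expectations: %.1f%% (%d of %d)</p>"
      "<table border='1'><tr><th>Test</th><th>Actual</th><th>Expected</th></tr>"
      "%s</table></body></html>"
      % (coverage, len(matched), len(results), rows)
  )

  with open("report.html", "w") as f:
      f.write(html)
  print("Tests not matching expectations:", mismatched)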

Test Workflow

The workflow itself is similar to traditional ones. The software development process is organized according to Scrum, that is, we have a chain of sprints.

  1. Test planning and design (on all levels) are performed iteratively in each sprint by a dedicated QA Lead
  2. Functional test case design on the component and integration levels should be ready (if possible, according to the sprint plan and backlog) by the time the appropriate feature development is completed
  3. The QA engineer (in our approach, the dev engineer who authored the feature) starts creating the appropriate automated scenarios (using the already existing conceptual or detailed test case design) right after completing the feature implementation
  4. The QA engineer stores all the created scenarios in the test repository, dividing the tests into several folders (stable, in_progress, not_ready) that define the voting status of each test according to its current state. See the detailed description of this concept and its purpose in the “Organization of tests in CI” document
  5. The created automated test sets are run on each commit automatically by the CI system
  6. According to the test cycle and its goals, other types of tests (irregular, both manual and automated) are run
  7. Test run results are automatically gathered into log files for subsequent analysis by the development team. Notifications with the test results and links are sent to the responsible team members
  8. Test logs are automatically processed to fill in the test status forms (see “MagnetoDB Test Cases”)
  9. Based on the test status forms, test reports and statistics (see “MagnetoDB test statistics”) are generated automatically (a minimal sketch of this step is given after this list).
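
To make steps 8–9 more concrete, the sketch below aggregates per-category pass/fail counts from already parsed result files into a small statistics summary. The directory layout (results/<category>/*.txt), the one-line-per-test format, and the printed summary are assumptions for illustration; the real status forms and statistics are described in the “MagnetoDB Test Cases” and “MagnetoDB test statistics” documents.

  # test_statistics_sketch.py -- hypothetical, illustrates steps 8-9 of the workflow
  # Aggregates per-category pass/fail counts from parsed result files.
  import glob
  import os

  # Assumed layout: results/<category>/*.txt, one "test_name: PASS|FAIL" line per test
  CATEGORIES = ("stable", "in_progress", "not_ready")

  def read_results(path):
      """Yield (test_name, status) pairs from a simple per-run result file."""
      with open(path) as f:
          for line in f:
              if ":" in line:
                  name, status = line.rsplit(":", 1)
                  yield name.strip(), status.strip().upper()

  for category in CATEGORIES:
      passed = failed = 0
      for path in glob.glob(os.path.join("results", category, "*.txt")):
          for _, status in read_results(path):
              if status == "PASS":
                  passed += 1
              else:
                  failed += 1
      total = passed + failed
      rate = 100.0 * passed / total if total else 0.0
      print("%-15s total=%d passed=%d failed=%d pass_rate=%.1f%%"
            % (category, total, passed, failed, rate))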