
StarlingX/Test

Revision as of 23:28, 9 January 2020 by Ada.cabrales (talk | contribs) (Team Information)

StarlingX Test Sub-project

Team Information

  • Project Lead: TBD
  • Technical Lead: TBD
  • PyTest Developer Lead: Yang Liu <yang.liu@windriver.com>
  • Team members:

Team Objective / Priorities

  • Verification and validation of the StarlingX system - take a look at the test strategy
  • Consolidate an automated test suite using a unified framework
  • Reduce the amount of manual testing to zero

Documentation

Project calls

Bi-weekly meetings on Tuesdays at 9:00am PDT / 1600 UTC

Agenda and meeting minutes are in this etherpad

StoryBoard Tags

All StoryBoard stories created for this team should use the tag "stx.test" and the title prefix [Test]

Team Work Items

Sanity Information

Overview

  • 4 configurations are run:
    • AIO-SX (Simplex)
    • AIO-DX (Duplex)
    • Standard Local Storage (2+2)
    • Standard External Storage (2+2+2)
  • 2 environments are run:
    • Baremetal
    • Virtual
  • Execution is run with no proxy settings, with proxy settings, and with a local registry.
  • Execution is run on a variety of hardware.
  • Execution uses the Robot Framework suite developed by Intel and the Pytest framework developed by Wind River (WR).
  • Execution is split into Sanity Platform and Sanity OpenStack.
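The configuration/environment matrix above lends itself to parametrized test collection. As a minimal sketch (the marker names and test body are hypothetical, not taken from the actual sanity suites), stacking two `pytest.mark.parametrize` decorators makes one test cover every combination of the four configurations and two environments:

```python
import pytest

# The four sanity configurations and two environments listed above.
CONFIGURATIONS = [
    "AIO-SX",
    "AIO-DX",
    "Standard Local Storage (2+2)",
    "Standard External Storage (2+2+2)",
]
ENVIRONMENTS = ["baremetal", "virtual"]


@pytest.mark.parametrize("environment", ENVIRONMENTS)
@pytest.mark.parametrize("configuration", CONFIGURATIONS)
def test_sanity_placeholder(configuration, environment):
    # A real sanity test would exercise the given lab here; this
    # placeholder only illustrates the parametrization shape, which
    # yields 4 x 2 = 8 collected test cases.
    assert configuration in CONFIGURATIONS
    assert environment in ENVIRONMENTS
```

Stacked decorators keep the matrix declarative: adding a fifth configuration or a third environment is a one-line change.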

Tests Cycle

  • Each site, Intel and WR, runs two configurations for two weeks, then they switch.
  • The Baremetal environment will be executed by both Intel and WR.
  • The Virtual environment will be executed by Intel only.
  • Sanity will be executed with the Pytest framework, which is already in the public repo.
  • Execution during Week 1 & 2:
    • Intel:
      • AIO-SX
      • Standard External Storage (2+2+2)
    • WR:
      • AIO-DX
      • Standard Local Storage (2+2)
  • Execution during Week 3 & 4:
    • Intel:
      • AIO-DX
      • Standard Local Storage (2+2)
    • WR:
      • AIO-SX
      • Standard External Storage (2+2+2)

Frameworks


Launchpads

  • N/A

Notes

  • Pinging from external networks is not enabled yet; a VM can only be pinged from the node that hosts it, regardless of the configuration.
  • Pinging between VMs is also possible, but the logic to automate this is not ready yet.
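A reachability check of this kind could be automated with a plain `ping` wrapper. This is a hypothetical sketch (none of these names come from the actual suites), and per the note above it only succeeds when run from the node hosting the VM:

```python
import subprocess


def ping_command(vm_ip, count=1, timeout=2):
    """Build the ping invocation used to check VM reachability."""
    return ["ping", "-c", str(count), "-W", str(timeout), vm_ip]


def vm_reachable(vm_ip):
    """Return True when the VM answers a ping.

    Per the note above, this currently only works from the node
    hosting the VM, since external pings are not enabled yet.
    """
    result = subprocess.run(
        ping_command(vm_ip),
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0
```

Separating command construction from execution keeps the reachability logic testable without a live lab.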