Test Standards

This page will be used to collect information about test standards.

Meta-documents

A survey of existing test systems was conducted in the Fall of 2018. The survey and results are here: Test Stack Survey


Here are some things we'd like to standardize in open source automated testing:

Terminology and Framework

Diagram

Below is a diagram for the high level CI loop:

The boxes represent different processes, hardware, or storage locations. Lines between boxes indicate APIs or control flow, and are labeled with letters. The intent of this is to provide a reference model for the test standards.

[Diagram: high level CI loop]

Power Control

See the document...

Test Definition

The test definition is the set of attributes, code, and data that are used to perform a test. A test definition standard would specify things like the following:

  • fields - the data elements of a test
  • file format (json, xml, etc.) - how a test is expressed and transported
  • meta-data - data describing the test
  • visualization control - information used for visualization of results
  • instructions - executable code to perform the test

See Test Definition Project for more information about a project to harmonize test definitions across multiple test systems.
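As a concrete illustration, here is a minimal sketch of a test definition expressed as a Python dictionary (which could equally be serialized as JSON or XML). The field names and values are hypothetical and are not taken from any particular test system.

  # Hypothetical test definition, showing the kinds of fields a standard
  # might specify.  Field names and values are illustrative only.
  import json

  test_definition = {
      "name": "example-iperf-throughput",        # test identifier
      "description": "Measure TCP throughput with iperf",
      "metadata": {                               # data describing the test
          "version": "1.0",
          "license": "GPL-2.0",
          "tags": ["network", "benchmark"],
      },
      "dependencies": {
          "kernel_config": ["CONFIG_NET"],        # required kernel options
          "environment": ["SERVER_IP"],           # required environment variables
      },
      "instructions": "./run-test.sh",            # executable code to perform the test
      "visualization": {                          # hints for visualizing results
          "chart_type": "line",
          "units": "Mbits/sec",
      },
  }

  # The same definition can be transported in a standard file format:
  print(json.dumps(test_definition, indent=2))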


Test dependencies

  • how to specify test dependencies
    • ex: assert_define ENV_VAR_NAME
    • ex: kernel_config
  • types of dependencies

See Test_Dependencies
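For illustration, here is a minimal sketch of how a test might check the kinds of dependencies listed above: a required environment variable, and a required kernel config option located via KCONFIG_PATH (mentioned at the end of this page). The helper names are hypothetical, loosely modeled on the assert_define / kernel_config examples above.

  # Hypothetical dependency checks.  Helper names are illustrative only.
  import gzip
  import os
  import sys

  def assert_define(var_name):
      """Skip the test if a required environment variable is not set."""
      if var_name not in os.environ:
          print("SKIP: required environment variable %s is not defined" % var_name)
          sys.exit(0)

  def assert_kernel_config(option):
      """Skip the test if a required kernel config option is not enabled."""
      # KCONFIG_PATH gives the location of the kernel configuration file.
      path = os.environ.get("KCONFIG_PATH", "/proc/config.gz")
      opener = gzip.open if path.endswith(".gz") else open
      with opener(path, "rt") as f:
          if not any(line.startswith(option + "=") for line in f):
              print("SKIP: kernel config option %s is not enabled" % option)
              sys.exit(0)

  assert_define("SERVER_IP")
  assert_kernel_config("CONFIG_NET")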

Test Execution API (E)

See Test Execution Notes for more details and miscellaneous notes.

  • test API
  • host/target abstraction
    • kernel installation / provisioning
    • file operations
    • console access
    • command execution
  • test retrieval, build, deployment
    • test execution:
      • ex: 'make test'
      • runtest.sh?
  • test phases
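If a program uses a Makefile to build and install the software under test, then it should provide the following Makefile targets to support testing of the software:

  "make test"

or

  "make check"

See https://www.gnu.org/software/make/manual/make.html#Goals (section "9.2 Arguments to Specify the Goals")

See also: https://www.gnu.org/software/make/manual/make.html#Standard-Targets

To make the host/target abstraction above more concrete, here is a minimal sketch of the kind of interface a test execution layer might present to tests. The class and method names are hypothetical and do not come from any existing framework.

  # Hypothetical host/target abstraction for a test execution layer.
  # Class and method names are illustrative only.
  import subprocess

  class Target:
      """Operations a test harness needs to perform on a board under test."""

      def install_kernel(self, image_path):          # provisioning
          raise NotImplementedError

      def put_file(self, local_path, remote_path):   # file operations
          raise NotImplementedError

      def get_file(self, remote_path, local_path):
          raise NotImplementedError

      def run_command(self, cmd):                    # command execution
          """Run a command on the target; return (exit code, output)."""
          raise NotImplementedError

      def get_console(self):                         # console access
          raise NotImplementedError

  class LocalTarget(Target):
      """Trivial implementation that runs everything on the host itself."""

      def run_command(self, cmd):
          proc = subprocess.run(cmd, shell=True, capture_output=True, text=True)
          return proc.returncode, proc.stdout + proc.stderr

  # A test written against this interface does not care whether the software
  # under test runs locally, over ssh, or behind a serial console:
  #   rc, output = target.run_command("make test")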

Build Artifacts

  • test package format
    • meta-data for each test
    • test results
    • baseline expected results for particular tests on particular platforms

Test package format

This is a package intended to be installed on a target (as opposed to the collection of test definition information that may be stored elsewhere in the test system).

Run Artifacts

  • logs
  • data files (audio, video)
  • monitor results (power log, trace log)
  • snapshots


Results Format

See Test Results Format Notes for details and miscellaneous notes.

The results format is the output from the test; it forms part of the interface between the test program and the test execution layer (or test harness).

The main thing that the format communicates is the list of testcases (or metrics, in the case of benchmarks) and the result of each testcase (pass, fail, etc.).

Standards

The Linux kernel's kselftest uses TAP (the Test Anything Protocol) as its preferred output format.
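As an illustration of the kind of information a results format carries, here is a small sketch that parses a few TAP-style lines into per-testcase results. The sample lines and the parser are simplified illustrations, not a complete TAP implementation.

  # Minimal, illustrative parser for TAP-style "ok" / "not ok" lines.
  import re

  sample_lines = [
      "TAP version 13",
      "1..3",
      "ok 1 size: get_size",
      "not ok 2 timers: posix_timers",
      "ok 3 breakpoints: step_after_suspend_test # SKIP",
  ]

  results = {}
  for line in sample_lines:
      m = re.match(r"(ok|not ok) (\d+) (.*)", line)
      if not m:
          continue                      # ignore non-result lines (plan, version, etc.)
      status, number, name = m.groups()
      if "# SKIP" in name:
          results[name] = "SKIP"
      else:
          results[name] = "PASS" if status == "ok" else "FAIL"

  for name, result in results.items():
      print(result, name)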

Pass Criteria

The pass criteria are a set of data that indicate to the test framework how to interpret the results from a test. They can indicate the following:

  • what tests can be skipped (this is more part of test execution and control)
  • what test results can be ignored (xfail)
  • min required pass counts, max allowed failures
  • thresholds for measurement results
    • requires a testcase id, a number, and an operator

The pass criteria allow things like expected failures to be separated from the test code itself, to handle situations where different sets of results are interpreted as success or failure depending on factors outside the test (for example, kernel version, kernel configuration, or available hardware).

For things like functional unit tests, a single failing result should cause the overall failure of a test suite. However, for system tests or benchmarks, it is often the case that some results must be interpreted in a context-sensitive manner, or that some set of testcases is ignored for expediency's sake.
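To make this more concrete, here is a minimal sketch of how a framework might apply pass criteria to a set of results. The criteria fields (xfail, max_allowed_failures, thresholds) and the data layout are hypothetical.

  # Hypothetical pass-criteria evaluation.  Field names are illustrative only.
  import operator

  # Results reported by the test (testcase or metric name -> value).
  results = {
      "login_test": "PASS",
      "audio_init": "FAIL",
      "tcp_throughput": 812.0,   # benchmark metric, in Mbits/sec
  }

  # Pass criteria, supplied separately from the test code.
  criteria = {
      "xfail": ["audio_init"],              # expected failures, ignored
      "max_allowed_failures": 0,
      "thresholds": [
          # (testcase id, operator, number)
          ("tcp_throughput", operator.ge, 750.0),
      ],
  }

  failures = [name for name, value in results.items()
              if value == "FAIL" and name not in criteria["xfail"]]

  threshold_failures = [tc for tc, op, limit in criteria["thresholds"]
                        if not op(results[tc], limit)]

  verdict = "PASS"
  if len(failures) > criteria["max_allowed_failures"] or threshold_failures:
      verdict = "FAIL"

  print(verdict, failures, threshold_failures)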

Miscellaneous (uncategorized)

  • environment variables used to create an SDK build environment for a board
  • environment variables used for controlling execution of a test
  • location of kernel configuration (used for dependency testing) KCONFIG_PATH (adopted by LTP)
  • default name of test program in a target package (run-test.sh?)
    • this should be part of the test definition