Test Standards

This page will be used to collect information about test standards.

= Meta-documents =
 * https://tools.ietf.org/html/rfc2119 - IETF MUST, SHALL, MAY, etc. wording standards

A survey of existing test systems was conducted in the Fall of 2018. The survey and results are here: Test Stack Survey

Here are some things we'd like to standardize in open source automated testing:

= Terminology and Framework =
 * Test nomenclature - See the Test Glossary
 * CI loop diagram

Diagram
Below is a diagram of the high-level CI loop:

The boxes represent different processes, hardware, or storage locations. Lines between boxes indicate APIs or control flow, and are labeled with letters. The intent of this is to provide a reference model for the test standards.



= Power Control =
See the document...

= Test Definition =
The test definition is the set of attributes, code, and data that are used to perform a test. A test definition standard would specify things like the following:


 * fields - the data elements of a test
 * file format (json, xml, etc.) - how a test is expressed and transported
 * meta-data - data describing the test
 * visualization control - information used for visualization of results
 * instructions - executable code to perform the test

See Test Definition Project for more information about a project to harmonize test definitions across multiple test systems.

Test dependencies

 * how to specify test dependencies
  * ex: assert_define ENV_VAR_NAME
  * ex: kernel_config
 * types of dependencies

See Test_Dependencies
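As a sketch of the two examples above, here is what an assert_define-style check and a kernel-config dependency check could look like. The Python function names mirror the examples but are hypothetical, not an existing API; the use of KCONFIG_PATH follows the environment variable mentioned later on this page:

```python
import os

def assert_define(var_name):
    # Abort (exit non-zero) if a required environment variable is not
    # defined -- modeled on the 'assert_define ENV_VAR_NAME' example.
    if var_name not in os.environ:
        raise SystemExit(f"SKIP: required variable {var_name} is not defined")
    return os.environ[var_name]

def kernel_config_has(option, config_path=None):
    # Return True if the kernel configuration enables the given option.
    # KCONFIG_PATH is the (LTP-adopted) location of the kernel config.
    path = config_path or os.environ.get("KCONFIG_PATH")
    if path is None or not os.path.exists(path):
        return False
    with open(path) as f:
        return any(line.strip() == f"{option}=y" for line in f)
```

A test runner could call these during a pre-check phase and report the test as skipped, rather than failed, when a dependency is missing.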

= Test Execution API (E) =
 * test API
 * host/target abstraction
 * kernel installation
 * file operations
 * console access
 * command execution
 * test retrieval, build, deployment
 * test execution
  * ex: 'make test'
 * test phases
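The retrieval/build/deployment/execution bullets above can be sketched as discrete phases a test execution API might expose. This is a minimal sketch under assumptions: the class and method names are invented, and each phase is stubbed with a placeholder command ('make test' is the execution example given above):

```python
import subprocess

class TestRun:
    # Hypothetical wrapper modeling the test phases listed above.
    def __init__(self, commands=None):
        # Map each phase to a shell command; 'true' is a stand-in.
        self.commands = commands or {
            "retrieve": "true",   # e.g. git clone of the test source
            "build": "true",      # e.g. make
            "deploy": "true",     # e.g. copy the test package to the target
            "execute": "true",    # e.g. make test
        }
        self.results = {}

    def run_phase(self, phase):
        proc = subprocess.run(self.commands[phase], shell=True)
        self.results[phase] = proc.returncode
        return proc.returncode == 0

    def run_all(self):
        # Stop at the first failing phase; later phases depend on it.
        for phase in self.commands:
            if not self.run_phase(phase):
                return False
        return True
```

A standard execution API would also need the host/target abstraction from the list above, so that "execute" runs on the target while "build" may run on the host.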

= Build Artifacts =
 * test package format
 * meta-data for each test
 * test results
 * baseline expected results for particular tests on particular platforms

Test package format
This is a package intended to be installed on a target (as opposed to the collection of test definition information that may be stored elsewhere in the test system).

= Run Artifacts =
 * logs
 * data files (audio, video)
 * monitor results (power log, trace log)
 * snapshots

Results Format

 * test log output format
 * counts
 * subtest results
 * Candidate formats:
  * TAP (TestAnythingProtocol)
  * SubUnit
  * JUnit

One aspect of the result format is the result or status code for individual test cases or the test itself. See Test Result Codes and comparison of TAP, SubUnit and JUnit output formats.
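To make the candidate formats concrete, here is a small producer for TAP, the first format in the list. The line grammar ("TAP version 13", the "1..N" plan line, "ok"/"not ok" with a test number and description, and a "# SKIP" directive) follows the published TAP version 13 specification; the function itself is just a sketch:

```python
def emit_tap(results):
    # results: list of (passed, description, directive) tuples.
    lines = ["TAP version 13", f"1..{len(results)}"]
    for num, (passed, desc, directive) in enumerate(results, start=1):
        status = "ok" if passed else "not ok"
        line = f"{status} {num} - {desc}"
        if directive:
            # e.g. "SKIP <reason>" or "TODO <reason>"
            line += f" # {directive}"
        lines.append(line)
    return "\n".join(lines)

print(emit_tap([
    (True, "boot to shell prompt", ""),
    (False, "mount NFS filesystem", ""),
    (True, "audio playback", "SKIP no sound hardware"),
]))
```

SubUnit and JUnit carry roughly the same per-testcase information (status, name, optional reason) in binary-stream and XML form respectively, which is what makes conversion between them feasible.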

TAP version 14
The effort to create TAP version 14 has stalled.

Version 14 was intended to capture current practices that are already in use.

The pull request for version 14, and the resulting discussion, is at:

 * https://github.com/TestAnything/testanything.github.io/pull/36/files

You can see the full version 14 document in the submitter's repo:

 $ git clone https://github.com/isaacs/testanything.github.io.git
 $ cd testanything.github.io
 $ git checkout tap14
 $ ls tap-version-14-specification.md

= Pass Criteria =
 * what tests can be skipped (this is more part of test execution and control)
 * what test results can be ignored (xfail)
 * min required pass counts, max allowed failures
 * thresholds for measurement results
 * requires testcase id, number and operator
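The criteria above could be evaluated roughly as follows. This is a sketch under assumptions: the dictionary keys and the (testcase id, number, operator) triple shape are invented to match the bullets, not taken from any existing standard:

```python
import operator

# Map operator strings to comparisons for measurement thresholds.
OPS = {"<": operator.lt, "<=": operator.le,
       ">": operator.gt, ">=": operator.ge, "==": operator.eq}

def check_pass_criteria(pass_count, fail_count, measurements, criteria):
    # Minimum required pass count.
    if pass_count < criteria.get("min_pass", 0):
        return False
    # Maximum allowed failures (failures listed as xfail would be
    # excluded from fail_count before this check).
    if fail_count > criteria.get("max_fail", 0):
        return False
    # Thresholds: each names a testcase id, a number, and an operator,
    # per the last bullet above.
    for tc_id, number, op in criteria.get("thresholds", []):
        if tc_id in measurements and not OPS[op](measurements[tc_id], number):
            return False
    return True
```

For example, a board might require at least 5 passes, no failures, and a boot-time measurement under 5 seconds.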

= Miscellaneous (uncategorized) =
 * environment variables used to create an SDK build environment for a board
 * environment variables used for
 * location of kernel configuration (used for dependency testing) KCONFIG_PATH (adopted by LTP)
 * default name of test program in a target package (run-test.sh?)
 * this should be part of the test definition