This page will be used to collect information about test standards.
- 1 Meta-documents
- 2 Terminology and Framework
- 3 Board management
- 4 Test Definition
- 5 Test Execution API (E)
- 6 Build Artifacts
- 7 Run Artifacts
- 8 Pass Criteria
- 9 Miscellaneous (uncategorized)
Meta-documents
- https://tools.ietf.org/html/rfc2119 - RFC 2119, the IETF standard for requirement wording (MUST, SHALL, MAY, etc.)
A survey of existing test systems was conducted in the Fall of 2018. The survey and results are here: Test Stack Survey
Here are some things we'd like to standardize in open source automated testing:
Terminology and Framework
- Test nomenclature - See the Test Glossary
- CI loop diagram
Below is a diagram of the high-level CI loop:
The boxes represent different processes, hardware, or storage locations. Lines between boxes indicate APIs or control flow, and are labeled with letters. The intent is to provide a reference model for the test standards.
Board management
This standard is a set of APIs or interfaces for managing the devices under test (DUTs). It includes things like the following (a sketch of such an interface appears after this list):
- board reservation
- image instantiation (in the case of VMs or emulators)
- board provisioning (installation of software under test)
- power control (or, in the case of VMs - VM start)
- bus control
- power measurement
- attribute discovery
- console monitoring
- file transfer to/from the board
- command execution
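To make this concrete, below is a minimal sketch of what such a board-management abstraction might look like. This is not a ratified interface: the class name, the choice of ssh/scp as a transport, and the example board address are all illustrative assumptions.

```python
import subprocess

class SSHBoard:
    """Sketch of a board-management abstraction (hypothetical API).

    Only command execution and file transfer are implemented here,
    using ssh/scp as one possible transport; reservation, provisioning,
    and power control would delegate to farm-specific services.
    """

    def __init__(self, host, user="root"):
        self.dest = f"{user}@{host}"

    def run(self, command):
        """Execute a command on the board; return (exit_code, output)."""
        proc = subprocess.run(["ssh", self.dest, command],
                              capture_output=True, text=True)
        return proc.returncode, proc.stdout

    def put(self, local_path, remote_path):
        """Transfer a file to the board."""
        subprocess.run(["scp", local_path, f"{self.dest}:{remote_path}"],
                       check=True)

    def get(self, remote_path, local_path):
        """Transfer a file from the board."""
        subprocess.run(["scp", f"{self.dest}:{remote_path}", local_path],
                       check=True)

if __name__ == "__main__":
    board = SSHBoard("192.168.1.50")    # hypothetical board address
    rc, out = board.run("uname -r")     # a crude form of attribute discovery
    print("kernel version:", out.strip())
```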
The power control API is a standard for controlling the power state of a board in a board farm (that is, inside an automated testing lab).
pdudaemon was selected as the standard for controlling power to a board in a lab.
The document containing this standard is at:
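As an illustration, the snippet below drives pdudaemon over HTTP, assuming its HTTP listener is enabled with the URL pattern and default port (16421) described in the pdudaemon README. The daemon host, PDU name, and outlet number are placeholders; check your lab's pdudaemon configuration for the real values.

```python
import urllib.request

def pdu_power(daemon_host, pdu_hostname, outlet, command):
    """Send a power command ('on' or 'off') to a pdudaemon HTTP listener.

    The URL pattern below follows the pdudaemon README; adjust it if
    your listener is configured differently.
    """
    url = (f"http://{daemon_host}:16421/power/control/{command}"
           f"?hostname={pdu_hostname}&port={outlet}")
    with urllib.request.urlopen(url) as resp:
        return resp.status

# Example (hypothetical names): turn on outlet 1 of the PDU 'pdu01'
# pdu_power("localhost", "pdu01", 1, "on")
```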
Test Definition
The test definition is the set of attributes, code, and data that are used to perform a test. A test definition standard would specify things like the following:
- fields - the data elements of a test
- file format (JSON, XML, etc.) - how a test is expressed and transported
- meta-data - data describing the test
- visualization control - information used for visualization of results
- instructions - executable code to perform the test
See Test Definition Project for more information about a project to harmonize test definitions across multiple test systems.
- how to specify test dependencies (see the sketch after this list)
  - ex: assert_define ENV_VAR_NAME
  - ex: kernel_config
- types of dependencies
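Since no single definition format has been agreed on, the structure below is only a sketch of how the fields, metadata, visualization control, instructions, and dependencies listed above might be expressed; every field name is hypothetical.

```python
import json

# Hypothetical test definition; the field names are illustrative,
# not part of any agreed standard.
test_definition = {
    "name": "example-netperf",
    "metadata": {                      # data describing the test
        "description": "Measure network throughput",
        "license": "GPL-2.0",
        "maintainer": "someone@example.com",
    },
    "dependencies": {
        "environment": ["SERVER_IP"],      # ex: assert_define SERVER_IP
        "kernel_config": ["CONFIG_NET"],   # ex: kernel_config
    },
    "visualization": {"chart": "line", "units": "Mbps"},
    "instructions": "run-test.sh",     # executable code to perform the test
}

# JSON is one possible file format for expressing and transporting a test
print(json.dumps(test_definition, indent=2))
```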
Test Execution API (E)
See Test Execution Notes for more details and miscellaneous notes. A sketch of a test-execution skeleton appears after the list below.
- test API
- host/target abstraction
- kernel installation / provisioning
- file operations
- console access
- command execution
- test retrieval, build, deployment
- test execution
  - ex: 'make test'
- test phases
- test package format
- meta-data for each test
- test results
- baseline expected results for particular tests on particular platforms
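The skeleton below illustrates one way the phases above could be organized. The class and method names are hypothetical, and the board object is assumed to provide a run() method like the board-management sketch earlier on this page.

```python
class TestRun:
    """Hypothetical skeleton of a test-execution flow; not a real API."""

    def __init__(self, test_def, board):
        self.test_def = test_def   # a test definition (see above)
        self.board = board         # host/target abstraction
        self.log = ""

    def fetch(self):
        """Retrieve the test (e.g., git clone or package download)."""

    def build(self):
        """Build the test for the target (cross-compiling if needed)."""

    def deploy(self):
        """Install the test package on the target."""

    def execute(self):
        """Run the test on the target, ex: 'make test' or run-test.sh."""
        rc, self.log = self.board.run(self.test_def["instructions"])

    def parse(self):
        """Convert the test log output into testcase results."""

    def run_all(self):
        # the test phases, in order
        for phase in (self.fetch, self.build, self.deploy,
                      self.execute, self.parse):
            phase()
```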
Build Artifacts
Test package format
This is a package intended to be installed on a target (as opposed to the collection of test definition information that may be stored elsewhere in the test system).
Test store
This is a place where tests or test packages can be stored and downloaded for use in a CI framework.
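As a sketch only (no package format has been standardized here), the snippet below assembles a hypothetical on-target test package: a tarball containing a metadata file and an executable entry point.

```python
import io
import json
import tarfile

def add_bytes(tar, name, data, mode=0o644):
    """Add an in-memory file to the tarball."""
    info = tarfile.TarInfo(name)
    info.size = len(data)
    info.mode = mode
    tar.addfile(info, io.BytesIO(data))

# Hypothetical package contents: per-test metadata plus a run script
metadata = json.dumps({"name": "example-test", "version": "1.0"}).encode()
script = b"#!/bin/sh\necho 'ok 1 - example'\n"

with tarfile.open("example-test.tar.gz", "w:gz") as tar:
    add_bytes(tar, "example-test/metadata.json", metadata)
    add_bytes(tar, "example-test/run-test.sh", script, mode=0o755)
```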
Run Artifacts
Run artifacts are the outputs produced by running a test, for example:
- data files (audio, video)
- monitor results (power log, trace log)
See Test Results Format Notes for details and miscellaneous notes.
The results format is the output from the test, and is part of the interface between the test program and the test execution layer (or test harness).
The main thing the format communicates is the list of testcases (or metrics, in the case of benchmarks) and the result of each testcase (pass, fail, etc.).
- test log output format
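To illustrate the idea, here is a minimal results structure carrying a list of testcases and their status, plus metrics for benchmarks. This is an invented example, not the Fuego run.json or KernelCI schema referenced below.

```python
import json

# Invented results structure; the essential content is the list of
# testcases (or metrics, for benchmarks) and each one's result.
results = {
    "test_name": "example-test",
    "testcases": [
        {"name": "open_file", "status": "PASS"},
        {"name": "write_file", "status": "FAIL"},
    ],
    "metrics": [
        {"name": "throughput", "value": 94.2, "units": "Mbps"},
    ],
}

print(json.dumps(results, indent=2))
```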
Server-based results storage
All test results for YP (Yocto Project) builds are added to a git repository:
(they're stored in a JSON format). The YP project doesn't yet have good tools to analyze the data, but is at least storing it.
Fuego uses the fserver project (https://github.com/tbird20d/fserver) to store run results in a common location.
Data is stored using Fuego's run.json format (http://fuegotest.org/wiki/run.json)
Fuego can also save results to a KernelCI backend and a Squad backend.
KernelCI defines schemas for results: a test object holds the results from a test (https://api.kernelci.org/schema-test.html), and a test group is a collection of test cases (see https://api.kernelci.org/schema-test-group.html and https://api.kernelci.org/schema-test-case.html).
The results backend for LKFT is: https://qa-reports.linaro.org/lkft/
BigQuery common results server project
The client for this is at: https://github.com/spbnick/kcidb
The server for this is at: ??
The Linux kernel kselftest uses TAP as the preferred output format.
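TAP (the Test Anything Protocol) is simple enough to show directly. The sketch below emits TAP version 13 output for two made-up testcases:

```python
# Emit TAP version 13 output: a version line, a plan line ('1..N'),
# and one result line per testcase. The testcases are made up.
cases = [("test_open", True), ("test_write", False)]

print("TAP version 13")
print(f"1..{len(cases)}")
for i, (name, ok) in enumerate(cases, start=1):
    print(f"{'ok' if ok else 'not ok'} {i} - {name}")
```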
Pass Criteria
The pass criteria are a set of data that indicate to the test framework how to interpret the results from a test. They can indicate the following:
- what tests can be skipped (this is more part of test execution and control)
- what test results can be ignored (xfail)
- min required pass counts, max allowed failures
- thresholds for measurement results
  - requires testcase id, number, and operator
The pass criteria allow separation of things like expected failures from the test code itself, to handle situations where different sets of results are interpreted as success or failure depending on factors outside the test (for example, kernel version, kernel configuration, or available hardware); a sketch appears below.
For things like functional unit tests, a single failing result should result in the overall failure of a test suite. However, for system tests or benchmarks, it is often the case that some results must be interpreted in a context-sensitive manner, or some set of testcases are ignored for expediency's sake.
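Below is a sketch of what pass-criteria data and its evaluation might look like. The field names (max_fail, ignore, thresholds) are hypothetical, and the results structure follows the invented example from the results-format discussion above.

```python
# Hypothetical pass-criteria data; the field names are illustrative.
criteria = {
    "max_fail": 0,                    # max allowed failures
    "ignore": ["known_flaky_case"],   # xfail: results to ignore
    "thresholds": [
        # a threshold requires a testcase id, a number, and an operator
        {"testcase": "throughput", "op": "ge", "value": 90.0},
    ],
}

def evaluate(results, criteria):
    """Apply pass criteria to a results structure; return overall pass/fail."""
    failures = [tc for tc in results["testcases"]
                if tc["status"] == "FAIL" and tc["name"] not in criteria["ignore"]]
    if len(failures) > criteria["max_fail"]:
        return False
    for th in criteria["thresholds"]:
        value = next(m["value"] for m in results["metrics"]
                     if m["name"] == th["testcase"])
        if th["op"] == "ge" and not value >= th["value"]:
            return False
    return True

results = {
    "testcases": [{"name": "known_flaky_case", "status": "FAIL"}],
    "metrics": [{"name": "throughput", "value": 94.2}],
}
print(evaluate(results, criteria))   # True: the only failure is ignored
```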
Miscellaneous (uncategorized)
- environment variables used to create an SDK build environment for a board
- environment variables used for controlling execution of a test
- location of the kernel configuration (used for dependency testing): KCONFIG_PATH (adopted by LTP; see the sketch after this list)
- default name of the test program in a target package (run-test.sh?)
  - this should be part of the test definition
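As one example, a test can use the KCONFIG_PATH variable to check a kernel-config dependency before running. The helper below is hypothetical, and the fallback path and option checked are illustrative:

```python
import os

def kernel_config_has(option, value="y"):
    """Return True if the kernel config sets the given option.

    Uses the KCONFIG_PATH convention (adopted by LTP) to locate the
    kernel configuration; the fallback path is only a guess and does
    not handle compressed configs like /proc/config.gz.
    """
    path = os.environ.get("KCONFIG_PATH",
                          "/boot/config-" + os.uname().release)
    try:
        with open(path) as f:
            return f"{option}={value}" in (line.strip() for line in f)
    except FileNotFoundError:
        return False

# ex: skip a test unless CONFIG_NET is enabled
if not kernel_config_has("CONFIG_NET"):
    print("SKIP: CONFIG_NET not enabled")
```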