Test Result Codes

A result code is one of an enumerated set of values indicating the outcome, or "status", of a test.

This status is usually restricted to one of a small set of possible values. Because different kinds of problems can arise during testing, result codes usually consist of more than just PASS and FAIL, which indicate success or failure of whatever the test is testing for.

This page documents the most common result codes, in an attempt to harmonize their usage across the industry.
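As an illustration of what harmonization could look like, here is a minimal Python sketch. The table and the choice of common vocabulary are this page's own illustration, not any standard; the per-framework codes follow the lists later on this page.

```python
# Hypothetical normalization table: maps framework-specific result
# codes onto a common PASS/FAIL/SKIP/ERROR vocabulary.
COMMON = {
    # LTP
    "TPASS": "PASS", "TFAIL": "FAIL", "TSKIP": "SKIP",
    "TBROK": "ERROR",
    "TWARN": "PASS",      # debatable: treated here as pass-with-warnings
    # Fuego
    "PASS": "PASS", "FAIL": "FAIL", "SKIP": "SKIP", "ERROR": "ERROR",
    # pytest
    "passed": "PASS", "failed": "FAIL", "skipped": "SKIP",
    "error": "ERROR", "xfailed": "PASS",
    "xpassed": "FAIL",    # also debatable: an unexpected pass may need attention
}

def normalize(code):
    """Return the common status for a framework-specific code."""
    return COMMON.get(code, "UNKNOWN")

print(normalize("TBROK"))   # -> ERROR
```

The hard part of harmonization is not the table itself but the debatable cells (TWARN, xpassed), which different projects legitimately map differently.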

xfail

 * xfail is something gcc's test suite uses (besides pytest).
 * Tim's comment: putting xfail in the test itself works for the developers of the test, but not for end-users running the test. How would the test know which failures an end-user wants to ignore for the moment?
 * pytest documentation for xfail: https://docs.pytest.org/en/latest/skipping.html
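To illustrate the idea behind xfail (this is a sketch of the concept, not pytest's actual implementation), a tiny harness might turn a failure of a known-bad test into xfail, and an unexpected pass into xpass:

```python
def run_case(func, expect_fail=False):
    """Run one test function and classify the result.

    expect_fail marks a known-bad test, similar in spirit to
    pytest's xfail marker.  Illustrative sketch only.
    """
    try:
        func()
        passed = True
    except AssertionError:
        passed = False
    if expect_fail:
        return "xpass" if passed else "xfail"
    return "pass" if passed else "fail"

def known_bug():
    assert 2 * 2 == 5   # fails until the bug is fixed

print(run_case(known_bug, expect_fail=True))   # -> xfail
```

Note that this sketch hard-codes the expectation at the call site, which is exactly the developer-vs-end-user tension raised in Tim's comment above.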

LTP

 * TPASS Passed - test was successful
 * TFAIL Failed - test assertions failed
 * TSKIP Skipped - test was skipped because of a missing pre-requisite or configuration
 * TWARN Warning - test produced warnings - usually reported when test cleanup failed to restore the system
 * TBROK Broken - usually reported when test setup fails before the test even attempts to exercise the test assertions
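The TBROK/TFAIL distinction (setup failure vs. assertion failure) can be sketched as follows. LTP itself is written in C; this Python snippet and its names are just this page's illustration of the classification logic:

```python
def ltp_style_result(setup, test):
    """Classify a run LTP-style: TBROK if setup fails, TFAIL if the
    test's assertions fail, TPASS otherwise.  Illustrative only."""
    try:
        setup()
    except Exception:
        return "TBROK"      # broken before assertions were even attempted
    try:
        test()
    except AssertionError:
        return "TFAIL"      # the test ran and its assertions failed
    return "TPASS"

print(ltp_style_result(lambda: None, lambda: None))   # -> TPASS
```

The value of the distinction is diagnostic: a TBROK points at the environment or test harness, while a TFAIL points at the software under test.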

Fuego
See http://fuegotest.org/wiki/run.json, the 'status' field:
 * PASS - a testcase, test set or test suite completed successfully
 * FAIL - a testcase, test set or test suite was unsuccessful
 * ERROR - a test did not execute properly (e.g. the test program did not run correctly)
 * SKIP - a test was not executed, usually due to invalid configuration (missing some pre-requisite)

Jenkins

 * Stable - color: blue; everything passed
 * many people install the 'greenballs' plugin to make this green
 * Unstable - color: yellow; Tests were successfully executed, but some failed.
 * Failed - color: red; Problem with compilation / configuration / runtime error.
 * Aborted - color: grey; build timed out, or someone intentionally stopped the run in the middle.
 * Not executed yet - color: grey; Test has not been executed yet

pytest
Pytest has exit codes, each with a particular meaning:

 * Exit code 0:	All tests were collected and passed successfully
 * Exit code 1:	Tests were collected and run but some of the tests failed
 * Exit code 2:	Test execution was interrupted by the user
 * Exit code 3:	Internal error happened while executing tests
 * Exit code 4:	pytest command line usage error
 * Exit code 5:	No tests were collected
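A caller (e.g. a CI wrapper script) can turn the exit code back into a message. The dict below mirrors the list above and is just this page's sketch (recent pytest versions also expose these values as the enum pytest.ExitCode):

```python
# Mirror of pytest's documented exit codes; the dict itself is a sketch.
PYTEST_EXIT_CODES = {
    0: "all tests collected and passed",
    1: "tests collected and run, but some failed",
    2: "execution interrupted by the user",
    3: "internal error while executing tests",
    4: "pytest command line usage error",
    5: "no tests were collected",
}

def describe_exit(code):
    """Return a human-readable description of a pytest exit code."""
    return PYTEST_EXIT_CODES.get(code, "unknown exit code %d" % code)

# e.g. after: rc = subprocess.call(["pytest", "tests/"])
print(describe_exit(5))   # -> no tests were collected
```

Note in particular exit code 5: a CI job that silently collects zero tests is usually a configuration bug, so treating 5 the same as 0 is rarely what you want.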

pytest also has result codes. The letter in parentheses is used with the -r command-line option to filter which items are included in the short summary report:
 * failed (f)
 * error (E)
 * skipped (s)
 * xfailed (x)
 * xpassed (X)
 * passed (p)
 * passed with output (P) - I think this just controls output for the summary report.

buildbot

 * SUCCESS: Value: 0; color: green; a successful run.
 * WARNINGS: Value: 1; color: orange; a successful run, with some warnings.
 * FAILURE: Value: 2; color: red; a failed run, due to problems in the build itself, as opposed to a Buildbot misconfiguration or bug.
 * SKIPPED: Value: 3; color: white; a run that was skipped – usually a step skipped by doStepIf (see Common Parameters)
 * EXCEPTION: Value: 4; color: purple; a run that failed due to a problem in Buildbot itself.
 * RETRY: Value: 5; color: purple; a run that should be retried, usually due to a worker disconnection.
 * CANCELLED: Value: 6; color: pink; a run that was cancelled by the user.

See http://docs.buildbot.net/latest/developer/results.html
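Buildbot defines these as integer constants (in buildbot.process.results). A standalone mirror of the table above, without importing Buildbot:

```python
# Standalone mirror of Buildbot's result values 0..6; Buildbot itself
# defines these constants in buildbot.process.results.
RESULTS = ("SUCCESS", "WARNINGS", "FAILURE", "SKIPPED",
           "EXCEPTION", "RETRY", "CANCELLED")

def result_name(value):
    """Map a numeric build result back to its name."""
    return RESULTS[value] if 0 <= value < len(RESULTS) else "UNKNOWN"

print(result_name(2))   # -> FAILURE
```

The numeric ordering matters in Buildbot: later steps can combine results, with higher (worse) values generally winning.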

kernelCI
pass, fail, skip, unknown

(Probably from LAVA)

LAVA
pass, fail, skip, unknown

ktest
pass, fail

labgrid
pass, fail, error, skip, xfail

(provided by pytest)

KFT

 * pass - color: green
 * fail - color: red
 * skip - color: yellow
 * xfail - color: blue, used for a failed test where the failure should be ignored
 * total - color: grey

opentest
pass, fail, skip, not-run, blocked

tbot
''No names. True/False only.''

TCF
PASS, FAIL, ERRR, SKIP, BLCK
 * PASS: all went well
 * FAILure: a deterministic failure of what the test is testing, e.g. we multiply 2*2 and it yields 5, or power measurements from an attached gauge while doing operation X show a power consumption outside of the expected band.
 * ERRoR: unexpected negative output (for example, a kernel crash while reading a file)
 * SKIP: the DUTs lack the capabilities needed to run the test, and this could only be determined once they were configured, set up and powered up (vs. just looking at the metadata)
 * BLoCK: any infrastructure problem that prevented carrying the test to completion (e.g. a network failure communicating with the server, a DUT power switch failing to execute a power-up command, etc.)

Xilinx
PASS, FAIL, UNANALYZED

Yocto Project
Pass, Fail, Skip and Error (error means the testcase broke somehow)