Test Results Format Notes
This document has information about various test result formats, and their strengths and weaknesses.
Introduction
The results format is the output from the test, and forms part of the interface between the test program and the test execution layer (or test harness).
The main thing that the format communicates is the list of testcases (or metrics, in the case of benchmarks) and the result of each testcase (pass, fail, etc.).
A good starting document that describes different test report formats is:
- https://github.com/ligurio/testres/wiki/Everything-you-need-to-know-about-software-testing-report-formats
- comparison of TAP, SubUnit and JUnit output formats.
Existing output formats
Here are some of the existing formats that are used by various test programs and frameworks:
- TAP (TestAnythingProtocol)
- SubUnit
- xUnit (junit, xunit, etc.)
Elements
A test output format needs to communicate the following information (a small example illustrating these elements appears after the list):
- testcase identifiers (names or descriptions or ID numbers)
- result of the testcase (pass, fail, skip, error, xfail)
- additional information
- counts (aggregate data)
- subtest results
- diagnostic information - general information that may help diagnose the test operation
- reason - text explaining why a test passed or failed
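For illustration, here is a small hand-written fragment loosely following KTAP version 1 conventions (the test names, diagnostic text, and skip reason are invented; see the KTAP specification for the exact syntax). It shows a testcase count, testcase identifiers, results, a diagnostic line, and a skip with a reason; KTAP also allows subtest results to be reported as an indented, nested block of the same form.

KTAP version 1
1..3
ok 1 cpu_hotplug_test
# timer_test: measured resolution was 12 usec
not ok 2 timer_test
ok 3 memleak_test # SKIP not supported on this hardware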
testcase identifiers
There should be a way to identify a test, so that when a test is repeated it can be determined if the test result changed or not. The testcase identifier could be a number, a short name, or a description. But it should be the same every time the test is run (it should be invariant over test invocations).
Many test developers will change the output related to a testcase based on the testcase result. There needs to be a portion of the testcase output that is invariant, and which can be parsed into an identifier that is unique within a single run of the test.
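As a sketch of why the invariant portion matters, the following Python fragment (the line format and regular expression are hypothetical, not tied to any particular framework) extracts the identifier from a TAP-style result line, so results from two runs of the same test can be matched up even when the status text changes:

import re

# TAP-style result line: the name after the number is the invariant
# identifier; the "ok" / "not ok" part varies from run to run.
RESULT_LINE = re.compile(r"^(ok|not ok)\s+\d+\s+-?\s*(?P<id>\S+)")

def testcase_id(line):
    """Return the invariant testcase identifier, or None for non-result lines."""
    m = RESULT_LINE.match(line)
    return m.group("id") if m else None

print(testcase_id("ok 2 timer_test"))      # -> timer_test
print(testcase_id("not ok 2 timer_test"))  # -> timer_test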
result strings
One aspect of the result format is the result or status code for individual test cases or the test itself.
Result codes
- test log output format
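Different formats spell these statuses differently. As a rough, simplified comparison (Python is used here only as convenient notation; the spellings should be confirmed against each format's specification), TAP/KTAP express results with "ok"/"not ok" plus directives, while xUnit-style XML uses child elements of <testcase>:

# Simplified, illustrative mapping of generic result codes to common spellings.
RESULT_SPELLINGS = {
    "pass":  {"tap": "ok",            "xunit": "<testcase> with no child element"},
    "fail":  {"tap": "not ok",        "xunit": "<failure>"},
    "skip":  {"tap": "ok ... # SKIP", "xunit": "<skipped>"},
    "error": {"tap": "not ok",        "xunit": "<error>"},
}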
Metric data
Metric or measurement data is a string indicating the value for an operation. This is usually used for performance, timing, or other number-related data (such as that reported by benchmarks).
The metric data needs to report a number, and most likely a 'units' field indicating how the number should be interpreted.
There may also be additional information associated with a metric (or measurement), indicating parameters used to determine whether the value indicates success or failure of the related testcase.
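There is no single standard for how this extra information is expressed. As a purely hypothetical sketch (the field names and threshold scheme below are invented for illustration), a metric record might carry the value, the units, and the pass/fail parameters together, with the testcase result derived from them:

# Hypothetical metric record; field names are invented for illustration.
metric = {
    "name": "boot_time",
    "value": 4.7,
    "units": "seconds",
    "threshold": 5.0,          # parameter used to judge success
    "comparison": "less_than", # how the value must relate to the threshold
}

def metric_passes(m):
    """Return True if the measured value satisfies the metric's pass criterion."""
    if m["comparison"] == "less_than":
        return m["value"] < m["threshold"]
    raise ValueError("unknown comparison: %s" % m["comparison"])

print(metric["name"], "pass" if metric_passes(metric) else "fail")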
parser helper information
Some tests use simple line-based output. Here is an idea for how a program or log might provide information about its output format, allowing the test framework to perform introspection on the logs.
Note that this is a fallback mechanism for when a test has already been written with some ad-hoc consistency in its output. When writing new tests, it is much preferred to use one of the existing test output formats.
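One hypothetical sketch of such a mechanism (the marker line, field names, and log contents below are invented for illustration, not taken from any existing framework): the test emits a line near the start of its log declaring a regular expression that describes its result lines, and the framework uses that expression to parse the rest of the log:

import re

# Hypothetical log from an ad-hoc test: the first line declares how to
# parse the result lines that follow (the "#PARSER_REGEX:" marker is invented).
log = """\
#PARSER_REGEX: ^(?P<result>PASSED|FAILED): (?P<id>\\S+)$
PASSED: boot_test
FAILED: wifi_scan_test
"""

lines = log.splitlines()
declared = lines[0].split(":", 1)[1].strip()   # regular expression declared by the test
result_re = re.compile(declared)

for line in lines[1:]:
    m = result_re.match(line)
    if m:
        print(m.group("id"), "->", m.group("result"))
# prints:
#   boot_test -> PASSED
#   wifi_scan_test -> FAILED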
Standards
For the Linux kernel selftests, the preferred output format was TAP (TestAnythingProtocol). The preferred output format has since changed to KTAP (currently version 1).
TAP version 14
The effort to create TAP version 14 has stalled.
Version 14 was intended to capture current practices that are already in use. The pull request for version 14, and resulting discussion is at:
* https://github.com/TestAnything/testanything.github.io/pull/36/files
You can see the full version 14 document in the submitter's repo:
$ git clone https://github.com/isaacs/testanything.github.io.git
$ cd testanything.github.io
$ git checkout tap14
$ ls tap-version-14-specification.md
KTAP - Kernel Test Anything Protocol
The Kernel Test Anything Protocol (KTAP) began as an attempt to describe how the Linux kernel implemented (and departed from) the TAP specification.
Disambiguation
This is not the 'ktap' Linux kernel dynamic tracing tool:
- [RFC PATCH 00/28] ktap: A lightweight dynamic tracing tool for Linux
- lwn article: Ktap — yet another kernel tracer
KTAP version 1
The version 1 specification is in the Linux kernel source tree as Documentation/dev-tools/ktap.rst (this link is to the current top of tree version, not the original commit). The specification can be converted from .rst to various formats with the kernel make commands:

make htmldocs
make latexdocs
make pdfdocs
make epubdocs
make xmldocs

Formatted version 1, as of commit 312310928417 ("Linux 5.18-rc1"), Sun Apr 3 14:08:21 2022: tar of ktap-version_1-312310928417.html
In June 2020 Tim Bird started an RFC email thread proposing a KTAP specification. There is much interesting discussion, but the thread ended without creating a specification.
The discussion continued in August 2021 with another RFC email proposing a KTAP specification based on Tim's previous RFC. There is discussion in the thread, but again the thread ended without creating a specification.
In December 2021 David Gow submitted an RFC email with a pared-down KTAP specification, based on the previous discussions. This was quickly followed by an RFC email, v2. After a short discussion this proposal was added to the Linux kernel source tree as Documentation/dev-tools/ktap.rst (this link is to the current top of tree version, not the original commit).
There have been a few additional commits modifying version 1. These changes have been for clarity, and have not modified the KTAP output format.
KTAP version 2
TODO: link to tree holding version 2 development
In March 2022 Frank Rowand started a discussion with an RFC email.
It was agreed to keep the version 2 changes in a branch instead of getting pulled into torvalds/master to avoid the confusion of having a partially modified specification in the Linux source tree. Frank Rowand volunteered to host the branch in his kernel.org Linux source tree. He will create the branch after one more commit for version 1 of the specification arrives in torvalds/master.
Process
Keeping discussion of proposed changes focused
In his RFC email Frank mentioned:
I intend to take some specific suggestions from the August 2021 discussion to create stand-alone RFC patches to the Specification instead of adding them as additional patches in this series. The intent is to focus discussion on a single area of the Specification in each patch email thread.
If you are making proposals for version 2 of the specification, please try to follow this principle of one major topic per email thread.
Patches to tests and/or test parsers
David Gow suggested:
I'd also be curious to see patches to tests and/or test parsers to show off any particularly compatibility-breaking and/or interesting changes, though I don't think that _has_ to be a prerequisite for discussion or the spec.
Suggested email distribution
Frank will add patches to his ktap version 2 branch once review of the patches completes.
to:
 Frank Rowand <frowand.list@gmail.com>
 David Gow <davidgow@google.com>
 Shuah Khan <skhan@linuxfoundation.org>
 Kees Cook <keescook@chromium.org>
 Tim.Bird@sony.com
 Brendan Higgins <brendanhiggins@google.com>

cc:
 Jonathan Corbet <corbet@lwn.net>
 rmr167@gmail.com
 guillaume.tucker@collabora.com
 dlatypov@google.com
 kernelci@groups.io
 kunit-dev@googlegroups.com
 linux-kselftest@vger.kernel.org
 linux-doc@vger.kernel.org
 linux-kernel@vger.kernel.org
Proposal email threads
I will attempt to keep a list of proposal email threads here:
Proposal email threads (most recent at bottom):
- email thread:
 - [PATCH v2 0/2] begin KTAP spec v2 process
 - [PATCH v2 1/2] ktap_v2: change version to 2-rc in KTAP specification
 - [PATCH v2 2/2] ktap_v2: change "version 1" to "version 2" in examples
- email thread:
 - [RFC] KTAP spec v2: prefix to KTAP data