Test Definition Project

From eLinux.org
Revision as of 19:13, 27 September 2019

The Test Definition Project consists of work to categorize and harmonize the attributes of open source tests, so that tests and test artifacts can more easily be shared between different test projects.

Tim Bird is working on a "Test Definition Standards" document, to be presented and discussed at various meetings in the Fall of 2019.

This page has information that is being retained on this wiki as resources for this work.

Presentations

Resources

See Test definition survey


Test definition elements

Categories

  • Information about a test
  • Pre-requisites
  • Dependencies
  • Instructions
  • Output format
  • Test variables
  • Results analysis
  • Visualization control

For individual elements of the above:

  • field or item name
  • language (C, Python, sh, etc.)
  • file format (json, xml, etc.)
  • allowed values
    • including APIs, for code items
  • groupings


Information about a test (meta-data)

Also known as test meta-data. Does not affect the test execution, but provides information about it.

Surveyed fields:

  • Name
  • Description
  • License
  • Version
    • test program version
    • test wrapper version
  • Author
  • Maintainer
  • Test format version
  • Package manifest

Pre-requisites

  • required machine attributes
    • memory
    • cpus
    • storage
    • architecture
    • (specific hardware - for scheduling)
      • network, bus, device
  • required kernel attributes
    • config value (= kernel feature)
    • kernel module
  • required distribution attributes
    • distribution
    • distro version
    • logging system
    • init system
    • installed package
    • installed program
    • installed file
  • test environment
    • root permissions
    • file system type
  • tags

Dependencies

Things that can be installed or modified

  • kernel configuration (e.g. kselftest config fragments)
  • packages
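As a concrete illustration of the kernel-configuration dependency, a kselftest-style config fragment is just a list of required CONFIG options. The particular options below are real kernel symbols, but whether a given test needs them is hypothetical:

```
# config fragment: kernel features this test depends on
CONFIG_NET=y
CONFIG_TEST_BPF=m
```

A harness can merge such a fragment into the kernel configuration before building, or check the running kernel's config against it before scheduling the test.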

Test control

test variables

    • params

skiplists

    • how does each test handle skiplists
    • Tests that are known to handle skiplists:
      • LTP
      • xfstests
        • xfstests also has a mechanism for selecting individual tests (which Fuego does for LTP with LTP_one_test)
      • (does kselftest? - they have a skip mechanism based on CONFIG fragments; is it generalized?)
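Whatever the per-framework mechanism, the core of skiplist handling is filtering a list of test cases against a list of names to skip. A minimal sketch (function and file names are invented for illustration, not taken from LTP or xfstests):

```shell
#!/bin/sh
# Hypothetical skiplist filter: emit only the tests that do not
# appear in the skiplist file, one test name per line.
filter_tests() {
    # -F fixed strings, -v invert match, -x whole-line match,
    # -f read patterns (skiplist entries) from a file
    grep -Fvx -f "$2" "$1"
}

# Example data
printf 'test_a\ntest_b\ntest_c\n' > /tmp/all_tests
printf 'test_b\n' > /tmp/skiplist
filter_tests /tmp/all_tests /tmp/skiplist   # prints test_a and test_c
```

Whole-line fixed-string matching (-Fx) avoids a skiplist entry like "test_a" accidentally skipping "test_a_extended".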

expected duration (timeout)

Expected duration is something a test needs to communicate to the test harness (test manager and scheduler), so that the harness can automatically detect if the test or machine has hung.

The difficulty here is that actual duration may be affected by a lot of factors. So coming up with a value that will work in all circumstances is difficult.

Some systems, like Jenkins, measure the duration of previous runs, but (to my knowledge) do not use that information to stop an instance of a test.

Instructions

  • source location
  • build instructions
  • run instructions
  • setup instructions
  • teardown/cleanup instructions

test library

The test library is a set of functions or capabilities that are available to a test to perform utility operations on a target device.

One category of library functions is the set of programs that a test may utilize on a target to perform operations.

Fuego has a list of core programs that it tries to constrain itself to. All other programs must be specified as dependencies in the test_pre_check function, or in NEED_ variables.

Here is Fuego's minimal Linux command list:

  • cat, df, find, free, grep, head, (logger or logread), mkdir, mount, ps, rm, rmdir, sync, tail, tee, touch, true, umount, uname, uptime, xargs, [
  • /sbin/reboot, /sbin/true, /sbin/logread

Depending on the distribution, certain files must be present:

  • /var/log/syslog (optional)

Certain pseudo-files are required by the Fuego core:

  • /proc/interrupts, /proc/sys/vm/drop_caches, /proc/$$/oom_score_adj

Certain commands have required minimum arguments that must be supported:

  • mkdir -p, rm -rf, grep -Fv
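A pre-check of this kind can be sketched as follows. The function name and NEED_ style follow Fuego's naming, but the body is an illustrative sketch, not Fuego's actual code:

```shell
#!/bin/sh
# Sketch: verify that required programs and pseudo-files exist on the
# DUT before running the test, in the spirit of Fuego's test_pre_check.
NEED_PROGRAMS="cat grep mount uname"
NEED_FILES="/proc/interrupts"

pre_check_ok=1
for prog in $NEED_PROGRAMS; do
    if ! command -v "$prog" >/dev/null 2>&1; then
        echo "missing program: $prog"
        pre_check_ok=0
    fi
done
for f in $NEED_FILES; do
    if [ ! -e "$f" ]; then
        echo "missing file: $f"
        pre_check_ok=0
    fi
done
[ "$pre_check_ok" = 1 ] && echo "pre_check: OK"
```

Failing fast here lets the harness report a SKIP (missing dependency) instead of a confusing mid-run FAIL.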

One program to consider for characterizing the hardware on a platform is 'lshw'. This may be something that a test manager runs, to characterize a DUT (the information might be used for scheduling, or test selection, or to modify the parameters to a test). Or, it could be something a test itself runs, to alter its own execution.

Output format

  • parser
  • results output API
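A parser's job is to turn the test program's raw output into structured results. As a sketch, assuming a hypothetical output convention of one "TEST <name>: PASS" or "TEST <name>: FAIL" line per test case (this format is invented for illustration):

```shell
#!/bin/sh
# Sketch: reduce raw test output to pass/fail counts.
parse_results() {
    awk '
        /: PASS$/ { pass++ }
        /: FAIL$/ { fail++ }
        END { printf "passed=%d failed=%d\n", pass, fail }
    '
}

printf 'TEST hello: PASS\nTEST world: FAIL\nTEST again: PASS\n' | parse_results
```

A real parser would also emit per-testcase records in the harness's results format (JSON, XML, etc.), not just the totals.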

Results analysis

  • pass criteria
    • ignore lists
    • pass/fail counts
    • expected values
    • thresholds

Visualization control

  • key fields
  • chart types
  • thresholds
  • groupings
  • summaries

Things that are NOT part of a test definition

  • build artifact
  • results artifact
  • board scheduling API
  • lab management API (unless test performs board reboot, provision, hardware control, etc.)
  • trigger API
  • scheduling API

element expression

  • file formats
  • file names
  • data format

APIs

  • API between test and test framework
  • API between test and target system
  • API between test and external equipment

What languages are supported?

test framework APIs

These are the APIs between the test and the test framework.

API library candidates

  • functions.sh (Fuego)
  • beakerlib (Beaker)
  • rhts_lib(sp?) (CKI)
  • (other system's function libraries??)

test framework functions

(need to peruse the list above for individual functions)
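For orientation while perusing those libraries, the common shape of such framework functions is a set of shell functions that the test sources and calls for logging and result reporting. The names below are invented for illustration, loosely in the style of functions.sh and beakerlib, and are not taken from any of the libraries above:

```shell
#!/bin/sh
# Hypothetical sketch of the kind of functions a test framework
# library exports to tests (illustrative names only).
log_this() {
    # timestamped line for the framework's run log
    echo "$(date +%H:%M:%S) $*"
}

report_result() {
    # record one testcase result: report_result <testcase> <PASS|FAIL>
    echo "RESULT: $1 $2"
}

log_this "starting demo test"
report_result demo_case PASS
```

Harmonizing on the signatures of a small set of functions like these is what would let a test move between frameworks unchanged.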

candidates

board-local test APIs

These are the APIs between the test and the DUT system itself that are not part of the DUT system (that is, ones which call a test framework library).

API library candidates

Here are candidate libraries that have test APIs:

  • beakerlib (Beaker)
  • fuego_board_function_lib.sh (Fuego)
  • sh_test_lib (LAVA? or LKFT?)

local functions

[put list of board-local functions here]

candidates

external equipment APIs

These are APIs between the test and external equipment, such as external BUS controllers, power measurement devices, external tracers and logic analyzers, video or audio grabbers, or other external hardware.

Question: Does this include the API to server hardware, like netperf server?