Test Execution Notes

From eLinux.org

This page has notes about aspects of the execution control for different systems that perform test execution.

Here are some of the elements of the test execution API:

  • test API - what functions can the test call?
    • file operations
    • service operations (start, stop)
    • log access
    • results output functions
  • host/target abstraction - is the test executed from host or directly on target
  • DUT preparation/control: - this might be in a separate standard
    • kernel installation / provisioning
    • console access
    • log access
    • test activation - how is the test activated or started?
      • e.g. installed and activated as part of boot
      • e.g. a command is executed as a program?
      • e.g. is the test built into the kernel and activated via some trigger mechanism (e.g. via /proc or /sys or an ioctl)?
  • test retrieval, build, deployment
  • test execution:
    • test name, or standard for test invocation name
    • ex: 'make test' - a standard make target name?
    • runtest.sh? - a standard shell script name?
    • <testname>.sh - a standard shell script name, based on test meta-data?
  • test phases?
  • test selection / skiplists - how are tests selected for inclusion in, or exclusion from, a test run?
  • test variables - how is the test's execution controlled?
    • make variables
    • environment variables
    • command line arguments

Does this include test scheduling? No, but it does include test selection.

Does this include test building? No - see the Test Building notes.


Notes by test system

Fuego

test locations

  • test source location: each test is in its own directory under fuego-core/tests/<test-name>
  • on the target, test materials are placed in $BOARD_TESTDIR/fuego.$TESTDIR
    • BOARD_TESTDIR is usually something like: /home/fuego
    • TESTDIR is the name of the test: e.g. Functional.hello_world
    • so the final path is something like: /home/fuego/fuego.Functional.hello_world
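The path construction above can be sketched in a few lines of shell (the values are the examples from this page):

```shell
# Example values from this page; real values come from the board and
# test configuration
BOARD_TESTDIR=/home/fuego
TESTDIR=Functional.hello_world

# Fuego places test materials in $BOARD_TESTDIR/fuego.$TESTDIR
target_dir="$BOARD_TESTDIR/fuego.$TESTDIR"
echo "$target_dir"
```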

test script names

  • each test has a script called 'fuego_test.sh'
    • this script is run on the host
  • often there is a target-side test program or script, which is invoked by the host-side test
    • there is no convention for the name of the target-side script
    • it is referenced inside the host-side 'fuego_test.sh'
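A hedged sketch of what a host-side 'fuego_test.sh' can look like. The phase-function names (test_build, test_deploy, test_run) follow Fuego conventions, but the bodies are placeholders, and the 'put' and 'report' helpers (normally provided by Fuego's host-side API) are stubbed here so the sketch runs standalone:

```shell
# Stubs for illustration only; real Fuego provides these helpers:
# 'put' copies files to the target, 'report' runs a command there
put()    { echo "put: $*"; }
report() { echo "report: $*"; }

BOARD_TESTDIR=/home/fuego          # example value from this page
TESTDIR=Functional.hello_world

test_build()  { :; }               # would compile the test program on the host
test_deploy() { put hello "$BOARD_TESTDIR/fuego.$TESTDIR/"; }
test_run()    { report "cd $BOARD_TESTDIR/fuego.$TESTDIR; ./hello"; }

test_build && test_deploy && test_run
```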

APIs

Provided by board library: fuego_board_function_lib.sh

This is a library of shell functions.

board-side functions:

  • set_init_manager()
  • detect_logger_service()
  • exec_service_on_target()
  • detect_active_eth_device()
  • get_service_status()

host-side functions: See http://fuegotest.org/wiki/Test_Script_APIs

test variables

Provided in environment variables, prefixed by test name:

  • ex: FUNCTIONAL_HELLO_ARGS
  • ex:
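A minimal sketch of how a test script might consume such a variable; the '-v' fallback default is a made-up example, not a Fuego convention:

```shell
# Read the per-test variable from the environment, with a fallback
# default ('-v' is hypothetical; the variable name is from this page)
ARGS="${FUNCTIONAL_HELLO_ARGS:--v}"
echo "test args: $ARGS"
```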

Required minimum Linux command set

Tools builtin or on the PATH

  • cat, df, find, free, grep, head, logger, logread, mkdir, mount, ps, rm, rmdir, sync, tail, tee, touch, true, umount, uname, uptime, xargs, [

Required command arguments:

  • mkdir -p, rm -rf, grep -f

Tools at specific paths:

  • /sbin/reboot, /bin/true, /sbin/route

Data files:

  • /var/log/messages, /var/log/syslog
  • /proc/interrupts, /proc/sys/vm/drop_caches, /proc/<pid>/oom_score_adj

(optional)

  • /proc/config.gz
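As an illustration, a target could be checked against (a subset of) this command set with a small shell loop; the script below is a sketch, not part of any test system:

```shell
# Sketch: verify the PATH tools from the list above are present
# (logger, logread and the fixed-path tools such as /sbin/reboot
# would be checked the same way)
missing=0
for tool in cat df find grep head mkdir mount ps rm rmdir \
            sync tail tee touch true umount uname uptime xargs "["; do
    if ! command -v "$tool" >/dev/null 2>&1; then
        echo "missing: $tool"
        missing=1
    fi
done
if [ "$missing" -eq 0 ]; then
    echo "minimum command set OK"
else
    echo "some tools missing"
fi
```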

LKFT/Linaro

The Linaro tests in test-definitions try to follow LAVA conventions, so there is no specific script naming convention; however, we try to use <testname>.sh. Sometimes more than one script is needed to execute a test. There is an additional YAML test description that can be consumed by LAVA to execute tests. This way, tests can be executed in LAVA, standalone using the script, or using test-runner.py from test-definitions. There is nothing that prevents us from renaming all scripts to, say, 'runtest.sh'; it's just not very high on the priority list.

LAVA

CKI

location of tests

test invocation

CKI uses the test invocation mechanism that Beaker uses. This means having a Makefile with a "make run" target (which can call build if needed) that executes the test.

The test file is called runtest.sh, by convention, but it can be any name, as it is referenced by the test's Makefile.
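This layout can be sketched as follows; the file contents are illustrative, not taken from a real CKI test:

```shell
# Illustrative Beaker/CKI layout: a Makefile whose "run" target
# (depending on "build") invokes the conventional runtest.sh
workdir=$(mktemp -d)
cd "$workdir"

printf '#!/bin/sh\necho "test result: PASS"\n' > runtest.sh
chmod +x runtest.sh

# recipe lines need leading tabs, hence printf with explicit \t
printf 'run: build\n\t@./runtest.sh\nbuild:\n\t@echo "built"\n' > Makefile

make -s run
```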

Yocto Project ptest

ptest execution consists of multiple phases:

  • compile the tests (as part of a Yocto Project build)
  • do a "make install" of the tests to a specific directory

There may be separate make targets for these first two steps or a combined one. It depends on the software and its build system. Sometimes there is no option but to have recipe code move the files around itself.

YP has a patch that splits 'make check' into "make buildtest" and "make runtest". The libxml2 run-ptest script linked below shows how we'd use runtest on the target.

http://git.yoctoproject.org/cgit.cgi/poky/tree/meta/recipes-devtools/automake/automake/buildtest.patch

location of tests

  • the test source code is contained within the source for each package
  • for test binaries, for each piece of software there is a directory with the test materials at:

/usr/lib/<name>/ptest

test name

There is always a single file called 'run-ptest', that is placed into /usr/lib/<name>/ptest

Here are some examples:

A nice simple make invocation: http://git.yoctoproject.org/cgit.cgi/poky/tree/meta/recipes-core/libxml/libxml2/run-ptest

A not so simple execution of test scripts: http://git.yoctoproject.org/cgit.cgi/poky/tree/meta/recipes-core/util-linux/util-linux/run-ptest

test runner

The YP ptest system has a standalone software project called ptest-runner2:

https://git.yoctoproject.org/cgit.cgi/ptest-runner2/

It looks for and executes the individual run-ptest scripts and processes their output. It finds them with a simple search for /usr/lib/*/ptest/run-ptest.
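The discovery step can be emulated in a few lines of shell; a temporary root directory is used here so the sketch is self-contained (the real runner also handles output processing, timeouts, and so on):

```shell
# Emulate ptest-runner2's discovery: glob for */ptest/run-ptest under
# a (temporary) root and execute each script found
root=$(mktemp -d)
for pkg in libxml2 util-linux; do
    mkdir -p "$root/usr/lib/$pkg/ptest"
    printf '#!/bin/sh\necho "PASS: %s"\n' "$pkg" \
        > "$root/usr/lib/$pkg/ptest/run-ptest"
    chmod +x "$root/usr/lib/$pkg/ptest/run-ptest"
done

# on a real target the search root would be / (i.e. /usr/lib/*/ptest)
for script in "$root"/usr/lib/*/ptest/run-ptest; do
    "$script"
done
```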

kselftest

location of tests

  • source of tests is in: <kernel_repository>/tools/testing/selftests/<area>
  • test binaries are placed in: $INSTALL_PATH/kselftest
    • they are organized by test area, which corresponds to the directory in which they reside in the kernel source repository

test executable names

There is no standard, but the names appear to be obtainable with 'make -s -C tools/testing/selftests/<area> emit_tests'

A script is generated which runs all the tests, sourcing 'kselftest/runner.sh' and calling the test names with the function 'run_many', generating TAP output based on the return code of each test.
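The return-code-to-TAP conversion can be sketched as follows; this is loosely modeled on the behavior described above, not the actual runner.sh code:

```shell
# Run each command given as an argument and emit a TAP line based on
# its return code, preceded by a TAP plan ("1..N")
run_many() {
    echo "1..$#"
    n=0
    for t in "$@"; do
        n=$((n + 1))
        if "$t" >/dev/null 2>&1; then
            echo "ok $n $t"
        else
            echo "not ok $n $t"
        fi
    done
}

run_many true false
```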

LTP

  • test selection via 'scenario' or 'runtest' files (located in <topdir>/runtest)
  • test skiplist support via runltp -S option
    • ex: runltp ... -S skiplist.txt ...
  • test command line argument control is via the runtest file
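The effect of a skiplist can be sketched with 'grep -v -f', filtering runtest-file entries by name; this mirrors the spirit of 'runltp -S', and the file contents below are invented examples, not real LTP test cases:

```shell
# Build a toy runtest file and skiplist, then filter out skipped tests
workdir=$(mktemp -d)
cat > "$workdir/runtest" <<'EOF'
test01 cmd01 -a 1
test02 cmd02 -b 2
test03 cmd03
EOF
printf 'test02\n' > "$workdir/skiplist.txt"

# keep only entries whose name is not in the skiplist
grep -v -w -f "$workdir/skiplist.txt" "$workdir/runtest"
```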

Standards

Makefile targets

  • Makefile targets for software testing:
    • make test
    • make check

sources

If a program uses a Makefile to build and install the software under test, then it should provide one of the following Makefile targets to support testing of the software.

"make test"

or

"make check"

See https://www.gnu.org/software/make/manual/make.html#Goals (section "9.2 Arguments to Specify the Goals")

See also: https://www.gnu.org/software/make/manual/make.html#Standard-Targets

board-local test APIs