Test definition survey

From eLinux.org
Revision as of 12:12, 18 January 2019 by Tim Bird (talk | contribs) (LAVA)

Here is a list of test definition fields, attributes, file formats, operations, instructions, functions, etc. (I won't know exactly what they consist of until I see them).

This is the object in your test system that "defines" what a test is. It likely has meta-data about the program to run, how to get the program started, maybe what things are required for the program to run, how the results should be interpreted, etc.

Survey - link to test definitions

Here are links to test definitions in different systems: one simple, one characteristic, and a link to a repository containing many of them:

Fuego

fuego files

  • fuego_test.sh
  • spec.json
  • parser.py - has the testlog parser for this test
  • criteria.json - has the pass criteria for this test
  • test.yaml - has meta-data for this test
  • chart_config.json - has charting configuration
  • reference.json - has units for test results
  • docs - directory of test and testcase documentation
  • (test program source) - tarball, or repository reference for test program source
  • (patches against test program source) - changes for test program source

jenkins files

  • config.xml (job file) - Jenkins job description for a test ((board, spec, test) combination)

fields

  • config.xml::actions
  • config.xml::descriptions
  • config.xml::keepDependencies
  • config.xml::scm
  • config.xml::assignedNode - tag for which board or set of boards can run this job
  • config.xml::canRoam
  • config.xml::disabled
  • config.xml::blockBuildWhenDownstreamBuilding
  • config.xml::blockBuildWhenUpstreamBuilding
  • config.xml::triggers
  • config.xml::concurrentBuild
  • config.xml::customWorkspace
  • config.xml::builders
  • config.xml::hudson.tasks.Shell:command - Fuego command to run (includes board, spec, timeout, flags, and test)
  • config.xml::publishers
  • config.xml::flotile.FlotPublisher
  • config.xml::hudson.plugins.descriptionSetter.DescriptionSetterPublisher(:regexp,:regexpForFailed,:description,:descriptionForFailed,:setForMatrix)
  • config.xml::buildWrappers
  • fuego_test.sh::NEED_* - 'need' variables for declarative dependency checks
  • fuego_test.sh::tarball - program source reference (can be local tarball or remote tarball, or url?)
  • fuego_test.sh::test_pre_check - (optional) shell function to test dependencies and pre-conditions
  • fuego_test.sh::test_build - shell function to build test program source
  • fuego_test.sh::test_deploy - shell function to put test program on target board
  • fuego_test.sh::test_run - shell function to run test program on the target board
  • fuego_test.sh::test_snapshot - (optional) shell function to gather machine status
  • fuego_test.sh::test_fetch_results - (optional) shell function to gather results and logs from target board
  • fuego_test.sh::test_processing - shell function to determine result
  • spec.json::testName - name of the test
  • spec.json::specs - list of test specs (variants)
  • spec.json::specs[<specname>].xxx - arbitrary test variables for the indicated test spec
  • spec.json::specs[<specname>].skiplist - list of testcases to skip
  • spec.json::specs[<specname>].extra_success_links - links for Jenkins display on test success
  • spec.json::specs[<specname>].extra_fail_links - links for Jenkins display on test failure
  • parser.py - python code to parse the testlog from (test_run::report(), report_live(), and log_this()) calls
  • reference.json::test_sets::name
  • reference.json::test_sets::test_cases - list of test cases in this test_set
  • reference.json::test_sets::test_cases::name
  • reference.json::test_sets::test_cases::measurements - list of measurements in this test case
  • reference.json::test_sets::test_cases::measurements::name
  • reference.json::test_sets::test_cases::measurements::unit
  • criteria.json::schema_version
  • criteria.json::criteria - list of results pass criteria
  • criteria.json::criteria::tguid - test globally unique identifier for this criteria
  • criteria.json::criteria::reference - reference condition for this criteria
  • criteria.json::criteria::reference::value - reference value(s) for this criteria
  • criteria.json::criteria::reference::operator - operator for this condition (eq, le, lt, ge, gt, bt, ne)
  • criteria.json::criteria::min_pass
  • criteria.json::criteria::max_fail
  • criteria.json::criteria::fail_ok_list
  • criteria.json::criteria::must_pass_list
  • test.yaml::fuego_package_version - indicates the version of package (in case of changes to the package schema). For now, this is always 1.
  • test.yaml::name - has the full Fuego name of the test. Ex: Benchmark.iperf
  • test.yaml::description - has an English description of the test
  • test.yaml::license - has an SPDX identifier for the test.
  • test.yaml::author - the author or authors of the base test
  • test.yaml::maintainer - the maintainer of the Fuego materials for this test
  • test.yaml::version - the version of the base test
  • test.yaml::fuego_release - the version of Fuego materials for this test. This is a monotonically incrementing integer, starting at 1 for each new version of the base test.
  • test.yaml::type - either Benchmark or Functional
  • test.yaml::tags - a list of tags used to categorize this test. This is intended to be used in an eventual online test store.
  • test.yaml::tarball_src - a URL where the tarball was originally obtained from
  • test.yaml::gitrepo - a git URL where the source may be obtained from
  • test.yaml::host_dependencies - a list of Debian package names that must be installed in the docker container in order for this test to work properly. This field is optional, and indicates packages needed that are beyond those included in the standard Fuego host distribution in the Fuego docker container.
  • test.yaml::params - a list of test variables that may be used with this test, including their descriptions, whether they are optional or required, and an example value for each one
  • test.yaml::data_files - a list of the files that are included in this test. This is used as the manifest for packaging the test.
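A criteria.json combining these fields might look like the following sketch (the tguid names and reference value are invented for illustration):

```json
{
    "schema_version": "1.0",
    "criteria": [
        {
            "tguid": "default.bandwidth",
            "reference": {
                "value": 500,
                "operator": "ge"
            }
        },
        {
            "tguid": "default",
            "max_fail": 0
        }
    ]
}
```

The first criterion passes the 'bandwidth' measurement only if its value is greater than or equal to 500; the second requires that no testcase under 'default' fails.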
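
To make the fuego_test.sh fields concrete, here is a hypothetical minimal fuego_test.sh (not taken from the Fuego repository; the test name, tarball, and log_compare pattern are invented). The test_* function names are the real Fuego hooks listed above, and helpers such as assert_define, put, report, and log_compare, plus variables like $BOARD_TESTDIR and $TESTDIR, are supplied by the Fuego core scripts, so this fragment only does anything useful inside the Fuego harness:

```shell
# Hypothetical fuego_test.sh sketch; helper functions and variables
# (assert_define, put, report, log_compare, $BOARD_TESTDIR, $TESTDIR)
# come from the Fuego core scripts.
tarball=hello-test-1.0.tar.gz

function test_pre_check {
    # Verify that required test variables are defined
    assert_define PROGRAM_ARGS
}

function test_build {
    # Build the test program from the unpacked tarball
    make
}

function test_deploy {
    # Copy the test binary to the target board
    put hello $BOARD_TESTDIR/fuego.$TESTDIR/
}

function test_run {
    # Execute the test on the board, capturing output in the testlog
    report "cd $BOARD_TESTDIR/fuego.$TESTDIR; ./hello $PROGRAM_ARGS"
}

function test_processing {
    # Declare success if the testlog has one line matching SUCCESS
    log_compare "$TESTDIR" "1" "SUCCESS" "p"
}
```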
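The parsing step can be illustrated with a standalone sketch. A real parser.py imports Fuego's parser library and hands its results to that library's processing function; this simplified version only shows the core pattern of extracting results from a testlog with regular expressions (the log format and names here are invented):

```python
# Simplified, self-contained sketch of what a Fuego parser.py does:
# scan the testlog and build a dict of {test id: result or measurement}.
import re

# Example testlog content, in an invented format
TESTLOG = """\
TEST-1: PASS
TEST-2: FAIL
bandwidth: 941 Mbits/sec
"""

def parse(log_text):
    """Extract pass/fail results and numeric measurements from a testlog."""
    results = {}
    # Functional results: lines like "TEST-1: PASS"
    for m in re.finditer(r"^(\S+): (PASS|FAIL)$", log_text, re.M):
        results[m.group(1)] = m.group(2)
    # Benchmark measurements: lines like "bandwidth: 941 Mbits/sec"
    for m in re.finditer(r"^(\w+): ([\d.]+) \S+$", log_text, re.M):
        results[m.group(1)] = float(m.group(2))
    return results

print(parse(TESTLOG))
# → {'TEST-1': 'PASS', 'TEST-2': 'FAIL', 'bandwidth': 941.0}
```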

Example

Here is an example test.yaml file, for the package Benchmark.iperf3:

fuego_package_version: 1
name: Benchmark.iperf3
description: |
    iPerf3 is a tool for active measurements of the maximum achievable
    bandwidth on IP networks.
license: BSD-3-Clause
author: |
    Jon Dugan, Seth Elliott, Bruce A. Mah, Jeff Poskanzer, Kaustubh Prabhu,
    Mark Ashley, Aaron Brown, Aeneas Jaißle, Susant Sahani, Bruce Simpson,
    Brian Tierney.
maintainer: Daniel Sangorrin <daniel.sangorrin@toshiba.co.jp>
version: 3.1.3
fuego_release: 1
type: Benchmark
tags: ['network', 'performance']
tarball_src: https://iperf.fr/download/source/iperf-3.1.3-source.tar.gz
gitrepo: https://github.com/esnet/iperf.git
params:
    - server_ip:
        description: |
            IP address of the server machine. If not provided, then SRV_IP
            _must_ be provided in the board file; otherwise the test will fail.
            If the server IP is assigned to the host, the test automatically
            starts the iperf3 server daemon. Otherwise, the tester _must_ make
            sure that iperf3 -V -s -D is already running on the server machine.
        example: 192.168.1.45
        optional: yes
    - client_params:
        description: extra parameters for the client
        example: -p 5223 -u -b 10G
        optional: yes
data_files:
    - chart_config.json
    - fuego_test.sh
    - parser.py
    - spec.json
    - criteria.json
    - iperf-3.1.3-source.tar.gz
    - reference.json
    - test.yaml

LAVA

For the 'files' part, each test in test-definitions is stored in a separate directory. The directory has to contain at least a YAML file that is compliant with the LAVA test definition format. We have a sanity check script (validate.py) that is executed on every pull request; this ensures that all files pushed to the repository are compliant. The usual practice is that the test directory also contains a test script (a shell script). The script is responsible for installing dependencies, running the tests, and parsing the results. There is no mandatory format for the script, but test-definitions provides a library of functions that help with writing test scripts; there are libraries for 'linux' and 'android'. We also host a directory of manual tests and a simple executor for them, but in the context of automated testing these are irrelevant.

files

  • <testname>.sh - the script that runs on the target
  • <testname>.yaml - describes test properties

fields

  • busybox.sh - shell code to execute things on target
  • testname.yaml::metadata::format - format of this yaml test definition file
  • testname.yaml::metadata::name - name of this test
  • testname.yaml::metadata::description - description of the test
  • testname.yaml::metadata::maintainer - list of email addresses of test maintainer(s)
  • testname.yaml::metadata::os - list of Linux distributions where this test can run
  • testname.yaml::metadata::scope - can be 'functional'
  • testname.yaml::metadata::devices - list of device types (board names) where this test can run
  • testname.yaml::params - list of arbitrary test variables
  • testname.yaml::run - items for test execution
  • testname.yaml::run::steps - shell lines to execute the test (executed on target board)
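
Putting these fields together, a hypothetical minimal LAVA test definition might look like this (the name, maintainer address, device type, and steps are invented; 'Lava-Test Test Definition 1.0' is the format string LAVA test definitions commonly declare):

```yaml
metadata:
    format: Lava-Test Test Definition 1.0
    name: busybox-smoke
    description: "Sanity-check a few busybox applets"
    maintainer:
        - someone@example.com
    os:
        - debian
    scope:
        - functional
    devices:
        - beaglebone-black
params:
    # Arbitrary test variables, visible to the steps as shell variables
    SKIP_INSTALL: "false"
run:
    steps:
        - cd automated/linux/busybox
        - ./busybox.sh
```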

Yocto Project

An "on target" test of the compiler:

(the same directory has simple Python/Perl tests and so on)

http://git.yoctoproject.org/cgit.cgi/poky/tree/meta/lib/oeqa/files (the test files themselves, for context; they're just hello-world examples)
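
Real oeqa runtime tests subclass OERuntimeTestCase and run commands on the device under test via self.target.run(); since that requires the oeqa framework and a target board, here is a self-contained sketch that mimics the shape with plain unittest and a local stand-in command (everything in it is illustrative):

```python
import subprocess
import unittest

class HelloWorldTest(unittest.TestCase):
    """Schematic stand-in for an oeqa 'on target' hello-world test."""

    def test_hello_world(self):
        # An oeqa compiler test would copy hello-world source to the
        # target, compile it, and run it via self.target.run(); here we
        # just run a local command as a stand-in.
        proc = subprocess.run(
            ["echo", "Hello world!"], capture_output=True, text=True
        )
        self.assertEqual(proc.returncode, 0)
        self.assertEqual(proc.stdout.strip(), "Hello world!")

# Run the test case directly; a harness would normally discover it
suite = unittest.defaultTestLoader.loadTestsFromTestCase(HelloWorldTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```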


This is a "selftest" for the "devtool" command that is part of the overall build system; it's a bit more complex, with shared functions and tests for each of devtool's subcommands.

This has all the test code and core test definitions. Test definitions are in cases directories under "manual", "runtime", "sdk" and "selftest" directories.