= LAVA survey response =
LAVA survey response provided by Matt Hart

Survey Questions

 * What is the name of your test framework? LAVA

Which of the aspects of the CI loop below does your test framework perform?

Lab/Board Farm, Scheduler, DUT Control (Deploy, Provision, Test, Collect Results)

Does your test framework:

source code access

 * access source code repositories for the software under test? No
 * access source code repositories for the test software? Yes, all the common VCS
 * include the source for the test software? No
 * provide interfaces for developers to perform code reviews? No
 * detect that the software under test has a new version? No, most people would use Jenkins or similar
 * if so, how? (e.g. polling a repository, a git hook, scanning a mail list, etc.)
 * detect that the test software has a new version?

test definitions
Does your test system:
 * have a test definition repository? LAVA does not come with tests; however, Linaro maintains a set of job definitions
 * if so, what data format or language is used (e.g. yaml, json, shell script)? YAML

Does your test definition include:
 * source code (or source code location)? Yes
 * dependency information? Yes
 * execution instructions? Yes
 * command line variants? Yes
 * environment variants? Yes
 * setup instructions? Yes
 * cleanup instructions? Yes
 * if anything else, please describe:
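The test-definition bullets above map directly onto the Lava-Test shell format. A minimal sketch, assuming a hypothetical dependency and hypothetical test names:

```yaml
metadata:
  format: Lava-Test Test Definition 1.0
  name: smoke-tests
  description: "Basic smoke tests (hypothetical example)"

install:
  deps:            # dependency information: packages installed before the run
    - curl

run:
  steps:           # execution instructions, written as shell
    - lava-test-case uname --shell uname -a
    - lava-test-case network --shell curl -sf http://example.com
```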

Does your test system:
 * provide a set of existing tests? No
 * if so, how many?

build management
Does your test system:
 * build the software under test (e.g. the kernel)? No
 * build the test software? Sometimes; many LAVA users build the test software on the device under test before executing it
 * build other software (such as the distro, libraries, firmware)? No
 * support cross-compilation? No
 * require a toolchain or build system for the SUT? Yes
 * require a toolchain or build system for the test software? No, it can be built on the device, though pre-built is obviously faster
 * come with pre-built toolchains? No
 * store the build artifacts for generated software? No, however pushing to external storage is supported (Artifactorial)
 * in what format is the build metadata stored (e.g. json)?
 * are the build artifacts stored as raw files or in a database?
 * if a database, what database?

Test scheduling/management
Does your test system:
 * check that dependencies are met before a test is run? Yes
 * schedule the test for the DUT? Yes
 * select an appropriate individual DUT based on SUT or test attributes? Yes
 * reserve the DUT? Yes
 * release the DUT? Yes
 * install the software under test to the DUT? Yes
 * install required packages before a test is run? Yes
 * require a particular bootloader on the DUT? (e.g. grub, uboot, etc.) Yes
 * deploy the test program to the DUT? Yes
 * prepare the test environment on the DUT? Yes
 * start a monitor (another process to collect data) on the DUT? No
 * start a monitor on external equipment? No
 * initiate the test on the DUT? Yes
 * clean up the test environment on the DUT? Yes
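The scheduling and DUT-setup steps above are all expressed in a single YAML job definition submitted to the server. A minimal sketch; the device type, URLs, and repository are hypothetical:

```yaml
device_type: beaglebone-black
job_name: kernel-smoke-test
priority: medium
visibility: public
timeouts:
  job:
    minutes: 30

actions:
  - deploy:        # install the software under test to the DUT
      to: tftp
      kernel:
        url: http://example.com/builds/zImage
  - boot:          # power cycle and boot the DUT
      method: u-boot
  - test:          # deploy and run the test software
      definitions:
        - repository: https://git.example.com/tests.git
          from: git
          path: smoke-tests.yaml
          name: smoke-tests
```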

DUT control
Does your test system:
 * store board configuration data? Yes
 * in what format? YAML, rendered from Jinja2
 * store external equipment configuration data? Yes
 * in what format? YAML, rendered from Jinja2
 * power cycle the DUT? Yes
 * monitor the power usage during a run? Possibly, there is basic support for ARM energy probes
 * gather a kernel trace during a run? Yes
 * claim other hardware resources or machines (other than the DUT) for use during a test? Yes, and it can claim other DUTs in a multi-node job
 * reserve a board for interactive use (i.e. remove it from automated testing)? Not directly, but users can use LAVA to provision a device and hand over SSH access
 * provide a web-based control interface for the lab? Yes
 * provide a CLI control interface for the lab? Yes
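Board configuration is held as a Jinja2 "device dictionary" that extends a shared device-type template and is rendered to YAML. A sketch; the PDU hostname, port numbers, and console server are hypothetical:

```jinja
{% extends 'beaglebone-black.jinja2' %}
{% set power_on_command = 'pduclient --hostname pdu01 --port 4 --command on' %}
{% set power_off_command = 'pduclient --hostname pdu01 --port 4 --command off' %}
{% set hard_reset_command = 'pduclient --hostname pdu01 --port 4 --command reboot' %}
{% set connection_command = 'telnet ser2net01 7004' %}
```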

Run artifact handling
Does your test system:
 * store run artifacts? No
 * in what format?
 * put the run meta-data in a database? Yes
 * if so, which database? LAVA results database, postgres
 * parse the test logs for results? Yes, during the run 
 * convert data from test logs into a unified format? Yes
 * if so, what is the format? Stored in database, can be fetched as YAML
 * evaluate pass criteria for a test (e.g. ignored results, counts or thresholds)? Yes
 * do you have a common set of result names? Yes
 * if so, what are they? Pass, Fail, Skip, Unknown
 * How is run data collected from the DUT?
 * e.g. by pushing from the DUT, or pulling from a server? Parsed from the DUT serial output on the fly
 * How is run data collected from external equipment? External equipment is considered another DUT
 * Is external equipment data parsed? Same as other DUT
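Run data parsed from the serial console is stored in the results database and can be fetched as YAML. A sketch of what a fetched record might look like; the suite and test names are hypothetical and the field set is an approximation of the exported format:

```yaml
- suite: smoke-tests
  name: uname
  result: pass
- suite: smoke-tests
  name: network
  result: fail
```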

User interface
Does your test system:
 * have a visualization system? Yes
 * show build artifacts to users? No
 * show run artifacts to users? Yes, if pushed to external storage a link can be put in the results
 * do you have a common set of result colors? No
 * if so, what are they? N/A
 * generate reports for test runs? No
 * notify users of test results by e-mail? Only job status
 * can you query (aggregate and filter) the build meta-data? No
 * can you query (aggregate and filter) the run meta-data? Yes, if stored as a result
 * what language or data format is used for online results presentation? (e.g. HTML, Javascript, xml, etc.) JavaScript
 * what language or data format is used for reports? (e.g. PDF, excel, etc.) N/A
 * does your test system have a CLI control tool? Yes
 * what is it called? lavacli

Languages:
Examples: json, python, yaml, C, javascript, etc.

 * what is the base language of your test framework core? Python
 * what languages or data formats is the user required to learn? (as opposed to those used internally)
 * YAML for writing a job definition and device description
 * Shell script for writing a test definition

Can a user do the following with your test framework:

 * manually request that a test be executed (independent of a CI trigger)? Yes
 * see the results of recent tests? Yes
 * set the pass criteria for a test? Yes, would require editing the job
 * set the threshold value for a benchmark test? Yes, would require editing the job
 * set the list of testcase results to ignore? Yes, would require editing the job
 * provide a rating for a test? (e.g. give it 4 stars out of 5) No
 * customize a test? Yes, would require editing the job
 * alter the command line for the test program? Yes, would require editing the job
 * alter the environment of the test program? Yes, would require editing the job
 * specify to skip a testcase? Yes, would require editing the job
 * set a new expected value for a test? Yes, would require editing the job
 * edit the test program source? Yes, would require editing the job
 * customize the notification criteria? Yes
 * customize the notification mechanism (eg. e-mail, text) Yes
 * generate a custom report for a set of runs? Yes
 * save the report parameters to generate the same report in the future? Yes

Requirements
Does your test framework:
 * require minimum software on the DUT? Yes, a bootloader
 * If so, what? A POSIX shell for most DUTs; however, IoT devices are supported without a shell
 * require minimum hardware on the DUT (e.g. memory)? Yes, a serial port
 * require agent software on the DUT? (e.g. extra software besides production software) No
 * If so, what agent?
 * is there optional agent software or libraries for the DUT? No
 * require external hardware in your labs? Power control is required for most DUT types

APIs
Does your test framework:
 * use existing APIs or data formats to interact within itself, or with 3rd-party modules? ZMQ (see http://zeromq.org/)
 * have a published API for any of its sub-module interactions (any of the lines in the diagram)? No
 * Please provide a link or links to the APIs?

What is the nature of the APIs you currently use? Sorry - this is kind of open-ended... Are they:
 * RPCs?
 * Unix-style? (command line invocation, while grabbing sub-tool output)
 * compiled libraries?
 * interpreter modules or libraries?
 * web-based APIs?
 * something else?


 * ZMQ is used between LAVA workers (dispatchers) and the master, to schedule jobs
 * ZMQ is kind of like message-based network sockets (Note by Tim)
 * XML-RPC is used by users to submit jobs, access results, and control the lab
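The user-facing XML-RPC API can be driven from Python's standard library alone. A minimal sketch; the server URL is hypothetical, and the method names in the comments are assumptions about the scheduler/results namespaces (check your instance's API documentation):

```python
import xmlrpc.client

# Hypothetical LAVA instance; authenticated endpoints embed a user token
# in the URL. Constructing the proxy does not contact the server.
server = xmlrpc.client.ServerProxy("https://lava.example.com/RPC2")

# Assumed calls (not verified against a live instance):
#   server.scheduler.submit_job(yaml_string)          -> job id
#   server.results.get_testjob_results_yaml(job_id)   -> results as YAML
print(type(server).__name__)  # → ServerProxy
```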

Relationship to other software:

 * what major components does your test framework use? Jenkins, Squad
 * does your test framework interoperate with other test frameworks or software?
 * which ones? A common LAVA setup is Jenkins to create the builds, LAVA to execute the tests, and Squad/KernelCI to consume the results

Overview
Please list your major components here:


 * LAVA Server - (DUT Scheduler) - UI, Results storage, Device configuration files, Job scheduling, User interaction
 * LAVA Dispatcher - (DUT Controller) - Device interaction (deploy, boot, execute tests), Device Power Control, Test results Parsing

Glossary
Also, please suggest any terms or concepts that are missing.

PDU - Power Distribution Unit; it has become a standard term for automated power control.

= Additional Data =