0-day survey response

0-day survey response provided by Philip Li

Survey Questions

  • What is the name of your test framework? 0-day CI

Does your test framework:

source code access

  • access source code repositories for the software under test? yes
  • access source code repositories for the test software? yes
  • include the source for the test software? yes
  • provide interfaces for developers to perform code reviews? no
  • detect that the software under test has a new version? yes
    • if so, how? by polling the repo and scanning the mailing list regularly (a minimal sketch follows this list)

Note: this software is not open source, nor part of the existing 0-day repository.

  • detect that the test software has a new version? yes; we check the test software regularly and fetch it to the local host for later use
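
As a minimal sketch of the polling approach in shell (the repository path, branch name, and trigger action here are illustrative placeholders, not 0-day's actual implementation):

# Hypothetical polling sketch: fetch the remote and compare branch heads.
# REPO and BRANCH are placeholders.
REPO=/srv/git/linux.git
BRANCH=master
old=$(git -C "$REPO" rev-parse "origin/$BRANCH")
git -C "$REPO" fetch --quiet origin
new=$(git -C "$REPO" rev-parse "origin/$BRANCH")
if [ "$old" != "$new" ]; then
    echo "new version detected: $new"  # a real system would queue build/test jobs here
fi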

test definitions

Does your test system:

  • have a test definition repository? yes
    • if so, what data format or language is used? yaml and shell (a hypothetical example follows the next list)

Does your test definition include:

  • source code (or source code location)? yes
  • dependency information? yes
  • execution instructions? yes
  • command line variants? yes
  • environment variants? yes
  • setup instructions? yes
  • cleanup instructions? no
    • if anything else, please describe:
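
As an illustration of how these fields can fit together, here is a hypothetical yaml-plus-shell job in the spirit of the format described above; all field names and values are invented for illustration and do not reproduce the exact lkp-tests schema:

# Hypothetical job definition written from shell; field names are invented.
mkdir -p jobs
cat > jobs/example-bench.yaml <<'EOF'
suite: example-bench
source: https://example.org/example-bench.git  # source code location
depends:                                       # dependency information
  - gcc
  - make
setup: ./configure && make                     # setup instructions
run: ./example-bench                           # execution instructions
args:                                          # command line variants
  - --threads=1
  - --threads=8
env:                                           # environment variants
  LANG: C
EOF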

Does your test system:

  • provide a set of existing tests? yes
    • if so, how many? 70+

build management

Does your test system:

  • build the software under test (e.g. the kernel)? yes
  • build the test software? yes
  • build other software (such as the distro, libraries, firmware)? yes; for certain distros we use makepkg to build a library if it does not exist
  • support cross-compilation? yes
  • require a toolchain or build system for the SUT? yes
  • require a toolchain or build system for the test software? yes
  • come with pre-built toolchains? yes
  • store the build artifacts for generated software? yes
    • in what format is the build metadata stored (e.g. json)? yaml (a hypothetical example follows this list)
    • are the build artifacts stored as raw files or in a database? raw files
      • if a database, what database?
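
For illustration only, a build-metadata record in yaml stored next to the raw artifact files might look like the following; the paths and field names are hypothetical, not 0-day's actual schema:

# Hypothetical artifact layout and metadata record; all names are invented.
mkdir -p artifacts/v5.0-rc3-00042
cat > artifacts/v5.0-rc3-00042/build-meta.yaml <<'EOF'
commit: 0123456789abcdef0123456789abcdef01234567
kconfig: x86_64-defconfig
compiler: gcc-8
status: success
artifacts:
  - vmlinuz
  - modules.tgz
EOF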

Test scheduling/management

Does your test system:

  • check that dependencies are met before a test is run? yes; it currently checks kernel kconfig dependencies

Also, in lkp-tests/jobs there are need_* declarations in some of the yaml files: need_memory, need_modules, need_cpu, need_x.

In addition, 0-day supports need_kernel_headers and need_kernel_selftests.

need_kconfig can be defined like so:

need_kconfig:
 - CONFIG_RUNTIME_TESTING_MENU=y
 - CONFIG_TEST_FIRMWARE
 - CONFIG_TEST_USER_COPY

See https://github.com/intel/lkp-tests/blob/master/include/kernel_selftests

  • schedule the test for the DUT?
    • select an appropriate individual DUT based on SUT or test attributes? yes
    • reserve the DUT? yes
    • release the DUT? yes
  • install the software under test to the DUT? yes
  • install required packages before a test is run? yes
  • require a particular bootloader on the DUT? (e.g. grub, uboot, etc.) no
  • deploy the test program to the DUT? yes
  • prepare the test environment on the DUT? yes
  • start a monitor (another process to collect data) on the DUT? yes
  • start a monitor on external equipment? yes, like pmeter
  • initiate the test on the DUT? yes
  • clean up the test environment on the DUT? no; the environment (tmp, overlay) is cleaned up during the reboot/kexec into the next test

DUT control

Does your test system:

  • store board configuration data? yes
    • in what format? yaml
  • store external equipment configuration data? yes
    • in what format? yaml
  • power cycle the DUT? yes
  • monitor the power usage during a run? yes
  • gather a kernel trace during a run? yes
  • claim other hardware resources or machines (other than the DUT) for use during a test? yes
  • reserve a board for interactive use (i.e. remove it from automated testing)? yes
  • provide a web-based control interface for the lab? not yet; web UI work started this year, focused first on test status queries
  • provide a CLI control interface for the lab? yes

Run artifact handling

Does your test system:

  • store run artifacts? yes
    • in what format? raw file
  • put the run meta-data in a database? no
    • if so, which database?
  • parse the test logs for results? yes
  • convert data from test logs into a unified format? yes
    • if so, what is the format? json (a sketch follows this list)
  • evaluate pass criteria for a test (e.g. ignored results, counts or thresholds)? yes
  • do you have a common set of result names? no; we use the result names of each integrated test suite
    • if so, what are they?
  • How is run data collected from the DUT?
    • e.g. by pushing from the DUT, or pulling from a server?
  • How is run data collected from external equipment?
  • Is external equipment data parsed?
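
As a sketch of converting a raw test log into unified json records (the TAP-like log format, the key names, and the awk one-liner are all illustrative, not 0-day's actual parser):

# Hypothetical log-to-json conversion; log format and key names are invented.
printf 'ok 1 selftest_a\nnot ok 2 selftest_b\n' > test.log
awk '{ status = ($1 == "ok") ? "pass" : "fail";
       printf "{\"testcase\": \"%s\", \"result\": \"%s\"}\n", $NF, status }' test.log

This prints one json object per testcase, e.g. {"testcase": "selftest_a", "result": "pass"}.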

User interface

Does your test system:

  • have a visualization system? no
  • show build artifacts to users? yes
  • show run artifacts to users? yes
  • do you have a common set of result colors? no
    • if so, what are they?
  • generate reports for test runs?
  • notify users of test results by e-mail? yes, but only for kernel build status or regression reports
  • can you query (aggregate and filter) the build meta-data? yes
  • can you query (aggregate and filter) the run meta-data? yes
  • what language or data format is used for online results presentation? N/A
  • what language or data format is used for reports? (e.g. PDF, excel, etc.) N/A
  • does your test system have a CLI control tool? yes
    • what is it called? lkp
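
For reference, a typical flow with the lkp tool looks roughly like this; the subcommands are the ones documented in the lkp-tests repository, and the hackbench job is just one of the shipped examples:

# Typical lkp flow (per the lkp-tests documentation; adjust paths to taste).
lkp install                        # install the dependencies lkp needs on this host
lkp split-job jobs/hackbench.yaml  # expand the job matrix into concrete job files
lkp run ./hackbench-*.yaml         # run a generated job on this machine
lkp result hackbench               # browse the collected result files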

Languages:

Examples: json, python, yaml, C, javascript, etc.

  • what is the base language of your test framework core? shell, ruby

What languages or data formats are users required to learn? shell

Can a user do the following with your test framework:

  • manually request that a test be executed (independent of a CI trigger)? yes
  • see the results of recent tests? yes
  • set the pass criteria for a test? yes
    • set the threshold value for a benchmark test? no
    • set the list of testcase results to ignore? yes
  • provide a rating for a test? (e.g. give it 4 stars out of 5) no
  • customize a test?
    • alter the command line for the test program? yes
    • alter the environment of the test program? yes
    • specify to skip a testcase? no
    • set a new expected value for a test? no
    • edit the test program source? yes
  • customize the notification criteria?
    • customize the notification mechanism (e.g. e-mail, text)? no
  • generate a custom report for a set of runs? no
  • save the report parameters to generate the same report in the future? yes

Requirements

Does your test framework:

  • require minimum software on the DUT? yes
    • If so, what? POSIX shell, PXE boot
  • require minimum hardware on the DUT (e.g. memory)? yes
    • If so, what? PXE boot (network)
  • require agent software on the DUT? yes
    • If so, what agent? lkp init scripts installed during system boot
  • is there optional agent software or libraries for the DUT? no
  • require external hardware in your labs? yes: power control and serial cables

APIS

Does your test framework:

  • use existing APIs or data formats to interact within itself, or with 3rd-party modules? yes
  • have a published API for any of its sub-module interactions (any of the lines in the diagram)? no
    • Please provide a link or links to the APIs?
  • What is the nature of the APIs you currently use?

Are they:

    • RPCs?
    • Unix-style? yes, in part
    • compiled libraries?
    • interpreter modules or libraries?
    • web-based APIs? yes, in part
    • something else?

Relationship to other software:

  • what major components does your test framework use? Jenkins, MongoDB
  • does your test framework interoperate with other test frameworks or software?
    • which ones? we integrate many industry test suites to execute tests

Overview

Please list the major components of your test system.

  • KBuild - git polling, mailing list fetching, kernel compiling, static analysis, notification
  • LKP - test management, scheduling, execution, result analysis, cyclic testing
  • Jenkins - UI for manually configuring/scheduling required tests (calls into the LKP component)
  • Bisection - bisects regressions to identify the bad commit

Additional Data