Iperf test definition comparison

From eLinux.org

This page compares the test definitions from Fuego and Linaro for the iperf test.

(NOTE: This page is under construction!! Much of it is currently copied and pasted from the sysbench page.)

Differences

  • Fuego only runs ...
  • Linaro runs ...

High Level Assumptions

  • Fuego does not disturb the system
    • if something is installed, it is removed, by default
    • if something is started, it is stopped
  • Fuego assumes you can run another test upon completion of one test
  • Linaro assumes a clean install that will be replaced on the next test
    • Things can be modified (packages installed, and forgotten about)
  • Fuego treats the system like a final product that is immutable
  • Linaro treats the system like a development system that is mutable

Preparation

Building

  • Fuego cross-builds the test software
  • Linaro does not build the software

Pre-requisites

  • Fuego checks for cross-compiler variables
  • Linaro checks the target distro to decide how to install the test

Alterations

  • Linaro can install packages required by iperf on the board
  • Fuego deploys the test software to the board

Execution

  • Linaro runs a single iperf3 client test (optionally with parallel streams)
  • Fuego runs the transmit and bidirectional tests together in one invocation
  • Factorization of the test is different
    • dependency check, alterations, test execution, and parsing are done on the board for Linaro
    • dependency check, test execution, and parsing are done on the host for Fuego

Parsing

  • Linaro parses the iperf3 output on the target using awk
  • Fuego parses the combined output on the host using Python (parser.py)
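The board-side parsing can be tried in isolation. A minimal sketch, assuming a single-stream iperf3 summary line of the shape that the awk command in iperf.sh expects (the numbers are invented):

```shell
#!/bin/sh
# A hypothetical single-stream iperf3 summary line; field positions match
# the awk command used in iperf.sh for THREADS=1.
line='[  5]   0.00-10.00  sec  1.10 GBytes   941 Mbits/sec    receiver'

# Same awk program as iperf.sh: the last field names the direction,
# fields 7 and 8 carry the bandwidth value and its unit.
echo "$line" | awk '{printf("iperf-%s pass %s %s\n", $NF, $7, $8)}'
# prints: iperf-receiver pass 941 Mbits/sec
```

The same program handles both the sender and receiver summary lines, which is why iperf.sh selects them with a single egrep before piping to awk.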

Results

  • The output formats differ: Fuego records named bandwidth measurements (tcp.tx, tcp.bi_tx, tcp.bi_rx); Linaro records one pass line per sender/receiver summary
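A sketch of the two result shapes (all values invented; the Fuego measurements are rendered as key=value lines purely for illustration — parser.py actually hands a dictionary to Fuego's process_data):

```shell
#!/bin/sh
# Fuego: parser.py extracts named bandwidth measurements from the
# combined log (shown here as key=value lines for illustration only):
printf 'tcp.tx=65.4\ntcp.bi_tx=55.7\ntcp.bi_rx=28.0\n'

# Linaro: iperf.sh appends one pass record per sender/receiver summary
# line to output/result.txt:
printf 'iperf-sender pass 941 Mbits/sec\niperf-receiver pass 941 Mbits/sec\n'
```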

Presentation

  • Linaro doesn't include presentation control for the test results in the test
  • Fuego's chart_config.json selects which measures are charted

Metadata

  • Fuego specifies author, license, and gitrepo for the test program
  • Linaro specifies the devices for the test to run on
  • Linaro specifies distros where test can run

Questions

  • Linaro install_deps: does this also install the package itself (with the iperf3 binary)?
  • Linaro: what does send-to-lava.sh do?


Field comparisons

Field items
Fuego item | Fuego use | Linaro item | Linaro use | Notes
fuego_test.sh:test_pre_check | check required test pre-requisites (none in this test) | - | - | iperf.sh has no explicit pre-check
fuego_test.sh:test_build | cross-build the test program from tar | iperf.sh (centos case) | download and build the test program | Linaro builds from source only on CentOS; other distros install a package
- | - | iperf.sh:install_deps | install required packages for build | Linaro has different build and dependency info per distro; Fuego has no notion of installing auxiliary packages on the board
fuego_test.sh:test_deploy | put test program on the board | iperf.sh:install_deps "iperf3" | install test program on board (locally) | -
fuego_test.sh:test_run | instructions to execute the test program on the board | iperf.yaml:run:steps | instructions to execute the test program on the board | -
parser.py | code to parse the test program log | iperf.sh egrep/awk lines | code to parse the test program log | Linaro parsing is done on board
spec.json | indicates values for test variables (SRV for this test) | iperf.sh:SERVER=, TIME=, THREADS= | indicates values for test variables | Linaro options are read on the command line of the test script
test.yaml:fuego_package | indicates type/format of test | iperf.yaml:metadata:format | indicates type/format of test | -
test.yaml:name | name of test | iperf.yaml:metadata:name | name of test | similar
test.yaml:description | description of test | iperf.yaml:metadata:description | description of test | similar
test.yaml:license/author/version | test program information | - | - | informational data
test.yaml:maintainer | maintainer of this Fuego test | iperf.yaml:metadata:maintainer | maintainer of this Linaro test | similar
test.yaml:fuego_release | Fuego revision of this test | - | - | -
test.yaml:type | type of test | iperf.yaml:metadata:scope | type of test? | -
- | - | iperf.yaml:metadata:os | OSes that this test can run on | Linaro only?
- | - | iperf.yaml:metadata:devices | devices that this test can run on | Linaro only? (Fuego board selection is done by the user when creating jobs for boards?)
test.yaml:tags | tags for this test | - | - | Fuego only?
test.yaml:params | test variable names, values, options (note: none in this test) | iperf.yaml:params | test variable names and values | similar
test.yaml:gitrepo | upstream git repository for test program | - | - | Fuego only?
test.yaml:data_files | manifest used for packaging the test | - | - | Fuego only?

Fuego source

fuego_test.sh

tarball=iperf-2.0.5.tar.gz

function test_build {
    # get updated config.sub and config.guess files, so configure
    # doesn't reject new toolchains
    cp /usr/share/misc/config.{sub,guess} .
    ./configure --host=$HOST --build=`./config.guess`
    sed -i -e "s|#define bool int|//#define bool int|g" config.h
    make config.h
    sed -i -e "s/#define HAVE_MALLOC 0/#define HAVE_MALLOC 1/g" -e "s/#define malloc rpl_malloc/\/\* #undef malloc \*\//g" config.h
    sed -i -e '/HEADERS\(\)/ a\#include "gnu_getopt.h"' src/Settings.cpp
    make
}

function test_deploy {
	put src/iperf  $BOARD_TESTDIR/fuego.$TESTDIR/
}

function test_run {
	cmd "killall -SIGKILL iperf 2>/dev/null; exit 0"

	# Start iperf server on Jenkins host
	iperf_exec=`which iperf`

	if [ -z "$iperf_exec" ]; then
		echo "ERROR: Cannot find iperf"
		false
	else
		$iperf_exec -s &
	fi

	assert_define BENCHMARK_IPERF_SRV

	if [ "$BENCHMARK_IPERF_SRV" = "default" ]; then
	  srv=$SRV_IP
	else
	  srv=$BENCHMARK_IPERF_SRV
	fi

	report "cd $BOARD_TESTDIR/fuego.$TESTDIR; ./iperf -c $srv -t 15; ./iperf -c $srv -d -t 15" $BOARD_TESTDIR/fuego.$TESTDIR/${TESTDIR}.log
}

function test_cleanup {
	kill_procs iperf
}

parser.py

#!/usr/bin/python

import os, re, sys
import common as plib

#------------------------------------------------------------
#Client connecting to 10.90.101.49, TCP port 5001
#TCP window size: 16.0 KByte (default)
#------------------------------------------------------------
#[  3] local 10.90.100.60 port 38868 connected with 10.90.101.49 port 5001
#[ ID] Interval       Transfer     Bandwidth
#[  3]  0.0-15.0 sec   117 MBytes  65.4 Mbits/sec
#------------------------------------------------------------
#Server listening on TCP port 5001
#TCP window size: 85.3 KByte (default)
#------------------------------------------------------------
#------------------------------------------------------------
#Client connecting to 10.90.101.49, TCP port 5001
#TCP window size: 21.1 KByte (default)
#------------------------------------------------------------
#[  5] local 10.90.100.60 port 38869 connected with 10.90.101.49 port 5001
#[  4] local 10.90.100.60 port 5001 connected with 10.90.101.49 port 40772
#[ ID] Interval       Transfer     Bandwidth
#[  5]  0.0-15.0 sec  99.9 MBytes  55.7 Mbits/sec
#[  4]  0.0-15.2 sec  50.8 MBytes  28.0 Mbits/sec

# The following was also possible in the past for tx test:
#[  3]  0.0- 3.7 sec  9743717424271204 bits  0.00 (null)s/sec

ref_section_pat = "^\[[\w\d_ ./]+.[gle]{2}\]"
cur_search_pat = re.compile("^.* ([\d.]+) Mbits/sec\n.*\n.*\n.*\n.*\n.*\n.*\n.*\n.*\n.*\n.*\n.*\n.* ([\d.]+) Mbits/sec\n.* ([\d.]+) Mbits/sec", re.MULTILINE)

cur_dict = {}
pat_result = plib.parse(cur_search_pat)
if pat_result:
        for item in pat_result:
                #print item
                cur_dict["tcp.tx"] = item[0]
                cur_dict["tcp.bi_tx"] = item[1]
                cur_dict["tcp.bi_rx"] = item[2]

if "tcp.tx" in cur_dict:
        sys.exit(plib.process_data(ref_section_pat, cur_dict, 's', 'Rate, MB/s'))
else:
        print "Fuego error reason: could not parse measured bandwidth"

spec.json

{
    "testName": "Benchmark.iperf",
    "specs": {
        "default": {
            "SRV":"default"
        }
    }
}

chart_config.json

{
    "iperf":["tcp"]
}

test.yaml

None provided.

Linaro source

iperf.sh

#!/bin/sh -ex

# shellcheck disable=SC1091
. ../../lib/sh-test-lib
OUTPUT="$(pwd)/output"
RESULT_FILE="${OUTPUT}/result.txt"
LOGFILE="${OUTPUT}/iperf.txt"
# Test localhost by default, which tests the efficiency of TCP/IP stack.
# To test physical network bandwidth, specify remote test server with '-c'.
# Execute 'iperf3 -s' on remote host to run iperf3 test server.
SERVER="127.0.0.1"
# Time in seconds to transmit for
TIME="10"
# Number of parallel client streams to run
THREADS="1"
# Specify iperf3 version for CentOS.
VERSION="3.1.4"

usage() {
    echo "Usage: $0 [-c server] [-t time] [-p number] [-v version] [-s true|false]" 1>&2
    exit 1
}

while getopts "c:t:p:v:s:h" o; do
  case "$o" in
    c) SERVER="${OPTARG}" ;;
    t) TIME="${OPTARG}" ;;
    p) THREADS="${OPTARG}" ;;
    v) VERSION="${OPTARG}" ;;
    s) SKIP_INSTALL="${OPTARG}" ;;
    h|*) usage ;;
  esac
done

create_out_dir "${OUTPUT}"
cd "${OUTPUT}"

if [ "${SKIP_INSTALL}" = "true" ] || [ "${SKIP_INSTALL}" = "True" ]; then
    info_msg "iperf installation skipped"
else
    dist_name
    # shellcheck disable=SC2154
    case "${dist}" in
        debian|ubuntu|fedora)
            install_deps "iperf3"
            ;;
        centos)
            install_deps "wget gcc make"
            wget https://github.com/esnet/iperf/archive/"${VERSION}".tar.gz
            tar xf "${VERSION}".tar.gz
            cd iperf-"${VERSION}"
            ./configure
            make
            make install
            ;;
    esac
fi

# Run local iperf3 server as a daemon when testing localhost.
[ "${SERVER}" = "127.0.0.1" ] && iperf3 -s -D

# Run iperf test with unbuffered output mode.
stdbuf -o0 iperf3 -c "${SERVER}" -t "${TIME}" -P "${THREADS}" 2>&1 \
    | tee "${LOGFILE}"

# Parse logfile.
if [ "${THREADS}" -eq 1 ]; then
    egrep "(sender|receiver)" "${LOGFILE}" \
        | awk '{printf("iperf-%s pass %s %s\n", $NF,$7,$8)}' \
        | tee -a "${RESULT_FILE}"
elif [ "${THREADS}" -gt 1 ]; then
    egrep "[SUM].*(sender|receiver)" "${LOGFILE}" \
        | awk '{printf("iperf-%s pass %s %s\n", $NF,$6,$7)}' \
        | tee -a "${RESULT_FILE}"
fi

# Kill iperf test daemon if any.
pkill iperf3 || true

iperf.yaml

metadata:
    name: iperf
    format: "Lava-Test-Shell Test Definition 1.0"
    description: "iperf is a tool for active measurements of the maximum
                  achievable bandwidth on IP networks."
    maintainer:
        - chase.qi@linaro.org
    os:
        - debian
        - ubuntu
        - fedora
        - centos
    scope:
        - performance
    environment:
        - lava-test-shell
    devices:
        - hi6220-hikey
        - apq8016-sbc
        - mustang
        - moonshot
        - thunderX
        - d03
        - d05

params:
    # Time in seconds to transmit for
    TIME: "10"
    # Number of parallel client streams to run
    THREADS: "1"
    SKIP_INSTALL: "false"
    # Specify iperf server
    # Set the var to lava-host-role for test run with LAVA multinode job
    SERVER: 127.0.0.1
    # When running with LAVA multinode job, set the following vars to the values
    # sent by lava-send from host role.
    MSG_ID: server-ready
    MSG_KEY: ipaddr

run:
    steps:
        - fixed_server="${SERVER}"
        - if [ "${SERVER}" = "lava-host-role" ]; then
        -     lava-wait "${MSG_ID}"
        -     fixed_server=$(grep "${MSG_KEY}" /tmp/lava_multi_node_cache.txt | awk -F"=" '{print $NF}')
        - fi
        - cd ./automated/linux/iperf/
        - ./iperf.sh -t "${TIME}" -p "${THREADS}" -s "${SKIP_INSTALL}" -c "${fixed_server}"
        - ../../utils/send-to-lava.sh ./output/result.txt
        - '[ "${SERVER}" = "lava-host-role" ] && lava-send client-done'