Iperf test definition comparison

This page compares the test definitions from Fuego and Linaro for the iperf test.

(NOTE: This page is under construction!! - much of it is still copied and pasted from the sysbench page)

= Differences =
 * Fuego only runs ...
 * Linaro runs ...

== High-level assumptions ==

 * Fuego does not disturb the system:
   * if something is installed by the test, it is removed by default
   * if something is started, it is stopped
 * Fuego assumes you can run another test upon completion of one test


 * Linaro assumes a clean install, which will be replaced on the next test
 * the system can be modified (packages installed and forgotten about)


 * Fuego treats the system like a final product, which is immutable
 * Linaro treats the system like a development system, which is mutable

== Building ==

 * Fuego cross-builds the test software
 * Linaro does not build the software

== Prerequisites ==

 * Fuego checks for the cross-compiler variables
 * Linaro checks for a root account

== Alterations ==

 * Linaro can install packages required by iperf on the board
 * Fuego deploys the test software to the board

== Execution ==

 * Linaro runs a single iperf3 client, with a configurable number of parallel streams
 * Fuego runs a normal and a bidirectional (-d) iperf test in one invocation


 * The factorization of the test is different:
   * dependency check, alterations, test execution, and parsing are done on the board for Linaro
   * dependency check, test execution, and parsing are done on the host for Fuego
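To make the factorization concrete, here is an illustrative Python sketch (the function names and paths are hypothetical, not from either framework) of the client command each harness ends up issuing; the command strings mirror the test_run fragment in the Fuego source and the iperf3 invocation in the Linaro source on this page.

```python
# Hypothetical sketch: how each harness factors the iperf client invocation.
# Names and paths are illustrative only.

def fuego_run_command(board_testdir, srv, duration=15):
    """Fuego chains a normal and a dual (-d, bidirectional) iperf2 run
    into one command string, executed remotely on the board."""
    return (f"cd {board_testdir}; "
            f"./iperf -c {srv} -t {duration}; "
            f"./iperf -c {srv} -d -t {duration}")

def linaro_run_command(server, time=10, threads=1):
    """Linaro runs a single iperf3 client locally on the board, with the
    number of parallel streams controlled by -P."""
    return f"iperf3 -c {server} -t {time} -P {threads}"

print(fuego_run_command("/home/a/fuego.iperf", "10.0.0.1"))
print(linaro_run_command("127.0.0.1", threads=4))
```

Note the difference in where the commands run: Fuego builds the string on the host and sends it to the board, while Linaro's script composes and runs the command on the board itself.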

== Parsing ==

 * Linaro parses the output on the target using awk
 * Fuego parses the combined output on the host using python (parser.py)
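As a rough illustration of the host-side approach (a simplified sketch, not Fuego's actual regex, which matches a stricter multi-line pattern), the bandwidth figures can be pulled out of the combined iperf2 log with a short pattern:

```python
import re

# Simplified sketch of the host-side parsing idea: pull every
# "NN.N Mbits/sec" figure out of an iperf2 log.
RATE_PAT = re.compile(r"([\d.]+) Mbits/sec")

def parse_rates(log_text):
    """Return all bandwidth figures (in Mbits/sec) found in the log."""
    return [float(m) for m in RATE_PAT.findall(log_text)]

sample = """\
[ ID] Interval      Transfer     Bandwidth
[ 3]  0.0-15.0 sec   117 MBytes  65.4 Mbits/sec
[ 5]  0.0-15.0 sec  99.9 MBytes  55.7 Mbits/sec
[ 4]  0.0-15.2 sec  50.8 MBytes  28.0 Mbits/sec
"""
print(parse_rates(sample))  # [65.4, 55.7, 28.0] -> tx, bi_tx, bi_rx
```

Fuego's real parser.py then maps the three figures to the tcp.tx, tcp.bi_tx, and tcp.bi_rx measures.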

== Results ==

 * the output format of the two tests is different

== Presentation ==

 * Linaro doesn't include presentation control for the test results in the test definition

== Metadata ==

 * Fuego specifies the author, license, and gitrepo for the test program
 * Linaro specifies the devices for the test to run on
 * Linaro specifies the distros where the test can run

= Questions =
 * Linaro install_deps: does this also install the package itself (with the openssl binary)?
 * Linaro: what does send-to-lava.sh do?

= Field comparisons =

= Fuego source =

fuego_test.sh

```sh
tarball=iperf-2.0.5.tar.gz

function test_build {
    # get updated config.sub and config.guess files, so configure
    # doesn't reject new toolchains
    cp /usr/share/misc/config.{sub,guess} .
    ./configure --host=$HOST --build=`./config.guess`
    sed -i -e "s|#define bool int|//#define bool int|g" config.h
    make config.h
    sed -i -e "s/#define HAVE_MALLOC 0/#define HAVE_MALLOC 1/g" -e "s/#define malloc rpl_malloc/\/\* #undef malloc \*\//g" config.h
    sed -i -e '/HEADERS\(\)/ a\#include "gnu_getopt.h"' src/Settings.cpp
    make
}

function test_deploy {
    put src/iperf $BOARD_TESTDIR/fuego.$TESTDIR/
}

function test_run {
    # make sure no stale iperf is running on the board
    cmd "killall -SIGKILL iperf 2>/dev/null; exit 0"

    # Start iperf server on Jenkins host
    iperf_exec=`which iperf`
    if [ -z $iperf_exec ]; then
        echo "ERROR: Cannot find iperf"
        false
    else
        $iperf_exec -s &
    fi

    assert_define BENCHMARK_IPERF_SRV
    if [ "$BENCHMARK_IPERF_SRV" = "default" ]; then
        srv=$SRV_IP
    else
        srv=$BENCHMARK_IPERF_SRV
    fi

    report "cd $BOARD_TESTDIR/fuego.$TESTDIR; ./iperf -c $srv -t 15; ./iperf -c $srv -d -t 15" $BOARD_TESTDIR/fuego.$TESTDIR/${TESTDIR}.log
}

function test_cleanup {
    kill_procs iperf
}
```

parser.py

```python
#!/usr/bin/python

import os, re, sys
import common as plib

# Sample log output:
# Client connecting to 10.90.101.49, TCP port 5001
# TCP window size: 16.0 KByte (default)
# [ 3] local 10.90.100.60 port 38868 connected with 10.90.101.49 port 5001
# [ ID] Interval      Transfer     Bandwidth
# [ 3]  0.0-15.0 sec   117 MBytes  65.4 Mbits/sec
# Server listening on TCP port 5001
# TCP window size: 85.3 KByte (default)
# Client connecting to 10.90.101.49, TCP port 5001
# TCP window size: 21.1 KByte (default)
# [ 5] local 10.90.100.60 port 38869 connected with 10.90.101.49 port 5001
# [ 4] local 10.90.100.60 port 5001 connected with 10.90.101.49 port 40772
# [ ID] Interval      Transfer     Bandwidth
# [ 5]  0.0-15.0 sec  99.9 MBytes  55.7 Mbits/sec
# [ 4]  0.0-15.2 sec  50.8 MBytes  28.0 Mbits/sec

# The following was also possible in the past for tx test:
# [ 3]  0.0- 3.7 sec  9743717424271204 bits  0.00 (null)s/sec

ref_section_pat = "^\[[\w\d_ ./]+.[gle]{2}\]"
cur_search_pat = re.compile("^.* ([\d.]+) Mbits/sec\n.*\n.*\n.*\n.*\n.*\n.*\n.*\n.*\n.*\n.*\n.*\n.* ([\d.]+) Mbits/sec\n.* ([\d.]+) Mbits/sec", re.MULTILINE)

cur_dict = {}
pat_result = plib.parse(cur_search_pat)
if pat_result:
    for item in pat_result:
        #print item
        cur_dict["tcp.tx"] = item[0]
        cur_dict["tcp.bi_tx"] = item[1]
        cur_dict["tcp.bi_rx"] = item[2]

if "tcp.tx" in cur_dict:
    sys.exit(plib.process_data(ref_section_pat, cur_dict, 's', 'Rate, MB/s'))
else:
    print "Fuego error reason: could not parse measured bandwidth"
```

spec.json

```json
{
    "testName": "Benchmark.iperf",
    "specs": {
        "default": {
            "SRV": "default"
        }
    }
}
```

chart_config.json

```json
{
    "iperf": ["tcp"]
}
```

test.yaml
None provided.

= Linaro source =

iperf.sh

```sh
#!/bin/sh -ex

# shellcheck disable=SC1091
. ../../lib/sh-test-lib
OUTPUT="$(pwd)/output"
RESULT_FILE="${OUTPUT}/result.txt"
LOGFILE="${OUTPUT}/iperf.txt"
# Test localhost by default, which tests the efficiency of TCP/IP stack.
# To test physical network bandwidth, specify remote test server with '-c'.
# Execute 'iperf3 -s' on remote host to run iperf3 test server.
SERVER="127.0.0.1"
# Time in seconds to transmit for
TIME="10"
# Number of parallel client streams to run
THREADS="1"
# Specify iperf3 version for CentOS.
VERSION="3.1.4"

usage() {
    echo "Usage: $0 [-c server] [-t time] [-p number] [-v version] [-s true|false]" 1>&2
    exit 1
}

while getopts "c:t:p:v:s:h" o; do
    case "$o" in
        c) SERVER="${OPTARG}" ;;
        t) TIME="${OPTARG}" ;;
        p) THREADS="${OPTARG}" ;;
        v) VERSION="${OPTARG}" ;;
        s) SKIP_INSTALL="${OPTARG}" ;;
        h|*) usage ;;
    esac
done

create_out_dir "${OUTPUT}"
cd "${OUTPUT}"

if [ "${SKIP_INSTALL}" = "true" ] || [ "${SKIP_INSTALL}" = "True" ]; then
    info_msg "iperf installation skipped"
else
    dist_name
    # shellcheck disable=SC2154
    case "${dist}" in
        debian|ubuntu|fedora)
            install_deps "iperf3"
            ;;
        centos)
            install_deps "wget gcc make"
            wget https://github.com/esnet/iperf/archive/"${VERSION}".tar.gz
            tar xf "${VERSION}".tar.gz
            cd iperf-"${VERSION}"
            ./configure
            make
            make install
            ;;
    esac
fi

# Run local iperf3 server as a daemon when testing localhost.
[ "${SERVER}" = "127.0.0.1" ] && iperf3 -s -D

# Run iperf test with unbuffered output mode.
stdbuf -o0 iperf3 -c "${SERVER}" -t "${TIME}" -P "${THREADS}" 2>&1 \
    | tee "${LOGFILE}"

# Parse logfile.
if [ "${THREADS}" -eq 1 ]; then
    egrep "(sender|receiver)" "${LOGFILE}" \
        | awk '{printf("iperf-%s pass %s %s\n", $NF,$7,$8)}' \
        | tee -a "${RESULT_FILE}"
elif [ "${THREADS}" -gt 1 ]; then
    egrep "\[SUM\].*(sender|receiver)" "${LOGFILE}" \
        | awk '{printf("iperf-%s pass %s %s\n", $NF,$6,$7)}' \
        | tee -a "${RESULT_FILE}"
fi

# Kill iperf test daemon if any.
pkill iperf3 || true
```

iperf.yaml

```yaml
metadata:
    name: iperf
    format: "Lava-Test-Shell Test Definition 1.0"
    description: "iperf is a tool for active measurements of the maximum
                  achievable bandwidth on IP networks."
    maintainer:
        - chase.qi@linaro.org
    os:
        - debian
        - ubuntu
        - fedora
        - centos
    scope:
        - performance
    environment:
        - lava-test-shell
    devices:
        - hi6220-hikey
        - apq8016-sbc
        - mustang
        - moonshot
        - thunderX
        - d03
        - d05

params:
    # Time in seconds to transmit for
    TIME: "10"
    # Number of parallel client streams to run
    THREADS: "1"
    SKIP_INSTALL: "false"
    # Specify iperf server
    # Set the var to lava-host-role for test run with LAVA multinode job
    SERVER: 127.0.0.1
    # When running with LAVA multinode job, set the following vars to the values
    # sent by lava-send from host role.
    MSG_ID: server-ready
    MSG_KEY: ipaddr

run:
    steps:
        - fixed_server="${SERVER}"
        - if [ "${SERVER}" = "lava-host-role" ]; then
        -    lava-wait "${MSG_ID}"
        -    fixed_server=$(grep "${MSG_KEY}" /tmp/lava_multi_node_cache.txt | awk -F"=" '{print $NF}')
        - fi
        - cd ./automated/linux/iperf/
        - ./iperf.sh -t "${TIME}" -p "${THREADS}" -s "${SKIP_INSTALL}" -c "${fixed_server}"
        - ../../utils/send-to-lava.sh ./output/result.txt
        - '[ "${SERVER}" = "lava-host-role" ] && lava-send client-done'
```