Benchmark Programs
Latest revision as of 04:10, 10 August 2016
Here are several programs for benchmarking.
Note: It is important to recognize that benchmarks between systems may be misleading. Benchmarks should primarily be used to determine differences in performance for different software configurations on the same hardware system.
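As a minimal illustration of the note above, the sketch below times two configurations of the same tool (gzip at two compression levels) on the same machine. The input size, the /tmp path, and the choice of gzip are arbitrary, and `time` is assumed to be the shell's (e.g. bash's) built-in keyword:

```shell
# Compare two software configurations of the same task on the same hardware.
# Generate a throwaway input file, then time each configuration.
head -c 1000000 /dev/zero > /tmp/bench-input
time gzip -1 < /tmp/bench-input > /dev/null   # fast, light compression
time gzip -9 < /tmp/bench-input > /dev/null   # slower, heavier compression
rm /tmp/bench-input
```

Comparing the two timings tells you something about gzip's levels on this machine; comparing either number against a different machine would not.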
UnixBench
https://github.com/kdlucas/byte-unixbench
UnixBench contains 9 kinds of tests:
- Dhrystone 2 using register variables
- Double-Precision Whetstone
- Execl Throughput
- File Copy
- Pipe Throughput
- Pipe-based Context Switching
- Process Creation
- Shell Script
- System Call Overhead
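UnixBench implements these tests itself; as a rough standalone illustration of what one of them (Pipe Throughput) measures, the sketch below pushes a fixed amount of data through a pipe and lets dd report the transfer rate. This is not part of UnixBench, and the block size and count are arbitrary choices:

```shell
# Push 10 MB through a pipe; the reading dd prints bytes copied,
# elapsed time, and throughput on stderr.
dd if=/dev/zero bs=64k count=160 2>/dev/null | dd of=/dev/null bs=64k
```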
lmbench
The LMBench home page is at http://www.bitmover.com/lmbench/ and http://lmbench.sourceforge.net/
The SourceForge project page is at http://sourceforge.net/projects/lmbench
Instructions for lmbench-3.0-a9
(Adjust CC and OS according to your needs.)
    cd lmbench-3.0-a9/src
    make CC=arm-linux-gcc OS=arm-linux TARGET=linux
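The make line above assumes the cross compiler named in CC is on PATH; a small sanity check up front (using the same compiler name as in the example) can save a confusing build failure:

```shell
# Check that the cross compiler is reachable before starting the build.
CC=arm-linux-gcc
if command -v "$CC" >/dev/null 2>&1; then
  echo "found: $(command -v "$CC")"
else
  echo "$CC not found in PATH"
fi
```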
Make the whole lmbench-3.0-a9 directory accessible on the target, e.g. by copying it over or via an NFS mount. Make sure the benchmark scripts can write the configuration file and results, and unpack the tarball used during the benchmark beforehand (in case tar is not available on the target):
    chmod a+w ../bin/arm-linux ../results
    tar xf webpage-lm.tar
To run the benchmark on the target:
    cd lmbench-3.0-a9/src
    hostname foo    # make sure hostname is set; the scripts use it to name config and result files
    OS=arm-linux ../scripts/config-run
    OS=arm-linux ../scripts/results
This worked for me on a target using BusyBox v1.10.2 ash.
The results are written into lmbench-3.0-a9/results/; each run of ../scripts/results creates a new file there. You can copy the results back to your PC and run lmbench's various summary postprocessing scripts, e.g.
    ../scripts/getsummary ../results/arm-linux/*
Wishlist
A list of benchmark results would be useful:
- Comparing performance of different FFT implementations on BeagleBoard-xM: http://pmeerw.dyndns.org/blog/programming/arm_fft.html