Encyclopedia of Big Data Technologies

Living Edition
| Editors: Sherif Sakr, Albert Zomaya

Benchmark Harness

  • Nicolas Michael
Living reference work entry
DOI: https://doi.org/10.1007/978-3-319-63962-8_134-1



A benchmark harness is software that provides the infrastructure to conduct benchmarks of a software and/or hardware system, typically with the goal of quantitatively assessing the system’s characteristics and capabilities, or of comparing the characteristics and capabilities of multiple systems relative to each other. It facilitates the development and execution of benchmarks and the analysis of benchmark results. Typical components of a big data benchmark harness include a tool to generate data, an execution environment to run benchmarks, a data collection component to monitor the benchmark run, and a reporting component to calculate and summarize benchmark results.
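The four components named above can be illustrated with a minimal sketch. The function names and the reported metrics here are hypothetical, chosen only to mirror the structure described in the entry; they are not part of any real harness.

```python
import statistics
import time

def generate_data(n):
    """Data generation: produce a synthetic workload of n records."""
    return [i % 100 for i in range(n)]

def run_workload(data):
    """Execution environment: run the operation under test once."""
    return sum(x * x for x in data)

def benchmark(n_records=10_000, iterations=5):
    """Drive one benchmark run and summarize its results."""
    data = generate_data(n_records)
    latencies = []  # data collection: one latency sample per iteration
    for _ in range(iterations):
        start = time.perf_counter()
        run_workload(data)
        latencies.append(time.perf_counter() - start)
    # Reporting: calculate and summarize the collected measurements.
    return {
        "iterations": iterations,
        "mean_s": statistics.mean(latencies),
        "min_s": min(latencies),
        "max_s": max(latencies),
    }

report = benchmark()
print(report)
```

A production harness would additionally distribute workload drivers across client machines, monitor the system under test itself (CPU, I/O, memory), and persist results for comparison across runs; the control flow, however, follows the same generate, execute, collect, report cycle.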


In computing, benchmarking is the process of assessing a system’s quantitative characteristics and capabilities by running a benchmark workload (or set of workloads) against it. The assessed system, also referred to as the system under test (SUT), may be a software and/or hardware...




Copyright information

© Springer International Publishing AG 2018

Authors and Affiliations

  1. Oracle, San Francisco, USA

Section editors and affiliations

  • Meikel Poess, Server Technologies, Oracle, Redwood Shores, USA
  • Tilmann Rabl, Database Systems and Information Management Group, Technische Universität Berlin, Berlin, Germany