Encyclopedia of Big Data Technologies

Living Edition
| Editors: Sherif Sakr, Albert Zomaya

Microbenchmark

Living reference work entry
DOI: https://doi.org/10.1007/978-3-319-63962-8_111-1

Definition

A microbenchmark is a program or routine that measures and tests the performance of a single component or task. Microbenchmarks measure simple, well-defined quantities such as elapsed time, rate of operations, bandwidth, or latency. Traditionally, microbenchmarks were associated with testing individual software subroutines or low-level hardware components such as the CPU, over short periods of time. In the big data context, however, the term is broadened to include the cluster – a group of networked computers acting as a single system – as well as the testing of frameworks, algorithms, and logical and distributed components, over longer periods and larger data sizes.
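To make the definition concrete, the following is a minimal, illustrative sketch (not from the entry) of a microbenchmark in Python that measures two of the quantities named above, elapsed time per operation and rate of operations, for a single routine. The `task` function is a hypothetical stand-in for the component under test; `timeit.repeat` from the standard library runs it many times and reports the elapsed wall-clock time of each repetition.

```python
import timeit

def task():
    # Hypothetical component under test: a small, CPU-bound routine.
    return sum(i * i for i in range(1000))

repeats = 5     # independent measurement runs
number = 1000   # invocations of task() per run

# timeit.repeat returns the total elapsed time (in seconds) of each run.
times = timeit.repeat(task, repeat=repeats, number=number)
best = min(times)  # the fastest run is the least perturbed by noise

elapsed_per_op = best / number  # seconds per operation
ops_per_sec = number / best     # rate of operations

print(f"{elapsed_per_op * 1e6:.2f} us/op, {ops_per_sec:,.0f} ops/s")
```

Taking the minimum over several repeats, rather than the mean, is a common convention in microbenchmarking because background activity on the machine can only slow a run down, never speed it up.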

Overview

Microbenchmarks constitute the first line of performance testing. Through them, we can ensure the proper and timely functioning of the different individual components that make up our system. The term micro, of...
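Because latency, one of the quantities a microbenchmark typically reports, rarely follows a single value, results are usually summarized with percentiles rather than a mean. The sketch below is an assumed illustration (not from the entry): `component` is a hypothetical routine whose variable latency is simulated with a randomized sleep, and the standard-library `statistics.quantiles` function derives the median (p50) and tail (p99) latency from the collected samples.

```python
import random
import statistics
import time

def component():
    # Hypothetical component under test; the sleep simulates variable work.
    time.sleep(random.uniform(0.0001, 0.0005))

# Collect one latency sample (in milliseconds) per invocation.
samples = []
for _ in range(200):
    start = time.perf_counter()
    component()
    samples.append((time.perf_counter() - start) * 1000.0)

# quantiles(n=100) returns 99 cut points: index 49 is p50, index 98 is p99.
q = statistics.quantiles(samples, n=100)
p50, p99 = q[49], q[98]

print(f"p50={p50:.3f} ms  p99={p99:.3f} ms")
```

Reporting p99 alongside p50 exposes tail behavior that a single average would hide, which matters when the component sits on the critical path of a larger distributed system.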



Copyright information

© Springer International Publishing AG 2018

Authors and Affiliations

  1. Databricks Inc., Amsterdam, NL; BarcelonaTech (UPC), Barcelona, Spain

Section editors and affiliations

  • Meikel Poess (1)
  • Tilmann Rabl (2)
  1. Server Technologies, Oracle, Redwood Shores, USA
  2. Database Systems and Information Management Group, Technische Universität Berlin, Berlin, Germany