
Benchmarking Basics



This chapter provides a definition of the term “benchmark,” followed by definitions of the major system quality attributes that are typically the subject of benchmarking. It then presents a classification of the different types of benchmarks and an overview of strategies for performance benchmarking. Finally, the quality criteria for good benchmarks are discussed in detail, and the chapter concludes with a discussion of application scenarios for benchmarks.

One accurate measurement is worth a thousand expert opinions.

—Grace Hopper (1906–1992), US Navy Rear Admiral

From a user’s perspective, the best benchmark is the user’s own application program.

—Kaivalya M. Dixit (1942–2004), Long-time SPEC President







Copyright information

© 2020 Springer Nature Switzerland AG

About this chapter

Cite this chapter

Kounev, S., Lange, K.-D., & von Kistowski, J. (2020). Benchmarking Basics. In: Systems Benchmarking. Cham: Springer.


  • DOI:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-41704-8

  • Online ISBN: 978-3-030-41705-5
