Benchmarking in the Cloud: What It Should, Can, and Cannot Be

  • Enno Folkerts
  • Alexander Alexandrov
  • Kai Sachs
  • Alexandru Iosup
  • Volker Markl
  • Cafer Tosun
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7755)


With the increasing adoption of Cloud Computing, we observe an increasing need for Cloud Benchmarks: to assess the performance of Cloud infrastructures and software stacks, to assist with provisioning decisions for Cloud users, and to compare Cloud offerings. We understand our paper as one of the first systematic approaches to the topic of Cloud Benchmarks. Our driving principle is that Cloud Benchmarks must consider end-to-end performance and pricing, taking into account that services are delivered over the Internet. This requirement yields new challenges for benchmarking and requires us to revisit existing benchmarking practices in order to adapt them to the Cloud.
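The paper's driving principle, measuring end-to-end performance together with pricing as seen from the client side of the Internet, can be illustrated with a minimal sketch. The helper names (`measure_end_to_end`, `price_performance`) and the hourly rate are illustrative assumptions, not part of the paper; a real Cloud Benchmark would invoke the service under test over the network instead of the stand-in callable used here.

```python
import statistics
import time

def measure_end_to_end(call, n=100):
    """Time n invocations of `call` as observed by the client,
    including any network round trip the callable performs."""
    latencies = []
    for _ in range(n):
        start = time.perf_counter()
        call()
        latencies.append(time.perf_counter() - start)
    return latencies

def price_performance(latencies, hourly_rate_usd):
    """Combine latency and pricing: charge the busy time at an
    assumed hourly instance rate and report cost per request."""
    total_seconds = sum(latencies)
    cost = hourly_rate_usd * total_seconds / 3600.0
    return {
        "median_latency_s": statistics.median(latencies),
        "p95_latency_s": sorted(latencies)[int(0.95 * len(latencies)) - 1],
        "usd_per_request": cost / len(latencies),
    }

if __name__ == "__main__":
    # Stand-in for a real cloud service call; a benchmark driver
    # would issue a request to the service under test instead.
    fake_service = lambda: time.sleep(0.001)
    lats = measure_end_to_end(fake_service, n=50)
    print(price_performance(lats, hourly_rate_usd=0.10))
```

The point of the sketch is that neither latency nor price alone ranks offerings; only a joint metric such as cost per request, measured end to end, does.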


Keywords: Cloud Computing, Cloud Service, Service Level Agreement, Cloud Service Provider, Cloud Infrastructure




References

  1. Huppler, K.: The Art of Building a Good Benchmark. In: Nambiar, R., Poess, M. (eds.) TPCTC 2009. LNCS, vol. 5895, pp. 18–30. Springer, Heidelberg (2009)
  2. Binnig, C., Kossmann, D., Kraska, T., Loesing, S.: How is the weather tomorrow? Towards a benchmark for the cloud. In: DBTest. ACM (2009)
  3. SPEC: The SPEC CPU2006 Benchmark
  4. TPC: The TPC-C Benchmark
  5. Florescu, D., Kossmann, D.: Rethinking cost and performance of database systems. SIGMOD Record 38(1), 43–48 (2009)
  6. Gray, J. (ed.): The Benchmark Handbook for Database and Transaction Systems, 2nd edn. Morgan Kaufmann (1993)
  7. Kounev, S.: Performance Engineering of Distributed Component-Based Systems - Benchmarking, Modeling and Performance Prediction. PhD thesis, Technische Universität Darmstadt (2005)
  8. Sachs, K., Kounev, S., Bacon, J., Buchmann, A.: Performance evaluation of message-oriented middleware using the SPECjms2007 benchmark. Performance Evaluation 66(8), 410–434 (2009)
  9. Sachs, K.: Performance Modeling and Benchmarking of Event-Based Systems. PhD thesis, TU Darmstadt (2011)
  10. Madeira, H., Vieira, M., Sachs, K., Kounev, S.: Dagstuhl Seminar 10292. In: Resilience Benchmarking. Springer (2011)
  11. NIST: The NIST Definition of Cloud Computing (2011)
  12. Youseff, L., Butrico, M., Silva, D.D.: Towards a unified ontology of cloud computing. In: Proc. of the Grid Computing Environments Workshop (GCE 2008) (2008)
  13. Huppler, K.: Benchmarking with Your Head in the Cloud. In: Nambiar, R., Poess, M. (eds.) TPCTC 2011. LNCS, vol. 7144, pp. 97–110. Springer, Heidelberg (2012)
  14. Leimeister, S., Böhm, M., Riedl, C., Krcmar, H.: The business perspective of cloud computing: Actors, roles and value networks. In: Alexander, P.M., Turpin, M., van Deventer, J.P. (eds.) ECIS (2010)
  15. SPEC Open Systems Group: Report on cloud computing to the OSG Steering Committee. Technical Report OSG-wg-final-20120214 (February 2012)
  16. Shen, S., Visser, O., Iosup, A.: RTSenv: An experimental environment for real-time strategy games. In: Shirmohammadi, S., Griwodz, C. (eds.) NETGAMES, pp. 1–6. IEEE (2011)
  17. Nae, V., Iosup, A., Prodan, R.: Dynamic resource provisioning in massively multiplayer online games. IEEE Trans. Parallel Distrib. Syst. 22(3), 380–395 (2011)
  18. Ratti, S., Hariri, B., Shirmohammadi, S.: A survey of first-person shooter gaming traffic on the Internet. IEEE Internet Computing 14(5), 60–69 (2010)
  19. Li, M., Sasanka, R., Adve, S., Chen, Y., Debes, E.: The ALPBench benchmark suite for complex multimedia applications. In: Proceedings of the IEEE International Workload Characterization Symposium, pp. 34–45 (2005)
  20. Lee, C., Potkonjak, M., Mangione-Smith, W.H.: MediaBench: A tool for evaluating and synthesizing multimedia and communications systems. In: MICRO, pp. 330–335 (1997)
  21. Guthaus, M.R., Ringenberg, J.S., Ernst, D., Austin, T.M., Mudge, T., Brown, R.B.: MiBench: A free, commercially representative embedded benchmark suite. In: Proceedings of the Fourth Annual IEEE International Workshop on Workload Characterization (WWC-4), pp. 3–14. IEEE (2001)
  22. Fritsch, T., Ritter, H., Schiller, J.H.: The effect of latency and network limitations on MMORPGs: A field study of EverQuest 2. In: NETGAMES, pp. 1–9. ACM (2005)
  23. Chen, K.T., Huang, P., Lei, C.L.: How sensitive are online gamers to network quality? Commun. ACM 49(11), 34–38 (2006)
  24. Claypool, M.: The effect of latency on user performance in real-time strategy games. Computer Networks 49(1), 52–70 (2005)
  25. Beigbeder, T., Coughlan, R., Lusher, C., Plunkett, J., Agu, E., Claypool, M.: The effects of loss and latency on user performance in Unreal Tournament 2003. In: Chang Feng, W. (ed.) NETGAMES, pp. 144–151. ACM (2004)
  26. Balint, M., Posea, V., Dimitriu, A., Iosup, A.: User behavior, social networking, and playing style in online and face to face bridge communities. In: NETGAMES, pp. 1–2. IEEE (2010)
  27. Iosup, A., Lăscăteu, A.: Clouds and Continuous Analytics Enabling Social Networks for Massively Multiplayer Online Games. In: Bessis, N., Xhafa, F. (eds.) Next Generation Data Technologies for Collective Computational Intelligence. SCI, vol. 352, pp. 303–328. Springer, Heidelberg (2011)
  28. Kim, K., Jeon, K., Han, H., Kim, S.G., Jung, H., Yeom, H.Y.: MRBench: A benchmark for MapReduce framework. In: Proceedings of the 14th IEEE International Conference on Parallel and Distributed Systems (ICPADS 2008), pp. 11–18. IEEE Computer Society, Washington, DC (2008)
  29. Chen, Y., Ganapathi, A., Griffith, R., Katz, R.H.: The case for evaluating MapReduce performance using workload suites. In: MASCOTS, pp. 390–399. IEEE (2011)
  30. Pavlo, A., Paulson, E., Rasin, A., Abadi, D.J., DeWitt, D.J., Madden, S., Stonebraker, M.: A comparison of approaches to large-scale data analysis. In: Çetintemel, U., Zdonik, S.B., Kossmann, D., Tatbul, N. (eds.) SIGMOD Conference, pp. 165–178. ACM (2009)
  31. Huppler, K.: Price and the TPC. In: Nambiar, R., Poess, M. (eds.) TPCTC 2010. LNCS, vol. 6417, pp. 73–84. Springer, Heidelberg (2011)
  32. Kossmann, D., Kraska, T., Loesing, S.: An evaluation of alternative architectures for transaction processing in the cloud. In: Proceedings of the 2010 ACM SIGMOD International Conference on Management of Data (SIGMOD 2010), pp. 579–590. ACM, New York (2010)
  33. Islam, S., Lee, K., Fekete, A., Liu, A.: How a consumer can measure elasticity for cloud platforms. In: [38], pp. 85–96
  34. Rabl, T., Poess, M.: Parallel data generation for performance analysis of large, complex RDBMS. In: Graefe, G., Salem, K. (eds.) DBTest, p. 5. ACM (2011)
  35. Frank, M., Poess, M., Rabl, T.: Efficient update data generation for DBMS benchmarks. In: [38], pp. 169–180
  36. Armbrust, M., Fox, A., Griffith, R., Joseph, A.D., Katz, R.H., Konwinski, A., Lee, G., Patterson, D.A., Rabkin, A., Stoica, I., Zaharia, M.: A view of cloud computing. Commun. ACM 53(4), 50–58 (2010)
  37. Villegas, D., Antoniou, A., Sadjadi, S.M., Iosup, A.: An analysis of provisioning and allocation policies for infrastructure-as-a-service clouds. In: CCGRID (2012)
  38. Iosup, A., Yigitbasi, N., Epema, D.H.J.: On the performance variability of production cloud services. In: CCGRID, pp. 104–113. IEEE (2011)
  39. Kaeli, D.R., Rolia, J., John, L.K., Krishnamurthy, D. (eds.): Third Joint WOSP/SIPEW International Conference on Performance Engineering (ICPE 2012), Boston, MA, USA, April 22–25. ACM (2012)

Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Enno Folkerts (1)
  • Alexander Alexandrov (2)
  • Kai Sachs (1)
  • Alexandru Iosup (3)
  • Volker Markl (2)
  • Cafer Tosun (1)

  1. SAP AG, Walldorf, Germany
  2. TU Berlin, Germany
  3. Delft University of Technology, The Netherlands