Benchmarking Web API Quality

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9671)


Web APIs are increasingly becoming an integral part of web and mobile applications. As a consequence, the performance and availability of the APIs used directly impact the experience of end users. Still, the quality of web APIs is largely ignored and simply assumed to be sufficiently good and stable. Especially considering the geo-mobility of today’s client devices, this can lead to negative surprises at runtime.

In this work, we present an approach and toolkit for benchmarking the quality of web APIs while considering the geo-mobility of clients. Using our benchmarking tool, we present the surprising results of a geo-distributed, three-month benchmark run covering 15 web APIs and discuss how application developers can deal with volatile API quality, both from an architectural and an engineering point of view.
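A minimal sketch of the kind of client-side probe such a benchmark relies on (this is not the authors' toolkit; the URL, sample count, and function name are placeholders): each measurement client repeatedly requests an API endpoint, recording per-request latency and whether the request succeeded. Running this probe from machines in different regions approximates a geo-distributed quality benchmark.

```python
import time
import urllib.request
import urllib.error


def probe(url: str, samples: int = 5, timeout: float = 10.0):
    """Request `url` `samples` times; return per-request latencies
    (in seconds) and the fraction of successful (2xx) responses."""
    latencies = []
    successes = 0
    for _ in range(samples):
        start = time.monotonic()
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                resp.read()  # include transfer time in the measurement
                if 200 <= resp.status < 300:
                    successes += 1
        except (urllib.error.URLError, OSError):
            pass  # request counts as unavailable; latency is still recorded
        latencies.append(time.monotonic() - start)
    return latencies, successes / samples


# Example (hypothetical endpoint):
# latencies, availability = probe("https://api.example.com/ping")
```

In a real benchmark run, each regional client would log timestamped results to a central store so that availability and latency can later be compared across regions and over time.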





Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  1. ISE Research Group, TU Berlin, Berlin, Germany
  2. IBM T.J. Watson Research Center, Yorktown Heights, USA
