A Comparison of the Influence of Different Multi-core Processors on the Runtime Overhead for Application-Level Monitoring

  • Jan Waller
  • Wilhelm Hasselbring
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7303)


Application-level monitoring is required for continuously operating software systems to maintain their performance and availability at runtime. Performance monitoring of software systems requires storing time series data in a monitoring log or stream. Such monitoring may impose significant runtime overhead on the monitored system.

In this paper, we evaluate the influence of multi-core processors on the overhead of the Kieker application-level monitoring framework. We present a breakdown of the monitoring overhead into three portions and quantify these portions through extensive laboratory experiments with micro-benchmarks under controlled and repeatable conditions. Our experiments show that the already low overhead of the Kieker framework may be further reduced on multi-core processors by writing the monitoring log asynchronously.
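The asynchronous log writing referred to above can be illustrated as a producer-consumer pattern: the instrumented application thread only pays the cost of a queue insert, while a dedicated writer thread, ideally scheduled on a spare core, persists the records. The following is a minimal sketch under that assumption; the class and method names are hypothetical and do not reflect Kieker's actual writer API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical sketch of an asynchronous monitoring writer: application
// threads enqueue records cheaply; a background thread drains the queue.
public class AsyncWriterSketch {
    // Unique sentinel object used to signal shutdown (compared by identity).
    private static final String POISON = new String("POISON");
    private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(1024);
    public final List<String> log = new ArrayList<>(); // stands in for the monitoring log
    private final Thread writer = new Thread(() -> {
        try {
            String record;
            while ((record = queue.take()) != POISON) {
                log.add(record); // here: in-memory; in practice: (slow) file/stream I/O
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    });

    public void start() {
        writer.start();
    }

    // Called from the monitored application thread; blocks only if the
    // bounded queue is full, so the common-case cost is a single insert.
    public void record(String record) {
        try {
            queue.put(record);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    // Enqueues the sentinel and waits for the writer to finish draining.
    public void shutdown() {
        try {
            queue.put(POISON);
            writer.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

Because a single producer and a single consumer share a FIFO queue, record order is preserved, and `Thread.join()` in `shutdown()` guarantees that all records are visible in the log afterwards.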

Our experiment code and data are available as open-source software, so that interested researchers may repeat or extend our experiments for comparison on other hardware platforms or with other monitoring frameworks.
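A micro-benchmark of the kind described above typically compares the per-call duration of a monitored operation against an unmonitored baseline. The following sketch illustrates the idea only; it is not the authors' actual benchmark harness, and all names are hypothetical. (A rigorous measurement would additionally require JVM warm-up and statistical repetition.)

```java
// Hypothetical micro-benchmark sketch: estimate per-call monitoring
// overhead by timing a workload with and without an inline probe.
public class OverheadBenchSketch {
    static volatile long sink; // prevents the JIT from eliminating busyWork

    // The "monitored" operation: a small deterministic workload.
    static long busyWork(int n) {
        long acc = 0;
        for (int i = 0; i < n; i++) acc += i;
        return acc;
    }

    // Returns the mean duration per call in nanoseconds. When `monitored`
    // is true, each call also collects a timing record into `log`.
    public static double measure(int calls, boolean monitored, StringBuilder log) {
        long start = System.nanoTime();
        for (int i = 0; i < calls; i++) {
            if (monitored) {
                long t0 = System.nanoTime(); // probe: collect a record
                sink = busyWork(1000);
                log.append(System.nanoTime() - t0).append('\n');
            } else {
                sink = busyWork(1000);
            }
        }
        return (System.nanoTime() - start) / (double) calls;
    }
}
```

The difference between the monitored and baseline means approximates the monitoring overhead per call; with an asynchronous writer, the writing portion of that overhead can be shifted off the measured thread.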







Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Jan Waller (1)
  • Wilhelm Hasselbring (1, 2)

  1. Software Engineering Group, Christian-Albrechts-University Kiel, Germany
  2. SPEC Research Group, Steering Committee, Gainesville, USA
