A View of Monitoring and Tracing Techniques and Their Application to Service-Based Environments

  • Heidar Pirzadeh
  • Abdelwahab Hamou-Lhadj
Part of the Smart Innovation, Systems and Technologies book series (SIST, volume 2)


Software systems are perhaps today's most complex engineered artifacts, owing to the ever-evolving technologies they employ. Understanding how these systems are built, and why they are built the way they are, calls for advanced tools and techniques that go beyond mere analysis of the source code, as is the case with most existing software comprehension and modernization approaches. In this paper, we argue that understanding the complex behavior embedded in most distributed, multi-tier, and service-based software systems can benefit significantly from dynamic analysis approaches such as those based on monitoring and tracing techniques. The main advantage of these techniques is that they are a natural fit for the distributed paradigm on which such applications are built. We present several monitoring and tracing techniques and compare their respective advantages and disadvantages. We then discuss how these techniques can be used to understand service-based applications, with the ultimate objective of uncovering the key challenges that remain to be addressed.
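To make the tracing idea concrete, the sketch below shows one common way a dynamic-analysis trace is gathered: source-level instrumentation that records function entry and exit events at runtime. This is an illustrative example only, not a technique from the paper itself; the function names (`handle_request`, `validate`, `process`) are hypothetical stand-ins for operations in a service-based application.

```python
import functools
import time

# Collected execution trace: (event, function name, timestamp) tuples.
trace_log = []

def traced(fn):
    """Instrument a function so every call and return is recorded."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        trace_log.append(("enter", fn.__name__, time.time()))
        try:
            return fn(*args, **kwargs)
        finally:
            trace_log.append(("exit", fn.__name__, time.time()))
    return wrapper

@traced
def handle_request(payload):
    # Hypothetical service entry point delegating to two helpers.
    return validate(payload) and process(payload)

@traced
def validate(payload):
    return bool(payload)

@traced
def process(payload):
    return {"status": "ok", "data": payload}

handle_request({"id": 1})
for event, name, _ in trace_log:
    print(event, name)
```

The nesting of enter/exit events in `trace_log` reconstructs the dynamic call tree, which is the raw material that trace-analysis and behavior-comprehension tools operate on.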


Keywords: monitoring and tracing techniques; software modernization and comprehension; service-based applications; distributed systems





Copyright information

© Springer Berlin Heidelberg 2010

Authors and Affiliations

  • Heidar Pirzadeh (1)
  • Abdelwahab Hamou-Lhadj (1)
  1. Department of Electrical and Computer Engineering, Concordia University, Montréal, Canada
