Trace-Based Memory Aliasing Across Program Versions

  • Murali Krishna Ramanathan
  • Suresh Jagannathan
  • Ananth Grama
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3922)


One of the major costs of software development lies in testing and validating successive versions of a software system. An important problem encountered in this process is memory aliasing, which involves correlating memory locations, and the variables bound to them, across program versions. Such correlation is useful for ensuring that existing invariants are preserved in newer versions and for matching program execution histories. Recent work in this area has focused on trace-based techniques that better isolate affected regions. A variation of this general approach considers memory operations to generate more refined impact sets. The utility of such an approach ultimately relies on the ability to effectively recognize aliases.

In this paper, we address the general memory aliasing problem and present a probabilistic trace-based technique for correlating memory locations across execution traces and, thereby, variables across program versions. Our approach computes a log-odds ratio that defines the affinity of locations based on observed access patterns. As part of the aliasing process, the traces for initial test inputs are aligned without considering aliasing. From the aligned traces, the log-odds ratios of the memory locations are computed; this aliasing information is then used to align successive traces. Our technique extends readily to other applications where detecting aliasing is necessary. As a case study, we implement and apply our approach in dynamic impact analysis for detecting variations across program versions. Detailed experiments on real versions of software systems show significant improvements in the detection of affected regions when aliasing occurs.
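The affinity computation described above can be sketched as follows. This is a minimal illustration under our own assumptions, not the authors' implementation: we assume each pair of aligned trace positions yields a pair of memory locations (one from each trace), and we add a smoothing pseudo-count `alpha` (our assumption) to avoid degenerate logarithms. A score above zero means two locations co-occur at aligned positions more often than chance would predict, suggesting they alias.

```python
from collections import Counter
from math import log

def log_odds_affinity(aligned_pairs, alpha=1.0):
    """Score (old_loc, new_loc) pairs observed at aligned trace positions.

    aligned_pairs: list of (loc_old, loc_new) tuples, one per aligned
    position in a pair of execution traces. Returns a dict mapping each
    observed pair to a log-odds score: the log of its observed joint
    frequency over the frequency expected if the locations occurred
    independently. Positive scores indicate likely aliases.
    """
    pair_counts = Counter(aligned_pairs)
    old_counts = Counter(o for o, _ in aligned_pairs)
    new_counts = Counter(n for _, n in aligned_pairs)
    total = len(aligned_pairs)

    scores = {}
    for (o, n), c in pair_counts.items():
        # Observed joint frequency, smoothed by the pseudo-count alpha.
        observed = (c + alpha) / (total + alpha)
        # Expected joint frequency under independence of the marginals.
        expected = (old_counts[o] / total) * (new_counts[n] / total)
        scores[(o, n)] = log(observed / expected)
    return scores
```

For example, if location `a` in the old trace aligns with `x` in the new trace far more often than the marginal frequencies of `a` and `x` would predict, the pair `(a, x)` receives a high positive score, and a subsequent alignment pass can prefer matching positions that access `a` with positions that access `x`.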





Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Murali Krishna Ramanathan (1)
  • Suresh Jagannathan (1)
  • Ananth Grama (1)

  1. Department of Computer Science, Purdue University, West Lafayette, USA
