Empirical Software Engineering, Volume 12, Issue 5, pp 551–571

Empirical studies in reverse engineering: state of the art and future trends

  • Paolo Tonella
  • Marco Torchiano
  • Bart Du Bois
  • Tarja Systä

Abstract

Starting with the aim of modernizing legacy systems, often written in old programming languages, reverse engineering has extended its applicability to virtually every kind of software system. Moreover, the methods originally designed to recover a diagrammatic, high-level view of the target system have been extended to address several other problems faced by programmers when they need to understand and modify existing software. The authors’ position is that the next stage of development for this discipline will necessarily be based on the empirical evaluation of methods. Such evaluation is required both to gain knowledge about the actual effects of applying a given approach and to convince end users of the positive cost–benefit trade-offs. The contribution of this paper to the state of the art is a roadmap for future research in the field, which includes: clarifying the scope of investigation, defining a reference taxonomy, and adopting a common framework for the execution of experiments.

Keywords

Reverse engineering, Taxonomy, State of the art, Empirical framework

Copyright information

© Springer Science+Business Media, LLC 2007

Authors and Affiliations

  • Paolo Tonella, ITC-irst, Centro per la Ricerca Scientifica e Tecnologica, Povo, Italy
  • Marco Torchiano, Politecnico di Torino, Torino, Italy
  • Bart Du Bois, University of Antwerp, Antwerp, Belgium
  • Tarja Systä, Tampere University of Technology, Tampere, Finland