Empirical Software Engineering, Volume 18, Issue 5, pp 859–900

What your plug-in test suites really test: an integration perspective on test suite understanding


Abstract

Software architectures such as plug-in and service-oriented architectures enable developers to build extensible software products, whose functionality can be enriched by adding or configuring components. A well-known example of such an architecture is Eclipse, best known as the basis for a family of extensible IDEs. Although such architectures give users and developers a great deal of flexibility to create new products, they also increase the complexity of the resulting systems. To manage this complexity, developers rely on extensive automated test suites. Unfortunately, current testing tools offer little insight into which of the many possible combinations of components and component configurations are actually tested. The goal of this paper is to remedy this problem. To that end, we interview 25 professional developers about the problems they experience in test suite understanding for plug-in architectures. The findings have been incorporated into five architectural views that provide an extensibility perspective on plug-in-based systems and their test suites. The views combine static and dynamic information on plug-in dependencies, extension initialization, extension and service usage, and the test suites. The views have been implemented in ETSE, the Eclipse Plug-in Test Suite Exploration tool. We evaluate the proposed views by analyzing eGit, Mylyn, and a Mylyn connector.
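To make concrete what extension initialization and extension usage refer to in an Eclipse-based plug-in architecture, the following sketch (illustrative only, not taken from the paper or from ETSE) shows how a host plug-in might discover and instantiate contributions to a hypothetical extension point com.example.host.connectors via Eclipse's extension registry. Which of the contributed classes are actually instantiated and exercised by the test suite is the kind of host/contributor combination the proposed views aim to make visible.

import org.eclipse.core.runtime.CoreException;
import org.eclipse.core.runtime.IConfigurationElement;
import org.eclipse.core.runtime.Platform;

// Illustrative host-side loader; the extension point id and attribute layout are hypothetical.
public class ConnectorLoader {

    // Instantiates every connector contributed to the extension point via plugin.xml.
    public void loadConnectors() throws CoreException {
        IConfigurationElement[] contributions = Platform.getExtensionRegistry()
                .getConfigurationElementsFor("com.example.host.connectors");
        for (IConfigurationElement element : contributions) {
            // The "class" attribute names the contributed implementation class;
            // createExecutableExtension instantiates it in the contributing plug-in's classloader.
            Object connector = element.createExecutableExtension("class");
            System.out.println("Initialized connector: " + connector.getClass().getName());
        }
    }
}

A plug-in (PDE JUnit) test running inside the Eclipse runtime could call loadConnectors() and assert that the expected contributions are present, making explicit which host/contributor combinations the test suite actually covers.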

Keywords

Plug-in systems · Dynamic analysis · Static analysis · Test suite understanding


Copyright information

© Springer Science+Business Media New York 2012

Authors and Affiliations

Delft University of Technology, Delft, Netherlands
