Considering Polymorphism in Change-Based Test Suite Reduction

  • Ali Parsai
  • Quinten David Soetens
  • Alessandro Murgia
  • Serge Demeyer
Part of the Lecture Notes in Business Information Processing book series (LNBIP, volume 199)

Abstract

With the increasing popularity of continuous integration, algorithms for selecting the minimal test suite covering a given set of changes are in demand. This paper reports on how polymorphism can handle false negatives in a previous algorithm that uses method-level changes in the base code to deduce which tests need to be rerun. We compare the approach with and without polymorphism on two distinct cases (PMD and CruiseControl) and discover an interesting trade-off: incorporating polymorphism causes more relevant tests to be included in the test suite (hence improving accuracy), but comes at the cost of a larger test suite (hence increasing the time needed to run the minimal test suite).
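The trade-off described above can be illustrated with a toy sketch: a change to an overriding method may be exercised by tests that only call the method through a supertype, via dynamic dispatch. All names and data structures below are hypothetical illustrations, not the actual ChEOPSJ implementation.

```python
# Hypothetical sketch of change-based test selection with and without
# polymorphism. The hierarchy, tests, and call maps are illustrative only.

# Class hierarchy: Shape is the supertype of Circle; Circle overrides area().
SUPERTYPE = {"Circle": "Shape"}

# Which (class, method) call sites each test exercises.
TEST_CALLS = {
    "test_shape_area": [("Shape", "area")],    # calls through the supertype
    "test_circle_area": [("Circle", "area")],  # calls the subtype directly
}

def select_tests(changed_method, polymorphism=False):
    """Return the tests to rerun for a change to `changed_method`."""
    cls, name = changed_method
    targets = {(cls, name)}
    if polymorphism:
        # A call on any supertype may dynamically dispatch to the changed
        # override, so tests written against supertypes are also relevant.
        c = cls
        while c in SUPERTYPE:
            c = SUPERTYPE[c]
            targets.add((c, name))
    return sorted(test for test, calls in TEST_CALLS.items()
                  if targets & set(calls))

# Changing Circle.area(): without polymorphism only the direct test is
# selected (a false negative for test_shape_area); with polymorphism the
# supertype test is added, giving a larger but more accurate suite.
print(select_tests(("Circle", "area")))                     # ['test_circle_area']
print(select_tests(("Circle", "area"), polymorphism=True))  # ['test_circle_area', 'test_shape_area']
```

The extra supertype lookup is exactly what enlarges the selected suite: every overridable method drags in tests bound to its ancestors, which is the accuracy-versus-size trade-off the paper measures.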

Keywords

test selection · unit testing · change-based test selection · polymorphism · ChEOPSJ



Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Ali Parsai (1)
  • Quinten David Soetens (1)
  • Alessandro Murgia (1)
  • Serge Demeyer (1)
  1. University of Antwerp, Antwerpen, Belgium
