
A comprehensive study of pseudo-tested methods

  • Oscar Luis Vera-Pérez
  • Benjamin Danglot
  • Martin Monperrus
  • Benoit Baudry

Abstract

Pseudo-tested methods are defined as follows: they are covered by the test suite, yet no test case fails when the method body is removed, i.e., when all the effects of the method are suppressed. This intriguing concept was coined in 2016 by Niedermayr and colleagues, who showed that such methods are systematically present, even in well-tested projects with high statement coverage. This work presents a novel analysis of pseudo-tested methods. First, we run a replication of Niedermayr’s study with 28K+ methods, strengthening its external validity through the use of new tools and new study subjects. Second, we perform a systematic characterization of these methods, both quantitatively and qualitatively, including an extensive manual analysis of 101 pseudo-tested methods. The first part of the study confirms Niedermayr’s results: pseudo-tested methods exist in all our study subjects. Our in-depth characterization of pseudo-tested methods leads to two key insights: pseudo-tested methods are significantly less tested than the other methods, yet, for most of them, developers would not pay the testing price to fix this situation. This calls for future work on targeted test generation that specifies pseudo-tested methods without spending developer time.
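
To make this definition concrete, the sketch below shows, in a few lines of Java with JUnit 4, what a pseudo-tested method can look like. The class, method, and test names are purely illustrative assumptions and do not come from the study subjects: the test executes enableVerbose, so the method is covered, yet no assertion observes its effect, so replacing the method body with an empty block would leave the test suite green.

    import org.junit.Assert;
    import org.junit.Test;

    // Hypothetical illustration (not taken from the study subjects)
    // of a covered yet pseudo-tested method.
    class Configuration {
        private boolean verbose = false;

        // Replacing this body with an empty block, i.e., suppressing all
        // its effects, would not make any test fail: the method is
        // pseudo-tested despite being covered.
        void enableVerbose() {
            this.verbose = true;
        }

        // The effect could be observed through this getter,
        // but the test below never does so.
        boolean isVerbose() {
            return verbose;
        }
    }

    class ConfigurationTest {
        @Test
        public void callsEnableVerbose() {
            Configuration config = new Configuration();
            config.enableVerbose();        // covers the method...
            Assert.assertNotNull(config);  // ...but never checks its effect
        }
    }

In practice, such methods are detected automatically, for instance with the Descartes engine for PIT (reference 25 in the list below), rather than by manual inspection of each test.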

Keywords

Software testing · Software developers · Pseudo-tested methods · Test quality · Program analysis


Acknowledgments

We would like to acknowledge the invaluable help and feedback provided by the development teams of authzforce, spoon and pdfbox. We also express our appreciation to Simon Urli, Daniel Le Berre, Arnaud Blouin, Marko Ivanković, Goran Petrovic and Andy Zaidman for their feedback and their very accurate suggestions. This work has been partially supported by the EU Project STAMP ICT-16-10 No. 731529 and by the Wallenberg Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation.

References

  1. Andrews JH, Briand LC, Labiche Y (2005) Is Mutation an Appropriate Tool for Testing Experiments? In: Proceedings of the 27th International Conference on Software Engineering, ICSE ’05. ACM, New York, pp 402–411
  2. Androutsopoulos K, Clark D, Dan H, Hierons RM, Harman M (2014) An analysis of the relationship between conditional entropy and failed error propagation in software testing. In: Proceedings of the 36th International Conference on Software Engineering. ACM, pp 573–583
  3. Coles H, Laurent T, Henard C, Papadakis M, Ventresque A (2016) PIT: A Practical Mutation Testing Tool for Java (Demo). In: Proceedings of the 25th International Symposium on Software Testing and Analysis, ISSTA 2016. ACM, New York, pp 449–452
  4. Daran M, Thévenod-Fosse P (1996) Software Error Analysis: A Real Case Study Involving Real Faults and Mutations. In: Proceedings of the 1996 ACM SIGSOFT International Symposium on Software Testing and Analysis, ISSTA ’96. ACM, New York, pp 158–171
  5. Delahaye M, Bousquet LD (2013) A Comparison of Mutation Analysis Tools for Java. In: 2013 13th International Conference on Quality Software, pp 187–195
  6. Delamaro ME, Offutt J, Ammann P (2014) Designing deletion mutation operators. In: 2014 IEEE Seventh International Conference on Software Testing, Verification and Validation, pp 11–20
  7. DeMillo RA, Lipton RJ, Sayward FG (1978) Hints on test data selection: Help for the practicing programmer. Computer 11(4):34–41
  8. Deng L, Offutt J, Li N (2013) Empirical evaluation of the statement deletion mutation operator. In: 2013 IEEE Sixth International Conference on Software Testing, Verification and Validation, pp 84–93
  9. Durelli VHS, Souza NMD, Delamaro ME (2017) Are deletion mutants easier to identify manually? In: 2017 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW), pp 149–158
  10. Gopinath R, Ahmed I, Alipour MA, Jensen C, Groce A (2017) Does choice of mutation tool matter? Softw Qual J 25(3):871–920
  11. Gopinath R, Jensen C, Groce A (2014) Mutations: How close are they to real faults? In: 2014 IEEE 25th International Symposium on Software Reliability Engineering (ISSRE). IEEE, pp 189–200
  12. Gourlay JS (1983) A mathematical framework for the investigation of testing. IEEE Trans Softw Eng SE-9(6):686–709
  13. Jahangirova G, Clark D, Harman M, Tonella P (2016) Test oracle assessment and improvement. In: Proceedings of the 25th International Symposium on Software Testing and Analysis. ACM, pp 247–258
  14. Just R, Schweiggert F, Kapfhammer GM (2011) MAJOR: An efficient and extensible tool for mutation analysis in a Java compiler. In: Proceedings of the International Conference on Automated Software Engineering (ASE), pp 612–615
  15. Just R, Jalali D, Inozemtseva L, Ernst MD, Holmes R, Fraser G (2014) Are Mutants a Valid Substitute for Real Faults in Software Testing? In: Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering, FSE 2014. ACM, New York, pp 654–665
  16. Kampenes VB, Dybå T, Hannay JE, Sjøberg DIK (2007) A systematic review of effect size in software engineering experiments. Inf Softw Technol 49(11-12):1073–1086
  17. Kintis M, Papadakis M, Papadopoulos A, Valvis E, Malevris N (2016) Analysing and Comparing the Effectiveness of Mutation Testing Tools: A Manual Study. In: 2016 IEEE 16th International Working Conference on Source Code Analysis and Manipulation (SCAM), pp 147–156
  18. Laurent T, Papadakis M, Kintis M, Henard C, Traon YL, Ventresque A (2017) Assessing and Improving the Mutation Testing Practice of PIT. In: 2017 IEEE International Conference on Software Testing, Verification and Validation (ICST), pp 430–435
  19. Niedermayr R, Juergens E, Wagner S (2016) Will my tests tell me if I break this code? In: Proceedings of the International Workshop on Continuous Software Evolution and Delivery. ACM Press, New York, pp 23–29
  20. Petrovic G, Ivankovic M (2018) State of mutation testing at Google. In: Proceedings of the 40th International Conference on Software Engineering: Software Engineering in Practice (SEIP)
  21. Schuler D, Zeller A (2013) Checked coverage: an indicator for oracle quality. Softw Test Verif Reliab 23(7):531–551
  22. Shull FJ, Carver JC, Vegas S, Juristo N (2008) The role of replications in empirical software engineering. Empir Softw Eng 13(2):211–218
  23. Staats M, Whalen MW, Heimdahl MP (2011) Programs, tests, and oracles: the foundations of testing revisited. In: Proceedings of the 33rd International Conference on Software Engineering. ACM, pp 391–400
  24. Untch RH (2009) On reduced neighborhood mutation analysis using a single mutagenic operator. In: Proceedings of the 47th Annual Southeast Regional Conference, ACM-SE 47. ACM, New York, pp 71:1–71:4
  25. Vera-Pérez OL, Monperrus M, Baudry B (2018) Descartes: A PITest engine to detect pseudo-tested methods. In: Proceedings of the 2018 33rd ACM/IEEE International Conference on Automated Software Engineering (ASE ’18), pp 908–911

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  • Oscar Luis Vera-Pérez (1)
  • Benjamin Danglot (2)
  • Martin Monperrus (3)
  • Benoit Baudry (3)
  1. Inria Rennes - Bretagne Atlantique, Rennes, France
  2. Inria Lille - Nord Europe, Villeneuve d’Ascq, France
  3. KTH Royal Institute of Technology in Stockholm, Stockholm, Sweden
