Pseudo-tested methods are defined as follows: they are covered by the test suite, yet no test case fails when the method body is removed, i.e., when all the effects of the method are suppressed. This intriguing concept was coined in 2016 by Niedermayr and colleagues, who showed that such methods are systematically present, even in well-tested projects with high statement coverage. This work presents a novel analysis of pseudo-tested methods. First, we run a replication of Niedermayr's study with 28K+ methods, enhancing its external validity through the use of new tools and new study subjects. Second, we perform a systematic characterization of these methods, both quantitatively and qualitatively, with an extensive manual analysis of 101 pseudo-tested methods. The first part of the study confirms Niedermayr's results: pseudo-tested methods exist in all our subjects. Our in-depth characterization of pseudo-tested methods leads to two key insights: pseudo-tested methods are significantly less tested than the other methods; yet, for most of them, the developers would not pay the testing price to fix this situation. This calls for future work on targeted test generation to specify those pseudo-tested methods without spending developer time.
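To make the definition concrete, here is a minimal, hypothetical Java sketch (the class, methods, and test are invented for illustration and do not come from the study subjects): the method `evict()` is executed by the test, yet none of its effects are checked by any assertion, so removing its body cannot make the test fail.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical example of a pseudo-tested method (all names invented).
public class PseudoTestedExample {

    static Map<String, Integer> cache = new HashMap<>();

    // Covered by the "test" in main(), but none of its effects are
    // observed: replacing this body with an empty one would not fail
    // the test, which makes evict() pseudo-tested.
    static void evict(String key) {
        cache.remove(key);
    }

    static int lookup(String key) {
        return cache.getOrDefault(key, -1);
    }

    // Stands in for a unit test with a weak oracle.
    public static void main(String[] args) {
        cache.put("a", 1);
        evict("b"); // executes evict(), but on a key the oracle ignores
        if (lookup("a") != 1) {
            throw new AssertionError("unexpected cache state");
        }
        System.out.println("test passed");
    }
}
```

Re-running this test with `evict()`'s body deleted produces exactly the same outcome, which is precisely the signal that body-removal analysis uses to flag the method as pseudo-tested.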
Compared to Niedermayr et al. (2016), we add two new transformations: one that returns null and another that returns an empty array. These additions expand the scope of methods that can be analyzed.
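As a hedged sketch of what these transformations do (method names and bodies are invented for illustration), the class below shows mutated method bodies in the style of extreme mutation: the original implementations are kept as comments, and the last two methods show the two new transformations, returning null and returning an empty array.

```java
// Hypothetical sketch of extreme transformations (all names invented).
public class ExtremeMutants {

    // Original: int sum(int[] xs) { int s = 0; for (int x : xs) s += x; return s; }
    // Existing transformation: replace the body with a constant return.
    static int sum(int[] xs) {
        return 0;
    }

    // Original: String greet(String name) { return "Hello, " + name; }
    // New transformation: return null.
    static String greet(String name) {
        return null;
    }

    // Original: int[] range(int n) { fills and returns an array of length n }
    // New transformation: return an empty array.
    static int[] range(int n) {
        return new int[0];
    }

    public static void main(String[] args) {
        System.out.println(sum(new int[]{1, 2, 3})); // prints 0
        System.out.println(greet("world"));          // prints null
        System.out.println(range(5).length);         // prints 0
    }
}
```

A method for which no such transformed variant is detected by the test suite is reported as pseudo-tested; the null and empty-array variants let the analysis cover methods whose return types were previously skipped.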
The computation of the Pearson coefficient and the Wilcoxon test were performed using the features of the R language.
The violin plots for the pseudo-tested methods of commons-cli and jopt-simple are not displayed, as these projects have too few methods in this category.
Andrews JH, Briand LC, Labiche Y (2005) Is Mutation an Appropriate Tool for Testing Experiments? In: Proceedings of the 27th International Conference on Software Engineering, ICSE '05. ACM, New York, pp 402–411
Androutsopoulos K, Clark D, Dan H, Hierons RM, Harman M (2014) An analysis of the relationship between conditional entropy and failed error propagation in software testing. In: Proceedings of the 36th International Conference on Software Engineering. ACM, pp 573–583
Coles H, Laurent T, Henard C, Papadakis M, Ventresque A (2016) PIT: A Practical Mutation Testing Tool for Java (Demo). In: Proceedings of the 25th International Symposium on Software Testing and Analysis, ISSTA 2016. ACM, New York, pp 449–452
Daran M, Thévenod-Fosse P (1996) Software Error Analysis: A Real Case Study Involving Real Faults and Mutations. In: Proceedings of the 1996 ACM SIGSOFT International Symposium on Software Testing and Analysis, ISSTA '96. ACM, New York, pp 158–171
Delahaye M, Bousquet LD (2013) A Comparison of Mutation Analysis Tools for Java. In: 2013 13th International Conference on Quality Software, pp 187–195
Delamaro ME, Offutt J, Ammann P (2014) Designing deletion mutation operators. In: 2014 IEEE Seventh International Conference on Software Testing, Verification and Validation, pp 11–20
DeMillo RA, Lipton RJ, Sayward FG (1978) Hints on test data selection: Help for the practicing programmer. Computer 11(4):34–41
Deng L, Offutt J, Li N (2013) Empirical evaluation of the statement deletion mutation operator. In: 2013 IEEE Sixth International Conference on Software Testing, Verification and Validation, pp 84–93
Durelli VHS, Souza NMD, Delamaro ME (2017) Are deletion mutants easier to identify manually? In: 2017 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW), pp 149–158
Gopinath R, Ahmed I, Alipour MA, Jensen C, Groce A (2017) Does choice of mutation tool matter? Softw Qual J 25(3):871–920
Gopinath R, Jensen C, Groce A (2014) Mutations: How close are they to real faults? In: 2014 IEEE 25th International Symposium on Software Reliability Engineering (ISSRE). IEEE, pp 189–200
Gourlay JS (1983) A mathematical framework for the investigation of testing. IEEE Transactions on Software Engineering SE-9(6):686–709
Jahangirova G, Clark D, Harman M, Tonella P (2016) Test oracle assessment and improvement. In: Proceedings of the 25th International Symposium on Software Testing and Analysis. ACM, pp 247–258
Just R, Schweiggert F, Kapfhammer GM (2011) MAJOR: An efficient and extensible tool for mutation analysis in a Java compiler. In: Proceedings of the International Conference on Automated Software Engineering (ASE), pp 612–615
Just R, Jalali D, Inozemtseva L, Ernst MD, Holmes R, Fraser G (2014) Are Mutants a Valid Substitute for Real Faults in Software Testing? In: Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering, FSE 2014. ACM, New York, pp 654–665
Kampenes VB, Dybå T, Hannay JE, Sjøberg DIK (2007) A systematic review of effect size in software engineering experiments. Inf Softw Technol 49(11-12):1073–1086
Kintis M, Papadakis M, Papadopoulos A, Valvis E, Malevris N (2016) Analysing and Comparing the Effectiveness of Mutation Testing Tools: A Manual Study. In: 2016 IEEE 16th International Working Conference on Source Code Analysis and Manipulation (SCAM), pp 147–156
Laurent T, Papadakis M, Kintis M, Henard C, Traon YL, Ventresque A (2017) Assessing and Improving the Mutation Testing Practice of PIT. In: 2017 IEEE International Conference on Software Testing, Verification and Validation (ICST), pp 430–435
Niedermayr R, Juergens E, Wagner S (2016) Will my tests tell me if I break this code? In: Proceedings of the International Workshop on Continuous Software Evolution and Delivery. ACM Press, New York, pp 23–29
Petrovic G, Ivankovic M (2018) State of mutation testing at Google. In: Proceedings of the 40th International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP)
Schuler D, Zeller A (2013) Checked coverage: an indicator for oracle quality. Softw Test Verification Reliab 23(7):531–551
Shull FJ, Carver JC, Vegas S, Juristo N (2008) The role of replications in empirical software engineering. Empir Softw Eng 13(2):211–218
Staats M, Whalen MW, Heimdahl MP (2011) Programs, tests, and oracles: the foundations of testing revisited. In: Proceedings of the 33rd International Conference on Software Engineering. ACM, pp 391–400
Untch RH (2009) On reduced neighborhood mutation analysis using a single mutagenic operator. In: Proceedings of the 47th Annual Southeast Regional Conference, ACM-SE 47. ACM, New York, pp 71:1–71:4
Vera-Pérez OL, Monperrus M, Baudry B (2018) Descartes: A pitest engine to detect pseudo-tested methods. In: Proceedings of the 2018 33rd ACM/IEEE International Conference on Automated Software Engineering (ASE ’18), pp 908–911
We would like to acknowledge the invaluable help and feedback provided by the development teams of authzforce, spoon and pdfbox. We also express our appreciation to Simon Urli, Daniel Le Berre, Arnaud Blouin, Marko Ivanković, Goran Petrovic and Andy Zaidman for their feedback and their very accurate suggestions. This work has been partially supported by the EU Project STAMP ICT-16-10 No. 731529 and by the Wallenberg Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Communicated by: Paolo Tonella
Appendix A: Source code for the Study Subjects
Column “Project” lists the projects included in the present study. Column “URL” contains links to the available source code. Column “Commit ID” contains the SHA-1 hash identifying the commit with the source code state that was used in this study.
Cite this article
Vera-Pérez, O.L., Danglot, B., Monperrus, M. et al. A comprehensive study of pseudo-tested methods. Empir Software Eng 24, 1195–1225 (2019). https://doi.org/10.1007/s10664-018-9653-2
Keywords
- Software testing
- Software developers
- Pseudo-tested methods
- Test quality
- Program analysis