
Rotten green tests in Java, Pharo and Python

An empirical study

Abstract

Rotten Green Tests are tests that pass, but not because the assertions they contain are true: a rotten test passes because some or all of its assertions are not actually executed. The presence of a rotten green test is a test smell, and a bad one, because the existence of a test gives us false confidence that the code under test is valid, when in fact that code may not have been tested at all. This article reports on an empirical evaluation of the tests in a corpus of projects found in the wild. We selected approximately one hundred mature projects written in each of Java, Pharo, and Python. We looked for rotten green tests in each project, taking into account test helper methods, inherited helpers, and trait composition. Previous work has shown the presence of rotten green tests in Pharo projects; the results reported here show that they are also present in Java and Python projects, and that they fall into similar categories. Furthermore, we found code bugs that were hidden by rotten tests in Pharo and Python. We also discuss two test smells, missed fail and missed skip, that arise from the misuse of testing frameworks and that we observed in tests written in all three languages.
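To make the phenomenon concrete, here is a minimal, hypothetical Python sketch using unittest (not taken from the studied corpus; the function and test names are invented). The test is reported green even though its only assertion is never executed, so the bug in the code under test goes unnoticed:

```python
import unittest


def absolute(x):
    """Hypothetical code under test; deliberately wrong for negative input."""
    return x


class RottenGreenExample(unittest.TestCase):
    def test_absolute_on_negatives(self):
        inputs = []  # the test data was never filled in
        for x in inputs:
            # The loop body is skipped for an empty list, so this assertion
            # is never executed -- yet the test passes: a rotten green test.
            self.assertEqual(abs(x), absolute(x))


if __name__ == "__main__":
    unittest.main()
```

Detecting such a test requires knowing not merely that the test passed, but which of its assertion call sites actually executed during the run.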

Notes

  1. https://www.tiobe.com/tiobe-index/

  2. https://redmonk.com/sogrady/2021/03/01/language-rankings-1-21/

  3. https://github.com/apache/commons-collections/blob/master/src/test/java/org/apache/commons/collections4/set/ListOrderedSetTest.java#L129

  4. https://github.com/alibaba/Sentinel/blob/103fa307e57de1b6660a8a004e9d8f18283b18c9/sentinel-core/src/test/java/com/alibaba/csp/sentinel/slots/statistic/metric/BucketLeapArrayTest.java#L209

  5. See commit at https://github.com/alibaba/Sentinel/commit/a65d16083dffd56069c0694d0f5417454d518b22#diff-c85162534c5c25c163e1279cd8f926b7L174

  6. A false negative would be generated by a call site that the analysis labelled “executed” but that was not actually executed.

  7. https://docs.python.org/fr/3/library/unittest.html

  8. https://docs.pytest.org/en/latest/index.html

  9. https://github.com/StevenCostiou/reflectivipy/

  10. A list of the assertion methods in unittest can be found at https://docs.python.org/3/library/unittest.html

  11. PYPL (PopularitY of Programming Language Index) https://pypl.github.io/PYPL.html was created by analyzing how often language tutorials are searched for on Google. By this metric, Python and Java were the two top programming languages in March 2021.

  12. Raw data are available at https://github.com/rmod-team/2020-rotten-green-tests-experiment-data

  13. https://github.com/pallets/jinja

  14. Repository accessed September 2019, commit 91a404073acac40a7945bf7d584e8b30bc7a08cb

  15. with commit 9ca80538d9e9418ae658772516f9b7dfb1e02ccd

  16. https://github.com/amaembo/streamex/blob/1190608bda70885f55ec791ebc0e76f89006db6a/src/test/java/one/util/streamex/InternalsTest.java#L59

  17. https://docs.pytest.org/en/latest/reference/reference.html#pytest.raises

  18. https://octoverse.github.com/

  19. See the discussion on StackOverflow at https://stackoverflow.com/q/12939362/1168342

  20. See https://docs.pytest.org/en/reorganize-docs/new-docs/user/skipping.html.

Author information

Corresponding author

Correspondence to Vincent Aranega.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Communicated by: Tingting Yu

About this article

Cite this article

Aranega, V., Delplanque, J., Martinez, M. et al. Rotten green tests in Java, Pharo and Python. Empir Software Eng 26, 130 (2021). https://doi.org/10.1007/s10664-021-10016-2

Keywords

  • Testing
  • Rotten Green Tests
  • Empirical study
  • Software quality