
Excluding code from test coverage: practices, motivations, and impact

  • Published:
Empirical Software Engineering

Abstract

Test coverage measures the percentage of code that is covered (or not covered) by tests. In practice, not all code is equally important for coverage analysis, such as code that will not be executed during tests. Some coverage tools support excluding code from coverage reports; however, little is known about what code tends to be excluded, the reasons behind it, and the impact on coverage analysis. In this paper, we present an empirical study to understand code exclusion practices, their motivations, and their impact on test coverage. We first mine popular Python projects that adopt test coverage to assess code exclusion practices. We find that (1) over one third of the projects perform coverage exclusion, (2) 75% of the excluded code is already marked for exclusion when it is created, and (3) developers exclude non-runnable, debug-only, and defensive code, as well as platform-specific code and conditional imports. Next, we explore the motivations behind the exclusions and the importance of test coverage nowadays: (4) most code is excluded because it is already untested, low-level, or complex, and (5) distinct test coverage recommendations are available to developers, such as covering the change, exploring coverage reports, and increasing coverage. Lastly, we assess the impact of code exclusion on test coverage. We detect that (6) code exclusion may impact test coverage by decreasing the number of statements that should be covered by tests, and (7) test coverage can be refined by following code exclusion recommendations. Based on our findings, we discuss implications for both practitioners and researchers to improve coverage analysis, tools, and documentation, as well as to inspire novel research on test coverage.
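To illustrate the exclusion categories the abstract names (non-runnable, debug-only, defensive, and platform-specific code), here is a minimal hypothetical Python module that marks such lines with coverage.py's default `# pragma: no cover` comment; the module, class, and function names are invented for illustration only:

```python
import sys


def backend_name():
    """Pick a platform-specific backend (hypothetical example)."""
    if sys.platform == "win32":  # pragma: no cover (platform-specific)
        return "windows"
    return "generic"


class Shape:
    """Base class whose abstract method is defensive code."""

    def area(self):
        # Subclasses must override; this line is never meant to run.
        raise NotImplementedError  # pragma: no cover (defensive)

    def __repr__(self):  # pragma: no cover (debug-only helper)
        return f"<Shape at {id(self):#x}>"


if __name__ == "__main__":  # pragma: no cover (non-runnable under tests)
    print(backend_name())
```

Lines carrying the pragma are dropped from the set of coverable statements, so a report no longer flags them as untested.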


Figs. 1–5


Data Availability

The datasets generated during and/or analysed during the current study are available in the Zenodo repository, https://doi.org/10.5281/zenodo.5703716.

Notes

  1. https://coverage.readthedocs.io/en/coverage-5.3/excluding.html

  2. https://github.com/gotwarlost/istanbul#ignoring-code-for-coverage

  3. https://github.com/scikit-learn/scikit-learn

  4. https://codecov.io/github/scikit-learn/scikit-learn

  5. GitHub ranking: https://bit.ly/2XHn2PY

  6. TIOBE ranking: https://www.tiobe.com/tiobe-index

  7. https://docs.python.org/3/library/trace.html

  8. https://pypistats.org/top

  9. Commit URL: https://bit.ly/3m5PDtf

  10. Commit URL: https://bit.ly/2V4orPI

  11. Commit URL: https://bit.ly/2V0IkYb

  12. https://docs.ray.io/en/master/getting-involved.html#testing

  13. https://pip.pypa.io/en/latest/development/getting-started/#running-tests

  14. https://devguide.python.org/coverage

  15. We manually inspected all full file names and detected three patterns for external code: lib, vendor, and thirdparty. Thus, the 377 cases refer to file names including these patterns.

  16. Notice that this number is larger than the 534 cases of RQ1 because here we are assessing version history, while in RQ1 we only assessed the last version of the systems.

  17. https://docs.python.org/3/library/__main__.html

  18. https://docs.python.org/3/library/typing.html#constant

  19. e.g., https://bit.ly/36QmUDA

  20. https://docs.python.org/3/library/sys.html

  21. https://docs.python.org/3/library/os.html

  22. Commit URL: https://bit.ly/2JHpO4M

  23. Commit URL: https://bit.ly/33O5rd6

  24. Commit URL: https://bit.ly/3lXQ48a

  25. https://docs.python.org/3/library/exceptions.html

  26. Commit URL: https://bit.ly/3oA0UTK

  27. https://docs.python.org/3/reference/datamodel.html#object.__repr__

  28. https://docs.python.org/3/library/exceptions.html#NotImplementedError

  29. Commit URL: https://bit.ly/37KJ0XB

  30. Commit URL: https://bit.ly/3mXXREl

  31. Commit URL: https://bit.ly/370DRv3

  32. Commit URL: https://bit.ly/36YjPS6

  33. Commit URL: https://bit.ly/33ZzM8K

  34. Commit URL: https://bit.ly/372NCJp

  35. Commit URL: https://bit.ly/3oGB0xG

  36. Commit URL: https://bit.ly/2W4m6EV

  37. Commit URL: https://bit.ly/2Kfl2vb

  38. Commit URL: https://bit.ly/2W42EIG

  39. Commit URL: https://bit.ly/373or9q

  40. Commit URL: https://bit.ly/2W45L32

  41. Commit URL: https://bit.ly/3naFKeC

  42. Commit URL: https://bit.ly/3oDVRS6

  43. Commit URL: https://bit.ly/3qKhaDs

  44. Commit URL: https://bit.ly/2W4BhOp

  45. Commit URL: https://bit.ly/3m1Wxil

  46. Commit URL: https://bit.ly/39YdWpZ

  47. Commit URL: https://bit.ly/2W286LX

  48. Testing guideline of Coala: https://bit.ly/3gAGCGR

  49. Issue URL: https://bit.ly/3gAQCjr

  50. https://github.com/gotwarlost/istanbul#ignoring-code-for-coverage
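Beyond per-line pragmas, coverage.py (footnote 1) also supports project-wide exclusion through regex patterns in its configuration file, which matches several of the constructs discussed in the notes above (`__repr__`, `NotImplementedError`, `__main__` blocks, `TYPE_CHECKING` imports). A sketch based on the patterns shown in the coverage.py documentation:

```ini
; .coveragerc — exclude matching lines from every report
[report]
exclude_lines =
    pragma: no cover
    def __repr__
    raise NotImplementedError
    if __name__ == .__main__.:
    if TYPE_CHECKING:
```

Each entry is a regular expression; any source line that matches is removed from the set of statements the tool expects tests to cover.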


Acknowledgements

This research is supported by CAPES, CNPq, and FAPEMIG.

Author information

Correspondence to Andre Hora.

Additional information

Communicated by: Kelly Blincoe, Mei Nagappan


This article belongs to the Topical Collection: Mining Software Repositories (MSR)


About this article


Cite this article

Hora, A. Excluding code from test coverage: practices, motivations, and impact. Empir Software Eng 28, 16 (2023). https://doi.org/10.1007/s10664-022-10259-7
