On Technical Debt in Software Testing - Observations from Industry

  • Conference paper
  • In: Leveraging Applications of Formal Methods, Verification and Validation. Software Engineering (ISoLA 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13702)

Abstract

Testing large complex systems in an agile way of working was a tough transition for systems with a large, active legacy that must honour backward compatibility. The transition from manual testing to fully automated test execution increased speed, but it also manifested technical debt. The agile way of working, with continuous build and test, creates a great deal of repetition by executing the same tests over and over. Overlap between agile teams producing similar test cases causes constant growth of the test suites. Despite the obvious improvement of automating millions of test cases, the numbers give management a false sense of security about how well the system is tested. The causes of technical debt should be addressed, instead of merely managing the symptoms. Technical debt in software testing can be addressed by refactoring, supported by known techniques such as clone analysis, similarity analysis, test suite reduction and optimization, and removal of known test smells. System quality can also be improved by utilizing metrics, e.g. code coverage and mutation score, or by using one of the many automated test design technologies. Why this is not addressed in industry has many causes. In this paper we describe observations from several industries, with a focus on large complex systems. The contribution lies in reflecting on observations made over the last decade and providing a vision that identifies improvements in the area of test automation and technical debt in software testing, i.e. test code, test suites, test organisation, strategy and execution. Our conclusion is that many test technologies are now mature enough to be brought into regular use. The main hindrance is that developers lack the skills and incentives to adopt them, compounded by a lack of well-educated testers.
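To make one of the techniques named above concrete, the sketch below illustrates greedy, coverage-based test suite reduction: keep only enough tests to preserve the coverage achieved by the full suite. This is a minimal illustration under assumed inputs, not the approach used in the systems the paper studies; the test names and coverage data are hypothetical.

```python
# Minimal sketch of greedy coverage-based test suite reduction
# (a classic set-cover heuristic). Hypothetical example data; not
# drawn from the systems discussed in the paper.

def reduce_suite(coverage: dict[str, set[str]]) -> list[str]:
    """Greedily select tests until the reduced suite covers every
    coverage item (e.g. branch) that the full suite covers."""
    uncovered = set().union(*coverage.values())
    reduced = []
    while uncovered:
        # Pick the test that covers the most still-uncovered items.
        best = max(coverage, key=lambda t: len(coverage[t] & uncovered))
        reduced.append(best)
        uncovered -= coverage[best]
    return reduced

if __name__ == "__main__":
    suite = {
        "test_login":           {"b1", "b2"},
        "test_login_duplicate": {"b1"},        # adds no new coverage
        "test_payment":         {"b3"},
    }
    print(reduce_suite(suite))  # ['test_login', 'test_payment']
```

The same greedy heuristic applies to other adequacy criteria the abstract mentions, for instance replacing branch coverage with the set of mutants each test kills when optimizing against mutation score instead.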

Supported by Ericsson AB and Mälardalen University.



Author information

Correspondence to Sigrid Eldh.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Eldh, S. (2022). On Technical Debt in Software Testing - Observations from Industry. In: Margaria, T., Steffen, B. (eds) Leveraging Applications of Formal Methods, Verification and Validation. Software Engineering. ISoLA 2022. Lecture Notes in Computer Science, vol 13702. Springer, Cham. https://doi.org/10.1007/978-3-031-19756-7_17

  • DOI: https://doi.org/10.1007/978-3-031-19756-7_17

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-19755-0

  • Online ISBN: 978-3-031-19756-7

  • eBook Packages: Computer Science, Computer Science (R0)
