Comparing mutation coverage against branch coverage in an industrial setting

Abstract

The state-of-the-practice in software development is driven by constant change, fueled by continuous integration servers. Such constant change demands frequent, fully automated tests capable of detecting faults immediately upon project build. As the fault detection capability of the test suite becomes so important, modern software development teams continuously monitor the quality of the test suite as well. However, it appears that the state-of-the-practice is reluctant to adopt strong coverage metrics (namely mutation coverage), instead relying on weaker kinds of coverage (namely branch coverage). In this paper, we investigate three reasons that prohibit the adoption of mutation coverage in a continuous integration setting: (1) the difficulty of its integration into the build system, (2) the perception that branch coverage is “good enough”, and (3) the performance overhead during the build. Our investigation is based on a case study involving four open source systems and one industrial system. We demonstrate that mutation coverage reveals additional weaknesses in the test suite compared to branch coverage and that it is able to do so with an acceptable performance overhead during project build.
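The abstract's central claim — that a test suite can satisfy branch coverage yet still miss faults that mutation coverage exposes — can be illustrated with a minimal sketch. The `is_adult` function and its relational-operator mutant below are hypothetical, not taken from the paper's case studies:

```python
# Sketch (hypothetical example): full branch coverage, yet a mutant survives.

def is_adult(age):
    """Original: boundary at 18, inclusive."""
    return age >= 18

def is_adult_mutant(age):
    """Mutant: '>=' replaced by '>' (a relational-operator mutation)."""
    return age > 18

# This suite exercises both the true and false branches of is_adult,
# so it achieves 100% branch coverage...
suite = [(25, True), (10, False)]
assert all(is_adult(age) == expected for age, expected in suite)

# ...yet every test also passes against the mutant. The mutant survives,
# revealing that the boundary case age == 18 is untested.
assert all(is_adult_mutant(age) == expected for age, expected in suite)

# Adding the boundary test kills the mutant: the two versions now disagree.
assert is_adult(18) is True
assert is_adult_mutant(18) is False
```

Branch coverage only asks whether each branch was taken; mutation coverage additionally asks whether the tests can distinguish the program from slightly altered variants, which is why it flags the missing boundary test here.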



Notes

  1. Mutation coverage is often expressed as a ratio rather than a percentage, but for the sake of consistency, we use it as a percentage here.



Acknowledgements

We would like to express our gratitude to the HE/Imaging IT Clinical Applications team at Agfa Healthcare, Belgium, for allowing us to conduct these analyses on the Segmentation component of the Impax ES medical imaging software. This work is sponsored by:

(a) the ITEA3 TESTOMAT Project (number 16032), sponsored by VINNOVA, Sweden's innovation agency;

(b) Flanders Make vzw, the strategic research centre for manufacturing industry.

Author information


Corresponding author

Correspondence to Ali Parsai.



About this article


Cite this article

Parsai, A., Demeyer, S. Comparing mutation coverage against branch coverage in an industrial setting. Int J Softw Tools Technol Transfer 22, 365–388 (2020). https://doi.org/10.1007/s10009-020-00567-y


Keywords

  • Software Testing
  • Mutation Testing
  • Branch Coverage
  • Continuous Integration
  • Industrial Setting