
LittleDarwin: A Feature-Rich and Extensible Mutation Testing Framework for Large and Complex Java Systems

  • Ali Parsai
  • Alessandro Murgia
  • Serge Demeyer
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10522)

Abstract

Mutation testing is a well-studied method for increasing the quality of a test suite. We designed LittleDarwin as a mutation testing framework able to cope with large and complex Java software systems, while remaining easily extensible with new experimental components. LittleDarwin addresses two existing problems in the domain of mutation testing: providing a tool that works in an industrial setting, and keeping that tool open to extension with cutting-edge techniques coming from academia. LittleDarwin already offers higher-order mutation, null type mutants, mutant sampling, manual mutation, and mutant subsumption analysis. No tool available today combines all these features while remaining able to work with typical industrial software systems.
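
To make these terms concrete, the following minimal Java sketch (a hypothetical method and operator choices, not taken from LittleDarwin's actual operator set) contrasts a first-order mutant, which introduces a single small syntactic change, with a higher-order mutant, which combines several such changes into one faulty version; a test suite kills a mutant when at least one of its tests fails against that mutant.

    // Hypothetical sketch contrasting first-order and higher-order mutants.
    class MutantSketch {

        // Original method under test: apply a 10-unit discount above a threshold.
        static int discountedPrice(int price, int threshold) {
            if (price > threshold) {
                return price - 10;
            }
            return price;
        }

        // First-order mutant: a single syntactic change ('>' becomes '>=').
        static int firstOrderMutant(int price, int threshold) {
            if (price >= threshold) {
                return price - 10;
            }
            return price;
        }

        // Higher-order mutant: two changes combined in one faulty version
        // ('>' becomes '>=' and '-' becomes '+').
        static int higherOrderMutant(int price, int threshold) {
            if (price >= threshold) {
                return price + 10;
            }
            return price;
        }

        public static void main(String[] args) {
            // A test asserting discountedPrice(100, 100) == 100 kills both mutants:
            // the first-order mutant returns 90 and the higher-order mutant returns 110.
            System.out.println(discountedPrice(100, 100));  // 100
            System.out.println(firstOrderMutant(100, 100)); // 90
            System.out.println(higherOrderMutant(100, 100)); // 110
        }
    }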

Keywords

Software testing · Mutation testing · Mutation testing tool · Complex Java systems

Acknowledgments

This work is sponsored by the Institute for the Promotion of Innovation through Science and Technology in Flanders, under the project entitled Change-centric Quality Assurance (CHAQ), project number 120028.


Copyright information

© IFIP International Federation for Information Processing 2017

Authors and Affiliations

  1. Antwerp Systems and Software Modelling Lab, University of Antwerp, Antwerpen, Belgium
