
Mutation Testing with Hyperproperties

  • Andreas Fellner
  • Mitra Tabaei Befrouei
  • Georg Weissenbacher
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11724)

Abstract

We present a new method for model-based mutation-driven test case generation. Mutants are generated by making small syntactic modifications to the model or source code of the system under test. A test case kills a mutant if the behavior of the mutant deviates from the original system when running the test. In this work, we use hyperproperties—which allow us to express relations between multiple executions—to formalize different notions of killing for both deterministic and non-deterministic models. The resulting hyperproperties are universal in the sense that they apply to arbitrary reactive models and mutants. Moreover, an off-the-shelf model checking tool for hyperproperties can be used to generate test cases. We evaluate our approach on a number of models expressed in two different modeling languages by generating tests using a state-of-the-art mutation testing tool.
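
As an illustration of the idea (a sketch under assumptions, not the paper's exact encoding), consider a deterministic model in which a test kills a mutant when some pair of executions of the original system and the mutant agrees on the inputs until the outputs deviate. In HyperLTL, a temporal logic for hyperproperties, such a killing condition could be sketched as

\(\exists \pi.\ \exists \pi'.\ \neg mut_{\pi} \wedge mut_{\pi'} \wedge \big((i_{\pi} \leftrightarrow i_{\pi'})\ \mathcal{U}\ ((i_{\pi} \leftrightarrow i_{\pi'}) \wedge \neg(o_{\pi} \leftrightarrow o_{\pi'}))\big)\)

where \(i\) and \(o\) stand for input and output propositions, \(mut\) is an assumed marker proposition distinguishing mutant traces from original traces in a combined model, and \(\mathcal{U}\) is the until operator. The input sequence of a witnessing trace pair \((\pi, \pi')\) then serves as a test that kills the mutant.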

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Andreas Fellner (1, 2)
  • Mitra Tabaei Befrouei (2)
  • Georg Weissenbacher (2)
  1. AIT Austrian Institute of Technology, Vienna, Austria
  2. TU Wien, Vienna, Austria
