Coverage-Based Testing with Symbolic Transition Systems

  • Petra van den Bos
  • Jan Tretmans
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11823)

Abstract

We present a model-based testing approach for systems that combine state-transition-based control flow with data elements such as variables and data-dependent transitions. Test generation and execution are driven by model coverage: we generate test cases that aim to reach all transitions of the model. To obtain a test case reaching a given transition, we combine reachability in the control flow with satisfiability of the data constraints of the model. Concrete values for data parameters are generated on-the-fly, i.e., during test execution, so that outputs received from the system can be taken into account for the inputs provided later. Since the satisfiability problem is undecidable in general, SMT solvers may return the result ‘unknown’; our algorithm deals with this case explicitly. We implemented our method in Maude combined with Z3, and use this implementation to demonstrate the applicability of our method on the Bounded Retransmission Protocol benchmark. We measure performance by counting the number of inputs and outputs needed to discover bugs in mutants, i.e., in non-conforming variants of the specification. By this measure our method performs 3 times better, according to the geometric mean, than random testing as implemented by the tool TorXakis.
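
To make the satisfiability step concrete, the sketch below is our own minimal illustration, not the paper's Maude implementation; the function, guard, and timeout value are hypothetical. It uses Z3's Python bindings to instantiate a data parameter so that a transition guard holds, and distinguishes the three solver outcomes the algorithm must handle, including ‘unknown’.

    # Hypothetical sketch of the satisfiability step; not the authors' code.
    from z3 import Int, Solver, sat, unsat

    def concrete_input(guard, var):
        """Find a value for `var` satisfying the transition guard, if possible."""
        s = Solver()
        s.set("timeout", 1000)   # ms; hard queries then come back 'unknown'
        s.add(guard)
        r = s.check()
        if r == sat:
            return s.model()[var]   # witness value: use it as the test input
        if r == unsat:
            return None             # transition not reachable under this guard
        # r is 'unknown': neither a witness nor a proof of unreachability;
        # a coverage-based algorithm must defer or re-target this transition.
        raise TimeoutError("solver returned 'unknown'; defer this coverage goal")

    x = Int("x")
    print(concrete_input(x * x == 25, x))   # prints 5 or -5

In the paper's on-the-fly setting such a query would be issued during test execution, with constraints derived from previously received outputs added before solving.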

Acknowledgments

This work was largely performed while the first author was visiting INRIA/the University of Lille. We would like to thank Vlad Rusu for making this visit possible, and for his feedback and support with using Maude.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Institute for Computing and Information Sciences, Radboud University, Nijmegen, The Netherlands
  2. ESI (TNO), Eindhoven, The Netherlands
