Fault model-driven testing from FSM with symbolic inputs

  • Omer Nguena Timo
  • Alexandre Petrenko
  • S. Ramesh


Test generation based on one-by-one analysis of the potential implementations in a fault model is challenging: enumerating each and every implementation is impossible or inefficient, even when the fault model defines a finite but significant number of implementations. We propose an approach to fault model- and constraint solving-based testing from a particular type of extended finite state machine called a symbolic input finite state machine (SIFSM). Transitions of an SIFSM are labeled with symbolic inputs, i.e., predicates on input variables whose domains may be infinite. Potential implementations of a specification SIFSM, its mutants, are also represented as SIFSMs. The generated tests are complete in a given fault domain, i.e., a set of mutants specified by a so-called mutation machine. We define well-formed mutation SIFSMs for describing various types of faults. Given a mutation SIFSM, we develop methods for evaluating the adequacy of a test suite and for generating complete test suites. Experimental results with the prototype tool we have developed indicate that the approach is applicable to industrial-scale systems.
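The core ideas above can be illustrated with a minimal sketch. The code below is not the authors' formalism or tool (which uses constraint solving over a mutation machine); it is a simplified, hypothetical encoding assuming a single numeric input variable `x`, where each transition carries a symbolic input, i.e., a predicate on `x`, and a mutant is killed by a test whose output sequence differs from the specification's.

```python
# Minimal, illustrative sketch of an SIFSM with predicate-guarded
# transitions. All names (run, spec, mutant) are hypothetical.

def run(transitions, initial, inputs):
    """Execute an SIFSM on a concrete input sequence.

    transitions maps a state to a list of (guard, next_state, output)
    triples; each guard is a predicate on the input value x (a
    symbolic input over a possibly infinite domain).
    """
    state, outputs = initial, []
    for x in inputs:
        for guard, nxt, out in transitions[state]:
            if guard(x):                 # first enabled symbolic input fires
                state, outputs = nxt, outputs + [out]
                break
        else:
            raise ValueError("no enabled transition (machine not complete)")
    return outputs

# Specification: two states; the guards partition an infinite domain.
spec = {
    "s0": [(lambda x: x < 0, "s1", "a"), (lambda x: x >= 0, "s0", "b")],
    "s1": [(lambda x: x < 0, "s1", "a"), (lambda x: x >= 0, "s0", "a")],
}

# Mutant with a transfer fault: on x >= 0 in state s1 it stays in s1
# instead of returning to s0.
mutant = {
    "s0": spec["s0"],
    "s1": [(lambda x: x < 0, "s1", "a"), (lambda x: x >= 0, "s1", "a")],
}

# A test is an input sequence; it kills the mutant if the two output
# sequences differ.
test = [-1, 5, 5]
print(run(spec, "s0", test))    # ['a', 'a', 'b']
print(run(mutant, "s0", test))  # ['a', 'a', 'a']  -> mutant is killed
```

A complete test suite in the sense of the paper would kill every mutant in the fault domain; here, a single sequence suffices because the sketch contains only one mutant.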


Extended FSM · Conformance testing · Mutation testing · Fault model-based test generation · Constraint solving


Funding information

This work is supported in part by GM, NSERC of Canada, and MESI (Ministère de l’Économie, Science et Innovation) of the Gouvernement du Québec.



Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. Computer Research Institute of Montreal (CRIM), Montreal, Canada
  2. GM Global R&D, Warren, USA
