A systematic review of state-based test tools

  • Muhammad Shafique
  • Yvan Labiche
Regular paper

Abstract

Model-based testing (MBT) is about testing a software system using a model of its behaviour. To benefit fully from MBT, automation support is required. The goal of this systematic review is to determine the current state of the art of prominent MBT tool support, focusing on tools that rely on state-based models. We automatically searched different sources of information, including digital libraries and mailing lists dedicated to the topic. Precisely defined criteria are used to compare the selected tools; they comprise support for test adequacy and coverage criteria, the level of automation of various testing activities, and support for the construction of test scaffolding. We find that simple adequacy criteria are supported but advanced ones are not, data(-flow) criteria are seldom supported, and support for creating test scaffolding varies a great deal. The results of this review should be of interest to a wide range of stakeholders: software companies interested in selecting the most appropriate MBT tool for their needs, organizations willing to invest in creating MBT tool support, and researchers interested in setting research directions.
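
To illustrate the kind of automation such state-based MBT tools provide, the following minimal sketch (not taken from the paper or from any surveyed tool; the state machine, events, and function names are hypothetical) derives abstract test sequences that satisfy the all-transitions adequacy criterion from a small finite state machine:

# Minimal, illustrative sketch of state-based model-based testing.
# The model, states, and events below are hypothetical examples only.
from collections import deque

# Behavioural model of a system under test: a finite state machine whose
# transitions map (source_state, event) to a target state.
TRANSITIONS = {
    ("Idle", "insert_card"): "CardInserted",
    ("CardInserted", "enter_pin"): "Authenticated",
    ("CardInserted", "eject_card"): "Idle",
    ("Authenticated", "withdraw"): "Dispensing",
    ("Dispensing", "take_cash"): "Idle",
}

def shortest_path(start, goal):
    """Shortest event sequence from `start` to `goal` in the model, or None."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        for (src, event), dst in TRANSITIONS.items():
            if src == state and dst not in seen:
                seen.add(dst)
                queue.append((dst, path + [event]))
    return None

def all_transitions_tests(initial="Idle"):
    """Derive abstract test sequences covering every transition at least once
    (the 'all-transitions' adequacy criterion): reach each transition's source
    state from the initial state, then fire the transition's event."""
    tests = []
    for (src, event), dst in TRANSITIONS.items():
        prefix = shortest_path(initial, src)
        if prefix is not None:
            tests.append(prefix + [event])
    return tests

if __name__ == "__main__":
    for i, test in enumerate(all_transitions_tests(), 1):
        print(f"Test {i}: {' -> '.join(test)}")

The abstract event sequences printed here would still need to be concretized into executable test cases (drivers, stubs, oracles); that concretization is the "test scaffolding" support whose availability the review compares across tools.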

Keywords

Model-based testing · State-based testing · Comparison criteria · Systematic review

Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  1. Computer Systems Group, School of Computer Science, University of Waterloo, Waterloo, Canada
  2. Software Quality Engineering Lab, Systems and Computer Engineering, Carleton University, Ottawa, Canada
