Model Based Testing with Labelled Transition Systems

  • Jan Tretmans
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4949)

Abstract

Model-based testing is one of the promising technologies for meeting the challenges imposed on software testing. In model-based testing, an implementation under test is tested for compliance with a model that describes the required behaviour of the implementation. This tutorial chapter describes a model-based testing theory in which models are expressed as labelled transition systems and compliance is defined by the ‘ioco’ implementation relation. The ioco-testing theory, on the one hand, provides a sound and well-defined foundation for labelled transition system testing, having its roots in the theoretical area of testing equivalences and refusal testing. On the other hand, it has proved to be a practical basis for several model-based test generation tools and applications. Definitions, underlying assumptions, an algorithm, properties, and several examples of the ioco-testing theory are discussed, involving specifications, implementations, tests, the ioco implementation relation and some of its variants, a test generation algorithm, and the soundness and exhaustiveness of this algorithm.
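To make the central notion concrete, the standard ioco definition from this line of work (a sketch for orientation, not quoted verbatim from the chapter) states that an implementation i, modelled as an input-output transition system, conforms to a specification s exactly when, after any suspension trace of the specification, the implementation produces only outputs that the specification allows:

\[
i \mathrel{\textbf{ioco}} s \;\iff\; \forall \sigma \in \mathit{Straces}(s):\ \mathit{out}(i \mathbin{\textbf{after}} \sigma) \subseteq \mathit{out}(s \mathbin{\textbf{after}} \sigma)
\]

Here Straces(s) is the set of suspension traces of s (traces over the observable actions extended with the quiescence label δ), and out(p after σ) is the set of outputs, including δ, that p can exhibit after σ. The asymmetry of the subset inclusion is what allows implementations to be underspecified on inputs while still being checked against every specified output behaviour, including the absence of output.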



Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Jan Tretmans
    Embedded Systems Institute, Eindhoven, and Radboud University, Nijmegen, The Netherlands
