
Heuristics for ioco-Based Test-Based Modelling

(Extended Abstract)
  • Tim A. C. Willemse
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4346)

Abstract

Model-based conformance testing is a mathematically sound technique for assessing the quality of systems and checking a system's correctness with respect to a model. Most systems, however, are built or modified without documenting the (new) specifications, which limits the use of model-based testing techniques. In this paper, we describe a method for automatically obtaining models from an existing system, using model-based testing techniques that rely on ioco-based testing. Such models are useful, e.g., for regression testing or for testing different configurations of a system. We illustrate the effectiveness of our approach with a case study in which we test mutants of a system against models that have been automatically extracted from the (correct) system.
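The ioco relation underlying the abstract's approach compares, after each trace allowed by the specification, the outputs the implementation can produce with those the model permits (with quiescence, the absence of output, treated as an observable). The sketch below is an illustrative, assumption-laden rendering of that idea, not the paper's algorithm: it performs a bounded check on small labelled transition systems encoded as adjacency dictionaries, models quiescence as the pseudo-output `delta`, and omits δ-labelled transitions in traces for brevity. All function names and the LTS encoding are invented for this example.

```python
# Illustrative bounded ioco check on small labelled transition systems.
# An LTS is a dict: state -> list of (label, successor) pairs.

DELTA = "delta"  # quiescence: no output is enabled in a state

def step(lts, states, label):
    """All states reachable from `states` by one `label`-transition."""
    return {t for s in states for (a, t) in lts.get(s, []) if a == label}

def outs(lts, states, outputs):
    """Outputs (plus quiescence) enabled in any of `states`."""
    enabled = {a for s in states for (a, _) in lts.get(s, []) if a in outputs}
    if any(not any(a in outputs for (a, _) in lts.get(s, [])) for s in states):
        enabled.add(DELTA)  # at least one reached state is quiescent
    return enabled

def ioco(impl, spec, init_i, init_s, inputs, outputs, depth=6):
    """Check out(impl after sigma) <= out(spec after sigma) for all
    traces sigma of spec, explored up to length `depth`."""
    frontier = [({init_i}, {init_s})]
    for _ in range(depth):
        nxt = []
        for si, ss in frontier:
            if not si or not ss:
                continue
            if not outs(impl, si, outputs) <= outs(spec, ss, outputs):
                return False  # impl produces an output the spec forbids
            for a in inputs | outputs:
                ti, ts = step(impl, si, a), step(spec, ss, a)
                if ts:  # follow only traces the specification can do
                    nxt.append((ti, ts))
        frontier = nxt
    return True
```

For instance, against a coffee-machine specification `?coin` → `!coffee`, a conforming implementation passes, while a mutant that answers `!tea` is rejected at the first divergent output set, mirroring the mutant-testing experiment described in the abstract.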

Keywords

Regression testing · Labelled transition systems · Automaton learning · Model-checking techniques · Learning hypothesis
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.



Copyright information

© Springer Berlin Heidelberg 2007

Authors and Affiliations

  • Tim A. C. Willemse, Institute for Computing and Information Sciences (ICIS), Radboud University Nijmegen, The Netherlands
