Model Generation by Moderated Regular Extrapolation

  • Andreas Hagerer
  • Hardi Hungar
  • Oliver Niese
  • Bernhard Steffen
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2306)

Abstract

This paper introduces regular extrapolation, a technique that provides descriptions of systems or system aspects a posteriori in a largely automatic way. The descriptions take the form of models that make it possible to mechanically produce system tests, grade test suites, and monitor running systems. Regular extrapolation builds models from observations via techniques from machine learning and finite-automata theory; expert knowledge about the system also enters the model construction in a systematic way. The power of this approach is illustrated in the context of a test environment for telecommunication systems.
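
As a purely illustrative sketch of the observation-to-model step, the following Python fragment builds a prefix-tree acceptor (a simple finite-state model) from a set of observed event traces and then checks whether further traces are explained by it. This is an assumption made here for illustration: the event names and helper functions are invented, and the sketch does not reproduce the paper's moderated regular extrapolation, which additionally employs automata-learning techniques and systematically incorporated expert knowledge.

    # Hypothetical sketch (not the authors' algorithm): derive a finite-state
    # model, here a prefix-tree acceptor, from observed event traces.

    def build_prefix_tree(traces):
        """Return (transitions, accepting) for a prefix-tree acceptor.

        transitions maps (state, event) -> state; accepting is the set of
        states reached at the end of an observed trace. State 0 is initial.
        """
        transitions = {}
        accepting = set()
        next_state = 1
        for trace in traces:
            state = 0
            for event in trace:
                if (state, event) not in transitions:
                    transitions[(state, event)] = next_state
                    next_state += 1
                state = transitions[(state, event)]
            accepting.add(state)
        return transitions, accepting

    def accepts(transitions, accepting, trace):
        """Check whether a trace is explained by the constructed model."""
        state = 0
        for event in trace:
            if (state, event) not in transitions:
                return False
            state = transitions[(state, event)]
        return state in accepting

    # Invented example: observed call-handling traces of a telephony system.
    observed = [
        ["offhook", "dial", "connect", "onhook"],
        ["offhook", "dial", "busy", "onhook"],
        ["offhook", "onhook"],
    ]
    model = build_prefix_tree(observed)
    print(accepts(*model, ["offhook", "dial", "connect", "onhook"]))  # True
    print(accepts(*model, ["dial", "offhook"]))                        # False

In the setting described in the abstract, such an observation-based model would subsequently be generalized and moderated by expert knowledge before being used to produce system tests, grade test suites, or monitor running systems.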

Keywords

Model Checker · Test Suite · Expert Knowledge · Public Switched Telephone Network · Test Coordinator

Copyright information

© Springer-Verlag Berlin Heidelberg 2002

Authors and Affiliations

  • Andreas Hagerer (1)
  • Hardi Hungar (1)
  • Oliver Niese (2)
  • Bernhard Steffen (2)
  1. METAFrame Technologies GmbH, Dortmund, Germany
  2. Chair of Programming Systems, University of Dortmund, Germany