LearnLib: a framework for extrapolating behavioral models

  • Harald Raffelt
  • Bernhard Steffen
  • Therese Berg
  • Tiziana Margaria
Regular Paper


In this paper, we present the LearnLib, a library of tools for automata learning, which is explicitly designed for the systematic experimental analysis of the profile of available learning algorithms and corresponding optimizations. Its modular structure allows users to configure their own tailored learning scenarios, which exploit specific properties of their envisioned applications. As has been shown earlier, exploiting application-specific structural features enables optimizations that may lead to performance gains of several orders of magnitude, a necessary precondition to make automata learning applicable to realistic scenarios.
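The learning algorithms the abstract refers to are membership/equivalence-query learners in the tradition of Angluin's L*. As a purely illustrative aid (this is not LearnLib's actual Java API; all names here are made up for the sketch), a minimal self-contained L* learner for DFAs can look like this, with the equivalence oracle approximated by a bounded exhaustive check:

```python
# Minimal sketch of an L*-style learner (the algorithm family LearnLib
# builds on). Membership queries come from `member`; the equivalence
# oracle is approximated by checking all words up to `max_cex_len`.
# Names are illustrative, not LearnLib's API.
from itertools import product

def lstar(alphabet, member, max_cex_len=6):
    """Learn a DFA for the target language decided by `member`."""
    S, E = [""], [""]          # access prefixes (states), distinguishing suffixes
    T = {}                     # observation table: T[w] = member(w)

    def row(s):
        return tuple(T[s + e] for e in E)

    def fill():
        # Fill the table for S and its one-letter extensions.
        for s in S + [s + a for s in S for a in alphabet]:
            for e in E:
                w = s + e
                if w not in T:
                    T[w] = member(w)

    def close():
        # Make the table closed: every one-letter extension's row
        # must already appear as the row of some prefix in S.
        while True:
            fill()
            rows = {row(s) for s in S}
            extended = False
            for s in list(S):
                for a in alphabet:
                    if row(s + a) not in rows:
                        S.append(s + a)
                        extended = True
                        break
                if extended:
                    break
            if not extended:
                return

    def hypothesis():
        # Distinct rows become states; transitions follow the table.
        states = {}
        for s in S:
            states.setdefault(row(s), len(states))
        delta = {(states[row(s)], a): states[row(s + a)]
                 for s in S for a in alphabet}
        accept = {states[row(s)] for s in S if T[s]}
        return states[row("")], delta, accept

    def run(dfa, w):
        q0, delta, accept = dfa
        q = q0
        for a in w:
            q = delta[(q, a)]
        return q in accept

    while True:
        close()
        dfa = hypothesis()
        # Naive equivalence oracle: compare hypothesis and target
        # on every word up to the length bound.
        cex = next((w for n in range(max_cex_len + 1)
                    for w in map("".join, product(alphabet, repeat=n))
                    if run(dfa, w) != member(w)), None)
        if cex is None:
            return dfa, run
        # Add all suffixes of the counterexample as distinguishers
        # (Maler-Pnueli-style counterexample handling).
        for i in range(len(cex)):
            if cex[i:] not in E:
                E.append(cex[i:])

# Example: learn "even number of a's" over {a, b}
dfa, run = lstar("ab", lambda w: w.count("a") % 2 == 0)
```

The point the abstract makes is precisely about the two oracle roles: the membership oracle is where application-specific filters and caches can be plugged in, and the equivalence oracle is where conformance-testing strategies replace the naive exhaustive check used in this sketch.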


Keywords: Automata learning · Domain-specific optimization · Experimentation · Software library · Grammar inference


Copyright information

© Springer-Verlag 2009

Authors and Affiliations

  • Harald Raffelt (1)
  • Bernhard Steffen (1)
  • Therese Berg (2)
  • Tiziana Margaria (3)
  1. Chair of Programming Systems, TU Dortmund, Dortmund, Germany
  2. Department of Information Technology, Uppsala University, Uppsala, Sweden
  3. Chair of Services and Software Engineering, Universität Potsdam, Potsdam, Germany