Fully Automatic Testing with Functions as Specifications

  • Pieter Koopman
  • Rinus Plasmeijer
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4164)


Although computer systems penetrate all facets of society, the software running those systems may contain many errors. Producing high-quality software appears to be difficult and very expensive, and even determining the quality of software is not easy. Testing is by far the most widely used way to estimate software quality, but testing itself is difficult and time-consuming.

To reduce its cost and increase its quality and speed, testing itself should be automated. An automatic specification-based test tool generates test data, executes the associated tests, and gives a fully automatic verdict based on a formal specification. The advantages of this approach are that one specifies properties instead of instances of those properties, test data are derived automatically rather than manually, the tests performed are always up to date with the current specification, and automatic testing is fast and accurate.

We show that functions in a functional programming language are well suited to model properties. One branch of the automatic test system Gast handles logical properties that relate the arguments and result of a single function call. The other branch of Gast handles specifications of systems with an internal state.
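For the second branch, a specification of a stateful system can itself be an ordinary function from a state and an input to a next state and an output. The sketch below (hypothetical names, not Gast's actual interface) checks an implementation against such a specification by replaying generated input sequences on both and comparing outputs:

```python
# Sketch of conformance testing for a system with internal state:
# the specification maps (state, input) to (next state, output).
import itertools

def spec_counter(state, inp):
    """Specification of a bounded counter: 'inc' adds 1 up to 3, 'reset' clears."""
    new = 0 if inp == "reset" else min(state + 1, 3)
    return new, new  # output equals the new counter value

class CounterImpl:
    """Implementation under test (here: a conforming one)."""
    def __init__(self):
        self.n = 0
    def step(self, inp):
        self.n = 0 if inp == "reset" else min(self.n + 1, 3)
        return self.n

def conforms(spec, make_impl, inputs=("inc", "reset"), max_len=5):
    """Replay every input sequence up to max_len on spec and implementation."""
    for n in range(1, max_len + 1):
        for seq in itertools.product(inputs, repeat=n):
            state, impl = 0, make_impl()
            for inp in seq:
                state, expected = spec(state, inp)
                if impl.step(inp) != expected:
                    return False, seq  # counterexample input sequence
    return True, None

print(conforms(spec_counter, CounterImpl))
```

When the implementation deviates from the specification, the first input sequence on which the outputs differ is reported as a counterexample.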


Keywords: State Machine · Model Checker · Search Tree · Input Sequence · Priority Queue





Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Pieter Koopman¹
  • Rinus Plasmeijer¹

  1. Institute for Computing and Information Science, Radboud University Nijmegen, The Netherlands
