Abstract

Conformiq Qtronic is a commercial tool for model-driven testing. It derives tests automatically from behavioral system models. These are black-box tests [1] by nature: they depend on the model and on the interfaces of the system under test, but not on the internal structure (e.g. source code) of the implementation.

In this essay, which accompanies my invited talk, I survey the nature of Conformiq Qtronic, the main implementation challenges that we have encountered, and how we have approached them.
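To make the black-box claim above concrete, here is a minimal, hypothetical sketch in Python of deriving tests from a behavioral model. This is not Qtronic's actual algorithm; the toy model, the `derive_tests` function, and the heavily simplified SIP-flavored messages (loosely echoing [2]) are invented for illustration. The point is that every generated test step refers only to the model and to the stimuli and responses visible at the system's interface, never to the implementation's source code.

```python
# A minimal, hypothetical sketch of black-box test derivation from a
# behavioral model (not Conformiq Qtronic's actual algorithm).
from collections import deque

# Toy behavioral model: state -> {input: (next_state, expected_output)}.
# The message names are invented, loosely SIP-flavored examples.
MODEL = {
    "idle":      {"INVITE": ("ringing", "180 Ringing")},
    "ringing":   {"ACK": ("connected", "200 OK")},
    "connected": {"BYE": ("idle", "200 OK")},
}

def derive_tests(model, initial="idle"):
    """Breadth-first walk of the model; each explored path becomes a test
    case, i.e. a sequence of (input, expected_output) steps that acts as a
    black-box oracle for any implementation of the same interface."""
    tests, queue, seen = [], deque([(initial, [])]), {initial}
    while queue:
        state, path = queue.popleft()
        for stimulus, (nxt, expected) in model.get(state, {}).items():
            step = path + [(stimulus, expected)]
            tests.append(step)
            if nxt not in seen:          # extend only into unvisited states
                seen.add(nxt)
                queue.append((nxt, step))
    return tests

for case in derive_tests(MODEL):
    print(" -> ".join(f"{i}/{o}" for i, o in case))
```

Each path through the model becomes one test case; executing it against a system under test requires only the interface the model describes, never the implementation's internals.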

Keywords

Model Checking, Model Transformation, Session Initiation Protocol, Symbolic Execution, Constraint Solver

References

  1. Craig, R.D., Jaskiel, S.P.: Systematic Software Testing. Artech House Publishers (2002)
  2. Rosenberg, J., Schulzrinne, H., Camarillo, G., Johnston, A., Peterson, J., Sparks, R., Handley, M., Schooler, E.: SIP: Session Initiation Protocol. Request for Comments 3261, The Internet Society (2002)
  3. Postel, J.: User Datagram Protocol. Request for Comments 768, The Internet Society (1980)
  4. Abelson, H., Dybvig, R.K., Haynes, C.T., Rozas, G.J., Adams IV, N.I., Friedman, D.P., Kohlbecker, E., Steele Jr., G.L., Bartley, D.H., Halstead, R., Oxley, D., Sussman, G.J., Brooks, G., Hanson, C., Pitman, K.M., Wand, M.: Revised⁵ report on the algorithmic language Scheme. Higher Order Symbol. Comput. 11(1), 7–105 (1998)
  5. Gunter, C.A.: Semantics of Programming Languages. MIT Press, Cambridge (1992)
  6. Plotkin, G.D.: A Structural Approach to Operational Semantics. Technical Report DAIMI FN-19, University of Aarhus (1981)
  7. Huima, A. (ed.): CQλ specification. Technical report, Conformiq Software (2003). Available upon request
  8. Clarke, E.M., Grumberg, O., Long, D.E.: Model checking and abstraction. ACM Trans. Program. Lang. Syst. 16(5), 1512–1542 (1994)
  9. Ammann, P., Black, P.: Abstracting formal specifications to generate software tests via model checking. In: DASC 1999: Proceedings of the 18th Digital Avionics Systems Conference, vol. 2. IEEE, New York (1999)
  10. Reps, T., Turnidge, T.: Program specialization via program slicing. In: Danvy, O., Glück, R., Thiemann, P. (eds.) Proceedings of the Dagstuhl Seminar on Partial Evaluation, Schloss Dagstuhl, Wadern, Germany, pp. 409–429. Springer, New York (1996)
  11. Weiser, M.: Program slicing. In: ICSE 1981: Proceedings of the 5th International Conference on Software Engineering, pp. 439–449. IEEE Press, Piscataway (1981)
  12. Clarke Jr., E.M., Grumberg, O., Peled, D.A.: Model Checking. MIT Press, Cambridge (2000)
  13. Luo, G., Petrenko, A., von Bochmann, G.: Selecting test sequences for partially-specified nondeterministic finite state machines. Technical Report IRO-864 (1993)
  14. Lee, D., Yannakakis, M.: Principles and methods of testing finite state machines: a survey. Proceedings of the IEEE 84, 1090–1126 (1996)
  15. Pyhälä, T., Heljanko, K.: Specification coverage aided test selection. In: Lilius, J., Balarin, F., Machado, R.J. (eds.) ACSD 2003: Proceedings of the 3rd International Conference on Application of Concurrency to System Design, Guimarães, Portugal, pp. 187–195. IEEE Computer Society, Washington (2003)
  16. Tretmans, J.: A formal approach to conformance testing. In: Proceedings of the 6th International Workshop on Protocol Test Systems. Number C-19 in IFIP Transactions, pp. 257–276 (1994)
  17. Luo, G., von Bochmann, G., Petrenko, A.: Test selection based on communicating nondeterministic finite state machines using a generalized Wp-method. IEEE Transactions on Software Engineering 20(2), 149–162 (1994)
  18. Feijs, L., Goga, N., Mauw, S.: Probabilities in the TorX test derivation algorithm. In: Proceedings of SAM 2000. SDL Forum Society (2000)
  19. Petrenko, A., Yevtushenko, N., Huo, J.L.: Testing transition systems with input and output testers. In: Hogrefe, D., Wiles, A. (eds.) TestCom 2003. LNCS, vol. 2644. Springer, Heidelberg (2003)
  20. Tretmans, J.: Test generation with inputs, outputs and repetitive quiescence. Software—Concepts and Tools 17(3), 103–120 (1996)
  21. Veanes, M., Campbell, C., Schulte, W., Tillmann, N.: Online testing with model programs. In: ESEC/FSE-13: Proceedings of the 10th European Software Engineering Conference held jointly with the 13th ACM SIGSOFT International Symposium on Foundations of Software Engineering, pp. 273–282. ACM Press, New York (2005)
  22. Object Management Group: Unified Modeling Language: Superstructure. Technical Report formal/2007-02-05 (2007)
  23. Selic, B.: UML 2: a model-driven development tool. IBM Syst. J. 45(3), 607–620 (2006)
  24. Gosling, J., Joy, B., Steele, G., Bracha, G.: The Java Language Specification, 3rd edn. Prentice-Hall, Englewood Cliffs (2005)
  25. Michaelis, M.: Essential C# 2.0. Addison-Wesley, London (2006)
  26. Conformiq Software: Conformiq Qtronic User Manual. Publicly available as part of the product download (2007)
  27. Milner, R.: A theory of type polymorphism in programming. Journal of Computer and System Sciences 17(3), 348–375 (1978)
  28. Nielson, F., Nielson, H.R., Hankin, C.: Principles of Program Analysis. Springer, Heidelberg (1999)
  29. Pierce, B.C.: Types and Programming Languages. MIT Press, Cambridge (2002)
  30. Budd, T.A., DeMillo, R.A., Lipton, R.J., Sayward, F.G.: Theoretical and empirical studies on using program mutation to test the functional correctness of programs. In: POPL 1980: Proceedings of the 7th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, pp. 220–233. ACM Press, New York (1980)
  31. Offutt, A.J., Lee, S.: An empirical evaluation of weak mutation. IEEE Transactions on Software Engineering 20(5), 337–344 (1994)
  32. Zhu, H., Hall, P., May, J.: Software unit test coverage and adequacy. ACM Computing Surveys 29(4), 366–427 (1997)
  33. Larsen, K.G., Mikucionis, M., Nielsen, B.: Online testing of real-time systems using UPPAAL. In: Grabowski, J., Nielsen, B. (eds.) FATES 2004. LNCS, vol. 3395, pp. 79–94. Springer, Heidelberg (2005)
  34. Bohnenkamp, H., Belinfante, A.: Timed testing with TorX. In: Fitzgerald, J.A., Hayes, I.J., Tarlecki, A. (eds.) FM 2005. LNCS, vol. 3582, pp. 173–188. Springer, Heidelberg (2005)
  35. Briones, L., Brinksma, E.: Testing real-time multi input-output systems. In: Lau, K.-K., Banach, R. (eds.) ICFEM 2005. LNCS, vol. 3785, pp. 264–279. Springer, Heidelberg (2005)
  36. Steele Jr., G.L., Gabriel, R.P.: The evolution of Lisp. ACM SIGPLAN Notices 28(3), 231–270 (1993)
  37. Object Management Group: Meta Object Facility (MOF) Core Specification. Technical Report formal/06-01-01 (2006)
  38. Budinsky, F., Steinberg, D., Merks, E., Ellersick, R., Grose, T.J.: Eclipse Modeling Framework, 1st edn. Addison-Wesley, London (2003)
  39. Object Management Group: MOF 2.0/XMI Mapping Specification. Technical Report formal/05-09-01 (2005)
  40. Mellor, S.J., Scott, K., Uhl, A., Weise, D.: MDA Distilled. Addison-Wesley, London (2004)
  41. Kleppe, A., Warmer, J., Bast, W.: MDA Explained. Addison-Wesley, London (2003)
  42. Aho, A.V., Johnson, S.C., Ullman, J.D.: Deterministic parsing of ambiguous grammars. Commun. ACM 18(8), 441–452 (1975)
  43. Aycock, J., Horspool, R.N.: Faster generalized LR parsing. In: Jähnichen, S. (ed.) CC 1999 and ETAPS 1999. LNCS, vol. 1575, pp. 32–46. Springer, Heidelberg (1999)
  44. Rozenberg, G. (ed.): Handbook of Graph Grammars and Computing by Graph Transformation, vol. 1. World Scientific, Singapore (1997)
  45. Engelfriet, J., Rozenberg, G.: Node replacement graph grammars. In: [44], pp. 1–94
  46. Drewes, F., Kreowski, H.J., Habel, A.: Hyperedge replacement graph grammars. In: [44], pp. 95–162
  47. Nupponen, K.: The design and implementation of a graph rewrite engine for model transformations. Master's thesis, Helsinki University of Technology (2005)
  48. Vainikainen, T.: Applying graph rewriting to model transformations. Master's thesis, Helsinki University of Technology (2005)
  49. Blom, J., Hessel, A., Jonsson, B., Pettersson, P.: Specifying and generating test cases using observer automata. In: Grabowski, J., Nielsen, B. (eds.) FATES 2004. LNCS, vol. 3395, pp. 125–139. Springer, Heidelberg (2005)
  50. Gotlieb, A., Botella, B., Rueher, M.: Automatic test data generation using constraint solving techniques. In: ISSTA 1998: Proceedings of the 1998 ACM SIGSOFT International Symposium on Software Testing and Analysis, pp. 53–62. ACM Press, New York (1998)
  51. Khurshid, S., Pasareanu, C.S.: Generalized symbolic execution for model checking and testing. In: Garavel, H., Hatcliff, J. (eds.) ETAPS 2003 and TACAS 2003. LNCS, vol. 2619, pp. 553–568. Springer, Heidelberg (2003)
  52. Lee, G., Morris, J., Parker, K., Bundell, G.A., Lam, P.: Using symbolic execution to guide test generation. Softw. Test. Verif. Reliab. 15(1), 41–61 (2005)
  53. Dechter, R.: Constraint Processing. Morgan Kaufmann Publishers, San Francisco (2003)
  54. Apt, K.R.: Principles of Constraint Programming. Cambridge University Press, Cambridge (2003)
  55. The Unicode Consortium: The Unicode Standard, Version 5.0, 5th edn. Addison-Wesley Professional (2006)
  56. Jones, R., Lins, R.D.: Garbage Collection: Algorithms for Automatic Dynamic Memory Management. Wiley, Chichester (1996)
  57. Dechter, R.: Bucket elimination: a unifying framework for reasoning. Artificial Intelligence 113(1–2), 41–85 (1999)
  58. Cunha, J.C., Rana, O.F. (eds.): Grid Computing: Software Environments and Tools, 1st edn. Springer, Heidelberg (2005)
  59. Cousot, P.: Abstract interpretation. ACM Computing Surveys 28(2), 324–328 (1996)
  60. Bozga, M., Fernandez, J.C., Ghirvu, L.: Using static analysis to improve automatic test generation. In: Schwartzbach, M.I., Graf, S. (eds.) ETAPS 2000 and TACAS 2000. LNCS, vol. 1785, pp. 235–250. Springer, Heidelberg (2000)

Copyright information

© IFIP International Federation for Information Processing 2007

Authors and Affiliations

Antti Huima, Conformiq Software Ltd, Finland
