A Semantic Framework for Test Coverage

  • Laura Brandán Briones
  • Ed Brinksma
  • Mariëlle Stoelinga
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4218)

Abstract

Since testing is inherently incomplete, test selection is of vital importance. Coverage measures evaluate the quality of a test suite and help the tester select test cases with maximal impact at minimum cost. Existing coverage criteria for test suites are usually defined in terms of syntactic characteristics of the implementation under test or its specification. Typical black-box coverage metrics are state and transition coverage of the specification; white-box testing often considers statement, condition and path coverage. A disadvantage of this syntactic approach is that it assigns different coverage figures to systems that are behaviorally equivalent, but syntactically different. Moreover, these coverage metrics do not take into account that certain failures are more severe than others, and that more testing effort should be devoted to uncovering the most important bugs, while less critical system parts can be tested less thoroughly.
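The syntactic sensitivity described above can be made concrete with a toy example. The sketch below (a hypothetical illustration, not taken from the paper) models a specification as a transition table and shows that two behaviorally equivalent specifications, one of which merely unrolls a loop into an extra state, receive different transition-coverage figures from the same test suite:

```python
# A specification is a map (state, action) -> next state; state 0 is initial.
spec_minimal = {
    (0, "a"): 1,
    (1, "b"): 0,            # accepts exactly the prefixes of (ab)*
}
spec_unrolled = {           # the loop unrolled: one extra state, same behavior
    (0, "a"): 1,
    (1, "b"): 2,
    (2, "a"): 1,            # also accepts exactly the prefixes of (ab)*
}

def transition_coverage(spec, tests):
    """Fraction of the spec's transitions exercised by the test suite."""
    covered = set()
    for test in tests:
        state = 0
        for action in test:
            covered.add((state, action))
            state = spec[(state, action)]
    return len(covered) / len(spec)

suite = ["ab"]              # a single test executing the trace a.b
print(transition_coverage(spec_minimal, suite))   # 1.0  (2 of 2 transitions)
print(transition_coverage(spec_unrolled, suite))  # ~0.67 (2 of 3 transitions)
```

The same suite thus scores 100% against one specification and about 67% against an equivalent one, which is exactly the anomaly a semantic coverage notion avoids.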

This paper introduces a semantic approach to black-box test coverage. Our starting point is a weighted fault model (WFM), which augments a specification by assigning a weight to each error that may occur in an implementation. We define a framework of coverage measures that quantify how well a test suite covers such a specification, taking the error weights into account. Since our notions are semantic, they are insensitive to replacing a specification by one with equivalent behaviour. We present several algorithms that, given a certain minimality criterion, compute a minimal test suite with maximal coverage. These algorithms work on a syntactic representation of WFMs as fault automata and are based on existing and novel optimization problems. Finally, we illustrate our approach by analyzing and comparing a number of test suites for a chat protocol.
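The core idea of weighting errors can be sketched as follows. This is an illustrative simplification, not the paper's formal definitions: the fault names and the normalised coverage measure below are invented for the example, which only shows how weights let a small suite detecting one severe fault outscore a larger suite detecting several minor ones:

```python
# Hypothetical faults an implementation might exhibit, weighted by severity.
fault_weights = {
    "login_timeout_ignored": 10.0,   # severe: protocol violation
    "wrong_ack_number":       5.0,   # moderate
    "typo_in_banner":         0.5,   # cosmetic
}

def relative_coverage(detected, weights):
    """Total weight of the faults a suite can detect, normalised by the
    total weight of all faults in the model."""
    total = sum(weights.values())
    return sum(weights[f] for f in detected if f in weights) / total

suite_a = {"login_timeout_ignored"}               # detects one severe fault
suite_b = {"wrong_ack_number", "typo_in_banner"}  # detects two milder faults

print(relative_coverage(suite_a, fault_weights))  # ~0.65
print(relative_coverage(suite_b, fault_weights))  # ~0.35
```

Under an unweighted count, suite_b (two faults) would beat suite_a (one fault); the weights reverse that ranking, directing testing effort toward the most severe failures.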



Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Laura Brandán Briones (1)
  • Ed Brinksma (1, 2)
  • Mariëlle Stoelinga (1)

  1. Faculty of Computer Science, University of Twente, The Netherlands
  2. Embedded Systems Institute, The Netherlands