A Semantic Framework for Test Coverage
Since testing is inherently incomplete, test selection is of vital importance. Coverage measures evaluate the quality of a test suite and help the tester select test cases with maximal impact at minimum cost. Existing coverage criteria for test suites are usually defined in terms of syntactic characteristics of the implementation under test or of its specification. Typical black-box coverage metrics are state and transition coverage of the specification; white-box testing often considers statement, condition, and path coverage. A disadvantage of this syntactic approach is that it assigns different coverage figures to systems that are behaviorally equivalent but syntactically different. Moreover, such coverage metrics do not take into account that some failures are more severe than others, so that more testing effort should be devoted to uncovering the most important bugs, while less critical system parts can be tested less thoroughly.
This paper introduces a semantic approach to black-box test coverage. Our starting point is a weighted fault model (WFM), which augments a specification by assigning a weight to each error that may occur in an implementation. We define a framework of coverage measures that quantify how well a test suite covers such a specification, taking the error weights into account. Since our notions are semantic, they are insensitive to replacing a specification by one with equivalent behavior. We present several algorithms that, given a certain minimality criterion, compute a minimal test suite with maximal coverage. These algorithms work on a syntactic representation of WFMs as fault automata and are based on existing and novel optimization problems. Finally, we illustrate our approach by analyzing and comparing a number of test suites for a chat protocol.
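The core idea can be sketched in a few lines. The following is an illustrative sketch only, not the paper's formal definition: a weighted fault model is modeled here as a plain mapping from potential faults to weights, and a test suite's relative coverage is the total weight of the faults it can detect divided by the total weight of all faults. The fault names and the function `relative_coverage` are hypothetical illustrations.

```python
def relative_coverage(fault_weights, detected_faults):
    """Fraction of total error weight covered by a test suite.

    fault_weights: dict mapping each potential fault to its weight.
    detected_faults: set of faults the test suite can uncover.
    """
    total = sum(fault_weights.values())
    covered = sum(w for f, w in fault_weights.items() if f in detected_faults)
    return covered / total if total else 1.0


# Weights reflect failure severity: losing a message is deemed worse
# than a late reply, so tests that catch it contribute more coverage.
weights = {"lost_message": 8.0, "wrong_ack": 4.0, "late_reply": 1.0}
print(relative_coverage(weights, {"lost_message", "wrong_ack"}))
```

Because the measure is computed from the faults themselves rather than from the syntax of the specification, two behaviorally equivalent but syntactically different specifications induce the same fault weights and hence the same coverage figure.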