Empirical Software Engineering, Volume 13, Issue 4, pp 343–368

Analysing the effectiveness of rule-coverage as a reduction criterion for test suites of grammar-based software


Abstract

The term grammar-based software describes software whose input can be specified by a context-free grammar. This grammar may occur explicitly in the software, as an input specification to a parser generator, or implicitly, in the form of a hand-written parser. Grammar-based software includes not only programming-language compilers but also tools for program analysis, reverse engineering, software metrics and documentation generation; ensuring its completeness and correctness is therefore a vital prerequisite for its use. In this paper we propose a strategy for constructing test suites for grammar-based software, and illustrate this strategy using the ISO C++ grammar. We use grammar-rule coverage as the pivot for reducing an implementation-based test suite, and demonstrate a significant decrease in the size of that suite. The effectiveness of the reduced test suite is compared to that of the original with respect to code coverage and, more importantly, fault detection. This work greatly expands upon previous work in this area and uses large-scale mutation testing to compare the effectiveness of grammar-rule coverage with that of statement coverage as a reduction criterion for test suites of grammar-based software. We find that when grammar-rule coverage is used as the sole criterion for reducing test suites of grammar-based software, the fault-detection capability of the reduced suite is greatly diminished compared to reduction by other coverage criteria such as statement coverage.
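The reduction strategy described above can be sketched as a greedy set-cover heuristic: keep adding the test case that exercises the most still-uncovered grammar rules until every rule covered by the original suite is covered by the reduced one. The following is a minimal illustrative sketch, not the paper's actual implementation; it assumes each test case has already been mapped to the set of grammar rules it exercises (e.g. via parser instrumentation), and all names are hypothetical.

```python
def reduce_by_rule_coverage(coverage):
    """Greedily select test cases until every grammar rule covered by
    the original suite is covered by the reduced suite.

    coverage: dict mapping test-case name -> set of grammar-rule ids.
    Returns the names of the retained test cases, in selection order.
    """
    uncovered = set().union(*coverage.values())  # all rules hit by the suite
    remaining = dict(coverage)
    reduced = []
    while uncovered:
        # pick the test covering the most still-uncovered rules
        best = max(remaining, key=lambda t: len(remaining[t] & uncovered))
        gained = remaining.pop(best) & uncovered
        if not gained:
            break  # no remaining test adds coverage
        reduced.append(best)
        uncovered -= gained
    return reduced

# Toy suite: rule ids here are placeholders, not real ISO C++ rules.
suite = {
    "t1": {"expr", "stmt", "decl"},
    "t2": {"expr"},
    "t3": {"decl", "tmpl"},
}
print(reduce_by_rule_coverage(suite))  # ['t1', 't3']
```

Exact minimization is NP-complete (it is the set-cover problem, cf. Garey and Johnson), so a greedy approximation of this kind is the usual practical choice.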

Keywords

Software testing · Grammar-based software · Test suite reduction · Rule coverage · Mutation testing

Notes

Acknowledgements

This work has been funded under the Embark scheme administered by the Irish Research Council for Science, Engineering and Technology.


Copyright information

© Springer Science+Business Media, LLC 2008

Authors and Affiliations

Computer Science Department, National University of Ireland, Maynooth, Co. Kildare, Ireland
