
PBCOV: a property-based coverage criterion


Abstract

Coverage criteria define test requirements and compute metric values that quantify the adequacy of test suites at revealing defects in programs. Typically, a test requirement is a structural program element, and the coverage metric value is the percentage of elements covered by a test suite. Empirical studies show that existing criteria may characterize a test suite as highly adequate even when it fails to reveal some of the existing defects; in other words, existing structural coverage criteria are not always sensitive to the presence of defects. This paper presents PBCOV, a Property-Based COVerage criterion, and empirically demonstrates its effectiveness. Given a program annotated with formal properties, static analysis techniques such as model checking leverage those properties to find defects. PBCOV is a dynamic analysis technique that also leverages properties and is characterized by the following: (a) it takes the state space of first-order logic properties as the test requirements to be covered; (b) it uses logic synthesis to compute that state space; and (c) it is practical, i.e., computable, because it considers an over-approximation of the reachable state space obtained through a cut-based abstraction.

We evaluated PBCOV using programs with test suites comprising both passing and failing test cases. First, we computed metric values for PBCOV and for structural coverage using the full test suites. Second, to quantify the sensitivity of the metrics to the absence of failing test cases, we recomputed all considered metrics using only the passing test cases. In most cases, the structural metrics exhibited little or no decrease, while PBCOV decreased considerably. This suggests that PBCOV is more sensitive to the absence of failing test cases, i.e., more effective at characterizing the adequacy of a test suite to detect defects and at revealing deficiencies in test suites.
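To make this workflow concrete, the following minimal Python sketch (not the paper's implementation) computes a property-based coverage value for one property. The atomic predicates, the example property, and the test inputs are all hypothetical, and a trivially permissive `feasible` check stands in for the paper's over-approximation of the reachable state space, which is computed there with logic synthesis over a cut-based abstraction.

```python
from itertools import product

# Hypothetical property over three atomic predicates observed at one
# program point: p1 = "x > 0", p2 = "x < len(buf)", p3 = "buf is sorted".
# PBCOV treats the valuations of (p1, p2, p3) as the test requirements.

ATOMS = 3

def feasible(valuation):
    # Stand-in for the over-approximated reachable state space (derived
    # in the paper via logic synthesis and a cut-based abstraction);
    # this permissive version keeps every valuation.
    return True

# Test requirements: the over-approximated state space of the property.
requirements = {v for v in product((False, True), repeat=ATOMS) if feasible(v)}

def observe(x, buf):
    # Instrumentation probe: evaluate each atomic predicate at runtime.
    is_sorted = all(buf[i] <= buf[i + 1] for i in range(len(buf) - 1))
    return (x > 0, x < len(buf), is_sorted)

def pbcov(test_inputs):
    covered = {observe(x, buf) for x, buf in test_inputs}
    return len(covered & requirements) / len(requirements)

full_suite   = [(1, [1, 2, 3]), (5, [3, 1, 2]), (-1, [1]), (9, [2, 2])]
passing_only = [(1, [1, 2, 3]), (-1, [1])]

print(f"PBCOV, full suite:   {pbcov(full_suite):.2f}")   # 0.50
print(f"PBCOV, passing only: {pbcov(passing_only):.2f}")  # 0.25
```

Withholding the two test cases that drive the property into its less common valuations halves the reported coverage from 0.50 to 0.25; the split into "full" and "passing only" suites here is purely illustrative, but this is the kind of sensitivity the evaluation above measures when failing test cases are removed.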




Notes

  1. Please refer to Table 6 for a glossary of all the symbols used in this paper.

  2. Typically, MC/DC considers the coverage of a code decision predicate in terms of its clauses. Here, the property and its atomic predicates are presented to MC/DC as the decision and its clauses, respectively (a sketch follows these notes).
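As a minimal sketch of this adaptation, assuming a hypothetical property (a and b) or c and a hypothetical set of observed valuations, the Python below searches for MC/DC independence pairs: pairs of exercised valuations that differ in exactly one atomic predicate and flip the property's outcome.

```python
# Hypothetical property treated as an MC/DC decision; its atomic
# predicates a, b, c play the role of the decision's clauses.
def prop(a, b, c):
    return (a and b) or c

def independence_pairs(decision, n_clauses, observed):
    # For each clause, collect pairs of observed valuations that differ
    # only in that clause and flip the decision's outcome: the MC/DC
    # evidence that the clause independently affects the property.
    pairs = {i: [] for i in range(n_clauses)}
    for v in observed:
        for i in range(n_clauses):
            w = tuple(not x if j == i else x for j, x in enumerate(v))
            if w in observed and v < w and decision(*v) != decision(*w):
                pairs[i].append((v, w))
    return pairs

# Valuations of (a, b, c) exercised by a hypothetical test suite.
observed = {(True, True, False), (False, True, False),
            (True, False, False), (False, False, True)}

for i, evidence in independence_pairs(prop, 3, observed).items():
    print(f"clause {'abc'[i]}: {'covered' if evidence else 'NOT covered'}")
```

In this run, clauses a and b are covered but c is not: the suite never toggles c while holding the other predicates fixed at outcome-flipping values, so MC/DC over the property flags a deficiency in the test suite.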


Author information

Correspondence to Fadi Zaraket.

Additional information

This paper builds on a two-page position paper entitled "Property-based Coverage Criterion," presented at the DEFECTS Workshop, Chicago, IL, July 2009. This research was supported in part by NSF (Grant #0819987) and by the AUB University Research Board.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (PDF 323 kb)


About this article


Cite this article

Fawaz, K., Zaraket, F., Masri, W. et al. PBCOV: a property-based coverage criterion. Software Qual J 23, 171–202 (2015). https://doi.org/10.1007/s11219-014-9237-3

