
A Theory of Predicate-Complete Test Coverage and Generation

  • Thomas Ball
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3657)

Abstract

Consider a program with m statements and n predicates, where the predicates are derived from the conditional statements and assertions in the program. An observable state is an evaluation of the n predicates in some state at a program statement. The goal of predicate-complete testing (PCT) is to evaluate all the predicates at every program state; that is, we wish to cover every reachable observable state (at most m × 2^n of them) in a program. PCT coverage subsumes many existing control-flow coverage criteria and is incomparable to path coverage. To support the generation of tests that achieve high PCT coverage, we show how to define an upper bound U and a lower bound L on the (unknown) set of reachable observable states R. These bounds are constructed automatically using Boolean (predicate) abstraction over modal transition systems and can be used to guide test generation via symbolic execution. We define a static coverage metric as |L|/|U|, which measures the ability of the Boolean abstraction to achieve high PCT coverage.
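The m × 2^n counting argument can be made concrete with a small sketch. The toy program, predicate choices, statement labels, and test inputs below are hypothetical illustrations (not the paper's abstraction-based construction of U and L): each executed statement records the current evaluation of the n predicates, and the covered observable states are compared against the m × 2^n ceiling.

```python
# Illustrative sketch of observable-state coverage (hypothetical example).
# A program with m = 3 labeled statements and n = 2 predicates
# (x > 0 and x < 10) derived from its conditionals.

def program(x):
    """Run the toy program, recording (statement, predicate-vector) pairs."""
    trace = []
    trace.append(("s1", (x > 0, x < 10)))  # observable state before the branch
    if x > 0:
        x = x - 1
        trace.append(("s2", (x > 0, x < 10)))  # then-branch statement
    else:
        x = x + 1
        trace.append(("s3", (x > 0, x < 10)))  # else-branch statement
    return trace

m, n = 3, 2
ceiling = m * 2 ** n  # at most m * 2^n observable states exist

# Run a handful of tests and collect the observable states they cover.
covered = set()
for test_input in [-5, 0, 1, 5, 20]:
    covered.update(program(test_input))

print(f"covered {len(covered)} of at most {ceiling} observable states")
```

Not every one of the 12 combinations is reachable (e.g. s3 is only executed when x ≤ 0 held before it), which is exactly why the paper bounds the reachable set R between a computable lower bound L and upper bound U rather than using m × 2^n directly.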




Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Thomas Ball
    1. Microsoft Research, Redmond, USA
