
An Empirical Study of Structural Constraint Solving Techniques

  • Junaid Haroon Siddiqui
  • Sarfraz Khurshid
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5885)

Abstract

Structural constraint solving allows finding object graphs that satisfy given constraints, thereby enabling software reliability tasks such as systematic testing and error recovery. Since enumerating all possible object graphs is prohibitively expensive, researchers have proposed a number of techniques for reducing the number of potential object graphs to consider as candidate solutions. These techniques analyze the structural constraints to prune from the search object graphs that cannot satisfy the constraints. Although analytical and empirical evaluations of individual techniques have been done, comparative studies of different kinds of techniques are rare in the literature. We performed an experiment to evaluate the relative strengths and weaknesses of some key structural constraint solving techniques. The experiment considered four techniques, based respectively on a model checker, a SAT solver, a symbolic execution engine, and a specialized solver. It focused on their relative abilities in expressing the constraints and formatting the output object graphs and, most importantly, on their performance. Our results highlight the tradeoffs of the different techniques and can guide the choice of a technique for practical use.
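
To make the notion of a structural constraint concrete, the following is a minimal sketch of such a constraint written as an imperative Java predicate, the style accepted by specialized solvers such as Korat; the BinaryTree class and its repOk method here are illustrative and are not taken from the study. A solver systematically enumerates candidate object graphs within given bounds and retains exactly those graphs for which the predicate returns true.

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.HashSet;
    import java.util.Set;

    // Sketch of a structural constraint as an imperative Java predicate
    // (illustrative names; not code from the paper under discussion).
    public class BinaryTree {
        static class Node {
            Node left, right;
        }

        Node root;
        int size; // cached count of nodes reachable from root

        // repOk encodes the class invariant: the object graph rooted at
        // 'root' must be a tree (acyclic, with no node shared between
        // subtrees) and 'size' must equal the number of reachable nodes.
        // A structural constraint solver keeps exactly those candidate
        // object graphs for which this predicate returns true.
        boolean repOk() {
            if (root == null) {
                return size == 0;
            }
            Set<Node> visited = new HashSet<>();
            Deque<Node> worklist = new ArrayDeque<>();
            visited.add(root);
            worklist.push(root);
            while (!worklist.isEmpty()) {
                Node current = worklist.pop();
                for (Node child : new Node[] { current.left, current.right }) {
                    if (child != null) {
                        if (!visited.add(child)) {
                            return false; // cycle or shared node: not a tree
                        }
                        worklist.push(child);
                    }
                }
            }
            return visited.size() == size;
        }
    }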

Keywords

Empirical comparison, software testing tools, model checking, symbolic execution, SAT, state space exploration, systematic testing

Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Junaid Haroon Siddiqui (1)
  • Sarfraz Khurshid (1)
  1. The University of Texas at Austin, Austin
