Combining Model Checking and Testing


Abstract

Model checking and testing have much in common. Over the last two decades, significant progress has been made in broadening the scope of model checking from finite-state abstractions to actual software implementations. One way to do this is to adapt model checking into a form of systematic testing that is applicable to industrial-size software. This chapter presents an overview of this strand of software model checking.
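The systematic-testing view of model checking surveyed here is easiest to see in the concolic (dynamic symbolic execution) loop popularized by tools such as DART and CUTE: run the program on concrete inputs, record the path constraint symbolically, negate one branch condition, and solve for inputs that steer execution down the unexplored path. The sketch below is only an illustration of that loop under stated assumptions: the toy program, its hand-inserted instrumentation, and every helper name are hypothetical, and it is not the chapter's algorithm or any tool's real interface.

    import z3

    def program(x, y, trace):
        # Toy program under test (hypothetical). Each branch records the
        # symbolic condition together with the concrete outcome taken.
        sx, sy = z3.Ints('x y')      # symbolic counterparts of the inputs
        c1 = sx > 10
        trace.append((c1, x > 10))
        if x > 10:
            c2 = sy == 2 * sx
            trace.append((c2, y == 2 * x))
            if y == 2 * x:
                raise AssertionError("error state reached")

    def new_inputs(trace, depth):
        # Keep the first `depth` branch outcomes, negate branch `depth`,
        # and ask the solver for concrete inputs driving execution there.
        s = z3.Solver()
        for cond, taken in trace[:depth]:
            s.add(cond if taken else z3.Not(cond))
        cond, taken = trace[depth]
        s.add(z3.Not(cond) if taken else cond)
        if s.check() != z3.sat:
            return None
        m = s.model()
        val = lambda name: m.eval(z3.Int(name), model_completion=True).as_long()
        return val('x'), val('y')

    # Depth-first-style exploration: start from arbitrary inputs, then
    # repeatedly flip the last branch of the most recent execution.
    inputs, seen = (0, 0), set()
    while inputs is not None and inputs not in seen:
        seen.add(inputs)
        trace = []
        try:
            program(*inputs, trace)
        except AssertionError:
            print("bug found with inputs", inputs)
            break
        inputs = new_inputs(trace, len(trace) - 1)

Starting from arbitrary inputs such as (0, 0), this loop flips one branch per iteration and would typically reach the assertion failure within two or three runs; real tools replace the hand-written instrumentation with automatic instrumentation of binaries or source code and use far more sophisticated search strategies.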



Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  1. Microsoft Research, Redmond, USA
  2. University of California, Berkeley, Berkeley, USA
