Aho, A.V., Sethi, R., Ullman, J.D.: Compilers: Principles, Techniques, and Tools. Addison-Wesley, Reading (1986)
Albarghouthi, A., Gurfinkel, A., Chechik, M.: From under-approximations to over-approximations and back. In: Proceedings of TACAS, LNCS, vol. 7214, pp. 157–172. Springer, Berlin (2012). https://doi.org/10.1007/978-3-642-28756-5_12
Albert, E., Puebla, G., Hermenegildo, M.V.: Abstraction-carrying code. In: Proceedings of LPAR, LNCS, vol. 3452, pp. 380–397. Springer, Berlin (2004). https://doi.org/10.1007/978-3-540-32275-7_25
Apel, S., Beyer, D., Friedberger, K., Raimondi, F., von Rhein, A.: Domain types: abstract-domain selection based on variable usage. In: Proceedings of HVC, LNCS, vol. 8244, pp. 262–278. Springer, Berlin (2013). https://doi.org/10.1007/978-3-319-03077-7
Aquino, A., Bianchi, F.A., Chen, M., Denaro, G., Pezzè, M.: Reusing constraint proofs in program analysis. In: Proceedings of ISSTA, pp. 305–315. ACM, New York (2015). https://doi.org/10.1145/2771783.2771802
Baars, A.I., Harman, M., Hassoun, Y., Lakhotia, K., McMinn, P., Tonella, P., Vos, T.E.J.: Symbolic search-based testing. In: Proceedings of ASE, pp. 53–62. IEEE (2011). https://doi.org/10.1109/ASE.2011.6100119
Baluda, M.: EvoSE: evolutionary symbolic execution. In: Proceedings of A-TEST, pp. 16–19. ACM, New York (2015). https://doi.org/10.1145/2804322.2804325
Beckman, N., Nori, A.V., Rajamani, S.K., Simmons, R.J.: Proofs from tests. In: Proceedings of ISSTA, pp. 3–14. ACM, New York (2008). https://doi.org/10.1145/1390630.1390634
Besson, F., Cornilleau, P., Jensen, T.P.: Result certification of static program analysers with automated theorem provers. In: Proceedings of VSTTE, LNCS, vol. 8164, pp. 304–325. Springer, Berlin (2013). https://doi.org/10.1007/978-3-642-54108-7_16
Besson, F., Jensen, T.P., Pichardie, D.: Proof-carrying code from certified abstract interpretation and fixpoint compression. TCS 364(3), 273–291 (2006). https://doi.org/10.1016/j.tcs.2006.08.012
Beyer, D.: Automatic verification of C and Java programs: SV-COMP 2019. In: Proceedings of TACAS (3), LNCS, vol. 11429, pp. 133–155. Springer, Berlin (2019). https://doi.org/10.1007/978-3-030-17502-3_9
Beyer, D.: First international competition on software testing (Test-Comp 2019). Int. J. Softw. Tools Technol. Transf. (2020)
Beyer, D., Chlipala, A.J., Henzinger, T.A., Jhala, R., Majumdar, R.: Generating tests from counterexamples. In: Proceedings of ICSE, pp. 326–335. IEEE (2004). https://doi.org/10.1109/ICSE.2004.1317455
Beyer, D., Chlipala, A.J., Henzinger, T.A., Jhala, R., Majumdar, R.: The Blast query language for software verification. In: Proceedings of SAS, LNCS, vol. 3148, pp. 2–18. Springer, Berlin (2004). https://doi.org/10.1007/978-3-540-27864-1_2
Beyer, D., Dangl, M.: Strategy selection for software verification based on Boolean features: a simple but effective approach. In: Proceedings of ISoLA, LNCS, vol. 11245, pp. 144–159. Springer, Berlin (2018). https://doi.org/10.1007/978-3-030-03421-4_11
Beyer, D., Dangl, M., Dietsch, D., Heizmann, M.: Correctness witnesses: exchanging verification results between verifiers. In: Proceedings of FSE, pp. 326–337. ACM, New York (2016). https://doi.org/10.1145/2950290.2950351
Beyer, D., Dangl, M., Dietsch, D., Heizmann, M., Stahlbauer, A.: Witness validation and stepwise testification across software verifiers. In: Proceedings of FSE, pp. 721–733. ACM, New York (2015). https://doi.org/10.1145/2786805.2786867
Beyer, D., Dangl, M., Lemberger, T., Tautschnig, M.: Tests from witnesses: execution-based validation of verification results. In: Proceedings of TAP, LNCS, vol. 10889, pp. 3–23. Springer, Berlin (2018). https://doi.org/10.1007/978-3-319-92994-1_1
Beyer, D., Dangl, M., Wendler, P.: Boosting k-induction with continuously-refined invariants. In: Proceedings of CAV, LNCS, vol. 9206, pp. 622–640. Springer, Berlin (2015). https://doi.org/10.1007/978-3-319-21690-4_42
Beyer, D., Dangl, M., Wendler, P.: A unifying view on SMT-based software verification. J. Autom. Reason. 60(3), 299–335 (2018). https://doi.org/10.1007/s10817-017-9432-6
Beyer, D., Friedberger, K.: Domain-independent multi-threaded software model checking. In: Proceedings of ASE, pp. 634–644. ACM, New York (2018). https://doi.org/10.1145/3238147.3238195
Beyer, D., Gulwani, S., Schmidt, D.: Combining model checking and data-flow analysis. In: Clarke, E.M., Henzinger, T.A., Veith, H. (eds.) Handbook on Model Checking, pp. 493–540. Springer, Berlin (2018). https://doi.org/10.1007/978-3-319-10575-8_16
Beyer, D., Henzinger, T.A., Keremoglu, M.E., Wendler, P.: Conditional model checking: a technique to pass information between verifiers. In: Proceedings of FSE. ACM, New York (2012). https://doi.org/10.1145/2393596.2393664
Beyer, D., Henzinger, T.A., Théoduloz, G.: Program analysis with dynamic precision adjustment. In: Proceedings of ASE, pp. 29–38. IEEE (2008). https://doi.org/10.1109/ASE.2008.13
Beyer, D., Holzer, A., Tautschnig, M., Veith, H.: Information reuse for multi-goal reachability analyses. In: Proceedings of ESOP, LNCS, vol. 7792, pp. 472–491. Springer, Berlin (2013). https://doi.org/10.1007/978-3-642-37036-6_26
Beyer, D., Jakobs, M.C.: CoVeriTest: cooperative verifier-based testing. In: Proceedings of FASE, LNCS, vol. 11424, pp. 389–408. Springer, Berlin (2019). https://doi.org/10.1007/978-3-030-16722-6_23
Beyer, D., Jakobs, M.C.: Replication package for article ‘Cooperative, verifier-based testing with CoVeriTest’ in STTT. Zenodo (2020). https://doi.org/10.5281/zenodo.3666060
Beyer, D., Jakobs, M.C., Lemberger, T., Wehrheim, H.: Reducer-based construction of conditional verifiers. In: Proceedings of ICSE, pp. 1182–1193. ACM, New York (2018). https://doi.org/10.1145/3180155.3180259
Beyer, D., Keremoglu, M.E.: CPAchecker: a tool for configurable software verification. In: Proceedings of CAV, LNCS, vol. 6806, pp. 184–190. Springer, Berlin (2011). https://doi.org/10.1007/978-3-642-22110-1_16
Beyer, D., Keremoglu, M.E., Wendler, P.: Predicate abstraction with adjustable-block encoding. In: Proceedings of FMCAD, pp. 189–197. FMCAD (2010)
Beyer, D., Lemberger, T.: Symbolic execution with CEGAR. In: Proceedings of ISoLA, LNCS, vol. 9952, pp. 195–211. Springer, Berlin (2016). https://doi.org/10.1007/978-3-319-47166-2_14
Beyer, D., Lemberger, T.: Software verification: testing vs. model checking. In: Proceedings of HVC, LNCS, vol. 10629, pp. 99–114. Springer, Berlin (2017). https://doi.org/10.1007/978-3-319-70389-3_7
Beyer, D., Lemberger, T.: Conditional testing: off-the-shelf combination of test-case generators. In: Proceedings of ATVA, LNCS, vol. 11781, pp. 189–208. Springer, Berlin (2019). https://doi.org/10.1007/978-3-030-31784-3_11
Beyer, D., Lemberger, T.: TestCov: Robust test-suite execution and coverage measurement. In: Proceedings of ASE, pp. 1074–1077. IEEE (2019). https://doi.org/10.1109/ASE.2019.00105
Beyer, D., Löwe, S.: Explicit-state software model checking based on CEGAR and interpolation. In: Proceedings of FASE, LNCS, vol. 7793, pp. 146–162. Springer, Berlin (2013). https://doi.org/10.1007/978-3-642-37057-1_11
Beyer, D., Löwe, S., Novikov, E., Stahlbauer, A., Wendler, P.: Precision reuse for efficient regression verification. In: Proceedings of FSE, pp. 389–399. ACM, New York (2013). https://doi.org/10.1145/2491411.2491429
Beyer, D., Löwe, S., Wendler, P.: Refinement selection. In: Proceedings of SPIN, LNCS, vol. 9232, pp. 20–38. Springer, Berlin (2015). https://doi.org/10.1007/978-3-319-23404-5_3
Beyer, D., Löwe, S., Wendler, P.: Sliced path prefixes: an effective method to enable refinement selection. In: Proceedings of FORTE, LNCS, vol. 9039, pp. 228–243. Springer, Berlin (2015). https://doi.org/10.1007/978-3-319-19195-9_15
Beyer, D., Löwe, S., Wendler, P.: Reliable benchmarking: requirements and solutions. Int. J. Softw. Tools Technol. Transfer 21(1), 1–29 (2019). https://doi.org/10.1007/s10009-017-0469-y
Bianculli, D., Filieri, A., Ghezzi, C., Mandrioli, D.: Syntactic-semantic incrementality for agile verification. SCICO 97, 47–54 (2015). https://doi.org/10.1016/j.scico.2013.11.026
Biere, A., Cimatti, A., Clarke, E.M., Zhu, Y.: Symbolic model checking without BDDs. In: Proceedings of TACAS, LNCS, vol. 1579, pp. 193–207. Springer, Berlin (1999). https://doi.org/10.1007/3-540-49059-0_14
Blicha, M., Hyvärinen, A.E.J., Marescotti, M., Sharygina, N.: A cooperative parallelization approach for property-directed k-induction. In: Proceedings of VMCAI, LNCS, vol. 11990, pp. 270–292. Springer, Berlin (2020). https://doi.org/10.1007/978-3-030-39322-9_13
Cadar, C., Dunbar, D., Engler, D.R.: Klee: Unassisted and automatic generation of high-coverage tests for complex systems programs. In: Proceedings of OSDI, pp. 209–224. USENIX Association (2008)
Cadar, C., Sen, K.: Symbolic execution for software testing: three decades later. CACM 56(2), 82–90 (2013). https://doi.org/10.1145/2408776.2408795
Carroll, M.D., Ryder, B.G.: Incremental data flow analysis via dominator and attribute updates. In: Proceedings of POPL, pp. 274–284. ACM, New York (1988). https://doi.org/10.1145/73560.73584
Chaieb, A.: Proof-producing program analysis. In: Proceedings of ICTAC, LNCS, vol. 4281, pp. 287–301. Springer, Berlin (2006). https://doi.org/10.1007/11921240_20
Chalupa, M., Vitovská, M., Strejcek, J.: Symbiotic 5: boosted instrumentation (competition contribution). In: Proceedings of TACAS, LNCS, vol. 10806, pp. 442–446. Springer, Berlin (2018). https://doi.org/10.1007/978-3-319-89963-3_29
Chebaro, O., Kosmatov, N., Giorgetti, A., Julliand, J.: Program slicing enhances a verification technique combining static and dynamic analysis. In: Proceedings of SAC, pp. 1284–1291. ACM, New York (2012). https://doi.org/10.1145/2245276.2231980
Cheng, W., Hüllermeier, E.: Combining instance-based learning and logistic regression for multilabel classification. Mach. Learn. 76(2–3), 211–225 (2009). https://doi.org/10.1007/s10994-009-5127-5
Chowdhury, A.B., Medicherla, R.K., Venkatesh, R.: VeriFuzz: program aware fuzzing (competition contribution). In: Proceedings of TACAS (3), LNCS, vol. 11429, pp. 244–249. Springer, Berlin (2019). https://doi.org/10.1007/978-3-030-17502-3_22
Christakis, M., Müller, P., Wüstholz, V.: Guiding dynamic symbolic execution toward unverified program executions. In: Proceedings of ICSE, pp. 144–155. ACM, New York (2016). https://doi.org/10.1145/2884781.2884843
Ciortea, L., Zamfir, C., Bucur, S., Chipounov, V., Candea, G.: Cloud9: a software testing service. ACM SIGOPS Oper. Syst. Rev. 43(4), 5–10 (2009). https://doi.org/10.1145/1713254.1713257
Clarke, E.M., Grumberg, O., Jha, S., Lu, Y., Veith, H.: Counterexample-guided abstraction refinement. In: Proceedings of CAV, LNCS, vol. 1855, pp. 154–169. Springer, Berlin (2000). https://doi.org/10.1007/10722167_15
Clarke, E.M., Henzinger, T.A., Veith, H., Bloem, R.: Handbook of Model Checking. Springer, Berlin (2018). https://doi.org/10.1007/978-3-319-10575-8
Clarke, E.M., Kröning, D., Lerda, F.: A tool for checking ANSI-C programs. In: Proceedings of TACAS, LNCS, vol. 2988, pp. 168–176. Springer, Berlin (2004). https://doi.org/10.1007/978-3-540-24730-2_15
Cousot, P., Cousot, R.: Systematic design of program-analysis frameworks. In: Proceedings of POPL, pp. 269–282. ACM, New York (1979). https://doi.org/10.1145/567752.567778
Csallner, C., Smaragdakis, Y.: Check ‘n’ crash: combining static checking and testing. In: Proceedings of ICSE, pp. 422–431. ACM, New York (2005). https://doi.org/10.1145/1062455.1062533
Czech, M., Hüllermeier, E., Jakobs, M., Wehrheim, H.: Predicting rankings of software verification tools. In: Proceedings of SWAN, pp. 23–26. ACM, New York (2017). https://doi.org/10.1145/3121257.3121262
Czech, M., Jakobs, M., Wehrheim, H.: Just test what you cannot verify! In: Proceedings of FASE, LNCS, vol. 9033, pp. 100–114. Springer, Berlin (2015). https://doi.org/10.1007/978-3-662-46675-9_7
Daca, P., Gupta, A., Henzinger, T.A.: Abstraction-driven concolic testing. In: Proceedings of VMCAI, LNCS, vol. 9583, pp. 328–347. Springer, Berlin (2016). https://doi.org/10.1007/978-3-662-49122-5_16
Demyanova, Y., Pani, T., Veith, H., Zuleger, F.: Empirical software metrics for benchmarking of verification tools. In: Proceedings of CAV, LNCS, vol. 9206, pp. 561–579. Springer, Berlin (2015). https://doi.org/10.1007/978-3-319-21690-4_39
Dijkstra, E.W.: A Discipline of Programming. Prentice-Hall, Englewood Cliffs (1976)
Fraser, G., Wotawa, F., Ammann, P.: Testing with model checkers: a survey. Softw. Test. Verif. Reliab. 19(3), 215–261 (2009). https://doi.org/10.1002/stvr.402
Galeotti, J.P., Fraser, G., Arcuri, A.: Improving search-based test suite generation with dynamic symbolic execution. In: Proceedings of ISSRE, pp. 360–369. IEEE (2013). https://doi.org/10.1109/ISSRE.2013.6698889
Gargantini, A., Vavassori, P.: Using decision trees to aid algorithm selection in combinatorial interaction tests generation. In: Proceedings of ICST, pp. 1–10. IEEE (2015). https://doi.org/10.1109/ICSTW.2015.7107442
Ge, X., Taneja, K., Xie, T., Tillmann, N.: DyTa: dynamic symbolic execution guided with static verification results. In: Proceedings of ICSE, pp. 992–994. ACM, New York (2011). https://doi.org/10.1145/1985793.1985971
Ghezzi, C., Jazayeri, M., Mandrioli, D.: Fundamentals of Software Engineering, 2nd edn. Prentice Hall, Englewood Cliffs (2003)
Godefroid, P., Klarlund, N., Sen, K.: Dart: directed automated random testing. In: Proceedings of PLDI, pp. 213–223. ACM, New York (2005). https://doi.org/10.1145/1065010.1065036
Godefroid, P., Levin, M.Y., Molnar, D.A.: Automated whitebox fuzz testing. In: Proceedings of NDSS. The Internet Society (2008)
Godefroid, P., Nori, A.V., Rajamani, S.K., Tetali, S.: Compositional may-must program analysis: unleashing the power of alternation. In: Proceedings of POPL, pp. 43–56. ACM, New York (2010). https://doi.org/10.1145/1706299.1706307
Graf, S., Saïdi, H.: Construction of abstract state graphs with Pvs. In: Proceedings of CAV, LNCS, vol. 1254, pp. 72–83. Springer, Berlin (1997). https://doi.org/10.1007/3-540-63166-6_10
Groce, A., Zhang, C., Eide, E., Chen, Y., Regehr, J.: Swarm testing. In: Proceedings of ISSTA, pp. 78–88. ACM, New York (2012). https://doi.org/10.1145/2338965.2336763
Gulavani, B.S., Henzinger, T.A., Kannan, Y., Nori, A.V., Rajamani, S.K.: Synergy: a new algorithm for property checking. In: Proceedings of FSE, pp. 117–127. ACM, New York (2006). https://doi.org/10.1145/1181775.1181790
Henzinger, T.A., Jhala, R., Majumdar, R., McMillan, K.L.: Abstractions from proofs. In: Proceedings of POPL, pp. 232–244. ACM, New York (2004). https://doi.org/10.1145/964001.964021
Henzinger, T.A., Jhala, R., Majumdar, R., Necula, G.C., Sutre, G., Weimer, W.: Temporal-safety proofs for systems code. In: Proceedings of CAV, LNCS, vol. 2404, pp. 526–538. Springer, Berlin (2002). https://doi.org/10.1007/3-540-45657-0_45
Henzinger, T.A., Jhala, R., Majumdar, R., Sanvido, M.A.A.: Extreme model checking. In: Verification: Theory and Practice, pp. 332–358 (2003). https://doi.org/10.1007/978-3-540-39910-0_16
Henzinger, T.A., Jhala, R., Majumdar, R., Sutre, G.: Lazy abstraction. In: Proceedings of POPL, pp. 58–70. ACM, New York (2002). https://doi.org/10.1145/503272.503279
Holík, L., Kotoun, M., Peringer, P., Soková, V., Trtík, M., Vojnar, T.: Predator shape analysis tool suite. In: Proceedings of HVC, LNCS, vol. 10028, pp. 202–209. Springer, Berlin (2016). https://doi.org/10.1007/978-3-319-49052-6_13
Holzer, A., Schallhart, C., Tautschnig, M., Veith, H.: FShell: Systematic test case generation for dynamic analysis and measurement. In: Proceedings of CAV, LNCS, vol. 5123, pp. 209–213. Springer, Berlin (2008). https://doi.org/10.1007/978-3-540-70545-1_20
Holzer, A., Schallhart, C., Tautschnig, M., Veith, H.: Query-driven program testing. In: Proceedings of VMCAI, LNCS, vol. 5403, pp. 151–166. Springer, Berlin (2009). https://doi.org/10.1007/978-3-540-93900-9_15
Holzer, A., Schallhart, C., Tautschnig, M., Veith, H.: How did you specify your test suite. In: Proceedings of ASE, pp. 407–416. ACM, New York (2010). https://doi.org/10.1145/1858996.1859084
Holzmann, G.J., Joshi, R., Groce, A.: Swarm verification. In: Proceedings of ASE, pp. 1–6. IEEE (2008). https://doi.org/10.1109/ASE.2008.9
Inkumsah, K., Xie, T.: Improving structural testing of object-oriented programs via integrating evolutionary testing and symbolic execution. In: Proceedings of ASE, pp. 297–306. IEEE (2008). https://doi.org/10.1109/ASE.2008.40
Jakobs, M.C.: CoVeriTest: Interleaving value and predicate analysis for test-case generation (competition contribution). Int. J. Softw. Tools Technol. Transf. (2020). https://doi.org/10.1007/s10009-020-00572-1
Jakobs, M.C.: CoVeriTest with dynamic partitioning of the iteration time limit (competition contribution). In: Proceedings of FASE, LNCS, vol. 12076, pp. 540–544. Springer, Berlin (2020). https://doi.org/10.1007/978-3-030-45234-6_30
Jakobs, M.C., Wehrheim, H.: Certification for configurable program analysis. In: Proceedings of SPIN, pp. 30–39. ACM, New York (2014). https://doi.org/10.1145/2632362.2632372
Jakobs, M.C., Wehrheim, H.: Programs from proofs: a framework for the safe execution of untrusted software. ACM Trans. Program. Lang. Syst. 39(2), 7:1–7:56 (2017). https://doi.org/10.1145/3014427
Jalote, P., Vangala, V., Singh, T., Jain, P.: Program partitioning: a framework for combining static and dynamic analysis. In: Proceedings of WODA, pp. 11–16. ACM, New York (2006). https://doi.org/10.1145/1138912.1138916
Jhala, R., Majumdar, R.: Software model checking. ACM Comput. Surv. 41, 4 (2009). https://doi.org/10.1145/1592434.1592438
Jia, X., Ghezzi, C., Ying, S.: Enhancing reuse of constraint solutions to improve symbolic execution. In: Proceedings of ISSTA, pp. 177–187. ACM, New York (2015). https://doi.org/10.1145/2771783.2771806
Jia, Y., Cohen, M.B., Harman, M., Petke, J.: Learning combinatorial interaction test generation strategies using hyperheuristic search. In: Proceedings of ICSE, pp. 540–550. IEEE (2015). https://doi.org/10.1109/ICSE.2015.71
Kim, Y., Xu, Z., Kim, M., Cohen, M.B., Rothermel, G.: Hybrid directed test suite augmentation: an interleaving framework. In: Proceedings of ICST, pp. 263–272. IEEE (2014). https://doi.org/10.1109/ICST.2014.39
King, J.C.: Symbolic execution and program testing. Commun. ACM 19(7), 385–394 (1976). https://doi.org/10.1145/360248.360252
Kotthoff, L.: Algorithm selection for combinatorial search problems: a survey. In: Data Mining and Constraint Programming: Foundations of a Cross-Disciplinary Approach, LNCS, vol. 10101, pp. 149–190. Springer, Berlin (2016). https://doi.org/10.1007/978-3-319-50137-6_7
Lemieux, C., Sen, K.: FairFuzz: a targeted mutation strategy for increasing greybox fuzz testing coverage. In: Proceedings of ASE, pp. 475–485. ACM, New York (2018). https://doi.org/10.1145/3238147.3238176
Li, J., Zhao, B., Zhang, C.: Fuzzing: a survey. Cybersecurity 1(1), 6 (2018). https://doi.org/10.1186/s42400-018-0002-y
Li, K., Reichenbach, C., Csallner, C., Smaragdakis, Y.: Residual investigation: predictive and precise bug detection. In: Proceedings of ISSTA, pp. 298–308. ACM, New York (2012). https://doi.org/10.1145/2338965.2336789
Majumdar, R., Sen, K.: Hybrid concolic testing. In: Proceedings of ICSE, pp. 416–426. IEEE (2007). https://doi.org/10.1109/ICSE.2007.41
McMinn, P.: Search-based software test-data generation: a survey. Softw. Test. Verif. Reliab. 14(2), 105–156 (2004). https://doi.org/10.1002/stvr.294
Misailovic, S., Milicevic, A., Petrovic, N., Khurshid, S., Marinov, D.: Parallel test generation and execution with Korat. In: Proceedings of ESEC/FSE, pp. 135–144. ACM, New York (2007). https://doi.org/10.1145/1287624.1287645
Mudduluru, R., Ramanathan, M.K.: Efficient incremental static analysis using path abstraction. In: Proceedings of FASE, LNCS, vol. 8411, pp. 125–139. Springer, Berlin (2014). https://doi.org/10.1007/978-3-642-54804-8_9
Nguyen, T.L., Schrammel, P., Fischer, B., La Torre, S., Parlato, G.: Parallel bug-finding in concurrent programs via reduced interleaving instances. In: Proceedings of ASE, pp. 753–764. IEEE (2017). https://doi.org/10.1109/ASE.2017.8115686
Noller, Y., Kersten, R., Pasareanu, C.S.: Badger: Complexity analysis with fuzzing and symbolic execution. In: Proceedings of ISSTA, pp. 322–332. ACM, New York (2018). https://doi.org/10.1145/3213846.3213868
Pacheco, C., Lahiri, S.K., Ernst, M.D., Ball, T.: Feedback-directed random test generation. In: Proceedings of ICSE, pp. 75–84. IEEE (2007). https://doi.org/10.1109/ICSE.2007.37
Pasareanu, C.S., Visser, W.: A survey of new trends in symbolic execution for software testing and analysis. Int. J. Softw. Tools Technol. Transf. 11(4), 339–353 (2009). https://doi.org/10.1007/s10009-009-0118-1
Person, S., Yang, G., Rungta, N., Khurshid, S.: Directed incremental symbolic execution. In: Proceedings of PLDI, pp. 504–515. ACM, New York (2011). https://doi.org/10.1145/1993498.1993558
Post, H., Sinz, C., Kaiser, A., Gorges, T.: Reducing false positives by combining abstract interpretation and bounded model checking. In: Proceedings of ASE, pp. 188–197. IEEE (2008). https://doi.org/10.1109/ASE.2008.29
Rice, J.R.: The algorithm selection problem. Adv. Comput. 15, 65–118 (1976). https://doi.org/10.1016/S0065-2458(08)60520-3
Richter, C., Wehrheim, H.: PeSCo: predicting sequential combinations of verifiers (competition contribution). In: Proceedings of TACAS, LNCS, vol. 11429, pp. 229–233. Springer, Berlin (2019). https://doi.org/10.1007/978-3-030-17502-3_19
Rose, E.: Lightweight bytecode verification. J. Autom. Reason. 31(3–4), 303–334 (2003). https://doi.org/10.1023/B:JARS.0000021015.15794.82
Rothenberg, B., Dietsch, D., Heizmann, M.: Incremental verification using trace abstraction. In: Proceedings of SAS, LNCS, vol. 11002, pp. 364–382. Springer, Berlin (2018). https://doi.org/10.1007/978-3-319-99725-4_22
Ryder, B.G.: Incremental data flow analysis. In: Proceedings of POPL, pp. 167–176. ACM Press, New York (1983). https://doi.org/10.1145/567067.567084
Sakti, A., Guéhéneuc, Y., Pesant, G.: Boosting search based testing by using constraint based testing. In: Proceedings of SSBSE, LNCS, vol. 7515, pp. 213–227. Springer, Berlin (2012). https://doi.org/10.1007/978-3-642-33119-0_16
Seo, S., Yang, H., Yi, K.: Automatic construction of Hoare proofs from abstract interpretation results. In: Proceedings of APLAS, LNCS, vol. 2895, pp. 230–245. Springer, Berlin (2003). https://doi.org/10.1007/978-3-540-40018-9_16
Sery, O., Fedyukovich, G., Sharygina, N.: Incremental upgrade checking by means of interpolation-based function summaries. In: Proceedings of FMCAD, pp. 114–121. FMCAD Inc., Palo Alto (2012)
Sherman, E., Dwyer, M.B.: Structurally defined conditional data-flow static analysis. In: Proceedings of TACAS (2), LNCS, vol. 10806, pp. 249–265. Springer, Berlin (2018). https://doi.org/10.1007/978-3-319-89963-3_15
Siddiqui, J.H., Khurshid, S.: Scaling symbolic execution using ranged analysis. In: Leavens, G.T., Dwyer, M.B. (eds.) Proceedings of SPLASH, pp. 523–536. ACM, New York (2012). https://doi.org/10.1145/2384616.2384654
Sokolsky, O., Smolka, S.A.: Incremental model checking in the modal mu-calculus. In: Proceedings of CAV, LNCS, vol. 818, pp. 351–363. Springer, Berlin (1994). https://doi.org/10.1007/3-540-58179-0_67
Staats, M., Pasareanu, C.S.: Parallel symbolic execution for structural test generation. In: Proceedings of ISSTA, pp. 183–194. ACM, New York (2010). https://doi.org/10.1145/1831708.1831732
Stephens, N., Grosen, J., Salls, C., Dutcher, A., Wang, R., Corbetta, J., Shoshitaishvili, Y., Kruegel, C., Vigna, G.: Driller: augmenting fuzzing through selective symbolic execution. In: Proceedings of NDSS. Internet Society (2016). https://doi.org/10.14722/ndss.2016.23368
Tulsian, V., Kanade, A., Kumar, R., Lal, A., Nori, A.V.: MUX: algorithm selection for software model checkers. In: Proceedings of MSR. ACM, New York (2014). https://doi.org/10.1145/2597073.2597080
Visser, W., Geldenhuys, J., Dwyer, M.B.: Green: reducing, reusing, and recycling constraints in program analysis. In: Proceedings of FSE, pp. 58:1–58:11. ACM, New York (2012). https://doi.org/10.1145/2393596.2393665
Visser, W., Păsăreanu, C.S., Khurshid, S.: Test-input generation with Java PathFinder. In: Proceedings of ISSTA, pp. 97–107. ACM, New York (2004). https://doi.org/10.1145/1007512.1007526
Xu, L., Hutter, F., Hoos, H.H., Leyton-Brown, K.: SATzilla: portfolio-based algorithm selection for SAT. J. Artif. Intell. Res. 32, 565–606 (2008). https://doi.org/10.1613/jair.2490
Xu, Z., Kim, Y., Kim, M., Rothermel, G.: A hybrid directed test-suite augmentation technique. In: Proceedings of ISSRE, pp. 150–159. IEEE (2011). https://doi.org/10.1109/ISSRE.2011.21
Yang, G., Dwyer, M.B., Rothermel, G.: Regression model checking. In: Proceedings of ICSM, pp. 115–124. IEEE (2009). https://doi.org/10.1109/ICSM.2009.5306334
Yang, G., Păsăreanu, C.S., Khurshid, S.: Memoized symbolic execution. In: Proceedings of ISSTA, pp. 144–154. ACM, New York (2012). https://doi.org/10.1145/2338965.2336771
Yorsh, G., Ball, T., Sagiv, M.: Testing, abstraction, theorem proving: Better together! In: Proceedings of ISSTA, pp. 145–156. ACM, New York (2006). https://doi.org/10.1145/1146238.1146255