Abstract
Abstract interpretation is a powerful tool in program verification. Several commercial and industrial-scale implementations of abstract interpretation have demonstrated that this approach can verify safety properties of real-world code. Using abstract interpretation tools is not always simple, however. If no user-provided hints are available, the abstract interpretation engine may lose precision during widening and produce an overwhelming number of false alarms. Yet providing these hints manually is time consuming, and often frustrating when each re-run of the analysis takes a long time.
We present an algorithm for program verification that combines abstract interpretation, symbolic execution, and crowdsourcing. If verification fails, our procedure suggests likely invariants, or program patches, that provide helpful information to the verification engineer and make it easier to find the correct specification. By complementing machine learning with well-designed games, we enable program analysis to incorporate human insights that help improve its scalability and usability.
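To make the widening problem concrete, the following is a minimal, hypothetical sketch (not the procedure described in this paper) of an interval-domain analysis of the loop `i = 0; while (i < 10) i = i + 1;`. Naive widening jumps the unstable upper bound to infinity; intersecting each iterate with a user-provided invariant hint such as `0 <= i <= 10` restores precision.

```python
import math

# Interval abstract domain: a value is a pair (lo, hi).
def join(x, y):
    return (min(x[0], y[0]), max(x[1], y[1]))

def widen(x, y):
    # Standard interval widening: any unstable bound jumps to +/- infinity.
    lo = x[0] if y[0] >= x[0] else -math.inf
    hi = x[1] if y[1] <= x[1] else math.inf
    return (lo, hi)

def analyze_loop(hint=None):
    # Abstract semantics of: i = 0; while (i < 10) i = i + 1;
    i = (0, 0)
    while True:
        # One abstract iteration: filter by i < 10, then add 1.
        body = (i[0] + 1, min(i[1], 9) + 1)
        new = widen(i, join(i, body))
        if hint is not None:
            # Intersect with the user-supplied invariant hint.
            new = (max(new[0], hint[0]), min(new[1], hint[1]))
        if new == i:  # fixpoint reached
            return i
        i = new

print(analyze_loop())              # (0, inf): widening lost the upper bound
print(analyze_loop(hint=(0, 10)))  # (0, 10): the hint restores precision
```

Without the hint, every fact of the form `i <= c` after the loop becomes a potential false alarm; with it, the analysis proves the tight invariant in two iterations.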
Acknowledgement
This work was supported in part by the National Science Foundation under grant contracts CCF 1423296 and CNS 1423298, and DARPA under agreement number FA8750-12-C-0225.
We gratefully acknowledge the contributions of our collaborators: at UCSC, especially Kate Compton, Heather Logas, Joseph Osborn, Zhongpeng Lin, Dylan Lederle-Ensign, Joe Mazeika, Afshin Mobrabraein, Chandranil Chakrabortii, Johnathan Pagnutti, Kelsey Coffman, Richard Vallejos, Lauren Scott, John Thomas Murray, Orlando Salvatore, Huascar Sanchez, Michael Shavlovsky, Daniel Cetina, Shayne Clementi, Chris Lewis, Dan Shapiro, Michael Mateas, and E. James Whitehead Jr.; at SRI, John Murray, Min Yin, Natarajan Shankar, and Sam Owre; and at CEA, Florent Kirchner and Boris Yakobowski.
Copyright information
© 2015 Springer-Verlag Berlin Heidelberg
Cite this paper
Fava, D., Signoles, J., Lemerre, M., Schäf, M., Tiwari, A. (2015). Gamifying Program Analysis. In: Davis, M., Fehnker, A., McIver, A., Voronkov, A. (eds) Logic for Programming, Artificial Intelligence, and Reasoning. LPAR 2015. Lecture Notes in Computer Science, vol 9450. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-48899-7_41
Print ISBN: 978-3-662-48898-0
Online ISBN: 978-3-662-48899-7