Can Formal Methods Improve the Efficiency of Code Reviews?

  • Martin Hentschel
  • Reiner Hähnle
  • Richard Bubel
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9681)


Abstract

Code reviews are a proven, effective technique for finding defects in source code and for increasing its quality. Industrial software production often relies on code reviews as a standard QA mechanism. Surprisingly, though, tool support for reviewing activities is rare. Existing systems help to keep track of the discussion during a review, but do not support the reviewing activity itself. In this paper we argue that such support can be provided by formal analysis tools. Specifically, we use symbolic execution to aid program understanding, a central subtask of code review. Tool support is realized by an Eclipse extension called the Symbolic Execution Debugger, which lets a reviewer visually explore a symbolic execution tree of the program under inspection. For the evaluation we carefully designed a controlled experiment. We provide statistical evidence that, with the help of symbolic execution, defects are identified more effectively than with a purely code-based view. Our work suggests that there is great potential for formal methods not only in the production of safety-critical systems, but for any kind of software, as part of a standard development process.


Keywords: Code review · Symbolic execution · Empirical evaluation
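To make the technique behind the abstract concrete, the following is a minimal sketch of symbolic execution over a toy statement language: each `if` forks the execution into two paths, each carrying an accumulated path condition, yielding the kind of execution tree the paper's tool visualizes. Everything here (`sym_exec`, the tuple encoding of statements, the `abs`-style example) is an illustrative invention for this sketch, not the API or internals of the Symbolic Execution Debugger.

```python
# Minimal sketch of symbolic execution over a toy statement language.
# Symbolic values are represented as strings; a path is a pair of
# (path condition, symbolic environment). Illustrative only.

def sym_exec(stmts, path_cond, env):
    """Execute statements symbolically; return one
    (path condition, symbolic environment) pair per execution path."""
    if not stmts:
        return [(path_cond, env)]
    head, rest = stmts[0], stmts[1:]
    if head[0] == "assign":                      # ("assign", var, expr)
        _, var, expr = head
        # substitute current symbolic values into the right-hand side
        return sym_exec(rest, path_cond, {**env, var: expr.format(**env)})
    if head[0] == "if":                          # ("if", cond, then, else)
        _, cond, then_b, else_b = head
        c = cond.format(**env)
        # fork: one subtree per branch, each extending the path condition
        return (sym_exec(then_b + rest, path_cond + [c], env)
                + sym_exec(else_b + rest, path_cond + [f"!({c})"], env))
    raise ValueError(f"unknown statement: {head[0]}")

# the program  if (x < 0) r = -x; else r = x;  with x a symbolic input
prog = [("if", "{x} < 0",
         [("assign", "r", "-({x})")],
         [("assign", "r", "{x}")])]

for pc, env in sym_exec(prog, [], {"x": "x"}):
    print(" & ".join(pc), "=>", "r ==", env["r"])
```

Running this prints one line per execution path, e.g. the path with condition `x < 0` ends with `r == -(x)`; a reviewer inspecting such a tree sees every feasible path and its final symbolic state at once, rather than mentally simulating each branch.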



Acknowledgments

We thank all participants of the evaluation for their valuable time and feedback.



Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Martin Hentschel (1)
  • Reiner Hähnle (1)
  • Richard Bubel (1)

  1. Department of Computer Science, TU Darmstadt, Darmstadt, Germany
