Empirical Software Engineering, Volume 21, Issue 3, pp 932–959

Investigating technical and non-technical factors influencing modern code review

  • Olga Baysal
  • Oleksii Kononenko
  • Reid Holmes
  • Michael W. Godfrey


Abstract

When submitting patches for code review, individual developers are primarily interested in maximizing the chances of their patch being accepted in the least time possible. In principle, code review is a transparent process in which reviewers aim to assess the qualities of the patch on its technical merits in a timely manner; however, in practice the execution of this process can be affected by a variety of factors, some of which are external to the technical content of the patch itself. In this paper, we describe empirical studies of the code review processes for large, open source projects such as WebKit and Google Blink. We first consider factors that have been examined in previous studies — patch size, priority, and component — and then extend our enquiries to explore the effects of organization (which company is involved) and developer profile (review load and activity, patch writer experience) on code review response time and eventual outcome. Our approach uses a reverse engineered model of the patch submission process, and extracts key information from the issue-tracking and code review systems. Our findings suggest that these non-technical factors can significantly impact code review outcomes.


Keywords: Code review · Collaboration · Technical and non-technical factors · Personal and organizational aspects · WebKit · Blink



Acknowledgments

We thank the WebKit and Blink developers we talked to for their insights into the source code hierarchy and the review process.


References

  1. Bacchelli A, Bird C (2013) Expectations, outcomes, and challenges of modern code review. In: Proceedings of the 2013 International Conference on Software Engineering, pp 712–721
  2. Baysal O, Holmes R (2012) A qualitative study of Mozilla's process management practices. Tech. Rep. CS-2012-10, David R. Cheriton School of Computer Science, University of Waterloo, Waterloo, Canada
  3. Baysal O, Kononenko O, Holmes R, Godfrey M (2012) The secret life of patches: a Firefox case study. In: Proceedings of the 19th Working Conference on Reverse Engineering, pp 447–455
  4. Baysal O, Kononenko O, Holmes R, Godfrey MW (2013) The influence of non-technical factors on code review. In: Proceedings of the Working Conference on Reverse Engineering, pp 122–131
  5. Beller M, Bacchelli A, Zaidman A, Juergens E (2014) Modern code reviews in open-source projects: which problems do they fix? In: Proceedings of the 11th Working Conference on Mining Software Repositories, pp 202–211
  6. Bitergia (2013) Reviewers and companies in the WebKit project.
  7. Conway M (1968) How do committees invent? Datamation 14(4):28–31
  8. Herraiz I, German DM, Gonzalez-Barahona JM, Robles G (2008) Towards a simplification of the bug report form in Eclipse. In: Proceedings of the 2008 International Working Conference on Mining Software Repositories, pp 145–148
  9. Jiang Y, Adams B, German DM (2013) Will my patch make it? And how fast? A case study on the Linux kernel. In: Proceedings of the 10th IEEE Working Conference on Mining Software Repositories, San Francisco, CA, USA
  10. Kruskal WH, Wallis WA (1952) Use of ranks in one-criterion variance analysis. J Am Stat Assoc 47(260):583–621
  11. Lehmann E, D'Abrera H (2006) Nonparametrics: statistical methods based on ranks. Springer
  12. Massey FJ (1951) The Kolmogorov–Smirnov test for goodness of fit. J Am Stat Assoc 46(253):68–78
  13. Nagappan N, Murphy B, Basili V (2008) The influence of organizational structure on software quality: an empirical case study. In: Proceedings of the 30th International Conference on Software Engineering, pp 521–530
  14. Protalinski E (2013) Opera confirms it will follow Google and ditch WebKit for Blink, as part of its commitment to Chromium.
  15. Rigby P, German D (2006) A preliminary examination of code review processes in open source projects. Tech. Rep. DCS-305-IR, University of Victoria, Canada
  16. Rigby PC, German DM, Storey MA (2008) Open source software peer review practices: a case study of the Apache server. In: Proceedings of the 30th International Conference on Software Engineering, pp 541–550
  17. Weissgerber P, Neu D, Diehl S (2008) Small patches get in! In: Proceedings of the 2008 International Working Conference on Mining Software Repositories, pp 67–76

Copyright information

© Springer Science+Business Media New York 2015

Authors and Affiliations

  • Olga Baysal (1)
  • Oleksii Kononenko (2)
  • Reid Holmes (2)
  • Michael W. Godfrey (2)

  1. Department of Computer Science and Operations Research, Université de Montréal, Montréal, Canada
  2. David R. Cheriton School of Computer Science, University of Waterloo, Waterloo, Canada
