
Learning to Predict Code Review Completion Time In Modern Code Review


Abstract

Modern Code Review (MCR) is being adopted in both open-source and proprietary projects as a common practice. MCR is a widely acknowledged quality assurance practice that allows early detection of defects as well as poor coding practices. It also brings several other benefits such as knowledge sharing, team awareness, and collaboration. For a successful review process, peer reviewers should perform their review tasks promptly while providing relevant feedback about the code change being reviewed. In practice, however, code reviews can experience significant delays due to various socio-technical factors, which can affect project quality and cost. Moreover, existing MCR frameworks lack tool support to help developers estimate the time required to complete a code review before accepting or declining a review request. In this paper, we aim to build and validate an automated approach to predict code review completion time in the context of MCR. We believe that the predictions of our approach can improve the engagement of developers by raising their awareness of potential delays while doing code reviews. To this end, we formulate the prediction of code review completion time as a learning problem. In particular, we propose a framework of regression machine learning (ML) models that leverages 69 features stemming from 8 dimensions to (i) effectively estimate the code review completion time, and (ii) investigate the main factors influencing it. We conduct an empirical study on more than 280K code reviews spanning five projects hosted on Gerrit. Results indicate that ML models significantly outperform baseline approaches, with a relative improvement ranging from 7% to 49%. Furthermore, our experiments show that features related to the date of the code review request, the previous activities of the owner and reviewers, and the history of their interactions are the most important. By raising the awareness of the change owner and reviewers regarding potential delays based on the predicted completion time, our approach can help further engage them in the review process.
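
To make the formulation concrete, here is a minimal sketch that frames code review completion time prediction as a supervised regression problem and scores a model against a naive baseline, in the spirit of the evaluation summarized above. It is an illustration under stated assumptions, not the authors' pipeline: the random-forest regressor, the median baseline, and the synthetic placeholder features stand in for the paper's model suite and its 69 features across 8 dimensions.

```python
# Hypothetical sketch: code review completion time as a regression problem.
import numpy as np
from sklearn.dummy import DummyRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Placeholder feature matrix; in the paper, features span dimensions such as
# the request date and the owner's/reviewers' past activities and interactions.
X = rng.normal(size=(n, 10))
# Synthetic completion times in hours (absolute value keeps them non-negative).
y = np.abs(24 * X[:, 0] + 6 * X[:, 1] + rng.normal(scale=12.0, size=n))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

baseline = DummyRegressor(strategy="median").fit(X_train, y_train)  # naive baseline
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

mae_base = mean_absolute_error(y_test, baseline.predict(X_test))
mae_model = mean_absolute_error(y_test, model.predict(X_test))
# Relative improvement over the baseline, analogous in spirit to the
# 7%-49% relative improvement reported in the paper.
print(f"Relative MAE improvement: {(mae_base - mae_model) / mae_base:.1%}")

# Impurity-based importances loosely mirror the paper's analysis of which
# factors most influence completion time (indices here are meaningless).
print("Top features:", np.argsort(model.feature_importances_)[::-1][:3])
```

Any regressor/baseline pair scored with an error metric such as MAE would fit the same scaffold; the sketch conveys only the shape of the learning problem, not the models or features actually used in the study.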




Acknowledgements

This research has been funded by the Natural Sciences and Engineering Research Council of Canada (NSERC), grant RGPIN-2018-05960.

Author information

Corresponding author

Correspondence to Ali Ouni.

Ethics declarations

Conflicts of interest

The authors declare that they have no competing financial or non-financial interests.

Additional information

Communicated by: Christoph Treude.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Chouchen, M., Ouni, A., Olongo, J. et al. Learning to Predict Code Review Completion Time In Modern Code Review. Empir Software Eng 28, 82 (2023). https://doi.org/10.1007/s10664-023-10300-3

