Crucible official website (2019) https://www.atlassian.com/software/crucible
Gerrit Code Review (2019) https://www.gerritcodereview.com
GitHub official website (2019) https://github.com
About qt (2019) https://wiki.qt.io/About_Qt
Abelein U, Paech B (2015) Understanding the influence of user participation and involvement on system success–a systematic mapping study. Empir Softw Eng 20(1):28–81
Android (2020) Android Gerrit online repository https://android-review.googlesource.com
Arlot S, Celisse A, et al. (2010) A survey of cross-validation procedures for model selection. Statistics Surveys 4:40–79
Bacchelli A, Bird C (2013) Expectations, outcomes, and challenges of modern code review. In: Proceedings of the 2013 international conference on software engineering, ICSE ’13. ISBN 978-1-4673-3076-3, https://doi.org/10.1109/ICSE.2013.6606617. IEEE Press, Piscataway, pp 712–721
Baeza-Yates R, Ribeiro-Neto B, et al. (1999) Modern Information Retrieval, vol 463. ACM Press, New York
Ball T, Kim J-M, Porter AA, Siy HP (1997) If your version control system could talk. In: ICSE Workshop on process modelling and empirical studies of software engineering, vol 11
Baum T, Liskin O, Niklas K, Schneider K (2016) A faceted classification scheme for change-based industrial code review processes. In: 2016 IEEE International conference on software quality, reliability and security (QRS), pp 74–85
Baum T, Leßmann H, Schneider K (2017) The choice of code review process: A survey on the state of the practice. In: Product-focused software process improvement. ISBN 978-3-319-69926-4. Springer International Publishing, Cham, pp 111–127
Baum T, Schneider K, Bacchelli A (2019) Associating working memory capacity and code change ordering with code review performance. Empirical Software Engineering, pp 1–37
Bavota G, Russo B (2015) Four eyes are better than two: on the impact of code reviews on software quality. In: 2015 IEEE International conference on software maintenance and evolution (ICSME), IEEE, pp 81–90
Baysal O, Kononenko O, Holmes R, Godfrey MW (2016) Investigating technical and non-technical factors influencing modern code review. Empir Softw Eng 21(3):932–959
Beckwith L, Kissinger C, Burnett M, Wiedenbeck S, Lawrance J, Blackwell A, Cook C (2006) Tinkering and gender in end-user programmers’ debugging. In: Proceedings of the SIGCHI conference on Human Factors in computing systems, pp 231–240
Beller M, Bacchelli A, Zaidman A, Juergens E (2014) Modern code reviews in open-source projects: Which problems do they fix?. In: Proceedings of the 11th working conference on mining software repositories, MSR 2014. ISBN 978-1-4503-2863-0. https://doi.org/10.1145/2597073.2597082. ACM, New York, pp 202–211
Bettenburg N, Just S, Schröter A., Weiss C, Premraj R, Zimmermann T (2008) What makes a good bug report?. In: Proceedings of the 16th ACM SIGSOFT international symposium on foundations of software engineering, pp 308–318
Bird C, Carnahan T, Greiler M (2015) Lessons learned from building and deploying a code review analytics platform. In: 2015 IEEE/ACM 12th working conference on mining software repositories, IEEE, pp 191–201
Bishop CM (2006) Pattern recognition and machine learning. Springer
Bosu A, Carver JC (2014) Impact of developer reputation on code review outcomes in OSS projects: an empirical investigation. In: Proceedings of the 8th ACM/IEEE international symposium on empirical software engineering and measurement, ACM, p 33
Burnett MM, Beckwith L, Wiedenbeck S, Fleming SD, Cao J, Park TH, Grigoreanu V, Rector K (2011) Gender pluralism in problem-solving software. Interacting with Computers 23(5):450–460
Buse RPL, Weimer WR (2010) Learning a metric for code readability. IEEE Trans Softw Eng 36(4):546–558. ISSN 0098-5589. https://doi.org/10.1109/TSE.2009.70
Chandrashekar G, Sahin F (2014) A survey on feature selection methods. Computers & Electrical Engineering 40(1):16–28
Chawla NV, Bowyer KW, Hall LO, Kegelmeyer WP (2002) Smote: synthetic minority over-sampling technique. Journal of Artificial Intelligence Research 16:321–357
Couchbase (2020) Couchbase Gerrit online repository http://review.couchbase.org/q/status:open
Czerwonka J, Greiler M, Tilford J (2015) Code reviews do not find bugs: how the current code review best practice slows us down. In: Proceedings of the 37th international conference on software engineering, Vol 2, IEEE Press, pp 27–28
Di Penta M, Cerulo L, Aversano L (2009) The life and death of statically detected vulnerabilities: an empirical study. Inf Softw Technol 51(10):1469–1484
Domingos PM (2012) A few useful things to know about machine learning. Commun ACM 55(10):78–87
Eclipse (2020) Eclipse Gerrit online repository https://git.eclipse.org/r/q/status:open+-is:wip
Elkan C (2001) The foundations of cost-sensitive learning. In: International joint conference on artificial intelligence, vol 17, Lawrence Erlbaum Associates Ltd, pp 973–978
Fagan M (2002) Design and code inspections to reduce errors in program development. In: Software pioneers, Springer, pp 575–607
Fink A (2003) How to design survey studies. Sage
Fluri B, Wuersch M, Pinzger M, Gall H (2007) Change distilling: Tree differencing for fine-grained source code change extraction. IEEE Trans Softw Eng 33(11):725–743
Fregnan E, Petrulio F, Di Geronimo L, Bacchelli A (2020) What happens in my code reviews? replication package. https://doi.org/10.5281/zenodo.5592254
Gata W, Grand G, Fatmasari R, Baharuddin B, Patras YE, Hidayat R, Tohari S, Wardhani NK (2019) Prediction of teachers’ lateness factors coming to school using C4.5, random tree, random forest algorithm. In: 2nd international conference on research of educational administration and management (ICREAM 2018), Atlantis Press, pp 161–166
Giger E, D’Ambros M, Pinzger M, Gall HC (2012) Method-level bug prediction. In: Proceedings of the 2012 ACM-IEEE international symposium on empirical software engineering and measurement, IEEE, pp 171–180
Hall M, Frank E, Holmes G, Pfahringer B, Reutemann P, Witten IH (2009) The weka data mining software: an update. ACM SIGKDD Explorations Newsletter 11(1):10–18
Hall MA (1999) Correlation-based feature selection for machine learning. PhD thesis, The University of Waikato
Jiarpakdee J, Tantithamthavorn C, Hassan AE (2019) The impact of correlated metrics on the interpretation of defect models. IEEE Transactions on Software Engineering
Johnson B, Song Y, Murphy-Hill E, Bowdidge R (2013) Why don’t software developers use static analysis tools to find bugs? In: Proceedings of the 2013 international conference on software engineering, IEEE Press, pp 672–681
Johnson RB, Onwuegbuzie AJ (2004) Mixed methods research: a research paradigm whose time has come. Educational Researcher 33(7):14–26
Kamei Y, Matsumoto S, Monden A, Matsumoto K-i, Adams B, Hassan AE (2010) Revisiting common bug prediction findings using effort-aware models. In: 2010 IEEE International conference on software maintenance, IEEE, pp 1–10
Kaner C, Falk J, Nguyen HQ (2000) Testing computer software, 2nd edn. Dreamtech Press
Karegowda AG, Manjunath A, Jayaram M (2010) Comparative study of attribute selection using gain ratio and correlation based feature selection. International Journal of Information Technology and Knowledge Management 2(2):271–277
Kemerer CF, Paulk MC (2009) The impact of design and code reviews on software quality: an empirical study based on PSP data. IEEE Trans Softw Eng 35(4):534–550
Kononenko O, Baysal O, Guerrouj L, Cao Y, Godfrey MW (2015) Investigating code review quality: Do people and participation matter? In: 2015 IEEE international conference on software maintenance and evolution (ICSME)
Kotsiantis S, Kanellopoulos D, Pintelas P (2006) Data preprocessing for supervised leaning. Int J Comput Sci 1(2):111–117
Kovalenko V, Tintarev N, Pasynkov E, Bird C, Bacchelli A (2020) Does reviewer recommendation help developers?. IEEE Trans Softw Eng 46(7):710–731. https://doi.org/10.1109/TSE.2018.2868367
Krawczyk B (2016) Learning from imbalanced data: open challenges and future directions. Prog Artif Intell 5(4):221–232
Krippendorff K (2011) Computing Krippendorff’s alpha-reliability
Kumar L, Satapathy SM, Murthy LB (2019) Method level refactoring prediction on five open source java projects using machine learning techniques. In: Proceedings of the 12th innovations on software engineering conference (formerly known as India Software Engineering Conference), pp 1–10
Lessmann S, Baesens B, Mues C, Pietsch S (2008) Benchmarking classification models for software defect prediction: a proposed framework and novel findings. IEEE Trans Softw Eng 34(4):485–496
Likert R (1932) A technique for the measurement of attitudes. Archives of Psychology
Longhurst R (2003) Semi-structured interviews and focus groups. Key Methods in Geography 3:143–156
Lyons EE, Coyle AE (2007) Analysing qualitative data in psychology. Sage Publications Ltd
Mahboob T, Irfan S, Karamat A (2016) A machine learning approach for student assessment in e-learning using Quinlan’s C4.5, Naive Bayes and random forest algorithms. In: 2016 19th international multi-topic conference (INMIC), IEEE, pp 1–8
Mäntylä MV, Lassenius C (2009) What types of defects are really discovered in code reviews?. IEEE Transactions on Software Engineering 35(3):430–448. ISSN 0098-5589. https://doi.org/10.1109/TSE.2008.71
McCabe TJ (1976) A complexity measure. IEEE Transactions on Software Engineering SE-2(4):308–320
McIntosh S, Kamei Y, Adams B, Hassan AE (2014) The impact of code review coverage and code review participation on software quality: a case study of the Qt, VTK, and ITK projects. In: Proceedings of the 11th working conference on mining software repositories, ACM, pp 192–201
McIntosh S, Kamei Y, Adams B, Hassan AE (2016) An empirical study of the impact of modern code review practices on software quality. Empir Softw Eng 21(5):2146–2189
Morales R, McIntosh S, Khomh F (2015) Do code review practices impact design quality? A case study of the Qt, VTK, and ITK projects. In: 2015 IEEE 22nd international conference on software analysis, evolution, and reengineering (SANER), IEEE, pp 171–180
Newcomer KE, Hatry HP, Wholey JS (2015) Conducting semi-structured interviews. In: Handbook of practical program evaluation, p 492
Ouni A, Kula RG, Inoue K (2016) Search-based peer reviewers recommendation in modern code review. In: 2016 IEEE International conference on software maintenance and evolution (ICSME), IEEE, pp 367–377
Paixao M, Krinke J, Han D, Harman M (2018) Crop: Linking code reviews to source code changes. In: 2018 IEEE/ACM 15th international conference on mining software repositories (MSR), pp 46–49
Palomba F, Panichella A, De Lucia A, Oliveto R, Zaidman A (2016) A textual-based technique for smell detection. In: 2016 IEEE 24th international conference on program comprehension (ICPC), pp 1–10
Pantiuchina J, Bavota G, Tufano M, Poshyvanyk D (2018) Towards just-in-time refactoring recommenders. In: 2018 IEEE/ACM 26th international conference on program comprehension (ICPC), IEEE, pp 312–3123
Pascarella L, Spadini D, Palomba F, Bruntink M, Bacchelli A (2018) Information needs in contemporary code review. Proc ACM Hum-Comput Interact 2(CSCW):135:1–135:27. ISSN 2573-0142. https://doi.org/10.1145/3274404
Pecorelli F, Palomba F, Di Nucci D, De Lucia A (2019) Comparing heuristic and machine learning approaches for metric-based code smell detection. In: Proceedings of the 27th international conference on program comprehension, IEEE Press, pp 93–104
Porter A, Siy H, Votta L (1996) A review of software inspections. In: Advances in Computers, vol 42. Elsevier, pp 39–76
Porter A, Siy H, Mockus A, Votta L (1998) Understanding the sources of variation in software inspections. ACM Trans Softw Eng Methodol (TOSEM) 7(1):41–79
Porter MF (1997) An algorithm for suffix stripping. In: Readings in information retrieval. Morgan Kaufmann Publishers Inc., San Francisco, pp 313–316. ISBN 1-55860-454-5. http://dl.acm.org/citation.cfm?id=275537.275705
Portigal S (2013) Interviewing users: how to uncover compelling insights. Rosenfeld Media
Ram A, Sawant AA, Castelluccio M, Bacchelli A (2018) What makes a code change easier to review: An empirical investigation on code change reviewability. In: Proceedings of the 2018 26th ACM joint meeting on european software engineering conference and symposium on the foundations of software engineering, ESEC/FSE 2018, New York, ACM, pp 201–212, ISBN 978-1-4503-5573-5. https://doi.org/10.1145/3236024.3236080
Reich Y, Barai S (1999) Evaluating machine learning models for engineering problems. Artif Intell Eng 13(3):257–272
Rigby PC, Bird C (2013) Convergent contemporary software peer review practices. In: Proceedings of the 2013 9th joint meeting on foundations of software engineering, ACM, pp 202–212
Rigby PC, German DM, Cowen L, Storey M-A (2014) Peer review on open-source software projects: Parameters, statistical models, and theory. ACM Trans Softw Eng Methodol (TOSEM) 23(4):35
Sadowski C, Van Gogh J, Jaspan C, Soderberg E, Winter C (2015) Tricorder: Building a program analysis ecosystem. In: 2015 IEEE/ACM 37th IEEE international conference on software engineering, vol 1, IEEE, pp 598–608
Sadowski C, Aftandilian E, Eagle A, Miller-Cushon L, Jaspan C (2018) Lessons from building static analysis tools at Google. Commun ACM 61(4):58–66
Santos MS, Soares JP, Abreu PH, Araujo H, Santos J (2018) Cross-validation for imbalanced datasets: Avoiding overoptimistic and overfitting approaches [research frontier]. IEEE Comput Intell Mag 13(4):59–76
Sauer C, Jeffery DR, Land L, Yetton P (2000) The effectiveness of software development technical reviews: a behaviorally motivated program of research. IEEE Trans Softw Eng 26(1):1–14
Shibuya B, Tamai T (2009) Understanding the process of participating in open source communities. In: Proceedings of the 2009 ICSE workshop on emerging trends in Free/Libre/Open source software research and development, IEEE Computer Society, pp 1–6
Spadini D, Aniche M, Storey M-A, Bruntink M, Bacchelli A (2018) When testing meets code review: Why and how developers review tests. In: 2018 IEEE/ACM 40th international conference on software engineering (ICSE), IEEE, pp 677–687
Spadini D, Palomba F, Baum T, Hanenberg S, Bruntink M, Bacchelli A (2019) Test-driven code review: an empirical study. In: Proceedings of the 41st international conference on software engineering, IEEE Press, pp 1061–1072
Strüder S, Mukelabai M, Strüber D, Berger T (2020) Feature-oriented defect prediction. In: Proceedings of the 24th ACM conference on systems and software product line: Volume A-Volume A, pp 1–12
Tantithamthavorn C, Hassan AE (2018) An experience report on defect modelling in practice: Pitfalls and challenges. In: Proceedings of the 40th international conference on software engineering: software engineering in practice, ACM, pp 286–295
Tantithamthavorn C, McIntosh S, Hassan AE, Matsumoto K (2016) An empirical comparison of model validation techniques for defect prediction models. IEEE Trans Softw Eng 43(1):1–18
Tantithamthavorn C, McIntosh S, Hassan AE, Matsumoto K (2018) The impact of automated parameter optimization on defect prediction models. IEEE Transactions on Software Engineering
Thongtanunam P, McIntosh S, Hassan AE, Iida H (2015a) Investigating code review practices in defective files: an empirical study of the Qt system. In: Proceedings of the 12th working conference on mining software repositories, IEEE Press, pp 168–179
Thongtanunam P, Tantithamthavorn C, Kula RG, Yoshida N, Iida H, Matsumoto K-i (2015b) Who should review my code? A file location-based code-reviewer recommendation approach for modern code review. In: 2015 IEEE 22nd international conference on software analysis, evolution, and reengineering (SANER), IEEE, pp 141–150
Thongtanunam P, McIntosh S, Hassan AE, Iida H (2016) Revisiting code ownership and its relationship with software quality in the scope of modern code review. In: Proceedings of the 38th international conference on software engineering, ACM, pp 1039–1050
Vassallo C, Panichella S, Palomba F, Proksch S, Zaidman A, Gall HC (2018) Context is king: the developer perspective on the usage of static analysis tools. In: 2018 IEEE 25th international conference on software analysis, evolution and reengineering (SANER), IEEE, pp 38–49
Vassallo C, Panichella S, Palomba F, Proksch S, Gall HC, Zaidman A (2019a) How developers engage with static analysis tools in different contexts. Empirical Software Engineering
Vassallo C, Proksch S, Gall HC, Di Penta M (2019b) Automated reporting of anti-patterns and decay in continuous integration. In: 2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE), IEEE, pp 105–115
Wiegers K (2002) Peer Reviews in Software: A Practical Guide. Addison-Wesley Longman Publishing Co., Inc., Boston. ISBN 0-201-73485-0
Wolf T, Schroter A, Damian D, Nguyen T (2009) Predicting build failures using social network analysis on developer communication. In: 2009 IEEE 31st international conference on software engineering, IEEE, pp 1–11
Yujian L, Bo L (2007) A normalized levenshtein distance metric. IEEE Trans Pattern Anal Mach Intell 29(6):1091–1095. ISSN 0162-8828. https://doi.org/10.1109/TPAMI.2007.1078
Zanjani MB, Kagdi H, Bird C (2015) Automatically recommending peer reviewers in modern code review. IEEE Trans Softw Eng 42(6):530–543