Developers perception of peer code review in research software development

Abstract

Context

Research software is software developed and/or used by researchers, across a wide variety of domains, to perform their research. Because of the complexity of research software, developers cannot conduct exhaustive testing. As a result, researchers have lower confidence in the correctness of the software's output. Peer code review, a standard software engineering practice, has helped address this problem in other types of software.

Objective

Peer code review is less prevalent in research software than in other types of software, and the literature does not contain any studies of its use in research software. Therefore, by analyzing developers' perceptions, the goal of this work is to understand the current practice of peer code review in research software development, identify the challenges and barriers associated with it, and present approaches to improve it.

Method

We conducted interviews and a community survey of research software developers to collect information about their current peer code review practices, difficulties they face, and how they address those difficulties.

Results

We received 84 unique responses from the interviews and the survey. The results show that while research software teams review a large portion of their code, they lack a formal process, proper organization, and enough people to perform the reviews.

Conclusions

Use of peer code review is promising for improving the quality of research software and thereby improving the trustworthiness of the underlying research results. In addition, by using peer code review, research software developers produce more readable and understandable code, which will be easier to maintain.

Notes

  1. http://www.ncsa.illinois.edu/Conferences/ETK17/

  2. https://www.qualtrics.com/

  3. https://se4science.github.io/SE-CODESE17/

Acknowledgements

The authors acknowledge partial support from the US National Science Foundation (grant 1445344). Nasir Eisty would like to thank the sponsors of his NCSA summer internship, Drs. Gabrielle Allen, Roland Hass, and Daniel S. Katz. We would also like to thank the interview and survey participants.

Author information

Corresponding author

Correspondence to Nasir U. Eisty.

Communicated by: Andy Zaidman

Cite this article

Eisty, N.U., Carver, J.C. Developers perception of peer code review in research software development. Empir Software Eng 27, 13 (2022). https://doi.org/10.1007/s10664-021-10053-x

Keywords

  • Code review
  • Survey
  • Research software
  • Software engineering