Understanding peer review of software engineering papers

Abstract

Context

Peer review is a key activity intended to preserve the quality and integrity of scientific publications. However, in practice it is far from perfect.

Objective

We aim to understand how reviewers, including those who have won awards for reviewing, perform reviews of software engineering papers, in order to identify both what makes a good reviewing approach and what makes a good paper.

Method

We first conducted a series of interviews with recognized reviewers in the software engineering field. We then used the results of those interviews to develop a questionnaire for an online survey, which we sent to reviewers from well-respected venues covering a number of software engineering disciplines, some of whom had won awards for their reviewing efforts.

Results

We analyzed the responses from the interviews and from 175 reviewers who completed the online survey (including both reviewers who had won awards and those who had not). We report several descriptive results, including: nearly half of award-winners (45%) review 20 or more conference papers a year, compared with 28% of non-award-winners, and most reviewers (88%) spend more than two hours on a journal review. We also report qualitative results. Our findings suggest that the most important criterion of a good review is that it be factual and helpful, which ranked above others such as being detailed or kind. The features of papers that most often lead to positive reviews are a clear and well-supported validation, an interesting problem, and novelty. Conversely, negative reviews tend to result from papers with a mismatch between the method and the claims and from papers with overly grandiose claims. Further insights include, among others, that reviewers view data availability and its consistency as important and that authors need to make the contribution of their work very clear in their paper.

Conclusions

Based on the insights gained through our study, we conclude our work by compiling a proto-guideline for reviewing. We hope that our work contributes to the ongoing debate and contemporary efforts to further improve peer review models in the future.

Notes

  1. http://tiny.cc/rosefest
  2. A good tutorial on data disclosure when using a double-blind review process is provided by Daniel Graziotin: https://tinyurl.com/DBDisclose.
  3. See, e.g., artifact evaluation track of ICSE 2021 https://doi.org/10.6084/m9.figshare.14123639 or the open science initiative of the EMSE journal https://github.com/emsejournal/openscience
  4. https://github.com/acmsigsoft/EmpiricalStandards
  5. https://github.com/researchart/patterns/blob/master/standards/artifact.md
  6. http://www.icse-conferences.org/reports.html
  7. https://neuripsconf.medium.com/what-we-learned-from-neurips-2020-reviewing-process-e24549eea38f
  8. https://cs.gmu.edu/~offutt/stvr/17-3-sept2007.html
  9. https://peerj.com/articles/cs-111/reviews/
  10. https://openreview.net/forum?id=rklXaoAcFX&noteId=HyeF-4_9hm
  11. https://doi.org/10.6084/m9.figshare.5086357.v1
  12. https://www.slideshare.net/aserebrenik/peer-reviews-119010210
  13. https://reviewqualitycollector.org
  14. http://www.inf.fu-berlin.de/w/SE/ReviewQualityCollectorHome
  15. http://cscw.acm.org/2019/CSCW-2020-changes.html
  16. http://www.icse-conferences.org/reports.html
  17. https://conf.researchr.org/track/icse-2021/icse-2021-papers#Call-for-Papers

Author information

Corresponding author

Correspondence to Neil A. Ernst.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Communicated by: Romain Robbes

About this article

Cite this article

Ernst, N.A., Carver, J.C., Mendez, D. et al. Understanding peer review of software engineering papers. Empir Software Eng 26, 103 (2021). https://doi.org/10.1007/s10664-021-10005-5

Keywords

  • Peer review
  • Interview
  • Survey