
Interrater Reliability of the Peer Review Process in Management Journals

  • Alexander T. Nicolai
  • Stanislaw Schmal
  • Charlotte L. Schuster
Chapter

Abstract

Peer review is an established method of assessing the quality and contribution of scholarly work in most scientific disciplines. To date, however, little is known about interrater agreement among reviewers at management journals. This paper provides an overview of how closely reviewers' judgments agree in management studies. The results of our literature review indicate a low level of agreement among reviewers at management journals. Low consensus, however, is not specific to management studies but is also widespread in other sciences. We discuss the consequences and implications of low judgment agreement for management research.
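Interrater agreement of the kind discussed here is typically quantified with chance-corrected statistics such as Cohen's kappa or the intraclass correlation. The following Python snippet is a purely illustrative sketch, not taken from the chapter: it computes Cohen's kappa for two hypothetical reviewers who rate the same manuscripts as "accept", "revise", or "reject" (the reviewer data are invented for the example).

```python
# Illustrative sketch: Cohen's kappa for two hypothetical reviewers
# rating the same manuscripts (data invented for this example).
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    # Observed proportion of manuscripts on which both raters agree.
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected agreement if each rater assigned categories independently
    # according to their own marginal frequencies.
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in freq_a)
    return (observed - expected) / (1 - expected)

reviewer_1 = ["accept", "reject", "revise", "reject", "revise", "accept"]
reviewer_2 = ["revise", "reject", "revise", "accept", "reject", "accept"]
print(round(cohens_kappa(reviewer_1, reviewer_2), 3))  # 0.25
```

A kappa near 0 means the reviewers agree little beyond what chance alone would produce, while values near 1 indicate near-perfect agreement.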

Keywords

Management Journal · Interrater Reliability · Management Study · Peer Review Process · Interrater Agreement


Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • Alexander T. Nicolai (1)
  • Stanislaw Schmal (1)
  • Charlotte L. Schuster (1)
  1. Carl von Ossietzky Universität, Oldenburg, Germany
