
How do journals of different rank instruct peer reviewers? Reviewer guidelines in the field of management

Published in: Scientometrics

Abstract

Current knowledge on peer review consists of general formulations of its goals and micro-level accounts of its practice, while journals’ attempts to guide and shape peer review have hardly been investigated so far. This article addresses this gap by studying the content of the reviewer guidelines (RG) of 46 journals in the field of management, as editors may use guidelines to nudge reviewers to consider all relevant criteria, to apply them properly, and to do so consistently with the needs of the journal. The analysis reveals remarkable differences between the instructions for reviewers of journals of different rank. Average and low rank journals mostly use evaluation forms and emphasize the empirical contribution and the quality of communication. The RG of high rank journals are texts; they stress the theoretical contribution and methodological validity in strict terms. The RG of very high rank journals stand even further apart, as they include 45% fewer gatekeeping instructions but four times more developmental instructions. While developmental instructions may help retain the most innovative contributions, the fact that they are common only in very high rank journals may represent another case of cumulative advantage in science.

Fig. 1; Fig. 2 (figures not reproduced in this preview)


Notes

  1. Also known as SLAMing—i.e. Stressing the Limiting Aspects of a Manuscript (Bedeian 2004).

  2. See footnote 1.

  3. In some cases, the differences between categories are subtle. For instance, assessing whether “the knowledge gap is identified” (communication) serves a different purpose from assessing whether the manuscript “fills a knowledge gap” (novelty). Asking to check whether “p values are presented” (communication) differs from asking whether the “p values are significant” (validity). Three micro-categories of instruction (5 occurrences) could not be attributed clearly to one of the three categories, for instance “assess whether the limitations are identified (and discussed)”.

  4. The coders identified a total of 825 micro-instructions (some repeated more than once in each text); in 66 cases a micro-instruction was coded by only one of the coders, for an inter-rater agreement of (825 − 66)/825 = 0.920. It was agreed to retain 774 instructions; on these, the coders disagreed 63 times, for an inter-rater agreement of (774 − 63)/774 = 0.918.

  5. Level 4* is transformed to 5, whereas journals not in the AJG are not considered.

  6. Journals not in the AJG are excluded from the non-parametric tests.

  7. WoS accounts for about a third of the existing academic/scholarly journals, and less than 15% of social sciences journals (see Ulrich’s periodicals database; Mongeon & Paul-Hus, 2016).

  8. For example, Murphy and Zhu (2012) found that 66% of the authors of 2010–2011 articles in twelve top management journals ranked “4*” in the AJG 2008 were from the US, Canada, or the UK. By way of comparison, authors from these three countries accounted for 33% of the whole 2010–2011 scientific production in the field of management, accounting and business (source: www.scimagojr.com).
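The inter-rater agreement figures reported in note 4 are plain percent-agreement calculations; a minimal sketch reproduces them (the helper function name is our own, not from the article):

```python
# Percent agreement between two coders: the share of coded items
# on which both coders made the same call.
def percent_agreement(total: int, disagreements: int) -> float:
    return (total - disagreements) / total

# First pass: 825 micro-instructions identified, 66 coded by only one coder.
print(round(percent_agreement(825, 66), 3))  # 0.92
# Retained set: 774 instructions, with 63 coding disagreements.
print(round(percent_agreement(774, 63), 3))  # 0.919
```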

References

  • Alatalo, R. V., Mappes, J., & Elgar, M. A. (1997). Heritabilities and paradigm shifts. Nature,385(6615), 402.


  • Allen, L., Jones, C., Dolby, K., Lynn, D., & Walport, M. (2009). Looking for landmarks: The role of expert review and bibliometric analysis in evaluating scientific publication outputs. PLoS ONE,4(6), e5910.


  • Baker, M. (2016). 1,500 scientists lift the lid on reproducibility. Nature News,533(7604), 452.


  • Baldwin, M. (2018). Scientific autonomy, public accountability, and the rise of “peer review” in the Cold War United States. Isis,109(3), 538–558.


  • Balietti, S., Goldstone, R. L., & Helbing, D. (2016). Peer review and competition in the Art Exhibition Game. Proceedings of the National Academy of Sciences,113(30), 8414–8419.


  • Bedeian, A. G. (2004). Peer review and the social construction of knowledge in the management discipline. Academy of Management Learning & Education,3(2), 198–216.


  • Bornmann, L. (2008). Scientific peer review: An analysis of the peer review process from the perspective of sociology of science theories. Human Architecture,6(2), 23.


  • Brembs, B., Button, K., & Munafò, M. (2013). Deep impact: Unintended consequences of journal rank. Frontiers in Human Neuroscience,7, 291.


  • Campanario, J. M. (1996). Have referees rejected some of the most-cited articles of all times? Journal of the Association for Information Science and Technology,47(4), 302–310.


  • Campanario, J. M. (1998a). Peer review for journals as it stands today—Part 1. Science Communication,19(3), 181–211.


  • Campanario, J. M. (1998b). Peer review for journals as it stands today—Part 2. Science Communication,19(4), 277–306.


  • Campanario, J. M. (2009). Rejecting and resisting Nobel class discoveries: Accounts by Nobel laureates. Scientometrics,81(2), 549–565.


  • Castellucci, F., & Ertug, G. (2010). What’s in it for them? Advantages of higher-status partners in exchange relationships. Academy of Management Journal,53(1), 149–166.


  • Chen, J., & Konstan, J. A. (2010). Conference paper selectivity and impact. Communications of the ACM,53(6), 79–83.


  • Cole, S., & Simon, G. A. (1981). Chance and consensus in peer review. Science,214(4523), 881–886.


  • Corley, K. G., & Gioia, D. A. (2011). Building theory about theory building: What constitutes a theoretical contribution? Academy of Management Review,36(1), 12–32.


  • Czarniawska, B., & Joerges, B. (1996). Travels of ideas. In B. Czarniawska & G. Sevón (Eds.), Translating organizational change (pp. 13–47). Berlin: Walter de Gruyter.


  • Dickersin, K. (1990). The existence of publication bias and risk factors for its occurrence. JAMA,263(10), 1385–1389.


  • Ellison, G. T. H., & Rosato, M. (2002). The impact of editorial guidelines on the classification of race/ethnicity in the British Medical Journal. Journal of Epidemiology & Community Health,56(2).

  • Evangelou, E., Siontis, K. C., Pfeiffer, T., & Ioannidis, J. P. (2012). Perceived information gain from randomized trials correlates with publication in high–impact factor journals. Journal of Clinical Epidemiology,65(12), 1274–1281.


  • Eysenck, H. J., & Eysenck, S. B. G. (1992). Peer review: Advice to referees and contributors. Personality and Individual Differences,13(4), 393–399.


  • Fanelli, D. (2009). How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS ONE,4(5), e5738.


  • Franklin, J. (2017). Results masked review: Peer review without publication bias. https://www.elsevier.com/connectreviewers-update/results-masked-review-peer-review-without-publication-bias.

  • Hackett, E. J., & Chubin, D. E. (2003). Peer review for the 21st century: Applications to education research. National Research Council workshop.

  • Hambrick, D. C. (2007). The field of management’s devotion to theory: Too much of a good thing? Academy of Management Journal,50(6), 1346–1352.


  • Kahneman, D. (2011). Thinking, fast and slow. New York: Macmillan.


  • Kalleberg, A. L. (2012). Social Forces at 90. Social Forces,91(1), 1–2.


  • Kayes, D. C. (2002). Experiential learning and its critics: Preserving the role of experience in management learning and education. Academy of Management Learning & Education,1(2), 137–149.


  • Knorr-Cetina, K. (1981). The manufacture of knowledge: An essay on the constructivist and contextual nature of science. Oxford: Pergamon Press.


  • Kuhn, T. (1962). The structure of scientific revolutions. Chicago: University of Chicago Press.


  • Kuhn, T. (1977). The essential tension. Chicago: University of Chicago Press.


  • Lamont, M. (2009). How professors think. Cambridge, MA: Harvard University Press.


  • Langer, M., König, C. J., & Honecker, H. (2019). What might get published in management and applied psychology? Experimentally manipulating implicit expectations of reviewers regarding hedges. Scientometrics,120(3), 1351–1371.


  • Langfeldt, L. (2006). The policy challenges of peer review: Managing bias, conflict of interests and interdisciplinary assessment. Research Evaluation,15(1), 31–41.


  • Latour, B. (1987). Science in action. Cambridge, MA: Harvard University Press.


  • Lee, C., et al. (2013). Bias in peer review. Journal of the Association for Information Science and Technology,64(1), 2–17.


  • Legge, K. (2001). Silver bullet or spent round? Assessing the meaning of the “high commitment management”/performance relationship. In J. Storey (Ed.), Human resource management: A critical text (pp. 21–36). London: Thomson Learning.


  • Luukkonen, T. (2012). Conservatism and risk-taking in peer review: Emerging ERC practices. Research Evaluation,21(1), 48–60.


  • McCook, A. (2006). Is peer review broken? Submissions are up, reviewers are overtaxed, and authors are lodging complaint after complaint about the process at top-tier journals. What’s wrong with peer review? The Scientist,20(2), 26–35.


  • Merton, R. K. (1968). The Matthew Effect in Science. Science,159(3810), 56–63.


  • Merton, R. K. (1973). The sociology of science: Theoretical and empirical investigations. Chicago: University of Chicago Press.


  • Miner, J. B. (2003). The rated importance, scientific validity and practical usefulness of organizational behavior theories: A quantitative review. Academy of Management Learning and Education,2(3), 250–268.


  • Mongeon, P., & Paul-Hus, A. (2016). The journal coverage of Web of Science and Scopus: A comparative analysis. Scientometrics,106(1), 213–228.


  • Moran, G. (1998). Silencing scientists and scholars in other fields: Power, paradigm controls, peer review, and scholarly communication. Greenwich, CT: Ablex.


  • Murphy, J., & Zhu, J. (2012). Neo-colonialism in the academy? Anglo-American domination in management journals. Organization,19(6), 915–927.


  • Patriotta, G. (2017). Crafting papers for publication: Novelty and convention in academic writing. Journal of Management Studies,54(5), 747–759.


  • Patterson, D. A. (2004). The health of research conferences and the dearth of big idea papers. Communications of the ACM,47(12), 23–24.


  • Reale, E., & Zinilli, A. (2017). Evaluation for the allocation of university research project funding: Can rules improve peer review? Research Evaluation,26(3), 190–198.


  • Romanelli, E. (1996). Becoming a reviewer: Lessons somewhat painfully learned. In P. J. Frost & M. S. Taylor (Eds.), Rhythms of academic life: Personal accounts of careers in academia (pp. 263–268). Thousand Oaks, CA: Sage.


  • Sandström, U., & Hällsten, M. (2007). Persistent nepotism in peer-review. Scientometrics,74(2), 175–189.


  • Sarigöl, E., Garcia, D., Scholtes, I., & Schweitzer, F. (2017). Quantifying the effect of editor–author relations on manuscript handling times. Scientometrics,113(1), 609–631.


  • Schminke, M. (2002). From the editors: Tensions. Academy of Management Journal,45(3), 487–490.


  • Seeber, M., & Bacchelli, A. (2017). Does single blind peer review hinder newcomers? Scientometrics,113(1), 567–585.


  • Siler, K., Lee, K., Bero, L., et al. (2015). Measuring the effectiveness of scientific gatekeeping. Proceedings of the National Academy of Sciences,112(2), 360–365.


  • Siler, K., & Strang, D. (2017). Peer review and scholarly originality: Let 1,000 flowers bloom, but don’t step on any. Science, Technology and Human Values,42(1), 29–61.


  • Simmons, L. W., Tomkins, J. L., Kotiaho, J. S., & Hunt, J. (1999). Fluctuating paradigm. Proceedings of the Royal Society of London, Series B: Biological Sciences,266(1419), 593–595.


  • Smith, R. (2006). Peer review: A flawed process at the heart of science and journals. Journal of the Royal Society of Medicine,99(4), 178–182.


  • Squazzoni, F., Brezis, E., & Marusic, A. (2017). Scientometrics of peer review. Scientometrics,113(1), 501–502.


  • Starbuck, W. H. (2003). How much better are the most prestigious journals? The statistics of academic publication. Unpublished manuscript, New York University.

  • Warren, L. (2003). Galileo didn’t publish his observations in scholarly journals. National Geographic,203(5), 15.


  • Weller, A. C. (2001). Editorial peer review: Its strengths and weaknesses. Medford, New Jersey: Information Today Inc.


  • Ziman, J. M. (1984). An introduction to science studies: The philosophical and social aspects of science and technology. Cambridge: Cambridge University Press.


  • Zuckerman, H. (1977). Scientific elite: Nobel laureates in the United States. London: Transaction Publishers.


  • Zuckerman, H., & Merton, R. K. (1971). Patterns of evaluation in science: Institutionalisation, structure and functions of the referee system. Minerva,9(1), 66–100.



Acknowledgements

I am very grateful to the reviewer for the insightful comments, to Jelle Mampaey and Freek Van Deynze for their support in coding the reviewer guidelines and to Stefano Balietti for his helpful comments.

Funding

Funding was provided by Fonds Wetenschappelijk Onderzoek (Grant No. G.OC42.13N).

Author information

Correspondence to Marco Seeber.


About this article


Cite this article

Seeber, M. How do journals of different rank instruct peer reviewers? Reviewer guidelines in the field of management. Scientometrics 122, 1387–1405 (2020). https://doi.org/10.1007/s11192-019-03343-1

