Human Studies, Volume 38, Issue 1, pp 57–79

From Manuscript Evaluation to Article Valuation: The Changing Technologies of Journal Peer Review

Empirical Study/Analysis

Abstract

Born in the 17th century, journal peer review is an extremely diverse technology, constantly torn between two often incompatible goals: the validation of manuscripts, conceived as a collective, industrial-like, reproducible process performed to assert scientific statements; and the dissemination of articles, considered as a means to spur scientific discussion, raise controversies, and civically challenge a state of knowledge. Such a situation is particularly conducive to clarifying the processes of valuation and evaluation in journal peer review. In this article, these processes are treated as specific tests in order to emphasize the uncertain properties of pre-test manuscripts. On the one hand, evaluation tests at the core of manuscript validation are examined, such as the coordination of judging instances (editor-in-chief, editorial committee, outside reviewers) or the control over whether reviewers and authors know each other's identities. Evaluation tests are also studied with regard to the dissemination of articles, notably through the contemporary conception of a continuing evaluation test termed "post-publication peer review". On the other hand, valuation tests appear both in the validation of manuscripts, such as the weighting of different judgments of the same manuscript and the tensions these hierarchies cause, and in the dissemination of articles, such as attention metrics recording article uses. The conclusion sketches out how the articulation of these different tests has recently empowered readers as a new key judging instance for dissemination and validation, potentially transforming the definition of peers, and thus the whole process of journal peer review.

Keywords

Anonymity · Academic journals · Evaluation · Peer review · Valuation studies


Copyright information

© Springer Science+Business Media Dordrecht 2014

Authors and Affiliations

  1. Centre de Sociologie de l’Innovation, CNRS (UMR 7185) - Mines ParisTech, Paris, France
  2. Risques, Travail, Marchés, État (RiTME), INRA (UR 1323), Ivry-sur-Seine, France