Abstract
Born in the 17th century, journal peer review is an extremely diverse technology, constantly torn between two often incompatible goals: the validation of manuscripts, conceived as a collective, industrial-like, reproducible process performed to assert scientific statements; and the dissemination of articles, considered as a means to spur scientific discussion, raise controversies, and civically challenge a state of knowledge. Such a situation is particularly conducive to clarifying the processes of valuation and evaluation in journal peer review. In this article, such processes are considered as specific tests in order to emphasize the uncertain properties of pre-test manuscripts. On the one hand, evaluation tests are examined at the core of the validation of manuscripts, such as defining the coordination of judging instances (editor-in-chief, editorial committee, outside reviewers) or controlling the modalities of inter-knowledge between reviewers and authors. They are also studied regarding the dissemination of articles, notably through the contemporary conception of a continuing evaluation test termed “post-publication peer review”. On the other hand, valuation tests are both part of the validation of manuscripts, such as the weighting of different judgments of the same manuscript and the tensions that these hierarchies cause, and of the dissemination of articles, such as attention metrics recording the uses of articles. The conclusion sketches out how the articulation of these different tests has recently empowered readers as a new key judging instance for dissemination and for validation, potentially transforming the definition of peers, and thus the whole process of journal peer review.
Notes
“The establishment of matters of fact in Boyle’s experimental programme utilized three technologies: a material technology embedded in the construction and operation of the air-pump; a literary technology by means of which the phenomena produced by the pump were made known to those who were not direct witnesses; and a social technology that incorporated the conventions experimental philosophers should use in dealing with each other and considering knowledge-claims” (1985: 25).
Ernest Hart, editor-in-chief of the BMJ speaking to the American Association of Medical Editors in 1893 (cited by Burnham 1990: 1325).
For example, in 1965 this period was extended from 9 to 17 months for journals published by the American Psychological Association (1965).
Interview with one of the seven associate editors of a medical journal (carried out on 06.12.2012).
Bornmann (2011) has shown that the Mertonian tradition never considers reviewers as coauthors of the manuscript, while European constructionist traditions have insisted on their actual role in rewriting manuscripts, whether directly or via authors themselves.
Interview with the editor-in-chief of a journal dedicated to gender studies (carried out on 26.10.2013).
As a result, the number of articles withdrawn after publication has increased exponentially over the last ten years (Van Noorden 2011).
References
American Psychological Association. (1965). Publications in APA Journals: advices from the editors. American Psychologist, 20(9), 711f.
American Sociological Review. (1955). Notice to contributors, 20(3), 341.

Auranen, O., & Nieminen, M. (2010). University research funding and publication performance—An international comparison. Research Policy, 39(6), 822–834.
Archambault, E., Amyot, D., Deschamps, E., Nicol, A., Rebout, L., & Roberge, G. (2013). Proportion of open access peer-reviewed papers at the European and world levels—2004–2011. Science Metrix for the European Commission DG Research & Innovation.
Baruch, Y., Konrad, A. M., Aguinis, H., & Starbuck, W. H. (Eds.). (2008). Opening the black box of editorship. Palgrave Macmillan.
Bazerman, C. (1988). Shaping written knowledge: the genre and activity of the experimental article in science. Madison, WI: The University of Wisconsin Press.
Benedek, E. P. (1976). Editorial practices of psychiatric and related journals: implications for women. American Journal of Psychiatry, 133(1), 89–92.
Berg, L. D. (2001). Masculinism, emplacement, and positionality in peer review. The Professional Geographer, 53(4), 511–521.
Blank, R. M. (1991). The effects of double-blind versus single-blind reviewing: Experimental evidence from the American Economic Review. American Economic Review, 81(5), 1041–1067.
Bohlin, I. (2004). Communication regimes in competition: The current transition in scholarly communication seen through the lens of the sociology of technology. Social Studies of Science, 34(3), 365–391.
Bollen, J., Van De Sompel, H., Hagberg, A., & Chute, R. (2009). A principal component analysis of 39 scientific impact measures. PLoS ONE, 4(6), e6022.
Boltanski, L., & Thévenot, L. (2006). On justification: Economies of worth. Princeton, NJ: Princeton University Press.
Bornmann, L. (2011). Scientific peer review. Annual Review of Information Science and Technology, 45(1), 197–245.
Broad, W., & Wade, N. (1982). Betrayers of the truth: Fraud and deceit in the halls of science. New York: Simon & Schuster.
Bruno, I., & Didier, E. (2013). Benchmarking: L’état sous pression statistique. Paris: La Découverte.
Burnham, J. C. (1990). The evolution of editorial peer review. Journal of the American Medical Association, 263(10), 1323–1329.
Butler, L., & McAllister, I. (2009). Metrics or peer review? Evaluating the 2001 UK research assessment exercise in political science. Political Studies Review, 7(1), 3–17.
Campanario, J. M. (1998). Peer review for journals as it stands today—Part 1. Science Communication, 19(3), 181–211.
Chubin, D. E., & Hackett, E. J. (1990). Peerless science: Peer review and U.S. science policy. Albany: State University of New York Press.
Cicchetti, D. V. (1991). The reliability of peer review for manuscript and grant submissions: A cross-disciplinary investigation. Behavioral and Brain Sciences, 14(1), 119–135.
Cicchetti, D. V., & Conn, H. O. (1976). A statistical analysis of reviewer agreement and bias in evaluating medical abstracts. The Yale Journal of Biology and Medicine, 49(4), 373–383.
Cole, S., Cole, J. R., & Simon, G. A. (1981). Chance and consensus in peer review. Science, 214(4523), 881–886.
Crane, D. (1967). The gate-keepers of science: Some factors affecting the selection of articles for scientific journals. The American Sociologist, 2(1), 195–201.
Cronin, B., & Sugimoto, C. R. (2014). Beyond bibliometrics. Harnessing multidimensional indicators of scholarly impact. Cambridge: MIT Press.
DeBakey, L. (1976). The scientific journal: editorial policies and practices: guidelines for editors, reviewers, and authors. Saint Louis: CV Mosby Company.
Donovan, C. (2007). Introduction: Future pathways for science policy and research assessment: metrics vs peer review, quality vs impact. Science and Public Policy, 34(8), 538–542.
Douglas-Wilson, I. (1974). Twilight of the medical journal? British Medical Journal, 3(5926), 326–327.
Erikson, M. G., & Erlandson, P. (2014). A taxonomy of motives to cite. Social Studies of Science, 44(4), 625–637.
Espeland, W. N., & Stevens, M. L. (1998). Commensuration as a social process. Annual Review of Sociology, 24, 313–343.
Eysenbach, G. (2011). Can tweets predict citations? Metrics of social impact based on Twitter and correlation with traditional metrics of scientific impact. Journal of Medical Internet Research, 13(4), e123.
Frey, B. (2003). Publishing as prostitution? Choosing between one’s own ideas and academic success. Public Choice, 116(1–2), 205–223.
Glenn, N. D. (1976). The journal article review process: Some proposals for change. The American Sociologist, 11(3), 179–185.
Godlee, F., Gale, C. R., & Martyn, C. N. (1998). Effect on the quality of peer review of blinding reviewers and asking them to sign their reports. Journal of the American Medical Association, 280(3), 237–240.
Gunnarsdottir, K. (2005). Scientific journal publications: On the role of electronic preprint exchange in the distribution of scientific literature. Social Studies of Science, 35(4), 549–579.
Hargens, L. L. (1988). Scholarly consensus and journal rejection rates. American Sociological Review, 53(1), 139–151.
Harnad, S. (1979). Creative disagreement. The Sciences, 19, 18–20.
Helgesson, C.-F., & Muniesa, F. (2013). For what it’s worth: An introduction to valuation studies. Valuation Studies, 1(1), 1–10.
Hicks, D., & Wang, J. (2011). Coverage and overlap of the new social sciences and humanities journal lists. Journal of the American Society for Information Science and Technology, 62(2), 284–294.
Hirschauer, S. (2010). Editorial judgments: A praxeology of ‘voting’ in peer review. Social Studies of Science, 40(1), 71–103.
Ingelfinger, F. J. (1969). Definition of ‘sole contribution’. New England Journal of Medicine, 281(12), 676–677.
Jones, R. (1974). Rights, wrongs and referees. New Scientist, 61(890), 758–759.
Kennefick, D. (2005). Einstein Versus The Physical Review. Physics Today, 58(9), 43–48.
Knox, F. G. (1981). No unanimity about anonymity. Journal of Laboratory and Clinical Medicine, 97(1), 1–3.
Kronick, D. A. (1962). A history of scientific and technical periodicals: the origins and development of the scientific and technical press, 1665-1790. Metuchen, N.J.: The Scarecrow Press.
Lamont, M. (2009). How professors think: Inside the curious world of academic judgment. Cambridge, Mass.; London: Harvard University Press.
Lamont, M. (2012). Toward a comparative sociology of valuation and evaluation. Annual Review of Sociology, 38(1), 201–221.
Lancaster, F. W. (1995). Attitudes in academia toward feasibility and desirability of networked scholarly publishing. Library Trends, 43(4), 741–752.
Latour, B. (1987). Science in action: How to follow scientists and engineers through society. Milton Keynes: Open University Press.
Latour, B., & Woolgar, S. (1979). Laboratory life: the social construction of scientific facts. Beverly Hills: Sage.
Lee, C. J., Sugimoto, C. R., Zhang, G., & Cronin, B. (2013). Bias in peer review. Journal of the American Society for Information Science and Technology, 64(1), 2–17.
Lowry, R. P. (1967). Communications to the editors. The American Sociologist, 2(4), 220.
Macdonald, S., & Kam, J. (2007). Aardvark et al.: quality journals and gamesmanship in management studies. Journal of Information Science, 33(6), 702–717.
Mahoney, M. J. (1977). Publication prejudices: An experimental study of confirmatory bias in the peer review system. Cognitive Therapy and Research, 1(2), 161–175.
Merton, R. K. (1942). Science and technology in a democratic order. Journal of Legal and Political Sociology, 1, 115–126.
Morgan, P. P. (1984). Anonymity in medical journals. Canadian Medical Association Journal, 131(9), 1007f.
Nature. (1974). In defence of the anonymous referee. Nature, 249(5458), 601f.
Peters, D. P., & Ceci, S. J. (1982). Peer-review practices of psychological journals: The fate of published articles, submitted again. Behavioral and Brain Sciences, 5(2), 187–195.
Pontille, D., & Torny, D. (2010). The controversial policies of journal ratings: Evaluating social sciences and humanities. Research Evaluation, 19(5), 347–360.
Pontille, D., & Torny, D. (2012). Behind the scenes of scientific articles: Defining categories of fraud and regulating cases. Revue d’Épidémiologie et de Santé Publique, 60(4), 247–253.
Pontille, D., & Torny, D. (2013). La manufacture de l’évaluation scientifique: Algorithmes, jeux de données, outils bibliométriques. Réseaux, 177, 25–61.
Pontille, D., & Torny, D. (2014). The blind shall see! The question of anonymity in journal peer review. Ada: A Journal of Gender, New Media, and Technology, 4, doi:10.7264/N3542KVW.
Porter, J. R. (1964). The Scientific Journal - 300th Anniversary. Bacteriological Reviews, 28(3), 211–230.
Priem, J., & Costello, K. L. (2010). How and why scholars cite on Twitter. Proceedings of the American Society for Information Science and Technology, 47(1), 1–4.
Schroter, S., Black, N., Evans, S., Godlee, F., Osorio, L., & Smith, R. (2008). What errors do peer reviewers detect, and does training improve their ability to detect them? Journal of the Royal Society of Medicine, 101(10), 507–514.
Shapin, S., & Schaffer, S. (1985). Leviathan and the air-pump: Hobbes, Boyle, and the experimental life. Princeton, N.J.: Princeton University Press.
Shapiro, B. J. (2000). A culture of fact: England, 1550-1720. Ithaca and London: Cornell University Press.
Smith, R. (2006). Peer review: a flawed process at the heart of science and journals. Journal of the Royal Society of Medicine, 99(4), 178–182.
Speck, B. W. (1993). Publication peer review: An annotated bibliography. Westport, CT: Greenwood Press.
Van Noorden, R. (2011). Science publishing: The trouble with retractions. Nature, 478(7367), 26–28.
Van Rooyen, S., Godlee, F., Evans, S., Smith, R., & Black, N. (1998). Effect of blinding and unmasking on the quality of peer review. Journal of General Internal Medicine, 14(10), 622–624.
Ward, W. D., & Goudsmit, S. A. (1967). Reviewer and author anonymity. Physics Today, 20(1), 12.
Ware, M., & Monkman, M. (2008). Peer review in scholarly journals: Perspective of the scholarly community—an international study. Bristol, UK: Mark Ware Consulting.
Weller, A. (2001). Editorial peer review: Its strengths and weaknesses. Medford: Information Today, Inc.
Wilhite, A. W., & Fong, E. A. (2012). Coercive citation in academic publishing. Science, 335(6068), 542f.
Wilson, J. D. (1978). Peer review and publication. Presidential address before the 70th annual meeting of the American Society for Clinical Investigation, San Francisco, California, 30 April 1978. Journal of Clinical Investigation, 61(6), 1697–1701.
Wouters, P. (1999). The citation culture. Amsterdam: University of Amsterdam.
Wouters, P., & Costas, R. (2012). Users, narcissism and control—tracking the impact of scholarly publications in the 21st century. Utrecht: SURFfoundation.
Zuckerman, H. A., & Merton, R. K. (1971). Patterns of evaluation in science: institutionalisation, structure and functions of the referee system. Minerva, 9(1), 66–100.
Acknowledgments
The authors are grateful to Daniel Céfaï, Bénédicte Zimmermann, and three anonymous referees for their thoughtful comments and very helpful suggestions on a previous version of this text. We also thank Chris Hinton for his translating assistance.
Cite this article
Pontille, D., Torny, D. From Manuscript Evaluation to Article Valuation: The Changing Technologies of Journal Peer Review. Hum Stud 38, 57–79 (2015). https://doi.org/10.1007/s10746-014-9335-z