Science and Engineering Ethics, Volume 20, Issue 1, pp 55–75

Conflict(s) of Interest in Peer Review: Its Origins and Possible Solutions

Anton Oleinik

Original Paper

Scientific communication takes place in two registers: first, interactions with colleagues in close proximity (members of a network, school of thought or circle); second, depersonalised transactions among a potentially unlimited number of scholars (e.g., between an author and readers). Interference between the two registers in the process of peer review produces a drift toward conflict of interest. Three particular cases of peer review are differentiated: journal submissions, grant applications and applications for tenure. Current conflict of interest policies do not cover all these areas. Furthermore, they have a number of flaws, including an excessive reliance on scholars' personal integrity. Conflicts of interest could be managed more efficiently if several elements and rules of the judicial process were adopted in science. The analysis relies on both primary and secondary data, with a particular focus on Canada.


Keywords: Conflict of interest · Peer review · Transaction · Tenure · Journal submission · Grant application



The author is indebted to the Science and Engineering Ethics anonymous reviewer(s), Dr. Judith Adler, Prof. Volker Meja (both of Memorial University of Newfoundland, Canada) and Dr. Alexandre Metraux (University of Mannheim, Germany) for their valuable comments and suggestions. Sheryl Curtis of Communications WriteTouch (Montréal, Canada) helped improve the style of the paper.

Copyright information

© Springer Science+Business Media Dordrecht 2013

Authors and Affiliations

  1. Memorial University of Newfoundland, St. John's, Canada
  2. Central Economics and Mathematics Institute, Russian Academy of Sciences, Moscow, Russia
