Science and Engineering Ethics, Volume 22, Issue 2, pp 591–596

Coping with the Conflict-of-Interest Pandemic by Listening to and Doubting Everyone, Including Yourself

Lynn T. Kozlowski

Open Access

Abstract

In light of the widespread existence of financial and non-financial issues that contribute to the appearance or fact of conflict of interest, it is proposed that conflict of interest should generally be assumed, no matter the source of financial support or the expressed declarations of conflicts, and even with respect to one’s own work. No new model is advanced for modification of peer-review processes or for elaboration of author declarations of interest. Researchers should assess the quality of published work as best they can and make their own decisions on the appropriate use of that work. While some apparent sources of conflict are likely more obvious and serious than others, even subtler biases can influence scientific reports. Ignoring peer-reviewed contributions because of conflict-of-interest concerns is discouraged. Listening skeptically to all sources, including yourself, is encouraged.


Keywords: Conflict of interest · Industry funding · Ethics · Peer review · Bias in research · Responsible conduct of research


Conflict of interest was somewhat easier to deal with [but not easy, e.g. (Roseman et al. 2011)] when the focus was on finances. Exploring the ecology of bias and pursuing it down into the complex psychology of the individual with impaired rational judgment (Cain and Detsky 2008), abiding “moral psychological” prejudices (Haidt 2007), and needs for self-affirmation (Cohen and Sherman 2014), one can become hopeless about controlling the appearance or fact of conflict as an ingredient in all of our scientific products (Abdoul et al. 2012; Cain and Detsky 2008; Lampe 2012; The PLoS Medicine Editors 2008). Richard Feynman may have captured the personal challenges to scientific integrity most succinctly: “The first principle is that you must not fool yourself–and you are the easiest person to fool. So you have to be very careful about that. After you’ve not fooled yourself, it’s easy not to fool other scientists. You just have to be honest in a conventional way after that” (Feynman 1985). Not fooling yourself remains an abiding challenge.

Playing on the Fields of Peer-Reviewed Research

Publishing peer-reviewed research, especially in higher-profile journals, often requires advocacy and argument to win through to publication. On scholarly issues, there will be “winners” who are proven right by findings and “losers” who drop in credibility and influence. Modern science is often a team sport with “star” teams and “star” players who savor and benefit from stardom (cf. Latour 1987). Perhaps applied sciences should “operate with an assumption of bias, with the onus of proof on applied medical scientists to facilitate the ‘data transparency’ necessary to validate their research” (Roseman et al. 2011).

Many journals now provide options for prospective authors to exclude certain reviewers as conflicted, and this can influence the likelihood of acceptance (Goldsmith et al. 2006), as can the recommendation of preferred reviewers (Schroter et al. 2006). When battle lines have been drawn on a controversial issue, knowledgeable editors can be hard-pressed to decide who can fairly be selected to review, because they are likely aware that choosing one reviewer will more likely lead to rejection while choosing another will more likely lead to acceptance. Unavoidably, “party lines” can start to develop in certain publications, such that authors come to view them as more welcoming to articles holding certain positions. Such forces contribute to the evolution of new journals, to the extent that certain viewpoints can get closed out of publication in the existing journals. The proliferation of peer-reviewed journals further complicates the credibility of what makes it through peer review. (On some days in past months, I assume I am not alone in having received two or three emails from new journals seeking editors and editorial board members.) Most of us will also have examples of peer-reviewed papers in prestigious journals that we ourselves would not have supported for publication anywhere.

Not all issues are laden with controversy, but in the area of public health, issues like needle exchange programs, sex education, mandatory vaccinations, and harm reduction in tobacco use are all charged topics that engage moral psychological values as well as scientific values (Alderman et al. 2010; Kozlowski 2013, 2015). If one’s research can have little effect on policy, profits, disability, or death, then it may trigger less controversy; however, one should not underestimate the ego-involvement in defense of any intellectual offering.

The About-Face Test

H.L. Mencken, the noted social critic, offered a rule of thumb for assessing conflict of interest (Mencken 1923):

When I encounter a new idea, whether aesthetic, political, theological, or epistemological, I ask myself, instantly and automatically, what would happen to its proponent if he should state its exact antithesis. If nothing would happen to him, then I am willing and eager to listen to him. But if he would lose anything valuable by a volte face – if stating his idea is profitable to him, if the act secures his roof, butters his parsnips, gets him a tip – then I hear him with one ear only. He is not a free man.

Volte face refers to doing an about-face or making a U-turn, reversing an opinion by 180°. Mencken’s account provides a useful broad metaphor for conflict of interest (not being free to take an opposing position), and the more we understand the complex sources of bias, the clearer it is that this is an intrinsic problem (Fanelli 2015; Ioannidis 2005; Ioannidis et al. 2014; McNutt 2014; Ware and Munafò 2015). The science of controversial issues is rife with individuals (and organizations) (Kozlowski 2015) who, as Mencken says, should be heard “with one ear only.” These individuals are in effect not free (or at least not completely free) to adopt opinions that would be unsupportive of the positions of their agencies, superiors, close colleagues, or even their own previous work.

Listening with One Ear—to Everyone

Peer review has a long history, is still undergoing change, and efforts to improve the process may have had limited success (Lee et al. 2013). Elaboration of author declarations of interest has been taking place, but may even be counter-productive (Loewenstein et al. 2012). I would encourage editors to avoid requiring unanimous reviewer opinions before offering publication. I also suggest that reviews by conflicted reviewers may at times be the most informed and valuable (Lee et al. 2013). Beyond that, I offer no new model for the methodology of peer review, but I do encourage skepticism as well as direct evaluation by interested researchers of the quality of all peer-reviewed work. Listening to everyone, of course, does not mean believing everyone’s findings equally. One could propose a rating of conflicts according to their seriousness, and no doubt some sources of conflict merit heavy discounting (e.g., funding source and employer), but it is not clear that the most obvious or measurable conflicts are always the most serious. A “white hat bias” has been observed in which information becomes distorted “when in the service of what may be perceived to be righteous ends” (Cope and Allison 2010). Not fooling oneself may, in the end, be the greatest challenge and the hardest to measure.

What are we to do? Perhaps the best we can do is what Mencken did—listen with only one ear to views that are likely influenced by research party lines and institutional perspectives on the preferred questions, methods, and answers. Rather than developing an About-Face test to apply to reviewers and authors, I think the lesson of the psychology of bias is to listen to all voices with one ear. So, when faced with industry-funded or industry-conducted research on an important matter, whether from the tobacco industry (which has been exiled from a number of journals) (Smith 2013), the pharmaceutical industry (which has generally not been excluded from any journal) (Elliott 2005; Smith 2013; Lexchin et al. 2003; Lexchin 2012; Washington 2011), or another business, or whenever the reputation of the authors is at stake, or whenever anything else makes one suspicious that an about-face would be unwelcome, I encourage listening with one ear. This has the advantage of still listening rather than ignoring or discounting the work completely. Since we too are subject to fooling ourselves, we should work to entertain doubts about our own positions.

Systematic, critical reviews of literatures (e.g., Cochrane Database of Systematic Reviews, such as Singh et al. 2012) are important and helpful, but they too can be compromised by the quality of the work and the biases lurking within the publications being reviewed. Nevertheless, these reviews across several studies may provide better foundations for reports to the public or policymakers. Neither the public nor policymakers can be expected to assess the quality of research in the manner to which advanced researchers aspire. On certain topics, like the health effects of diet, the public may have stopped listening to scientific reports because of the instability of the results (Nagler 2014). When only one interested funder is able to support certain research questions (Lexchin 2012), skeptical listening should certainly be emphasized, especially when the findings do not run against the funder’s interests. The exile of certain funders from peer-reviewed journals could prevent the identification of reliable, conflicting patterns of findings arising from different factions (e.g., Vartanian et al. 2007); such patterns can themselves be informative.

The point is not to fully discount anyone, but to be skeptical of everyone while listening carefully to all the reports one can find. Be wary of “arguments from authority” that may encourage belief in reports from the best journals, and be equally wary of dismissing reports out of hand because of their funders. If conflict of interest is endemic and goes beyond finances to more challenging issues, it is probably best to listen to all voices, even our own, skeptically. The diversity of methods, approaches, and biases may in the end lead to more robust conclusions and uncover things that a single perspective will not. So, listen to everybody, but listen to everybody with one ear. Doubting yourself as well might even contribute to a change of position rather than to digging in deeper to defend what you have said before.



I have some external funding as an investigator on grants from the U.S. National Institutes of Health related to tobacco use and electronic cigarettes, but the writing of this piece was unrelated to these projects. I am solely responsible for the research on and writing of this piece and made the decision to submit for publication.


References

  1. Abdoul, H., Perrey, C., Tubach, F., Amiel, P., Durand-Zaleski, I., & Alberti, C. (2012). Non-financial conflicts of interest in academic grant evaluation: A qualitative study of multiple stakeholders in France. PLoS ONE, 7(4), e35247. doi:10.1371/journal.pone.0035247
  2. Alderman, J., Dollar, K. M., & Kozlowski, L. T. (2010). Commentary: Understanding the origins of anger, contempt, and disgust in public health policy disputes: Applying moral psychology to harm reduction debates. Journal of Public Health Policy, 31(1), 1–16. doi:10.1057/jphp.2009.52
  3. Cain, D. M., & Detsky, A. S. (2008). Everyone’s a little bit biased (even physicians). JAMA, 299(24), 2893–2895. doi:10.1001/jama.299.24.2893
  4. Cohen, G. L., & Sherman, D. K. (2014). The psychology of change: Self-affirmation and social psychological intervention. Annual Review of Psychology, 65, 333–371. doi:10.1146/annurev-psych-010213-115137
  5. Cope, M. B., & Allison, D. B. (2010). White hat bias: Examples of its presence in obesity research and a call for renewed commitment to faithfulness in research reporting. International Journal of Obesity (London), 34(1), 84–88; discussion 83. doi:10.1038/ijo.2009.239
  6. Elliott, C. (2005). Should journals publish industry-funded bioethics articles? Lancet, 366(9483), 422–424. doi:10.1016/s0140-6736(05)66794-3
  7. Fanelli, D. (2015). We need more research on causes and consequences, as well as on solutions. Addiction, 110(1), 11–13. doi:10.1111/add.12772
  8. Feynman, R. P. (1985). “Surely you’re joking, Mr. Feynman!”: Adventures of a curious character. New York: W.W. Norton.
  9. Goldsmith, L. A., Blalock, E. N., Bobkova, H., & Hall, R. P., 3rd. (2006). Picking your peers. Journal of Investigative Dermatology, 126(7), 1429–1430. doi:10.1038/sj.jid.5700387
  10. Haidt, J. (2007). The new synthesis in moral psychology. Science, 316(5827), 998–1002. doi:10.1126/science.1137651
  11. Ioannidis, J. P. (2005). Why most published research findings are false. PLoS Medicine, 2(8), e124. doi:10.1371/journal.pmed.0020124
  12. Ioannidis, J. P., Munafò, M. R., Fusar-Poli, P., Nosek, B. A., & David, S. P. (2014). Publication and other reporting biases in cognitive sciences: Detection, prevalence, and prevention. Trends in Cognitive Science, 18(5), 235–241. doi:10.1016/j.tics.2014.02.010
  13. Kozlowski, L. T. (2013). Ending versus controlling versus employing addiction in the tobacco-caused disease endgame: Moral psychological perspectives. Tobacco Control, 22(Suppl 1), i31–i32. doi:10.1136/tobaccocontrol-2012-050813
  14. Kozlowski, L. T. (2015). The truncation of moral reasoning on harm reduction by individuals and organizations. Addiction (in press).
  15. Lampe, M. (2012). Science, human nature, and a new paradigm for ethics education. Science and Engineering Ethics, 18, 543–549. doi:10.1007/s11948-012-9373-8
  16. Latour, B. (1987). Science in action: How to follow scientists and engineers through society. Cambridge, MA: Harvard University Press.
  17. Lee, C. J., Sugimoto, C. R., Zhang, G., & Cronin, B. (2013). Bias in peer review. Journal of the American Society for Information Science and Technology, 64, 2–17. doi:10.1002/asi.22784
  18. Lexchin, J. (2012). Those who have the gold make the evidence: How the pharmaceutical industry biases the outcomes of clinical trials of medications. Science and Engineering Ethics, 18, 247–261. doi:10.1007/s11948-011-9265-3
  19. Lexchin, J., Bero, L. A., Djulbegovic, B., & Clark, O. (2003). Pharmaceutical industry sponsorship and research outcome and quality: Systematic review. BMJ, 326(7400), 1167.
  20. Loewenstein, G., Sah, S., & Cain, D. M. (2012). The unintended consequences of conflict of interest disclosure. JAMA, 307(7), 669–670. doi:10.1001/jama.2012.154
  21. McNutt, M. (2014). Raising the bar. Science, 345(6192), 9. doi:10.1126/science.1257891
  22. Mencken, H. L. (1923, December 5). The Nation, 117(3048), 647–648.
  23. Nagler, R. H. (2014). Adverse outcomes associated with media exposure to contradictory nutrition messages. Journal of Health Communication, 19(1), 24–40. doi:10.1080/10810730.2013.798384
  24. Roseman, M., Milette, K., Bero, L. A., Coyne, J. C., Lexchin, J., Turner, E. H., et al. (2011). Reporting of conflicts of interest in meta-analyses of trials of pharmacological treatments. JAMA, 305(10), 1008–1017. doi:10.1001/jama.2011.257
  25. Schroter, S., Tite, L., Hutchings, A., & Black, N. (2006). Differences in review quality and recommendations for publication between peer reviewers suggested by authors or by editors. JAMA, 295(3), 314–317. doi:10.1001/jama.295.3.314
  26. Singh, J., Kour, K., & Jayaram Mahesh, B. (2012). Acetylcholinesterase inhibitors for schizophrenia. Cochrane Database of Systematic Reviews, 2012(1), 1–101. doi:10.1002/14651858.CD007967.pub2
  27. Smith, R. (2013). Arguments against publishing tobacco funded research also apply to drug industry funded research. BMJ, 347, f6732.
  28. The PLoS Medicine Editors. (2008). Making sense of non-financial competing interests. PLoS Medicine, 5(9), e199. doi:10.1371/journal.pmed.0050199
  29. Vartanian, L. R., Schwartz, M. B., & Brownell, K. D. (2007). Effects of soft drink consumption on nutrition and health: A systematic review and meta-analysis. American Journal of Public Health, 97(4), 667–675. doi:10.2105/ajph.2005.083782
  30. Ware, J. J., & Munafò, M. R. (2015). Significance chasing in research practice: Causes, consequences and possible solutions. Addiction, 110(1), 4–8. doi:10.1111/add.12673
  31. Washington, H. A. (2011). Flacking for Big Pharma: Drugmakers don’t just compromise doctors; they also undermine the top medical journals and skew the findings of medical research. American Scholar, 80(3), 22–34.

Copyright information

© The Author(s) 2015

Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

Department of Community Health and Health Behavior, School of Public Health and Health Professions, University at Buffalo, State University of New York, Buffalo, USA
