
Science and Engineering Ethics, Volume 25, Issue 1, pp 211–229

A Systems Approach to Understanding and Improving Research Integrity

  • Dennis M. Gorman
  • Amber D. Elkins
  • Mark Lawley
Opinion Piece

Abstract

Concern about the integrity of empirical research has grown in recent years in light of studies showing that the vast majority of publications in academic journals report positive results, that many of these results are false and cannot be replicated, and that many positive results are the product of data dredging and flexible data analysis practices coupled with selective reporting. While a number of potential solutions have been proposed, their effects are poorly understood, and empirical evaluation of each would take many years. We propose that methods from the systems sciences be used to assess the effects, both positive and negative, of proposed solutions to the problem of declining research integrity, such as study registration, Registered Reports, and open access to methods and data. To illustrate the potential application of systems science methods to the study of research integrity, we describe three broad types of models: one built on the characteristics of specific academic disciplines; one a diffusion model that conceptualizes research norms as spreading through populations of susceptible, "infected," and recovered researchers; and one that conceptualizes publications as the product of an industry composed of academics who respond to incentives and disincentives.
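
The second model type is formally an epidemiological SIR (susceptible-infected-recovered) compartmental model applied to research norms. A minimal sketch in Python follows, assuming purely illustrative transmission and recovery rates (beta, gamma) and an initial population; none of these values are taken from the paper itself:

    # Diffusion of research norms as an SIR model (Euler integration).
    # S: researchers susceptible to questionable research practices
    # I: researchers "infected", i.e. currently using such practices
    # R: researchers who have "recovered" and adopted rigorous practices
    def sir_step(S, I, R, beta=0.30, gamma=0.10, dt=0.1):
        N = S + I + R                       # closed population
        infections = beta * S * I / N * dt  # susceptible -> infected
        recoveries = gamma * I * dt         # infected -> recovered
        return S - infections, I + infections - recoveries, R + recoveries

    # Hypothetical field of 1,000 researchers, 10 initially "infected".
    S, I, R = 990.0, 10.0, 0.0
    for _ in range(1000):  # 100 time units in steps of 0.1
        S, I, R = sir_step(S, I, R)
    print(f"S = {S:.0f}, I = {I:.0f}, R = {R:.0f}")

Under this framing, a proposed reform such as Registered Reports could be explored as a reduction in beta (slower transmission of questionable practices) or an increase in gamma (faster recovery), letting its projected effects be compared before committing to years of empirical evaluation.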

Keywords

Systems thinking · System dynamics · Research ethics · Publish or perish · Open data · Registered reports

Notes

Compliance with Ethical Standards

Conflict of interest

The authors declare that they have no conflict of interest.

Copyright information

© Springer Science+Business Media B.V. 2017

Authors and Affiliations

  1. Department of Epidemiology and Biostatistics, Texas A&M University, College Station, USA
  2. Department of Industrial and Systems Engineering, Texas A&M University, College Station, USA
