Conflicts of Interest, Selective Inertia, and Research Malpractice in Randomized Clinical Trials: An Unholy Trinity

Abstract

Conflicts of interest in medical research have recently received a great deal of attention, and the Institute of Medicine has called for more research into this important area. One research question that has not received sufficient attention concerns the mechanisms by which conflicts of interest produce biased or flawed research: how much discretion do conflicted researchers have to sway the results one way or the other? We address this issue from the perspective of selective inertia, that is, an unnatural selection of research methods based on which are most likely to establish the preferred conclusions rather than on which are most valid. In many cases it is abundantly clear that a method not used in practice is superior, at least in terms of validity, to the one that is used, and that only inertia, rather than any serious argument that the incumbent method is superior (or even comparable), keeps the inferior procedure in use to the exclusion of the superior one. By focusing on these flawed research methods we can go beyond statements of potential harm from real conflicts of interest and more directly assess actual (not potential) harm.

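The abstract's central claim, that choosing an analysis method for its ability to deliver the preferred conclusion rather than for its validity causes real harm, can be made concrete with a small simulation. The sketch below is not from the paper; it is a minimal, hypothetical illustration in Python (using NumPy and SciPy, with arbitrary sample sizes and trial counts) of how "method shopping" between two otherwise reasonable tests inflates the false-positive rate even when no treatment effect exists.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_trials = 10_000   # simulated trials with no true treatment effect
n_per_arm = 50      # hypothetical arm size
alpha = 0.05

reject_fixed = 0    # analysis method pre-specified before seeing the data
reject_shopped = 0  # method chosen after seeing which gives the smaller p-value

for _ in range(n_trials):
    a = rng.normal(size=n_per_arm)
    b = rng.normal(size=n_per_arm)   # same distribution: the null hypothesis is true
    p_t = stats.ttest_ind(a, b).pvalue                               # parametric test
    p_u = stats.mannwhitneyu(a, b, alternative="two-sided").pvalue   # rank-based test
    reject_fixed += p_t < alpha
    reject_shopped += min(p_t, p_u) < alpha  # "select" whichever method looks better

print(f"Type I error, pre-specified test:     {reject_fixed / n_trials:.3f}")
print(f"Type I error, method chosen post hoc: {reject_shopped / n_trials:.3f}")
```

Under the null, the pre-specified test rejects at roughly the nominal 5% rate, while taking the smaller of the two p-values rejects noticeably more often; the selection itself, not either individual method, is what biases the conclusion.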

Acknowledgments

The review team offered insightful comments that resulted in a vastly improved revision.

Author information

Corresponding author

Correspondence to Vance W. Berger.

About this article

Cite this article

Berger, V.W. Conflicts of Interest, Selective Inertia, and Research Malpractice in Randomized Clinical Trials: An Unholy Trinity. Sci Eng Ethics 21, 857–874 (2015). https://doi.org/10.1007/s11948-014-9576-2

Keywords

  • Conflict of interest
  • Incentives
  • Selective inertia
  • Technology transfer