PharmacoEconomics

Volume 29, Issue 7, pp 555–561

Prioritizing Comparative Effectiveness Research

Are Drug and Implementation Trials Equally Worth Funding?
Current Opinion

Abstract

Comparative effectiveness research (CER) is the generation and synthesis of evidence that compares the benefits and harms of alternative methods to prevent, diagnose, treat and monitor a clinical condition, or to improve the delivery of care. The purpose of this article is to compare, within the scope of CER, the value of implementation and drug trials. Implementation trials have limitations similar to those of drug trials in terms of the generalizability of results outside the trial setting and the ability to identify best practice. However, in contrast to drug trials, implementation trials do not provide value in terms of ruling out harm, as implementation strategies are unlikely to cause harm in the first place. Still, implementation trials may provide good value when the probability of erring in the decision on whether implementation will be cost effective is high, or when the costs associated with an erroneous decision are high. Yet the low risk that implementation programmes will cause harm may also allow for alternative approaches to identifying best implementation practice, perhaps outside the scope of rigorous trials and testing. One such approach that requires further investigation is a competitive market for quality of care, in which implementation programmes may be introduced without prior evaluation.
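
The funding criterion in the latter part of the abstract mirrors a value-of-information argument: research on an implementation programme is attractive when both the chance of making the wrong adoption decision and the cost of that error are large. The following is a minimal, hypothetical sketch of that reasoning in Python; the function name and all figures are invented for illustration and are not taken from the article.

    # Hypothetical value-of-information sketch: an implementation trial can only
    # be worth funding if the expected cost of deciding wrongly without it (an
    # upper bound on what the trial's information can be worth) exceeds the cost
    # of running the trial. All numbers below are illustrative.

    def expected_cost_of_error(p_wrong_decision: float,
                               loss_per_patient: float,
                               affected_patients: int) -> float:
        """Probability of an erroneous adopt/reject decision times the
        net-benefit loss it causes, scaled to the affected population."""
        return p_wrong_decision * loss_per_patient * affected_patients

    if __name__ == "__main__":
        p_error = 0.30            # chance the cost-effectiveness decision is wrong
        loss_per_patient = 150.0  # net-benefit loss per patient if the decision is wrong ($)
        population = 50_000       # patients affected over the decision horizon
        trial_cost = 1_000_000.0  # cost of the implementation trial ($)

        value_ceiling = expected_cost_of_error(p_error, loss_per_patient, population)
        print(f"Expected cost of error without a trial: ${value_ceiling:,.0f}")
        print(f"Cost of the trial:                      ${trial_cost:,.0f}")
        print("Trial may be worth funding" if value_ceiling > trial_cost
              else "Trial unlikely to be worth funding")

Under these invented numbers the expected cost of error ($2,250,000) exceeds the trial cost, so the trial clears the necessary, though not sufficient, condition for funding; with a lower error probability or smaller per-patient loss it would not.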

Copyright information

© Springer International Publishing AG 2011

Authors and Affiliations

  1. Pennington Biomedical Research Center, Louisiana State University, Baton Rouge, USA
  2. Institute of Health Economics and Clinical Epidemiology, University of Cologne, Cologne, Germany
  3. The James A. Baker III Institute for Public Policy, Rice University, Houston, USA