
Applied Health Economics and Health Policy, Volume 8, Issue 6, pp 387–391

The use of research abstracts in formulary decision making by the joint oncology drug review of Canada

  • Adam V. Weizman
  • Josh Griesman
  • Chaim M. Bell
Short Communication

Abstract

Background and Objective

Opinions on the use of research abstracts in policy decision making are conflicting. We sought to evaluate the influence of research abstracts in guiding decisions of the Joint Oncology Drug Review of Canada (JODR), which conducts clinical and economic reviews of new cancer treatment drugs for formulary listing.

Methods

The minutes of the monthly meetings of the JODR between 2005 and 2007 were reviewed. One submission per drug indication was included. Elements evaluated included the level of evidence supporting each decision, the year the study was published and subsequent publication of any abstracts.

Results

There were 73 recommendations reviewed over the 36-month period. Ten recommendations were deferred and eight were resubmissions, leaving 55 for analysis. Thirty-one recommendations were based to some extent on abstracts; of these, 14 (45%) were in favour of formulary listing and 17 (55%) were opposed. Twelve recommendations were based exclusively on abstracts, seven (58%) of which were in favour of formulary listing. By comparison, published randomized controlled trials formed part of the evidentiary base in 30 committee recommendations (55%); of these, 17 (57%) were in favour of formulary listing, while 13 (43%) were opposed.

Conclusions

Research abstracts are commonly involved in evidence-based decision making for formulary listing. The rates of approving cancer drugs for funding by the JODR were similar among recommendations based on abstracts and other levels of evidence. Abstracts can play an important role in guiding decision making.

Keywords

Formulary Inclusion · Research Abstract · Data Extraction Form · Formulary Listing · Meeting Minutes

Notes

Acknowledgements

The authors wish to thank all past and present members of the JODR and the associated Ministry of Health staff for their help in this study.

Dr Bell is a JODR member, supported by the Canadian Institutes of Health Research (CIHR) and the Canadian Patient Safety Institute chair in Patient Safety and Continuity of Care. The funding agencies had no role in the design and conduct of the study; collection, management, analysis or interpretation of the data; or preparation, review or approval of the manuscript. The corresponding author had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.


Copyright information

© Adis Data Information BV 2010

Authors and Affiliations

  • Adam V. Weizman (1)
  • Josh Griesman (2)
  • Chaim M. Bell (1, 2, 3, 4)

  1. Department of Medicine, University of Toronto, Toronto, Canada
  2. Department of Medicine and Keenan Research Centre in the Li Ka Shing Knowledge Institute, St. Michael’s Hospital, Toronto, Canada
  3. Department of Health Policy, Management and Evaluation, University of Toronto, Toronto, Canada
  4. St. Michael’s Hospital, Toronto, Canada
