Academic Psychiatry, Volume 33, Issue 3, pp 221–228

Integrating Statistical and Clinical Research Elements in Intervention-Related Grant Applications: Summary From an NIMH Workshop

  • Joel T. Sherrill
  • David I. Sommers
  • Andrew A. Nierenberg
  • Andrew C. Leon
  • Stephan Arndt
  • Karen Bandeen-Roche
  • Joel Greenhouse
  • Donald Guthrie
  • Sharon-Lise Normand
  • Katharine A. Phillips
  • M. Katherine Shear
  • Robert Woolson
Original Article

Abstract

Objective

The authors summarize points for consideration generated in a National Institute of Mental Health (NIMH) workshop convened to give reviewers from different disciplines, specifically clinical researchers and statisticians, an opportunity to discuss how their differing but complementary expertise can be well integrated in the review of intervention-related grant applications.

Methods

A 1-day workshop was convened in October 2004, featuring panel presentations on key topics followed by interactive discussion. This article summarizes the workshop and subsequent discussions, which centered on topics including how the statistics/data analysis elements of an application are weighted in assessing the application’s overall merit; the level of statistical sophistication appropriate to different stages of research and to different funding mechanisms; key considerations in the design and analysis portions of applications; appropriate statistical methods for addressing the essential questions posed by an application; and the role of the statistician in the application’s development, study conduct, and the interpretation and dissemination of results.

Results

A number of key elements crucial to the construction and review of grant applications were identified. It was acknowledged that intervention-related studies unavoidably involve trade-offs; applications that acknowledge such trade-offs and provide a sound rationale for their choices help reviewers. Applications are also strengthened by clear linkage among the design, aims, hypotheses, and data analysis plan, and by the avoidance of disconnections among these elements.

Conclusion

The authors identify multiple points to consider when constructing intervention-related grant applications. The points are framed here as questions; they do not reflect institute policy or constitute a list of best practices, but rather represent points for consideration.


Copyright information

© Academic Psychiatry 2009

Authors and Affiliations

  • Joel T. Sherrill (1)
  • David I. Sommers (2)
  • Andrew A. Nierenberg (3)
  • Andrew C. Leon (4)
  • Stephan Arndt (5)
  • Karen Bandeen-Roche (6)
  • Joel Greenhouse (7)
  • Donald Guthrie (8)
  • Sharon-Lise Normand (9)
  • Katharine A. Phillips (10)
  • M. Katherine Shear (11)
  • Robert Woolson (12)

  1. Division of Services and Intervention Research (DSIR), NIMH, Bethesda, USA
  2. Division of Extramural Activities, NIMH, Richmond, USA
  3. Department of Psychiatry, Massachusetts General Hospital (MGH), Boston, USA
  4. Department of Psychiatry, Weill Medical College of Cornell University, New York, USA
  5. Iowa Consortium for Substance Abuse Research, University of Iowa Hospitals and Clinics, Iowa City, USA
  6. Department of Biostatistics, Johns Hopkins Bloomberg School of Public Health, Baltimore, USA
  7. Department of Statistics, Carnegie Mellon University, Pittsburgh, USA
  8. UCLA, Los Angeles, USA
  9. Harvard Medical School, Boston, USA
  10. Department of Psychiatry & Human Behavior, Brown University, Providence, USA
  11. School of Social Work, Columbia University, New York, USA
  12. Department of Psychiatry/Biostatistics, Medical University of South Carolina, Charleston, USA
