
Issues in Assessment in Research Proposals

  • Helena Chmura Kraemer
Chapter

Abstract

A “successful” medical research grant proposal can be defined in two ways: (1) one to which reviewers assign a priority score high enough to attract funding for the project, and (2) one whose study conclusions benefit clinical decision making for the population sampled and/or move research in that field forward. This discussion focuses on the issue of assessment and its effect on the success of a proposal. The quality of data (reliability, validity, sensitivity, level) is briefly reviewed. Questions are addressed concerning how much data is too little and how much is too much, which data are necessary and/or desirable, and which might actually undermine the goals of the study, concluding with a few comments on issues related to data acquisition, cleaning, storage, monitoring, and access. The connection between the data one intends to collect and the wisdom one hopes to gain from those data is fragile. It is therefore essential to structure proposals that will both pass muster with review committees and contribute to clinical decision making and scientific progress.
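The abstract's point about data quality can be made concrete: in classical test theory, measurement unreliability attenuates an observed standardized effect size by the square root of the reliability, which in turn inflates the sample size a trial needs. The sketch below is a minimal illustration, not taken from the chapter; it uses the standard normal-approximation sample-size formula for a two-arm comparison, and the effect size and reliability values are hypothetical.

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_per_arm(d, alpha=0.05, power=0.80):
    """Approximate per-arm sample size for a two-arm comparison of means
    with standardized effect size d (two-sided test, normal approximation)."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # critical value for the two-sided test
    z_beta = z(power)            # quantile corresponding to desired power
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

true_d = 0.5  # hypothetical true effect size
for reliability in (1.0, 0.8, 0.6, 0.4):
    # Classical test theory: observed effect is attenuated by sqrt(reliability)
    observed_d = true_d * sqrt(reliability)
    print(f"reliability {reliability:.1f}: n per arm ~ {n_per_arm(observed_d)}")
```

With a true effect of 0.5 and perfect measurement, roughly 63 participants per arm suffice at 80% power; dropping reliability to 0.4 more than doubles that requirement, which is the "penny-wise and pound-foolish" trade-off the chapter's discussion of reliability addresses.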

Keywords

Research participant · Randomized clinical trial · Multiple measure · Ordinal variable · Total variability
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.


Copyright information

© Springer Science+Business Media, LLC 2010

Authors and Affiliations

  1. Stanford University, Stanford, USA
  2. Department of Psychiatry, University of Pittsburgh, Pittsburgh, USA
