Advances in Health Sciences Education, Volume 5, Issue 2, pp 93–103

A Rationale for Using Synthetic Designs in Medical Education Research

  • Deirdre C. Lynch
  • Theodore W. Whitley
  • Stephen E. Willis

Abstract

The extent to which the results of a study can be attributed to the intervention under investigation (i.e., internal validity) is an important consideration in interpreting study findings. There are many threats to the internal validity of the designs frequently used in medical education research. Synthetic designs, which involve integrating two or more weak designs or adding design elements, may afford investigators greater control over confounding variables in medical education research. A rationale for using synthetic designs is presented, and two examples of their use in medical education settings are examined. The concluding proposition is that synthetic designs allow investigators flexibility in planning research that is feasible in medical education settings. In addition, they may permit stronger causal inferences between interventions and results than traditional research designs.

Keywords: internal validity, medical education research, synthetic designs
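The abstract describes synthesis only conceptually, so the following is a minimal illustrative sketch (not taken from the article) of the kind of combination it refers to: joining a one-group pretest-posttest design with a nonequivalent comparison group and estimating the intervention effect as a difference in gains. The group sizes, score scale, and effect magnitudes below are hypothetical.

```python
"""Illustrative sketch: combining two weak designs into a stronger one.

A one-group pretest-posttest design and a posttest-only static-group
comparison are each vulnerable to threats such as maturation and selection.
Combined, they form a nonequivalent-comparison-group pretest-posttest
design, which supports a difference-in-differences estimate of the
intervention effect. All values here are simulated and hypothetical.
"""
import random

random.seed(1)

def simulate_scores(n, pre_mean, gain, sd=8.0):
    """Return (pretest, posttest) score pairs for n learners."""
    pairs = []
    for _ in range(n):
        pre = random.gauss(pre_mean, sd)
        post = pre + random.gauss(gain, sd / 2)
        pairs.append((pre, post))
    return pairs

# Intervention group gains ~13 points (a ~3-point trend shared with the
# comparison group plus a ~10-point intervention effect); the comparison
# group gains ~3 points from maturation and retesting alone.
intervention = simulate_scores(40, pre_mean=60, gain=13)
comparison = simulate_scores(40, pre_mean=62, gain=3)

def mean_gain(pairs):
    return sum(post - pre for pre, post in pairs) / len(pairs)

# Difference in gains: the comparison group's gain estimates what would
# have happened without the intervention, so subtracting it removes
# threats (e.g., maturation, testing) shared by both groups.
did = mean_gain(intervention) - mean_gain(comparison)
print(f"Intervention gain: {mean_gain(intervention):.1f}")
print(f"Comparison gain:   {mean_gain(comparison):.1f}")
print(f"Estimated intervention effect (difference in gains): {did:.1f}")
```

The point of the sketch is the design logic, not the arithmetic: neither component design alone rules out maturation or selection, but their combination lets each group's untreated change serve as a check on the other's.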



Copyright information

© Kluwer Academic Publishers 2000

Authors and Affiliations

  • Deirdre C. Lynch (1)
  • Theodore W. Whitley (1)
  • Stephen E. Willis (1)

  1. East Carolina University School of Medicine, USA
  2. Generalist Physician Program, East Carolina University School of Medicine, Greenville, USA
