When Are Prescriptive Statements in Educational Research Justified?

  • Scott C. Marley
  • Joel R. Levin


A prescriptive statement is a recommendation that, if a course of action is taken, then a desirable outcome will likely occur. For example, in reading research, recommending that teachers apply an intervention targeted at a specific reading skill in order to improve children’s reading performance is a prescriptive statement. In our view, such statements require a thorough scientific understanding of causal relationships, one grounded in scientifically credible research and generalizable across varied contexts. In this article, we consider both epistemological issues and research credibility indicators (i.e., internal and external validity issues) that restrict one’s ability to make prescriptive statements. After presenting an argument for why prescriptive statements should be made sparingly, we describe a stage model of programmatic educational intervention research that is analogous to the phase model of medical research. The model emphasizes the importance of including appropriate comparison groups, observing replicated findings across different populations and situational contexts, demonstrating statistical relationships between interventions and outcomes, accounting for and ruling out potential alternative explanations for obtained effects, and ultimately conducting randomized field trials. We conclude with the prescriptive statement that educational researchers should offer prescriptive statements only after achieving these high standards of research credibility.


Keywords: Prescriptive statements · Causal claims · Validity · Generalizability · Programmatic research



Copyright information

© Springer Science+Business Media, LLC 2011

Authors and Affiliations

  1. University of New Mexico, Albuquerque, USA
  2. University of Arizona, Tucson, USA
