
Literature Reviews and Meta-Analysis

  • Joseph A. Durlak

Abstract

This chapter discusses the most common research methodology in psychology: the literature review. Reviews generally have three purposes: (1) to critically evaluate and summarize a body of research; (2) to reach some conclusions about that research; and (3) to offer suggestions for future work. The basic and expert competencies required to complete a high-quality literature review are described through seven major components of reviews, along with the questions that should be answered to assess the successful completion of each component. A major focus is on meta-analysis, but the guidelines are also pertinent to assessing the quality of other types of reviews, including reviews of theories and of clinical applications. Readers are directed to additional resources to help them become critical consumers or producers of good literature reviews.
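
To illustrate the kind of computation a meta-analysis involves, the following Python sketch calculates a standardized mean difference (Hedges' g) from two group means using a pooled standard deviation, and then combines several study effects with inverse-variance (fixed-effect) weights. The study values, variable names, and weighting scheme shown here are illustrative assumptions, not a procedure specified by the chapter.

    # Minimal sketch of a standardized mean difference and a fixed-effect
    # combination across studies. Data are hypothetical.
    from math import sqrt

    def hedges_g(m1, s1, n1, m2, s2, n2):
        """Standardized mean difference with a small-sample correction."""
        # Pooled standard deviation of the two groups
        sp = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
        d = (m1 - m2) / sp                      # Cohen's d
        j = 1 - 3 / (4 * (n1 + n2 - 2) - 1)     # small-sample correction factor
        g = j * d                                # Hedges' g
        # Approximate (large-sample) sampling variance of g
        v = j**2 * ((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
        return g, v

    def fixed_effect_mean(effects):
        """Inverse-variance weighted mean effect size and its standard error."""
        weights = [1 / v for _, v in effects]
        mean_g = sum(w * g for (g, _), w in zip(effects, weights)) / sum(weights)
        se = sqrt(1 / sum(weights))
        return mean_g, se

    # Hypothetical studies: (treatment mean, SD, n, control mean, SD, n)
    studies = [(12.4, 4.1, 30, 10.1, 4.4, 32),
               (20.0, 6.0, 55, 17.5, 5.8, 50),
               (8.3, 2.9, 21, 7.9, 3.1, 25)]

    effects = [hedges_g(*s) for s in studies]
    mean_g, se = fixed_effect_mean(effects)
    print(f"Weighted mean g = {mean_g:.2f} "
          f"(95% CI {mean_g - 1.96*se:.2f} to {mean_g + 1.96*se:.2f})")

A fixed-effect model is only one possible weighting choice; a full review would also need to consider heterogeneity across studies and, where appropriate, a random-effects model.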



Copyright information

© Springer Science+Business Media LLC 2010

Authors and Affiliations

  • Joseph A. Durlak
  1. Loyola University, Chicago, USA
