Journal of Experimental Criminology, Volume 5, Issue 1, pp 1–23

Drawing conclusions about causes from systematic reviews of risk factors: The Cambridge Quality Checklists

  • Joseph Murray
  • David P. Farrington
  • Manuel P. Eisner


Systematic reviews summarize evidence about the effects of social interventions on crime, health, education, and social welfare. Social scientists should also use systematic reviews to study risk factors, which are naturally occurring predictors of these outcomes. To do this, the quality of risk factor research needs to be evaluated. This paper presents three new methodological quality checklists to identify high-quality risk factor research. They are designed so that reviewers can separately summarize the best evidence about correlates, risk factors, and causal risk factors. Studies need appropriate samples and measures to draw valid conclusions about correlates. Studies need prospective longitudinal data to draw valid conclusions about risk factors. And, in the absence of experimental evidence, controlled studies need to compare changes in risk factors over time with changes in outcomes to draw valid conclusions about causal risk factors.
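The within-individual logic in the last sentence can be sketched numerically. The following is a minimal illustration on simulated data (not data from the paper, and not the authors' checklist procedure): a cross-sectional correlation only establishes a correlate, whereas correlating within-person *changes* in the risk factor with changes in the outcome across two waves is one simple way to probe whether the association survives as a candidate causal risk factor.

```python
import numpy as np

# Hypothetical two-wave longitudinal data for the same individuals.
# All parameters below are illustrative assumptions.
rng = np.random.default_rng(42)
n = 200
risk_t1 = rng.normal(size=n)
risk_t2 = risk_t1 + rng.normal(scale=0.5, size=n)  # risk factor drifts over time

# Outcome partly tracks the risk factor at each wave.
outcome_t1 = 0.6 * risk_t1 + rng.normal(scale=0.8, size=n)
outcome_t2 = 0.6 * risk_t2 + rng.normal(scale=0.8, size=n)

# Between-individual association at a single wave: establishes a correlate only.
between_r = np.corrcoef(risk_t1, outcome_t1)[0, 1]

# Within-individual analysis: do changes in the risk factor over time
# track changes in the outcome for the same people?
within_r = np.corrcoef(risk_t2 - risk_t1, outcome_t2 - outcome_t1)[0, 1]

print(f"between-individual r = {between_r:.2f}")
print(f"within-individual change r = {within_r:.2f}")
```

In this toy setup both correlations come out positive because the outcome is generated from the risk factor; with a purely between-individual confounder, the cross-sectional correlation would remain while the change-score correlation would shrink toward zero.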


Keywords: Causes · Correlates · Methodological quality · Observational studies · Risk factors · Systematic reviews



The authors are grateful to David Humphreys for his help with this paper and to the British Academy and the UK Economic and Social Research Council (grant RES-000-22-2311) for financially supporting the research.



Copyright information

© Springer Science+Business Media B.V. 2009

Authors and Affiliations

  • Joseph Murray (1)
  • David P. Farrington (1)
  • Manuel P. Eisner (1)

  1. Institute of Criminology, University of Cambridge, Cambridge, UK
