Are criminologists describing randomized controlled trials in ways that allow us to assess them? Findings from a sample of crime and justice trials


Abstract

Descriptive validity is an important factor in assessing the transparent reporting of randomized controlled trials (RCTs). Assessments of validity in crime and justice have reported on this issue, but there has been a lack of standardization compared with other disciplines (e.g., healthcare), where tools such as the Consolidated Standards of Reporting Trials (CONSORT) Statement have improved reporting standards. In this study, we evaluate crime and justice trials from five different settings (community, prevention, policing, correctional, and court) and assess the extent to which they transparently report information, using the CONSORT Statement as a guide. Overall, the findings suggest that crime and justice studies have low descriptive validity. Reporting was poor on methods of randomization, outcome measures, statistical analyses, and study findings, though much better for background and participant details. We found little evidence of improvement in reporting over time and no significant relationship between the number of CONSORT items reported and the size of the trial sample. In conclusion, we argue that the state of descriptive validity in crime and justice is inadequate and must change if we are to develop higher-quality studies that can be assessed systematically. We suggest the adoption of a modified CONSORT Statement for crime and justice research.
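The relationship between reporting completeness and trial size mentioned above can be examined with a simple rank correlation. The sketch below is purely illustrative: the scores and sample sizes are invented, and the original analysis may have used a different test.

```python
# Illustrative only: hypothetical CONSORT totals and sample sizes for a handful
# of trials, used to show how a rank correlation between reporting completeness
# and trial size might be checked. These are not the study's data.
from scipy.stats import spearmanr

consort_items_reported = [31, 28, 35, 30, 33, 26, 29]
trial_sample_sizes = [120, 450, 60, 900, 215, 1300, 75]

# Spearman's rho is used here because trial sample sizes are typically
# heavily skewed; Pearson's r would be dominated by the largest trials.
rho, p_value = spearmanr(consort_items_reported, trial_sample_sizes)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```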


Notes

  1. The CONSORT Statement requires that effect sizes be clearly stated by the authors or that enough information be reported in the paper to enable the effect size to be calculated (a worked sketch appears after these notes).

  2. Each item within the domains was ranked as ‘High’ (reported as ‘yes’ in over 50% of cases), ‘Medium’ (reported as ‘yes’ in 30–49% of cases), or ‘Low’ (reported as ‘yes’ in 29% of cases or fewer); a minimal sketch of this banding rule appears after these notes.

  3. Authors reporting that participants were ‘randomly assigned’ to the comparison groups in the abstract and/or title were given credit in accordance with the CONSORT Statement guidelines. Acceptable terms included participants being assigned to interventions using ‘random allocation’. Studies using only the term ‘experimental’ in the title and/or abstract, with no reference to randomization in the abstract, were not credited under the CONSORT Statement guidelines.

  4. Of the 62 studies, 22 were published reports funded through an array of different sources (e.g., government agencies and charities). As a crude comparison, the mean overall CONSORT Statement score was 30.78 for published reports vs. 32.01 for journal articles. Our provisional analysis suggests that published reports (albeit much longer) do not report a greater number of CONSORT items than journal articles.
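To illustrate Note 1, the sketch below shows one common way an effect size can be recovered when a trial reports group means, standard deviations, and sample sizes but does not state an effect size directly. The function and figures are hypothetical examples, not taken from any of the reviewed trials.

```python
# Hypothetical sketch for Note 1: recovering a standardized mean difference
# (Cohen's d with a pooled standard deviation) from reported summary statistics.
from math import sqrt

def cohens_d(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Cohen's d for two independent groups, using the pooled SD."""
    pooled_sd = sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2)
                     / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

# Invented figures: mean re-arrests in treatment vs. control groups.
d = cohens_d(mean_t=1.4, sd_t=0.9, n_t=110, mean_c=1.8, sd_c=1.0, n_c=105)
print(f"Cohen's d = {d:.2f}")
```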
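The banding rule in Note 2 can be expressed as a short classification function. The sketch below mirrors the thresholds given in the note; the item names and percentages are invented, and the handling of a value of exactly 50% (which the note's wording leaves implicit) is an assumption.

```python
# Hypothetical sketch for Note 2: banding CONSORT items by how often they were
# reported as 'yes' across the reviewed trials. Thresholds follow the note;
# treating exactly 50% as 'Medium' is an assumption, since the note defines
# 'High' as over 50%.
def reporting_band(pct_yes: float) -> str:
    if pct_yes > 50:
        return "High"
    if pct_yes >= 30:
        return "Medium"
    return "Low"

# Invented item names and reporting percentages, for illustration only.
items = {"method of randomization": 22.0,
         "participant eligibility criteria": 74.0,
         "defined outcome measures": 41.0}

for item, pct in items.items():
    print(f"{item}: {pct:.0f}% -> {reporting_band(pct)}")
```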


Acknowledgements

The authors would like to thank Wee Zi Tan for her assistance with the data entry and coding of information for this study. This research was presented at the International Jerry Lee Symposium in Washington, DC, on 5–6 May 2008.

Corresponding author

Correspondence to Amanda E. Perry.

Appendix I: CONSORT Statement Flow Diagram

[Figure 5: CONSORT Statement flow diagram]


Cite this article

Perry, A.E., Weisburd, D. & Hewitt, C. Are criminologists describing randomized controlled trials in ways that allow us to assess them? Findings from a sample of crime and justice trials. J Exp Criminol 6, 245–262 (2010). https://doi.org/10.1007/s11292-010-9099-z
