Journal of Experimental Criminology, Volume 11, Issue 4, pp 541–563

The production of criminological experiments revisited: the nature and extent of federal support for experimental designs, 2001–2013

  • Cody W. Telep
  • Joel H. Garner
  • Christy A. Visher



Objectives

To assess the nature and extent of funding for randomized experiments in criminology and criminal justice from the National Institute of Justice (NIJ) since 2000.


Methods

Based on data from official records of grant awards made by NIJ between fiscal years 2001 and 2013, we categorized each award as one of the following: a randomized experiment, non-experimental evaluation research, non-evaluation social science research, social science program support, forensic science and technology research, or forensic science and technology support.


Results

While the bulk of NIJ funding goes to forensic science and technology support, among the 800 social science awards we found a total of 99 awards for experiments. Support for experimental designs increased during this 13-year period and was substantially greater than in the 1990s. The awards for experiments between 2001 and 2013 went to a variety of researchers and research organizations and addressed a wide array of criminal justice program areas.


Conclusions

Our findings document a marked increase in funding for experiments compared to the 1991–2000 period, when just 21 awards were made for experimental work. These findings suggest that NIJ has responded to a series of critiques of the methodological quality of its funded projects by placing greater emphasis on high-quality social science research.


Keywords

Awards · Federal funding · Grants · National Institute of Justice · Randomized experiments



Acknowledgments

We thank Dorothy Lee in the Office of General Counsel in the Office of Justice Programs for her assistance in providing award data from the Grants Management System, and the content specialists in the National Criminal Justice Reference Service for their help in obtaining final reports for awards. Thanks also to Ronald Hubbard for his research assistance.



Copyright information

© Springer Science+Business Media Dordrecht 2015

Authors and Affiliations

  • Cody W. Telep — School of Criminology and Criminal Justice, Arizona State University, Phoenix, USA
  • Joel H. Garner — Criminology and Criminal Justice, Portland State University, Portland, USA
  • Christy A. Visher — Department of Sociology and Criminal Justice, University of Delaware, Newark, USA
