Defending and managing the pipeline: lessons for running a randomized experiment in a correctional institution

Journal of Experimental Criminology

Abstract

Objectives

To discuss the challenges of managing the pipeline of eligible cases in an experimental evaluation of a prisoner reentry program.

Methods

This paper uses a case study approach, coupled with a review of the relevant literature on case flow in experimental studies in criminal justice settings. It offers recommendations for researchers on managing case flow, reflections on the major research design issues encountered, and a list of dilemmas likely to plague experimental evaluations of prisoner reentry programs.

Results

Particularly in a jail setting, anticipating when a prisoner will be released to the community is probably impossible, given the large number of factors that affect release, many of which cannot be foreseen. A detailed pipeline study is critical to the success of an experimental study targeting returning prisoners. Pipeline studies should be conducted under what will be the true conditions and context for enrollment, given all eligibility criteria (an illustrative sketch of the underlying arithmetic follows the abstract).

Conclusions

With continued, systematic documentation of enrollment challenges in future experimental evaluations of reentry programs, and of other experiments that enroll individuals, researchers can build a body of literature that helps future randomized experiments in criminal justice succeed.
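To make concrete what a pipeline study estimates, the minimal sketch below winnows an assumed weekly jail intake through a chain of eligibility criteria and a consent rate to project how long enrollment would take to reach a target sample size. Every criterion, rate, and count in it is a hypothetical assumption for illustration, not a figure from this evaluation.

```python
# Hypothetical sketch of the arithmetic behind a pipeline study: apply each
# eligibility criterion in sequence to historical intake volumes to project
# how long enrollment must run to reach a target sample size. All names and
# numbers below are illustrative assumptions, not figures from this study.

WEEKLY_INTAKE = 400.0   # assumed average jail admissions per week
TARGET_SAMPLE = 200     # assumed cases needed across both study arms
CONSENT_RATE = 0.5      # assumed share of eligible cases that consent

# (criterion, estimated pass rate under true enrollment conditions)
ELIGIBILITY_FILTERS = [
    ("meets clinical criteria", 0.15),
    ("release window fits program timing", 0.40),
    ("no disqualifying charge or detainer", 0.80),
]

def weekly_eligible(intake: float) -> float:
    """Winnow the weekly intake through each eligibility criterion in turn."""
    remaining = intake
    for criterion, pass_rate in ELIGIBILITY_FILTERS:
        remaining *= pass_rate
        print(f"  after '{criterion}': {remaining:.1f} cases/week")
    return remaining

eligible = weekly_eligible(WEEKLY_INTAKE)
enrolled_per_week = eligible * CONSENT_RATE
weeks_needed = TARGET_SAMPLE / enrolled_per_week
print(f"~{enrolled_per_week:.1f} enrollments/week; "
      f"~{weeks_needed:.0f} weeks to reach n = {TARGET_SAMPLE}")
```

In practice, the pass rates themselves are what a pipeline study measures, by applying the full set of eligibility criteria to historical intake records under the same conditions that enrollment will actually face.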


Notes

  1. Others have also lamented the paucity of resources and literature discussing the non-technical aspects of research, specifically those related to planning and study enrollment (see Rezmovic et al. 1981; Petersilia 1989).

  2. We do not report on final project outcomes.

  3. Supportive housing is the combination of permanent, affordable housing with supportive services aimed at helping residents maintain housing stability. While not all supportive housing programs are the same, shared components include affordability (tenants generally do not pay more than 30–50% of their income in rent) and a range of services, including coordinated case management, health and mental health services, substance use treatment and recovery, vocational and employment services, money management, life skills, household establishment, and tenant advocacy.

  4. In many jurisdictions, vouchers are tied to Federal definitions of chronic homelessness. A long stay in jail or prison does not qualify a person as chronically homeless.

  5. Evaluability assessment (EA) is a process for assessing the overall feasibility of an evaluation before the evaluation takes place. EA helps researchers select evaluation designs that are feasible, relevant, and useful. EA is described in detail in Wholey (2004).

References

  • Bell, J. B. (2004). Managing evaluation projects. In J. S. Wholey, H. P. Hatry, & K. E. Newcomer (Eds.), Handbook of practical program evaluation (2nd ed., pp. 571–603). San Francisco: Jossey-Bass.

  • Bickman, L. (1985). Randomized field experiments in education: Implementation issues. New Directions for Program Evaluation, 28, 39–53.

  • Boruch, R. (1997). Randomized experiments for planning and evaluation. Thousand Oaks: Sage.

  • Clarke, R., & Cornish, D. (1972). The controlled trial in institutional research: Paradigm or pitfall for penal evaluators? London: HMSO.

  • Conner, R. F. (1977). Selecting a control group: An analysis of the randomization process in twelve social reform programs. Evaluation Quarterly, 1, 194–244.

  • Davis, R. L., & Auchter, B. (2010). National Institute of Justice funding of experimental studies of violence against women: A critical look at implementation issues and policy implications. Journal of Experimental Criminology, 6, 377–395.

  • Devine, J., Wright, J., & Joyner, L. (1994). Issues in implementing a randomized experiment in a field setting. New Directions for Program Evaluation, 63, 27–40.

  • Eck, J. (2002). Learning from experience in problem-oriented policing and crime prevention: The positive functions of weak evaluations and the negative functions of strong ones. In N. Tilley (Ed.), Evaluation for crime prevention (Crime Prevention Studies, Vol. 14, pp. 93–117). Monsey: Criminal Justice Press.

  • Farrington, D. P. (1983). Randomized experiments on crime and justice. In M. Tonry & N. Morris (Eds.), Crime and justice (Vol. 4, pp. 257–308). Chicago: University of Chicago Press.

  • Feder, L., Jolin, A., & Feyerherm, W. (2000). Lessons from two randomized experiments in criminal justice settings. Crime and Delinquency, 46, 380–400.

  • Fontaine, J., Gilchrist-Scott, D., & Horvath, A. (2011). Supportive housing for the disabled reentry population: The District of Columbia frequent users service enhancement pilot program. Washington, DC: Urban Institute.

  • Goldkamp, J. (2008). Missing the target and missing the point: “Successful” random assignment but misleading results. Journal of Experimental Criminology, 4, 83–115.

  • Gondolf, E. W. (2010). Lessons from a successful and failed random assignment testing batterer program innovations. Journal of Experimental Criminology, 6, 355–376.

  • Gueron, J. M. (2002). The politics of random assignment: Implementing studies and affecting policy. In F. Mosteller & R. Boruch (Eds.), Evidence matters: Randomized trials in education research (pp. 15–49). Washington, DC: Brookings Institution Press.

  • Hatry, H. P. (2004). Using agency records. In J. S. Wholey, H. P. Hatry, & K. E. Newcomer (Eds.), Handbook of practical program evaluation (2nd ed., pp. 396–411). San Francisco: Jossey-Bass.

  • Hatry, H. P., & Newcomer, K. E. (2004). Pitfalls of evaluation. In J. S. Wholey, H. P. Hatry, & K. E. Newcomer (Eds.), Handbook of practical program evaluation (2nd ed., pp. 547–570). San Francisco: Jossey-Bass.

  • Lum, C., & Yang, S. M. (2005). Why do evaluation researchers in crime and justice choose non-experimental methods? Journal of Experimental Criminology, 1, 191–213.

  • Morell, J. A. (2010). Evaluation in the face of uncertainty. New York: Guilford Press.

  • Moser, R. (2007). Reentry housing: Systems, programs and policy. Presentation to the North Carolina Department of Corrections (May). New York: Corporation for Supportive Housing.

  • National Research Council. (2005). Improving evaluation of anticrime programs. Committee on Improving Evaluation of Anticrime Programs, Committee on Law and Justice, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.

  • Nightingale, D. S., & Rossman, S. B. (2004). Collecting data in the field. In J. S. Wholey, H. P. Hatry, & K. E. Newcomer (Eds.), Handbook of practical program evaluation (2nd ed., pp. 363–395). San Francisco: Jossey-Bass.

  • Petersilia, J. (1989). Implementing random experiments: Lessons from BJA’s intensive supervision project. Evaluation Review, 13, 435–458.

  • Rezmovic, E. L., Cook, T. J., & Dobson, L. D. (1981). Beyond random assignment: Factors affecting evaluation integrity. Evaluation Review, 5, 51–67.

  • Roman, C., Fontaine, J., & Burt, M. (2009). System change accomplishments of the Corporation for Supportive Housing’s Returning Home Initiative: Summary brief. Washington, DC: Urban Institute.

  • Rossi, P., Lipsey, M., & Freeman, H. (2004). Evaluation: A systematic approach (7th ed.). Thousand Oaks: Sage.

  • Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton Mifflin.

  • Sherman, L. W. (2007). The power few: Experimental criminology and the reduction of harm. Journal of Experimental Criminology, 3, 299–321.

  • Solomon, A. L., Osborne, J. W. L., LoBuglio, S. F., Mellow, J., & Mukamal, D. (2008). Life after lockup: Improving reentry from jail to the community. Washington, DC: Urban Institute.

  • Weisburd, D. (2003). Ethical practice and evaluation of interventions in crime and justice: The moral imperative for randomized trials. Evaluation Review, 27, 336–354.

  • Wholey, J. S. (2004). Evaluability assessment. In J. S. Wholey, H. P. Hatry, & K. E. Newcomer (Eds.), Handbook of practical program evaluation (2nd ed., pp. 33–62). San Francisco: Jossey-Bass.

  • Wolff, N. (2000). Using randomized controlled trials to evaluate socially complex services: Problems, challenges and recommendations. The Journal of Mental Health Policy and Economics, 3, 97–109.


Acknowledgments

Part of the research was funded by grant 2007-IJ-CX-0022 from the National Institute of Justice. The research was also supported by the Corporation for Supportive Housing. The authors gratefully acknowledge the support of Dr. Eileen Couture, Dr. Carlos Quezada-Gomez, and Terri Marshall in their roles as local investigators for the study, and wholeheartedly thank Doris Weiland of Temple University for administering the random assignment protocol and helping us troubleshoot problems with the pipeline. The authors also wish to acknowledge the contributions of the reviewers whose comments improved this paper.

Author information


Corresponding author

Correspondence to Caterina G. Roman.



Cite this article

Roman, C.G., Fontaine, J., Fallon, J. et al. Defending and managing the pipeline: lessons for running a randomized experiment in a correctional institution. J Exp Criminol 8, 307–329 (2012). https://doi.org/10.1007/s11292-012-9155-y
