Date: 12 Jul 2009
Ensuring safety, implementation and scientific integrity of clinical trials: lessons from the Criminal Justice–Drug Abuse Treatment Studies Data and Safety Monitoring Board
Data and safety monitoring boards (DSMBs) provide independent oversight of biomedical clinical trials, ensuring the safe and ethical treatment of research participants, the quality of data, and the credibility of study findings. Recently, the scope of research monitored by DSMBs has expanded to include randomized clinical trials of behavioral and psychosocial interventions in community and justice-based settings. This paper describes the development and role of a DSMB created by the National Institute on Drug Abuse (NIDA) to monitor six multi-site clinical trials conducted within the Criminal Justice–Drug Abuse Treatment Studies (CJ-DATS). We believe this is one of the first applications of a formal DSMB in justice settings. Special attention is given to developing processes for measuring and monitoring a range of implementation issues that arise in research conducted within criminal justice settings. Lessons learned and recommendations for strengthening future DSMB work in this area are discussed.
Gary Field, Ph.D., is retired.
Journal of Experimental Criminology, Volume 5, Issue 3, pp. 323–344
- Springer Netherlands
- Keywords: Clinical trial, Criminal justice, Health services research
- Author Affiliations
- 1. National Institute on Drug Abuse, Rockville, MD, USA
- 2. Chestnut Health Systems, Bloomington, IL, USA
- 3. Columbia University, New York, NY, USA
- 4. Friends Research Institute, Baltimore, MD, USA
- 5. Oregon Department of Corrections, OR, USA
- 6. National Institute on Drug Abuse, 6001 Executive Blvd., Rm. 4222, MSC 9565, Bethesda, MD 20892-9565, USA