Journal of Experimental Criminology, Volume 5, Issue 3, pp 323–344

Ensuring safety, implementation and scientific integrity of clinical trials: lessons from the Criminal Justice–Drug Abuse Treatment Studies Data and Safety Monitoring Board

  • Redonna K. Chandler
  • Michael L. Dennis
  • Nabila El-Bassel
  • Robert P. Schwartz
  • Gary Field

Abstract

Data and safety monitoring boards (DSMBs) provide independent oversight of biomedical clinical trials, ensuring the safe and ethical treatment of research participants, the quality of the data, and the credibility of study findings. Recently, the type of research monitored by DSMBs has expanded to include randomized clinical trials of behavioral and psychosocial interventions in community and justice-based settings. This paper focuses on the development and role of a DSMB created by the National Institute on Drug Abuse (NIDA) to monitor six multi-site clinical trials conducted within the Criminal Justice–Drug Abuse Treatment Studies (CJ-DATS). We believe this is one of the first such applications of formal DSMBs in justice settings. Special attention is given to developing processes for measuring and monitoring a range of implementation issues for research conducted within criminal justice settings. Lessons learned and recommendations to enhance future DSMB work in this area are discussed.

Keywords

Clinical trial · CJ-DATS · Criminal justice · DSMB · Health services research


Copyright information

© United States Government 2009

Authors and Affiliations

  • Redonna K. Chandler (1, 6)
  • Michael L. Dennis (2)
  • Nabila El-Bassel (3)
  • Robert P. Schwartz (4)
  • Gary Field (5)

  1. National Institute on Drug Abuse, Rockville, USA
  2. Chestnut Health Systems, Bloomington, USA
  3. Columbia University, New York, USA
  4. Friends Research Institute, Baltimore, USA
  5. Oregon Department of Corrections, Oregon, USA
  6. National Institute on Drug Abuse, Bethesda, USA
