
Risk-Based Data Monitoring: Quality Control in Central Nervous System (CNS) Clinical Trials

  • Clinical Trials: Analytical Reports
Therapeutic Innovation & Regulatory Science

Abstract

Monitoring the quality of clinical trial efficacy outcome data has received increased attention in the past decade, with regulatory guidance encouraging it to be conducted proactively and remotely. However, the methods used to develop and implement risk-based data monitoring (RBDM) programs vary, and there is a dearth of published material to guide these processes in the context of central nervous system (CNS) trials. We reviewed regulatory guidance published within the past 6 years, generic white papers, and studies applying RBDM to data from CNS clinical trials. Methodologic considerations and system requirements necessary to establish an effective, real-time risk-based monitoring platform in CNS trials are presented. Key RBDM terms, such as “critical data,” “risk indicators,” “noninformative data,” and “mitigation of risk,” are defined in the context of CNS trial data. Additionally, the potential benefits of, and challenges associated with, implementing data quality monitoring are highlighted. Applying these methodologic and system-requirement considerations to real-time monitoring of clinical ratings in CNS trials has the potential to minimize risk and enhance the quality of clinical trial data.
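
To make the notion of a “risk indicator” concrete, the sketch below shows one way such checks might be applied to clinical rating data. It is a minimal illustration only: the record fields, the PANSS-style scoring, the 25-point change threshold, and the flagging rules are assumptions chosen for demonstration and are not the algorithms described in the article.

```python
# Illustrative sketch only: toy "risk indicator" checks of the general kind the
# abstract names (noninformative data, anomalous rating patterns), applied to
# hypothetical PANSS ratings. Field names, thresholds, and rules are assumptions.
from dataclasses import dataclass


@dataclass
class Visit:
    subject_id: str
    rater_id: str
    visit: int
    items: list[int]  # 30 PANSS items, each scored 1-7


def flag_noninformative(visits: list[Visit]) -> list[str]:
    """Flag visits whose item scores show no variance (every item identical),
    a simple proxy for potentially noninformative ratings."""
    return [
        f"{v.subject_id} visit {v.visit} (rater {v.rater_id}): identical score on every item"
        for v in visits
        if len(set(v.items)) == 1
    ]


def flag_large_shift(visits: list[Visit], threshold: int = 25) -> list[str]:
    """Flag implausibly large changes in PANSS total score between consecutive
    visits of the same subject (the threshold is an arbitrary example value)."""
    flags = []
    by_subject: dict[str, list[Visit]] = {}
    for v in visits:
        by_subject.setdefault(v.subject_id, []).append(v)
    for subject, records in by_subject.items():
        records.sort(key=lambda r: r.visit)
        for prev, curr in zip(records, records[1:]):
            delta = abs(sum(curr.items) - sum(prev.items))
            if delta >= threshold:
                flags.append(
                    f"{subject}: total changed by {delta} between visits {prev.visit} and {curr.visit}"
                )
    return flags


if __name__ == "__main__":
    data = [
        Visit("S001", "R01", 1, [4] * 30),             # zero variance -> noninformative flag
        Visit("S001", "R01", 2, [2] * 15 + [3] * 15),  # total drops 120 -> 75 -> shift flag
    ]
    for message in flag_noninformative(data) + flag_large_shift(data):
        print("RISK INDICATOR:", message)
```

In a real-time RBDM platform, flags like these would be surfaced to central monitors as they arise so that risk can be mitigated (for example, through rater remediation or site queries) rather than discovered after database lock.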



Author information

Corresponding author

Correspondence to Cynthia McNamara PhD.


Cite this article

McNamara, C., Engelhardt, N., Potter, W. et al. Risk-Based Data Monitoring: Quality Control in Central Nervous System (CNS) Clinical Trials. Ther Innov Regul Sci 53, 176–182 (2019). https://doi.org/10.1177/2168479018774325
