Exploring differences in adverse symptom event grading thresholds between clinicians and patients in the clinical trial setting

  • Thomas M. Atkinson
  • Lauren J. Rogak
  • Narre Heon
  • Sean J. Ryan
  • Mary Shaw
  • Liora P. Stark
  • Antonia V. Bennett
  • Ethan Basch
  • Yuelin Li
Original Article – Clinical Oncology



Symptomatic adverse event (AE) monitoring is essential in cancer clinical trials to assess patient safety and to inform decisions about treatment and continued trial participation. Because prior research has demonstrated that conventional concordance metrics (e.g., the intraclass correlation) may not capture nuanced aspects of the association between clinician- and patient-graded AEs, we aimed to characterize differences in AE grading thresholds among doctors (MDs), registered nurses (RNs), and patients using a Bayesian graded response model (GRM).


From the medical charts of 393 patients aged 26–91 (M = 62.39; 43% male) receiving chemotherapy, we retrospectively extracted MD, RN, and patient AE ratings. Patients self-reported AEs using STAR (Symptom Tracking and Reporting), a previously developed patient-language adaptation of the Common Terminology Criteria for Adverse Events (CTCAE). A GRM was fitted to estimate the latent grading thresholds of MDs, RNs, and patients.
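To illustrate what a grading threshold means in this framework (this is a sketch of Samejima's graded response model in Python, not the authors' Bayesian JAGS implementation, and all parameter values below are hypothetical): the probability of a rater assigning grade k or higher is a logistic function of the latent symptom severity, and each rater group has its own set of ordered thresholds.

```python
import math

def grm_category_probs(theta, discrimination, thresholds):
    """Samejima graded response model: probabilities of grades 0..K
    given latent symptom severity `theta`, an item discrimination,
    and K ordered grade-boundary thresholds.
    Illustrative only; values used below are hypothetical."""
    # Cumulative curves: P(grade >= k) for k = 0 (always 1) through K+1 (always 0)
    cum = [1.0] + [
        1.0 / (1.0 + math.exp(-discrimination * (theta - b)))
        for b in thresholds
    ] + [0.0]
    # Each category probability is the gap between adjacent cumulative curves
    return [cum[k] - cum[k + 1] for k in range(len(thresholds) + 1)]

# Hypothetical raters observing the same underlying severity (theta = 0.5):
# a patient with lower thresholds vs. an MD with higher thresholds.
patient = grm_category_probs(theta=0.5, discrimination=1.5,
                             thresholds=[-1.0, 0.2, 1.4])
md = grm_category_probs(theta=0.5, discrimination=1.5,
                        thresholds=[-0.3, 0.9, 2.1])
```

Under this model, a rater group with higher thresholds assigns more probability mass to lower grades at the same underlying severity, i.e., the same symptom level is less likely to be recorded as a notable AE.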


Clinicians had overall higher average grading thresholds than patients when assessing diarrhea, dyspnea, nausea, and vomiting. However, RNs had lower grading thresholds than both patients and MDs when assessing constipation. The GRM also revealed greater variability in patients' AE grading thresholds than in those of clinicians.


The present study provides evidence that patients report some AEs that clinicians might not consider noteworthy until they become more severe. GRM methodology could enhance clinical understanding of the patient symptomatic experience and facilitate discussion where AE grading discrepancies exist. Future work should focus on capturing explicit AE grading decision criteria from MDs, RNs, and patients.


Keywords: Patient-reported outcomes · Adverse events · Clinical trials · Clinician–patient agreement · Item response theory · Neoplasms


Compliance with ethical standards


This project was supported by a National Institutes of Health Support Grant (NCI 2 P30 CA08748-48), which provides partial support for the Behavioral Research Methods Core Facility used in conducting this investigation. This study was also supported by a grant from the Society of Memorial Sloan Kettering.

Conflict of interest

The authors declare that there is no conflict of interest.

Ethical approval

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.

Informed consent

Informed consent was obtained from all individual participants included in the study.



Copyright information

© Springer-Verlag Berlin Heidelberg 2017

Authors and Affiliations

  • Thomas M. Atkinson (1, email author)
  • Lauren J. Rogak (2)
  • Narre Heon (2)
  • Sean J. Ryan (2, 3)
  • Mary Shaw (1)
  • Liora P. Stark (1)
  • Antonia V. Bennett (4)
  • Ethan Basch (2, 4)
  • Yuelin Li (1)

  1. Department of Psychiatry and Behavioral Sciences, Memorial Sloan Kettering Cancer Center, New York, USA
  2. Department of Epidemiology and Biostatistics, Memorial Sloan Kettering Cancer Center, New York, USA
  3. City University of New York, New York, USA
  4. University of North Carolina-Chapel Hill, Chapel Hill, USA
