Determining Follow-Up Imaging Study Using Radiology Reports

  • Sandeep Dalal
  • Vadiraj Hombal (corresponding author)
  • Wei-Hung Weng
  • Gabe Mankovich
  • Thusitha Mabotuwana
  • Christopher S. Hall
  • Joseph Fuller III
  • Bruce E. Lehnert
  • Martin L. Gunn
Original Paper

Abstract

Radiology reports often contain follow-up imaging recommendations. Failure to comply with these recommendations in a timely manner can lead to delayed treatment, poor patient outcomes, complications, unnecessary testing, lost revenue, and legal liability. The objective of this study was to develop a scalable approach to automatically identify the completion of a follow-up imaging study recommended by a radiologist in a preceding report. We selected imaging reports containing 559 follow-up imaging recommendations, along with all subsequent reports for the same patients, from a multi-hospital academic practice. Three radiologists identified the appropriate follow-up examination among the subsequent reports for each patient, if any, to establish a ground-truth dataset. We then trained an Extremely Randomized Trees classifier that uses recommendation attributes, study metadata, and text similarity of the radiology reports to determine the most likely follow-up examination for a preceding recommendation. Pairwise inter-annotator F-score ranged from 0.853 to 0.868; the corresponding F-score of the classifier in identifying follow-up exams was 0.807. Our study describes a methodology to automatically determine the most likely follow-up exam after a follow-up imaging recommendation. The accuracy of the algorithm suggests that automated methods can be integrated into a follow-up management application to improve adherence to follow-up imaging recommendations. Radiology administrators could use such a system to monitor follow-up compliance rates and proactively send reminders to primary care providers and/or patients to improve adherence.
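The matching step described in the abstract can be illustrated with a minimal sketch (not the authors' implementation): each (recommendation, candidate follow-up exam) pair is converted into a small feature vector combining study metadata and report text similarity, and an Extremely Randomized Trees classifier scores the pairs so the highest-scoring candidate can be selected. The field names, the exact feature set, the number of trees, and the 0.5 decision threshold below are illustrative assumptions; scikit-learn's ExtraTreesClassifier stands in for the model described in the paper.

# Minimal sketch, assuming dictionary-style report records with
# "report_text", "date", "modality", and "anatomy" fields (hypothetical names),
# and a TfidfVectorizer already fitted on the report corpus.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def pair_features(rec, exam, vectorizer):
    """Feature vector for one (recommendation, candidate exam) pair."""
    text_sim = cosine_similarity(
        vectorizer.transform([rec["report_text"]]),
        vectorizer.transform([exam["report_text"]]),
    )[0, 0]
    return [
        (exam["date"] - rec["date"]).days,         # elapsed days between reports
        int(exam["modality"] == rec["modality"]),  # recommended modality performed?
        int(exam["anatomy"] == rec["anatomy"]),    # recommended anatomy imaged?
        text_sim,                                  # report text similarity
    ]

def train(pairs, labels, vectorizer):
    """Fit the classifier on labeled (recommendation, exam) pairs."""
    X = np.array([pair_features(r, e, vectorizer) for r, e in pairs])
    clf = ExtraTreesClassifier(n_estimators=200, random_state=0)
    clf.fit(X, np.asarray(labels))  # label = 1 if the pair is a true follow-up
    return clf

def most_likely_followup(clf, rec, candidates, vectorizer, threshold=0.5):
    """Score each candidate exam and return the best one, or None if none qualify."""
    X = np.array([pair_features(rec, e, vectorizer) for e in candidates])
    scores = clf.predict_proba(X)[:, 1]
    best = int(np.argmax(scores))
    return candidates[best] if scores[best] >= threshold else None

In this sketch, returning None when no candidate clears the threshold corresponds to a recommendation without an identified follow-up exam, which is the case a follow-up management application would flag for a reminder.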

Keywords

Medical informatics applications · Radiology · Natural language processing · Supervised machine learning · Follow-up studies

Notes

Compliance with Ethical Standards

Conflicts of Interest

Authors TM, VH, SD, and CH are employees of Philips working in collaboration with the University of Washington, Department of Radiology under an industry-supported master research agreement. This manuscript details original research performed under this agreement in compliance with the Sunshine Act, but does not employ an existing Philips product. Authors TM and CH also have Adjunct Faculty Appointments with the University of Washington.

Copyright information

© Society for Imaging Informatics in Medicine 2019

Authors and Affiliations

  • Sandeep Dalal (1)
  • Vadiraj Hombal (1, corresponding author)
  • Wei-Hung Weng (2)
  • Gabe Mankovich (1)
  • Thusitha Mabotuwana (3)
  • Christopher S. Hall (3)
  • Joseph Fuller III (4)
  • Bruce E. Lehnert (4)
  • Martin L. Gunn (4)
  1. Clinical Informatics Solutions and Services, Philips Research North America, Cambridge, USA
  2. Department of Biomedical Informatics, Harvard Medical School, Boston, USA
  3. Radiology Solutions, Philips Healthcare, Bothell, USA
  4. Department of Radiology, University of Washington, Seattle, USA