Abstract
Purpose
To evaluate whether, and to what extent, the degree of subspecialization in abdominal imaging (AI) affects the rate of discrepancies identified on subspecialty review of body CT studies initially interpreted by board-certified radiologists not subspecialized in AI.
Method and Materials
AI division radiologists at one academic medical center were classified as primary or secondary members of the division according to whether they perform more or less than 50% of their clinical duties in AI. Primary AI division radiologists were further subdivided according to whether they focus their clinical duties almost exclusively on AI. All AI division radiologists performed subspecialty review of all after-hours body CT studies initially interpreted by non-division radiologists. Discrepancies identified on subspecialty review of consecutive after-hours body CT scans performed between 7/1/10 and 12/31/10 were analyzed and placed into one of three categories: (1) discrepancies that could potentially affect patient care (“clinically relevant discrepancies”, or CRD); (2) discrepancies that would not affect patient care (“incidental discrepancies”, or ID); and (3) other types of comments. Rates of CRD and ID detection were compared between subgroups of AI division radiologists defined by degree of subspecialization.
Results
A total of 1303 studies met the inclusion criteria. Of 742 cases reviewed by primary members of the AI division, 33 (4.4%) had CRD and 78 (10.5%) had ID. Of 561 cases reviewed by secondary members of the AI division, 11 (2.0%) had CRD and 36 (6.5%) had ID. The differences between the groups were statistically significant for both types of discrepancies (p = 0.01). When primary members of the AI division were further subdivided by extent of clinical focus on abdominal imaging, rates of CRD and ID detection were higher for the subgroup with greater clinical focus on abdominal imaging.
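The abstract does not state which statistical test produced p = 0.01. As an illustration only, a Pearson chi-square test on the reported 2 × 2 counts yields p-values in the same range; this is a sketch of one plausible analysis, not the authors' actual method:

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for the
    2x2 table [[a, b], [c, d]], plus its p-value for 1 degree of freedom."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # For 1 df, P(X > chi2) = erfc(sqrt(chi2 / 2))
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# CRD counts from the Results: 33/742 (primary) vs. 11/561 (secondary)
chi2_crd, p_crd = chi2_2x2(33, 742 - 33, 11, 561 - 11)

# ID counts from the Results: 78/742 (primary) vs. 36/561 (secondary)
chi2_id, p_id = chi2_2x2(78, 742 - 78, 36, 561 - 36)

print(f"CRD: chi2 = {chi2_crd:.2f}, p = {p_crd:.3f}")
print(f"ID:  chi2 = {chi2_id:.2f}, p = {p_id:.3f}")
```

Both comparisons come out significant at the 0.05 level (p ≈ 0.014 for CRD, p ≈ 0.010 for ID); the authors may have used a different test (e.g., with a continuity correction), which would shift the exact values slightly.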
Conclusion
The degree of AI subspecialization affects the rates of clinically relevant and incidental discrepancies identified in body CT interpretations initially rendered by board-certified radiologists not subspecialized in abdominal imaging.
Bell, M.E., Patel, M.D. The degree of abdominal imaging (AI) subspecialization of the reviewing radiologist significantly impacts the number of clinically relevant and incidental discrepancies identified during peer review of emergency after-hours body CT studies. Abdom Imaging 39, 1114–1118 (2014). https://doi.org/10.1007/s00261-014-0139-4