The degree of abdominal imaging (AI) subspecialization of the reviewing radiologist significantly impacts the number of clinically relevant and incidental discrepancies identified during peer review of emergency after-hours body CT studies


Abstract

Purpose

To evaluate if and to what extent the degree of subspecialization in abdominal imaging (AI) affects rates of discrepancies identified on review of body CT studies initially interpreted by board-certified radiologists not specialized in AI.

Methods and Materials

AI division radiologists at one academic medical center were classified as primary or secondary members of the division according to whether they perform more or less than 50% of their clinical duties in AI. Primary AI division radiologists were further subdivided based on whether they focus their clinical duties almost exclusively on AI. All AI radiologists performed subspecialty review of all after-hours body CT studies initially interpreted by any non-division radiologist. The discrepancies identified in the subspecialty review of consecutive after-hours body CT scans performed between 7/1/10 and 12/31/10 were analyzed and placed into one of three categories: (1) discrepancies that could potentially affect patient care ("clinically relevant discrepancies", or CRD); (2) discrepancies that would not affect patient care ("incidental discrepancies", or ID); and (3) other types of comments. Rates of CRD and ID detection were compared between subgroups of AI division radiologists defined by degree of subspecialization.

Results

A total of 1303 studies met the inclusion criteria. Of 742 cases reviewed by primary members of the AI division, 33 (4.4%) had CRD and 78 (10.5%) had ID. Of 561 cases reviewed by secondary members of the AI division, 11 (2.0%) had CRD and 36 (6.5%) had ID. The differences between the groups were statistically significant for both types of discrepancies (p = 0.01). When primary members of the AI division were further subdivided by extent of clinical focus on abdominal imaging, rates of both CRD and ID detection were higher for the subgroup with greater clinical focus on abdominal imaging.
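The abstract reports significance at p = 0.01 but does not name the statistical test used. As an illustrative sketch only (the choice of a chi-square test on 2×2 contingency tables is our assumption, not the authors' stated method), the group comparisons above can be roughly re-checked from the reported counts:

```python
# Rough re-check of the reported group comparisons using a chi-square
# test on 2x2 contingency tables. Counts are taken directly from the
# Results; the test choice is an assumption, as the abstract does not
# specify the authors' method.
from scipy.stats import chi2_contingency

# rows: primary vs. secondary AI division reviewers
# columns: cases with a discrepancy vs. cases without
crd_table = [[33, 742 - 33], [11, 561 - 11]]  # clinically relevant discrepancies
id_table = [[78, 742 - 78], [36, 561 - 36]]   # incidental discrepancies

for label, table in [("CRD", crd_table), ("ID", id_table)]:
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"{label}: chi2 = {chi2:.2f}, p = {p:.3f}")
```

Both comparisons come out significant at the 0.05 level on these counts, consistent with the reported result.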

Conclusion

The degree of AI subspecialization affects the rates of clinically relevant and incidental discrepancies identified in body CT interpretations initially rendered by board-certified radiologists not subspecialized in abdominal imaging.



Author information


Corresponding author

Correspondence to Maitray D. Patel.

About this article


Cite this article

Bell, M.E., Patel, M.D. The degree of abdominal imaging (AI) subspecialization of the reviewing radiologist significantly impacts the number of clinically relevant and incidental discrepancies identified during peer review of emergency after-hours body CT studies. Abdom Imaging 39, 1114–1118 (2014). https://doi.org/10.1007/s00261-014-0139-4
