
Deep learning-based endoscopic anatomy classification: an accelerated approach for data preparation and model validation

Published in Surgical Endoscopy

Abstract

Background

Photodocumentation during endoscopy procedures is one of the indicators of endoscopy performance quality; however, this indicator is difficult to measure and audit in the endoscopy unit. Emerging artificial intelligence technology may solve this problem, although model development requires a large amount of labeled material. We developed a deep learning-based endoscopic anatomy classification system using convolutional neural networks and an accelerated data preparation approach.

Patients and methods

We retrospectively collected 8,041 images from esophagogastroduodenoscopy (EGD) procedures, and two experts labeled them according to nine anatomical locations of the upper gastrointestinal tract. A base model for multiclass classification of EGD images was developed first; an additional 6,091 images were then collected and classified by the base model. Of these, 5,963 images were manually confirmed and added to the training data to develop the subsequent enhanced model. Additional internal and external endoscopy image datasets were used to test model performance.
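As an illustration of how such a two-stage, model-assisted labeling workflow could be implemented, the sketch below fine-tunes an ImageNet-pretrained CNN on the expert-labeled images and then uses it to propose labels for the additional images before expert confirmation. This is a minimal sketch only: the framework (PyTorch), backbone (ResNet-50), folder layout, and hyperparameters are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of the "accelerated" (model-assisted) labeling workflow:
# 1) fine-tune a pretrained CNN on the expert-labeled EGD images (9 anatomy classes),
# 2) let the base model pre-classify new images, which experts then confirm or correct.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

NUM_CLASSES = 9  # anatomical locations of the upper GI tract

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Stage 1: train the base model on the expert-labeled images (paths are hypothetical).
train_ds = datasets.ImageFolder("egd_labeled/train", transform=tfm)
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True, num_workers=4)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # replace the 1000-class head
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(10):
    for x, y in train_dl:
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()

# Stage 2: pre-classify the additional unlabeled images; experts review these
# proposals, and confirmed images are added to train the enhanced model.
# (Unlabeled images are placed under one dummy class folder so ImageFolder can read them.)
unlabeled_ds = datasets.ImageFolder("egd_unlabeled", transform=tfm)
unlabeled_dl = DataLoader(unlabeled_ds, batch_size=32, shuffle=False)

model.eval()
proposed_labels = []
with torch.no_grad():
    for x, _ in unlabeled_dl:
        preds = model(x.to(device)).argmax(dim=1)
        proposed_labels.extend(preds.cpu().tolist())
```

Under this workflow, experts only verify or correct the proposed labels instead of annotating every image from scratch, which is what accelerates data preparation.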

Results

The base model achieved a total accuracy of 96.29%, and the enhanced model achieved a total accuracy of 96.64%. Overall accuracy improved with the enhanced model compared with the base model on the internal test dataset both without narrowband images (93.05% vs. 91.25%, p < 0.01) and with narrowband images (92.74% vs. 90.46%, p < 0.01). On the external test dataset, the enhanced model achieved a total accuracy of 92.56%.
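The abstract does not name the statistical test behind the reported p-values. McNemar's test is one standard choice for comparing two classifiers evaluated on the same paired test images, and the snippet below shows how such a comparison could be run; it is a hypothetical illustration, assuming statsmodels is available, and the names compare_models, y_true, pred_base, and pred_enh are made up for the example.

```python
# Hypothetical comparison of base vs. enhanced model predictions on the same test set
# using McNemar's test on the 2x2 table of correct/incorrect agreement.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

def compare_models(y_true, pred_base, pred_enh):
    """Return (base accuracy, enhanced accuracy, McNemar p-value)."""
    base_ok = np.asarray(pred_base) == np.asarray(y_true)
    enh_ok = np.asarray(pred_enh) == np.asarray(y_true)
    table = [
        [np.sum(base_ok & enh_ok),  np.sum(base_ok & ~enh_ok)],
        [np.sum(~base_ok & enh_ok), np.sum(~base_ok & ~enh_ok)],
    ]
    result = mcnemar(table, exact=False, correction=True)
    return base_ok.mean(), enh_ok.mean(), result.pvalue

# Usage (with real test labels and model predictions):
# acc_base, acc_enh, p = compare_models(y_true, pred_base, pred_enh)
```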

Conclusions

We constructed a deep learning-based model with an accelerated data preparation approach that can be used for quality control in endoscopy units. The model was validated against both internal and external datasets, achieving high accuracy.



Funding

The authors thank the Ministry of Science and Technology of Taiwan (MOST 109-2634-F-002-026 and MOST 110-2634-F-002-009) and Changhua Christian Hospital (109-CCH-IRP-008, 110-CCH-IRP-020) for financial support.

Author information

Corresponding author

Correspondence to Hsu-Heng Yen.

Ethics declarations

Disclosures

Yuan-Yen Chang, Pai-Chi Li, Ruey-Feng Chang, Chih-Da Yao, Yang-Yuan Chen, Wen-Yen Chang and Hsu-Heng Yen have no conflict of interest or financial ties to disclose.

Ethical approval

The present study was approved by the Ethical Review Committee of Changhua Christian Hospital. All procedures were performed in accordance with the ethical standards laid down in the 1964 Declaration of Helsinki and its later amendments.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Chang, YY., Li, PC., Chang, RF. et al. Deep learning-based endoscopic anatomy classification: an accelerated approach for data preparation and model validation. Surg Endosc 36, 3811–3821 (2022). https://doi.org/10.1007/s00464-021-08698-2

