Improved Artifact Detection in Endoscopy Imaging Through Profile Pruning

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12722)

Abstract

Endoscopy is a highly operator-dependent procedure. During any endoscopic surveillance of hollow organs, the presence of several imaging artifacts such as blur, specularity, floating debris and pixel saturation is inevitable. Artifacts degrade the quality of diagnosis and treatment, as they can obscure features relevant for assessing inflammation and the morphological changes that are characteristic of precursors to cancer. In addition, they affect any automated analysis. It is therefore desirable to detect and localise areas that are corrupted by artifacts, so that the affected frames can either be discarded or the presence of these artifacts can be taken into account during diagnosis. Such an approach can substantially reduce the false detection rate. In this work, we present a novel bounding-box pruning approach that can effectively improve artifact detection and provide high localisation scores across diverse artifact classes. To this end, we train an EfficientDet architecture by minimising a focal loss, and compute the Bhattacharyya distance between the probability densities of pre-computed, instance-specific mean profiles of 7 artifact categories and those of the predicted bounding-box profiles. Our results show that this novel approach improves commonly used metrics such as mean average precision and intersection-over-union by a large margin.
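The pruning step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the histogram-based box profile, the function names, and the distance threshold are all assumptions introduced here for clarity. The Bhattacharyya distance between two discrete densities p and q is computed as -ln(Σ √(pᵢqᵢ)), and a predicted box is kept only when its profile lies close to the pre-computed mean profile of its class.

```python
import numpy as np

def bhattacharyya_distance(p, q, eps=1e-12):
    """Bhattacharyya distance between two discrete probability densities."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / (p.sum() + eps)          # normalise to valid densities
    q = q / (q.sum() + eps)
    bc = np.sum(np.sqrt(p * q))      # Bhattacharyya coefficient in [0, 1]
    return -np.log(bc + eps)         # 0 for identical densities

def box_profile(image, box, bins=32):
    """Normalised intensity histogram of the region inside a bounding box
    (a simple stand-in for the paper's instance-specific profile)."""
    x1, y1, x2, y2 = box
    patch = image[y1:y2, x1:x2]
    hist, _ = np.histogram(patch, bins=bins, range=(0, 255))
    return hist / max(hist.sum(), 1)

def prune_boxes(image, detections, class_mean_profiles, threshold=0.5):
    """Keep a detection only if its profile matches the class mean profile.

    detections: list of ((x1, y1, x2, y2), class_id) predicted by the detector.
    class_mean_profiles: dict mapping class_id -> pre-computed mean profile.
    threshold: illustrative cut-off on the Bhattacharyya distance.
    """
    kept = []
    for box, cls in detections:
        d = bhattacharyya_distance(box_profile(image, box),
                                   class_mean_profiles[cls])
        if d <= threshold:
            kept.append((box, cls))
    return kept
```

For example, a box whose intensity profile is disjoint from its class's mean profile (distance ≫ 0) would be pruned, while a well-matching box (distance near 0) survives; in practice the threshold would be tuned per artifact class.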

Keywords

Endoscopy imaging · Artifact detection · Deep learning · Artifact profile

Copyright information

© Springer Nature Switzerland AG 2021

Authors and Affiliations

  1. Department of Engineering Science, Big Data Institute, University of Oxford, Oxford, UK
  2. Oxford NIHR Biomedical Research Centre, Oxford, UK