
Learning visual representations with optimum-path forest and its applications to Barrett’s esophagus and adenocarcinoma diagnosis

  • Intelligent Biomedical Data Analysis and Processing
  • Published:
Neural Computing and Applications

Abstract

Considering the increase in the number of Barrett’s esophagus (BE) cases in the last decade, and the expectation that this number will continue to grow, methods that can provide an early diagnosis of dysplasia in BE-diagnosed patients may offer a high probability of cancer remission. The limitations of traditional methods for BE detection and management encourage the development of computer-aided tools to assist with this problem. In this work, we introduce the unsupervised Optimum-Path Forest (OPF) classifier for learning visual dictionaries in the context of BE and automatic adenocarcinoma diagnosis. The proposed approach was validated on two datasets (MICCAI 2015 and Augsburg) using three feature extractors (SIFT, SURF, and A-KAZE, the last not yet applied to the BE context), as well as five supervised classifiers: two variants of the OPF, Support Vector Machines with Radial Basis Function and linear kernels, and a Bayesian classifier. On the MICCAI 2015 dataset, the best results were obtained with the unsupervised OPF for dictionary generation, the supervised OPF for classification, and the SURF feature extractor, reaching an accuracy close to \(78\%\) for distinguishing BE patients from adenocarcinoma patients. On the Augsburg dataset, the most accurate results were also obtained with both OPF classifiers, but with A-KAZE as the feature extractor, yielding an accuracy close to \(73\%\). The combination of feature extraction and bag-of-visual-words techniques outperformed results recently reported in the literature, and we highlight new advances in the related research area. To the best of our knowledge, this is the first work to address computer-aided BE identification using bag-of-visual-words and OPF classifiers, the application of an unsupervised technique to BE feature computation being its main contribution. We also propose a new BE and adenocarcinoma description based on A-KAZE features, not previously applied in the literature.
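As a concrete illustration of the pipeline summarized above, the sketch below builds a visual dictionary from A-KAZE descriptors and classifies histogram-encoded endoscopic images. It assumes OpenCV and scikit-learn; k-means and a linear SVM stand in for the unsupervised and supervised OPF classifiers used in the paper (which relies on LibOPF), and the file names, labels, and dictionary size are hypothetical placeholders, not values from the study.

```python
# A minimal sketch, assuming OpenCV (A-KAZE) and scikit-learn; k-means and a linear
# SVM stand in for the unsupervised/supervised OPF classifiers used in the paper.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

# Hypothetical training images and labels (0 = Barrett's esophagus, 1 = adenocarcinoma).
train_paths = ["be_patient_01.png", "adenocarcinoma_patient_01.png"]
train_labels = [0, 1]

extractor = cv2.AKAZE_create()

def local_descriptors(path):
    """Extract binary A-KAZE descriptors from one endoscopic image."""
    image = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, desc = extractor.detectAndCompute(image, None)
    return desc if desc is not None else np.empty((0, 61), dtype=np.uint8)

def bovw_histogram(desc, codebook):
    """Encode an image as a normalized histogram of visual-word occurrences."""
    words = codebook.predict(desc.astype(np.float32)) if len(desc) else []
    hist, _ = np.histogram(words, bins=np.arange(codebook.n_clusters + 1))
    return hist / max(hist.sum(), 1)

# 1) Learn the visual dictionary from all training descriptors
#    (k-means replaces the unsupervised OPF clustering of the paper).
all_desc = np.vstack([local_descriptors(p) for p in train_paths]).astype(np.float32)
codebook = KMeans(n_clusters=100, random_state=0).fit(all_desc)

# 2) Encode each image and train the final classifier
#    (a linear SVM replaces the supervised OPF).
features = np.array([bovw_histogram(local_descriptors(p), codebook) for p in train_paths])
classifier = SVC(kernel="linear").fit(features, train_labels)
```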


Notes

  1. https://endovissub-barrett.grand-challenge.org/home/.


Acknowledgements

The authors are grateful to DFG Grant PA 1595/3-1, Capes/Alexander von Humboldt Foundation Grant No. BEX 0581-16-0, CNPq Grants 306166/2014-3 and 307066/2017-7, as well as FAPESP Grants 2013/07375-0, 2014/12236-1, and 2016/19403-6. This material is based upon work supported in part by funds provided by the Intel® AI Academy program under Fundunesp Grant No. 2597.2017.

Author information


Corresponding author

Correspondence to João P. Papa.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

de Souza, L.A., Afonso, L.C.S., Ebigbo, A. et al. Learning visual representations with optimum-path forest and its applications to Barrett’s esophagus and adenocarcinoma diagnosis. Neural Comput & Applic 32, 759–775 (2020). https://doi.org/10.1007/s00521-018-03982-0

