
Journal of Medical Systems, 42:237

Annotating Early Esophageal Cancers Based on Two Saliency Levels of Gastroscopic Images

  • Dingyun Liu
  • Nini Rao
  • Xinming Mei
  • Hongxiu Jiang
  • Quanchi Li
  • ChengSi Luo
  • Qian Li
  • Chengshi Zeng
  • Bing Zeng
  • Tao Gan
Part of the topical collection: Image & Signal Processing

Abstract

Early diagnosis of esophageal cancer can greatly improve the survival rate of patients. At present, the annotation of early esophageal cancer (EEC) lesions in gastroscopic images is generally performed manually by medical personnel in the clinic. To reduce the effects of subjectivity and fatigue in manual annotation, computer-aided annotation is needed. However, automated annotation of EEC lesions in images is a challenging task owing to the fine-grained variability in the appearance of EEC lesions. This study modifies the traditional EEC annotation framework and exploits visual saliency information to develop a two-saliency-level-based lesion annotation (TSL-BLA) method for EEC lesions in gastroscopic images. Unlike existing methods, the proposed framework strongly constrains false positive outputs. Moreover, TSL-BLA places additional emphasis on the annotation of small EEC lesions. A total of 871 gastroscopic images from 231 patients were used to validate TSL-BLA: 365 of those images contain 434 EEC lesions, and the remaining 506 images contain no lesions. In addition, 101 small lesion regions were extracted from the 434 lesions to further evaluate performance on small lesions. The experimental results show that the mean detection rate and Dice similarity coefficient of TSL-BLA were 97.24% and 75.15%, respectively. Compared with other state-of-the-art methods, TSL-BLA achieves better performance, is markedly superior when annotating small EEC lesions, produces fewer false positive outputs, and runs quickly. The proposed method therefore has good prospects for aiding clinical EEC diagnosis.
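The Dice similarity coefficient reported above measures the overlap between an annotated lesion region and the ground-truth mask. The snippet below is a minimal sketch of that metric for binary masks; the function name and the NumPy-based implementation are illustrative and not taken from the paper.

```python
import numpy as np

def dice_coefficient(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total
```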

Keywords

Gastroscopic image · Early esophageal cancer · Lesion annotation · Visual saliency · Superpixel segmentation
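As a rough illustration of how the saliency and superpixel components named in the keywords can interact, the sketch below derives a coarse saliency map with the spectral-residual method and averages it over SLIC superpixels to form a candidate lesion mask. This is only an assumed stand-in for the paper's two-saliency-level pipeline; the image size, smoothing parameters, threshold, and function names are illustrative choices, not the authors' settings.

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter
from skimage.color import rgb2gray
from skimage.segmentation import slic
from skimage.transform import resize

def spectral_residual_saliency(rgb_image, small_size=64):
    """Coarse pixel-level saliency map via the spectral residual of the log amplitude spectrum."""
    gray = resize(rgb2gray(rgb_image), (small_size, small_size), anti_aliasing=True)
    spectrum = np.fft.fft2(gray)
    log_amplitude = np.log(np.abs(spectrum) + 1e-8)
    phase = np.angle(spectrum)
    residual = log_amplitude - uniform_filter(log_amplitude, size=3)  # spectral residual
    saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    saliency = gaussian_filter(saliency, sigma=2.5)
    saliency = resize(saliency, rgb_image.shape[:2])
    return (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-8)

def superpixel_saliency_mask(rgb_image, n_segments=200, threshold=0.5):
    """Average the saliency inside each SLIC superpixel and keep the salient superpixels."""
    saliency = spectral_residual_saliency(rgb_image)
    labels = slic(rgb_image, n_segments=n_segments, compactness=10)
    mask = np.zeros(labels.shape, dtype=bool)
    for label in np.unique(labels):
        region = labels == label
        if saliency[region].mean() >= threshold:
            mask[region] = True
    return mask
```

Aggregating saliency over superpixels rather than individual pixels tends to produce region-level candidates with coherent boundaries, which is one plausible reason to combine the two techniques.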

Notes

Acknowledgments

This work was funded by the National Natural Science Foundation of China (61872405 and 61720106004), the Sichuan Science and Technology Support Program (2015SZ0191), the Key Project of the Natural Science Foundation of Guangdong Province (2016A030311040), the Fundamental Research Funds for the Central Universities of China (ZYGX2016J189), and the Scientific Platform Improvement Project of UESTC.

Compliance with Ethical Standards

Conflict of Interest

The authors declare that they have no conflict of interest.

Ethical Approval

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.


Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  • Dingyun Liu (1, 2, 3)
  • Nini Rao (1, 2, 3)
  • Xinming Mei (1, 2, 3, 4)
  • Hongxiu Jiang (1, 2, 3)
  • Quanchi Li (1, 2, 3)
  • ChengSi Luo (1, 2, 3)
  • Qian Li (1, 2, 3)
  • Chengshi Zeng (1, 2, 3)
  • Bing Zeng (5)
  • Tao Gan (6)

  1. School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
  2. Center for Informational Biology, University of Electronic Science and Technology of China, Chengdu, China
  3. Key Laboratory for NeuroInformation of the Ministry of Education, University of Electronic Science and Technology of China, Chengdu, China
  4. Institute of Electronic and Information Engineering of UESTC in Guangdong, Dongguan, China
  5. School of Communication and Information Engineering, University of Electronic Science and Technology of China, Chengdu, China
  6. Digestive Endoscopic Center of West China Hospital, Sichuan University, Chengdu, China
