Uninformative Frame Detection in Colonoscopy Through Motion, Edge and Color Features

  • Mohammad Ali Armin
  • Girija Chetty
  • Jurgen Fripp
  • Hans De Visser
  • Cedric Dumas
  • Amir Fazlollahi
  • Florian Grimpen
  • Olivier Salvado
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9515)

Abstract

Colonoscopy is performed using a long endoscope inserted into the colon of a patient to inspect the internal mucosa. During the intervention, clinicians observe the colon under bright light to diagnose pathology and guide intervention. We are developing a computer-aided system to facilitate navigation and diagnosis. One essential step is to estimate the camera pose relative to the colon from video frames. However, every colonoscopy video contains a large number of frames that provide no structural information (e.g. blurry or out-of-focus frames, or frames too close to the colon wall), which hampers our camera pose estimation algorithm. To distinguish uninformative frames from informative ones, we investigated several features computed from each frame: corner and edge features matched with the previous frame, the percentage of edge pixels, and the mean and standard deviation of intensity in hue-saturation-value (HSV) color space. A Random Forest classifier was used for classification. The method was validated on four colonoscopy videos that were manually classified. The resulting classification had a sensitivity of 75% and a specificity of 97% for detecting uninformative frames. The proposed features not only compared favorably to existing techniques for detecting uninformative frames, but can also be utilized for camera navigation.
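The per-frame features described above can be sketched in a few lines of NumPy. This is only an illustration of the idea, not the authors' implementation: the simple gradient threshold stands in for the Canny edge detector the paper uses, the crude RGB-to-HSV conversion stands in for a proper color-space transform, and the single-threshold rule in `is_uninformative` stands in for the trained Random Forest classifier; all function names and thresholds are hypothetical.

```python
import numpy as np

def hsv_stats(frame_rgb):
    """Mean/std of value and saturation from a crude RGB->HSV conversion."""
    rgb = frame_rgb.astype(float) / 255.0
    v = rgb.max(axis=2)                                   # HSV value channel
    s = np.where(v > 0, (v - rgb.min(axis=2)) / np.maximum(v, 1e-8), 0.0)
    return v.mean(), v.std(), s.mean(), s.std()

def edge_pixel_fraction(frame_rgb, thresh=0.2):
    """Fraction of pixels whose gradient magnitude exceeds a threshold
    (a stand-in for the percentage of Canny edge pixels)."""
    gray = frame_rgb.astype(float).mean(axis=2) / 255.0
    gy, gx = np.gradient(gray)
    mag = np.hypot(gx, gy)
    return (mag > thresh).mean()

def is_uninformative(frame_rgb, min_edges=0.01):
    # Hypothetical rule replacing the Random Forest: a frame with almost
    # no edges (blurry, or pressed against the colon wall) offers no
    # structure for pose estimation.
    return edge_pixel_fraction(frame_rgb) < min_edges
```

In the paper these features (together with corner/edge matches against the previous frame) would instead be fed as a feature vector to a trained Random Forest.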

Keywords

Optical colonoscopy · Uninformative frames · Colonoscopy quality · Feature · Random Forest


Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Mohammad Ali Armin (1, 2)
  • Girija Chetty (1)
  • Jurgen Fripp (2)
  • Hans De Visser (2)
  • Cedric Dumas (2)
  • Amir Fazlollahi (2)
  • Florian Grimpen (3)
  • Olivier Salvado (2)
  1. Department of Computer Science, University of Canberra, Canberra, Australia
  2. CSIRO Biomedical Informatics, The Australian e-Health Research Centre, Herston, Brisbane, Australia
  3. Department of Gastroenterology and Hepatology, Royal Brisbane and Women’s Hospital, Herston, Australia
