A robust method to track colonoscopy videos with non-informative images

  • Jianfei Liu
  • Kalpathi R. Subramanian
  • Terry S. Yoo
Original Article

Abstract

Purpose

Continuous alignment of optical and virtual images can significantly enhance the clinical value of colonoscopy. However, the co-alignment process is frequently interrupted by non-informative images. A video tracking framework was developed and tested to continuously track optical colonoscopy images across such interruptions.

Methods

A video tracking framework with immunity to non-informative images was developed around three essential components: temporal volume flow, region flow, and incremental egomotion estimation. Temporal volume flow selects two similar images bracketing a run of non-informative images; region flow measures the large visual motion between the selected images; and incremental egomotion estimation recovers the correspondingly large camera motion by decomposing each large visual motion vector into a sequence of small optical flow vectors. The framework was extensively evaluated on phantom and clinical colonoscopy image sequences. Two colon-like phantoms, one straight and one curved, were constructed to measure actual colonoscope motion.
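
To make the three-stage pipeline concrete, the following is a minimal sketch of how such a tracker could be organized. It is not the authors' implementation: the Laplacian-variance test for non-informative frames, OpenCV's Farneback flow used as a stand-in for region flow, the fixed n_steps decomposition, and all thresholds are illustrative assumptions.

```python
# Hypothetical sketch of the three-stage pipeline; all names, thresholds,
# and OpenCV stand-ins are assumptions for illustration only.
import cv2
import numpy as np

def is_informative(gray, blur_thresh=60.0):
    # Assumed proxy: frames blurred by fluid or wall contact have low
    # Laplacian variance and are treated as non-informative.
    return cv2.Laplacian(gray, cv2.CV_64F).var() > blur_thresh

def bridge_pairs(frames):
    # Temporal-volume-flow stand-in: pair each informative grayscale frame
    # with the next informative one, skipping non-informative runs between.
    idx = [i for i, f in enumerate(frames) if is_informative(f)]
    return list(zip(idx, idx[1:]))

def region_flow(img_a, img_b):
    # Region-flow stand-in: dense flow between two possibly distant frames.
    # (The paper's region flow targets larger displacements than Farneback.)
    return cv2.calcOpticalFlowFarneback(img_a, img_b, None,
                                        0.5, 3, 31, 5, 7, 1.5, 0)

def incremental_egomotion(img_a, img_b, K, n_steps=4):
    # Incremental stand-in: split each large motion vector into n_steps
    # small sub-displacements and chain an essential-matrix pose per step.
    flow = region_flow(img_a, img_b)
    h, w = img_a.shape
    ys, xs = np.mgrid[0:h:20, 0:w:20]
    p0 = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float64)
    step = flow[ys, xs].reshape(-1, 2).astype(np.float64) / n_steps
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(n_steps):
        p1 = p0 + step
        E, inliers = cv2.findEssentialMat(p0, p1, K, cv2.RANSAC, 0.999, 1.0)
        if E is None or E.shape != (3, 3):
            p0 = p1
            continue
        _, R, t, _ = cv2.recoverPose(E, p0, p1, K, mask=inliers)
        t_total += R_total @ t.ravel()  # translation direction only (no scale)
        R_total = R @ R_total
        p0 = p1
    return R_total, t_total
```

Note that monocular egomotion of this kind recovers translation only up to scale; reporting millimeter-level errors, as in the phantom experiments below, requires an external scale reference.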

Results

In the straight phantom, after 48 frames were excluded, the tracking error was less than 3 mm over 16 mm of travel. In the curved phantom, the error was less than 4 mm over 23.88 mm of travel after 72 frames were excluded. The robustness of the tracking framework was further demonstrated on 30 clinical colonoscopy image sequences from 22 different patients. Four of these sequences were chosen to illustrate the algorithm's decreased sensitivity to (1) fluid immersion, (2) wall contact, (3) surgery-induced colon deformation, and (4) multiple non-informative image sequences.
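
Read against the traveled distances, these error bounds correspond to relative tracking errors of roughly (simple arithmetic on the reported numbers, not values quoted from the paper):

\[
\frac{3\,\mathrm{mm}}{16\,\mathrm{mm}} \approx 18.8\%\ \text{(straight)}, \qquad
\frac{4\,\mathrm{mm}}{23.88\,\mathrm{mm}} \approx 16.8\%\ \text{(curved)}.
\]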

Conclusion

A robust tracking framework for real-time colonoscopy was developed that facilitates continuous alignment of optical and virtual images and is immune to non-informative images entering the video stream. The framework was validated in phantom testing and performed successfully on clinical image sequences.

Keywords

Colonoscopy · Tracking · Region flow · Temporal volume flow · Egomotion

Acknowledgments

The authors thank the Walter Reed Army Medical Center and the National Cancer Institute for providing optical colonoscopy videos and corresponding CT scans. These data were used for a virtual colonoscopy study, and all subjects gave their informed consent prior to their inclusion in the study.

Conflict of interest

The authors declare that they have no conflict of interest.

Supplementary material

ESM 1 (MOV 16387 kb)

ESM 2 (PDF 250 kb)


Copyright information

© CARS 2013

Authors and Affiliations

  • Jianfei Liu (1, corresponding author)
  • Kalpathi R. Subramanian (2)
  • Terry S. Yoo (3)
  1. Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, USA
  2. Department of Computer Science, The University of North Carolina at Charlotte, Charlotte, USA
  3. Office of High Performance Computing and Communications, National Library of Medicine, National Institutes of Health, Bethesda, USA