
Feature detection in biological tissues using multi-band and narrow-band imaging

  • Yuki Tamura
  • Tomohiro Mashita
  • Yoshihiro Kuroda
  • Kiyoshi Kiyokawa
  • Haruo Takemura
Original Article

Abstract

Purpose

In the past decade, augmented reality systems have been expected to support surgical operations by making it possible to view objects that are hidden inside, or occluded by, the skull, hands, or organs. However, biological tissues are non-rigid and largely featureless, so tracking their movement in detail requires a large number of well-distributed feature points.

Methods

With the goal of increasing the number of feature points for organ tracking, we propose a feature detection method that uses multi-band and narrow-band imaging, together with a new band selection method. The depth to which light penetrates an object depends on the wavelength, owing to the object's optical characteristics. We applied typical feature detectors to images of a human hand captured in three selected bands. To account for surgical situations, we also applied our method to a chicken liver under a variety of lighting conditions.
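The core idea above — detect features in each selected band separately, then pool them so that bands revealing different depths contribute different points — can be sketched as follows. This is an illustrative toy, not the authors' implementation: the detector here is a simple gradient-magnitude local-maximum finder standing in for SIFT/SURF/KLT, and `pool_features` is a hypothetical helper that merges per-band keypoints while discarding near-duplicates.

```python
import numpy as np

def detect_local_maxima(band, thresh=0.1):
    """Toy detector: return (row, col) of pixels whose gradient magnitude
    is a strict local maximum over the 8-neighbourhood and above `thresh`.
    A real system would use SIFT, SURF, FAST, or the KLT detector here."""
    gy, gx = np.gradient(band.astype(float))
    mag = np.hypot(gx, gy)
    pts = []
    H, W = mag.shape
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            patch = mag[y - 1:y + 2, x - 1:x + 2]
            m = patch.max()
            # strict, unique maximum of the 3x3 patch, above threshold
            if mag[y, x] >= thresh and mag[y, x] == m \
                    and np.count_nonzero(patch == m) == 1:
                pts.append((y, x))
    return pts

def pool_features(bands, detector, min_dist=2):
    """Union of keypoints detected in each narrow-band image, dropping a
    point that falls within Manhattan distance `min_dist` of one already
    kept (points from earlier bands win)."""
    kept = []
    for band in bands:
        for p in detector(band):
            if all(abs(p[0] - q[0]) + abs(p[1] - q[1]) > min_dist
                   for q in kept):
                kept.append(p)
    return kept
```

Because each band image captures structure at a different penetration depth, a feature visible in one band may be absent from another; pooling therefore yields more, and more widely distributed, keypoints than any single band alone.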

Results

Our experimental results revealed that the image of each band exhibited a different distribution of feature points. In addition, the total number of feature points obtained by the proposed method exceeded that of the R, G, and B images from a normal camera. The results for a chicken liver under various light sources and intensities likewise showed a different distribution for each selected band.

Conclusions

We have proposed a feature detection method that uses multi-band and narrow-band imaging, together with a band selection method. Our experiments confirmed that the proposed method increases the number of well-distributed feature points and remains effective under different lighting conditions.

Keywords

Multi-band imaging · Narrow-band imaging · Biological tissues · Feature detection · Augmented reality

Notes

Compliance with ethical standards

Conflicts of interest

We have no conflict-of-interest relationship with any companies or commercial organizations, based on the definition of the Japanese Society of Medical and Biological Engineering.


Copyright information

© CARS 2016

Authors and Affiliations

  • Yuki Tamura (1)
  • Tomohiro Mashita (1, 2)
  • Yoshihiro Kuroda (3)
  • Kiyoshi Kiyokawa (1, 2)
  • Haruo Takemura (1, 2)
  1. Graduate School of Information Science and Technology, Osaka University, Suita City, Japan
  2. Cybermedia Center, Osaka University, Toyonaka City, Japan
  3. Graduate School of Engineering Science, Osaka University, Toyonaka City, Japan
