Wide-Area Shape Reconstruction by 3D Endoscopic System Based on CNN Decoding, Shape Registration and Fusion

  • Ryo Furukawa (email author)
  • Masaki Mizomori
  • Shinsaku Hiura
  • Shiro Oka
  • Shinji Tanaka
  • Hiroshi Kawasaki
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11041)

Abstract

For effective in situ endoscopic diagnosis and treatment, dense and wide-area shape reconstruction is important. For this purpose, we have developed 3D endoscopic systems based on active stereo, which project a grid pattern whose grid points are coded by line gaps. One problem with previous systems was that the success or failure of 3D reconstruction depended on the stability of feature extraction from the images captured by the endoscope camera; subsurface scattering and specularities on bio-tissues make this extraction difficult. Another problem was that the reconstructed area was relatively small, because the field of view of the pattern projector is limited compared to that of the camera. In this paper, to solve the first problem, we propose a learning-based approach using U-Nets for robust detection of grid lines, and of the codes at the detected grid points, under such severe conditions. To solve the second problem, we propose an online shape-registration and merging algorithm for sequential frames. In the experiments, we show that U-Nets can be trained to extract these features effectively for three cancer specimens, and we also conduct 3D scanning of a stomach phantom model and of a surface inside a human mouth, in which wide-area surfaces are successfully recovered by shape registration and merging.
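The online registration step aligns each newly reconstructed frame against the accumulated surface before merging. As a rough illustration only, the following is a minimal point-to-point ICP sketch in NumPy (function names are hypothetical; it uses brute-force nearest-neighbour matching and a closed-form Kabsch/SVD alignment, whereas the paper's actual pipeline handles partial overlap and performs volumetric merging):

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping paired
    points src -> dst (Kabsch/Umeyama solution via SVD)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

def icp(src, dst, iters=20):
    """Point-to-point ICP: alternate nearest-neighbour matching and
    closed-form rigid alignment; returns the aligned copy of src."""
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest neighbours (a k-d tree would be used in practice)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        matched = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur
```

With a reasonable initial pose (e.g. the previous frame's estimate, as in sequential endoscopic scanning), a few such iterations suffice to register consecutive partial shapes before fusion.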

Acknowledgment

This work was supported by JSPS/KAKENHI 16H02849, 16KK0151, 18H04119, 18K19824, and MSRA CORE14.


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Ryo Furukawa (1) (email author)
  • Masaki Mizomori (1)
  • Shinsaku Hiura (1)
  • Shiro Oka (2)
  • Shinji Tanaka (2)
  • Hiroshi Kawasaki (3)
  1. Hiroshima City University, Hiroshima, Japan
  2. Hiroshima University Hospital, Hiroshima, Japan
  3. Kyushu University, Fukuoka, Japan
