
Dense Depth Estimation from Stereo Endoscopy Videos Using Unsupervised Optical Flow Methods

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12722)

Abstract

In the context of Minimally Invasive Surgery, estimating depth from stereo endoscopy plays a crucial role in three-dimensional (3D) reconstruction, surgical navigation, and augmented reality (AR) visualization. However, the challenges associated with this task are three-fold: 1) featureless surfaces, often polluted by artifacts, make correspondences difficult to identify; 2) ground-truth depth is difficult to acquire; and 3) endoscopic images are rarely acquired with accurately calibrated camera parameters, as the camera is often adjusted during an intervention. To address these difficulties, we propose an unsupervised depth estimation framework (END-flow) built on an unsupervised optical flow network trained on un-rectified binocular videos without calibrated camera parameters. The proposed END-flow architecture is compared with traditional stereo matching, self-supervised depth estimation, unsupervised optical flow, and supervised methods on the Stereo Correspondence and Reconstruction of Endoscopic Data (SCARED) Challenge dataset. Experimental results show that our method outperforms several state-of-the-art techniques and achieves performance close to that of supervised methods.
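The training signal behind this family of methods can be made concrete. Since no ground-truth depth is available, an unsupervised flow network is typically supervised by view synthesis: one stereo view is warped into the other with the predicted flow, and the photometric reconstruction error is penalized. The snippet below is a minimal illustrative sketch in PyTorch of that idea, not the authors' END-flow implementation; `flow_net`, the tensor conventions, and the pure-L1 photometric term are assumptions standing in for whatever the paper actually uses.

```python
# Minimal sketch (assumed, not the authors' END-flow code): an
# unsupervised view-synthesis loss for a stereo optical flow network.
import torch
import torch.nn.functional as F


def warp_right_to_left(right, flow):
    """Backward-warp the right image into the left view using a dense
    left-to-right flow field of shape (B, 2, H, W)."""
    _, _, h, w = right.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=right.device, dtype=right.dtype),
        torch.arange(w, device=right.device, dtype=right.dtype),
        indexing="ij",
    )
    # Displace each left-image pixel by the predicted flow vector.
    grid_x = xs.unsqueeze(0) + flow[:, 0]
    grid_y = ys.unsqueeze(0) + flow[:, 1]
    # grid_sample expects sampling coordinates normalized to [-1, 1].
    grid = torch.stack(
        (2.0 * grid_x / (w - 1) - 1.0, 2.0 * grid_y / (h - 1) - 1.0),
        dim=-1,
    )
    return F.grid_sample(right, grid, align_corners=True)


def photometric_loss(left, right, flow):
    """L1 reconstruction error between the left image and the warped
    right image; practical systems usually blend this with an SSIM
    term and a flow-smoothness prior."""
    return (left - warp_right_to_left(right, flow)).abs().mean()


# Illustrative training step, assuming a hypothetical `flow_net`:
#   flow = flow_net(left, right)            # (B, 2, H, W)
#   loss = photometric_loss(left, right, flow)
#   loss.backward()
```

For rectified stereo, the horizontal flow component reduces to the disparity d, and metric depth follows as depth = f * B / d given focal length f and baseline B; for the un-rectified videos targeted here, the full two-dimensional flow must instead be triangulated against whatever camera model is available.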

Keywords

Stereo endoscopy · Depth estimation · Self-supervised learning · Stereo matching · Optical flow


Copyright information

© Springer Nature Switzerland AG 2021

Authors and Affiliations

  1. Center for Imaging Science, Rochester Institute of Technology, Rochester, USA
  2. Biomedical Engineering, Rochester Institute of Technology, Rochester, USA
  3. Electrical, Computer and Telecommunications Engineering Technology, Rochester Institute of Technology, Rochester, USA
