Dense Hybrid Recurrent Multi-view Stereo Net with Dynamic Consistency Checking

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12349)

Abstract

In this paper, we propose an efficient and effective dense hybrid recurrent multi-view stereo net with dynamic consistency checking, namely \(D^{2}\)HC-RMVSNet, for accurate dense point cloud reconstruction. Our novel hybrid recurrent multi-view stereo net consists of two core modules: 1) a lightweight DRENet (Dense Reception Expanded) module that extracts dense feature maps at the original resolution with multi-scale context information, and 2) an HU-LSTM (Hybrid U-LSTM) module that regularizes the 3D matching volume into a predicted depth map, efficiently aggregating multi-scale information by coupling an LSTM with a U-Net architecture. To further improve the accuracy and completeness of the reconstructed point clouds, we leverage a dynamic consistency checking strategy instead of the fixed parameters and strategies widely adopted by existing methods for dense point cloud reconstruction: we dynamically aggregate the geometric consistency matching error over all views. Our method ranks \(1^{st}\) among all methods on the complex outdoor Tanks and Temples benchmark. Extensive experiments on the indoor DTU dataset show that our method performs competitively with the state of the art while dramatically reducing memory consumption, requiring only \(19.4\%\) of the memory of R-MVSNet. The codebase is available at https://github.com/yhw-yhw/D2HC-RMVSNet.
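To make the recurrent regularization concrete, the following is a minimal PyTorch sketch of the idea behind HU-LSTM: a convolutional LSTM cell sweeps the matching volume one depth plane at a time, so only a 2D cost slice (plus the hidden state) needs to be resident in memory per step, which is what makes the recurrent design memory-frugal. The module names, channel sizes, and the `regularize_volume` helper are illustrative assumptions, not the authors' implementation; in particular, the U-Net-style multi-scale coupling inside HU-LSTM is omitted here for brevity.

```python
# Minimal sketch of ConvLSTM-based cost-volume regularization in the spirit
# of HU-LSTM. All names and sizes are illustrative, not the paper's code.
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    def __init__(self, in_ch, hidden_ch):
        super().__init__()
        # One convolution produces the input/forget/output/candidate gates.
        self.gates = nn.Conv2d(in_ch + hidden_ch, 4 * hidden_ch, 3, padding=1)
        self.hidden_ch = hidden_ch

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)

def regularize_volume(cost_slices, cell, head):
    """cost_slices: iterable of [B, C, H, W] per-depth-plane matching costs."""
    state, logits = None, []
    for x in cost_slices:                       # sweep the depth dimension
        if state is None:
            zeros = x.new_zeros(x.size(0), cell.hidden_ch, *x.shape[2:])
            state = (zeros, zeros)
        h, state = cell(x, state)
        logits.append(head(h))                  # 1-channel score per plane
    # Softmax over depth yields a probability volume for depth regression.
    return torch.softmax(torch.cat(logits, dim=1), dim=1)
```

With, for example, `cell = ConvLSTMCell(32, 32)` and `head = nn.Conv2d(32, 1, 3, padding=1)`, the resulting per-plane probabilities can be combined with the depth hypotheses (soft argmin) to regress the final depth map.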
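The dynamic consistency checking can likewise be sketched in a few lines. Rather than keeping a pixel only when a fixed number of views pass hard reprojection and depth-difference thresholds, every view contributes a continuous geometric error, and the score aggregated over all views decides the mask. The error weighting and threshold below are illustrative placeholders, not the paper's tuned values.

```python
# Hedged sketch of dynamic consistency checking for depth-map fusion.
import torch

def dynamic_consistency_mask(reproj_err, depth_err, tau=1.0):
    """
    reproj_err: [V, H, W] pixel reprojection error of the reference depth
                checked against each of V source views.
    depth_err:  [V, H, W] relative depth difference for the same checks.
    Returns a boolean [H, W] mask of geometrically consistent pixels.
    """
    # Per-view matching error: small when reprojection and depth both agree.
    per_view = reproj_err + 100.0 * depth_err   # weight is illustrative
    # Dynamic aggregation: average the error over *all* views instead of
    # counting how many views pass a pre-defined hard threshold.
    score = per_view.mean(dim=0)
    return score < tau
```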

Keywords

Multi-view stereo · Deep learning · Dense hybrid recurrent-MVSNet · Dynamic consistency checking

Acknowledgements

This project was supported by the National Key R&D Program of China (No. 2017YFB1002705, No. 2017YFB1002601) and NSFC of China (No. 61632003, No. 61661146002, No. 61872398).

Supplementary material

Supplementary material 1 (pdf, 14019 KB): 504439_1_En_39_MOESM1_ESM.pdf

Supplementary material 2 (mp4, 80238 KB)


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Peking University, Beijing, China
  2. HKU, Pokfulam, Hong Kong
  3. Tencent, Shenzhen, China
  4. Kwai Inc., Beijing, China