
3D Reconstruction with Multi-view Texture Mapping

  • Xiaodan Ye
  • Lianghao Wang
  • Dongxiao Li
  • Ming Zhang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10636)

Abstract

In this paper, a novel 3D reconstruction method with multi-view texture mapping based on Kinect 2 is proposed. The camera poses of all chosen key frames are optimized for photometric consistency, so that the projections of each mesh vertex into different views agree more closely; the optimization only searches a small range of translations and therefore requires limited computation. A new form of the data term and smoothness term of the Markov Random Field (MRF) objective function is presented. Outlier images are rejected before view selection, and Poisson blending is applied at the end. Experimental results show that our method achieves a high-quality 3D model with high-fidelity texture.
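
The abstract describes the pose-refinement step only briefly. As a rough illustration (not the authors' implementation), the sketch below searches a small grid of translation offsets for one key frame and scores each candidate by a simple photometric-consistency cost between per-vertex colours and the image pixels those vertices project to. All names and parameters (project, photometric_cost, K, R, t, step, radius) are assumptions introduced for the example.

```python
# Minimal sketch, assuming: a triangle mesh given as Nx3 'vertices' with
# reference per-vertex colours 'vertex_colors', a key-frame image 'image'
# (HxWx3, same colour scale as vertex_colors), intrinsics K and an initial
# pose (R, t). Not the paper's exact algorithm.
import itertools
import numpy as np

def project(vertices, K, R, t):
    """Project Nx3 world-space vertices to pixel coordinates and depths."""
    cam = vertices @ R.T + t            # world -> camera coordinates
    pix = cam @ K.T                     # camera -> homogeneous pixel coords
    return pix[:, :2] / pix[:, 2:3], cam[:, 2]

def photometric_cost(image, vertices, vertex_colors, K, R, t):
    """Mean colour disagreement between vertex colours and the pixels they hit."""
    uv, depth = project(vertices, K, R, t)
    h, w = image.shape[:2]
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, w - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, h - 1)
    visible = depth > 0                  # crude visibility test for the sketch
    diff = image[v[visible], u[visible]].astype(float) - vertex_colors[visible]
    return np.mean(np.abs(diff))

def refine_translation(image, vertices, vertex_colors, K, R, t,
                       step=0.005, radius=1):
    """Search a small 3D grid of translation offsets and keep the best one."""
    offsets = [np.array(o, dtype=float) * step
               for o in itertools.product(range(-radius, radius + 1), repeat=3)]
    costs = [photometric_cost(image, vertices, vertex_colors, K, R, t + o)
             for o in offsets]
    return t + offsets[int(np.argmin(costs))]
```

In the paper the refinement is driven by consistency across all selected views; the sketch scores a single frame only, to keep the example short.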

Keywords

Camera pose optimization · Markov Random Field (MRF) · Texture mapping

Acknowledgments

This work is supported in part by the National Natural Science Foundation of China (Grant No. 61401390).


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
  2. Zhejiang Provincial Key Laboratory of Information Processing, Communication and Networking, Hangzhou, China
  3. State Key Lab for Novel Software Technology, Nanjing University, Nanjing, China