Virtual Reality, Volume 19, Issue 2, pp 83–94

AR image generation using view-dependent geometry modification and texture mapping

  • Yuta Nakashima
  • Yusuke Uno
  • Norihiko Kawai
  • Tomokazu Sato
  • Naokazu Yokoya
Original Article

Abstract

Augmented reality (AR) applications often require virtualized real objects, i.e., virtual objects built from real objects and rendered from an arbitrary viewpoint. In this paper, we propose a method for real object virtualization and AR image generation based on view-dependent geometry modification and texture mapping. The proposed method is a hybrid of model- and image-based rendering techniques that uses multiple input images of the real object together with the object's three-dimensional (3D) model obtained by an automatic 3D reconstruction technique. Even with state-of-the-art techniques, the accuracy of the reconstructed 3D model can be insufficient, resulting in visual artifacts such as false object boundaries. The proposed method generates a depth map from the 3D model of the virtualized real object and expands the object's region in the depth map to remove these false boundaries. Since this expansion exposes background pixels in the input images, which is particularly undesirable for AR applications, we extract the object regions from the input images in advance and use only those regions for texture mapping. With our GPU implementation for real-time AR image generation, we experimentally demonstrate that using the expanded geometry reduces the number of required input images while maintaining visual quality.
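To make the boundary-expansion step concrete, the following is a minimal Python sketch of the idea described above, assuming the object's region in the depth map is grown by morphological dilation and that pre-extracted object regions are available as binary masks. The function and parameter names (expand_object_depth, sample_object_texture, n_iterations) are illustrative assumptions, not the authors' GPU implementation.

import numpy as np
from scipy import ndimage


def expand_object_depth(depth, n_iterations=3):
    """Expand the object's region in a depth map to push out false boundaries.

    depth: 2D array of per-pixel depth values, with NaN marking pixels the
    reconstructed 3D model does not cover (i.e., background).
    """
    object_mask = ~np.isnan(depth)

    # Grow the object silhouette outward; the iteration count controls how
    # far the false boundary is pushed away from the true one.
    expanded_mask = ndimage.binary_dilation(object_mask, iterations=n_iterations)

    # Fill the newly covered ring with depth propagated from the nearest
    # original object pixel, keeping the expanded silhouette plausible.
    _, (iy, ix) = ndimage.distance_transform_edt(~object_mask, return_indices=True)
    expanded_depth = np.where(expanded_mask, depth[iy, ix], np.nan)
    return expanded_depth, expanded_mask


def sample_object_texture(image, object_region, u, v):
    """Sample input-image colors only inside a pre-extracted object region,
    so background pixels never appear on the expanded geometry.

    object_region: binary mask from a prior segmentation of the input image.
    u, v: integer pixel coordinates of the texture lookups.
    """
    valid = object_region[v, u]
    colors = np.zeros((u.size, 3), dtype=image.dtype)
    colors[valid] = image[v[valid], u[valid]]
    return colors, valid


# Toy usage: an 80 x 80 pixel object at 1.5 m depth, expanded by 5 pixels.
depth = np.full((480, 640), np.nan)
depth[200:280, 300:380] = 1.5
expanded_depth, mask = expand_object_depth(depth, n_iterations=5)

In the full system, this expansion would presumably run per view on the GPU; the dilation radius trades off boundary-artifact removal against geometric distortion.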

Keywords

View-dependent geometry modification · View-dependent texture mapping · Augmented reality · Free-viewpoint image generation

Acknowledgments

This work was partly supported by the Japan Society for the Promotion of Science (JSPS) KAKENHI Grant No. 23240024.

Supplementary material

Supplementary material 1 (MP4, 80,974 KB)

Copyright information

© Springer-Verlag London 2015

Authors and Affiliations

  • Yuta Nakashima (1) (corresponding author)
  • Yusuke Uno (1)
  • Norihiko Kawai (1)
  • Tomokazu Sato (1)
  • Naokazu Yokoya (1)

  1. Nara Institute of Science and Technology (NAIST), Ikoma, Japan