Generalizing the Virtual Camera Pose for View Synthesis

  • Enric X. Martín
  • Antonio B. Martínez
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2749)

Abstract

View synthesis requires estimating the image that a scene would project to a viewpoint where no real camera has been placed. Many methods have been developed, and three-view rectification is among the most widely used; nevertheless, it has some restrictions. A singularity arises when the plane containing the foci of the three cameras involved in the process is parallel to the viewing direction of any of the cameras. This paper analyzes the geometry of the method and gives an analytical way to avoid the singularities in the position of the virtual camera. This allows us to obtain a synthetic view from a previously forbidden viewpoint and to automate the process towards fast software or hardware implementations.
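The singular configuration described in the abstract can be checked numerically: the method degenerates when the plane through the three camera foci is parallel to some camera's viewing direction, i.e. when that direction is (nearly) perpendicular to the plane's normal. The sketch below is illustrative only and not the paper's method; the function names and the tolerance are assumptions.

```python
import numpy as np

def trifocal_plane_normal(c1, c2, c3):
    """Unit normal of the plane through the three camera centers."""
    n = np.cross(c2 - c1, c3 - c1)
    return n / np.linalg.norm(n)

def is_singular(c1, c2, c3, view_dirs, tol=1e-3):
    """True if the plane of the three foci is (nearly) parallel to
    any camera's viewing direction.

    Parallelism means the viewing direction lies in the plane, so its
    dot product with the plane normal is close to zero.
    """
    n = trifocal_plane_normal(c1, c2, c3)
    return any(abs(np.dot(n, d / np.linalg.norm(d))) < tol
               for d in view_dirs)
```

For example, three cameras placed in the z = 0 plane looking along the x axis are singular (the viewing direction lies in the plane of the foci), while the same cameras looking along the z axis are not.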

Keywords

Vector Versus, View Direction, Mixed Reality, Virtual View, Virtual Camera
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

Copyright information

© Springer-Verlag Berlin Heidelberg 2003

Authors and Affiliations

  • Enric X. Martín¹
  • Antonio B. Martínez¹
  1. Department of System Engineering, Polytechnic University of Catalonia, Barcelona, Spain