VideoPlus: A Method for Capturing the Structure and Appearance of Immersive Environments

  • Camillo J. Taylor
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2018)

Abstract

This paper describes an approach to capturing the appearance and structure of immersive environments based on video imagery obtained with an omnidirectional camera system. The scheme proceeds by recovering the 3D positions of a set of point and line features in the world from image correspondences in a small set of key frames of the sequence. Once the locations of these features have been recovered, the position of the camera in every frame of the sequence can be determined by treating the recovered features as fiducials and estimating camera pose from the locations of the corresponding image features in each frame. The end result of the procedure is an omnidirectional video sequence in which every frame is augmented with its pose with respect to an absolute reference frame, together with a 3D model of the environment composed of the point and line features in the scene.
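The per-frame pose estimation step can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes the recovered fiducials are known 3D points, that each frame yields observed unit ray directions toward them (natural for an omnidirectional camera), and it recovers rotation and translation by nonlinear least squares over an axis-angle parameterization. The helper names (`rodrigues`, `estimate_pose`) are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

def rodrigues(w):
    """Axis-angle vector -> 3x3 rotation matrix."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    k = w / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def estimate_pose(points_3d, rays, x0=None):
    """Estimate camera pose (R, t) from known 3D fiducials and the observed
    unit ray directions toward them, by minimizing the difference between
    observed and predicted directions on the view sphere.

    points_3d : (N, 3) fiducial positions in the world frame
    rays      : (N, 3) observed unit directions in the camera frame
    Returns (R, t) with camera-frame direction of X given by R^T (X - t).
    """
    def residuals(x):
        R = rodrigues(x[:3])
        t = x[3:]
        pred = (points_3d - t) @ R          # rows are R^T (X_i - t)
        pred = pred / np.linalg.norm(pred, axis=1, keepdims=True)
        return (pred - rays).ravel()

    x0 = np.zeros(6) if x0 is None else x0
    sol = least_squares(residuals, x0)
    return rodrigues(sol.x[:3]), sol.x[3:]
```

In practice such a solver would be initialized from the pose of a neighboring frame rather than from the identity, since consecutive frames of a video sequence have nearby poses.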

By augmenting the video clip with pose information, we give the viewer the ability to navigate the image sequence in new and interesting ways. More specifically, the user can use the pose information to travel through the video sequence along a trajectory different from the one taken by the original camera operator. This freedom presents the end user with an opportunity to immerse themselves in a remote environment.
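One simple way such navigation can work, sketched below under assumptions not spelled out in the abstract: given the recovered camera center for every frame, a virtual trajectory can be rendered by looking up, for each requested viewpoint, the captured frame whose pose is nearest. The function name is hypothetical.

```python
import numpy as np

def nearest_frame(frame_positions, viewpoint):
    """Return the index of the captured frame whose recovered camera center
    is closest to the requested virtual viewpoint.

    frame_positions : (N, 3) camera centers, one per video frame
    viewpoint       : (3,) desired position along the virtual trajectory
    """
    dists = np.linalg.norm(frame_positions - viewpoint, axis=1)
    return int(np.argmin(dists))
```

Because the source imagery is omnidirectional, the chosen frame can then be re-rendered toward any desired viewing direction, which is what makes trajectories other than the operator's feasible.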

Keywords

Mobile Robot · Point Feature · Video Sequence · Panoramic Image · Camera Frame



Copyright information

© Springer-Verlag Berlin Heidelberg 2001

Authors and Affiliations

  • Camillo J. Taylor — GRASP Laboratory, CIS Department, University of Pennsylvania, Rm 335C, Philadelphia
