Abstract

Omni-directional stereoscopic images consist of two omni-directional panoramic images, one for the left eye and one for the right eye. Such a panoramic stereoscopic pair provides stereo sensation over a full 360° view. These omni-directional stereoscopic images cannot be produced simply by placing two omni-directional cameras at two viewpoints, but they can be constructed by mosaicking together omni-directional images captured from four different positions around the user's position. This paper presents a new technique for producing high-resolution omni-directional stereoscopic images from a single omni-directional sensor attached to a high-resolution digital still camera. The technique requires fewer photographs and less time for the mosaicking process than previous approaches. A new display system is implemented in a CAVE to let the user interact with the stereoscopic images.
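
For readers unfamiliar with stereo panorama mosaicking, the sketch below illustrates the general strip-mosaicking idea in Python/NumPy: off-centre vertical strips cut from a sequence of images captured during a rotation are concatenated into separate left-eye and right-eye cylindrical panoramas. This is a minimal illustration under assumed settings, not the pipeline described in the paper; the function name omnistereo_panoramas and the parameters strip_width and eye_offset are hypothetical choices made for the example.

    import numpy as np

    def omnistereo_panoramas(frames, strip_width=8, eye_offset=40):
        """Build left- and right-eye cylindrical panoramas from a rotation sequence.

        Illustrative sketch only (not the paper's method): frames is a list of
        H x W x 3 images taken while the camera rotates about a vertical axis.
        A vertical strip to the right of the image centre feeds one eye's
        panorama and the mirrored strip to the left feeds the other, which
        approximates two viewpoints on a viewing circle. Which strip maps to
        which eye depends on the rotation direction; strip_width and eye_offset
        are assumed tuning parameters in pixels, not values from the paper.
        """
        h, w, _ = frames[0].shape
        cx = w // 2
        left_strips, right_strips = [], []
        for f in frames:
            # Strip offset to the right of centre for one eye ...
            left_strips.append(f[:, cx + eye_offset : cx + eye_offset + strip_width])
            # ... and the mirrored strip to the left of centre for the other eye.
            right_strips.append(f[:, cx - eye_offset - strip_width : cx - eye_offset])
        # Concatenating the strips in capture order yields full 360-degree panoramas.
        return np.hstack(left_strips), np.hstack(right_strips)

    if __name__ == "__main__":
        # Synthetic stand-in for a capture sweep: 90 random frames of 480 x 640 pixels.
        rng = np.random.default_rng(0)
        frames = [rng.integers(0, 255, (480, 640, 3), dtype=np.uint8) for _ in range(90)]
        left_pano, right_pano = omnistereo_panoramas(frames)
        print(left_pano.shape, right_pano.shape)  # (480, 720, 3) each with these settings

With real input, the frames would come from a calibrated capture and adjacent strips would be blended rather than simply concatenated; the example only conveys why strips taken from offset viewpoints yield a 360° stereo pair.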

Author information

Corresponding author

Correspondence to Vajirasak Vanijja.

Additional information

Vajirasak Vanijja received the B.Eng. and M.Sc. degrees from King Mongkut's University of Technology Thonburi (KMUTT), Bangkok, Thailand, in 1996 and 1998, respectively. He received the Ph.D. in Computer Science from the Japan Advanced Institute of Science and Technology (JAIST) in 2004.

He was a faculty member of the School of Information Technology, KMUTT, from 1998 to 2001, responsible for industrial and academic research projects in multimedia, computer graphics, and image processing. In 2001, he joined the Multimedia Integral System Laboratory, School of Information Science, JAIST, Ishikawa, Japan, where he conducted research on omni-directional image applications and image-based virtual reality. After completing the Ph.D. in 2004, he rejoined the School of Information Technology, KMUTT, where he teaches courses in the software engineering and computer science programs.

Susumu Horiguchi received the B.Eng., M.Eng., and Ph.D. degrees from Tohoku University in 1976, 1978, and 1981, respectively. He is currently a full professor in the Graduate School of Information Science, Tohoku University. He was a visiting scientist at the IBM Thomas J. Watson Research Center from 1986 to 1987 and a visiting professor at the Center for Advanced Studies, University of Southwestern Louisiana, and at the Department of Computer Science, Texas A&M University in the summers of 1994 and 1997. He was also a professor in the Graduate School of Information Science, JAIST (Japan Advanced Institute of Science and Technology). He has been involved in organizing international workshops, symposia, and conferences sponsored by the IEEE, IEICE, IASTED, and IPS. He has published over 150 technical papers on optical networks, interconnection networks, parallel algorithms, high-performance computer architectures, and VLSI/WSI architectures. Prof. Horiguchi is a senior member of the IEEE Computer Society and a member of IEICE, IPS, and IASTED.

About this article

Cite this article

Vanijja, V., Horiguchi, S. Omni-Directional Stereoscopic Images from One Omni-Directional Camera. J VLSI Sign Process Syst Sign Image Video Technol 42, 91–101 (2006). https://doi.org/10.1007/s11265-005-4168-7
