The Visual Computer, Volume 21, Issue 1–2, pp 92–103

Surround video: a multihead camera approach

Frank Nielsen

Original article

Abstract

We describe algorithms for creating, storing and viewing high-resolution immersive surround videos. Given a set of unit cameras designed to be almost aligned at a common nodal point, we first present a versatile process for seamlessly stitching synchronized video streams into a single surround video corresponding to the video of the multihead camera. We devise a general registration process onto raymaps based on minimizing a tailored objective function. We review and introduce new raymaps with good sampling properties. We then give implementation details on the surround video viewer and present experimental results on both real-world acquired and computer-graphics rendered full surround videos. We conclude by mentioning potential applications and discussing ongoing related activities.
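A raymap, as used above, is a 2D parameterization that assigns a viewing ray on the unit sphere to each pixel of the surround video. As a minimal illustration of the idea (not the paper's own raymaps, which are introduced in the article itself), the sketch below implements the classic equirectangular latitude-longitude raymap, mapping a unit ray to pixel coordinates and back; the function names and axis conventions are assumptions for this example:

```python
import math

def dir_to_equirect(x, y, z, width, height):
    """Map a unit viewing ray (x, y, z) to pixel coordinates in an
    equirectangular (latitude-longitude) raymap of size width x height.
    Convention assumed here: y is up, z is the forward axis."""
    theta = math.atan2(x, z)                  # longitude in (-pi, pi]
    phi = math.asin(max(-1.0, min(1.0, y)))   # latitude in [-pi/2, pi/2]
    u = (theta / (2.0 * math.pi) + 0.5) * width
    v = (0.5 - phi / math.pi) * height
    return u, v

def equirect_to_dir(u, v, width, height):
    """Inverse mapping: a raymap pixel back to a unit viewing ray."""
    theta = (u / width - 0.5) * 2.0 * math.pi
    phi = (0.5 - v / height) * math.pi
    return (math.cos(phi) * math.sin(theta),
            math.sin(phi),
            math.cos(phi) * math.cos(theta))
```

The equirectangular map is simple but oversamples the poles, which is precisely the kind of deficiency that motivates raymaps with better sampling properties.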

Video supplements: http://www.csl.sony.co.jp/person/nielsen

Keywords

Virtual reality · Stitching · Environment mapping



Copyright information

© Springer-Verlag 2005

Authors and Affiliations

Sony Computer Science Laboratories, Tokyo, Japan
