Acquiring a Radiance Distribution to Superimpose Virtual Objects onto a Real Scene

  • Imari Sato
  • Yoichi Sato
  • Katsushi Ikeuchi
Chapter
Part of The Springer International Series in Engineering and Computer Science book series (SECS, volume 640)

Abstract

This paper describes a new method for superimposing virtual objects with correct shading onto an image of a real scene. Unlike previously proposed methods, our method automatically measures the radiance distribution of a real scene and uses it to superimpose virtual objects appropriately onto the scene. First, a geometric model of the scene is constructed from a pair of omni-directional images using an omni-directional stereo algorithm. Then the radiance of the scene is computed from a sequence of omni-directional images taken with different shutter speeds and is mapped onto the constructed geometric model. The radiance distribution mapped onto the geometric model is then used to render virtual objects superimposed onto the scene image. As a result, even for a complex radiance distribution, our method can superimpose virtual objects with convincing shading and with shadows cast onto the real scene. We successfully tested the proposed method on real images to demonstrate its effectiveness.
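The radiance computation described above combines a multi-exposure image sequence into a single high-dynamic-range estimate. The following is a minimal sketch of one common way to do this, assuming a linear camera response and pixel values normalized to [0, 1]; the function name, weighting scheme, and thresholds are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def recover_radiance(images, shutter_speeds, noise_floor=0.02, saturation=0.98):
    """Estimate per-pixel scene radiance from a sequence of images of the
    same view taken with different shutter speeds (linear response assumed).

    images: list of float arrays in [0, 1], all with the same shape.
    shutter_speeds: exposure time in seconds for each image.
    """
    numerator = np.zeros_like(images[0], dtype=np.float64)
    denominator = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, shutter_speeds):
        # Trust only pixels that are neither saturated nor lost in noise.
        weight = ((img > noise_floor) & (img < saturation)).astype(np.float64)
        # For a linear sensor, pixel value ~ radiance * exposure time,
        # so each well-exposed pixel votes radiance ~ value / t.
        numerator += weight * img / t
        denominator += weight
    return numerator / np.maximum(denominator, 1e-12)
```

Each pixel's radiance is thus averaged over only its well-exposed observations, which is what lets a single estimate cover both the bright light sources and the dark surroundings of a scene. With a nonlinear sensor, the camera response curve would first have to be calibrated and inverted, as in standard high-dynamic-range radiance-map recovery.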

Keywords

Augmented Reality · Triangular Mesh · Virtual Object · Real Scene · World Coordinate System



Copyright information

© Springer Science+Business Media New York 2001
