Scene Depth Reconstruction on the GPU: A Post Processing Technique for Layered Fog

  • Tianshu Zhou
  • Jim X. Chen
  • Peter Smith
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4563)

Abstract

Realism is a key goal of most VR applications. As graphics computing power increases, new techniques are being developed to simulate important aspects of the human visual system, increasing a participant's sense of 'immersion' in a virtual environment. One such aspect, depth cueing, requires accurate scene depth information to simulate. Yet many techniques for producing these effects trade accuracy against performance, often resulting in specialized implementations that do not consider the need to integrate with other techniques or with existing visualization systems. Our objective is to develop a new technique for generating depth-based effects in real time as a post-processing step performed on the GPU, and to provide a generic solution for integrating multiple depth-dependent effects that enhance the realism of synthesized scenes. Using layered fog as an example, our technique accurately reconstructs per-pixel scene depth for the evaluation of fog integrals along the line of sight. Requiring only the depth buffer from the rendering process as input, the technique is easy to integrate into existing applications and uses the full power of the GPU to achieve real-time frame rates.
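
The abstract describes the pipeline only at a high level. As a rough illustration, the GLSL fragment-shader sketch below reconstructs a world-space position from the depth buffer and evaluates a closed-form exponential height-fog integral along the view ray. All names (uDepthTex, uInvViewProj, uDensity0, uFalloff, and so on) and the specific density model are assumptions made here for illustration, not the authors' implementation.

    // Illustrative post-processing pass (assumed setup, not the authors' code):
    // a full-screen quad samples the scene color and depth buffers produced by
    // the normal rendering pass and blends in layered (height-dependent) fog.
    #version 120

    uniform sampler2D uColorTex;    // rendered scene color
    uniform sampler2D uDepthTex;    // scene depth buffer, window-space depth in [0,1]
    uniform mat4      uInvViewProj; // inverse of (projection * view)
    uniform vec3      uCameraPos;   // camera position in world space
    uniform vec3      uFogColor;
    uniform float     uDensity0;    // fog density at height y = 0
    uniform float     uFalloff;     // exponential falloff of density with height

    varying vec2 vTexCoord;

    // Reconstruct the world-space position of a pixel from its depth-buffer value.
    vec3 reconstructWorldPos(vec2 uv, float depth)
    {
        vec4 ndc   = vec4(uv * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0);
        vec4 world = uInvViewProj * ndc;
        return world.xyz / world.w;
    }

    // Optical depth of a density field rho(y) = uDensity0 * exp(-uFalloff * y)
    // integrated along the straight segment from the camera to the surface point.
    float fogOpticalDepth(vec3 camera, vec3 surface)
    {
        float dist = length(surface - camera);
        float dy   = surface.y - camera.y;
        float base = uDensity0 * exp(-uFalloff * camera.y);
        // (1 - exp(-k*dy)) / (k*dy) tends to 1 as dy -> 0.
        float t = abs(uFalloff * dy) > 1e-4
                ? (1.0 - exp(-uFalloff * dy)) / (uFalloff * dy)
                : 1.0;
        return base * t * dist;
    }

    void main()
    {
        float depth   = texture2D(uDepthTex, vTexCoord).r;
        vec3  surface = reconstructWorldPos(vTexCoord, depth);
        vec3  color   = texture2D(uColorTex, vTexCoord).rgb;

        float transmittance = exp(-fogOpticalDepth(uCameraPos, surface));
        gl_FragColor = vec4(mix(uFogColor, color, transmittance), 1.0);
    }

Because this particular height-fog density has a closed-form integral, the whole effect reduces to a few arithmetic operations per pixel, which is consistent with the abstract's claim that a pure post-processing pass over the depth buffer can sustain real-time frame rates.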

Keywords

Virtual Environment · Human Visual System · Graphics Hardware · Global Illumination · Frame Buffer


Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Tianshu Zhou (1)
  • Jim X. Chen (1)
  • Peter Smith (1)
  1. George Mason University, Member IEEE Computer Society
