The Visual Computer, Volume 26, Issue 2, pp 97–107

Selective rendering for efficient ray traced stereoscopic images

  • Cheng-Hung Lo (email author)
  • Chih-Hsing Chu
  • Kurt Debattista
  • Alan Chalmers
Original Article


Depth-related visual effects are a key feature of many virtual environments. In stereo-based systems, the depth effect can be produced by delivering frames of disparate image pairs, while in monocular environments, the viewer has to extract depth information from a single image by examining details such as perspective and shadows. This paper investigates, via a number of psychophysical experiments, whether we can reduce computational effort and still achieve perceptually high-quality rendering for stereo imagery. We examined selectively rendering the image pairs by exploiting the fusing capability and depth perception underlying human stereo vision. In ray-tracing-based global illumination systems, a higher image resolution introduces more computation to the rendering process, since many more rays need to be traced. We first investigated whether we could exploit human binocular fusion to significantly reduce the resolution of one image of the pair and yet retain high perceptual quality under stereoscopic viewing conditions. Secondly, we evaluated subjects' performance on a specific visual task that required accurate depth perception. We found that subjects required far fewer rendered depth cues in the stereo viewing environment to perform the task well, and avoiding rendering these detailed cues saved significant computational time. In fact, it was possible to achieve better task performance in the stereo viewing condition at a combined rendering time for the image pair that was less than that required for the single monocular image. The outcome of this study suggests that we can produce more efficient stereo images for depth-related visual tasks through selective rendering that exploits inherent features of human stereo vision.
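The computational saving the abstract describes can be illustrated with a back-of-the-envelope calculation: primary-ray count grows with pixel count, so rendering one eye's view at reduced resolution cuts the cost of the stereo pair. The following sketch uses illustrative numbers (a 1024×768 view and a half-resolution second view), which are assumptions for demonstration, not the authors' experimental settings.

```python
def primary_rays(width, height, samples_per_pixel=1):
    """Primary rays traced for a single image of the given resolution."""
    return width * height * samples_per_pixel

# Assumed full-resolution view and reduction factor (illustrative only).
full_w, full_h = 1024, 768
scale = 0.5  # linear scale of the reduced second view

mono = primary_rays(full_w, full_h)            # single monocular image
stereo_full = 2 * mono                         # naive stereo pair
stereo_selective = mono + primary_rays(int(full_w * scale),
                                       int(full_h * scale))

print(f"monocular image:   {mono:>8} rays")
print(f"full stereo pair:  {stereo_full:>8} rays")
print(f"selective pair:    {stereo_selective:>8} rays "
      f"({stereo_selective / stereo_full:.0%} of the full pair)")
```

Because ray count scales with the square of the linear resolution, halving one view's resolution leaves the selective pair at 62.5% of the full pair's primary-ray budget; with the additional cue-level savings reported in the paper, the combined stereo cost can fall below that of a single fully rendered monocular image.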


Keywords: Stereoscopic images · Perceptually-guided rendering · Virtual reality





Copyright information

© Springer-Verlag 2009

Authors and Affiliations

  • Cheng-Hung Lo (1, email author)
  • Chih-Hsing Chu (1)
  • Kurt Debattista (2)
  • Alan Chalmers (2)

  1. Department of Industrial Engineering and Engineering Management, National Tsing Hua University, Hsinchu, Taiwan
  2. Warwick Digital Lab, University of Warwick, Coventry, UK
