
Detecting Visual Convergence for Stochastic Global Illumination

Chapter
Part of the Studies in Computational Intelligence book series (SCI, volume 374)

Abstract

Photorealistic rendering based on unbiased stochastic global illumination is now within reach of any computer artist through commercially or freely available software. One drawback of these packages is that they provide no tool for detecting when convergence has been reached, relying entirely on the user to decide when to stop the computation. In this paper we detail two methods that aim at finding perceptual convergence thresholds to solve this problem. The first uses the VDP image quality metric to provide a global threshold. The second uses SVM classifiers that are trained and applied on small sub-images, which makes it possible to account for the heterogeneity of convergence across the image area. Both approaches are validated through experiments with human subjects.
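
The paper itself is not reproduced on this page, but the block-wise idea sketched in the abstract can be illustrated as follows. This is a minimal sketch, not the authors' implementation: it assumes a pre-trained binary classifier `noise_clf` that labels a small image block as still noisy or visually converged, and a callable `render_pass` that returns the accumulated estimate after each pass. Block size and the feature extraction are placeholders; the paper trains its classifiers on perceptual noise data and validates them against human observers.

```python
# Hypothetical sketch of a block-wise perceptual stopping test for a
# progressive stochastic renderer. `render_pass`, `BLOCK`, the feature
# vector and the pre-trained SVM `noise_clf` are assumptions, not the
# authors' code.
import numpy as np
from sklearn.svm import SVC  # noise_clf is assumed to be an instance of this

BLOCK = 32  # side length (pixels) of the sub-images fed to the classifier

def blocks(img):
    """Yield non-overlapping BLOCK x BLOCK tiles of a grayscale image."""
    h, w = img.shape[:2]
    for y in range(0, h - BLOCK + 1, BLOCK):
        for x in range(0, w - BLOCK + 1, BLOCK):
            yield img[y:y + BLOCK, x:x + BLOCK]

def features(tile):
    """Toy feature vector; the real method would use perceptually
    motivated noise features, not raw block statistics."""
    return np.array([tile.mean(), tile.std(),
                     np.abs(np.diff(tile, axis=0)).mean(),
                     np.abs(np.diff(tile, axis=1)).mean()])

def image_converged(img, noise_clf):
    """True when the classifier labels every block as noise-free (label 0)."""
    feats = np.stack([features(t) for t in blocks(img)])
    return not noise_clf.predict(feats).any()

def render_until_converged(render_pass, noise_clf, max_passes=10_000):
    """Add samples pass by pass, stopping at perceptual convergence."""
    for n in range(1, max_passes + 1):
        img = render_pass(n)  # accumulated estimate after n passes
        if image_converged(img, noise_clf):
            return img, n
    return img, max_passes
```

The sketch only shows where per-block classifiers would plug into a progressive rendering loop; because each block is tested independently, regions of the image that converge slowly keep the loop running even when most of the image already looks clean.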

Keywords

Computer Graphics, Reference Image, Human Visual System, Noise Detection, Noise Mask

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  1. LISIC, Calais cedex, France
