Pooling spike neural network for fast rendering in global illumination


The generation of photo-realistic images is a major topic in computer graphics. By applying the principles of physical light propagation, images indistinguishable from real photographs can be generated, but this computation is very time-consuming: when the real behavior of light is simulated, an image can take hours to reach sufficient quality. This paper proposes a bio-inspired architecture with spiking neurons for fast rendering in global illumination. The objective is to find the number of paths required for each image so that it is perceived as identical to the visually converged image computed by the path tracing algorithm. The challenge is that the visually converged image is unknown, so the process starts from a very noisy image and converges toward a less noisy one. The architecture, whose functional parts are sparse encoding, dynamic learning, and decoding, relies on a robust block-based convergence measure. Different pooling strategies are applied in order to separate noise from signal in a deep learning process. The learning algorithm selects the most pertinent images using clustering-based dynamic learning, and the system dynamically computes a learning parameter for each image based on its level of noise. The experiments are conducted on a global illumination data set containing a large number of images with different resolutions and noise levels, computed using diffuse and specular rendering. For the scenes with \(512\times 512\) resolution, 3232 different images are used for learning and 9696 for testing; for the scenes with \(800\times 800\) resolution, the training and testing data contain 3760 and 6320 images, respectively. The result is a system composed of only two spike pattern association neurons that accurately predicts the quality of images with respect to human psycho-visual scores. The pooling spike neural network has been compared with the support vector machine and the fast relevance vector machine.
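The abstract does not specify which pooling operators are used; as one plausible reading, block-wise max and average pooling over a rendered frame can be sketched as follows (the block size of 16 is illustrative, not taken from the paper):

```python
import numpy as np

def pool_blocks(image, block=16, mode="max"):
    """Pool a grayscale image over non-overlapping square blocks.

    Each block is condensed to a single statistic (max or mean);
    comparing pooled maps of successive renderings is one way to
    separate residual Monte Carlo noise from the underlying signal.
    """
    h, w = image.shape
    h, w = h - h % block, w - w % block              # crop to block multiples
    tiles = image[:h, :w].reshape(h // block, block, w // block, block)
    tiles = tiles.transpose(0, 2, 1, 3)              # (rows, cols, block, block)
    return tiles.max(axis=(2, 3)) if mode == "max" else tiles.mean(axis=(2, 3))

rng = np.random.default_rng(0)
img = rng.random((512, 512))                         # stand-in for a rendered frame
pooled_max = pool_blocks(img, block=16, mode="max")
pooled_avg = pool_blocks(img, block=16, mode="avg")
```

A \(512\times 512\) frame pooled with 16-pixel blocks yields a \(32\times 32\) map, which is the kind of compact per-block representation a sparse encoder can consume.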
The obtained results show that the proposed method achieves promising accuracy, measured as the mean square error on each block of the scenes and as the deviation between the thresholds produced by the perception models and the desired human psycho-visual scores, while requiring fewer parameters.
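The reported accuracy is a mean square error computed block by block. A minimal sketch of such a block-wise MSE map, again with a hypothetical block size not given in the abstract, might look like:

```python
import numpy as np

def blockwise_mse(noisy, reference, block=16):
    """Mean square error computed per non-overlapping block.

    Returning one value per block lets convergence be judged locally,
    so slowly converging regions are not hidden by a global average.
    """
    h, w = noisy.shape
    h, w = h - h % block, w - w % block              # crop to block multiples
    sq = (noisy[:h, :w] - reference[:h, :w]) ** 2
    tiles = sq.reshape(h // block, block, w // block, block)
    return tiles.transpose(0, 2, 1, 3).mean(axis=(2, 3))

reference = np.zeros((512, 512))                     # stand-in converged image
noisy = reference + 0.1                              # uniform per-pixel error of 0.1
mse_map = blockwise_mse(noisy, reference, block=16)  # one MSE per 16x16 block
```

In practice the converged reference is unknown during rendering, so (as the abstract notes) the measure must be built from the successive noisy estimates themselves.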






This project has been funded with support from the Lebanese University under Grant Number 428/2015.

Author information



Corresponding author

Correspondence to Joseph Constantin.

Ethics declarations

Conflict of interest

This is to certify that all the authors have participated sufficiently in the work to take public responsibility for the content, including participation in the concept, design, analysis, writing, or revision of the manuscript. Furthermore, each author certifies that this material or similar material has not been and will not be submitted to or published in any other publication.



Cite this article

Constantin, J., Bigand, A. & Constantin, I. Pooling spike neural network for fast rendering in global illumination. Neural Comput & Applic 32, 427–446 (2020). https://doi.org/10.1007/s00521-018-3941-z



Keywords

  • Clustering-based dynamic learning
  • Global illumination
  • Sparse coding
  • Pooling spike neural network