
Biologically Sound Neural Networks for Embedded Systems Using OpenCL

  • István Fehérvári
  • Anita Sobe
  • Wilfried Elmenreich
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7853)

Abstract

In this paper, we present an OpenCL implementation of a biologically sound spiking neural network with two goals in mind. First, the applied neural dynamics should be accurate enough for bio-inspired training methods, so that the resulting network data is reproducible in "in vitro" experiments. Second, the implementation should produce code that runs adequately on up-to-date embedded graphics chips for fast on-board classification applications, e.g., video image processing. We describe the steps required to implement an efficient algorithm using the OpenCL framework and present an evaluation of its execution time compared to traditional serial CPU code. We show that optimized GPU kernel code can run fast enough to be used for future embedded neural processing.
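To illustrate the kind of per-neuron update that such an OpenCL implementation maps onto the GPU, the sketch below shows a minimal kernel performing one Euler step of a leaky integrate-and-fire neuron, with one work-item per neuron. This is an illustrative example only, not the authors' code: the kernel name, buffer layout, and the parameters dt, tau_m, v_thresh, and v_reset are assumptions made for this sketch, and the biologically sound model used in the paper may differ.

    // Illustrative sketch (not the paper's kernel): one leaky
    // integrate-and-fire update step per neuron, one work-item per neuron.
    // All names and parameters below are assumptions for this example.
    __kernel void lif_update(__global float *v,            // membrane potentials
                             __global const float *i_syn,  // summed synaptic input
                             __global uchar *spiked,       // output spike flags
                             const float dt,               // simulation time step
                             const float tau_m,            // membrane time constant
                             const float v_thresh,         // firing threshold
                             const float v_reset,          // reset potential
                             const uint n_neurons)
    {
        size_t gid = get_global_id(0);
        if (gid >= n_neurons)
            return;

        // Euler step of dv/dt = (-v + i_syn) / tau_m
        float v_new = v[gid] + dt * (-v[gid] + i_syn[gid]) / tau_m;

        // Threshold crossing: record a spike and reset the potential
        if (v_new >= v_thresh) {
            spiked[gid] = 1;
            v_new = v_reset;
        } else {
            spiked[gid] = 0;
        }
        v[gid] = v_new;
    }

Keeping the state in global memory buffers and launching one work-item per neuron is a common starting point for such kernels; further optimizations (coalesced access, local-memory staging of synaptic input) would follow the kind of steps the paper describes.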

Keywords

Execution time · Global memory · Neural dynamics · Spiking neural networks · OpenCL implementation



Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • István Fehérvári (1)
  • Anita Sobe (2)
  • Wilfried Elmenreich (1, 3)
  1. Mobile Systems Group, Lakeside Labs, Institute for Networked and Embedded Systems, Alpen-Adria-Universität Klagenfurt, Austria
  2. Computer Science Department, University of Neuchâtel, Switzerland
  3. Complex Systems Engineering, University of Passau, Germany
