
Gating Sensory Noise in a Spiking Subtractive LSTM

  • Isabella Pozzi
  • Roeland Nusselder
  • Davide Zambrano
  • Sander Bohté
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11139)

Abstract

Spiking neural networks are being investigated both as biologically plausible models of neural computation and as a potentially more efficient type of neural network. Recurrent neural networks built from gating memory cells have been central to state-of-the-art solutions in problem domains involving sequence recognition or generation. Here, we design an analog Long Short-Term Memory (LSTM) cell whose neurons can be substituted with efficient spiking neurons, and which uses subtractive gating (following the subLSTM in [1]) instead of multiplicative gating. Subtractive gating yields a less sensitive gating mechanism, which is critical when using spiking neurons. Using fast-adapting spiking neurons with a smoothed Rectified Linear Unit (ReLU)-like effective activation function, we show that an accurate conversion from an analog subLSTM to a continuous-time spiking subLSTM is possible. The resulting memory networks compute very efficiently, with low average firing rates comparable to those of biological neurons, while operating in continuous time.
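The key difference from a standard LSTM is that the gates are subtracted from, rather than multiplied with, the signals they control. The NumPy sketch below illustrates one update step of an analog subLSTM cell in that spirit, following the subLSTM formulation of [1]; the function and parameter names are illustrative only, and the sketch does not include the spiking-neuron substitution described in the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sublstm_step(x, h_prev, c_prev, W, U, b):
    """One analog subLSTM step with subtractive gating (after [1]).

    W, U, b hold stacked parameters for the candidate input (z), input
    gate (i), forget gate (f) and output gate (o); names are illustrative.
    """
    # Single affine transform, then split into the four sigmoidal pre-activations.
    pre = W @ x + U @ h_prev + b
    z, i, f, o = np.split(sigmoid(pre), 4)
    # Subtractive input gating: the gate is subtracted from the candidate
    # input instead of multiplying it.
    c = f * c_prev + z - i
    # Subtractive output gating on the squashed cell state.
    h = sigmoid(c) - o
    return h, c

# Usage: a cell with 4 hidden units driven by a 3-dimensional input.
rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
W = rng.normal(scale=0.1, size=(4 * n_hid, n_in))
U = rng.normal(scale=0.1, size=(4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)
h, c = np.zeros(n_hid), np.zeros(n_hid)
for t in range(5):
    h, c = sublstm_step(rng.normal(size=n_in), h, c, W, U, b)
```

In a multiplicative LSTM, fluctuations in the gate activations rescale the cell content, whereas here they only shift it additively; this is the reduced sensitivity that the abstract argues makes subtractive gating better suited to spiking neurons.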

Keywords

Spiking neurons · LSTM · Recurrent neural networks · Supervised learning · Reinforcement learning

Notes

Acknowledgments

DZ is supported by NWO NAI project 656.000.005.

References

  1. Costa, R., Assael, I.A., Shillingford, B., de Freitas, N., Vogels, T.: Cortical microcircuits as gated-recurrent neural networks. In: Advances in Neural Information Processing Systems, pp. 272–283 (2017)
  2. Attwell, D., Laughlin, S.: An energy budget for signaling in the grey matter of the brain. J. Cereb. Blood Flow Metab. 21(10), 1133–1145 (2001)
  3. Esser, S., et al.: Convolutional networks for fast, energy-efficient neuromorphic computing. In: PNAS, p. 201604850, September 2016
  4. Neil, D., Pfeiffer, M., Liu, S.C.: Learning to be efficient: algorithms for training low-latency, low-compute deep spiking neural networks (2016)
  5. Diehl, P., Neil, D., Binas, J., Cook, M., Liu, S.C., Pfeiffer, M.: Fast-classifying, high-accuracy spiking deep networks through weight and threshold balancing. In: IEEE IJCNN, pp. 1–8, July 2015
  6. O’Connor, P., Neil, D., Liu, S.C., Delbruck, T., Pfeiffer, M.: Real-time classification and sensor fusion with a spiking deep belief network. Front. Neurosci. 7, 178 (2013)
  7. Hunsberger, E., Eliasmith, C.: Spiking deep networks with LIF neurons. arXiv preprint arXiv:1510.08829 (2015)
  8. Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Comput. 9(8), 1735–1780 (1997)
  9. Shrestha, A., et al.: A spike-based long short-term memory on a neurosynaptic processor (2017)
  10. Davies, M., Srinivasa, N., Lin, T.H., Chinya, G., Cao, Y., Choday, S.H., Dimou, G., Joshi, P., Imam, N., Jain, S.: Loihi: a neuromorphic manycore processor with on-chip learning. IEEE Micro 38(1), 82–99 (2018)
  11. Zambrano, D., Bohte, S.: Fast and efficient asynchronous neural computation with adapting spiking neural networks. arXiv preprint arXiv:1609.02053 (2016)
  12. Bohte, S.: Efficient spike-coding with multiplicative adaptation in a spike response model. In: NIPS, vol. 25, pp. 1844–1852 (2012)
  13. Gers, F.A., Schraudolph, N.N., Schmidhuber, J.: Learning precise timing with LSTM recurrent networks. J. Mach. Learn. Res. 3(Aug), 115–143 (2002)
  14. Denève, S., Machens, C.K.: Efficient codes and balanced networks. Nature Neurosci. 19(3), 375–382 (2016)
  15. Bakker, B.: Reinforcement learning with long short-term memory. In: NIPS, vol. 14, pp. 1475–1482 (2002)
  16. Harmon, M., Baird III, L.: Multi-player residual advantage learning with general function approximation. Wright Laboratory, 45433–7308 (1996)
  17. Rombouts, J., Bohte, S., Roelfsema, P.: Neurally plausible reinforcement learning of working memory tasks. In: NIPS, vol. 25, pp. 1871–1879 (2012)
  18. Cho, K., et al.: Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078 (2014)
  19. Greff, K., Srivastava, R.K., Koutník, J., Steunebrink, B.R., Schmidhuber, J.: LSTM: a search space odyssey. IEEE Trans. Neural Netw. Learn. Syst. 28(10), 2222–2232 (2017)
  20. Jozefowicz, R., Zaremba, W., Sutskever, I.: An empirical exploration of recurrent network architectures. In: International Conference on Machine Learning, pp. 2342–2350 (2015)

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Isabella Pozzi (1)
  • Roeland Nusselder (1)
  • Davide Zambrano (1)
  • Sander Bohté (1)
  1. Centrum Wiskunde & Informatica, Amsterdam, The Netherlands
