Biological Cybernetics, Volume 106, Issue 4–5, pp 201–217

Recognizing recurrent neural networks (rRNN): Bayesian inference for recurrent neural networks

Open Access
Original Paper

Abstract

Recurrent neural networks (RNNs) are widely used in computational neuroscience and machine learning applications. In an RNN, each neuron computes its output as a nonlinear function of its integrated input. While the importance of RNNs, especially as models of brain processing, is undisputed, it is also widely acknowledged that the computations in standard RNN models may be an over-simplification of what real neuronal networks compute. Here, we suggest that the RNN approach may be made computationally more powerful by its fusion with Bayesian inference techniques for nonlinear dynamical systems. In this scheme, we use an RNN as a generative model of dynamic input caused by the environment, e.g. of speech or kinematics. Given this generative RNN model, we derive Bayesian update equations that can decode its output. Critically, these updates define a ‘recognizing RNN’ (rRNN), in which neurons compute and exchange prediction and prediction error messages. The rRNN has several desirable features that a conventional RNN does not have, e.g. fast decoding of dynamic stimuli and robustness to initial conditions and noise. Furthermore, it implements a predictive coding scheme for dynamic inputs. We suggest that the Bayesian inversion of RNNs may be useful both as a model of brain function and as a machine learning tool. We illustrate the use of the rRNN by an application to the online decoding (i.e. recognition) of human kinematics.
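
To make this message-passing scheme concrete, the following is a minimal sketch (in Python) of how a leaky-integrator RNN generative model can be inverted online by precision-weighted prediction-error updates, in the spirit of the rRNN. This is an illustration under stated assumptions, not the update equations derived in the paper: the leaky-integrator dynamics, the linear observation map, the network sizes, the precisions `pi_y` and `pi_x`, the step size, and the number of inference iterations are all hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Generative RNN model (assumed form: leaky-integrator rate units) ---
n_hidden, n_obs = 20, 2   # hypothetical network and observation sizes
leak = 0.2                # assumed leak rate
W = rng.normal(0.0, 1.0 / np.sqrt(n_hidden), (n_hidden, n_hidden))  # recurrent weights
C = rng.normal(0.0, 1.0, (n_obs, n_hidden))                          # observation map

def f(x):
    """One step of the generative RNN: nonlinear integration of input."""
    return (1 - leak) * x + leak * np.tanh(W @ x)

# Simulate a latent trajectory and noisy observations (e.g. 2-D kinematics).
T = 200
x_true = np.zeros((T, n_hidden))
x_true[0] = rng.normal(size=n_hidden)
for t in range(1, T):
    x_true[t] = f(x_true[t - 1])
y = x_true @ C.T + 0.05 * rng.normal(size=(T, n_obs))

# --- Recognizing RNN (rRNN)-style inversion: predictive coding updates ---
# Precision-weighted prediction errors drive the state estimate. The
# precisions pi_y, pi_x and the step size lr are assumptions, not values
# taken from the paper.
pi_y, pi_x, lr = 1.0 / 0.05**2, 1.0, 0.05
x_hat = np.zeros(n_hidden)
for t in range(T):
    x_pred = f(x_hat)              # top-down prediction of the next state
    x_est = x_pred.copy()
    for _ in range(10):            # a few inference iterations per sample
        eps_y = y[t] - C @ x_est   # sensory prediction error
        eps_x = x_est - x_pred     # dynamical prediction error
        # Gradient ascent on the Gaussian log-joint w.r.t. the state estimate
        x_est += lr * (pi_y * C.T @ eps_y - pi_x * eps_x)
    x_hat = x_est
```

In this sketch, each inner iteration computes and exchanges a sensory prediction error (`eps_y`) and a dynamical prediction error (`eps_x`), which is the message-passing structure the abstract ascribes to the rRNN; the paper itself derives its updates from Bayesian inference techniques for nonlinear dynamical systems rather than this simple gradient scheme.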

Keywords

Recurrent neural networks · Bayesian inference · Nonlinear dynamics · Human motion


Acknowledgments

We thank both anonymous reviewers for their helpful and constructive comments on a previous version of this manuscript.

Open Access

This article is distributed under the terms of the Creative Commons Attribution License, which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.


Copyright information

© The Author(s) 2012

Authors and Affiliations

1. MPI for Human Cognitive and Brain Sciences, Leipzig, Germany
