Bridging the Gap Between Probabilistic and Deterministic Models: A Simulation Study on a Variational Bayes Predictive Coding Recurrent Neural Network Model

  • Ahmadreza Ahmadi
  • Jun Tani
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10636)


The current paper proposes a novel variational Bayes predictive coding RNN model that can learn to generate fluctuating temporal patterns from exemplars. The model learns by maximizing a lower bound consisting of a weighted sum of a regularization term and a reconstruction error term. We examined how this weighting affects the development of different types of information processing while the model learns fluctuating temporal patterns. Simulation results show that strong weighting of the reconstruction term leads to the development of deterministic chaos that imitates the randomness observed in target sequences, whereas strong weighting of the regularization term leads to stochastic dynamics that imitate the probabilistic processes observed in the targets. Moreover, the results indicate that the most generalized learning emerges between these two extremes. The paper concludes with implications in terms of the underlying neuronal mechanisms for autism spectrum disorder.
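The weighted objective described in the abstract can be sketched as a negative lower-bound loss in the style of a variational autoencoder. This is a minimal illustration, not the authors' implementation: the function name, the mean-squared reconstruction term, the unit-Gaussian prior, and the weighting parameter `w` are all assumptions made for the sake of the example.

```python
import numpy as np

def weighted_free_energy(x, x_recon, mu, logvar, w):
    """Negative lower bound: reconstruction error plus a w-weighted KL term.

    `w` is an illustrative stand-in for the paper's weighting parameter:
    a small w emphasizes the reconstruction term, a large w emphasizes
    the regularization term.
    """
    # Mean squared reconstruction error between target and generated pattern
    recon = np.mean((x - x_recon) ** 2)
    # KL divergence of the Gaussian posterior N(mu, exp(logvar)) from N(0, I)
    kl = -0.5 * np.mean(1.0 + logvar - mu ** 2 - np.exp(logvar))
    return recon + w * kl
```

Minimizing this loss with a small `w` should favor reproducing target fluctuations through deterministic dynamics, while a large `w` pulls the latent variables toward the prior, favoring stochastic imitation; this is the trade-off the simulations explore.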


Keywords: Recurrent neural network · Variational Bayes · Predictive coding · Generative model



Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Department of Electrical Engineering, KAIST, Daejeon, Korea
  2. Okinawa Institute of Science and Technology, Okinawa, Japan
