
Selection Criteria for Neuromanifolds of Stochastic Dynamics


We present ways of defining neuromanifolds – models of stochastic matrices – that are compatible with the maximization of an objective function, such as the expected reward in reinforcement learning theory. Our approach is based on information geometry and aims to reduce the number of model parameters in the hope of improving gradient learning processes.
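The setting described in the abstract – gradient ascent of an expected reward over a parametrized family of stochastic policies – can be illustrated with a minimal sketch. The snippet below is not the authors' neuromanifold construction; it is a generic example of exact gradient ascent on the expected reward of a softmax (exponential-family) policy over three actions, where the reward vector, step size, and iteration count are all illustrative choices.

```python
import math

def softmax(theta):
    """Map parameters to a probability distribution (exponential family)."""
    m = max(theta)  # subtract the max for numerical stability
    exps = [math.exp(t - m) for t in theta]
    z = sum(exps)
    return [e / z for e in exps]

def reward_gradient(theta, rewards):
    """Exact gradient of the expected reward J(theta) = sum_a p_a(theta) r_a.

    For the softmax parametrization, dJ/dtheta_a = p_a * (r_a - J).
    """
    p = softmax(theta)
    j = sum(pa * ra for pa, ra in zip(p, rewards))
    return [pa * (ra - j) for pa, ra in zip(p, rewards)]

# Illustrative problem: three actions with fixed expected rewards.
rewards = [1.0, 0.0, 0.5]
theta = [0.0, 0.0, 0.0]
step = 0.5

for _ in range(500):
    g = reward_gradient(theta, rewards)
    theta = [t + step * gi for t, gi in zip(theta, g)]

p = softmax(theta)
# The policy concentrates on the highest-reward action (index 0).
```

Note that the softmax over n actions uses n parameters although the probability simplex is (n − 1)-dimensional; reduced (lower-dimensional) parametrizations of such families are exactly the kind of model-selection question the paper addresses.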


  • Extreme Point
  • Reinforcement Learning
  • Exponential Family
  • Hamilton Path
  • Deterministic Function





Corresponding author

Correspondence to Nihat Ay.


Copyright information

© 2013 Springer Science+Business Media Dordrecht


Cite this paper

Ay, N., Montúfar, G., Rauh, J. (2013). Selection Criteria for Neuromanifolds of Stochastic Dynamics. In: Yamaguchi, Y. (eds) Advances in Cognitive Neurodynamics (III). Springer, Dordrecht.
