
Selection Criteria for Neuromanifolds of Stochastic Dynamics

Conference paper in Advances in Cognitive Neurodynamics (III)

Abstract

We present ways of defining neuromanifolds – models of stochastic matrices – that are compatible with the maximization of an objective function, such as the expected reward in reinforcement learning theory. Our approach is based on information geometry and aims to reduce the number of model parameters in the hope of improving gradient-based learning.
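The abstract's theme – gradient learning on a parametric family of probability distributions, equipped with the Fisher metric from information geometry – can be illustrated with a minimal sketch. The softmax family over three actions and the toy reward vector below are illustrative assumptions, not the paper's construction; the sketch shows Amari's natural gradient, obtained by preconditioning the vanilla gradient with the (pseudo-)inverse of the Fisher information matrix.

```python
import numpy as np

def softmax(theta):
    """Map unconstrained parameters to a probability vector."""
    e = np.exp(theta - theta.max())
    return e / e.sum()

def natural_gradient_step(theta, r, eta=0.1):
    """One step of natural-gradient ascent on J(theta) = E_p[r]."""
    p = softmax(theta)
    J = p @ r
    g = p * (r - J)                    # vanilla (Euclidean) gradient of J
    F = np.diag(p) - np.outer(p, p)    # Fisher information of the softmax family
    # F is singular (shifting theta by a constant leaves p unchanged),
    # so precondition with the Moore-Penrose pseudoinverse.
    return theta + eta * np.linalg.pinv(F) @ g

# Toy problem: three actions with fixed rewards (assumed for illustration).
r = np.array([1.0, 0.0, 0.5])
theta = np.zeros(3)
for _ in range(100):
    theta = natural_gradient_step(theta, r)
print(softmax(theta) @ r)  # expected reward approaches max(r) = 1
```

The practical cost visible here is that every step inverts a Fisher matrix whose size grows with the number of parameters; selecting lower-dimensional models, as the paper proposes, shrinks exactly this object.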



Author information


Correspondence to Nihat Ay.



Copyright information

© 2013 Springer Science+Business Media Dordrecht

About this paper

Cite this paper

Ay, N., Montúfar, G., Rauh, J. (2013). Selection Criteria for Neuromanifolds of Stochastic Dynamics. In: Yamaguchi, Y. (eds) Advances in Cognitive Neurodynamics (III). Springer, Dordrecht. https://doi.org/10.1007/978-94-007-4792-0_20
