Information Geometry and Information Theory in Machine Learning

  • Kazushi Ikeda
  • Kazunori Iwata
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4985)

Abstract

Information geometry is a general framework for studying Riemannian manifolds equipped with a pair of dual affine connections. Some manifolds, such as the manifold of an exponential family, carry natural connections (the e- and m-connections) under which the manifold is dually flat. Conversely, a dually flat structure can be induced on a manifold from a convex potential function. This paper illustrates the latter construction with quasi-additive learning algorithms.
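
For concreteness, the construction meant here is the standard Legendre-duality one; the following is a textbook sketch in generic notation, not an equation taken from the paper. A strictly convex potential \psi(\theta) induces dual coordinates and a canonical divergence via

    \[
      \eta = \nabla\psi(\theta), \qquad
      D(\theta \,\|\, \theta') = \psi(\theta) - \psi(\theta')
        - \nabla\psi(\theta')^{\top} (\theta - \theta'),
    \]

where the \theta- and \eta-coordinate systems are affine for the two dual connections, so the manifold is dually flat with the Bregman divergence D as its canonical divergence. For an exponential family p(x; \theta) = \exp(\theta^{\top} x - \psi(\theta)) with \psi the log-partition function, this recovers the e- and m-connections, and D becomes the Kullback-Leibler divergence. A quasi-additive algorithm maintains an additively updated vector \theta_t together with a weight vector w_t = f(\theta_t); when the link function f is the gradient of such a potential, \theta_t and w_t are precisely a pair of dual coordinates.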

Information theory is another important tool in machine learning. Many of its applications work with information-theoretic quantities such as entropy and mutual information, but few fully exploit the deeper results behind them. The asymptotic equipartition property (AEP) is one such essential result.
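
To make the property concrete, here is a minimal numerical sketch (the Bernoulli source and every name in the code are illustrative, not taken from the paper): for an i.i.d. source, the per-symbol log-probability of a sample sequence concentrates around the entropy.

    import numpy as np

    # Asymptotic equipartition property for an i.i.d. Bernoulli(p) source:
    # -(1/n) * log2 p(X_1, ..., X_n)  ->  H(X)  in probability as n grows.
    rng = np.random.default_rng(0)
    p = 0.3
    entropy = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

    for n in (10, 100, 1000, 10000):
        x = rng.random(n) < p                         # one length-n sample
        log_prob = np.where(x, np.log2(p), np.log2(1 - p)).sum()
        print(f"n={n:6d}  -(1/n) log2 p = {-log_prob / n:.4f}  H(X) = {entropy:.4f}")

Sequences whose probability is close to 2^{-nH(X)} form the typical set; the AEP says that essentially all of the probability mass eventually sits there.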

This paper gives an example of this property in a Markov decision process and shows how it relates to return maximization in reinforcement learning.
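
The paper's setting is the sequence generated by a Markov decision process; as a stand-in, the same concentration is easy to observe for a plain Markov chain, where the per-step log-probability converges to the entropy rate (again an illustrative sketch, with a made-up transition matrix):

    import numpy as np

    # AEP for a two-state Markov chain: -(1/n) log2 P(trajectory) converges
    # to the entropy rate H(X_2 | X_1) = sum_i pi_i * H(P[i, :]).
    rng = np.random.default_rng(1)
    P = np.array([[0.9, 0.1],
                  [0.4, 0.6]])                # made-up transition matrix
    pi = np.array([0.8, 0.2])                 # stationary: pi @ P == pi
    rate = (pi * -(P * np.log2(P)).sum(axis=1)).sum()

    n = 100_000
    state, log_prob = 0, 0.0
    for _ in range(n):
        nxt = rng.choice(2, p=P[state])       # sample one transition
        log_prob += np.log2(P[state, nxt])
        state = nxt
    print(f"-(1/n) log2 P = {-log_prob / n:.4f}  entropy rate = {rate:.4f}")

In the reinforcement-learning setting, the analogous concentration holds for state-action sequences of the decision process, and the resulting typical set is what ties the AEP to return maximization.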

Keywords

Riemannian Manifold · Reinforcement Learning · Independent Component Analysis · Exponential Family · Markov Chain Model

Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Kazushi Ikeda (1)
  • Kazunori Iwata (2)
  1. Department of Systems Science, Kyoto University, Kyoto, Japan
  2. Department of Intelligent Systems, Hiroshima City University, Hiroshima, Japan
