Stochastic Weights Reinforcement Learning for Exploratory Data Analysis

  • Ying Wu
  • Colin Fyfe
  • Pei Ling Lai
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4668)

Abstract

We review a new form of immediate reward reinforcement learning in which the individual unit is deterministic but has stochastic synapses. Four learning rules have been developed from this perspective, and we investigate their use in performing linear projection techniques such as principal component analysis, exploratory projection pursuit and canonical correlation analysis. The method is very general, requiring only a reward function specific to the task we wish the unit to perform. We also discuss how the method can be used to learn kernel mappings, and conclude by illustrating its use on a topology preserving mapping.

Keywords

Reinforcement Learning · Canonical Correlation Analysis · Independent Component Analysis · Reward Function · Exploratory Data Analysis

Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Ying Wu (1)
  • Colin Fyfe (1)
  • Pei Ling Lai (2)
  1. Applied Computational Intelligence Research Unit, The University of Paisley, Scotland
  2. Southern Taiwan University of Technology, Tainan, Taiwan
