Clustering with Reinforcement Learning

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4881)

Abstract

We show how a previously derived method of using reinforcement learning for supervised clustering of a data set can lead to a sub-optimal solution if the cluster prototypes are initialised to poor positions. We then develop three novel reward functions which show great promise in overcoming poor initialisation. We illustrate the results on several data sets. We then use the clustering methods with an underlying latent space which enables us to create topology-preserving mappings. We illustrate this method on both real and artificial data sets.
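As a rough illustration of the kind of algorithm the abstract refers to, the following is a minimal Python sketch of reinforcement-learning-based on-line clustering in the spirit of Likas's method: each prototype is a Bernoulli-logistic unit that fires stochastically as a function of its distance to the current sample and is updated with a REINFORCE-style rule. This is a sketch under assumptions, not the paper's method; the function name rl_online_clustering, the simple win/lose reward, and all hyperparameters are illustrative, and the three novel reward functions developed in the paper are not reproduced here.

import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def rl_online_clustering(X, n_units=3, lr=0.1, epochs=50, seed=0):
    """Illustrative REINFORCE-style on-line clustering sketch (after Likas).

    Each unit i fires with probability p_i = sigmoid(-||x - w_i||^2) and the
    reward is +1 when the unit nearest the sample fires, -1 otherwise.
    These choices are assumptions made for illustration only.
    """
    rng = np.random.default_rng(seed)
    W = X[rng.choice(len(X), n_units, replace=False)].copy()   # prototype vectors

    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            d2 = ((W - x) ** 2).sum(axis=1)              # squared distances to the sample
            p = sigmoid(-d2)                             # Bernoulli firing probabilities
            y = (rng.random(n_units) < p).astype(float)  # sampled binary outputs
            winner = np.argmin(d2)
            r = 1.0 if y[winner] == 1.0 else -1.0        # reward: did the nearest unit fire?
            # REINFORCE rule for Bernoulli-logistic units:
            # d ln P(y_i) / d w_i = (y_i - p_i) * d s_i / d w_i, with s_i = -||x - w_i||^2
            W += lr * r * (y - p)[:, None] * 2.0 * (x - W)
    return W

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(c, 0.3, size=(100, 2)) for c in ([0.0, 0.0], [3.0, 3.0], [0.0, 3.0])])
    print(rl_online_clustering(X))   # learned prototypes, one near each cluster centre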

Keywords

Latent point, Reinforcement learning, Reward function, Bernoulli model, Supervised clustering



Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

The University of Paisley, Scotland
