
K-Means Initialization Methods for Improving Clustering by Simulated Annealing

  • Conference paper

Part of the Lecture Notes in Computer Science book series (LNAI, volume 5290)

Abstract

Clustering is the task of partitioning a data set so that elements within each subset are similar to one another and dissimilar to elements of other subsets. The problem can be cast as an optimization problem that searches for the best cluster configuration among all possible configurations. K-means is the most popular approximate algorithm for clustering, but it is very sensitive to its initial solution and can get stuck in local optima. Metaheuristics can also be applied to the problem; however, applying them directly to clustering seems to be effective only on small data sets. This work proposes using methods that find initial solutions for the K-means algorithm in order to initialize Simulated Annealing, steering the search toward solutions near the global optimum.
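
The general idea described in the abstract, seeding Simulated Annealing with a K-means initialization method, can be illustrated with a short sketch. The code below is an assumption-laden illustration, not the authors' implementation: it uses a k-means++-style seeding only as a plausible stand-in for the initialization methods the paper evaluates, refines the resulting assignment with a simple Simulated Annealing loop that minimizes the within-cluster sum of squared errors, and all function names and annealing parameters (initial temperature, geometric cooling rate, step count) are arbitrary choices made for the example.

```python
# Minimal sketch (not the paper's exact procedure): seed a clustering with a
# k-means++-style initialization, then refine the assignment with simulated
# annealing that minimizes the within-cluster sum of squared errors (SSE).
import numpy as np

def kmeanspp_init(X, k, rng):
    """Pick k initial centroids, favouring points far from those already chosen."""
    centroids = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d2 = np.min(((X[:, None] - np.array(centroids)[None]) ** 2).sum(-1), axis=1)
        centroids.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(centroids)

def sse(X, labels, k):
    """Within-cluster sum of squared distances to the cluster centroids."""
    total = 0.0
    for j in range(k):
        pts = X[labels == j]
        if len(pts):
            total += ((pts - pts.mean(axis=0)) ** 2).sum()
    return total

def simulated_annealing(X, labels, k, t0=1.0, cooling=0.995, steps=5000, seed=0):
    """Refine an initial assignment by randomly moving points between clusters."""
    rng = np.random.default_rng(seed)
    current = labels.copy()
    best = current.copy()
    best_cost = cost = sse(X, current, k)
    t = t0
    for _ in range(steps):
        cand = current.copy()
        cand[rng.integers(len(X))] = rng.integers(k)   # move one point to a random cluster
        cand_cost = sse(X, cand, k)
        # Accept improvements always, worse moves with a temperature-dependent probability.
        if cand_cost < cost or rng.random() < np.exp((cost - cand_cost) / t):
            current, cost = cand, cand_cost
            if cost < best_cost:
                best, best_cost = current.copy(), cost
        t *= cooling                                    # geometric cooling schedule
    return best, best_cost

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Three well-separated 2-D Gaussian blobs as toy data.
    X = np.vstack([rng.normal(m, 0.3, size=(50, 2)) for m in (0, 3, 6)])
    k = 3
    centroids = kmeanspp_init(X, k, rng)
    init_labels = ((X[:, None] - centroids[None]) ** 2).sum(-1).argmin(axis=1)
    labels, cost = simulated_annealing(X, init_labels, k)
    print("final SSE:", round(cost, 2))
```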

Keywords

  • Combinatorial Optimization
  • Metaheuristics
  • K-means
  • Simulated Annealing


Copyright information

© 2008 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Perim, G.T., Wandekokem, E.D., Varejão, F.M. (2008). K-Means Initialization Methods for Improving Clustering by Simulated Annealing. In: Geffner, H., Prada, R., Machado Alexandre, I., David, N. (eds) Advances in Artificial Intelligence – IBERAMIA 2008. IBERAMIA 2008. Lecture Notes in Computer Science, vol 5290. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-88309-8_14


  • DOI: https://doi.org/10.1007/978-3-540-88309-8_14

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-88308-1

  • Online ISBN: 978-3-540-88309-8

  • eBook Packages: Computer Science (R0)