
Acceleration of K-Means and Related Clustering Algorithms

  • Steven J. Phillips
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2409)

Abstract

This paper describes two simple modifications of K-means and related clustering algorithms that improve the running time without changing the output. The two resulting algorithms are called Compare-means and Sort-means. The time for an iteration of K-means is reduced from O(ndk), where n is the number of data points, k the number of clusters, and d the dimension, to O(ndγ + k²d + k² log k) for Sort-means. Here γ ≤ k is the average over all points p of the number of means that are no more than twice as far as p is from the mean p was assigned to in the previous iteration. Compare-means performs a similar number of distance calculations to Sort-means, and is faster when the number of means is very large. Both modifications are extremely simple and could easily be added to existing clustering implementations.
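To make the pruning concrete, here is a minimal sketch of the Sort-means assignment step, assuming Euclidean distance and NumPy arrays (the function and variable names are illustrative, not taken from the paper):

    import numpy as np

    def sort_means_assign(points, means, prev_assign):
        """One assignment pass of Sort-means: scan candidate means in
        increasing distance from each point's previous mean, and stop as
        soon as the triangle inequality rules out all remaining means."""
        # O(k^2 d): pairwise distances between the k means.
        mm = np.linalg.norm(means[:, None, :] - means[None, :, :], axis=2)
        # O(k^2 log k): for each mean, all means sorted by distance from it.
        order = np.argsort(mm, axis=1)
        assign = prev_assign.copy()
        for i, p in enumerate(points):
            c = prev_assign[i]
            d_pc = np.linalg.norm(p - means[c])  # distance to previous mean
            best, best_d = c, d_pc
            for m in order[c]:
                # If d(c, m) >= 2 d(p, c), the triangle inequality gives
                # d(p, m) >= d(c, m) - d(p, c) >= d(p, c) >= best_d, so this
                # mean, and every later one in the sorted order, is ruled out.
                if mm[c, m] >= 2.0 * d_pc:
                    break
                d_pm = np.linalg.norm(p - means[m])
                if d_pm < best_d:
                    best, best_d = m, d_pm
            assign[i] = best
        return assign

Compare-means can be read off the same sketch: keep the test mm[c, m] >= 2 * d_pc, but drop the presorting and the early break, simply skipping each mean that fails the test.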

We investigate the empirical performance of the algorithms on three datasets drawn from practical applications. As a primary test case, we use the Isodata variant of K-means on a sample of 2.3 million 6-dimensional points drawn from a Landsat-7 satellite image. For this dataset, γ quickly drops to less than log₂ k, and the running time decreases accordingly. For example, a run with k = 100 drops from an hour and a half to sixteen minutes for Compare-means and six and a half minutes for Sort-means. Further experiments show similar improvements on datasets derived from a forestry application and from the analysis of BGP updates in an IP network.

Keywords

Border Gateway Protocol · Forest Cover Type · Forestry Application · Related Clustering Algorithms · Unnecessary Comparisons



Copyright information

© Springer-Verlag Berlin Heidelberg 2002

Authors and Affiliations

  • Steven J. Phillips, AT&T Labs-Research, Florham Park, NJ
