Theoretical Analysis of the k-Means Algorithm – A Survey

  • Johannes Blömer
  • Christiane Lammersen
  • Melanie Schmidt (corresponding author)
  • Christian Sohler
Chapter
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9220)

Abstract

The k-means algorithm is one of the most widely used clustering heuristics. Despite its simplicity, analyzing its running time and approximation quality is surprisingly difficult and can lead to deep insights that can be used to improve the algorithm. In this paper, we survey recent results in this direction as well as several extensions of the basic k-means method.
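
The chapter itself contains no code; as a concrete illustration of the heuristic the abstract refers to, the following is a minimal Python sketch of the classic Lloyd iteration (alternating assignment and centroid steps). The function name `lloyd_kmeans`, the naive uniform seeding, and all parameter names are our own assumptions for this sketch, not the authors' method; better seeding (e.g., k-means++) samples new centers proportionally to squared distances.

```python
import random


def lloyd_kmeans(points, k, iterations=100, seed=0):
    """Cluster `points` (lists of floats) into `k` groups with Lloyd's heuristic."""
    rng = random.Random(seed)
    # Naive uniform seeding (illustrative only); k-means++ would instead pick
    # each new center with probability proportional to its squared distance
    # from the centers chosen so far.
    centers = rng.sample(points, k)
    for _ in range(iterations):
        # Assignment step: attach every point to its nearest current center.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda j: sum((a - b) ** 2
                                            for a, b in zip(p, centers[j])))
            clusters[nearest].append(p)
        # Update step: move each center to the centroid (mean) of its cluster;
        # an empty cluster keeps its old center.
        new_centers = [[sum(xs) / len(c) for xs in zip(*c)] if c else centers[i]
                       for i, c in enumerate(clusters)]
        if new_centers == centers:  # reached a local optimum of the k-means cost
            break
        centers = new_centers
    return centers


if __name__ == "__main__":
    data = [[0.0, 0.0], [0.1, 0.2], [5.0, 5.1], [5.2, 4.9]]
    print(lloyd_kmeans(data, k=2))  # two well-separated clusters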

Keywords

Approximation Algorithm · Cluster Center · Separation Condition · Input Point · Approximation Guarantee

Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  • Johannes Blömer (1)
  • Christiane Lammersen (2)
  • Melanie Schmidt (3), corresponding author
  • Christian Sohler (4)

  1. Department of Computer Science, University of Paderborn, Paderborn, Germany
  2. School of Computing Science, Simon Fraser University, Burnaby, Canada
  3. Computer Science Department, Carnegie Mellon University, Pittsburgh, USA
  4. Department of Computer Science, TU Dortmund University, Dortmund, Germany