Abstract

Aiming to unify known results about clustering mixtures of distributions under separation conditions, Kumar and Kannan [1] introduced a deterministic condition for clustering datasets. They showed that this single deterministic condition encompasses many previously studied clustering assumptions. More specifically, their proximity condition requires that in the target k-clustering, the projection of a point x onto the line joining its cluster center μ and some other center μ′ is a large additive factor closer to μ than to μ′. This additive factor can be roughly described as k times the spectral norm of the matrix representing the differences between the given (known) dataset and the means of the (unknown) target clustering. Clearly, the proximity condition implies center separation: the distance between any two centers must be at least as large as the above-mentioned bound.
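To make the condition concrete, the following NumPy sketch tests which points of a dataset satisfy a proximity condition of this flavor. The function names, the constant c, and the exact form of the additive gap (here a k-independent variant scaled by inverse square roots of the cluster sizes) are illustrative assumptions, not a verbatim transcription of the paper's definition.

```python
import numpy as np

def proximity_margin(x, mu_i, mu_j):
    """Signed margin of the projection of x onto the line through
    mu_i and mu_j: positive when the projection is closer to mu_i."""
    sep = np.linalg.norm(mu_j - mu_i)
    direction = (mu_j - mu_i) / sep
    t = np.dot(x - mu_i, direction)   # coordinate of the projection on the line
    dist_to_i = abs(t)
    dist_to_j = abs(sep - t)
    return dist_to_j - dist_to_i

def satisfies_proximity(A, labels, means, c=100.0):
    """Flag which rows of A meet the (illustrative) proximity condition.

    A:      (n, d) data matrix, one point per row.
    labels: target cluster index of each row.
    means:  (k, d) matrix of target cluster means.
    c:      separation constant of the analysis (value here is arbitrary).
    """
    n, _ = A.shape
    k = means.shape[0]
    C = means[labels]                          # row r holds the mean of r's cluster
    spectral = np.linalg.norm(A - C, ord=2)    # spectral norm ||A - C||
    sizes = np.bincount(labels, minlength=k)

    ok = np.ones(n, dtype=bool)
    for r in range(n):
        i = labels[r]
        for j in range(k):
            if j == i:
                continue
            # Required additive gap; this is the k-independent form.
            gap = c * (1 / np.sqrt(sizes[i]) + 1 / np.sqrt(sizes[j])) * spectral
            if proximity_margin(A[r], means[i], means[j]) < gap:
                ok[r] = False
                break
    return ok
```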

In this paper we improve upon the work of Kumar and Kannan [1] along several axes. First, we weaken the center-separation bound by a factor of \(\sqrt{k}\); second, we weaken the proximity condition by a factor of k (in other words, the revised separation condition is independent of k). Using these weaker bounds we still achieve the same guarantees when all points satisfy the proximity condition. Under the same weaker bounds, we achieve even better guarantees when only a (1 − ε)-fraction of the points satisfy the condition. Specifically, we correctly cluster all but an \((\varepsilon + O(1/c^4))\)-fraction of the points, compared to the \(O(k^2\varepsilon)\)-fraction of [1], which is meaningful even in the particular setting where ε is a constant and k = ω(1). Most importantly, we greatly simplify the analysis of Kumar and Kannan. In fact, in the bulk of our analysis we ignore the proximity condition and use only center separation, along with the simple triangle and Markov inequalities. Yet these basic tools suffice to produce a clustering which (i) is correct on all but a constant fraction of the points, (ii) has k-means cost comparable to the k-means cost of the target clustering, and (iii) has centers very close to the target centers.
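For intuition, here is a minimal sketch of the generic two-step spectral scheme underlying this line of work: project the data onto its top-k singular subspace, then run Lloyd-style k-means iterations in the projected space. The initialization, iteration count, and function name are illustrative assumptions; the paper's algorithm and analysis differ in the details.

```python
import numpy as np

def spectral_kmeans(A, k, n_iter=50, seed=0):
    """Project the rows of A onto the span of the top-k right singular
    vectors, then run plain Lloyd iterations there.  A sketch only:
    random initialization stands in for the paper's prescribed start."""
    rng = np.random.default_rng(seed)

    # Step 1: rank-k projection of the data.
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    A_hat = A @ Vt[:k].T @ Vt[:k]

    # Step 2: Lloyd iterations on the projected points.
    centers = A_hat[rng.choice(len(A_hat), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each point to its nearest current center.
        dists = np.linalg.norm(A_hat[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each center as the mean of its assigned points.
        for i in range(k):
            if np.any(labels == i):
                centers[i] = A_hat[labels == i].mean(axis=0)
    return labels, centers
```

The projection step is what makes center separation usable: it shrinks the spectral norm of the noise while (approximately) preserving the distances between cluster means, so simple distance-based assignment succeeds on most points.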

Our improved separation condition allows us to match the results of the Planted Partition Model of McSherry [2], improve upon the results of Ostrovsky et al. [3], and improve separation results for mixtures of Gaussians in a particular setting.

Keywords

Separation Condition · Singular Vector · Spectral Norm · Main Lemma · True Center

References

  1. Kumar, A., Kannan, R.: Clustering with spectral norm and the k-means algorithm. In: FOCS (2010)
  2. McSherry, F.: Spectral partitioning of random graphs. In: FOCS (2001)
  3. Ostrovsky, R., Rabani, Y., Schulman, L.J., Swamy, C.: The effectiveness of Lloyd-type methods for the k-means problem. In: FOCS, pp. 165–176 (2006)
  4. Dasgupta, S.: Learning mixtures of Gaussians. In: FOCS (1999)
  5. Dasgupta, S., Schulman, L.: A probabilistic analysis of EM for mixtures of separated, spherical Gaussians. J. Mach. Learn. Res. (2007)
  6. Arora, S., Kannan, R.: Learning mixtures of arbitrary Gaussians. In: STOC (2001)
  7. Vempala, S., Wang, G.: A spectral algorithm for learning mixtures of distributions. Journal of Computer and System Sciences (2002)
  8. Achlioptas, D., McSherry, F.: On spectral learning of mixtures of distributions. In: Auer, P., Meir, R. (eds.) COLT 2005. LNCS (LNAI), vol. 3559, pp. 458–469. Springer, Heidelberg (2005)
  9. Chaudhuri, K., Rao, S.: Learning mixtures of product distributions using correlations and independence. In: COLT (2008)
  10. Kannan, R., Salmasian, H., Vempala, S.: The spectral method for general mixture models. SIAM J. Comput. (2008)
  11. Dasgupta, A., Hopcroft, J., Kannan, R., Mitra, P.: Spectral clustering with limited independence. In: SODA (2007)
  12. Brubaker, S.C., Vempala, S.: Isotropic PCA and affine-invariant clustering. In: FOCS (2008)
  13. Coja-Oghlan, A.: Graph partitioning via adaptive spectral techniques. Comb. Probab. Comput. 19, 227–284 (2010)
  14. Awasthi, P., Sheffet, O.: Improved spectral-norm bounds for clustering, full version (2012), http://arxiv.org/abs/1206.3204
  15. Kumar, A., Sabharwal, Y., Sen, S.: A simple linear time (1 + ε)-approximation algorithm for k-means clustering in any dimensions. In: FOCS (2004)
  16. Cohen, W.W., Richman, J.: Learning to match and cluster large high-dimensional data sets for data integration. In: KDD, pp. 475–480 (2002)
  17. Murzin, A.G., Brenner, S.E., Hubbard, T., Chothia, C.: SCOP: a structural classification of proteins database for the investigation of sequences and structures. Journal of Molecular Biology 247(4), 536–540 (1995)
  18. Kannan, R., Vempala, S.: Spectral algorithms. Found. Trends Theor. Comput. Sci. (March 2009)
  19. Golub, G.H., Van Loan, C.F.: Matrix Computations, 3rd edn. Johns Hopkins University Press, Baltimore (1996)
  20. Chaudhuri, K., Rao, S.: Beyond Gaussians: Spectral methods for learning mixtures of heavy-tailed distributions. In: COLT (2008)
  21. Kalai, A.T., Moitra, A., Valiant, G.: Efficiently learning mixtures of two Gaussians. In: STOC, pp. 553–562 (2010)
  22. Moitra, A., Valiant, G.: Settling the polynomial learnability of mixtures of Gaussians. In: FOCS (2010)
  23. Belkin, M., Sinha, K.: Polynomial learning of distribution families. CoRR abs/1004.4, 103–112 (2010)
  24. Schulman, L.J.: Clustering for edge-cost minimization (extended abstract). In: STOC, pp. 547–555 (2000)
  25. Balcan, M.F., Blum, A., Gupta, A.: Approximate clustering without the approximation. In: SODA, pp. 1068–1077 (2009)

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Pranjal Awasthi
  • Or Sheffet

  Carnegie Mellon University, Pittsburgh, USA