Machine Learning, Volume 106, Issue 5, pp 695–712

Non-redundant multiple clustering by nonnegative matrix factorization

Abstract

Clustering is one of the basic tasks in data mining and machine learning; it aims at discovering hidden structure in data. In many real-world applications there exist several different yet meaningful clusterings, whereas most existing clustering methods produce only a single clustering. To address this limitation, multiple clustering, which tries to generate clusterings that are both of high quality and different from each other, has emerged recently. In this paper, we propose a novel alternative clustering method that generates non-redundant multiple clusterings sequentially. The algorithm is built upon nonnegative matrix factorization, and we take advantage of the nonnegativity to enforce non-redundancy. Specifically, we design a quadratic term that measures the redundancy between the reference clustering and the new clustering, and incorporate it into the objective. The resulting optimization problem takes a very simple form and can be solved efficiently by multiplicative update rules. Experimental results demonstrate that the proposed algorithm is comparable to or outperforms existing multiple clustering methods.
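The paper's exact objective and update rules are not reproduced on this page. As a rough illustration of the general idea only, the following minimal NumPy sketch adds a hypothetical quadratic redundancy penalty, lam * ||G^T H^T||_F^2 (with G a nonnegative indicator of the reference clustering and H the new soft-assignment matrix), to a standard NMF objective and absorbs its nonnegative gradient term into the denominator of the multiplicative update for H. The function name, the specific penalty, and the parameter defaults are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

def nmf_alternative_clustering(X, G, k, lam=1.0, n_iter=200, eps=1e-9, seed=0):
    """Sketch of NMF-based alternative (non-redundant) clustering.

    X   : (m, n) nonnegative data matrix, one sample per column.
    G   : (n, c) nonnegative indicator matrix of the reference clustering
          (G[j, r] = 1 if sample j belongs to reference cluster r).
    k   : number of clusters in the new, alternative clustering.
    lam : weight of the hypothetical redundancy penalty lam * ||G^T H^T||_F^2;
          this penalty is an illustrative stand-in, not the paper's exact term.
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, k))          # basis matrix
    H = rng.random((k, n))          # coefficient / soft-assignment matrix

    GGt = G @ G.T                   # (n, n), precomputed once
    for _ in range(n_iter):
        # Standard Lee-Seung multiplicative update for W.
        W *= (X @ H.T) / (W @ H @ H.T + eps)
        # Update for H: the penalty's gradient term is nonnegative, so it is
        # simply added to the denominator, which pushes H away from the
        # reference clustering while keeping it nonnegative.
        H *= (W.T @ X) / (W.T @ W @ H + lam * (H @ GGt) + eps)

    labels = H.argmax(axis=0)       # read off the new cluster assignments
    return W, H, labels
```

Because every factor in the penalty's gradient is nonnegative, it can be folded directly into the denominator of the multiplicative update, so H remains nonnegative throughout; this is the sense in which the nonnegative property can be exploited to enforce non-redundancy.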

Keywords

Multiple clustering · Alternative clustering · Nonnegative matrix factorization · Multiplicative updating

Copyright information

© The Author(s) 2016

Authors and Affiliations

  1. National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China