Parameter-Free Hierarchical Co-clustering by n-Ary Splits

  • Dino Ienco
  • Ruggero G. Pensa
  • Rosa Meo
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5781)

Abstract

Clustering high-dimensional data is challenging: classical distance metrics fail to capture real similarities between objects, and the huge number of features makes cluster interpretation hard. To tackle these problems, several co-clustering approaches have been proposed, which compute a partition of the objects and a partition of the features simultaneously. Unfortunately, these approaches identify only a predefined number of flat co-clusters. It is more useful when the clusters are arranged in a hierarchical fashion, because the hierarchy provides insight into their structure. In this paper we propose a novel hierarchical co-clustering method, which builds two coupled hierarchies, one on the objects and one on the features, thus providing insight into both of them. Our approach does not require a pre-specified number of clusters, and it produces compact hierarchies because it performs n-ary splits, where n is determined automatically. We validate our approach on several high-dimensional datasets against state-of-the-art competitors.
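To make the idea of automatically chosen n-ary splits concrete, the following is a minimal Python sketch. It is not the authors' algorithm: it stands in k-means for the split procedure and normalized mutual information between the induced row and column partitions for the objective (neither is specified by the abstract), and it caps the scan at a hypothetical max_n. It only illustrates how the arity n of a split can be selected by scoring candidate splits rather than being supplied by the user.

```python
# Illustrative sketch only: k-means and NMI are stand-ins, not the
# paper's actual split procedure or objective function.
import numpy as np
from sklearn.cluster import KMeans

def cocluster_nmi(X, row_labels, col_labels):
    """Normalized mutual information between the row-cluster and
    column-cluster marginals of the contingency table built from X."""
    R, C = row_labels.max() + 1, col_labels.max() + 1
    P = np.zeros((R, C))
    for a in range(R):
        for b in range(C):
            # Mass of the co-cluster (a, b): sum of X over its block.
            P[a, b] = X[row_labels == a][:, col_labels == b].sum()
    P /= P.sum()
    Pr, Pc = P.sum(axis=1), P.sum(axis=0)
    nz = P > 0
    mi = float((P[nz] * np.log(P[nz] / np.outer(Pr, Pc)[nz])).sum())
    hr = -float((Pr[Pr > 0] * np.log(Pr[Pr > 0])).sum())
    hc = -float((Pc[Pc > 0] * np.log(Pc[Pc > 0])).sum())
    return mi / np.sqrt(hr * hc) if hr > 0 and hc > 0 else 0.0

def best_nary_split(X, col_labels, max_n=8, seed=0):
    """Scan candidate arities 2..max_n and keep the n-ary row split that
    maximizes the association score; no n is supplied by the user."""
    col_labels = np.asarray(col_labels)
    best_n, best_labels, best_score = 1, np.zeros(X.shape[0], dtype=int), 0.0
    for n in range(2, max_n + 1):
        labels = KMeans(n_clusters=n, n_init=10,
                        random_state=seed).fit_predict(X)
        score = cocluster_nmi(X, labels, col_labels)
        if score > best_score:
            best_n, best_labels, best_score = n, labels, score
    return best_n, best_labels, best_score

# Toy usage: a noisy two-block matrix; the column labels play the role of
# a feature-side partition from an earlier (hypothetical) split.
rng = np.random.default_rng(0)
X = np.vstack([np.hstack([np.ones((5, 4)), np.zeros((5, 6))]),
               np.hstack([np.zeros((5, 4)), np.ones((5, 6))])])
X += 0.05 * rng.random(X.shape)  # keep entries nonnegative
col_labels = np.array([0] * 4 + [1] * 6)
print(best_nary_split(X, col_labels, max_n=4))  # expected arity: 2
```

In the paper's setting such a split step would be applied recursively, and in a coupled fashion, to both the object and the feature hierarchies; the sketch above shows only a single object-side split.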

Keywords

Mutual Information · Normalized Mutual Information · Rand Index · Adjusted Rand Index · Cluster Hierarchy


Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Dino Ienco (1)
  • Ruggero G. Pensa (1)
  • Rosa Meo (1)

  1. Department of Computer Science, University of Torino, Turin, Italy
