A k-Means-Like Algorithm for Clustering Categorical Data Using an Information Theoretic-Based Dissimilarity Measure

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9616)

Abstract

Clustering large datasets is an important research problem for many machine learning applications. The k-means algorithm is popular and widely used because of its ease of implementation, its linear time complexity in the size of the data, and its guaranteed convergence to a local optimum. However, because it operates only on numerical data, it cannot be applied directly to categorical data. In this paper, we introduce an extension of the k-means algorithm for clustering categorical data. Specifically, we propose a new dissimilarity measure based on an information-theoretic definition of similarity that takes into account the amount of information carried by two values in the attribute's domain. The definition of cluster centers is generalized using a kernel density estimation approach. The new algorithm then incorporates a feature-weighting scheme that automatically measures the contribution of individual attributes to the clusters. To demonstrate the performance of the new algorithm, we conduct a series of experiments on real datasets from the UCI Machine Learning Repository and compare the results with those of several previously developed algorithms for clustering categorical data.
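
To illustrate the general idea only, the following is a minimal Python sketch of a k-means-like loop for categorical data. It uses a Lin-style information-theoretic dissimilarity computed from value frequencies and represents each cluster center as per-attribute value distributions; this is an assumed, simplified reading of the abstract, not the authors' algorithm. The paper's kernel-density-based centers and feature-weighting scheme are omitted, and all function names are hypothetical.

```python
import random
from collections import Counter
from math import log


def value_probs(data, attr):
    """Relative frequency of each value of attribute `attr` across the dataset."""
    counts = Counter(row[attr] for row in data)
    n = len(data)
    return {v: c / n for v, c in counts.items()}


def lin_dissimilarity(x, y, probs):
    """Lin-style dissimilarity between two values of one categorical attribute:
    matches cost 0; mismatches between frequent values cost more than
    mismatches between rare ones.  The result lies in [0, 1]."""
    if x == y:
        return 0.0
    px, py = probs[x], probs[y]
    # Lin (1998): sim(x, y) = 2*log(p(x) + p(y)) / (log p(x) + log p(y))
    return 1.0 - 2.0 * log(px + py) / (log(px) + log(py))


def update_center(rows, n_attrs):
    """A cluster center is the per-attribute distribution of values among its
    members (a crude stand-in for the paper's generalized centers)."""
    center = []
    for attr in range(n_attrs):
        counts = Counter(row[attr] for row in rows)
        total = sum(counts.values())
        center.append({v: c / total for v, c in counts.items()})
    return center


def object_to_center(row, center, probs_per_attr):
    """Expected dissimilarity of an object to a distribution-valued center."""
    return sum(
        weight * lin_dissimilarity(row[attr], value, probs_per_attr[attr])
        for attr, dist in enumerate(center)
        for value, weight in dist.items()
    )


def cluster_categorical(data, k, n_iter=20, seed=0):
    """k-means-like alternation of assignment and center-update steps."""
    rng = random.Random(seed)
    n_attrs = len(data[0])
    probs_per_attr = [value_probs(data, attr) for attr in range(n_attrs)]
    centers = [update_center([row], n_attrs) for row in rng.sample(data, k)]
    labels = [0] * len(data)
    for _ in range(n_iter):
        # Assignment step: nearest center under the expected Lin dissimilarity.
        labels = [
            min(range(k), key=lambda c: object_to_center(row, centers[c], probs_per_attr))
            for row in data
        ]
        # Update step: recompute each center's per-attribute frequencies.
        for c in range(k):
            members = [row for row, label in zip(data, labels) if label == c]
            if members:
                centers[c] = update_center(members, n_attrs)
    return labels, centers


# Toy usage on rows of categorical attributes (hypothetical data).
rows = [
    ("red", "round", "small"), ("red", "oval", "small"),
    ("blue", "square", "large"), ("blue", "square", "medium"),
    ("green", "round", "small"),
]
labels, centers = cluster_categorical(rows, k=2, seed=1)
print(labels)
```

The sketch keeps the k-means structure (alternating assignment and center updates) while replacing the Euclidean distance with a frequency-aware categorical dissimilarity; adding per-attribute weights to the assignment step would be one way to approximate the feature-weighting idea described above.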

Keywords

Cluster analysis · Categorical data clustering · k-means · Dissimilarity measures


Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  1. School of Knowledge Science, Japan Advanced Institute of Science and Technology, Nomi, Japan
