Abstract

This chapter presents a tutorial overview of the main clustering methods used in Data Mining. The goal is to provide a self-contained review of the concepts and the mathematics underlying clustering techniques. The chapter begins with the measures and criteria used to determine whether two objects are similar or dissimilar. The clustering methods themselves are then presented, grouped into hierarchical, partitioning, density-based, model-based, grid-based, and soft-computing methods. The challenges of clustering large data sets are discussed next. Finally, the chapter describes how to determine the number of clusters.
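Purely as an illustration of the vocabulary used above and in the keywords below (a Euclidean dissimilarity measure, a partitioning method, and an intra-cluster homogeneity criterion), a minimal K-means sketch in Python/NumPy follows. It is not the chapter's own pseudocode; the function names (euclidean, kmeans, within_cluster_sse) and the toy data are invented for this example.

    import numpy as np

    def euclidean(a, b):
        # Dissimilarity between two objects: Euclidean (L2) distance.
        return float(np.sqrt(np.sum((a - b) ** 2)))

    def kmeans(X, k, max_iter=100, seed=0):
        rng = np.random.default_rng(seed)
        # Initialise the centroids with k distinct objects drawn at random.
        centroids = X[rng.choice(len(X), size=k, replace=False)].copy()
        labels = np.full(len(X), -1)
        for _ in range(max_iter):
            # Assignment step: each object joins the cluster of its nearest centroid.
            new_labels = np.array(
                [np.argmin([euclidean(x, c) for c in centroids]) for x in X]
            )
            # Update step: move each centroid to the mean of its current members.
            for j in range(k):
                if np.any(new_labels == j):
                    centroids[j] = X[new_labels == j].mean(axis=0)
            if np.array_equal(new_labels, labels):
                break  # assignments are stable, so the partition has converged
            labels = new_labels
        return labels, centroids

    def within_cluster_sse(X, labels, centroids):
        # Intra-cluster homogeneity criterion: sum of squared distances of
        # each object to the centroid of its own cluster (smaller is tighter).
        return float(sum(euclidean(x, centroids[l]) ** 2 for x, l in zip(X, labels)))

    # Toy usage on two well-separated groups of points in the plane.
    X = np.array([[1.0, 1.0], [1.2, 0.8], [0.9, 1.1],
                  [8.0, 8.0], [8.2, 7.9], [7.8, 8.1]])
    labels, centroids = kmeans(X, k=2)
    print(labels)                                    # e.g. [0 0 0 1 1 1]
    print(within_cluster_sse(X, labels, centroids))  # small for a good partition

Choosing the number of clusters k is treated in the chapter's final section; one common heuristic, consistent with the sketch above, is to run the algorithm for several values of k and look for an "elbow" in the within-cluster sum of squared errors.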

Keywords

Clustering, K-means, Intra-cluster homogeneity, Inter-cluster separability

Copyright information

© Springer Science+Business Media, Inc. 2005

Authors and Affiliations

  • Lior Rokach, Department of Industrial Engineering, Tel-Aviv University, Israel
  • Oded Maimon, Department of Industrial Engineering, Tel-Aviv University, Israel