
Unsupervised Learning for Data Clustering Based Image Segmentation

  • Xiaochun Wang
  • Xiali Wang
  • Don Mitchell Wilkes
Chapter

Abstract

The purpose of this chapter is to introduce, in a fairly concise manner, the key ideas underlying the field of unsupervised learning from the perspective of clustering for image segmentation tasks. We begin with a brief review of fundamental concepts in clustering and a quick tour of its four basic models, namely partitioning-based, hierarchical, density-based, and graph-based approaches. This is followed by a short introduction to distance measures and a brief review of performance evaluation metrics for clustering algorithms. This introduction is necessarily incomplete given the enormous range of topics that fall under the rubric of clustering. The hope is to provide a tutorial-level view of the field, so that the topics covered here can be explored in greater depth and state-of-the-art research can be touched upon in the next four chapters.
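
As a purely illustrative sketch (not taken from the chapter), the short NumPy example below shows the partitioning-based model in its simplest form: Lloyd's k-means applied to pixel colours, so that each image segment is the set of pixels assigned to one cluster centre. The function name kmeans_segment, the choice of k = 4, and the toy random image are assumptions made here for illustration only.

```python
# Illustrative sketch: plain-NumPy k-means used as partitioning-based
# clustering for colour image segmentation. Assumes the image is an
# H x W x 3 float array in [0, 1]; k and n_iter are example choices.
import numpy as np

def kmeans_segment(image, k=4, n_iter=20, seed=0):
    h, w, c = image.shape
    pixels = image.reshape(-1, c)              # each pixel is a point in colour space
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)]  # random initial centroids

    for _ in range(n_iter):
        # Assignment step: label each pixel with its nearest centroid.
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: move each centroid to the mean of its assigned pixels.
        for j in range(k):
            members = pixels[labels == j]
            if len(members) > 0:
                centers[j] = members.mean(axis=0)

    # Paint each pixel with its cluster centre to visualise the segmentation.
    segmented = centers[labels].reshape(h, w, c)
    return labels.reshape(h, w), segmented

if __name__ == "__main__":
    # Toy example on a random "image"; a real use would load a photograph instead.
    img = np.random.default_rng(1).random((64, 64, 3))
    label_map, seg = kmeans_segment(img, k=4)
    print(label_map.shape, seg.shape)
```

In practice, seeding strategies such as k-means++ and the other three model families (hierarchical, density-based, and graph-based) discussed in the chapter address the sensitivity of this basic scheme to initialisation and to non-convex cluster shapes.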

Keywords

Unsupervised learning · Clustering · Partitioning-based clustering · Hierarchical clustering · Density-based clustering · Graph-based clustering · Distance measures · Internal evaluation index · External evaluation index

Copyright information

© Xi'an Jiaotong University Press 2020

Authors and Affiliations

  • Xiaochun Wang
    • 1
  • Xiali Wang
    • 2
  • Don Mitchell Wilkes
    • 3
  1. School of Software Engineering, Xi’an Jiaotong University, Xi’an, China
  2. School of Information Engineering, Chang’an University, Xi’an, China
  3. Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, USA
