Posterior Probability Convergence of k-NN Classification and K-Means Clustering

  • Heysem Kaya
  • Olcay Kurşun
  • Fikret Gürgen
Conference paper


Centroid-based clustering methods such as K-Means form Voronoi cells whose radii are inversely proportional to the number of clusters, K, and by the Law of Large Numbers the expected posterior probability distribution in the closest cluster is related to that of a k-Nearest Neighbor (k-NN) classifier. The aim of this study is to examine the relationship between these two seemingly different concepts of clustering and classification, more specifically, the relationship between the k of k-NN and the K of K-Means. One particular application area of this correspondence is local learning. The study provides experimental evidence of convergence and a complexity analysis to address the relative advantages of the two methods in local learning applications.
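The claimed correspondence can be illustrated with a small simulation. The sketch below is not from the paper's experiments: it uses hypothetical synthetic two-class Gaussian data, a plain Lloyd's-iteration K-Means, and illustrative function names (`knn_posterior`, `cluster_posterior`). It compares the posterior estimate of a k-NN classifier (fraction of class-1 labels among the k nearest neighbors of a query) with the label proportion inside the nearest K-Means cluster, choosing k ≈ N/K so that the k-NN neighborhood and the Voronoi cell cover comparable numbers of points.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data: 2-D Gaussians centered at (-1, 0) and (+1, 0).
n = 2000
X0 = rng.normal([-1.0, 0.0], 1.0, size=(n, 2))
X1 = rng.normal([+1.0, 0.0], 1.0, size=(n, 2))
X = np.vstack([X0, X1])
y = np.array([0] * n + [1] * n)

def knn_posterior(X, y, q, k):
    """Fraction of class-1 labels among the k nearest neighbors of query q."""
    d = np.linalg.norm(X - q, axis=1)
    idx = np.argsort(d)[:k]
    return y[idx].mean()

def kmeans(X, K, iters=50):
    """Plain Lloyd's algorithm: returns centroids and final point assignments."""
    C = X[rng.choice(len(X), K, replace=False)]
    for _ in range(iters):
        a = np.argmin(((X[:, None, :] - C[None, :, :]) ** 2).sum(-1), axis=1)
        for j in range(K):
            if np.any(a == j):                 # keep old centroid if cluster empties
                C[j] = X[a == j].mean(axis=0)
    return C, a

def cluster_posterior(X, y, C, a, q):
    """Fraction of class-1 labels in the cluster whose centroid is nearest to q."""
    j = np.argmin(((C - q) ** 2).sum(-1))
    return y[a == j].mean()

K = 20
q = np.array([0.3, 0.0])                        # query point near the class boundary
C, a = kmeans(X, K)
p_knn = knn_posterior(X, y, q, k=len(X) // K)   # k ~ N/K points, matching a cell
p_clu = cluster_posterior(X, y, C, a, q)
print(f"k-NN posterior: {p_knn:.2f}  nearest-cluster posterior: {p_clu:.2f}")
```

With k matched to the expected cell population N/K, the two estimates average the labels of roughly the same local neighborhood, so they should land close to each other (and to the true posterior, about 0.65 at this query for these Gaussians), which is the convergence the abstract refers to.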


Keywords: Clustering · K-Means · K-Medoids · k-NN classification · Local learning



Copyright information

© Springer-Verlag London 2013

Authors and Affiliations

  1. Department of Computer Engineering, Bogazici University, Bebek, Turkey
  2. Department of Computer Engineering, Istanbul University, Avcilar, Turkey
