Encyclopedia of Database Systems

2018 Edition
| Editors: Ling Liu, M. Tamer Özsu

Nearest Neighbor Classification

  • Thomas Seidl
Reference work entry
DOI: https://doi.org/10.1007/978-1-4614-8265-9_561


Synonyms

k-nearest neighbor classification; k-NN classification; NN classification


Definition

Nearest neighbor classification is a machine learning method that labels previously unseen query objects by assigning them to one of two or more target classes. Like any classifier, it generally requires training data with given labels and is thus an instance of supervised learning. In the simplest variant, the query object inherits the label of the closest sample object in the training set. Common variants extend the decision set from the single nearest neighbor within the training data to the set of k nearest neighbors for any k > 1. The decision rule combines the labels of these k decision objects, either by simple majority voting or by a distance-based or frequency-based weighting scheme, to predict the label of the query object. Mean-based nearest neighbor classifiers group the training data and work on the means of classes rather than on the individual...
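The decision rule described above can be sketched as follows; this is a minimal illustration with plain majority voting, not the entry's own implementation, and all function and variable names are made up for the example:

```python
from collections import Counter
import math

def knn_classify(training, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training samples. `training` is a list of (vector, label) pairs."""
    # Rank the training samples by Euclidean distance to the query
    # and keep the k closest ones as the decision set.
    neighbors = sorted(
        training,
        key=lambda pair: math.dist(pair[0], query),
    )[:k]
    # Simple majority vote over the labels of the decision set.
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Toy training set with two classes.
train = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"),
         ((4.0, 4.0), "B"), ((4.2, 3.9), "B")]
print(knn_classify(train, (1.1, 0.9), k=3))  # "A"
```

A distance-based weighting scheme, as mentioned above, would replace the plain vote count by a sum of weights (e.g., inverse distances) per label, so that closer neighbors influence the decision more strongly.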


Recommended Reading

  1. Ankerst M, Kastenmüller G, Kriegel H-P, Seidl T. Nearest neighbor classification in 3D protein databases. In: Proceedings of the 7th International Conference on Intelligent Systems for Molecular Biology; 1999. p. 34–43.
  2. Athitsos V. Nearest neighbor retrieval and classification. 2007. Available at: http://cs-people.bu.edu/athitsos/nearest-neighbors/
  3. Djouadi A, Bouktache E. A fast algorithm for the nearest-neighbor classifier. IEEE Trans Pattern Anal Mach Intell. 1997;19(3):277–82.
  4. Duda RO, Hart PE, Stork DG. Pattern classification. 2nd ed. New York: Wiley; 2001.
  5. Efros AA, Berg AC, Mori G, Malik J. Recognizing action at a distance. In: Proceedings of the 9th IEEE Conference on Computer Vision; 2003. p. 726–33.
  6. Ghosh AK, Chaudhuri P, Murthy CA. On visualization and aggregation of nearest neighbor classifiers. IEEE Trans Pattern Anal Mach Intell. 2005;27(10):1592–602.
  7. Han J, Kamber M. Data mining: concepts and techniques. 2nd ed. Amsterdam: Elsevier; 2006.
  8. Hastie T, Tibshirani R, Friedman J. The elements of statistical learning: data mining, inference, and prediction. Springer series in statistics. New York: Springer; 2001.
  9. Kriegel H-P, Pryakhin A, Schubert M. Multi-represented kNN-classification for large class sets. In: Proceedings of the 10th International Conference on Database Systems for Advanced Applications; 2005. p. 511–22.
  10. Li Y, Yang J, Han J. Continuous K-nearest neighbor search for moving objects. In: Proceedings of the 16th International Conference on Scientific and Statistical Database Management; 2004. p. 123–6.
  11. Shibata T, Kato T, Wada T. K-D decision tree: an accelerated and memory efficient nearest neighbor classifier. In: Proceedings of the 2003 IEEE International Conference on Data Mining; 2003. p. 641–4.
  12. Veenman CJ, Reinders MJT. The nearest subclass classifier: a compromise between the nearest mean and nearest neighbor classifier. IEEE Trans Pattern Anal Mach Intell. 2005;27(9):1417–29.

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  1. RWTH Aachen University, Aachen, Germany

Section editors and affiliations

  • Kyuseok Shim
  1. School of Elec. Eng. and Computer Science, Seoul National Univ., Seoul, Republic of Korea