Accuracy and Specificity Trade-off in \(k\)-nearest Neighbors Classification

  • Luis Herranz
  • Shuqiang Jiang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9004)


The \(k\)-NN rule is a simple, flexible and widely used non-parametric decision method, closely connected to many problems in image classification and retrieval, such as annotation and content-based search. As the number of classes grows and finer-grained classification is required (e.g. a specific dog breed), high accuracy is often unattainable, and the system will frequently suggest a wrong label. Predicting a broader concept (e.g. dog), however, is much more reliable and still useful in practice. Sacrificing some specificity for a more secure prediction is therefore often desirable. This problem has recently been posed in terms of an accuracy-specificity trade-off. In this paper we study the accuracy-specificity trade-off in \(k\)-NN classification, evaluating the impact of two related techniques: posterior probability estimation and metric learning. Experimental results show that a proper combination of \(k\)-NN and metric learning is very effective and achieves good performance.
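The hedging idea described above can be sketched in a few lines. The following Python example is purely illustrative, not the authors' implementation: the toy label hierarchy, the function names and the confidence threshold are all assumptions. It estimates class posteriors as \(k\)-NN vote fractions and backs off to the broader parent concept when the specific prediction is not confident enough.

```python
from collections import Counter
import math

# Hypothetical toy hierarchy (assumption for illustration):
# each specific label maps to a broader parent concept.
PARENT = {"poodle": "dog", "beagle": "dog",
          "siamese": "cat", "persian": "cat"}

def knn_posteriors(query, data, labels, k=5):
    """Plug-in posterior estimate: the fraction of votes for each
    class among the k nearest neighbors (Euclidean distance)."""
    nearest = sorted(range(len(data)),
                     key=lambda i: math.dist(query, data[i]))[:k]
    votes = Counter(labels[i] for i in nearest)
    return {c: n / k for c, n in votes.items()}

def hedged_predict(query, data, labels, k=5, threshold=0.7):
    """Predict the specific label if its estimated posterior reaches
    the threshold; otherwise back off to the broader parent concept,
    trading specificity for a more reliable prediction."""
    post = knn_posteriors(query, data, labels, k)
    best, p = max(post.items(), key=lambda kv: kv[1])
    if p >= threshold:
        return best, p
    # Aggregate posterior mass at the parent level and predict there.
    parent_post = Counter()
    for c, pc in post.items():
        parent_post[PARENT[c]] += pc
    return max(parent_post.items(), key=lambda kv: kv[1])
```

A query falling between two dog breeds then yields the confident broader label "dog" rather than an unreliable breed guess. To study the interaction with metric learning, the Euclidean `math.dist` could be replaced by a distance under a learned Mahalanobis metric.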


Keywords: Feature Space · Leaf Node · Semantic Similarity · Information Gain · Pairwise Constraint
(These keywords were added by machine, not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.)



This work was supported in part by the National Natural Science Foundation of China: 61322212, 61035001 and 61350110237, in part by the Key Technologies R&D Program of China: 2012BAH18B02, in part by National Hi-Tech Development Program (863 Program) of China: 2014AA015202, and in part by the Chinese Academy of Sciences Fellowships for Young International Scientists: 2011Y1GB05.



Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
