# Boosting *k*-NN for Categorization of Natural Scenes


## Abstract

The *k*-nearest neighbors (*k*-NN) classification rule has proven extremely successful in countless computer vision applications. For example, image categorization often relies on uniform voting among the nearest prototypes in the space of descriptors. In spite of its good generalization properties and its natural extension to multi-class problems, the classic *k*-NN rule suffers from high variance when dealing with sparse prototype datasets in high dimensions. A few techniques have been proposed to improve *k*-NN classification, which rely on either deforming the nearest neighborhood relationship by learning a distance function or modifying the input space by means of subspace selection. From the computational standpoint, many methods have been proposed for speeding up nearest neighbor retrieval, both for multidimensional vector spaces and for nonvector spaces induced by computationally expensive distance measures.
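As a concrete baseline, the classic uniform-voting rule mentioned above can be sketched in a few lines (a minimal illustration on toy 2-D data, not the paper's implementation; descriptor spaces in the experiments are far higher-dimensional):

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classic k-NN rule: uniform (majority) vote among the k nearest prototypes."""
    # Euclidean distances from the query to every prototype
    dists = np.linalg.norm(X_train - x, axis=1)
    nn = np.argsort(dists)[:k]          # indices of the k closest prototypes
    labels, counts = np.unique(y_train[nn], return_counts=True)
    return labels[np.argmax(counts)]    # each neighbor casts one equal vote

# Toy prototype set with two classes
X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
print(knn_predict(X, y, np.array([0.05, 0.1]), k=3))   # majority of the 3 neighbors
```

With few prototypes in high dimensions, every neighbor casting an equal vote is exactly what makes this rule high-variance: a single mislabeled or atypical prototype near the query can flip the decision.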

In this paper, we propose a novel boosting approach for generalizing the *k*-NN rule, by providing a new *k*-NN boosting algorithm, called UNN (Universal Nearest Neighbors), for the *induction* of *leveraged* *k*-NN. We emphasize that UNN is a formal boosting algorithm in the original boosting terminology. Our approach consists of redefining the voting rule as a strong classifier that linearly combines predictions from the *k* closest prototypes. Therefore, the *k* nearest neighbor examples act as weak classifiers and their weights, called *leveraging coefficients*, are learned by UNN so as to minimize a *surrogate risk*, which upper bounds the empirical misclassification rate over training data. These leveraging coefficients allow us to distinguish the most relevant prototypes for a given class. Indeed, UNN does not affect the *k*-nearest neighborhood relationship, but rather acts on top of *k*-NN search.
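The leveraged voting rule can be sketched as follows. The leveraging coefficients `alpha` are fixed by hand here purely for illustration; in UNN they are learned by minimizing the surrogate risk over the training data:

```python
import numpy as np

def leveraged_knn_predict(X_train, y_train, alpha, x, k=3):
    """Leveraged k-NN: the k nearest prototypes act as weak classifiers,
    each weighted by its leveraging coefficient alpha[j]."""
    dists = np.linalg.norm(X_train - x, axis=1)
    nn = np.argsort(dists)[:k]          # neighborhood relationship is unchanged
    classes = np.unique(y_train)
    # Strong classifier: per-class sum of the leveraging coefficients
    # of the neighbors voting for that class
    scores = {c: alpha[nn][y_train[nn] == c].sum() for c in classes}
    return max(scores, key=scores.get)

X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
# Hand-picked coefficients (UNN would learn these): prototypes 2 and 3
# are treated as far more relevant for their class than prototypes 0 and 1.
alpha = np.array([0.2, 0.1, 1.5, 1.4])
print(leveraged_knn_predict(X, y, alpha, np.array([0.4, 0.5]), k=3))
```

For this query, two of the three nearest neighbors belong to class 0, so uniform voting would output class 0; the leveraged rule outputs class 1 because the class-1 neighbor carries a much larger coefficient. This illustrates how leveraging can overrule an unreliable local majority without touching the neighborhood search itself.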

We carried out experiments comparing UNN to *k*-NN, support vector machines (SVM) and AdaBoost on categorization of natural scenes, using state-of-the-art image descriptors (Gist and Bag-of-Features) on real images from Oliva and Torralba (Int. J. Comput. Vis. 42(3):145–175, 2001), Fei-Fei and Perona (IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), pp. 524–531, 2005), and Xiao et al. (IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3485–3492, 2010). Results show that UNN competes with or beats the other contenders, while achieving comparatively small training and testing times.

## Keywords

Boosting · *k*-nearest neighbors · Image categorization · Scene classification

## Notes

### Acknowledgements

The authors would like to thank the reviewers for stimulating comments and discussions about our results, which helped to significantly improve the paper, and Dario Giampaglia and John Tassone for their help in handling experiments. The software UNN is available upon request to Michel Barlaud.

## References

- Amores, J., Sebe, N., & Radeva, P. (2006). Boosting the distance estimation: application to the *k*-nearest neighbor classifier. *Pattern Recognition Letters*, *27*(3), 201–209.
- Athitsos, V., Alon, J., Sclaroff, S., & Kollios, G. (2008). BoostMap: an embedding method for efficient nearest neighbor retrieval. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, *30*(1), 89–104.
- Bartlett, P., & Traskin, M. (2007). AdaBoost is consistent. *Journal of Machine Learning Research*, *8*, 2347–2368.
- Bartlett, P., Jordan, M., & McAuliffe, D. (2006). Convexity, classification, and risk bounds. *Journal of the American Statistical Association*, *101*, 138–156.
- Bel Haj Ali, W., Piro, P., Crescence, L., Giampaglia, D., Ferhat, O., Darcourt, J., Pourcher, T., & Barlaud, M. (2012). Changes in the subcellular localization of a plasma membrane protein studied by bioinspired UNN learning classification of biologic cell images. In *International conference on computer vision theory and applications (VISAPP)*.
- Boutell, R., Luo, J., Shen, X., & Brown, C. M. (2004). Learning multi-label scene classification. *Pattern Recognition*, *37*(9), 1757–1771.
- Brighton, H., & Mellish, C. (2002). Advances in instance selection for instance-based learning algorithms. *Data Mining and Knowledge Discovery*, *6*, 153–172.
- Cucala, L., Marin, J. M., Robert, C. P., & Titterington, D. M. (2009). A Bayesian reassessment of nearest-neighbor classification. *Journal of the American Statistical Association*, *104*(485), 263–273.
- Dudani, S. (1976). The distance-weighted *k*-nearest-neighbor rule. *IEEE Transactions on Systems, Man and Cybernetics*, *6*(4), 325–327.
- Escolano Ruiz, F., Suau Pérez, P., & Bonev, B. I. (2009). *Information theory in computer vision and pattern recognition*. Berlin: Springer.
- Fei-Fei, L., & Perona, P. (2005). A Bayesian hierarchical model for learning natural scene categories. In *IEEE computer society conference on computer vision and pattern recognition (CVPR)* (pp. 524–531).
- Fukunaga, K., & Flick, T. (1984). An optimal global nearest neighbor metric. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, *6*(3), 314–318.
- García-Pedrajas, N., & Ortiz-Boyer, D. (2009). Boosting *k*-nearest neighbor classifier by means of input space projection. *Expert Systems with Applications*, *36*(7), 10570–10582.
- Gionis, A., Indyk, P., & Motwani, R. (1999). Similarity search in high dimensions via hashing. In *Proc. international conference on very large databases* (pp. 518–529).
- Grauman, K., & Darrell, T. (2005). The pyramid match kernel: discriminative classification with sets of image features. In *IEEE international conference on computer vision (ICCV)* (pp. 1458–1465).
- Gupta, L., Pathangay, V., Patra, A., Dyana, A., & Das, S. (2007). Indoor versus outdoor scene classification using probabilistic neural network. *EURASIP Journal on Applied Signal Processing*, *2007*(1), 123.
- Hart, P. E. (1968). The condensed nearest neighbor rule. *IEEE Transactions on Information Theory*, *14*, 515–516.
- Hastie, T., & Tibshirani, R. (1996). Discriminant adaptive nearest neighbor classification. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, *18*(6), 607–616.
- Holmes, C. C., & Adams, N. M. (2003). Likelihood inference in nearest-neighbour classification models. *Biometrika*, *90*, 99–112.
- Hsu, C. W., Chang, C. C., & Lin, C. J. (2003). *A practical guide to support vector classification*. Tech. rep.
- Jégou, H., Douze, M., & Schmid, C. (2011). Product quantization for nearest neighbor search. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, *33*(1), 117–128.
- Kakade, S., Shalev-Shwartz, S., & Tewari, A. (2009). *Applications of strong convexity–strong smoothness duality to learning with matrices*. Tech. rep.
- Lazebnik, S., Schmid, C., & Ponce, J. (2006). Beyond bags of features: spatial pyramid matching for recognizing natural scene categories. In *IEEE computer society conference on computer vision and pattern recognition (CVPR)* (pp. 2169–2178).
- Lowe, D. G. (2004). Distinctive image features from scale-invariant keypoints. *International Journal of Computer Vision*, *60*(2), 91–110.
- Masip, D., & Vitrià, J. (2006). Boosted discriminant projections for nearest neighbor classification. *Pattern Recognition*, *39*(2), 164–170.
- Nguyen, X., Wainwright, M. J., & Jordan, M. I. (2009). On surrogate loss functions and *f*-divergences. *Annals of Statistics*, *37*, 876–904.
- Nock, R., & Nielsen, F. (2009a). Bregman divergences and surrogates for learning. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, *31*(11), 2048–2059.
- Nock, R., & Nielsen, F. (2009b). On the efficient minimization of classification calibrated surrogates. In *Advances in neural information processing systems 21 (NIPS)* (pp. 1201–1208).
- Nock, R., & Sebban, M. (2001). An improved bound on the finite-sample risk of the nearest neighbor rule. *Pattern Recognition Letters*, *22*(3/4), 407–412.
- Oliva, A., & Torralba, A. (2001). Modeling the shape of the scene: a holistic representation of the spatial envelope. *International Journal of Computer Vision*, *42*(3), 145–175.
- Paredes, R. (2006). Learning weighted metrics to minimize nearest-neighbor classification error. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, *28*(7), 1100–1110.
- Payne, A., & Singh, S. (2005). Indoor vs. outdoor scene classification in digital photographs. *Pattern Recognition*, *38*(10), 1533–1545.
- Piro, P., Nock, R., Nielsen, F., & Barlaud, M. (2012). Leveraging *k*-NN for generic classification boosting. *Neurocomputing*, *80*, 3–9.
- Quattoni, A., & Torralba, A. (2009). Recognizing indoor scenes. In *IEEE computer society conference on computer vision and pattern recognition (CVPR)*.
- Schapire, R. E., & Singer, Y. (1999). Improved boosting algorithms using confidence-rated predictions. *Machine Learning Journal*, *37*, 297–336.
- Serrano, N., Savakis, A. E., & Luo, J. B. (2004). Improved scene classification using efficient low-level features and semantic cues. *Pattern Recognition*, *37*, 1773–1784.
- Shakhnarovich, G., Darrell, T., & Indyk, P. (2006). *Nearest-neighbors methods in learning and vision*. Cambridge: MIT Press.
- Sivic, J., & Zisserman, A. (2003). Video Google: a text retrieval approach to object matching in videos. In *IEEE international conference on computer vision (ICCV)* (Vol. 2, pp. 1470–1477).
- Swain, M. J., & Ballard, D. H. (1991). Color indexing. *International Journal of Computer Vision*, *7*, 11–32.
- Torralba, A., Murphy, K., Freeman, W., & Rubin, M. (2003). Context-based vision system for place and object recognition. In *IEEE international conference on computer vision (ICCV)* (pp. 273–280).
- Vedaldi, A., & Fulkerson, B. (2008). VLFeat: an open and portable library of computer vision algorithms. http://www.vlfeat.org.
- Vogel, J., & Schiele, B. (2007). Semantic modeling of natural scenes for content-based image retrieval. *International Journal of Computer Vision*, *72*(2), 133–157.
- Xiao, J., Hays, J., Ehinger, K. A., Oliva, A., & Torralba, A. (2010). SUN database: large-scale scene recognition from abbey to zoo. In *IEEE conference on computer vision and pattern recognition (CVPR)*, June 2010 (pp. 3485–3492).
- Yu, K., Ji, L., & Zhang, X. (2002). Kernel nearest-neighbor algorithm. *Neural Processing Letters*, *15*(2), 147–156.
- Yuan, M., & Wegkamp, M. (2010). Classification methods with reject option based on convex risk minimization. *Journal of Machine Learning Research*, *11*, 111–130.
- Zhang, H., Berg, A. C., Maire, M., & Malik, J. (2006). SVM-KNN: discriminative nearest neighbor classification for visual category recognition. In *IEEE computer society conference on computer vision and pattern recognition (CVPR)* (pp. 2126–2136).
- Zhang, M. L., & Zhou, Z. H. (2007). ML-KNN: a lazy learning approach to multi-label learning. *Pattern Recognition*, *40*(7), 2038–2048.
- Zhu, J., Rosset, S., Zou, H., & Hastie, T. (2009). Multi-class AdaBoost. *Statistics and Its Interface*, *2*, 349–360.
- Zuo, W., Zhang, D., & Wang, K. (2008). On kernel difference-weighted *k*-nearest neighbor classification. *Pattern Analysis & Applications*, *11*(3–4), 247–257.