A Modular Reduction Method for k-NN Algorithm with Self-recombination Learning
A difficulty faced by existing reduction techniques for the k-NN algorithm is that they require loading the whole training data set into memory. As a result, these approaches often become inefficient when applied to large-scale problems. To overcome this deficiency, we propose a new sample-reduction method for the k-NN algorithm. The basic idea behind the proposed method is a self-recombination learning strategy, originally designed for combining classifiers: it speeds up response time by reducing the number of base classifiers to be checked, and improves generalization performance by rearranging the order of training samples. Experimental results on several benchmark problems indicate that the proposed method is both valid and efficient.
Keywords: Test Accuracy, Near Neighbor, Negative Class, Positive Class, Modular Reduction
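The paper's own modular self-recombination algorithm is not reproduced in this abstract. For readers unfamiliar with sample reduction for k-NN, the sketch below illustrates the general idea with a classic Hart-style condensed nearest neighbor pass (an assumption for illustration only, not the authors' method): it retains only those training samples that the currently kept subset misclassifies, so the reduced set can stand in for the full one at query time.

```python
import numpy as np

def condensed_nn(X, y):
    """Hart-style condensed 1-NN reduction (illustrative sketch only;
    not the paper's modular self-recombination method).

    Keeps a sample only if the subset kept so far misclassifies it,
    repeating until a full pass makes no changes.
    """
    keep = [0]            # seed the kept subset with the first sample
    changed = True
    while changed:        # repeat until the kept subset is consistent
        changed = False
        for i in range(len(X)):
            if i in keep:
                continue
            # classify X[i] by its 1-nearest neighbor in the kept subset
            dists = np.linalg.norm(X[keep] - X[i], axis=1)
            nearest = keep[int(np.argmin(dists))]
            if y[nearest] != y[i]:   # misclassified: must keep it
                keep.append(i)
                changed = True
    return np.array(keep)
```

On two well-separated clusters this typically keeps one prototype per class, which is the memory saving that motivates reduction methods; the paper's contribution is achieving such savings without ever loading the whole training set at once.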