Sequential Reduction Algorithm for Nearest Neighbor Rule

  • Marcin Raniszewski
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6375)

Abstract

Effective training set reduction is one of the main problems in constructing fast 1-NN classifiers. A reduced set should be significantly smaller than the complete training set while yielding a similar fraction of correct classifications. This paper describes a sequential reduction algorithm for the nearest neighbor rule. The proposed method is based on the heuristic idea of sequentially adding and eliminating samples. The performance of the described algorithm is evaluated and compared with three other well-known heuristic reduction algorithms on four real datasets extracted from images.
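The abstract only names the general idea of sequentially adding and eliminating samples; the paper's exact procedure is not given here. As a hedged illustration of that general family of methods, the sketch below combines a Hart-style condensation pass (sequentially add samples the current reduced set misclassifies) with a simple elimination pass (drop a prototype if the remaining set still classifies the whole training set correctly). All function names and the tiny dataset are illustrative, not from the paper.

```python
import math

def nn_classify(prototypes, x):
    # 1-NN rule: return the label of the closest prototype.
    # Each prototype is a (point, label) pair; points are coordinate tuples.
    return min(prototypes, key=lambda p: math.dist(p[0], x))[1]

def sequential_reduce(train):
    """Illustrative sequential add/eliminate reduction (not the paper's
    exact algorithm). Addition phase: repeatedly add any training sample
    the current reduced set misclassifies. Elimination phase: try to drop
    each prototype; keep the drop only if the reduced set still
    classifies every training sample correctly (consistency)."""
    reduced = [train[0]]
    changed = True
    while changed:                          # addition phase
        changed = False
        for x, y in train:
            if nn_classify(reduced, x) != y:
                reduced.append((x, y))
                changed = True
    for p in list(reduced):                 # elimination phase
        if len(reduced) > 1:
            trial = [q for q in reduced if q is not p]
            if all(nn_classify(trial, x) == y for x, y in train):
                reduced = trial
    return reduced

# Tiny two-class example: two clusters, each with a redundant sample.
train = [((0.0, 0.0), 'a'), ((0.1, 0.0), 'a'),
         ((1.0, 1.0), 'b'), ((0.9, 1.0), 'b')]
reduced = sequential_reduce(train)
```

On this toy data the result is a consistent reduced set: every training sample is still classified correctly by 1-NN over the (smaller) reduced set, which is exactly the quality criterion the abstract states.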



Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Marcin Raniszewski
  1. Faculty of Physics and Applied Informatics, University of Łódź, Łódź, Poland
