Some Notes on Twenty One (21) Nearest Prototype Classifiers

  • James C. Bezdek
  • Ludmila I. Kuncheva
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1876)


Comparisons made in two studies of 21 methods for finding prototypes upon which to base the nearest prototype classifier are discussed. The criteria used to compare the methods are whether they: (i) select or extract point prototypes; (ii) employ pre- or post-supervision; and (iii) specify the number of prototypes a priori, or obtain this number “automatically”. Numerical experiments with 5 data sets suggest that pre-supervised extraction methods offer a better chance of success to the casual user than post-supervised selection schemes. Our calculations also suggest that methods which find the “best” number of prototypes “automatically” are not superior to user specification of this parameter.


Keywords: Data condensation and editing · Nearest neighbor classifiers · Nearest prototype classifiers · Post-supervision · Pre-supervision
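The nearest prototype (1-NP) classifier discussed in the abstract can be sketched compactly: a test point receives the label of its closest prototype, and a "pre-supervised extraction" method builds those prototypes from labeled training data (here, for illustration only, one class mean per class; the paper's 21 methods are far more varied). A minimal sketch under those assumptions:

```python
# A minimal sketch of a 1-nearest-prototype (1-NP) classifier, assuming
# Euclidean distance. The toy data and the class-means extraction rule are
# illustrative assumptions, not the methods compared in the paper.
import math

def nearest_prototype(x, prototypes, labels):
    """Assign x the label of the closest prototype (Euclidean distance)."""
    best_i = min(range(len(prototypes)),
                 key=lambda i: math.dist(x, prototypes[i]))
    return labels[best_i]

def class_means(X, y):
    """Pre-supervised extraction sketch: one prototype per class,
    the mean of that class's training points."""
    protos, labels = [], []
    for c in sorted(set(y)):
        pts = [x for x, yi in zip(X, y) if yi == c]
        mean = [sum(col) / len(pts) for col in zip(*pts)]
        protos.append(mean)
        labels.append(c)
    return protos, labels

# Toy two-class data (hypothetical, for demonstration only).
X = [(0.0, 0.0), (1.0, 0.0), (5.0, 5.0), (6.0, 5.0)]
y = [0, 0, 1, 1]
protos, labels = class_means(X, y)
print(nearest_prototype((0.6, 0.2), protos, labels))  # prints 0
```

A post-supervised selection method would instead pick prototypes from unlabeled structure (e.g., clustering) and relabel them afterwards; only the `class_means` step above would change, not the 1-NP decision rule.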



Copyright information

© Springer-Verlag Berlin Heidelberg 2000

Authors and Affiliations

  1. James C. Bezdek, Computer Science Department, University of West Florida, Pensacola, USA
  2. Ludmila I. Kuncheva, School of Informatics, University of Wales, Bangor, UK
