Evolutionary Design of Nearest Prototype Classifiers

Journal of Heuristics

Abstract

In pattern classification, much work has been devoted to designing good classifiers from different perspectives, and these approaches achieve very good results in many domains. In general, however, they depend strongly on a few crucial design parameters. These parameters must be found by trial and error or by automatic methods, such as heuristic search and genetic algorithms, that strongly decrease the performance of the method. In nearest prototype approaches, for instance, the main parameters are the number of prototypes, the initial prototype set, and a smoothing parameter. In this work, an evolutionary approach based on the Nearest Prototype Classifier (ENPC) is introduced in which no such parameters are involved, thus overcoming the tuning and search problems of classical methods. The algorithm evolves a set of prototypes, each of which can execute several operators to increase its quality in a local sense, so that high classification accuracy emerges for the classifier as a whole. This new approach has been tested on four classical domains: artificial distributions such as spiral and uniformly distributed data sets, the Iris data set, and an application domain concerning diabetes. In all cases the experiments show successful results, not only in classification accuracy, but also in the number and distribution of the prototypes obtained.
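The base rule underlying the classifier described above is simple: a sample receives the label of its nearest prototype. The sketch below illustrates only this classification rule, not the evolutionary operators of ENPC itself; the prototype positions and toy data are invented for illustration.

```python
# Illustrative nearest-prototype classification rule (not the ENPC algorithm).
# A prototype is a (position, label) pair; a sample takes the label of the
# prototype closest to it in Euclidean distance.
import math


def nearest_prototype(prototypes, x):
    """Return the label of the prototype nearest to sample x."""
    best_label, best_dist = None, math.inf
    for pos, label in prototypes:
        d = math.dist(pos, x)  # Euclidean distance (Python 3.8+)
        if d < best_dist:
            best_dist, best_label = d, label
    return best_label


def accuracy(prototypes, samples):
    """Fraction of (x, y) pairs whose nearest prototype carries label y."""
    hits = sum(nearest_prototype(prototypes, x) == y for x, y in samples)
    return hits / len(samples)


# Two prototypes separating a toy two-class 2-D problem.
protos = [((0.0, 0.0), "a"), ((1.0, 1.0), "b")]
data = [((0.1, 0.2), "a"), ((0.9, 0.8), "b"), ((0.2, 0.1), "a")]
print(nearest_prototype(protos, (0.1, 0.2)))  # a
print(accuracy(protos, data))  # 1.0
```

ENPC's contribution is that the number and placement of such prototypes are not fixed in advance but emerge from the evolutionary process; the rule above is only the final decision step.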




Cite this article

Fernández, F., Isasi, P. Evolutionary Design of Nearest Prototype Classifiers. Journal of Heuristics 10, 431–454 (2004). https://doi.org/10.1023/B:HEUR.0000034715.70386.5b
