Neural Computing and Applications

Volume 18, Issue 2, pp 105–108

Particle swarm optimization for ensembling generation for evidential k-nearest-neighbour classifier

Original Article


The problem addressed in this paper is ensemble generation for the evidential k-nearest-neighbour (EkNN) classifier. An efficient method based on particle swarm optimization (PSO) is proposed. We improve the performance of the EkNN classifier using a random-subspace-based ensemble method: given a set of random-subspace EkNN classifiers, PSO is used to obtain the best parameters of each EkNN classifier, and the resulting classifiers are then combined by the "vote rule". The performance improvement over state-of-the-art approaches is validated through experiments on several benchmark datasets.


Keywords: Particle swarm optimization, Evidential k-NN classifier, Random subspace
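As a rough illustration of the pipeline the abstract describes, the sketch below implements a Denoeux-style evidential k-NN (each neighbour supports its own class with a distance-decaying mass, and the simple belief assignments are combined by Dempster's rule), wraps it in a random-subspace ensemble, and combines member decisions by majority vote. All names and parameter values here are illustrative, not taken from the paper; in particular, the PSO tuning of each member's EkNN parameters is replaced by a fixed `gamma`.

```python
import numpy as np

rng = np.random.default_rng(0)

def eknn_predict(Xtr, ytr, x, k=5, alpha=0.95, gamma=1.0):
    """Evidential k-NN in the style of Denoeux (1995): each of the k
    nearest neighbours assigns mass alpha*exp(-gamma*d^2) to its own
    class; the simple mass functions are combined (unnormalised
    Dempster's rule) and the class with maximal combined singleton
    mass is returned.  Labels are assumed to be small non-negative ints."""
    d2 = ((Xtr - x) ** 2).sum(axis=1)
    nn = np.argsort(d2)[:k]
    m = alpha * np.exp(-gamma * d2[nn])   # mass each neighbour gives its class
    classes = np.unique(ytr)
    scores = []
    for c in classes:
        same = m[ytr[nn] == c]
        other = m[ytr[nn] != c]
        # combined mass on {c}: support from same-class neighbours,
        # discounted by the evidence pointing at other classes
        scores.append((1 - np.prod(1 - same)) * np.prod(1 - other))
    return classes[int(np.argmax(scores))]

def subspace_ensemble_predict(Xtr, ytr, Xte, n_members=9, sub_frac=0.5, k=5):
    """Random-subspace EkNN ensemble combined by the vote rule.
    In the paper each member's EkNN parameters are tuned by PSO;
    here a fixed gamma stands in for that tuning step."""
    n_feat = Xtr.shape[1]
    votes = []
    for _ in range(n_members):
        # each member sees a random subset of the features
        feats = rng.choice(n_feat, max(1, int(sub_frac * n_feat)), replace=False)
        votes.append([eknn_predict(Xtr[:, feats], ytr, x, k=k)
                      for x in Xte[:, feats]])
    votes = np.asarray(votes)             # shape: (n_members, n_test)
    # "vote rule": majority vote over ensemble members
    return np.array([np.bincount(col).argmax() for col in votes.T])
```

On well-separated data this ensemble recovers the true labels; in the paper's full method, a PSO swarm would instead search the per-member EkNN parameter space for the setting that maximises validation accuracy before the vote is taken.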



Copyright information

© Springer-Verlag London Limited 2007

Authors and Affiliations

DEIS, IEIIT-CNR, Università di Bologna, Bologna, Italy
