Evaluation of Particle Swarm Optimization Effectiveness in Classification

  • I. De Falco
  • A. Della Cioppa
  • E. Tarantino
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3849)

Abstract

Particle Swarm Optimization (PSO) is a heuristic optimization technique related to Evolutionary Algorithms and strongly based on the concept of a swarm. In this paper it is applied to the problem of classifying instances in multiclass databases. Only a few papers in the literature test PSO on this problem, and none provides a thorough comparison against a wide set of techniques typically used in the field. Therefore, in this paper the performance of PSO on nine typical test databases is compared against that of nine widely used classification techniques. PSO is used to find the optimal positions of the class centroids in the attribute space of a database, based on the examples contained in the training set; the performance of a run is then measured as the percentage of testing-set instances that are incorrectly classified by the best individual achieved in the run. Results show the effectiveness of PSO, which turns out to be the best technique on three of the nine problems considered.
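
To make the scheme concrete, the sketch below is a minimal Python reading of the approach the abstract describes: each particle encodes one centroid per class, flattened into a single real-valued vector; fitness is presumably the misclassification rate on the training set under nearest-centroid assignment; and the best particle found is then scored on the testing set. The inertia-weight update, the parameter values (w, c1, c2, swarm size), and the helper names (pso_centroids, error_rate) are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (assumptions flagged in comments): PSO searches for the
# positions of one centroid per class; an instance is classified by the
# label of its nearest centroid.
import numpy as np

def error_rate(centroids, X, y):
    """Fraction of instances whose nearest centroid carries the wrong label.

    centroids: (n_classes, n_features); X: (n_samples, n_features);
    y: integer labels in 0..n_classes-1.
    """
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return float(np.mean(np.argmin(d, axis=1) != y))

def pso_centroids(X, y, n_classes, swarm=30, iters=200,
                  w=0.729, c1=1.494, c2=1.494, seed=0):
    """Inertia-weight PSO over flattened centroid sets (parameter values are
    common defaults from the PSO literature, not the paper's settings)."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    dim = n_classes * n_features
    lo = np.tile(X.min(axis=0), n_classes)   # per-feature bounds,
    hi = np.tile(X.max(axis=0), n_classes)   # repeated for each centroid
    pos = rng.uniform(lo, hi, (swarm, dim))
    vel = np.zeros_like(pos)
    fit = np.array([error_rate(p.reshape(n_classes, n_features), X, y) for p in pos])
    pbest, pbest_fit = pos.copy(), fit.copy()
    g = int(np.argmin(pbest_fit))
    gbest, gbest_fit = pbest[g].copy(), pbest_fit[g]
    for _ in range(iters):
        r1 = rng.random((swarm, dim))
        r2 = rng.random((swarm, dim))
        # Classic velocity/position update with inertia weight w.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        fit = np.array([error_rate(p.reshape(n_classes, n_features), X, y) for p in pos])
        improved = fit < pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        g = int(np.argmin(pbest_fit))
        if pbest_fit[g] < gbest_fit:
            gbest, gbest_fit = pbest[g].copy(), pbest_fit[g]
    return gbest.reshape(n_classes, n_features), gbest_fit
```

Given held-out data, the abstract's performance measure then corresponds to error_rate(best_centroids, X_test, y_test) for the best individual returned by a run.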

Keywords

Particle Swarm Optimization · Classification

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • I. De Falco¹
  • A. Della Cioppa²
  • E. Tarantino¹

  1. Institute of High Performance Computing and Networking, National Research Council of Italy (ICAR–CNR), Naples, Italy
  2. Natural Computation Lab – DIIIE, University of Salerno, Fisciano (SA), Italy
