Diversity Analysis on Boosting Nominal Concepts

  • Nida Meddouri
  • Héla Khoufi
  • Mondher Sadok Maddouri
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7301)


In this paper, we investigate how the diversity of nominal classifier ensembles affects AdaBoost performance [13]. Using 5 real data sets from the UCI Machine Learning Repository and 3 different diversity measures, we show that the \(\mathcal{Q}\) Statistic measure is the most strongly correlated with AdaBoost performance on 2-class problems. The experimental results suggest that the performance of AdaBoost depends on the diversity of the nominal classifiers, which can be used as a stopping criterion in ensemble learning.
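As a point of reference for the measure named in the abstract, the pairwise \(\mathcal{Q}\) Statistic (Yule's Q) between two classifiers is computed from the counts of instances both classify correctly (\(N^{11}\)), both misclassify (\(N^{00}\)), and only one classifies correctly (\(N^{10}\), \(N^{01}\)): \(Q = (N^{11}N^{00} - N^{01}N^{10}) / (N^{11}N^{00} + N^{01}N^{10})\). The sketch below is an illustrative implementation of this standard formula, not code from the paper; the function name and input convention (boolean correctness vectors) are our own:

```python
def q_statistic(correct_i, correct_k):
    """Yule's Q statistic between two classifiers.

    correct_i, correct_k: boolean sequences marking, per instance,
    whether each classifier predicted it correctly.
    Returns a value in [-1, 1]: positive when the classifiers tend to
    err on the same instances, negative when their errors are diverse.
    """
    n11 = n00 = n10 = n01 = 0
    for ci, ck in zip(correct_i, correct_k):
        if ci and ck:
            n11 += 1        # both correct
        elif ci:
            n10 += 1        # only classifier i correct
        elif ck:
            n01 += 1        # only classifier k correct
        else:
            n00 += 1        # both wrong
    denom = n11 * n00 + n01 * n10
    return (n11 * n00 - n01 * n10) / denom if denom else 0.0
```

For an ensemble, the measure is usually averaged over all classifier pairs; lower (more negative) Q indicates higher diversity.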




  1. Aksela, M., Laaksonen, J.: Using diversity of errors for selecting members of a committee classifier. Pattern Recognition 39(4), 608–623 (2006)
  2. Asuncion, A., Newman, D.: UCI Machine Learning Repository (2007)
  3. Brown, G., Wyatt, J., Harris, R., Yao, X.: Diversity creation methods: A survey and categorisation. Journal of Information Fusion 6, 5–20 (2005)
  4. Brown, G., Kuncheva, L.I.: “Good” and “Bad” Diversity in Majority Vote Ensembles. In: El Gayar, N., Kittler, J., Roli, F. (eds.) MCS 2010. LNCS, vol. 5997, pp. 124–133. Springer, Heidelberg (2010)
  5. Freund, Y.: Boosting a weak learning algorithm by majority. Information and Computation 121, 256–285 (1995)
  6. Freund, Y., Schapire, R.E.: Experiments with a new boosting algorithm. In: 13th International Conference on Machine Learning, Bari, Italy (1996)
  7. Freund, Y., Schapire, R.E.: A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences 55(1), 119–139 (1997)
  8. Gacquer, D., Delcroix, V., Delmotte, F., Piechowiak, S.: On the Effectiveness of Diversity When Training Multiple Classifier Systems. In: Sossai, C., Chemello, G. (eds.) ECSQARU 2009. LNCS, vol. 5590, pp. 493–504. Springer, Heidelberg (2009)
  9. Ko, A.H.R., Sabourin, R., Soares de Oliveira, L.E., de Souza Britto, A.: The implication of data diversity for a classifier-free ensemble selection in random subspaces. In: International Conference on Pattern Recognition, pp. 1–5 (2008)
  10. Kohavi, R.: A Study of Cross-Validation and Bootstrap for Accuracy Estimation and Model Selection. In: Proceedings of the International Joint Conference on Artificial Intelligence, pp. 1137–1143 (1995)
  11. Kuncheva, L.I., Skurichina, M., Duin, R.P.W.: An experimental study on diversity for bagging and boosting with linear classifiers. Information Fusion 3(4), 245–258 (2002)
  12. Kuncheva, L.I., Rodriguez, J.J.: Classifier Ensembles for fMRI Data Analysis: An Experiment. Magnetic Resonance Imaging 28(4), 583–593 (2010)
  13. Meddouri, N., Maddouri, M.: Adaptive Learning of Nominal Concepts for Supervised Classification. In: Setchi, R., Jordanov, I., Howlett, R.J., Jain, L.C. (eds.) KES 2010. LNCS, vol. 6276, pp. 121–130. Springer, Heidelberg (2010)
  14. Shipp, C.A., Kuncheva, L.I.: An investigation into how AdaBoost affects classifier diversity. In: Proc. of Information Processing and Management of Uncertainty in Knowledge-Based Systems, pp. 203–208 (2002)

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Nida Meddouri (1)
  • Héla Khoufi (1)
  • Mondher Sadok Maddouri (2)
  1. Research Unit on Programming, Algorithmics and Heuristics (URPAH), Faculty of Science of Tunis (FST), Tunis El Manar University, Tunisia
  2. College of Community, Hinakiyah, Taibah University, Medinah Monawara, Kingdom of Saudi Arabia