Robust Alternating AdaBoost

  • Héctor Allende-Cid
  • Rodrigo Salas
  • Héctor Allende
  • Ricardo Ñanculef
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4756)

Abstract

Ensemble methods are general techniques to improve the accuracy of any given learning algorithm. Boosting is a learning algorithm that builds classifier ensembles incrementally. In this work we propose an improvement of the classical and inverse AdaBoost algorithms that deals with the presence of outliers in the data. The proposed Robust Alternating AdaBoost (RADA) algorithm alternates between classic and inverse AdaBoost to obtain a more stable algorithm. RADA bounds the influence of outliers on the empirical distribution, detects and diminishes the empirical probability of “bad” samples, and achieves more accurate classification under contaminated data.

We report performance results on synthetic and real datasets, the latter obtained from a benchmark repository.
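The abstract describes RADA only at a high level, so the following Python sketch is a rough illustration of the stated ideas rather than the published algorithm: it alternates the classic AdaBoost reweighting step (up-weighting misclassified samples) with an inverse step (down-weighting them, in the spirit of Kuncheva and Whitaker's inverse boosting) and caps sample weights to bound the influence of outliers on the empirical distribution. The even/odd switching schedule, the cap w_max, and the choice of decision stumps are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch only: the exact RADA update rule, switching criterion,
# and weight bound are assumptions, not the published algorithm.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def rada_sketch(X, y, n_rounds=20, w_max=10.0):
    """Alternate classic and inverse AdaBoost-style weight updates,
    capping sample weights so outliers cannot dominate (assumed bound).
    Labels y are assumed to be in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)          # empirical distribution over samples
    learners, alphas = [], []
    for t in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = np.sum(w[pred != y])   # weighted training error
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        # Alternation: even rounds behave like classic AdaBoost (up-weight
        # mistakes); odd rounds like inverse AdaBoost (down-weight mistakes).
        sign = 1.0 if t % 2 == 0 else -1.0
        w *= np.exp(sign * alpha * (pred != y))
        w = np.minimum(w, w_max / n)  # bound each sample's influence (assumed)
        w /= w.sum()                  # renormalise to a distribution
        learners.append(stump)
        alphas.append(alpha)
    return learners, alphas

def predict(learners, alphas, X):
    """Weighted majority vote of the ensemble."""
    votes = sum(a * l.predict(X) for l, a in zip(learners, alphas))
    return np.sign(votes)
```

Capping and then renormalising keeps the sample weights a proper distribution while preventing any single contaminated point from dominating later rounds, which is the stabilising effect the abstract attributes to RADA.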

Keywords

Machine ensembles · AdaBoost · Robust Learning Algorithms

Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Héctor Allende-Cid (1)
  • Rodrigo Salas (2)
  • Héctor Allende (1)
  • Ricardo Ñanculef (1)

  1. Universidad Técnica Federico Santa María, Dept. de Informática, Casilla 110-V, Valparaíso, Chile
  2. Universidad de Valparaíso, Departamento de Ingeniería Biomédica, Valparaíso, Chile
