
A Robust Boosting Algorithm

  • Richard Nock
  • Patrice Lefaucheur
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2430)

Abstract

We describe a new Boosting algorithm which combines the base hypotheses with symmetric functions. Among its properties of practical relevance, the algorithm has significant resistance against noise, and is efficient even in an agnostic learning setting. This last property is ruled out for voting-based Boosting algorithms such as AdaBoost. Experiments carried out on thirty domains, most of which are readily available, support the reliability of the classifiers built.
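The abstract does not spell out how the symmetric-function combination works, so the following is only a minimal illustrative sketch under assumptions, not the authors' algorithm. It shows the general idea of a combiner whose output depends only on how many base hypotheses fire, rather than on a weighted vote; the function and variable names (symmetric_combination, table) are hypothetical.

```python
import numpy as np

def symmetric_combination(base_preds, table):
    """Combine {0,1} base-hypothesis outputs with a symmetric function.

    A symmetric function depends only on HOW MANY base hypotheses output 1,
    not on which ones, so it can be described by a lookup table indexed by
    that count (0..T for T base hypotheses).

    base_preds : (n_samples, T) array of 0/1 base-hypothesis outputs
    table      : length T+1 array of +1/-1 labels; table[k] is the prediction
                 when exactly k base hypotheses output 1
    """
    counts = base_preds.sum(axis=1)       # number of firing hypotheses per example
    return np.asarray(table)[counts]      # symmetric: depends on the count only

# Hypothetical toy usage: T = 3 base hypotheses evaluated on 4 examples.
preds = np.array([[1, 0, 1],
                  [0, 0, 0],
                  [1, 1, 1],
                  [0, 1, 0]])
# A non-monotone symmetric rule, something a plain majority vote cannot express.
table = np.array([-1, +1, -1, +1])
print(symmetric_combination(preds, table))   # -> [-1 -1  1  1]
```

A plain (unweighted) majority vote is one particular monotone symmetric function of the base outputs, so a freely chosen lookup table of this kind is a strictly richer combination rule; this is the sense in which a symmetric-function combiner differs from the weighted votes used by AdaBoost-style algorithms.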

Keywords

Symmetric Function · Concept Representation · Target Concept · Learning Sample · Linear Separator
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

References

  1. [BK99] E. Bauer and R. Kohavi. An empirical comparison of voting classification algorithms: Bagging, boosting, and variants. Machine Learning Journal, 36:105–139, 1999.
  2. [BKM98] C. L. Blake, E. Keogh, and C. J. Merz. UCI repository of machine learning databases, 1998. http://www.ics.uci.edu/~mlearn/MLRepository.html.
  3. [Bou92] S. Boucheron. Théorie de l’apprentissage, de l’approche formelle aux enjeux cognitifs. Hermes, 1992.
  4. [Bre96a] L. Breiman. Bagging predictors. Machine Learning Journal, 24:123–140, 1996.
  5. [Bre96b] L. Breiman. Bias, Variance and Arcing classifiers. Technical Report 460, UC Berkeley, 1996.
  6. [FHT00] J. Friedman, T. Hastie, and R. Tibshirani. Additive Logistic Regression: A Statistical View of Boosting. Annals of Statistics, 28:337–374, 2000.
  7. [Fre95] Y. Freund. Boosting a weak learning algorithm by majority. Information and Computation, 121:256–285, 1995.
  8. [FS97] Y. Freund and R. E. Schapire. A Decision-Theoretic generalization of online learning and an application to Boosting. Journal of Computer and System Sciences, 55:119–139, 1997.
  9. [HS92] K-U. Höffgen and H. U. Simon. Robust trainability of single neurons. In Proceedings of the 5th International Conference on Computational Learning Theory, 1992.
  10. [Kea88] M. J. Kearns. Thoughts on Hypothesis Boosting, 1988. ML class project.
  11. [KL88] M. J. Kearns and M. Li. Learning in the presence of malicious errors. In Proceedings of the 20th ACM Symposium on the Theory of Computing, pages 267–280, 1988.
  12. [KM96] M. J. Kearns and Y. Mansour. On the boosting ability of top-down decision tree learning algorithms. In Proceedings of the 28th Annual ACM Symposium on the Theory of Computing, pages 459–468, 1996.
  13. [KSS94] M. J. Kearns, R. E. Schapire, and L. M. Sellie. Toward efficient agnostic learning. Machine Learning Journal, 17:115–141, 1994.
  14. [KV89] M. J. Kearns and L. Valiant. Cryptographic limitations on learning boolean formulae and finite automata. In Proceedings of the 21st ACM Symposium on the Theory of Computing, pages 433–444, 1989.
  15. [KV94] M. J. Kearns and U. V. Vazirani. An Introduction to Computational Learning Theory. M.I.T. Press, 1994.
  16. [NG95] R. Nock and O. Gascuel. On learning decision committees. In Proceedings of the 12th International Conference on Machine Learning, pages 413–420, 1995.
  17. [NJ98] R. Nock and P. Jappy. On the power of decision lists. In Proceedings of the 15th International Conference on Machine Learning, pages 413–420, 1998.
  18. [Qui94] J. R. Quinlan. C4.5: programs for machine learning. Morgan Kaufmann, 1994.
  19. [Qui96] J. R. Quinlan. Bagging, Boosting and C4.5. In Proceedings of AAAI’96, pages 725–730, 1996.
  20. [Sch90] R. E. Schapire. The strength of weak learnability. Machine Learning Journal, pages 197–227, 1990.
  21. [SFBL98] R. E. Schapire, Y. Freund, P. Bartlett, and W. S. Lee. Boosting the Margin: A new explanation for the effectiveness of Voting methods. Annals of Statistics, 26:1651–1686, 1998.
  22. [SS98] R. E. Schapire and Y. Singer. Improved boosting algorithms using confidence-rated predictions. In Proceedings of the 11th International Conference on Computational Learning Theory, pages 80–91, 1998.
  23. [Val84] L. G. Valiant. A theory of the learnable. Communications of the ACM, 27:1134–1142, 1984.
  24. [Vap98] V. Vapnik. Statistical Learning Theory. John Wiley, 1998.

Copyright information

© Springer-Verlag Berlin Heidelberg 2002

Authors and Affiliations

  • Richard Nock (1)
  • Patrice Lefaucheur (1)
  1. Grimaag, Dépt Scientifique Interfacultaire, Université des Antilles-Guyane, Schoelcher, France
