
An Ensemble Method Based on AdaBoost and Meta-Learning

Conference paper
Advances in Artificial Intelligence (Canadian AI 2013)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 7884)


Abstract

We propose a new machine learning algorithm: meta-boosting. With the boosting method, a weak learner can be converted into a strong learner by changing the weight distribution of the training examples; boosting is often regarded as reducing both bias and variance, although it mainly reduces variance. Meta-learning, by contrast, improves accuracy by coalescing the results of multiple learners, and acts chiefly as a bias-reduction method. By combining boosting algorithms that use different weak learners under a meta-learning scheme, both bias and variance are reduced. Our experiments demonstrate that the meta-boosting algorithm not only outperforms the best results of its base learners but also surpasses other recent algorithms.
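A minimal sketch of the meta-boosting idea described in the abstract, assuming a stacking-style meta-learner over AdaBoost ensembles built with two different weak learners. The specific weak learners (decision stumps, naive Bayes), the logistic-regression meta-learner, and the dataset below are illustrative scikit-learn choices, not necessarily the paper's exact configuration:

# Hypothetical sketch (not the authors' code): AdaBoost ensembles over two
# different weak learners, combined by a meta-learner via stacking.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Base level: boosted ensembles with different weak learners
# (boosting mainly reduces variance).
base_learners = [
    ("ada_stump", AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),
                                     n_estimators=50, random_state=0)),
    ("ada_nb", AdaBoostClassifier(GaussianNB(),
                                  n_estimators=50, random_state=0)),
]

# Meta level: a learner trained on out-of-fold predictions of the base
# ensembles (meta-learning mainly reduces bias).
meta_boosting = StackingClassifier(estimators=base_learners,
                                   final_estimator=LogisticRegression(),
                                   cv=5)

meta_boosting.fit(X_train, y_train)
print("test accuracy:", meta_boosting.score(X_test, y_test))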






Copyright information

© 2013 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Liu, X., Wang, X., Japkowicz, N., Matwin, S. (2013). An Ensemble Method Based on AdaBoost and Meta-Learning. In: Zaïane, O.R., Zilles, S. (eds) Advances in Artificial Intelligence. Canadian AI 2013. Lecture Notes in Computer Science, vol 7884. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-38457-8_27


  • DOI: https://doi.org/10.1007/978-3-642-38457-8_27

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-38456-1

  • Online ISBN: 978-3-642-38457-8

  • eBook Packages: Computer Science, Computer Science (R0)
