Top-k Parametrized Boost
Ensemble methods such as AdaBoost are popular machine learning techniques that build a highly accurate classifier by combining the predictions of several base classifiers. We present a parametrized variant of AdaBoost that we call Top-k Parametrized Boost. We evaluate our method alongside other popular ensemble methods on several real datasets from a classification perspective. Our empirical study shows that our method achieves the minimum average error on these datasets, with statistical significance.
Keywords: Ensemble methods · AdaBoost · Statistical significance
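The abstract describes boosting as combining several base classifiers into one accurate ensemble. The paper's Top-k parametrization is not detailed here, so the sketch below shows only the standard AdaBoost scheme it builds on: decision stumps as weak learners, exponential re-weighting of misclassified points, and a weighted-vote final prediction. The toy 1-D dataset is hypothetical, chosen purely for illustration.

```python
import math

# Hypothetical toy 1-D dataset (not from the paper's experiments):
# three regions of alternating labels, so no single stump fits it.
X = [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9]
y = [+1, +1, +1, -1, -1, -1, +1, +1]

def stump_predict(threshold, polarity, x):
    """Decision stump: predict `polarity` at or above the threshold."""
    return polarity if x >= threshold else -polarity

def best_stump(X, y, w):
    """Return (weighted error, threshold, polarity) of the best stump."""
    best = None
    for t in X:
        for pol in (+1, -1):
            err = sum(wi for xi, yi, wi in zip(X, y, w)
                      if stump_predict(t, pol, xi) != yi)
            if best is None or err < best[0]:
                best = (err, t, pol)
    return best

def adaboost(X, y, rounds=10):
    """Standard AdaBoost: each round fits a stump to the current
    weights, then boosts the weights of misclassified points."""
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []  # list of (alpha, threshold, polarity)
    for _ in range(rounds):
        err, t, pol = best_stump(X, y, w)
        err = max(err, 1e-10)                      # guard against err == 0
        alpha = 0.5 * math.log((1 - err) / err)    # classifier weight
        ensemble.append((alpha, t, pol))
        # Exponential re-weighting, then normalize to a distribution.
        w = [wi * math.exp(-alpha * yi * stump_predict(t, pol, xi))
             for xi, yi, wi in zip(X, y, w)]
        z = sum(w)
        w = [wi / z for wi in w]
    return ensemble

def predict(ensemble, x):
    """Sign of the alpha-weighted vote over all stumps."""
    score = sum(a * stump_predict(t, pol, x) for a, t, pol in ensemble)
    return +1 if score >= 0 else -1

model = adaboost(X, y, rounds=3)
preds = [predict(model, xi) for xi in X]
```

On this toy data three rounds suffice for zero training error: no single stump separates the three label regions, but the weighted vote of three stumps does, which is the effect the ensemble methods compared in the paper exploit.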