The AdaBoost Algorithm with the Imprecision Determine the Weights of the Observations

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8398)


This paper presents a version of the AdaBoost algorithm that allows for imprecision in the calculation of the observation weights. In our approach, each computed weight may vary within a certain range of values; this range represents the uncertainty in the calculation of the weight of each element of the learning set. We use boosting by reweighting, in which each weak classifier is based on the recursive partitioning method. Experiments were carried out on eight data sets from the UCI repository and on two randomly generated data sets, and the results are compared with the original AdaBoost algorithm using appropriate statistical tests.
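The idea in the abstract can be illustrated with a minimal sketch. The code below is not the authors' implementation: it uses a decision stump rather than recursive partitioning as the weak learner, and it models the weight-imprecision range with a hypothetical multiplicative interval `[w·(1−ε), w·(1+ε)]` from which each perturbed weight is drawn; the function and parameter names (`adaboost_with_weight_range`, `epsilon`) are illustrative, not from the paper.

```python
import numpy as np

def adaboost_with_weight_range(X, y, n_rounds=10, epsilon=0.1, rng_seed=0):
    """Sketch of AdaBoost by reweighting with a hypothetical imprecision
    range on the observation weights.  After each standard exponential
    weight update, every weight is perturbed within [w*(1-eps), w*(1+eps)]
    (a stand-in for the paper's uncertainty range) and renormalised.
    Labels y must be in {-1, +1}.  Weak learner: best decision stump."""
    rng = np.random.default_rng(rng_seed)
    n = len(y)
    w = np.full(n, 1.0 / n)          # uniform initial weights
    stumps, alphas = [], []
    for _ in range(n_rounds):
        stump = _best_stump(X, y, w)
        pred = _stump_predict(stump, X)
        err = np.clip(np.sum(w[pred != y]), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        w = w * np.exp(-alpha * y * pred)          # standard AdaBoost update
        # Hypothetical imprecision step: each weight may land anywhere
        # inside its interval [w*(1-eps), w*(1+eps)].
        w = w * rng.uniform(1 - epsilon, 1 + epsilon, size=n)
        w /= w.sum()
        stumps.append(stump)
        alphas.append(alpha)
    return stumps, alphas

def _best_stump(X, y, w):
    """Exhaustively pick the (feature, threshold, sign) minimising weighted error."""
    best, best_err = None, np.inf
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = sign * np.where(X[:, j] <= thr, 1, -1)
                err = np.sum(w[pred != y])
                if err < best_err:
                    best_err, best = err, (j, thr, sign)
    return best

def _stump_predict(stump, X):
    j, thr, sign = stump
    return sign * np.where(X[:, j] <= thr, 1, -1)

def predict(stumps, alphas, X):
    """Weighted-majority vote of the weak classifiers."""
    agg = sum(a * _stump_predict(s, X) for s, a in zip(stumps, alphas))
    return np.sign(agg)
```

Setting `epsilon=0` recovers plain AdaBoost by reweighting, so the same code can serve as the baseline for the kind of comparison the abstract describes.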


Keywords: AdaBoost algorithm · weight of the observation · machine learning




References

  1.
    Kearns, M., Valiant, L.: Cryptographic limitations on learning boolean formulae and finite automata. J. Assoc. Comput. Mach. 41(1), 67–95 (1994)
  2.
    Shen, C., Li, H.: On the Dual Formulation of Boosting Algorithms. IEEE Transactions on Pattern Analysis and Machine Intelligence 32(12), 2216–2231 (2010)
  3.
    Oza, N.C.: Boosting with Averaged Weight Vectors. In: Windeatt, T., Roli, F. (eds.) MCS 2003. LNCS, vol. 2709, pp. 15–24. Springer, Heidelberg (2003)
  4.
    Freund, Y., Schapire, R.: Experiments with a new boosting algorithm. In: Proceedings of the Thirteenth International Conference on Machine Learning, Bari, Italy, pp. 148–156 (1996)
  5.
    Wozniak, M.: Proposition of Boosting Algorithm for Probabilistic Decision Support System. In: Bubak, M., van Albada, G.D., Sloot, P.M.A., Dongarra, J. (eds.) ICCS 2004. LNCS, vol. 3036, pp. 675–678. Springer, Heidelberg (2004)
  6.
    Wozniak, M.: Boosted decision trees for diagnosis type of hypertension. In: Oliveira, J.L., Maojo, V., Martín-Sánchez, F., Pereira, A.S. (eds.) ISBMDA 2005. LNCS (LNBI), vol. 3745, pp. 223–230. Springer, Heidelberg (2005)
  7.
    Kajdanowicz, T., Kazienko, P.: Boosting-based Multi-label Classification. Journal of Universal Computer Science 19(4), 502–520 (2013)
  8.
    Freund, Y., Schapire, R.: A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences 55(1), 119–139 (1997)
  9.
    Schapire, R.E.: The Strength of Weak Learnability. Machine Learning 5, 197–227 (1990)
  10.
    Freund, Y.: Boosting a Weak Learning Algorithm by Majority. Information and Computation 121, 256–285 (1995)
  11.
    Friedman, J., Hastie, T., Tibshirani, R.: Additive Logistic Regression: A Statistical View of Boosting. The Annals of Statistics 28, 337–374 (2000)
  12.
    Seiffert, C., Khoshgoftaar, T.M., Hulse, J.V., Napolitano, A.: Resampling or Reweighting: A Comparison of Boosting Implementations. In: 2008 20th IEEE International Conference on Tools with Artificial Intelligence, pp. 445–451 (2008)
  13.
    Dmitrienko, A., Chuang-Stein, C.: Pharmaceutical Statistics Using SAS: A Practical Guide. SAS Press (2007)
  14.
    Murphy, P.M., Aha, D.W.: UCI repository for machine learning databases. Technical Report, Department of Information and Computer Science, University of California, Irvine (1994)
  15.
    Duin, R.P.W., Juszczak, P., Paclik, P., Pekalska, E., de Ridder, D., Tax, D., Verzakov, S.: PR-Tools4.1, A Matlab Toolbox for Pattern Recognition. Delft University of Technology (2007)
  16.
    Highleyman, W.H.: The design and analysis of pattern recognition experiments. Bell System Technical Journal 41, 723–744 (1962)
  17.
    Derrac, J., Garcia, S., Molina, D., Herrera, F.: A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm and Evolutionary Computation 1(1), 3–18 (2011)
  18.
    Trawinski, B., Smetek, M., Telec, Z., Lasota, T.: Nonparametric statistical analysis for multiple comparison of machine learning regression algorithms. International Journal of Applied Mathematics and Computer Science 22(4), 867–881 (2012)

Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  1. Department of Systems and Computer Networks, Wroclaw University of Technology, Wroclaw, Poland