
Novel Approach to Gentle AdaBoost Algorithm with Linear Weak Classifiers

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12033)

Abstract

This paper addresses the problem of calculating the value of the scoring function for weak classifiers operating in a sequential structure. An example of such a structure is the Gentle AdaBoost algorithm, a modification of which we propose in this work. In the proposed approach, the distance of an object from the decision boundary is first scaled within the decision regions defined by the weak classifier and then transformed by a log-normal function. The described algorithm was tested on six publicly available data sets and compared with the Gentle AdaBoost algorithm.
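Since the abstract only sketches the method, the following Python snippet is a minimal illustrative sketch rather than the authors' implementation. It computes the signed distance of each object from a linear weak classifier's decision boundary, scales it within each of the two decision regions (here, by the maximum distance observed on that side, which is an assumption), and transforms the scaled distance with the log-normal density to obtain the weak classifier's score. All function names and the parameters mu and sigma are hypothetical.

```python
import numpy as np

def signed_distance(w, b, X):
    # Signed Euclidean distance of each row of X from the hyperplane w.x + b = 0.
    return (X @ w + b) / np.linalg.norm(w)

def lognormal_transform(d, mu=0.0, sigma=1.0, eps=1e-12):
    # Log-normal density applied to a positive (scaled) distance.
    d = np.maximum(d, eps)
    return (np.exp(-(np.log(d) - mu) ** 2 / (2.0 * sigma ** 2))
            / (d * sigma * np.sqrt(2.0 * np.pi)))

def weak_score(w, b, X, mu=0.0, sigma=1.0):
    # Score of a linear weak classifier: the distance is scaled per decision
    # region, then passed through the log-normal transform; the sign of the
    # distance carries the predicted class.
    d = signed_distance(w, b, X)
    scaled = np.abs(d).copy()
    for region in (d >= 0, d < 0):  # the two decision regions of the classifier
        if region.any():
            m = scaled[region].max()
            if m > 0:
                scaled[region] /= m  # assumption: scale to (0, 1] within the region
    return np.sign(d) * lognormal_transform(scaled, mu, sigma)

# Toy usage on random data with a hand-picked hyperplane.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 2))
w, b = np.array([1.0, -0.5]), 0.2
print(weak_score(w, b, X))
```

In a Gentle AdaBoost-style ensemble, scores of this form would be accumulated over the sequence of weak classifiers and the final decision taken from the sign of the sum; the per-region scaling and log-normal parameters above stand in for whatever concrete choices the paper makes.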

Keywords

Gentle Boost algorithm · Distance to the decision boundary · Score function

Notes

Acknowledgment

This work was supported by the National Science Centre, Poland under the grant no. 2017/25/B/ST6/01750.


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

Faculty of Electronics, Wroclaw University of Science and Technology, Wroclaw, Poland
