
A Boosting-Based Decision Fusion Method for Learning from Large, Imbalanced Face Data Set

  • Xiaohui Yuan
  • Mohamed Abouelenien
  • Mohamed Elhoseny
Chapter
Part of the Studies in Big Data book series (SBD, volume 33)

Abstract

The acquisition of face images is usually limited by policy and economic considerations, and hence the number of training examples per subject varies greatly. The problem of face recognition with imbalanced training data has drawn the attention of researchers: it is desirable to understand in what circumstances an imbalanced data set affects learning outcomes, and robust methods are needed that maximize the information embedded in the training data without relying heavily on user-introduced bias. In this article, we study the effects of an uneven number of training images on automatic face recognition and propose a boosting-based decision fusion method that suppresses face recognition errors by training an ensemble on subsets of examples. By recovering the balance among classes within these subsets, our proposed multiBoost.imb method circumvents class skewness and demonstrates improved performance. Experiments are conducted with four popular face data sets and two synthetic data sets. Our method exhibits superior performance in highly imbalanced scenarios compared to AdaBoost.M1, SAMME, RUSBoost, SMOTEBoost, SAMME with SMOTE sampling, and SAMME with random undersampling. Another advantage of using subsets of examples is a significant gain in training efficiency.
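To make the balanced-subset idea concrete, the following is a minimal sketch of boosting-based decision fusion over class-balanced subsets: each round undersamples every class to the minority-class size, trains a weak learner on that subset, and fuses predictions by a weighted vote. This is an illustration only, not the authors' exact multiBoost.imb algorithm; the helper names (`balanced_subset`, `fit_balanced_ensemble`, `predict_fused`) and the SAMME-style member weights are assumptions for the sake of the example.

```python
# Sketch of ensemble learning on class-balanced subsets with decision fusion.
# Assumes features X (e.g., eigenface projections) and integer labels y.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def balanced_subset(X, y, rng):
    """Randomly undersample every class to the size of the smallest class."""
    classes, counts = np.unique(y, return_counts=True)
    n = counts.min()
    idx = np.concatenate([rng.choice(np.where(y == c)[0], size=n, replace=False)
                          for c in classes])
    return X[idx], y[idx]

def fit_balanced_ensemble(X, y, rounds=10, seed=0):
    """Train one weak learner per round on a freshly balanced subset."""
    rng = np.random.default_rng(seed)
    K = len(np.unique(y))
    members, alphas = [], []
    for _ in range(rounds):
        Xs, ys = balanced_subset(X, y, rng)
        clf = DecisionTreeClassifier(max_depth=3).fit(Xs, ys)
        # Weight each member by its error on the full (imbalanced) set,
        # using the SAMME multiclass weighting as an assumed choice.
        err = np.clip(np.mean(clf.predict(X) != y), 1e-10, 1 - 1e-10)
        alphas.append(np.log((1 - err) / err) + np.log(K - 1))
        members.append(clf)
    return members, np.array(alphas)

def predict_fused(members, alphas, X, classes):
    """Decision fusion: weighted vote across ensemble members."""
    votes = np.zeros((X.shape[0], len(classes)))
    for clf, a in zip(members, alphas):
        pred = clf.predict(X)
        for j, c in enumerate(classes):
            votes[:, j] += a * (pred == c)
    return classes[np.argmax(votes, axis=1)]

# Usage: members, alphas = fit_balanced_ensemble(X_train, y_train, rounds=20)
#        y_pred = predict_fused(members, alphas, X_test, np.unique(y_train))
```

Note that this sketch draws each subset uniformly at random, whereas a full boosting scheme would couple subset selection to the evolving example weights; the simplification is deliberate to keep the balanced-sampling and fusion steps visible.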

References

  1. Liu, Y.-H., Chen, Y.-T.: Face recognition using total margin-based adaptive fuzzy support vector machines. IEEE Trans. Neural Netw. 18(1), 178–192 (2007)
  2. He, H., Garcia, E.A.: Learning from imbalanced data. IEEE Trans. Knowl. Data Eng. 21(9), 1263–1284 (2009)
  3. Freund, Y., Schapire, R.E.: A short introduction to boosting. J. Jpn. Soc. Artif. Intell. 14(5), 771–780 (1999)
  4. Zhang, Y., Zhou, Z.-H.: Cost-sensitive face recognition. IEEE Trans. Pattern Anal. Mach. Intell. 32(10), 1758–1769 (2010)
  5. Lu, J., Tan, Y.-P.: A doubly weighted approach for appearance-based subspace learning methods. IEEE Trans. Inf. Forensic Secur. 5(1), 71–78 (2010)
  6. Liu, Y.-H., Chen, Y.-T., Lu, S.-S.: Face detection using kernel PCA and imbalanced SVM. In: Lecture Notes in Computer Science, International Conference on Natural Computation, vol. 4221, pp. 351–360 (2006)
  7. Allwein, E.L., Schapire, R.E., Singer, Y.: Reducing multiclass to binary: a unifying approach for margin classifiers. J. Mach. Learn. Res. 1, 113–141 (2000)
  8. Freund, Y., Schapire, R.E.: A decision-theoretic generalization of on-line learning and an application to boosting. J. Comput. Syst. Sci. 55(1), 119–139 (1997)
  9. Schapire, R.E., Singer, Y.: Improved boosting algorithms using confidence-rated predictions. Mach. Learn. 37(3), 297–336 (1999)
  10. Dietterich, T.G., Bakiri, G.: Solving multiclass learning problems via error-correcting output codes. J. Artif. Intell. Res. 2, 263–286 (1995)
  11. Schapire, R.E.: Using output codes to boost multiclass learning problems. In: Proceedings of the 14th International Conference on Machine Learning, pp. 313–321 (1997)
  12. Guruswami, V., Sahai, A.: Multiclass learning, boosting, and error-correcting codes. In: Proceedings of the 12th Annual Conference on Computational Learning Theory, pp. 145–155 (1999)
  13. Zhu, J., Zou, H., Rosset, S., Hastie, T.: Multi-class AdaBoost. Stat. Interface 2, 349–360 (2009)
  14. Mukherjee, I., Schapire, R.E.: A theory of multiclass boosting. In: Proceedings of the 24th Annual Conference on Neural Information Processing Systems (2010)
  15. Karakoulas, G., Shawe-Taylor, J.: Optimizing classifiers for imbalanced training sets. In: Proceedings of the 1998 Conference on Advances in Neural Information Processing Systems II, pp. 253–259. MIT Press, Cambridge, MA, USA (1999)
  16. Sun, Y., Kamel, M.S., Wong, A.K.C., Wang, Y.: Cost-sensitive boosting for classification of imbalanced data. Pattern Recogn. 40(12), 3358–3378 (2007)
  17. Wang, B.X., Japkowicz, N.: Boosting support vector machines for imbalanced data sets. In: Foundations of Intelligent Systems, pp. 38–47 (2008)
  18. Fan, W., Stolfo, S.J., Zhang, J., Chan, P.K.: AdaCost: misclassification cost-sensitive boosting. In: Proceedings of the 16th International Conference on Machine Learning (1999)
  19. Joshi, M.V., Kumar, V., Agarwal, R.C.: Evaluating boosting algorithms to classify rare classes: comparison and improvements. In: First IEEE International Conference on Data Mining, pp. 257–264 (2001)
  20. Chawla, N.V., Lazarevic, A., Hall, L.O., Bowyer, K.W.: SMOTEBoost: improving prediction of the minority class in boosting. In: Seventh European Conference on Principles and Practice of Knowledge Discovery in Databases, pp. 107–119 (2003)
  21. Guo, H., Viktor, H.L.: Learning from imbalanced data sets with boosting and data generation: the DataBoost-IM approach. SIGKDD Explor. 6(1), 30–39 (2004)
  22. Geiler, O.J., Hong, L., Yue-Jian, G.: An adaptive sampling ensemble classifier for learning from imbalanced data sets. In: International MultiConference of Engineers and Computer Scientists, vol. 1 (2010)
  23. Chen, S., He, H., Garcia, E.A.: RAMOBoost: ranked minority oversampling in boosting. IEEE Trans. Neural Netw. 21(10), 1624–1642 (2010)
  24. Seiffert, C., Khoshgoftaar, T.M., Van Hulse, J., Napolitano, A.: RUSBoost: a hybrid approach to alleviating class imbalance. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 40(1), 185–197 (2010)
  25. Galar, M., Fernandez, A., Barrenechea, E., Herrera, F.: EUSBoost: enhancing ensembles for highly imbalanced data-sets by evolutionary undersampling. Pattern Recogn. 46(12), 3460–3471 (2013)
  26. Lu, J., Plataniotis, K.N., Venetsanopoulos, A.N., Li, S.Z.: Ensemble-based discriminant learning with boosting for face recognition. IEEE Trans. Neural Netw. 17(1), 166–178 (2006)
  27. Eibl, G., Pfeiffer, K.-P.: Multiclass boosting for weak classifiers. J. Mach. Learn. Res. 6, 189–210 (2005)
  28. Turk, M., Pentland, A.: Eigenfaces for recognition. J. Cogn. Neurosci. 3(1), 71–86 (1991)
  29. Belhumeur, P., Hespanha, J., Kriegman, D.: Eigenfaces vs. Fisherfaces: recognition using class specific linear projection. IEEE Trans. Pattern Anal. Mach. Intell. 19(7), 711–720 (1997)
  30. Huang, G.B., Mattar, M., Lee, H., Learned-Miller, E.: Learning to align from scratch. In: Advances in Neural Information Processing Systems (NIPS), Lake Tahoe, Nevada, USA, December 3–6, 2012

Copyright information

© Springer International Publishing AG 2018

Authors and Affiliations

  • Xiaohui Yuan (1)
  • Mohamed Abouelenien (1)
  • Mohamed Elhoseny (2)

  1. Department of Computer Science and Engineering, University of North Texas, Denton, USA
  2. Faculty of Computers and Information, Mansoura University, Mansoura, Egypt
