Online Non-linear Gradient Boosting in Multi-latent Spaces

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11191)


Gradient Boosting is a popular ensemble method that linearly combines diverse weak hypotheses to build a strong classifier. In this work, we propose a new Online Non-Linear gradient Boosting (ONLB) algorithm in which we jointly learn different combinations of the same set of weak classifiers in order to capture the idiosyncrasies of the target concept. To expand the expressiveness of the final model, our method leverages the non-linear complementarity of these combinations. We perform an experimental study showing that ONLB (i) outperforms recent online boosting methods in terms of both convergence rate and accuracy, and (ii) learns diverse and useful new latent spaces.
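The core idea in the abstract — a shared pool of weak learners, several jointly learned linear combinations of their outputs (the "latent spaces"), merged non-linearly into one score — can be illustrated with a small sketch. Everything below (the `Stump` weak learner, the `ONLBSketch` class, the tanh merge, and the squared-loss update rules) is an illustrative assumption for exposition, not the authors' exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

class Stump:
    """Weak hypothesis: the sign of one input feature, scaled by a
    magnitude that tracks its online agreement with the labels."""
    def __init__(self, feature):
        self.f = feature
        self.val = 0.0

    def predict(self, x):
        return self.val if x[self.f] > 0.0 else -self.val

    def update(self, x, y, lr=0.1):
        out = 1.0 if x[self.f] > 0.0 else -1.0
        # running estimate of the stump/label correlation
        self.val += lr * (y * out - self.val)

class ONLBSketch:
    """Loose sketch: K shared weak learners, M jointly learned linear
    combinations of their outputs (one latent space per row of W),
    merged non-linearly (tanh) by the weights v."""
    def __init__(self, dim, n_weak=10, n_latent=4, lr=0.1):
        self.weak = [Stump(k % dim) for k in range(n_weak)]
        self.W = rng.normal(0.0, 0.1, (n_latent, n_weak))  # one combination per row
        self.v = rng.normal(0.0, 0.1, n_latent)            # non-linear merge weights
        self.lr = lr

    def _weak_outputs(self, x):
        return np.array([h.predict(x) for h in self.weak])

    def predict(self, x):
        z = np.tanh(self.W @ self._weak_outputs(x))  # M latent-space scores
        return float(self.v @ z)                     # non-linear combination

    def partial_fit(self, x, y):
        h = self._weak_outputs(x)
        z = np.tanh(self.W @ h)
        g = float(self.v @ z) - y  # gradient of the squared loss w.r.t. the score
        # gradient step on both levels of combination weights
        dv = g * z
        dW = g * np.outer(self.v * (1.0 - z ** 2), h)
        self.v -= self.lr * dv
        self.W -= self.lr * dW
        # every shared weak learner is refreshed on the same example
        for hk in self.weak:
            hk.update(x, y)
```

On a stream, one would call `partial_fit(x, y)` per example; because all M combinations reuse the same weak outputs, the cost of the latent spaces is only the two small weight matrices, which is what makes learning several of them jointly cheap.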



Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. Université de Lyon, Université Saint-Étienne Jean-Monnet, UMR CNRS 5516, Laboratoire Hubert-Curien, Saint-Étienne, France
  2. Worldline, Bezons, France
