Gradient Boosting with Neural Networks

Chapter in: Effective Statistical Learning Methods for Actuaries III

Part of the book series: Springer Actuarial (SPACLN)

Abstract

Gradient boosting machines form a family of powerful machine learning techniques that have been applied with success in a wide range of practical applications. Many ensemble techniques rely on simple averaging of the models in the ensemble. The family of boosting methods adopts a different strategy to construct ensembles: new models are added to the ensemble sequentially, and at each iteration a new weak base-learner is trained with respect to the error of the whole ensemble built so far.

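The sequential construction described in the abstract is easiest to see in code. Below is a minimal, self-contained sketch of gradient boosting with small neural networks as base-learners; the squared-error loss, the constant initial model, scikit-learn's MLPRegressor, and the shrinkage value are illustrative assumptions, not the authors' implementation.

```python
# Minimal gradient-boosting sketch (assumptions: squared-error loss,
# constant initial model, scikit-learn MLPRegressor as the weak learner,
# fixed shrinkage; none of these choices are prescribed by the chapter).
import numpy as np
from sklearn.neural_network import MLPRegressor

def fit_boosted_ensemble(X, y, n_rounds=50, shrinkage=0.1):
    """Sequentially add small networks, each fit to the current residuals."""
    f0 = float(np.mean(y))   # initial constant model
    residuals = y - f0       # negative gradient of the squared-error loss
    learners = []
    for _ in range(n_rounds):
        # Weak base-learner: a deliberately small network.
        h = MLPRegressor(hidden_layer_sizes=(4,), max_iter=500, random_state=0)
        h.fit(X, residuals)
        learners.append(h)
        # Shrink the update and recompute the error of the whole ensemble
        # built so far; it becomes the next round's training target.
        residuals = residuals - shrinkage * h.predict(X)
    return f0, learners

def predict_boosted(X, f0, learners, shrinkage=0.1):
    """Ensemble prediction: initial constant plus shrunken learner outputs."""
    return f0 + shrinkage * sum(h.predict(X) for h in learners)

# Illustrative usage on synthetic data
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(500, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(500)
f0, learners = fit_boosted_ensemble(X, y)
print(predict_boosted(X[:5], f0, learners))
```

Because the loss here is squared error, the negative gradient at each iteration is simply the current residual vector, so each new network is literally trained on the error of the ensemble built so far; for other losses, the residuals would be replaced by the corresponding negative gradients.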



Copyright information

© 2019 Springer Nature Switzerland AG

About this chapter


Cite this chapter

Denuit, M., Hainaut, D., Trufin, J. (2019). Gradient Boosting with Neural Networks. In: Effective Statistical Learning Methods for Actuaries III. Springer Actuarial. Springer, Cham. https://doi.org/10.1007/978-3-030-25827-6_7

