Feed-Forward Neural Networks


Part of the book series: Springer Actuarial (SPACLN)

Abstract

This chapter introduces the general features of artificial neural networks. After presenting the mathematical neural cell, we focus on feed-forward networks. We first discuss the preprocessing of data and then survey the different methods for calibrating such networks. Finally, we apply the theory to an insurance data set and compare the predictive power of neural networks and generalized linear models.
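The feed-forward architecture mentioned in the abstract can be sketched in a few lines: each layer applies an affine map followed by a nonlinear activation, and layers are composed in sequence. The architecture, random weights, and tanh activation below are illustrative assumptions, not the chapter's calibrated model.

```python
import numpy as np

def forward(x, layers):
    """One pass through a feed-forward network: each layer applies
    an affine map (W @ a + b) followed by a tanh activation."""
    a = x
    for W, b in layers:
        a = np.tanh(W @ a + b)
    return a

rng = np.random.default_rng(0)
# Hypothetical architecture: 3 inputs -> 4 hidden units -> 1 output.
layers = [
    (rng.normal(size=(4, 3)), np.zeros(4)),
    (rng.normal(size=(1, 4)), np.zeros(1)),
]
y = forward(np.array([0.2, -1.0, 0.5]), layers)
print(y)
```

Calibration, in the sense surveyed in the chapter, amounts to choosing the weight matrices and biases so that such outputs fit observed responses.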


Notes

  1.

    A perceptron is a single artificial neuron using the Heaviside step function as the activation function. It was developed by Rosenblatt (1958) for image recognition.
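The perceptron described in this note can be written down directly: a weighted sum of the inputs passed through the Heaviside step function. The weights and bias below are illustrative choices (they realise a logical AND of two binary inputs), not values from the chapter.

```python
import numpy as np

def heaviside(z):
    """Heaviside step activation: 1 if z >= 0, else 0."""
    return np.where(z >= 0.0, 1.0, 0.0)

def perceptron(x, w, b):
    """Single artificial neuron: weighted sum of inputs plus a bias,
    passed through the Heaviside step function."""
    return heaviside(np.dot(w, x) + b)

# Illustrative parameters: with these, the perceptron fires only
# when both binary inputs are 1 (logical AND).
w = np.array([1.0, 1.0])
b = -1.5
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, perceptron(np.array(x, dtype=float), w, b))
```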

References

  • Bishop C (1995) Neural networks for pattern recognition. Clarendon Press, Oxford

  • Cybenko G (1989) Approximation by superpositions of a sigmoidal function. Math Control Signals Syst 2:303–314

  • Denuit M, Hainaut D, Trufin J (2019) Effective statistical learning methods for actuaries: GLMs and extensions. Springer, Berlin

  • Efron B, Hinkley DV (1978) Assessing the accuracy of the maximum likelihood estimator: observed versus expected Fisher information. Biometrika 65(3):457–487

  • Hastie T, Tibshirani R, Friedman J (2009) The elements of statistical learning: data mining, inference, and prediction, 2nd edn. Springer, New York

  • Haykin S (1994) Neural networks: a comprehensive foundation. Prentice-Hall, Saddle River

  • Hornik K (1991) Approximation capabilities of multilayer feed-forward networks. Neural Netw 4:251–257

  • Hornik K, Stinchcombe M, White H (1989) Multi-layer feed-forward networks are universal approximators. Neural Netw 2:359–366

  • McCulloch W, Pitts W (1943) A logical calculus of the ideas immanent in nervous activity. Bull Math Biophys 5:115–133

  • McNelis PD (2005) Neural networks in finance: gaining predictive edge in the market. Advanced finance series. Elsevier Academic Press, Cambridge

  • Metropolis N, Rosenbluth AW, Rosenbluth MN, Teller AH, Teller E (1953) Equation of state calculations by fast computing machines. J Chem Phys 21:1087–1092

  • Nelder JA, Wedderburn RWM (1972) Generalized linear models. J R Stat Soc Ser A 135(3):370–384

  • Ohlsson E, Johansson B (2010) Non-life insurance pricing with generalized linear models. Springer, Berlin

  • Parker D (1985) Learning logic, technical report TR-87. MIT Center for Research in Computational Economics and Management Science, Cambridge

  • Riedmiller M, Braun H (1993) A direct adaptive method for faster backpropagation learning: the RPROP algorithm. In: Proceedings of the IEEE international conference on neural networks. IEEE, Piscataway, pp 586–591

  • Ripley BD (1996) Pattern recognition and neural networks. Cambridge University Press, Cambridge

  • Rosenblatt F (1958) The perceptron: a probabilistic model for information storage and organization in the brain. Psychol Rev 65:386–408

  • Rumelhart D, Hinton G, Williams R (1986) Learning internal representations by error propagation. In: Parallel distributed processing: explorations in the microstructure of cognition. MIT Press, Cambridge, pp 318–362

  • Trufin J, Denuit M, Hainaut D (2019) Effective statistical learning methods for actuaries: tree-based methods. Springer, Berlin

  • Widrow B, Hoff M (1960) Adaptive switching circuits. In: IRE WESCON convention record, vol 4, pp 96–104

  • Wilks SS (1938) The large-sample distribution of the likelihood ratio for testing composite hypotheses. Ann Math Stat 9:60–62

  • Wuthrich M, Buser C (2017) Data analytics for non-life insurance pricing. Swiss Finance Institute Research Paper No. 16-68. Available on SSRN: https://ssrn.com/abstract=2870308

Copyright information

© 2019 Springer Nature Switzerland AG

About this chapter

Cite this chapter

Denuit, M., Hainaut, D., Trufin, J. (2019). Feed-Forward Neural Networks. In: Effective Statistical Learning Methods for Actuaries III. Springer Actuarial. Springer, Cham. https://doi.org/10.1007/978-3-030-25827-6_1
