Properties of the Hermite Activation Functions in a Neural Approximation Scheme

  • Bartlomiej Beliczynski
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4432)


The main advantage of using Hermite functions as activation functions is that they offer a way to control high-frequency components in the approximation scheme. We prove that each subsequent Hermite function extends the frequency bandwidth of the approximator within a limited range of well-concentrated energy. By introducing a scaling parameter, this bandwidth can be controlled.
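The behaviour described in the abstract can be illustrated numerically. The sketch below (an assumption for illustration, not code from the paper; the function name `hermite_functions` is hypothetical) evaluates the orthonormal Hermite functions via their standard three-term recurrence and checks two properties the abstract relies on: mutual orthonormality, and the fact that a scaling parameter a stretches the frequency axis while the factor sqrt(a) restores unit L2 norm.

```python
import numpy as np

def hermite_functions(n_max, x):
    """Evaluate the orthonormal Hermite functions h_0..h_{n_max} at points x,
    using the stable three-term recurrence
    h_n(x) = sqrt(2/n)*x*h_{n-1}(x) - sqrt((n-1)/n)*h_{n-2}(x)."""
    h = np.empty((n_max + 1, x.size))
    h[0] = np.pi ** -0.25 * np.exp(-x ** 2 / 2)
    if n_max >= 1:
        h[1] = np.sqrt(2.0) * x * h[0]
    for n in range(2, n_max + 1):
        h[n] = np.sqrt(2.0 / n) * x * h[n - 1] - np.sqrt((n - 1) / n) * h[n - 2]
    return h

x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
h = hermite_functions(5, x)

# Orthonormality: the Gram matrix of h_0..h_5 is close to the identity.
gram = h @ h.T * dx
print(np.round(gram, 3))

# A scaling parameter a > 0 stretches the frequency axis (the energy of
# h_n(a*x) concentrates in |omega| <= a*sqrt(2n+1)); multiplying by
# sqrt(a) keeps each scaled function at unit L2 norm.
a = 2.0
h_scaled = np.sqrt(a) * hermite_functions(5, a * x)
norms = (h_scaled ** 2).sum(axis=1) * dx
print(np.round(norms, 3))
```

Because the Fourier transform of h_n is (-i)^n h_n, each h_n occupies essentially the band |omega| <= sqrt(2n+1), which is the bandwidth-extension property the paper proves; the scaling parameter then dilates that band uniformly.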


Activation Function · Feedforward Neural Network · High Frequency Component · Hidden Unit · Hermite Polynomial





Copyright information

© Springer Berlin Heidelberg 2007

Authors and Affiliations

  • Bartlomiej Beliczynski
  1. Warsaw University of Technology, Koszykowa 75, 00-662 Warsaw, Poland
