
Neural Processing Letters, Volume 23, Issue 2, pp 111–119

A Modified Backpropagation Training Algorithm for Feedforward Neural Networks

  • T. Kathirvalavakumar
  • P. Thangavel

Abstract

In this paper, a new efficient learning procedure for training a single-hidden-layer feedforward network is proposed. This procedure trains the output layer and the hidden layer separately. A new optimization criterion for the hidden layer is proposed. Existing methods for finding a fictitious teacher signal for the output of each hidden neuron, a modified standard backpropagation algorithm, and the new optimization criterion are combined to train feedforward neural networks. The effectiveness of the proposed procedure is demonstrated by simulation results.
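The abstract's two-phase idea — train the output layer separately, then derive a fictitious teacher signal for each hidden neuron and fit the hidden layer against it — can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the paper's specific optimization criterion and fictitious-teacher construction are not given in the abstract, so the update rules below (least-squares fits on inverse-activation "linear errors", and hidden targets formed by pushing the output error back through the output weights) are assumptions chosen only to show the general structure, here on a toy XOR task.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def inv_sigmoid(y):
    # Inverse activation, clipped to keep the log finite.
    y = np.clip(y, 1e-6, 1 - 1e-6)
    return np.log(y / (1.0 - y))

# Toy data: XOR, with a bias column appended to the inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)
Xb = np.hstack([X, np.ones((4, 1))])

n_hidden = 4
W1 = rng.normal(scale=0.5, size=(3, n_hidden))        # input  -> hidden
W2 = rng.normal(scale=0.5, size=(n_hidden + 1, 1))    # hidden -> output

for epoch in range(200):
    H = sigmoid(Xb @ W1)                 # hidden-layer outputs
    Hb = np.hstack([H, np.ones((4, 1))])

    # Phase 1: train the output layer separately -- a linear
    # least-squares fit on the inverse-activation of the targets
    # (a "linear error" formulation; an assumption here).
    W2 = np.linalg.lstsq(Hb, inv_sigmoid(T), rcond=None)[0]

    # Phase 2: form a fictitious teacher signal for each hidden
    # neuron by pushing the remaining output error back through
    # the output weights (assumed construction).
    Y = sigmoid(Hb @ W2)
    H_target = np.clip(H + (T - Y) @ W2[:-1].T, 0.05, 0.95)

    # Train the hidden layer separately against its own
    # linearized teacher signal.
    W1 = np.linalg.lstsq(Xb, inv_sigmoid(H_target), rcond=None)[0]

Y = sigmoid(np.hstack([sigmoid(Xb @ W1), np.ones((4, 1))]) @ W2)
print(np.round(Y.ravel(), 2))
```

The point of the sketch is the separation of concerns: each phase reduces to a linear problem in one layer's weights, which is what makes such decomposed procedures faster than plain backpropagation through both layers at once.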

Keywords

linear error · modified standard backpropagation · nonlinear error · optimization criterion · single hidden layer network



Copyright information

© Springer 2006

Authors and Affiliations

  1. Department of Computer Science, V.H.N.S.N. College, Virudhunagar, India
  2. Department of Computer Science, University of Madras, Chennai, India
