
A Novel Learning Algorithm for Feedforward Neural Networks

  • Huawei Chen
  • Fan Jin
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3971)

Abstract

A novel learning algorithm for feedforward neural networks, called BPWA, is presented; it adjusts the weights during both the forward phase and the backward phase. In the forward pass, it computes the minimum norm least-squares solution for the weights between the hidden layer and the output layer, while the backward pass adjusts the weights connecting the input layer to the hidden layer by error gradient descent. The algorithm is compared with the Extreme Learning Machine, the BP algorithm, and the LMBP algorithm on function approximation and classification tasks, and the experimental results demonstrate that the proposed algorithm performs well.
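The abstract outlines a two-phase update: an analytic solve for the output-layer weights on the forward pass, and gradient descent on the input-to-hidden weights on the backward pass. Below is a minimal sketch of one such training epoch, assuming a single hidden layer with sigmoid activations and a squared-error objective; the function name bpwa_epoch and all variable names are hypothetical illustrations, not the authors' code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bpwa_epoch(X, T, W1, b1, lr=0.01):
    """One BPWA-style training epoch (illustrative sketch).

    X:  (n_samples, n_inputs) input matrix
    T:  (n_samples, n_outputs) target matrix
    W1: (n_inputs, n_hidden) input-to-hidden weights
    b1: (n_hidden,) hidden biases
    """
    # Forward pass: hidden activations, then the minimum norm
    # least-squares solution for the hidden-to-output weights,
    # obtained via the Moore-Penrose pseudoinverse.
    H = sigmoid(X @ W1 + b1)            # (n_samples, n_hidden)
    beta = np.linalg.pinv(H) @ T        # (n_hidden, n_outputs)

    # Backward pass: gradient descent on the squared error with
    # respect to the input-to-hidden weights only.
    Y = Y_hat = H @ beta                # network output
    E = Y_hat - T                       # output error
    dH = (E @ beta.T) * H * (1.0 - H)   # backprop through the sigmoid
    W1 = W1 - lr * (X.T @ dH)
    b1 = b1 - lr * dH.sum(axis=0)
    return W1, b1, beta
```

In a full training loop this epoch would be repeated until the error converges; recomputing the output weights analytically in every forward pass is what distinguishes the scheme from plain backpropagation, in which all layers are updated by gradient descent.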

Keywords

Hidden Layer · Extreme Learning Machine · Hidden Neuron · Feedforward Neural Network · Output Weight

References

  1. Hornik, K., Stinchcombe, M., White, H.: Multilayer Feedforward Networks are Universal Approximators. Neural Networks 2(5), 359–366 (1989)
  2. Chen, T., Chen, H.: Universal Approximation to Nonlinear Operators by Neural Networks with Arbitrary Activation Functions and Its Application to Dynamical Systems. IEEE Trans. Neural Networks 6(4), 911–917 (1995)
  3. Judd, J.S.: Learning in Networks is Hard. In: Proc. of the 1st IEEE International Conference on Neural Networks, New York, vol. 2, pp. 685–692 (1987)
  4. Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning Internal Representations by Error Propagation. In: Parallel Distributed Processing, vol. 1. MIT Press, Cambridge (1986)
  5. Hagan, M.T., Menhaj, M.B.: Training Feedforward Networks with the Levenberg-Marquardt Algorithm. IEEE Trans. on Neural Networks 5(6), 989–993 (1994)
  6. Huang, G.-B., Zhu, Q.-Y., Siew, C.-K.: Extreme Learning Machine: A New Learning Scheme of Feedforward Neural Networks. In: Proceedings of the 2004 International Joint Conference on Neural Networks, vol. 2, pp. 985–990 (2004)
  7. Huang, G.-B., Zhu, Q.-Y., Siew, C.-K., Saratchandran, P., Sundararajan, N.: Can Threshold Networks Be Trained Directly? IEEE Transactions on Circuits and Systems II: Express Briefs (accepted for publication)
  8. Huang, G.-B., Babri, H.A.: Upper Bounds on the Number of Hidden Neurons in Feedforward Networks with Arbitrary Bounded Nonlinear Activation Functions. IEEE Trans. on Neural Networks 9(1), 224–229 (1998)
  9. Bartlett, P.L.: The Sample Complexity of Pattern Classification with Neural Networks: The Size of the Weights Is More Important than the Size of the Network. IEEE Trans. on Information Theory 44(2), 525–536 (1998)
  10. Chandra, P., Singh, Y.: An Activation Function Adapting Training Algorithm for Sigmoidal Feedforward Networks. Neurocomputing 61(10), 429–437 (2004)

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Huawei Chen¹
  • Fan Jin¹

  1. School of Information Science & Technology, Southwest Jiaotong University, Chengdu, China
