A Novel Learning Algorithm for Feedforward Neural Networks
A novel learning algorithm for feedforward neural networks, called BPWA, is presented; it adjusts the weights during both the forward and the backward phase. In the forward pass, it computes the minimum-norm least-squares solution for the weights between the hidden layer and the output layer, while the backward pass adjusts the weights connecting the input layer to the hidden layer by error gradient descent. The algorithm is compared with the Extreme Learning Machine, the BP algorithm, and the LMBP algorithm on function approximation and classification tasks. The experimental results demonstrate that the proposed algorithm performs well.
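The two-phase scheme described above can be illustrated with a short NumPy sketch. This is not the paper's implementation: the network size, activation, learning rate, and toy data are all assumptions for illustration. The forward pass solves for the hidden-to-output weights via the Moore-Penrose pseudoinverse (the minimum-norm least-squares solution, as in ELM), and the backward pass takes a gradient-descent step on the input-to-hidden weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy regression task (illustrative only): approximate y = sin(x) on [-3, 3]
X = np.linspace(-3, 3, 200).reshape(-1, 1)
T = np.sin(X)

n_hidden = 20                                  # assumed hidden-layer size
W = rng.normal(scale=0.5, size=(1, n_hidden))  # input-to-hidden weights
b = rng.normal(scale=0.5, size=(1, n_hidden))  # hidden biases
lr = 0.05                                      # assumed gradient-descent step

for epoch in range(500):
    # Forward pass: hidden activations, then output weights as the
    # minimum-norm least-squares solution via the pseudoinverse.
    H = sigmoid(X @ W + b)
    beta = np.linalg.pinv(H) @ T

    # Backward pass: gradient descent on the input-to-hidden weights,
    # backpropagating the output error through the sigmoid activations.
    E = H @ beta - T                           # output error
    dH = (E @ beta.T) * H * (1.0 - H)          # gradient w.r.t. pre-activations
    W -= lr * (X.T @ dH) / len(X)
    b -= lr * dH.mean(axis=0, keepdims=True)

# Final training error after both phases
final_H = sigmoid(X @ W + b)
final_beta = np.linalg.pinv(final_H) @ T
mse = float(np.mean((final_H @ final_beta - T) ** 2))
print(f"training MSE: {mse:.6f}")
```

Because the output weights are recomputed in closed form on every pass, only the input-side weights need iterative tuning, which is the structural difference from plain backpropagation that the abstract highlights.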
Keywords: Hidden Layer · Extreme Learning Machine · Hidden Neuron · Feedforward Neural Network · Output Weight