Abstract
This paper presents a modified neural-network structure with a tunable activation function and a new learning algorithm for training it. Simulation results on the XOR problem, the Feigenbaum function, and the Hénon map show that the new algorithm outperforms the BP (back-propagation) algorithm, achieving shorter convergence time and higher convergence accuracy. Further modifications to the network structure, combined with the faster learning algorithm, yield a simpler network with even faster convergence and better convergence accuracy.
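The paper's exact algorithm is not reproduced in the abstract, but the core idea can be illustrated with a minimal sketch: a single neuron whose activation function has tunable coefficients can represent mappings (such as XOR) that a fixed-activation neuron cannot, and those coefficients can be fitted quickly in closed form by least squares rather than by slow BP iterations. The fixed input weights and the quadratic activation basis below are illustrative assumptions, not the authors' construction.

```python
# Illustrative sketch (not the paper's exact algorithm): one neuron with
# fixed input weights and a tunable quadratic activation
#     f(s) = c0 + c1*s + c2*s^2,   where s = w . x.
# The activation coefficients c are fitted in closed form by least squares,
# which is enough for a single neuron to realize XOR exactly.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])        # XOR targets

w = np.array([1.0, 1.0])                  # fixed input weights (assumption)
s = X @ w                                 # pre-activations: 0, 1, 1, 2

# Design matrix over the activation basis {1, s, s^2}
Phi = np.column_stack([np.ones_like(s), s, s ** 2])

# "Tune" the activation: solve min_c ||Phi c - y||^2 in one step
c, *_ = np.linalg.lstsq(Phi, y, rcond=None)

out = Phi @ c
print(np.round(out, 6))                   # -> [0. 1. 1. 0.]
```

The closed-form fit recovers f(s) = 2s - s², so the neuron outputs XOR exactly; this one-shot least-squares step stands in for the iterative weight updates that BP would need.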
Shen, Y., Wang, B. A fast learning algorithm of neural network with tunable activation function. Sci China Ser F 47, 126–136 (2004). https://doi.org/10.1360/02yf0263