Abstract
The ridge polynomial neural network (RPNN) is one of the most popular higher-order neural networks: it can approximate a wide class of functions while avoiding the combinatorial growth in the number of weights that afflicts other higher-order architectures. In this paper, we study the convergence of the gradient method with the batch updating rule for training the RPNN, and we prove a monotonicity theorem and two convergence theorems (one for weak convergence and one for strong convergence). Experimental results confirm the validity of the proposed theorems.
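To make the setting concrete: the RPNN computes y = σ(Σ_{i=1}^{N} Π_{j=1}^{i} (w_ij · x + b_ij)), a sum of pi-sigma units of increasing degree, and the batch updating rule accumulates the error gradient over the entire training set before each weight update. The Python sketch below is our illustration of this scheme, not the authors' implementation; the class name, learning rate, and XOR demo are assumptions made for the example.

import numpy as np

# A minimal illustrative sketch (not the authors' code): a ridge polynomial
# network f(x) = sigma( sum_{i=1}^{N} prod_{j=1}^{i} (w_ij . x + b_ij) )
# trained with the batch updating rule: the error gradient is accumulated
# over the whole training set before each weight update.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class RidgePolynomialNetwork:
    def __init__(self, n_inputs, order, seed=0):
        rng = np.random.default_rng(seed)
        # W[i] holds the ridge weights of the (i+1)-th pi-sigma unit,
        # one row per ridge term, with the bias appended as a last column.
        self.W = [rng.normal(scale=0.1, size=(i + 1, n_inputs + 1))
                  for i in range(order)]

    def forward(self, x):
        x1 = np.append(x, 1.0)                 # append 1 for the bias
        ridges = [Wi @ x1 for Wi in self.W]    # ridge terms of each unit
        net = sum(np.prod(r) for r in ridges)  # sum of products
        return sigmoid(net), x1, ridges

    def batch_gradient_step(self, X, T, eta=0.5):
        grads = [np.zeros_like(Wi) for Wi in self.W]
        for x, t in zip(X, T):
            y, x1, ridges = self.forward(x)
            delta = (y - t) * y * (1.0 - y)    # dE/d(net) for squared error
            for i, r in enumerate(ridges):
                for j in range(len(r)):
                    # d(prod_k r_k)/d(w_ij) = (product of other terms) * x1
                    grads[i][j] += delta * np.prod(np.delete(r, j)) * x1
        for Wi, g in zip(self.W, grads):       # one update per full batch
            Wi -= eta * g / len(X)

# Example: train a second-order network on the XOR problem.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([0.0, 1.0, 1.0, 0.0])
net = RidgePolynomialNetwork(n_inputs=2, order=2)
for _ in range(10000):
    net.batch_gradient_step(X, T)
print([round(float(net.forward(x)[0]), 2) for x in X])

With these assumed settings the outputs are expected to approach the XOR targets, though the speed of convergence depends on the learning rate and initialization. The point of the sketch is the batch rule itself: the weights change only once per pass over the data, which is exactly the setting in which the monotonicity and convergence theorems of the paper are proved.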
Acknowledgments
This work was supported in part by (1) the National Natural Science Foundation of China (Grant No. 60763013), (2) a scientific research project of the Guangxi Science and Technology Department (Grant No. 11107006-1), and (3) a scientific research project of the Guangxi Education Department (Grant No. TLZ100715).
Cite this article
Yu, X., Deng, F. Convergence of gradient method for training ridge polynomial neural network. Neural Comput & Applic 22 (Suppl 1), 333–339 (2013). https://doi.org/10.1007/s00521-012-0915-4