Convergence of batch gradient learning with smoothing regularization and adaptive momentum for neural networks
Abstract
This paper presents new theoretical results on the backpropagation algorithm with smoothing \(L_{1/2}\) regularization and adaptive momentum for feedforward neural networks with a single hidden layer: we show that the gradient of the error function goes to zero and that the weight sequence converges to a fixed point as n (the number of iteration steps) tends to infinity. Our results are more general than existing ones, since we do not require the error function to be quadratic or uniformly convex, and the conditions on the neuronal activation functions are relaxed. Moreover, compared with existing algorithms, the proposed algorithm yields a sparser network structure: it forces weights to become smaller during training so that they can eventually be removed afterwards, which simplifies the network structure and lowers operation time. Finally, two numerical experiments are presented to illustrate the main results in detail.
Keywords
Feedforward neural networks · Adaptive momentum · Smoothing \(L_{1/2}\) regularization · Convergence

Background
A multilayer perceptron network trained with the highly popular error backpropagation (BP) algorithm has dominated the neural network literature for over two decades (Haykin 2008). There are two practical ways to implement the gradient method in BP: the batch updating approach accumulates the weight corrections over a whole training epoch before performing the update, while the online learning approach updates the network weights immediately after each training sample is processed (Wilson and Martinez 2003).
Note that training is usually done by iteratively updating the weights, with each update proportional to the negative gradient of a sum-of-squares error (SSE) function. However, during the training of feedforward neural networks (FNN) with SSE, the weights might become very large or even unbounded. This drawback can be addressed by adding a regularization term to the error function. The extra term, also called a penalty term, acts as a brute force that drives unnecessary weights toward zero, preventing the weights from taking too large values; it can then be used to remove weights that are not needed (Haykin 2008; Wu et al. 2006; Karnin 1990; Reed 1993; Saito and Nakano 2000).
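To make this idea concrete, the following minimal sketch (hypothetical names; a linear unit stands in for the network) adds a weight-decay penalty \(\lambda \|w\|^2\) to an SSE error and shows how the penalty contributes a shrinkage term to the gradient:

```python
import numpy as np

def sse_with_penalty(w, X, y, lam):
    # Sum-of-squares error of a linear unit plus a weight-decay
    # penalty term lam * ||w||^2.
    residual = X @ w - y
    return 0.5 * np.sum(residual ** 2) + lam * np.sum(w ** 2)

def gradient(w, X, y, lam):
    # The penalty contributes 2*lam*w to the gradient: at every step
    # it pulls each weight toward zero, so unnecessary weights decay.
    return X.T @ (X @ w - y) + 2 * lam * w
```

A gradient step then both reduces the data error and shrinks the weights, which is exactly the pruning effect the penalty is meant to produce.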
There are four main different penalty approaches for BP training: weight decay procedure (Hinton 1989), weight elimination (Weigend et al. 1991), approximate smoother procedure (Moody and Rognvaldsson 1997) and inner product penalty (Kong and Wu 2001).
In the weight decay procedure, the complexity penalty term is defined as the squared norm of the weight vector, and all weights in the multilayer perceptron are treated equally. In the weight elimination procedure, the complexity penalty represents the complexity of the network as function of weight magnitudes relative to a preassigned parameter (Reed 1993).
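The difference between these two penalties can be sketched as follows (a minimal illustration; \(w_0\) denotes the preassigned scale parameter of the weight elimination procedure):

```python
import numpy as np

def weight_decay_penalty(w):
    # Weight decay: squared norm of the weight vector;
    # every weight is penalized equally and the penalty grows unboundedly.
    return np.sum(w ** 2)

def weight_elimination_penalty(w, w0=1.0):
    # Weight elimination (Weigend et al. 1991): each term saturates
    # near 1 for |w| >> w0, so the penalty approximately counts the
    # number of weights that are "large" relative to the scale w0.
    return np.sum(w ** 2 / (w0 ** 2 + w ** 2))
```

Because the weight elimination terms saturate, large weights that are genuinely needed are penalized far less aggressively than under weight decay, while small weights are still driven to zero.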
In the approximate smoother procedure, the penalty term is used for a multilayer perceptron with a single hidden layer and a single neuron in the output layer. Compared with the earlier methods, it does two things. First, it distinguishes between the roles of weights in the hidden layer and those in the output layer. Second, it captures the interactions between these two sets of weights. However, it is computationally much more demanding than the weight decay or weight elimination methods. In Kong and Wu (2001) the inner-product form is proposed and its efficiency in controlling the weights is demonstrated. Convergence of the gradient method for FNN has been considered by Zhang et al. (2015, 2009), Wang et al. (2012) and Shao and Zheng (2011).
The convergence of the gradient method with momentum is considered in Bhaya and Kaszkurewicz (2004), Torii and Hagan (2002) and Zhang et al. (2006); in Bhaya and Kaszkurewicz (2004) and Torii and Hagan (2002) it is established under the restriction that the error function is quadratic. Inspired by Chan and Fallside (1987), Zhang et al. (2006) considers the convergence of a gradient algorithm with adaptive momentum without assuming the error function to be quadratic. However, the strong convergence result in Zhang et al. (2006) relies on the assumption that the error function is uniformly convex, which still seems rather restrictive.
The size of a hidden layer is one of the most important considerations when dealing with real life tasks using FNN. However, the existing pruning methods may not prune the unnecessary weights efficiently, so how to efficiently simplify the network structure becomes our main task.
The focus of this paper is on extending \(L_{1/2}\) regularization beyond its basic concept through its augmentation with a momentum term. There are also other applications of FNNs to optimization problems, such as the generalized gradient and recurrent neural network methods of Liu et al. (2012) and Liu and Cao (2010).
It is well known that a general drawback of the gradient-based BP learning process is its slow convergence. To accelerate learning, a momentum term is often added (Haykin 2001; Chan and Fallside 1987; Qiu et al. 1992; Istook and Martinez 2002). With momentum added to the update formula, the current weight increment is a linear combination of the gradient of the error function and the previous weight increment. As a result, the updates respond not only to the local gradient but also to recent gradients of the error function. Several reports in the literature discuss NN training with a momentum term (Torii and Hagan 2002; Perantonis and Karras 1995; Qian 1999).
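In update-rule form, with learning rate \(\eta\) and momentum coefficient \(\alpha\), the increment is \(\Delta w^{n+1} = -\eta \nabla E(w^n) + \alpha \Delta w^n\). A minimal sketch (hypothetical names, a toy quadratic error for illustration):

```python
def momentum_step(w, delta_prev, grad, eta=0.1, alpha=0.9):
    # Current increment: linear combination of the negative gradient
    # and the previous weight increment.
    delta = -eta * grad + alpha * delta_prev
    return w + delta, delta

# Toy quadratic error E(w) = 0.5 * w^2, so grad E(w) = w.
w, delta = 2.0, 0.0
for _ in range(50):
    w, delta = momentum_step(w, delta, grad=w)
```

Because the previous increment is reused, successive updates in a consistent downhill direction accumulate speed, while updates in alternating directions partially cancel, damping oscillations.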
As demonstrated in Torii and Hagan (2002), there always exists a momentum coefficient that stabilizes the steepest descent algorithm, regardless of the value of the learning rate (defined below); that work also shows how the value of the momentum coefficient changes the convergence properties. In Perantonis and Karras (1995), momentum acceleration is evaluated in terms of learning speed and scalability and found superior to reputedly fast variants of the BP algorithm on several benchmark training tasks. Qian (1999) shows that, in the limit of continuous time, the momentum parameter is analogous to the mass of Newtonian particles moving through a viscous medium in a conservative force field.
In this paper, a modified batch gradient method with a smoothing \(L_{1/2}\) regularization penalty and adaptive momentum (BGSAM) is proposed. It damps the oscillations present in the batch gradient method with \(L_{1/2}\) regularization and adaptive momentum (BGAM). In addition, without requiring the error function to be quadratic or uniformly convex, we present a comprehensive study of the weak and strong convergence of BGSAM, which offers an effective improvement for real-life applications.
The rest of this paper is arranged as follows. The BGSAM algorithm is described in the “Batch gradient method with smoothing \(L_{1/2}\) regularization and adaptive momentum (BGSAM)” section. In the “Convergence results” section, the convergence results for BGSAM are presented; the detailed proofs of the main results are given in the “Appendix”. The performance of BGSAM is compared with that of BGAM and the experimental results are shown in the “Numerical experiments” section. Concluding remarks are given in the “Conclusions” section.
Batch gradient method with smoothing \(L_{1/2}\) regularization and adaptive momentum (BGSAM)
Batch gradient method with \(L_{1/2}\) regularization and adaptive momentum (BGAM)
Smoothing \(L_{1/2}\) regularization and adaptive momentum (BGSAM)
Here the learning rate \(\eta\), the momentum coefficient vector \(\alpha _W^n\) of the nth training epoch and the other coefficients are the same as in the description of the BGAM algorithm. Each \(\alpha _{w_i}^n\) is chosen after each training epoch according to (10).
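The key ingredient of BGSAM is replacing the non-differentiable \(|t|^{1/2}\) factor of the \(L_{1/2}\) penalty by a smoothed version near zero. The paper's own smoothing function is defined in its equations (omitted in this excerpt); the sketch below uses one standard piecewise-polynomial \(C^2\) smoothing of \(|t|\) on \((-a, a)\), as used in related work (e.g. Wu et al. 2014), so it is an assumption rather than the paper's exact definition:

```python
import numpy as np

def smooth_abs(t, a=0.05):
    # C^2 piecewise-polynomial smoothing of |t|: equals |t| for |t| >= a,
    # and a quartic polynomial on (-a, a) matching the value and the first
    # and second derivatives at t = +/- a. One standard choice; the paper's
    # own smoothing function is defined in its (omitted) equations.
    t = np.asarray(t, dtype=float)
    poly = -t ** 4 / (8 * a ** 3) + 3 * t ** 2 / (4 * a) + 3 * a / 8
    return np.where(np.abs(t) >= a, np.abs(t), poly)

def smoothing_l_half_penalty(w, a=0.05, lam=0.0006):
    # Smoothing L_{1/2} penalty: lam * sum_i smooth_abs(w_i)^{1/2}.
    # Differentiable everywhere, so the gradient no longer oscillates
    # as weights pass through zero.
    return lam * np.sum(smooth_abs(w, a) ** 0.5)
```

The smoothed penalty agrees with the plain \(L_{1/2}\) term away from zero while removing the unbounded derivative at the origin, which is what damps the oscillations observed in BGAM.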
Convergence results
 (A1) g(t), \(g'(t)\) and \(g''(t)\) are uniformly bounded for \(t\in R\).

 (A2) There exists a bounded region \(\Omega \subset R^n\) such that \(\{w_0^n\}_{n=0}^\infty \subset \Omega\).

 (A3) \(0<\eta <\frac{1}{(M\lambda +C_1)(1+\alpha )^2}\), where \(M=\frac{\sqrt{6}}{4\sqrt{a^3}}\) and \(C_1\) is a constant defined in (16) below.
Theorem 1

 (i) \(\lim \nolimits _{n\rightarrow \infty }E_W(W^n)=0\).

Moreover, if assumption (A4) also holds, namely that there exists a compact set \(\Phi\) such that \(W^n\in \Phi\) and the set \(\Phi _0=\{W\in \Phi : E_W(W)=0\}\) contains finitely many points, then we have the following convergence:

 (ii) \(\lim \nolimits _{n\rightarrow \infty }W^n=W^*\), where \(W^*\in \Phi _0\).
Numerical experiments
This section presents simulations that verify the performance of BGAM and BGSAM. Our theoretical results are experimentally verified on the 3-bit parity problem, a typical benchmark problem in the area of neural networks.
3-bit parity problem
Input             Output    Input             Output
 1   1   1  −1      1        1  −1  −1  −1      1
 1   1  −1  −1      0       −1   1   1  −1      0
 1  −1   1  −1      0       −1  −1   1  −1      1
−1  −1  −1  −1      0       −1   1  −1  −1      1
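The truth table above can be generated programmatically. A small sketch (treating the fourth input component, fixed at −1, as a bias input, which is an assumption about the table's layout):

```python
import itertools

def parity_dataset():
    # 3-bit parity with inputs in {+1, -1}: the target is 1 when an odd
    # number of input bits equal +1, else 0. The fourth input component
    # is fixed at -1 (read here as a bias input; an assumption about
    # how the table is laid out).
    data = []
    for bits in itertools.product([1, -1], repeat=3):
        target = 1 if sum(b == 1 for b in bits) % 2 == 1 else 0
        data.append((list(bits) + [-1], target))
    return data
```

Parity is not linearly separable, which is why it is a standard stress test for single-hidden-layer networks.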
The performance results of BGAM and BGSAM are shown in the following figures. Figures 2, 3 and 4 present the comparison results for the learning rate \(\eta =0.01\), the penalty parameter \(\lambda =0.0006\) and the momentum coefficient \(\alpha =0.03\), respectively.
Conclusions
In this paper, a smoothing \(L_{1/2}\) regularization term with adaptive momentum is introduced into the batch gradient learning algorithm to prune FNN. First, it removes the oscillation of the gradient value. Second, the convergence results for three-layer FNN are proved under certain relaxed conditions. Third, the algorithm is applied to a 3-bit parity problem and the results are supplied to support the theoretical findings above. Finally, this new algorithm should also be effective for other types of neural networks and for big data processing.
Notes
Authors’ contributions
This work was carried out by the three authors, in collaboration. All authors read and approved the final manuscript.
Acknowledgements
This work was supported by the National Science Foundation of China (Nos. 11201051, 11501431, 11302158), the Tian Yuan Fund of the National Science Foundation of China (No. 11426167) and the Science Plan Foundation of the Education Bureau of Shaanxi Province. The authors are grateful to the anonymous reviewers and editors for their helpful comments and suggestions, which greatly improved this paper.
Competing interests
The authors declare that they have no competing interests.
References
 Bhaya A, Kaszkurewicz E (2004) Steepest descent with momentum for quadratic functions is a version of the conjugate gradient method. Neural Netw 17:65–71
 Chan LW, Fallside F (1987) An adaptive training algorithm for backpropagation networks. Comput Speech Lang 2:205–218
 Davis G (1994) Adaptive nonlinear approximations. Ph.D. thesis, New York University
 Donoho DL (1995) De-noising by soft-thresholding. IEEE Trans Inf Theory 41:613–627
 Donoho DL (2005) Neighborly polytopes and the sparse solution of underdetermined systems of linear equations. Technical report, Statistics Department, Stanford University
 Fan QW, Wu W, Zurada JM (2014) Convergence of online gradient method for feedforward neural networks with smoothing \(L_{1/2}\) regularization penalty. Neurocomputing 131:208–216
 Haykin S (2001) Neural networks: a comprehensive foundation, 2nd edn. Tsinghua University Press, Prentice Hall, Beijing
 Haykin S (2008) Neural networks and learning machines. Prentice-Hall, Upper Saddle River
 Hinton GE (1989) Connectionist learning procedures. Artif Intell 40(1–3):185–234
 Istook E, Martinez T (2002) Improved backpropagation learning in neural networks with windowed momentum. Int J Neural Syst 12(3–4):303–318
 Karnin ED (1990) A simple procedure for pruning backpropagation trained neural networks. IEEE Trans Neural Netw 1:239–242
 Kong J, Wu W (2001) Online gradient methods with a punishing term for neural networks. Northeast Math J 17(3):371–378
 Liu QS, Cao JD (2010) A recurrent neural network based on projection operator for extended general variational inequalities. IEEE Trans Syst Man Cybern B Cybern 40(3):928–938
 Liu QS, Guo ZS, Wang J (2012) A one-layer recurrent neural network for constrained pseudoconvex optimization and its application for dynamic portfolio optimization. Neural Netw 26:99–109
 Moody JE, Rognvaldsson TS (1997) Smoothing regularizers for projective basis function networks. In: Advances in neural information processing systems 9 (NIPS 1996). http://papers.nips.cc/book/advancesinneuralinformationprocessingsystems91996
 Natarajan BK (1995) Sparse approximate solutions to linear systems. SIAM J Comput 24:227–234
 Perantonis SJ, Karras DA (1995) An efficient constrained learning algorithm with momentum acceleration. Neural Netw 8(2):237–249
 Qian N (1999) On the momentum term in gradient descent learning algorithms. Neural Netw 12(1):145–151
 Qiu G, Varley MR, Terrell TJ (1992) Accelerated training of backpropagation networks by using adaptive momentum step. IEEE Electron Lett 28(4):377–379
 Reed R (1993) Pruning algorithms: a survey. IEEE Trans Neural Netw 4(5):740–747
 Saito K, Nakano R (2000) Second-order learning algorithm with squared penalty term. Neural Comput 12(3):709–729
 Shao H, Zheng G (2011) Convergence analysis of a backpropagation algorithm with adaptive momentum. Neurocomputing 74:749–752
 Tibshirani R (1996) Regression shrinkage and selection via the Lasso. J R Stat Soc B 58:267–288
 Torii M, Hagan MT (2002) Stability of steepest descent with momentum for quadratic functions. IEEE Trans Neural Netw 13(3):752–756
 Wang J, Wu W, Zurada JM (2012) Computational properties and convergence analysis of BPNN for cyclic and almost cyclic learning with penalty. Neural Netw 33:127–135
 Weigend AS, Rumelhart DE, Huberman BA (1991) Generalization by weight-elimination applied to currency exchange rate prediction. In: Proceedings of the international joint conference on neural networks, vol 1, pp 837–841
 Wilson DR, Martinez TR (2003) The general inefficiency of batch training for gradient descent learning. Neural Netw 16:1429–1451
 Wu W, Shao H, Li Z (2006) Convergence of batch BP algorithm with penalty for FNN training. Lect Notes Comput Sci 4232:562–569
 Wu W, Li L, Yang J, Liu Y (2010) A modified gradient-based neuro-fuzzy learning algorithm and its convergence. Inf Sci 180:1630–1642
 Wu W, Fan QW, Zurada JM et al (2014) Batch gradient method with smoothing \(L_{1/2}\) regularization for training of feedforward neural networks. Neural Netw 50:72–78
 Xu Z, Zhang H, Wang Y, Chang X, Liang Y (2010) \(L_{1/2}\) regularizer. Sci China Inf Sci 53:1159–1169
 Zhang NM, Wu W, Zheng GF (2006) Convergence of gradient method with momentum for two-layer feedforward neural networks. IEEE Trans Neural Netw 17(2):522–525
 Zhang H, Wu W, Liu F, Yao M (2009) Boundedness and convergence of online gradient method with penalty for feedforward neural networks. IEEE Trans Neural Netw 20(6):1050–1054
 Zhang HS, Zhang Y, Xu DP, Liu XD (2015) Deterministic convergence of chaos injection-based gradient method for training feedforward neural networks. Cogn Neurodyn 9(3):331–340
Copyright information
Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.