
Backpropagation to train an evolving radial basis function neural network


Abstract

In this paper, a stable backpropagation algorithm is used to train an online evolving radial basis function neural network. Structure learning and parameter learning are carried out simultaneously; the algorithm does not separate the two phases. Groups are generated by an online clustering procedure, and at each iteration the nearest center is moved toward the incoming data. The algorithm therefore does not need to create a new neuron at every iteration, so it neither generates an excessive number of neurons nor requires pruning. A time-varying learning rate is used for the backpropagation training of the parameters, and the stability of the proposed algorithm is proved.
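To make the overall scheme concrete, the following is a minimal Python sketch of an evolving Gaussian RBF network trained in this spirit. It is an illustration, not the paper's implementation: the class and variable names (`EvolvingRBF`, `centers`, `widths`, `weights`, the distance threshold `delta`) are our own, and the gradient terms and dead-zone learning rate only approximate the paper's equations (12)-(14), which are not reproduced in this excerpt.

```python
import numpy as np

class EvolvingRBF:
    """Minimal sketch of an evolving Gaussian RBF network.

    Illustrative only: the names and constants here are not the paper's
    notation, and the updates only approximate its equations (12)-(14).
    """

    def __init__(self, dim, delta=1.0, sigma0=1.0):
        self.delta = delta                  # distance threshold for adding a neuron
        self.sigma0 = sigma0                # initial width of a new neuron
        self.centers = np.zeros((0, dim))   # one row per hidden neuron
        self.widths = np.zeros((0, dim))
        self.weights = np.zeros(0)          # output-layer weights

    def _phi(self, x):
        # Gaussian activations of all hidden neurons
        d = (x - self.centers) / self.widths
        return np.exp(-np.sum(d ** 2, axis=1))

    def predict(self, x):
        return float(self.weights @ self._phi(x)) if len(self.weights) else 0.0

    def update(self, x, y, eta0=0.5, zeta_bar=0.01):
        # Structure learning by online clustering: add a neuron only when x
        # is far from every center; otherwise pull the nearest center toward x.
        if len(self.weights) == 0 or \
                np.min(np.linalg.norm(self.centers - x, axis=1)) > self.delta:
            self.centers = np.vstack([self.centers, x])
            self.widths = np.vstack([self.widths,
                                     np.full_like(x, self.sigma0, dtype=float)])
            self.weights = np.append(self.weights, 0.0)
        else:
            j = np.argmin(np.linalg.norm(self.centers - x, axis=1))
            self.centers[j] += 0.5 * (x - self.centers[j])

        # Parameter learning by backpropagation with a time-varying rate.
        e = self.predict(x) - y
        phi = self._phi(x)
        diff = x - self.centers
        g_v = phi * e                                                   # output weights
        g_c = (2 * self.weights * phi)[:, None] * diff / self.widths ** 2 * e
        g_s = (2 * self.weights * phi)[:, None] * diff ** 2 / self.widths ** 3 * e
        q = np.sum(g_v ** 2) + np.sum(g_c ** 2) + np.sum(g_s ** 2)
        # Dead-zone, normalized learning rate (see the appendix).
        eta = eta0 / (1.0 + q) if e ** 2 >= zeta_bar ** 2 / (1.0 - eta0) else 0.0
        self.weights -= eta * g_v
        self.centers -= eta * g_c
        self.widths -= eta * g_s
```

Feeding a stream of (x, y) samples through `update` one at a time reproduces the behaviour described above: a neuron is created only when an input lands far from every existing center, so the network neither grows at every step nor needs pruning.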




Acknowledgments

The authors are grateful to the editor and the reviewers for their valuable comments and insightful suggestions, which helped improve this research significantly. The authors thank the Secretaría de Investigación y Posgrado, the Comisión de Operación y Fomento de Actividades Académicas del IPN, and the Consejo Nacional de Ciencia y Tecnología for their support of this research.

Author information

Correspondence to José de Jesús Rubio.

Appendix


Proof of Theorem 1

We select the following Lyapunov function \(L_{1}(k-1)\):

$$ L_{1}(k-1)=\widetilde{c}_{ij}^{2}(k-1)+\widetilde{\sigma }_{ij}^{2}(k-1)+ \widetilde{v}_{j}^{2}(k-1) $$
(20)

Applying the update law (13), we have:

$$ \begin{aligned} \widetilde{c}_{ij}(k)&=\widetilde{c}_{ij}(k-1)-\eta (k-1)D_{1ij}(k-1)e\left( k-1\right) \\ \widetilde{\sigma }_{ij}(k)&=\widetilde{\sigma }_{ij}(k-1)-\eta (k-1)D_{2ij}(k-1)e\left( k-1\right) \\ \widetilde{v}_{j}(k)&=\widetilde{v}_{j}(k-1)-\eta (k-1)D_{3j}(k-1)e\left( k-1\right)\\ \end{aligned} $$
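A brief note on where these error dynamics come from (the definitions sit in the part of the paper not reproduced on this page, so this is the standard convention rather than a quotation): if \(\widetilde{c}_{ij}(k)=c_{ij}(k)-c_{ij}^{\ast }\) denotes the deviation of a center from its optimal value and (13) is the gradient update \(c_{ij}(k)=c_{ij}(k-1)-\eta (k-1)D_{1ij}(k-1)e\left( k-1\right)\), then subtracting the constant \(c_{ij}^{\ast }\) from both sides gives the first line above; the lines for \(\widetilde{\sigma }_{ij}\) and \(\widetilde{v}_{j}\) follow in the same way.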

Now we calculate \(\Updelta L_{1}(k-1)\):

$$ \begin{aligned} \Updelta L_{1}(k-1)&=\left[ \widetilde{c}_{ij}(k-1)-\eta (k-1)D_{1ij}(k-1)e\left( k-1\right) \right] ^{2} \\ &\quad-\widetilde{c}_{ij}^{2}(k-1)+\left[ \widetilde{\sigma }_{ij}(k-1)-\eta (k-1)D_{2ij}(k-1)e\left( k-1\right) \right] ^{2} \\ &\quad-\widetilde{\sigma }_{ij}^{2}(k-1)+\left[ \widetilde{v}_{j}(k-1)-\eta (k-1)D_{3j}(k-1)e\left( k-1\right) \right] ^{2} \\ &\quad-\widetilde{v}_{j}^{2}(k-1) \\ &=\eta ^{2}(k-1)\left\{ D_{1ij}^{2}(k-1)+D_{2ij}^{2}(k-1)+D_{3j}^{2}(k-1)\right\} e^{2}\left( k-1\right) \\ &\quad-2\eta (k-1)\left[ D_{1ij}(k-1)\widetilde{c}_{ij}(k-1)+D_{2ij}(k-1) \widetilde{\sigma }_{ij}(k-1)+D_{3j}(k-1)\widetilde{v}_{j}(k-1)\right] e\left( k-1\right) \end{aligned} $$
(21)

Substituting (12) into the last term of (21), applying the inequality \(2\zeta \left( k-1\right) e\left( k-1\right) \leq e^{2}\left( k-1\right) +\zeta ^{2}\left( k-1\right)\), and using (14) gives:

$$ \begin{aligned} \Updelta L_{1}(k-1)&=-2\eta (k-1)\left[ e\left( k-1\right) -\zeta (k-1)\right] e\left( k-1\right) \\ &\quad+\eta ^{2}(k-1)\left\{ D_{1ij}^{2}(k-1)+D_{2ij}^{2}(k-1)+D_{3j}^{2}(k-1)\right\} e^{2}\left( k-1\right) \\ &\leq \eta ^{2}(k-1)\left\{ 1+D_{1ij}^{2}(k-1)+D_{2ij}^{2}(k-1)+D_{3j}^{2}(k-1)\right\} e^{2}\left( k-1\right) \\ &\quad-\eta (k-1)e^{2}\left( k-1\right) +\eta (k-1)\zeta ^{2}(k-1) \\ &\leq \eta ^{2}(k-1)\left\{ 1+q(k-1)\right\} e^{2}\left( k-1\right) \\ &\quad-\eta (k-1)e^{2}\left( k-1\right) +\eta (k-1)\zeta ^{2}(k-1) \\ &\leq -\eta (k-1)\left\{ 1-\eta (k-1)\left[ 1+q(k-1)\right] \right\} e^{2}\left( k-1\right) \\ &\quad+\eta (k-1)\zeta ^{2}(k-1)\\ \end{aligned} $$

In the case \(e^{2}\left( k-1\right) \geq \frac{\overline{\zeta }^{2}}{1-\eta _{0}}\) of the dead zone (14), the learning rate is \(\eta (k-1)={\frac{\eta _{0}}{1+q(k-1)}}>0\), so:

$$ \begin{aligned} \Updelta L_{1}(k-1)&\leq -\eta (k-1)\left\{ 1-\frac{\eta _{0}}{1+q(k-1)}\left[ 1+q(k-1)\right] \right\} e^{2}\left( k-1\right) \\ &\quad+\eta (k-1)\zeta ^{2}(k-1) \\ \Updelta L_{1}(k-1)&\leq -\eta (k-1)\left( 1-\eta _{0}\right) e^{2}\left( k-1\right) +\eta (k-1)\zeta ^{2}(k-1)\\ \end{aligned} $$

With \(\zeta ^{2}\left( k-1\right) \leq \overline{\zeta }^{2}:\)

$$ \Updelta L_{1}(k-1)\leq -\eta (k-1)\left[ \left( 1-\eta _{0}\right) e^{2}\left( k-1\right) -\overline{\zeta }^{2}\right] $$
(22)

Thus, when \(e^{2}\left( k-1\right) \geq \frac{\overline{\zeta }^{2}}{1-\eta _{0}}\) and \(\eta (k-1)>0\), we have \(\Updelta L_{1}(k-1)\leq 0\), so \(L_{1}(k)\) is bounded. If instead \(e^{2}\left( k-1\right) <\frac{\overline{\zeta }^{2}}{1-\eta _{0}}\), then from (14) \(\eta (k-1)=0\): none of the weights change, they remain bounded, and \(L_{1}(k)\) is again bounded.
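The dead-zone rule that this argument relies on is compact enough to state in code. The sketch below assumes, as the bound in (21) suggests, that \(q(k-1)\) is the sum of the squared gradient terms \(D_{1ij}^{2}+D_{2ij}^{2}+D_{3j}^{2}\); the function name and default values are illustrative, not the paper's.

```python
def dead_zone_eta(e, q, eta0=0.5, zeta_bar=0.01):
    """Time-varying learning rate with a dead zone (sketch of rule (14)).

    Outside the zone the normalized rate eta0 / (1 + q) yields
    Delta L_1 <= -eta * [(1 - eta0) * e**2 - zeta_bar**2] <= 0;
    inside the zone the weights are frozen, so L_1 stays bounded either way.
    """
    assert 0.0 < eta0 < 1.0, "eta0 must lie in (0, 1) for the bound to hold"
    if e ** 2 >= zeta_bar ** 2 / (1.0 - eta0):
        return eta0 / (1.0 + q)   # update outside the dead zone
    return 0.0                    # no update inside the dead zone
```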

When \(e^{2}\left( k-1\right) \geq \frac{\overline{\zeta }^{2}}{1-\eta _{0}}\), summing (22) from \(k=2\) to \(T\) gives:

$$ \sum\limits_{k=2}^{T}\eta (k-1)\left[ \left( 1-\eta _{0}\right) e^{2}\left( k-1\right) -\overline{\zeta }^{2}\right] \leq L_{1}(1)-L_{1}(T) $$
(23)

Since \(L_{1}(T)\) is bounded and \(\eta (k-1)=\frac{\eta _{0}}{1+q(k-1)}>0\):

$$ \underset{T\rightarrow \infty }{\lim}\sum\limits_{k=2}^{T}\left( \frac{\eta _{0}}{1+q(k-1)}\right) \left[ \left( 1-\eta _{0}\right) e^{2}\left( k-1\right) -\overline{\zeta }^{2}\right] <\infty $$
(24)

Because \(e^{2}\left( k-1\right) \geq \frac{\overline{\zeta }^{2}}{1-\eta _{0}}\), every summand satisfies \(\left( \frac{\eta _{0}}{1+q(k-1)}\right) \left[ \left( 1-\eta _{0}\right) e^{2}\left( k-1\right) -\overline{\zeta }^{2}\right] \geq 0\), so:

$$ \underset{k\rightarrow \infty }{\lim }\left( \frac{\eta _{0}}{1+q(k-1)}\right) \left[ \left( 1-\eta _{0}\right) e^{2}\left( k-1\right) -\overline{\zeta }^{2}\right] =0 $$
(25)

Because \(L_{1}(k-1)\) is bounded, \(q(k-1)<\infty \), and since \(\frac{\eta _{0}}{1+q(k-1)}>0\):

$$ \underset{k\rightarrow \infty }{\lim }\left( 1-\eta _{0}\right) e^{2}\left( k-1\right) =\overline{\zeta }^{2} $$
(26)

This establishes (15). In words, outside the dead zone the squared output error converges to the region \(e^{2}\left( k-1\right) \leq \frac{\overline{\zeta }^{2}}{1-\eta _{0}}\) determined by the bound \(\overline{\zeta }\) on the uncertainty. When \(e^{2}\left( k-1\right) <\frac{\overline{\zeta }^{2}}{1-\eta _{0}}\), the error is already inside this zone and the bound holds trivially.


Cite this article

de Jesús Rubio, J., Vázquez, D.M. & Pacheco, J. Backpropagation to train an evolving radial basis function neural network. Evolving Systems 1, 173–180 (2010). https://doi.org/10.1007/s12530-010-9015-9
