
Robust Neuroevolutionary Identification of Nonlinear Nonstationary Objects

Abstract

A neuroevolutionary approach to constructing mathematical models of nonlinear nonstationary objects under non-Gaussian noise is proposed. The general structure of an evolutionary feed-forward neural network is considered. Simulation of various cases of nonstationarity has demonstrated the efficiency of the proposed approach.
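The idea summarized in the abstract can be illustrated with a minimal sketch: a genetic algorithm evolves the weights of a small feed-forward network that identifies a nonlinear plant with a drifting parameter, while a robust (Huber-type) criterion serves as the fitness so that heavy-tailed, non-Gaussian noise does not dominate the selection. Everything below is an illustrative assumption, not the paper's actual model: the plant equation, the Laplace noise, the Huber criterion, truncation selection, Gaussian mutation, and all numerical settings.

```python
import numpy as np

rng = np.random.default_rng(0)
N_IN, N_HID = 2, 6  # network inputs: u(k), y(k-1); one tanh hidden layer

def plant(u, k, y_prev):
    # Illustrative nonlinear plant with a slowly drifting gain a(k)
    a = 1.0 + 0.3 * np.sin(0.01 * k)
    return a * np.sin(u) + 0.5 * y_prev / (1.0 + y_prev**2)

def make_data(n=200):
    u = rng.uniform(-2.0, 2.0, n)
    y = np.zeros(n)
    for k in range(1, n):
        # Laplace noise stands in for the heavy-tailed (non-Gaussian) disturbance
        y[k] = plant(u[k], k, y[k - 1]) + rng.laplace(0.0, 0.05)
    X = np.column_stack([u[1:], y[:-1]])
    return X, y[1:]

def init_net():
    # All weights flattened into one chromosome: W1, b1, w2, b2
    return rng.normal(0.0, 0.5, N_HID * N_IN + 2 * N_HID + 1)

def forward(theta, X):
    W1 = theta[:N_HID * N_IN].reshape(N_HID, N_IN)
    b1 = theta[N_HID * N_IN:N_HID * N_IN + N_HID]
    w2 = theta[N_HID * N_IN + N_HID:-1]
    b2 = theta[-1]
    return np.tanh(X @ W1.T + b1) @ w2 + b2

def robust_loss(theta, X, y, delta=0.5):
    # Huber-type M-estimation criterion: quadratic for small residuals,
    # linear in the tails, so outliers do not dominate the fitness
    e = forward(theta, X) - y
    small = np.abs(e) <= delta
    return np.where(small, 0.5 * e**2, delta * (np.abs(e) - 0.5 * delta)).mean()

def evolve(X, y, pop_size=40, gens=80, sigma=0.1):
    pop = [init_net() for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda th: robust_loss(th, X, y))
        elite = pop[:pop_size // 4]              # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            p = elite[rng.integers(len(elite))]
            children.append(p + rng.normal(0.0, sigma, p.shape))  # Gaussian mutation
        pop = elite + children
    return min(pop, key=lambda th: robust_loss(th, X, y))

X, y = make_data()
best = evolve(X, y)
print("robust loss of evolved model:", robust_loss(best, X, y))
```

In this sketch only the weights are evolved; the paper's evolutionary network also concerns structure, which would correspond to letting the chromosome encode N_HID and the connectivity as well.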


References

  1. S. Fahlman, “Fast learning variations on back-propagation: An empirical study,” in: Proc. of the 1988 Connectionist Models Summer School, D. S. Touretzky, G. E. Hinton, and T. Sejnowski (eds.), Morgan Kaufmann, Pittsburgh (1988), pp. 38–51.

  2. S. Fahlman and C. Lebiere, “The cascade-correlation learning architecture,” in: D. S. Touretzky (ed.), Adv. Neur. Inform. Proces. Syst., 2, Morgan Kaufmann, Los Altos (CA) (1990), pp. 524–532.

  3. M. J. Riley, Ch. P. Thompson, and K. W. Jenkins, “Improving the performance of cascade correlation neural networks on multimodal functions,” in: Proc. World Congress on Engineering, 3, London (2010).

  4. X. Yao, “Evolving artificial neural networks,” Proc. IEEE, 87, No. 9, 1423–1447 (1999).

  5. X. Yao and Y. Liu, “A new evolutionary system for evolving artificial neural networks,” IEEE Trans. on Neural Networks, 8, No. 3, 694–713 (1997).

  6. D. Floreano, P. Dürr, and C. Mattiussi, “Neuroevolution: From architectures to learning,” Evol. Intel., 1, 47–62 (2008).

  7. D. B. Fogel, “An introduction to simulated evolutionary optimization,” IEEE Trans. on Neural Networks, 5, No. 1, 3–14 (1994).

  8. T. Bäck, U. Hammel, and H.-P. Schwefel, “Evolutionary computation: Comments on the history and current state,” IEEE Trans. on Evol. Comput., 1, No. 1, 3–17 (1997).

  9. B. Zhang, Y., Lu J., and K.-L. Du, “Evolutionary computation and its applications in neural and fuzzy systems,” Appl. Computational Intelligence and Soft Comput., Vol. 2011, Article ID 938240 (2011).

  10. D. Whitley, “An overview of evolutionary algorithms: Practical issues and common pitfalls,” Inform. and Software Technology, 43, 817–831 (2001).

  11. J. Holland, Adaptation in Natural and Artificial Systems, 2nd ed., MIT Press, Cambridge (1992).

  12. D. Whitley, V. S. Gordon, and K. Mathias, “Lamarckian evolution, the Baldwin effect, and function optimization,” in: Proc. of the Parallel Problem Solving from Nature, Springer, Berlin (1994), pp. 6–15.

  13. Ch. Giraud-Carrier, “Unifying learning with evolution through Baldwinian evolution and Lamarckism: A case study,” in: Proc. Symp. on Comput. Intel. and Learning (CoIL-2000), Chios (Greece), MIT GmbH (2000), pp. 36–41.

  14. H.-P. Schwefel, Numerical Optimization of Computer Models, Wiley, Chichester, N.Y. (1981).

  15. T. Bäck and H.-P. Schwefel, “An overview of evolutionary algorithms for parameter optimization,” Evol. Comput., 1, No. 1, 1–23 (1993).

  16. J. R. Koza and J. P. Rice, “Genetic generation of both the weights and architecture for neural networks,” in: Proc. IEEE Int. Joint Conf. on Neural Networks, 2, IEEE Press, Seattle (WA) (1991), pp. 71–76.

  17. V. Lefort, C. Knibbe, G. Beslon, and J. Favrel, “Simultaneous optimization of weights and structure of an RBF neural network,” in: Artif. Evol., Springer, Berlin (2006), pp. 49–60.

  18. G. E. Hinton and S. J. Nowlan, “How learning can guide evolution,” Complex Systems, 1, 495–502 (1987).

  19. D. J. Chalmers, “The evolution of learning: An experiment in genetic connectionism,” in: Proc. of the 1990 Connectionist Models Summer School, D. S. Touretzky, J. L. Elman, and G. E. Hinton (eds.), Morgan Kaufmann, San Mateo (CA) (1990), pp. 81–90.

  20. J. Baxter, “The evolution of learning algorithms for artificial neural networks,” in: Complex Systems, D. Green and T. Bossomaier (eds.), IOS Press, Amsterdam (1992), pp. 313–326.

  21. W. Kinnebrock, “Accelerating the standard backpropagation method using a genetic approach,” Neurocomputing, 6, 583–588 (1994).

  22. K. W. C. Ku, M. W. Mak, and W. C. Siu, “Approaches of combining local and evolutionary search for training neural networks: A review and some new results,” in: Adv. Evol. Comput.: Theory and Appl., Springer, Berlin (2003), pp. 615–641.

  23. S. Ding, X. Xu, and H. Zhu, “Studies of optimization algorithms for some artificial neural networks based on genetic algorithm (GA),” J. Comput., 6, No. 5, 939–946 (2011).

  24. H. Braun and P. Zagorski, “ENZO-M: A hybrid approach for optimizing neural networks by evolution and learning,” in: Proc. Intern. Conf. on Parallel Problem Solving from Nature, Springer, Berlin (1994), pp. 440–451.

  25. L. Guo, D.-S. Huang, and W. Zhao, “Combining genetic optimization with hybrid learning algorithm for radial basis function neural networks,” Electron. Lett., 39, No. 22, 1600–1601 (2003).

  26. C. Harpham, C. W. Dawson, and M. R. Brown, “A review of genetic algorithms applied to training radial basis function networks,” Neural Comput. and Appl., 13, No. 3, 193–201 (2004).

  27. Lu Chun, Shi Bingxue, and Chen Lu, “Hybrid BP-GA for multilayer feedforward neural networks,” in: Proc. 7th IEEE Intern. Conf. on Electronics, Circuits and Systems, 2 (2000), pp. 958–961.

  28. G. E. Hinton and S. J. Nowlan, “How learning can guide evolution,” Complex Systems, 1, 495–502 (1987).

  29. R. M. French and A. Messinger, “Genes, phenes, and the Baldwin effect: Learning and evolution in a simulated population,” in: Artif. Life IV (1994), pp. 277–282.

  30. P. A. Castillo, J. J. Merelo, V. Rivas, et al., “G-Prop: Global optimization of multilayer perceptrons using GAs,” Neurocomputing, 35, 149–163 (2000).

  31. K. O. Stanley and R. Miikkulainen, “Evolving neural networks through augmenting topologies,” Evol. Comput., 10, No. 2, 99–127 (2002).

  32. F. H. F. Leung, H. K. Lam, S. H. Ling, and P. K. S. Tam, “Tuning of the structure and parameters of a neural network using an improved genetic algorithm,” IEEE Trans. on Neural Networks, 14, No. 1, 79–88 (2003).

  33. D. J. Montana and L. Davis, “Training feedforward networks using genetic algorithms,” in: Proc. 11th Int. Joint Conf. on Artificial Intelligence, Morgan Kaufmann, Detroit (Mich.) (1989), pp. 762–767.

  34. R. Neruda and S. Slusny, “Parameter genetic learning of perceptron networks,” in: Proc. 10th WSEAS Int. Conf. on SYSTEMS, Athens (Greece), Stevens Point (Wis., USA) (2006), pp. 92–97.

  35. J. Gonzalez, I. Rojas, J. Ortega, et al., “Multiobjective evolutionary optimization of the size, shape, and position parameters of radial basis function networks for function approximation,” IEEE Trans. on Neural Networks, 14, No. 6, 1478–1495 (2003).

  36. P. A. Castillo, J. Carpio, J. J. Merelo, et al., “Evolving multilayer perceptrons,” Neur. Proces. Letters, 12, 115–127 (2000).

  37. O. Buchtala, M. Klimek, and B. Sick, “Evolutionary optimization of radial basis function classifiers for data mining applications,” IEEE Trans. on Systems, Man, and Cybernetics, Part B, 928–947 (2005).

  38. E. P. Maillard and D. Gueriot, “RBF neural network, basis functions and genetic algorithm,” in: Proc. Int. Conf. on Neural Networks, 4, Houston (Tex.) (1997), pp. 2187–2192.

  39. S. Chen, Y. Wu, and B. L. Luk, “Combined genetic algorithm optimization and regularized orthogonal least squares learning for radial basis function networks,” IEEE Trans. on Neural Networks, 10, No. 5, 1239–1243 (1999).

  40. Y. Jin and J. Branke, “Evolutionary optimization in uncertain environments: A survey,” IEEE Trans. Evol. Comput., 9, No. 3, 303–317 (2005).

  41. P. Huber, Robust Statistics [Russian translation], Mir, Moscow (1984).

  42. Ya. Z. Tsypkin, Foundations of the Information Theory of Identification [in Russian], Nauka, Moscow (1984).

  43. Ya. Z. Tsypkin and B. T. Polyak, “A crude maximum likelihood method,” in: Systems Dynamics, No. 12, Izd. Gork. Univ-ta, Gorky (1977), pp. 22–46.

  44. C. C. Lee, P. C. Chung, J. R. Tsai, and C. I. Chang, “Robust radial basis function networks,” IEEE Trans. on Syst., Man, and Cybernetics, 29, No. 6, 674–684 (1999).

  45. O. G. Rudenko and O. O. Bezsonov, “Robust training of radial basis function networks,” Cybernetics and Systems Analysis, 47, No. 6, 863–870 (2011).

  46. O. G. Rudenko and O. O. Bezsonov, “Robust training of wavelet neuronets,” Journal of Automation and Information Sciences, No. 5, 66–79 (2010).

  47. O. G. Rudenko and O. O. Bezsonov, “M-training of radial basis function networks using asymmetric influence functions,” Journal of Automation and Information Sciences, No. 1, 79–93 (2012).

  48. O. Rudenko and O. Bezsonov, “Function approximation using robust radial basis function networks,” J. Intel. Learn. Syst. and Appl., No. 3, 17–25 (2011).

  49. O. O. Bezsonov and O. G. Rudenko, “Identification of nonlinear nonstationary objects with the help of an evolutionary multilayer perceptron,” Vestn. KhNTU, No. 1 (44), 130–135 (2012).

  50. O. O. Bezsonov, “Training of radial basis function networks with the help of genetic algorithms with an adaptive mutation,” Information Processing Systems, No. 3 (101), 177–180 (2012).

  51. J.-G. Hsieh, Y.-L. Lin, and J.-H. Jeng, “Preliminary study on Wilcoxon learning machines,” IEEE Trans. on Neural Networks, 19, No. 2, 201–211 (2008).


Author information

Correspondence to O. O. Bezsonov.

Additional information

Translated from Kibernetika i Sistemnyi Analiz, No. 1, pp. 21–36, January–February 2014.


About this article

Cite this article

Rudenko, O.G., Bezsonov, O.O. Robust Neuroevolutionary Identification of Nonlinear Nonstationary Objects. Cybern Syst Anal 50, 17–30 (2014). https://doi.org/10.1007/s10559-014-9589-5


Keywords

  • identification
  • robustness
  • nonlinear nonstationary object
  • artificial neural network
  • evolutionary computation
  • genetic algorithm