Memetic Cooperative Neuro-Evolution for Chaotic Time Series Prediction

  • Gary Wong
  • Rohitash Chandra
  • Anuraganand Sharma
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9949)


Cooperative neuro-evolution has shown promise for chaotic time series problems, as it provides global search through evolutionary algorithms. Back-propagation employs gradient descent as a local search method that can produce competitive results. A synergy between the two methods is needed in order to exploit their complementary strengths and achieve better performance. Memetic algorithms incorporate local search methods to improve the balance between diversification and intensification. We present a memetic cooperative neuro-evolution method that incorporates gradient descent for chaotic time series prediction. The results show that the proposed method achieves higher prediction accuracy at lower computational cost when compared to related methods. In comparison to related methods from the literature, the proposed method gives favourable results on highly noisy and chaotic time series problems.


Keywords: Memetic algorithms · Cooperative neuro-evolution · Backpropagation · Gradient descent · Feedforward networks
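The general idea described in the abstract can be illustrated with a minimal sketch: a small feedforward network whose weight vector is decomposed into subpopulations that are coevolved, with a few epochs of gradient descent (back-propagation) applied periodically as the memetic local-search step. The toy task, the per-layer decomposition, and the resampling scheme below are illustrative assumptions for exposition, not the authors' exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: one-step-ahead prediction of a sine wave with a
# one-hidden-layer feedforward network (illustrative only).
t = np.arange(0, 20, 0.1)
series = np.sin(t)
X = series[:-1].reshape(-1, 1)
y = series[1:].reshape(-1, 1)

H = 4                               # hidden units; 3*H + 1 weights in total

def unpack(w):
    W1 = w[:H].reshape(1, H); b1 = w[H:2*H]
    W2 = w[2*H:3*H].reshape(H, 1); b2 = w[3*H:]
    return W1, b1, W2, b2

def forward(w, X):
    W1, b1, W2, b2 = unpack(w)
    return np.tanh(X @ W1 + b1) @ W2 + b2

def mse(w):
    return float(np.mean((forward(w, X) - y) ** 2))

def grad_step(w, lr=0.01, steps=20):
    """Memetic local search: a few epochs of plain gradient descent."""
    w = w.copy()
    for _ in range(steps):
        W1, b1, W2, b2 = unpack(w)
        h = np.tanh(X @ W1 + b1)
        e = 2 * (h @ W2 + b2 - y) / len(X)      # dLoss/dOutput
        gW2, gb2 = h.T @ e, e.sum(0)
        dh = (e @ W2.T) * (1 - h ** 2)          # backprop through tanh
        gW1, gb1 = X.T @ dh, dh.sum(0)
        w -= lr * np.concatenate([gW1.ravel(), gb1, gW2.ravel(), gb2])
    return w

# Cooperative coevolution: one subpopulation per layer. Each individual
# is evaluated by splicing it into the current best composite vector.
pop_size, sigma = 10, 0.1
bounds = [(0, 2*H), (2*H, 3*H + 1)]             # layer-wise decomposition
best = rng.normal(0, 0.5, 3*H + 1)
subpops = [best[a:b] + rng.normal(0, sigma, (pop_size, b - a))
           for a, b in bounds]
init_err = mse(best)

for gen in range(50):
    for s, (a, b) in enumerate(bounds):
        for i in range(pop_size):
            trial = best.copy()
            trial[a:b] = subpops[s][i]
            if mse(trial) < mse(best):           # elitist acceptance
                best = trial
        # resample the subpopulation around the best subcomponent
        subpops[s] = best[a:b] + rng.normal(0, sigma, (pop_size, b - a))
    best = grad_step(best)                       # memetic refinement

print("train MSE:", mse(best))
```

The memetic step (`grad_step`) intensifies the search around the composite solution found by the coevolutionary global search, mirroring the diversification/intensification balance the paper discusses.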



Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  • Gary Wong (1)
  • Rohitash Chandra (2)
  • Anuraganand Sharma (1)
  1. School of Computing, Information and Mathematical Sciences, University of the South Pacific, Suva, Fiji
  2. Artificial Intelligence and Cybernetics Research Group, Software Foundation, Nausori, Fiji