Time Series Forecasting Using Neural Networks: Are Recurrent Connections Necessary?

  • Salihu A. Abdulkarim
  • Andries P. Engelbrecht


Artificial neural networks (NNs) are widely used to model and forecast time series. Since most practical time series are non-stationary, NN forecasters are often implemented with recurrent or delayed connections to handle the temporal component of the time-varying sequence. These recurrent/delayed connections increase the number of weights that must be optimized during training of the NN. Particle swarm optimization (PSO) is an established method for training NNs and has been shown in several studies to outperform the classical backpropagation training algorithm. The original PSO was, however, designed for static environments; for non-stationary data, modified PSO variants designed for optimization in dynamic environments are used. These dynamic PSOs have successfully trained NNs on classification problems in non-stationary environments. This paper formulates the training of a NN forecaster as a dynamic optimization problem in order to investigate whether recurrent/delayed connections are necessary in a NN time series forecaster when a dynamic PSO is used as the training algorithm. Experiments were carried out on eight forecasting problems. For each problem, a feedforward NN (FNN) was trained with a dynamic PSO algorithm, and its performance was compared to that of four types of recurrent NNs (RNNs), each trained using gradient descent, a standard PSO for static environments, and the dynamic PSO algorithm. The RNNs employed were an Elman NN, a Jordan NN, a multirecurrent NN, and a time delay NN. The forecasting models were evaluated under three different dynamic environmental scenarios. The results show that the FNNs trained with the dynamic PSO significantly outperformed all RNNs trained with any of the other algorithms considered.
These findings indicate that, for the time series considered in this study, recurrent/delayed connections are not necessary in NNs used for time series forecasting, provided a dynamic PSO algorithm is used as the training method.
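To make the experimental setup concrete, the following is a minimal sketch (not the authors' implementation) of the core idea: a time series is converted into sliding-window input/target pairs, and the flat weight vector of a one-hidden-layer FNN forecaster is optimized by a basic global-best PSO. For simplicity this sketch uses the standard static PSO with Clerc–Kennedy constriction-style coefficients; the paper's dynamic PSO variants add mechanisms (e.g. quantum/charged particles) for tracking a changing error surface. All function names and parameter values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_windows(series, window):
    """Sliding-window inputs X and one-step-ahead targets y."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X, y

def fnn_forecast(weights, X, window, hidden):
    """One-hidden-layer FNN with tanh hidden units; `weights` is flat."""
    n1 = window * hidden
    W1 = weights[:n1].reshape(window, hidden)
    b1 = weights[n1:n1 + hidden]
    W2 = weights[n1 + hidden:n1 + 2 * hidden]
    b2 = weights[-1]
    h = np.tanh(X @ W1 + b1)
    return h @ W2 + b2

def mse(weights, X, y, window, hidden):
    return np.mean((fnn_forecast(weights, X, window, hidden) - y) ** 2)

def pso_train(X, y, window, hidden, particles=30, iters=200,
              w=0.7298, c1=1.4962, c2=1.4962):
    """Global-best PSO over the FNN's flat weight vector."""
    dim = window * hidden + hidden + hidden + 1
    pos = rng.uniform(-1, 1, (particles, dim))
    vel = np.zeros((particles, dim))
    pbest = pos.copy()
    pbest_f = np.array([mse(p, X, y, window, hidden) for p in pos])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1 = rng.random((particles, dim))
        r2 = rng.random((particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        f = np.array([mse(p, X, y, window, hidden) for p in pos])
        improved = f < pbest_f
        pbest[improved] = pos[improved]
        pbest_f[improved] = f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

# Toy series: a noiseless sine wave, forecast from the 4 previous values.
series = np.sin(np.linspace(0, 8 * np.pi, 200))
X, y = make_windows(series, window=4)
weights, err = pso_train(X, y, window=4, hidden=5)
```

In a dynamic-environment formulation, the training set itself shifts over time (e.g. a sliding training window over incoming observations), so the fitness function `mse` changes between PSO iterations and the swarm must re-diversify rather than fully converge.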


Keywords: Time series forecasting · Neural networks · Recurrent neural networks · Resilient propagation · Particle swarm optimization · Cooperative quantum particle swarm optimization




Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. Computer Science Department, University of Pretoria, Pretoria, South Africa
  2. Department of Industrial Engineering and Department of Computer Science, Stellenbosch University, Stellenbosch, South Africa
