
Forecasting foreign exchange rates with an improved back-propagation learning algorithm with adaptive smoothing momentum terms

  • Research Article
  • Published in: Frontiers of Computer Science in China

Abstract

The slow convergence of the back-propagation neural network (BPNN) has become a challenge in data-mining and knowledge-discovery applications, owing to drawbacks of the gradient-descent (GD) optimization method widely adopted in BPNN learning. To address this problem, standard optimization techniques such as the conjugate-gradient and Newton methods have been proposed to improve the convergence rate of the BP learning algorithm. This paper presents a heuristic method that adds an adaptive smoothing momentum term to the original BP learning algorithm to speed up convergence. In the improved algorithm, an adaptive smoothing technique automatically adjusts the momentum terms of the weight-update formula according to the "3σ limits theory." With these adaptive smoothing momentum terms, the improved BP learning algorithm trains and converges faster, and generalizes better, than the standard BP learning algorithm. To verify the effectiveness of the proposed algorithm, three typical foreign exchange rates, the British pound (GBP), the euro (EUR), and the Japanese yen (JPY), are chosen as forecasting targets for illustration. Experimental results from homogeneous algorithm comparisons reveal that the proposed BP learning algorithm outperforms comparable BP algorithms in both forecasting performance and convergence rate. Empirical results from heterogeneous model comparisons further confirm its effectiveness.
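To make the idea concrete, the sketch below combines the standard momentum-smoothed weight update, Δw(t) = μ(t)·Δw(t−1) − η·∇E(w(t)), with a Shewhart-style 3σ control rule on the recent training-error sequence: an error that falls outside the mean ± 3σ band is treated as "out of control" and the momentum is damped, while an in-control error lets the momentum grow to accelerate convergence. This is a minimal illustrative sketch based only on the abstract, not the authors' exact formulation; the network architecture, window size, bounds, and adaptation factors are all assumptions.

```python
# Minimal sketch: back-propagation with a momentum coefficient adapted via
# 3-sigma control limits on recent training errors. Illustrative only; the
# adaptation rule and all hyperparameters are assumptions, not the paper's.
import numpy as np

rng = np.random.default_rng(0)

def train(X, y, hidden=8, lr=0.1, mu=0.5, epochs=500, window=20):
    """X: (n, d) inputs; y: (n, 1) targets. Returns weights and error history."""
    n, d = X.shape
    W1 = rng.normal(scale=0.5, size=(d, hidden))
    W2 = rng.normal(scale=0.5, size=(hidden, 1))
    dW1 = np.zeros_like(W1)
    dW2 = np.zeros_like(W2)
    errors = []
    for _ in range(epochs):
        # forward pass: sigmoid hidden layer, linear output
        H = 1.0 / (1.0 + np.exp(-X @ W1))
        e = H @ W2 - y
        mse = float(np.mean(e ** 2))
        # back-propagate gradients of the mean-squared error
        g2 = H.T @ e / n
        g1 = X.T @ ((e @ W2.T) * H * (1 - H)) / n
        # adapt momentum from 3-sigma limits on the recent error window
        # (assumed rule): out-of-control errors damp momentum, in-control
        # errors let it grow within fixed bounds.
        if len(errors) >= window:
            recent = np.array(errors[-window:])
            m, s = recent.mean(), recent.std()
            if abs(mse - m) > 3 * s:
                mu = max(0.1, mu * 0.5)   # unstable: smooth harder
            else:
                mu = min(0.9, mu * 1.05)  # stable: speed up
        errors.append(mse)
        # momentum-smoothed weight updates
        dW2 = mu * dW2 - lr * g2
        dW1 = mu * dW1 - lr * g1
        W2 += dW2
        W1 += dW1
    return W1, W2, errors

# toy usage on synthetic data (illustrative only)
X = rng.normal(size=(200, 4))
y = np.tanh(X @ rng.normal(size=(4, 1)))
W1, W2, history = train(X, y)
```

The design choice mirrors statistical process control: the error sequence plays the role of a monitored process, and the 3σ band distinguishes ordinary fluctuation from a genuine shift that warrants a smaller, more cautious momentum.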



Author information

Correspondence to Lean Yu.


About this article

Cite this article

Yu, L., Wang, S. & Lai, K.K. Forecasting foreign exchange rates with an improved back-propagation learning algorithm with adaptive smoothing momentum terms. Front. Comput. Sci. China 3, 167–176 (2009). https://doi.org/10.1007/s11704-009-0020-8

