
Stacking Generalization via Machine Learning for Trend Detection in Financial Time Series


Part of the Studies in Computational Intelligence book series (SCI, volume 990)

Abstract

The task of understanding and modeling the dynamics of financial data has a significant practical value. In particular, it can help intercept trend inversion signals, providing an accurate future forecast that is important for asset allocation, investment planning, portfolio risk hedging and so on. Yet, the irregular fluctuations, chaotic dynamics and constantly changing patterns of financial data make time series modeling a challenging task in this domain. In this paper, we propose a classifier ensemble operator based on stacking generalization, which is applied to a pool of individual signals generated by a Poisson process-based model. The forecasting ability of the methodology is tested on a set of price time series. The results of the ensemble model application demonstrate the increased accuracy of prediction and a mitigated sensitivity of the model to parameters, outperforming the output of individual model components.
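As a concrete illustration of the ensemble operator described above, a stacking-generalization classifier can be sketched with scikit-learn. This is a minimal sketch only: the synthetic features and the particular base learners below are placeholders, not the paper's Poisson process-based signal generators, and the meta-learner choice is an assumption.

```python
# Sketch of stacking generalization for a binary trend-detection signal.
# The data and base learners are illustrative stand-ins, not the paper's model.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for features derived from individual trend signals.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Level-0 learners produce out-of-fold predictions via cv=5; the level-1
# meta-learner (logistic regression) combines them -- the "stacking" step.
stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("mlp", MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)),
    ],
    final_estimator=LogisticRegression(),
    cv=5,
)
stack.fit(X_tr, y_tr)
print(round(stack.score(X_te, y_te), 3))
```

In practice the level-0 pool would be the individual signals generated by the Poisson process-based model, and the meta-learner would arbitrate among them rather than among generic classifiers.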

Keywords

  • Poisson process
  • Classifier ensemble
  • Stacking generalization
  • Neural networks
  • Trend detection

Fig. 1.
Fig. 2.

Notes

  1. Temporal indexing t refers to an arbitrary observation frequency.

  2. The L function is the time window on which the parameters will be estimated. This function could also depend on \(\mathfrak {R}\), or on the relevant state of the market [21].

  3. I.e., the increments over a time frame of L steps.

  4. Homogeneity is avoided to pursue a general path.

  5. Refers to one of the three processes built in Sect. 2.

  6. For example, if this value is greater than its historical average plus c times its historical standard deviation.
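The threshold rule in this last note admits a direct sketch. The function name and the constant c below are illustrative choices, not taken from the paper:

```python
# Hypothetical version of the note's rule: flag an event when the current
# value exceeds the historical mean plus c historical standard deviations.
import numpy as np

def signal_on(history, current, c=2.0):
    """Return True if `current` > mean(history) + c * std(history)."""
    mu, sigma = history.mean(), history.std()
    return current > mu + c * sigma

rng = np.random.default_rng(0)
history = rng.normal(loc=0.0, scale=1.0, size=500)  # ~N(0, 1) history

print(bool(signal_on(history, current=0.5)))  # inside the band -> False
print(bool(signal_on(history, current=5.0)))  # far outside the band -> True
```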

References

  1. Sun, Y.: Exchange rate forecasting with an artificial neural network model: can we beat a random walk model? Master of Commerce and Management (MCM) thesis. Lincoln University, New Zealand (2005)

  2. Tyree, E.W., Long, J.A.: Forecasting currency exchange rates: neural networks and the random walk model. In: Proceedings of the Third International Conference on Artificial Intelligence Applications, Wall Street, New York (1995)

  3. Timmermann, A., Granger, C.W.J.: Efficient market hypothesis and forecasting. Int. J. Forecasting 20, 15–27 (2004)

  4. Krollner, B., Vanstone, B., Finnie, G.: Financial time series forecasting with machine learning techniques: a survey. In: Paper presented at the European Symposium on Artificial Neural Networks: Computational and Machine Learning, Bruges, Belgium (2010)

  5. Bellgard, C., Goldschmidt, P.: Forecasting across frequencies: linearity and non-linearity. In: Proceedings of the Conference on Advanced Investment Technology, Gold Coast, Australia, pp. 41–48 (1999)

  6. Kourentzes, N., Barrow, D.K., Crone, S.F.: Neural network ensemble operators for time series forecasting. Expert Syst. Appl. 41, 4235–4244 (2014)

  7. Chan, M.-C., Wong, C.-C., Lam, C.-C.: Financial time series forecasting by neural network using conjugate gradient learning algorithm and multiple linear regression weight initialization. In: Computing in Economics and Finance, vol. 61 (2000)

  8. Krogh, A., Vedelsby, J.: Neural network ensembles, cross validation, and active learning. Adv. Neural Inf. Process. Syst. 7, 231–238 (1995)

  9. Khashei, M., Bijari, M.: An artificial neural network (p, d, q) model for time series forecasting. Expert Syst. Appl. 37, 479–489 (2010)

  10. Dietterich, T.G.: Ensemble methods in machine learning. In: Multiple Classifier Systems, pp. 1–15 (2000)

  11. Kuncheva, L.I.: Combining Pattern Classifiers: Methods and Algorithms. Wiley-Interscience, New York (2004)

  12. Chatfield, C.: The Analysis of Time Series: An Introduction, 6th edn. CRC Press, Cambridge (2013)

  13. Brown, G., Kuncheva, L.I.: "Good" and "bad" diversity in majority vote ensembles. In: Proceedings of Multiple Classifier Systems, pp. 1–15 (2010)

  14. Hansen, L.K., Salamon, P.: Neural network ensembles. IEEE Trans. Pattern Anal. Mach. Intell. 12(10), 993–1001 (1990)

  15. Breiman, L.: Bagging predictors. Mach. Learn. 24(2), 123–140 (1996)

  16. Freund, Y., Schapire, R.E.: Experiments with a new boosting algorithm. In: Proceedings of the 13th International Conference on Machine Learning, pp. 148–156 (1996)

  17. Opitz, D., Maclin, R.: Popular ensemble methods: an empirical study. J. Artif. Intell. Res. 11, 169–198 (1999)

  18. Zhou, Z.-H., Wu, J., Tang, W.: Ensembling neural networks: many could be better than all. Artif. Intell. 137(1–2), 239–263 (2002)

  19. Aiolfi, M., Timmermann, A.: Persistence in forecasting performance and conditional combination strategies. J. Econometrics 135(1), 31–53 (2006)

  20. Gama, J., Kosina, P.: Tracking recurring concepts with meta-learners. In: Portuguese Conference on Artificial Intelligence, pp. 423–434 (2009)

  21. Ilalan, D.: A Poisson process with random intensity for modeling financial stability. Spanish Rev. Financ. Econ. 14, 43–50 (2016)

  22. Norris, J.: Markov Chains (Cambridge Series in Statistical and Probabilistic Mathematics). Cambridge University Press, Cambridge (1997)

  23. Graham, E., Granger, C., Timmermann, A.: Handbook of Economic Forecasting, vol. 1 (2006)

  24. Zhang, P.G., Berardi, L.V.: Time series forecasting with neural network ensembles: an application for exchange rate prediction. J. Oper. Res. Soc. 52(6), 652–664 (2001)

  25. Stein, R.M.: Benchmarking default prediction models: pitfalls and remedies in model validation. J. Risk Model Validation 1, 77–113 (2007)


Author information


Corresponding author

Correspondence to Vittorio Carlei.


Copyright information

© 2021 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Carlei, V., Adamo, G., Ustenko, O., Barybina, V. (2021). Stacking Generalization via Machine Learning for Trend Detection in Financial Time Series. In: Bucciarelli, E., Chen, SH., Corchado, J.M., Parra D., J. (eds) Decision Economics: Minds, Machines, and their Society. DECON 2020. Studies in Computational Intelligence, vol 990. Springer, Cham. https://doi.org/10.1007/978-3-030-75583-6_16
