
A New Methodology to Exploit Predictive Power in (Open, High, Low, Close) Data

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10614)

Abstract

Prediction of financial markets using neural networks and other techniques has predominantly focused on the close price. Here, in contrast, the concept of a mid-price based on an Open, High, Low, Close (OHLC) data structure is proposed as a prediction target and shown to be a significantly easier target to forecast, suggesting that previous works have attempted to extract predictive power from OHLC data in the wrong context. A prediction framework incorporating a factor discovery and mining process is developed using Randomised Decision Trees, with Long Short-Term Memory Recurrent Neural Networks subsequently demonstrating predictive accuracy of up to 50.73% better than random (75.42% accuracy) on hourly data from the FGBL German Bund futures contract, and 42.5% better than random (72.04% accuracy) on a comparison Bitcoin dataset.
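
The abstract describes a two-stage pipeline: a factor discovery and mining step using Randomised Decision Trees, followed by classification with an LSTM recurrent network, with a mid-price derived from OHLC bars as the prediction target. The Python sketch below illustrates one plausible arrangement of that pipeline; the mid-price definition ((High + Low) / 2), the column names, the use of scikit-learn's ExtraTreesClassifier and Keras, and all hyper-parameters are illustrative assumptions rather than the paper's exact specification.

```python
# Hypothetical sketch of the pipeline outlined in the abstract:
# (1) derive a binary mid-price target from OHLC bars,
# (2) rank candidate factors with randomised decision trees,
# (3) feed the top-ranked factors to a small LSTM classifier.
import numpy as np
import pandas as pd
from sklearn.ensemble import ExtraTreesClassifier
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, LSTM, Dense


def make_target(ohlc: pd.DataFrame) -> pd.Series:
    """Binary label: 1 if the next bar's mid-price is higher than the current one."""
    mid = (ohlc["High"] + ohlc["Low"]) / 2.0   # assumed mid-price definition
    label = (mid.shift(-1) > mid).astype(int)
    return label.iloc[:-1]                     # last bar has no look-ahead label


def rank_factors(factors: pd.DataFrame, target: pd.Series, top_k: int = 10) -> list:
    """Score candidate factors with randomised trees and keep the top_k names."""
    X = factors.loc[target.index]              # align factors with labelled bars
    trees = ExtraTreesClassifier(n_estimators=500, random_state=0)
    trees.fit(X, target)
    order = np.argsort(trees.feature_importances_)[::-1]
    return [X.columns[i] for i in order[:top_k]]


def build_lstm(n_timesteps: int, n_features: int) -> Sequential:
    """Small LSTM binary classifier over windows of the selected factors."""
    model = Sequential([
        Input(shape=(n_timesteps, n_features)),
        LSTM(32),
        Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model
```

In practice the selected factors would be windowed into (samples, timesteps, features) arrays, split chronologically into training and test sets, and scored against a random baseline (e.g. Cohen's kappa), consistent with the "better than random" figures quoted in the abstract.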

Keywords

Machine learning · LSTMs · Decision Trees · Factor Mining · OHLC data · Financial forecasting · Mid-price

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Department of Computer Science, University College London, London, UK