Deep Candlestick Mining

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10635)

Abstract

A data mining process, which we name Deep Candlestick Mining (DCM), is developed using randomised decision trees, Long Short-Term Memory (LSTM) recurrent neural networks and k-means++, and is shown to discover candlestick patterns that significantly outperform traditional ones. To compare the predictive ability of novel and traditional candlestick patterns, a test is devised using all significant candlestick patterns within each category. The deep-mined candlestick system demonstrates a remarkable ability to outperform the traditional system, by 75.2% and 92.6% on German Bund 10-year futures and hourly EURUSD data, respectively.

Keywords

Machine learning · LSTMs · RNNs · Decision trees · Clustering · Factor mining · OHLC data · Candlestick patterns


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Department of Computer Science, University College London, London, UK
