Abstract
The volatility inherent in cryptocurrency prices makes it difficult to earn a profit through day trading; the usual fallback is to buy a cryptocurrency and hold it, waiting for the price to rise over a long period. This project aims to automate short-term trading using Reinforcement Learning (RL), principally the Deep Deterministic Policy Gradient (DDPG) algorithm. The system integrates with the BitMEX cryptocurrency exchange and uses Technical Indicators (TIs) to construct a rich feature set. Training on different combinations of these features and in diverse environments produced mixed, though often interesting, results. The most notable model demonstrates that it is possible to learn a strategy that beats a buy-and-hold strategy with relative ease in terms of profit made.
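As a rough illustration of the approach outlined above, the sketch below wires TA-Lib technical indicators into a Gym-style trading environment and trains a DDPG agent on it. This is an assumption-laden reconstruction, not the paper's implementation: the `CryptoTradingEnv` class, the particular indicators (RSI, EMA ratio, MACD histogram), the per-step PnL reward, and the use of `stable_baselines3` for DDPG are illustrative choices, and depending on library versions `gymnasium` may need to replace `gym`.

```python
# Minimal sketch (not the authors' code) of the kind of setup the paper
# describes: a Gym-style trading environment whose observations are
# TA-Lib technical indicators, trained with a DDPG agent.
import gym
import numpy as np
import talib
from gym import spaces


class CryptoTradingEnv(gym.Env):
    """Action in [-1, 1] is the target position (short ... long) as a
    fraction of equity; reward is the per-step profit and loss."""

    def __init__(self, close_prices: np.ndarray):
        super().__init__()
        self.close = close_prices.astype(np.float64)
        # Technical-indicator features: RSI, price/EMA ratio, MACD histogram.
        rsi = talib.RSI(self.close, timeperiod=14)
        ema = talib.EMA(self.close, timeperiod=20)
        _, _, macd_hist = talib.MACD(self.close)
        self.features = np.column_stack([
            rsi / 100.0,
            self.close / ema - 1.0,
            macd_hist / self.close,
        ])
        self.start = 34  # skip indicator warm-up period (NaN values)
        self.action_space = spaces.Box(-1.0, 1.0, shape=(1,), dtype=np.float32)
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(3,), dtype=np.float32)

    def reset(self):
        self.t = self.start
        self.position = 0.0
        return self.features[self.t].astype(np.float32)

    def step(self, action):
        self.position = float(np.clip(action[0], -1.0, 1.0))
        price_return = self.close[self.t + 1] / self.close[self.t] - 1.0
        reward = self.position * price_return  # step PnL as the reward signal
        self.t += 1
        done = self.t >= len(self.close) - 1
        return self.features[self.t].astype(np.float32), reward, done, {}


if __name__ == "__main__":
    # Train DDPG on synthetic prices; real data would come from BitMEX.
    from stable_baselines3 import DDPG

    prices = 10000 * np.exp(np.cumsum(np.random.normal(0, 0.01, 5000)))
    env = CryptoTradingEnv(prices)
    model = DDPG("MlpPolicy", env, verbose=0)
    model.learn(total_timesteps=10_000)
```

The continuous position action is what makes DDPG a natural fit here; a discrete buy/hold/sell formulation would instead call for a value-based method such as DQN.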
Keywords
- Reinforcement Learning
- Cryptocurrency
- Trading
Cite this paper
Tummon, E., Raja, M.A., Ryan, C.: Trading Cryptocurrency with Deep Deterministic Policy Gradients. In: Analide, C., Novais, P., Camacho, D., Yin, H. (eds.) Intelligent Data Engineering and Automated Learning – IDEAL 2020. LNCS, vol. 12489. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-62362-3_22