Abstract
Recurrent neural networks (RNNs) such as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) have recently emerged as state-of-the-art architectures for processing sequential data efficiently. They can therefore be used to model time series prediction, since a time series is likewise a sequence of discrete-time values. However, the variety of existing RNN architectures, their operating modes, the growing number of hyperparameters, and several other bottlenecks make these networks difficult to train, and it becomes hard to identify the model best suited to a given task. To address these issues, we propose a step-wise approach to time series prediction, applied in this paper to net asset value (NAV). We first study the memory size of the RNN and set the optimal size based on prediction accuracy. We then analyze existing data preparation methods and propose a new one; the proposed method proves effective in both stateless and stateful modes with a single RNN layer. Finally, we compare the single RNN layer against stacked and bidirectional RNNs to identify the best-performing models based on their prediction accuracy over various time horizons.
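The data preparation step described above can be illustrated with a minimal sliding-window sketch: a 1-D series is sliced into fixed-length input windows (the RNN "memory size") paired with a future target value. This is a generic illustration, not the authors' exact preparation method; the helper name `make_windows` and its defaults are assumptions.

```python
import numpy as np

def make_windows(series, window, horizon=1):
    """Slice a 1-D series into (input window, target) pairs for RNN training.

    Hypothetical helper sketching sliding-window data preparation:
    each sample is `window` consecutive values, and the target is the
    value `horizon` steps after the window ends.
    """
    X, y = [], []
    for i in range(len(series) - window - horizon + 1):
        X.append(series[i:i + window])
        y.append(series[i + window + horizon - 1])
    # RNN layers typically expect inputs shaped (samples, timesteps, features)
    return np.asarray(X)[..., np.newaxis], np.asarray(y)

# Example: a synthetic NAV-like series split into windows of 5 time steps
nav = np.arange(10, dtype=float)
X, y = make_windows(nav, window=5)
print(X.shape, y.shape)  # (5, 5, 1) (5,)
```

Varying `window` here corresponds to the memory-size study in the paper: each candidate window length yields a different training set, and the optimal length is the one giving the best prediction accuracy.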
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
About this article
Cite this article
Koudjonou, K.M., Rout, M. A stateless deep learning framework to predict net asset value. Neural Comput & Applic 32, 1–19 (2020). https://doi.org/10.1007/s00521-019-04525-x