
A stateless deep learning framework to predict net asset value

  • Original Article
  • Published in: Neural Computing and Applications

Abstract

Recurrent neural networks (RNNs) such as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) have recently emerged as state-of-the-art neural network architectures for processing sequential data efficiently. They can therefore be applied to time series prediction, since a time series is itself a sequence of discrete-time observations. However, the variety of existing RNN architectures and operating modes, the growing number of hyperparameters, and other practical bottlenecks make these networks difficult to train, so identifying the most suitable model for a given task becomes challenging. To address these issues, we propose a step-wise approach to predicting time series data, in particular net asset value. We first study the memory size of the RNN and select the optimal memory size based on prediction accuracy. We then analyse existing data preparation methods and propose a new one. The proposed data preparation method proves effective in both stateless and stateful modes with a single RNN layer. Finally, we compare the single RNN layer against stacked and bidirectional RNNs to identify the best-performing models based on their prediction accuracy over various time horizons.
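To make the step-wise workflow concrete, the following is a minimal sketch, assuming a Keras/TensorFlow setup, of the ingredients described in the abstract: a sliding-window data preparation step and a single stateless LSTM layer trained to predict the next net asset value (NAV). The window length memory_size, the 32-unit layer width, and the synthetic NAV series are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
from tensorflow import keras

def make_windows(series, memory_size):
    """Turn a 1-D NAV series into (samples, memory_size, 1) inputs and next-step targets."""
    x, y = [], []
    for t in range(len(series) - memory_size):
        x.append(series[t:t + memory_size])
        y.append(series[t + memory_size])
    x = np.asarray(x, dtype="float32")[..., np.newaxis]  # add the feature axis
    y = np.asarray(y, dtype="float32")
    return x, y

# Toy NAV-like series; in practice this would be loaded from a fund's price history.
nav = np.cumsum(np.random.default_rng(0).normal(0.0, 0.01, size=1000)) + 10.0

memory_size = 20                      # the "memory size" (look-back window) tuned in the study
x, y = make_windows(nav, memory_size)

# Single stateless LSTM layer: the hidden state is reset after every batch,
# so each window is treated as an independent sample.
model = keras.Sequential([
    keras.layers.Input(shape=(memory_size, 1)),
    keras.layers.LSTM(32, stateful=False),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=5, batch_size=32, validation_split=0.2, verbose=0)
```

The stacked and bidirectional variants compared in the paper would, in this sketch, replace the single LSTM layer with, for example, two LSTM layers (the first using return_sequences=True) or with keras.layers.Bidirectional wrapping the LSTM.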




Author information


Corresponding author

Correspondence to Minakhi Rout.

Ethics declarations

Conflict of interest

The authors declare that there is no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Koudjonou, K.M., Rout, M. A stateless deep learning framework to predict net asset value. Neural Comput & Applic 32, 1–19 (2020). https://doi.org/10.1007/s00521-019-04525-x


  • Received:

  • Accepted:

  • Published:

  • Issue Date:

  • DOI: https://doi.org/10.1007/s00521-019-04525-x
