Dropout for Recurrent Neural Networks

  • Nathan Watt
  • Mathys C. du Plessis
Conference paper
Part of the Proceedings of the International Neural Networks Society book series (INNS, volume 1)

Abstract

Neural networks are computational structures which can be trained to perform tasks based on training examples or patterns. Recurrent neural networks are a type of network designed to process time-series data. Dropout is a neural network regularization technique. The literature advises against applying Dropout directly to recurrent neural networks, as its effect is too dramatic when the dropped units lie on the recurrent connections; this direct approach is described as naive. Instead, two specialised Dropout algorithms for recurrent neural networks have been proposed by different authors. However, these specialised Dropout algorithms have not been tested against one another and against the naive algorithm under identical experimental conditions. This paper compares all of these algorithms and finds that the naive approach performed as well as or better than the specialised Dropout algorithms.
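
To make the naive approach concrete, the sketch below applies standard Dropout to the hidden state of a vanilla tanh recurrent network, sampling a fresh mask at every time step. This is a minimal illustrative NumPy sketch under our own assumptions (layer shapes, variable names, inverted-dropout scaling); it is not code from the paper.

    import numpy as np

    def naive_rnn_dropout_forward(x_seq, W_xh, W_hh, b_h, p=0.5, rng=None):
        """Vanilla tanh RNN forward pass with "naive" Dropout: a fresh
        Bernoulli mask is applied to the hidden state at every time step."""
        rng = rng if rng is not None else np.random.default_rng()
        h = np.zeros(W_hh.shape[0])
        outputs = []
        for x_t in x_seq:
            # Sample a new mask at each step; inverted-dropout scaling means
            # no rescaling is needed at test time.
            mask = (rng.random(h.shape) >= p) / (1.0 - p)
            h = np.tanh(W_xh @ x_t + W_hh @ (h * mask) + b_h)
            outputs.append(h)
        return np.stack(outputs)

The specialised alternatives discussed in the literature mainly change where and how often this mask is applied: for example, sampling one mask per sequence and reusing it at every step (Gal and Ghahramani's variational approach), or dropping only the non-recurrent input and output connections (Zaremba et al.).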

Keywords

Deep learning · Recurrent Neural Networks · Dropout


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Nelson Mandela University, Port Elizabeth, South Africa
