Abstract
Recently, dropout has seen increasing use in deep learning. For deep convolutional neural networks, dropout is known to work well in fully-connected layers, but its effect in pooling layers is still unclear. This paper demonstrates that max-pooling dropout at training time is equivalent to randomly picking an activation from each pooling region according to a multinomial distribution. In light of this insight, we advocate employing our proposed probabilistic weighted pooling, instead of the commonly used max-pooling, to act as model averaging at test time. Empirical evidence validates the superiority of probabilistic weighted pooling. We also compare max-pooling dropout with stochastic pooling, both of which introduce stochasticity based on multinomial distributions at the pooling stage.
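The equivalence described in the abstract can be sketched in a few lines. With retain probability p (drop probability q = 1 - p) and a pooling region whose activations are sorted ascending as a_1 <= ... <= a_n, training-time max-pooling dropout outputs a_i exactly when a_i survives and all larger activations are dropped, i.e. with probability p * q^(n-i) (and outputs 0 with probability q^n). Probabilistic weighted pooling uses these same probabilities as weights at test time. The function names below are illustrative, not from the paper:

```python
import numpy as np

def max_pooling_dropout_train(region, retain_prob, rng):
    """Training-time max-pooling dropout: drop each unit of the
    pooling region independently, then max-pool over the survivors
    (output 0 if every unit in the region is dropped)."""
    region = np.asarray(region, dtype=float).ravel()
    mask = rng.random(region.shape) < retain_prob
    kept = region[mask]
    return kept.max() if kept.size > 0 else 0.0

def probabilistic_weighted_pool(region, retain_prob):
    """Test-time probabilistic weighted pooling: weight the sorted
    activations a_1 <= ... <= a_n by the multinomial probabilities
    p_i = p * q^(n-i) with which training-time max-pooling dropout
    would select each of them (q = 1 - p)."""
    a = np.sort(np.asarray(region, dtype=float).ravel())  # ascending
    n = a.size
    q = 1.0 - retain_prob
    # p_i for i = 1..n; the remaining q**n probability mass selects 0,
    # so the weights plus q**n sum to 1
    probs = retain_prob * q ** (n - 1 - np.arange(n))
    return float(probs @ a)
```

Averaging many stochastic training-time outputs converges to the probabilistic weighted pooling value, which is why the latter acts as model averaging; with retain_prob = 1 it reduces to plain max-pooling.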
Acknowledgements
This work was supported in part by the National Natural Science Foundation of China under Grant 61371148.
© 2015 Springer International Publishing Switzerland
Cite this paper
Wu, H., Gu, X. (2015). Max-Pooling Dropout for Regularization of Convolutional Neural Networks. In: Arik, S., Huang, T., Lai, W., Liu, Q. (eds) Neural Information Processing. ICONIP 2015. Lecture Notes in Computer Science, vol 9489. Springer, Cham. https://doi.org/10.1007/978-3-319-26532-2_6
Print ISBN: 978-3-319-26531-5
Online ISBN: 978-3-319-26532-2