
Stochastic Weight Matrix-Based Regularization Methods for Deep Neural Networks

Conference paper in Machine Learning, Optimization, and Data Science (LOD 2019)

Abstract

The aim of this paper is to introduce two widely applicable regularization methods based on the direct modification of weight matrices. The first method, Weight Reinitialization, utilizes a simplified Bayesian assumption, partially resetting a sparse subset of the parameters. The second one, Weight Shuffling, introduces an entropy- and weight-distribution-invariant non-white noise to the parameters; it can also be interpreted as an ensemble approach. The proposed methods are evaluated on benchmark datasets such as MNIST, CIFAR-10, and the JSB Chorales database, as well as on time series modeling tasks. We report gains in both the performance and the entropy of the analyzed networks. Our code is available as a GitHub repository (https://github.com/rpatrik96/lod-wmm-2019).
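The abstract only sketches the two methods, so the following is a minimal, illustrative PyTorch sketch of how such weight-matrix perturbations could be applied to a layer. It assumes a random sparse mask with a normal redraw for Weight Reinitialization and an in-place permutation of a random subset of entries for Weight Shuffling; the function names, the `frac` parameter, and the perturbation schedule are placeholders rather than the authors' implementation, which lives in the linked repository.

```python
# Hypothetical sketch (not the authors' implementation) of the two
# perturbations described in the abstract, written against PyTorch.
import torch


def weight_reinit(weight: torch.Tensor, frac: float = 0.01) -> None:
    """Weight Reinitialization: redraw a sparse, random subset of the entries
    from a simple zero-mean normal prior (an illustrative choice of prior)."""
    with torch.no_grad():
        mask = torch.rand_like(weight) < frac             # sparse subset to reset
        fresh = torch.randn_like(weight) * weight.std()   # redrawn values
        weight[mask] = fresh[mask]


def weight_shuffle(weight: torch.Tensor, frac: float = 0.01) -> None:
    """Weight Shuffling: permute a random subset of the entries in place.
    A permutation leaves the empirical weight distribution, and hence its
    entropy, unchanged, so the injected noise is distribution-invariant."""
    with torch.no_grad():
        flat = weight.view(-1)
        n = max(2, int(frac * flat.numel()))
        idx = torch.randperm(flat.numel())[:n]            # positions to shuffle
        flat[idx] = flat[idx[torch.randperm(n)]]          # in-place permutation


# Usage: perturb a layer's weight matrix occasionally during training.
layer = torch.nn.Linear(128, 64)
weight_reinit(layer.weight, frac=0.01)
weight_shuffle(layer.weight, frac=0.01)
```

In practice such perturbations would be applied periodically (e.g., every few epochs) to selected layers, with the perturbed fraction treated as a hyperparameter.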

The research presented in this paper has been supported by the European Union, co-financed by the European Social Fund (EFOP-3.6.2-16-2017-00013, Thematic Fundamental Research Collaborations Grounding Innovation in Informatics and Infocommunications), by the BME-Artificial Intelligence FIKP grant of the Ministry of Human Resources (BME FIKP-MI/SC), by the Doctoral Research Scholarship of the Ministry of Human Resources (ÚNKP-19-4-BME-189) within the scope of the New National Excellence Program, and by the János Bolyai Research Scholarship of the Hungarian Academy of Sciences. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research. Patrik Reizinger expresses his gratitude for the financial support of Nokia Bell Labs Hungary.



Author information

Corresponding author: Patrik Reizinger.



Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Reizinger, P., Gyires-Tóth, B. (2019). Stochastic Weight Matrix-Based Regularization Methods for Deep Neural Networks. In: Nicosia, G., Pardalos, P., Umeton, R., Giuffrida, G., Sciacca, V. (eds) Machine Learning, Optimization, and Data Science. LOD 2019. Lecture Notes in Computer Science, vol. 11943. Springer, Cham. https://doi.org/10.1007/978-3-030-37599-7_5


  • DOI: https://doi.org/10.1007/978-3-030-37599-7_5


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-37598-0

  • Online ISBN: 978-3-030-37599-7

