
Towards Network Simplification for Low-Cost Devices by Removing Synapses

  • Martin Bulín
  • Luboš Šmídl
  • Jan Švec
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11096)

Abstract

Deploying robust neural-network-based models on low-cost devices runs into hardware constraints such as a limited memory footprint and limited computing power. This work presents a general method for rapidly reducing the number of parameters (by 80–90%) in a trained network (DNN or LSTM) by removing its redundant synapses, without significantly hurting classification accuracy. This massive parameter reduction leads to a notable decrease in the model's size and in the actual prediction time of on-board classifiers. We demonstrate the pruning results on a simple speech recognition task; however, the method is applicable to any classification data.
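The abstract does not spell out the pruning criterion, so the following is only a minimal sketch of the general idea, assuming simple magnitude-based pruning of a trained Keras model; the prune_synapses helper and the 0.85 pruning fraction are hypothetical, chosen to match the reported 80–90% range.

    import numpy as np

    def prune_synapses(model, fraction=0.85):
        """Zero out the weakest synapses (smallest-magnitude weights)
        in every weight matrix of a trained keras.Model.

        Hypothetical sketch: the paper's actual relevance measure and
        pruning schedule may differ.
        """
        for layer in model.layers:
            params = layer.get_weights()
            for i, w in enumerate(params):
                if w.ndim < 2:
                    continue  # skip bias vectors; prune only weight matrices
                # Per-matrix magnitude threshold below which synapses are removed.
                threshold = np.quantile(np.abs(w), fraction)
                w[np.abs(w) < threshold] = 0.0
                params[i] = w
            layer.set_weights(params)
        return model

In practice, such pruning is usually followed by a short fine-tuning pass in which the removed weights are held at zero (e.g. via a binary mask), so the surviving synapses can compensate; storing the resulting sparse matrices is what shrinks both the model file and the on-device prediction time.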

Keywords

Pruning synapses · Network simplification · Minimal network structure · Low-cost devices · Speech recognition

Notes

Acknowledgments

This research was supported by the Ministry of Education, Youth and Sports of the Czech Republic project No. LO1506.


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. Department of Cybernetics, University of West Bohemia, Pilsen, Czech Republic
