End to End Deep Neural Network Frequency Demodulation of Speech Signals

  • Dan Elbaz
  • Michael Zibulevsky
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 886)

Abstract

Frequency modulation (FM) is a form of radio broadcasting that has been in wide use for almost a century. We propose a software-defined-radio (SDR) receiver for FM demodulation that adopts an end-to-end learning-based approach and exploits prior information about the transmitted speech message in the demodulation process. The receiver detects and enhances speech directly from the in-phase and quadrature components of the baseband signal. The new system yields high detection performance under both acoustical disturbances and communication-channel noise, and is expected to outperform established methods under low signal-to-noise-ratio (SNR) conditions.
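To make the pipeline concrete, here is a minimal sketch of the idea in PyTorch: FM-modulate a message to complex baseband, add channel noise, and train an LSTM to regress the message directly from the in-phase and quadrature (I/Q) samples. Everything here (the FMDemodLSTM name, the toy band-limited message standing in for speech, the sample rate, deviation constant, and hyper-parameters) is an illustrative assumption, not the authors' architecture or training setup.

```python
# Minimal sketch of an end-to-end learned FM demodulator.
# NOT the paper's implementation: model name, sizes, and the toy
# message generator are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn

fs = 8000.0   # sample rate in Hz (assumed)
kf = 75.0     # frequency-deviation constant, Hz per unit message (assumed)

def fm_modulate_baseband(message: np.ndarray) -> np.ndarray:
    """FM-modulate `message` to complex baseband: exp(j*2*pi*kf*integral(m))."""
    phase = 2.0 * np.pi * kf * np.cumsum(message) / fs
    return np.exp(1j * phase)

class FMDemodLSTM(nn.Module):
    """LSTM regressor mapping I/Q sample pairs to message samples."""
    def __init__(self, hidden_size: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden_size,
                            num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden_size, 1)

    def forward(self, iq: torch.Tensor) -> torch.Tensor:
        # iq: (batch, time, 2) -> (batch, time) estimated message
        h, _ = self.lstm(iq)
        return self.out(h).squeeze(-1)

# Toy training loop on random band-limited "speech-like" messages.
model = FMDemodLSTM()
opt = torch.optim.RMSprop(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(200):
    msg = np.random.randn(512)
    msg = np.convolve(msg, np.ones(16) / 16, mode="same")  # crude low-pass
    x = fm_modulate_baseband(msg)
    # Additive complex channel noise on the I/Q stream.
    x += 0.05 * (np.random.randn(len(x)) + 1j * np.random.randn(len(x)))
    iq = torch.tensor(np.stack([x.real, x.imag], axis=-1),
                      dtype=torch.float32).unsqueeze(0)      # (1, T, 2)
    target = torch.tensor(msg, dtype=torch.float32).unsqueeze(0)  # (1, T)
    opt.zero_grad()
    loss = loss_fn(model(iq), target)
    loss.backward()
    opt.step()
```

RMSprop is used here only because the paper's reference list points at Tieleman and Hinton's rmsprop lecture; whether the authors trained with it is an assumption.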

Keywords

Frequency modulation (FM) · Long short-term memory (LSTM) · Software-defined radio (SDR) · Deep learning · End-to-end learning · Amplitude noise · Phase noise

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Department of Computer Science, Technion Israel Institute of Technology, Haifa, Israel