Convolutional Radio Modulation Recognition Networks

Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 629)


We study the adaptation of convolutional neural networks to the complex-valued temporal radio signal domain. We compare the efficacy of radio modulation classification using naively learned features against expert-feature-based methods that are widely used today, and we show significant performance improvements. We show that blind temporal learning on large and densely encoded time series using deep convolutional neural networks is viable and a strong candidate approach for this task, especially at low signal-to-noise ratio.
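The approach described above operates on raw sampled baseband: each received frame is represented as a 2 × N real array (in-phase and quadrature rows) and fed to a convolutional classifier that learns its own features end to end. The sketch below illustrates only the data flow, in plain NumPy with randomly initialised weights standing in for trained ones; the filter counts, kernel width, and class count are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

# A complex baseband signal of N samples, represented as a 2 x N real
# array (I and Q rows) -- the input format a convolutional classifier
# over radio time series can consume directly.
rng = np.random.default_rng(0)
n_samples = 128
iq = rng.standard_normal((2, n_samples))  # stand-in for one received frame


def conv1d(x, kernels):
    """Valid-mode 1-D convolution across time for each filter.

    x: (channels, time); kernels: (n_filters, channels, width).
    Returns (n_filters, time - width + 1).
    """
    n_filters, channels, width = kernels.shape
    out_len = x.shape[1] - width + 1
    out = np.zeros((n_filters, out_len))
    for f in range(n_filters):
        for t in range(out_len):
            out[f, t] = np.sum(kernels[f] * x[:, t:t + width])
    return out


# Randomly initialised filters stand in for learned temporal features;
# in the setting described, these would be trained on labelled frames.
kernels = rng.standard_normal((4, 2, 8)) * 0.1
feat = np.maximum(conv1d(iq, kernels), 0.0)   # ReLU activation
pooled = feat.mean(axis=1)                    # global average pool over time

# A linear head over pooled features (weights also hypothetical),
# followed by a softmax over candidate modulation classes.
n_classes = 3
w = rng.standard_normal((n_classes, 4)) * 0.1
logits = w @ pooled
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(probs.shape)
```

In a real system the convolution and classifier weights would be fit by gradient descent on labelled IQ frames rather than drawn at random; the point here is only that no hand-designed expert features sit between the sampled signal and the network.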


Machine learning · Radio · Software radio · Convolutional networks · Deep learning · Modulation recognition · Cognitive radio · Dynamic spectrum access



The authors would like to thank the Bradley Department of Electrical and Computer Engineering at the Virginia Polytechnic Institute and State University, the Hume Center, and DARPA for their generous support of this work.

This research was developed with funding from the Defense Advanced Research Projects Agency’s (DARPA) MTO Office under grant HR0011-16-1-0002. The views, opinions, and/or findings expressed are those of the author and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government.



Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  1. Bradley Department of Electrical and Computer Engineering, Virginia Tech, Arlington, USA
  2. Corgan Labs, San Jose, USA
