Japanese Journal of Radiology, Volume 37, Issue 1, pp 15–33

Technical and clinical overview of deep learning in radiology

  • Daiju Ueda
  • Akitoshi Shimazaki
  • Yukio Miki
Invited Review


Deep learning has been applied to clinical applications not only in radiology but also in all other areas of medicine. This review provides a technical and clinical overview of deep learning in radiology. To give a practical understanding, deep learning techniques are divided into five categories: classification, object detection, semantic segmentation, image processing, and natural language processing. After a brief overview of how these network architectures evolved, clinical applications based on deep learning are introduced. These clinical applications are then summarized to reveal a key feature of deep learning: its performance is highly dependent on the training and test datasets. The core technology of deep learning was developed through image classification tasks, and in the medical field, radiologists are the specialists in such tasks. The use of clinical applications based on deep learning can, therefore, be expected to contribute to substantial improvements in radiology, and by gaining a better understanding of the features of deep learning, radiologists are well placed to lead this area of medical development.


Keywords: Deep learning · Artificial intelligence · AI · Neural network · Radiology · Review



Abbreviations

NLP: Natural language processing
ANN: Artificial neural network
AUC: Area under the curve
ROC: Receiver operating characteristic
CNN: Convolutional neural network
SR: Super resolution
LR: Low resolution
HR: High resolution
GAN: Generative adversarial network
NAS: Neural architecture search
ILSVRC: ImageNet large-scale visual recognition challenge
FCN: Fully convolutional network
CRF: Conditional random field
R-CNN: Regions with convolutional neural network features
YOLO: You only look once
SSD: Single shot MultiBox detector
PSP: Pyramid scene parsing
FSRCNN: Fast super resolution convolutional neural network
ESPCN: Efficient sub-pixel convolutional neural network
VDSR: Very deep super resolution
DRCN: Deeply-recursive convolutional network
EDSR: Enhanced deep super resolution network
RDN: Residual dense network
DBPN: Deep back-projection networks
ZSSR: Zero shot super resolution
CBOW: Continuous bag-of-words
GloVe: Global vectors for word representation
DCGAN: Deep convolutional generative adversarial network
XOGAN: Generative adversarial network with XO-structure
ENAS: Efficient neural architecture search
DARTS: Differentiable architecture search
NAO: Neural architecture optimization
HMH: Hemorrhage, mass effect, or hydrocephalus
CT: Computed tomography
SAI: Suspected acute infarct
HCC: Hepatocellular carcinoma
MR: Magnetic resonance
MCI: Mild cognitive impairment
ICH: Intracranial hemorrhage
EDH/SDH: Epidural/subdural hemorrhage
SAH: Subarachnoid hemorrhage
ASL: Arterial spin labeling
VN: Variational network
PICS: Parallel imaging and compressed sensing
DnCNN: Denoising convolutional neural network
PE: Pulmonary embolism
AI: Artificial intelligence



Funding: A separate research project on deep learning for mammography received $10,000 in 2017 from Wellness Open Living Labs, LLC, Osaka, Japan.

Compliance with ethical standards

Conflict of interest

Daiju Ueda received a research grant from Wellness Open Living Labs, LLC.

Ethical considerations

This article does not contain any research involving human participants or animals performed by any of the authors.



Copyright information

© Japan Radiological Society 2018

Authors and Affiliations

  1. Department of Diagnostic and Interventional Radiology, Osaka City University Graduate School of Medicine, Osaka, Japan
