Comparison of Shallow and Deep Learning Methods on Classifying the Regional Pattern of Diffuse Lung Disease
This study aimed to compare shallow and deep learning methods for classifying patterns of interstitial lung disease (ILD). Using high-resolution computed tomography images, two experienced radiologists marked 1,200 regions of interest (ROIs): 600 ROIs were acquired with a GE scanner and 600 with a Siemens scanner, and each group of 600 consisted of 100 ROIs for each of six subregions, namely normal tissue and five regional pulmonary disease patterns (ground-glass opacity, consolidation, reticular opacity, emphysema, and honeycombing). We employed a convolutional neural network (CNN) with six learnable layers, consisting of four convolution layers and two fully connected layers. The classification results were compared with those of a shallow learning method, a support vector machine (SVM). The CNN classifier was significantly more accurate than the SVM classifier, by 6–9%. As the number of convolution layers increased, the classification accuracy of the CNN improved from 81.27% to 95.12%. In particular, for pathologically ambiguous cases, such as normal versus emphysema or honeycombing versus reticular opacity, adding convolution layers greatly reduced the misclassification rate between the respective classes. In conclusion, the CNN classifier showed significantly greater accuracy than the SVM classifier, and the results reflected structural characteristics inherent to the specific ILD patterns.
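To make the described architecture concrete, the sketch below traces feature-map shapes through a CNN of the stated form: four convolution layers followed by two fully connected layers ending in six class scores (normal plus the five ILD patterns). The ROI patch size (32×32), kernel sizes, strides, and channel counts are illustrative assumptions and are not taken from the paper.

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Spatial output size of a convolution (or pooling) layer."""
    return (size + 2 * pad - kernel) // stride + 1

def cnn_shapes(size=32, channels=1,
               conv_specs=((16, 3), (32, 3), (64, 3), (64, 3))):
    """Trace (channels, height, width) through four conv layers
    (stride 1, no padding); all hyperparameters here are assumed."""
    shapes = [(channels, size, size)]
    for out_ch, kernel in conv_specs:
        size = conv_out(size, kernel)
        shapes.append((out_ch, size, size))
    return shapes

shapes = cnn_shapes()
# The two fully connected layers would then map the flattened final
# feature map -> hidden units -> 6 class scores.
flattened = shapes[-1][0] * shapes[-1][1] * shapes[-1][2]
print(shapes)     # feature-map shape after each conv layer
print(flattened)  # input size of the first fully connected layer
```

With these assumed settings, each 3×3 convolution shrinks the patch by 2 pixels per side-pair (32 → 30 → 28 → 26 → 24), so the first fully connected layer would receive a 64×24×24 = 36,864-dimensional input.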
Keywords: Interstitial lung disease · Convolutional neural network · Deep architecture · Support vector machine · Interscanner variation
This study was supported by the Industrial Strategic Technology Development Program of the Ministry of Trade, Industry & Energy (10041618) in the Republic of Korea. This study was also supported by the ICT R&D program of MSIP/IITP (R6910-15-1023), a national project of the Republic of Korea.