Automatic lung segmentation in low-dose chest CT scans using convolutional deep and wide network (CDWN)

  • S. Akila Agnes
  • J. Anitha
  • J. Dinesh Peter
Recent Advances in Deep Learning for Medical Image Processing


Computed tomography (CT) is the preferred imaging modality for diagnosing lung-related complaints, and automatic lung segmentation is the most common prerequisite for building a computerized diagnosis system that analyzes chest CT images. In this paper, a convolutional deep and wide network (CDWN) is proposed to segment the lung region from chest CT scans for further medical diagnosis. Earlier lung segmentation techniques depended on handcrafted features, so their performance relied on the particular features chosen for segmentation. The proposed model automatically segments the lung from a complete CT scan in two stages: (1) learning the filters needed to extract hierarchical feature representations at the convolutional layers, and (2) dense prediction with spatial features through learnable deconvolutional layers. The model has been trained and evaluated on low-dose chest CT scan images from the LIDC-IDRI database. The proposed CDWN achieves an average Dice coefficient of 0.95 and an accuracy of 98% in segmenting the lung regions from 20 test images, and it maintains consistent results across all test images. The experimental results confirm that the proposed approach outperforms other state-of-the-art methods for lung segmentation.
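The abstract evaluates segmentation quality with the Dice coefficient and pixel accuracy. As a minimal illustration of how these two metrics are computed on binary lung masks (the function names and toy masks below are illustrative, not taken from the paper):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * intersection / total if total else 1.0

def pixel_accuracy(pred, truth):
    """Fraction of pixels labeled identically in both masks."""
    return (pred.astype(bool) == truth.astype(bool)).mean()

# Toy example: a predicted mask shifted one column from the ground truth.
truth = np.zeros((8, 8), dtype=np.uint8)
truth[2:6, 2:6] = 1          # 16 ground-truth foreground pixels
pred = np.zeros((8, 8), dtype=np.uint8)
pred[2:6, 3:7] = 1           # 16 predicted pixels, 12 of them overlapping

print(dice_coefficient(pred, truth))  # 2*12 / (16+16) = 0.75
print(pixel_accuracy(pred, truth))    # 56 / 64 = 0.875
```

A Dice of 0.95, as reported for CDWN, therefore means the predicted and reference lung masks overlap almost completely; pixel accuracy alone can be misleadingly high on chest CT because the lung occupies only part of the image, which is why both metrics are reported.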


Keywords: Automatic lung segmentation · Convolutional neural network · Deep learning · Image processing and analysis · Medical imaging


Compliance with ethical standards

Conflict of interest

All authors declare that they have no conflict of interest.



Copyright information

© Springer-Verlag London Ltd., part of Springer Nature 2018

Authors and Affiliations

  1. Department of Computer Science and Engineering, Karunya Institute of Technology and Sciences, Coimbatore, India
