Unsupervised Segmentation of Micro-CT Images of Lung Cancer Specimen Using Deep Generative Models

  • Takayasu Moriya
  • Hirohisa Oda
  • Midori Mitarai
  • Shota Nakamura
  • Holger R. Roth
  • Masahiro Oda
  • Kensaku Mori
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11769)

Abstract

This paper presents a novel unsupervised segmentation method for the three-dimensional microstructure of lung cancer specimens in micro-computed tomography (micro-CT) images. Micro-CT scanning can nondestructively capture detailed histopathological components of resected lung cancer specimens. However, it is difficult to manually annotate cancer components on micro-CT images. Moreover, since most recent segmentation methods using deep neural networks rely on supervised learning, it is also difficult to cope with unlabeled micro-CT images. In this paper, we propose an unsupervised segmentation method using a deep generative model. Our method consists of two phases. In the first phase, we train our model by iterating two steps: (1) inferring pairs of continuous and categorical latent variables for image patches randomly extracted from an unlabeled image and (2) reconstructing image patches from the inferred pairs of latent variables. In the second phase, our trained model estimates the probability of belonging to each category and assigns labels to patches from the entire image in order to obtain the segmented image. We apply our method to seven micro-CT images of resected lung cancer specimens. The original sizes of the micro-CT images were \(1024 \times 1024 \times (544{-}2185)\) voxels, and their resolutions were 25–30 \(\upmu \)m/voxel. Our aim was to automatically divide each image into three regions: invasive carcinoma, noninvasive carcinoma, and normal tissue. In the quantitative evaluation, the mean normalized mutual information score of our results was 0.437. In the qualitative evaluation, our segmentation results proved helpful for observing the anatomical extent of cancer components. Moreover, we visualize the degree of certainty of the segmentation results by using the values of the categorical latent variables.
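
The two-phase procedure described in the abstract can be illustrated with a minimal sketch. The following is a hypothetical PyTorch example, not the authors' implementation: the patch size, latent dimensions, MLP encoder/decoder architectures, the Gumbel-softmax relaxation of the categorical latent, and the purely reconstruction-based loss are all illustrative assumptions, and the paper's actual model (which builds on deep generative and adversarial components) is not reproduced here.

# Minimal sketch of the two-phase procedure described in the abstract.
# Assumptions (not from the paper): PyTorch, single-channel 32x32 patches,
# 3 categories, small MLP encoder/decoder, Gumbel-softmax for the
# categorical latent, reconstruction loss only.
import torch
import torch.nn as nn
import torch.nn.functional as F

PATCH = 32   # illustrative patch edge length
N_CAT = 3    # invasive / noninvasive / normal, matching the paper's target regions
Z_DIM = 16   # illustrative continuous latent size


class Encoder(nn.Module):
    """Infers a continuous latent z and categorical logits c from a patch."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Flatten(),
            nn.Linear(PATCH * PATCH, 256), nn.ReLU(),
        )
        self.to_z = nn.Linear(256, Z_DIM)
        self.to_c = nn.Linear(256, N_CAT)

    def forward(self, x):
        h = self.body(x)
        return self.to_z(h), self.to_c(h)


class Decoder(nn.Module):
    """Reconstructs a patch from the inferred (z, c) pair."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(Z_DIM + N_CAT, 256), nn.ReLU(),
            nn.Linear(256, PATCH * PATCH), nn.Sigmoid(),
        )

    def forward(self, z, c):
        return self.body(torch.cat([z, c], dim=1)).view(-1, 1, PATCH, PATCH)


enc, dec = Encoder(), Decoder()
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

# Phase 1: iterate (1) inferring (z, c) for randomly extracted patches and
# (2) reconstructing the patches from the inferred latent variables.
for step in range(100):                        # illustrative iteration count
    patches = torch.rand(64, 1, PATCH, PATCH)  # stand-in for sampled micro-CT patches
    z, c_logits = enc(patches)
    c = F.gumbel_softmax(c_logits, tau=1.0, hard=False)  # soft categorical latent
    recon = dec(z, c)
    loss = F.mse_loss(recon, patches)          # reconstruction term only (sketch)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Phase 2: estimate per-category probabilities for patches covering the whole
# image and assign each patch its most probable label.
with torch.no_grad():
    test_patches = torch.rand(8, 1, PATCH, PATCH)
    _, c_logits = enc(test_patches)
    probs = F.softmax(c_logits, dim=1)         # per-category probabilities
    labels = probs.argmax(dim=1)               # one label per patch

In this sketch, the per-patch probabilities (probs) play the role of the categorical latent values that the paper uses to visualize the degree of certainty of the segmentation.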

Acknowledgements

Parts of this work were supported by MEXT/JSPS KAKENHI (26108006, 17H00867, 17K20099, 26560255, 15H01116, 15K19933), the JSPS Bilateral Joint Research Project, AMED (19lk1010036h0001), and the Hori Sciences and Arts Foundation.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Takayasu Moriya (1)
  • Hirohisa Oda (1)
  • Midori Mitarai (1)
  • Shota Nakamura (2)
  • Holger R. Roth (1)
  • Masahiro Oda (1)
  • Kensaku Mori (1, 3, 4)

  1. Graduate School of Informatics, Nagoya University, Nagoya, Japan
  2. Nagoya University Graduate School of Medicine, Nagoya, Japan
  3. Information Technology Center, Nagoya University, Nagoya, Japan
  4. Research Center for Medical Bigdata, National Institute of Informatics, Tokyo, Japan