
Cross-domain Medical Image Translation by Shared Latent Gaussian Mixture Model

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12262)

Abstract

Current deep-learning-based segmentation models generalize poorly across domains due to the lack of sufficient labelled image data. An important example in radiology is generalizing from contrast-enhanced CT to non-contrast CT. In real-world clinical applications, cross-domain image analysis tools are in high demand, since precise diagnoses often draw on medical images from several domains; for example, contrast-enhanced CT acquired at different phases is used to highlight particular pathologies or internal organs. Many existing cross-domain image-to-image translation models show impressive results on large-organ segmentation by successfully preserving large structures across domains. However, such models lack the ability to preserve fine structures during translation, which is significant for many clinical applications, such as segmenting small calcified plaques in the aorta and pelvic arteries. To preserve fine structures during medical image translation, we propose a patch-based model with shared latent variables drawn from a Gaussian mixture. We compare our image translation framework to several state-of-the-art cross-domain translation methods and show that our model better preserves fine structures. The superior performance of our model is verified on two tasks using the translated images: detection and segmentation of aortic plaques, and pancreas segmentation. We expect the utility of our framework to extend to problems beyond segmentation, given the improved quality of the generated images and the enhanced ability to preserve small structures.
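To make the shared-latent idea concrete, here is a minimal, purely illustrative NumPy sketch. All dimensions, names, and the linear "decoders" below are invented for illustration (the paper's actual model uses learned deep networks): the key point is that the same latent code, drawn from a Gaussian mixture prior, is decoded into both imaging domains, which is what ties contrast-enhanced and non-contrast patches together.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: K mixture components, d-dim latent, p-pixel patches.
K, d, p = 4, 16, 64

# Shared Gaussian mixture prior over the latent space.
pi = np.full(K, 1.0 / K)       # mixture weights
mu = rng.normal(size=(K, d))   # component means
sigma = 0.5                    # shared isotropic component std

def sample_shared_latent(n):
    """Draw n latent codes z from the Gaussian mixture shared by both domains."""
    comps = rng.choice(K, size=n, p=pi)              # pick a component per sample
    return mu[comps] + sigma * rng.normal(size=(n, d))

# Stand-in decoders: one linear map per domain (contrast / non-contrast).
# Both read from the SAME latent code z.
W_contrast = rng.normal(size=(d, p))
W_noncontrast = rng.normal(size=(d, p))

z = sample_shared_latent(8)
patch_contrast = z @ W_contrast        # patches rendered in domain A
patch_noncontrast = z @ W_noncontrast  # the same content rendered in domain B
```

Because translation goes through a single latent code rather than a direct pixel-to-pixel map, any structure captured in z is, by construction, available to both domain decoders.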


Acknowledgments

This research was supported in part by the Intramural Research Program of the National Institutes of Health Clinical Center. We thank NVIDIA for GPU card donations.


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, USA
  2. School of Medicine and Public Health, University of Wisconsin, Madison, USA
