Tumor-Aware, Adversarial Domain Adaptation from CT to MRI for Lung Cancer Segmentation

  • Jue Jiang
  • Yu-Chi Hu
  • Neelam Tyagi
  • Pengpeng Zhang
  • Andreas Rimner
  • Gig S. Mageras
  • Joseph O. Deasy
  • Harini Veeraraghavan
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11071)

Abstract

We present an adversarial domain adaptation based deep learning approach for automatic tumor segmentation from T2-weighted MRI. Our approach is composed of two steps: (i) a tumor-aware unsupervised cross-domain adaptation (CT to MRI), followed by (ii) semi-supervised tumor segmentation using a U-Net trained with synthesized MRIs and a limited number of original MRIs. We introduced a novel target-specific loss, called the tumor-aware loss, for unsupervised cross-domain adaptation that helps to preserve tumors on synthesized MRIs produced from CT images. In comparison, state-of-the-art adversarial networks trained without our tumor-aware loss produced MRIs with ill-preserved or missing tumors. All networks were trained using labeled CT images from 377 patients with non-small cell lung cancer obtained from the Cancer Imaging Archive and unlabeled T2w MRIs from a completely unrelated cohort of 6 patients with pre-treatment and 36 on-treatment scans. Next, we combined the 6 labeled pre-treatment MRI scans with the synthesized MRIs to boost tumor segmentation accuracy through semi-supervised learning. Semi-supervised training of cycle-GAN produced a segmentation accuracy of 0.66, computed using the Dice similarity coefficient (DSC). Our method trained with only synthesized MRIs produced an accuracy of 0.74, while the same method trained in the semi-supervised setting produced the best accuracy of 0.80 on the test set. Our results show that tumor-aware adversarial domain adaptation helps to achieve reasonably accurate cancer segmentation from limited MRI data by leveraging large CT datasets.
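The segmentation accuracies above are Dice similarity coefficients, which measure the overlap between a predicted and a ground-truth binary mask. As a minimal illustration (this is not the authors' evaluation code), DSC for a pair of binary masks can be computed as:

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks.

    pred, target: boolean or {0, 1} arrays of the same shape.
    Returns 2*|A∩B| / (|A| + |B|); eps avoids division by zero
    when both masks are empty.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

# Toy 2D "tumor" masks: identical masks give DSC ~ 1.0,
# partially overlapping masks give a value between 0 and 1.
a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True  # 16 voxels
b = np.zeros((8, 8), dtype=bool); b[3:7, 3:7] = True  # 16 voxels, 9 shared
print(round(dice_score(a, a), 3))  # 1.0
print(round(dice_score(a, b), 3))  # 2*9 / (16+16) = 0.562
```

A DSC of 0.80, as reported for the semi-supervised setting, thus indicates that 80% of the combined predicted and true tumor volume is shared overlap.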

Notes

Acknowledgement

This work was funded in part through the NIH/NCI Cancer Center Support Grant P30 CA008748.

Supplementary material

Supplementary material 1: 473975_1_En_86_MOESM1_ESM.pdf (106 KB)

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Jue Jiang (1)
  • Yu-Chi Hu (1)
  • Neelam Tyagi (1)
  • Pengpeng Zhang (1)
  • Andreas Rimner (2)
  • Gig S. Mageras (1)
  • Joseph O. Deasy (1)
  • Harini Veeraraghavan (1)

  1. Medical Physics, Memorial Sloan Kettering Cancer Center, New York, USA
  2. Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, USA