Factorised Spatial Representation Learning: Application in Semi-supervised Myocardial Segmentation

  • Agisilaos Chartsias
  • Thomas Joyce
  • Giorgos Papanastasiou
  • Scott Semple
  • Michelle Williams
  • David Newby
  • Rohan Dharmakumar
  • Sotirios A. Tsaftaris
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11071)

Abstract

The success and generalisation of deep learning algorithms heavily depend on learning good feature representations. In medical imaging this entails representing anatomical information, as well as properties related to the specific imaging setting. Anatomical information is required to perform further analysis, whereas imaging information is key to disentangling scanner variability and potential artefacts. Factorising the two would allow algorithms to be trained only on the information relevant to a given task. To date, such factorisation has not been attempted. In this paper, we propose a latent-space factorisation methodology relying on the cycle-consistency principle. As an example application, we consider cardiac MR segmentation, where we separate information related to the myocardium from features related to imaging and surrounding substructures. We demonstrate the proposed method's utility in a semi-supervised setting: we use very few labelled images together with many unlabelled images to train a myocardium segmentation neural network. Specifically, we achieve performance comparable to fully supervised networks using a fraction of the labelled images, in experiments on ACDC and a dataset from the Edinburgh Imaging Facility QMRI. Code will be made available at https://github.com/agis85/spatial_factorisation.
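The factorisation described above splits an image into a spatial (anatomical) factor and a low-dimensional imaging factor, with a decoder recombining them under a cycle-consistency (reconstruction) constraint. The toy NumPy sketch below illustrates that structure only; the encoders and decoder here are illustrative stand-ins (a threshold and intensity statistics), not the authors' actual networks, and all function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_anatomy(x):
    # Hypothetical spatial factor: a binary, image-shaped map
    # (in the paper this would be a learned segmentation-like mask).
    return (x > x.mean()).astype(float)

def encode_modality(x):
    # Hypothetical imaging factor: a small vector of intensity
    # statistics standing in for scanner/appearance information.
    return np.array([x.mean(), x.std()])

def decode(s, z):
    # Recombine the two factors into an image: re-modulate the
    # spatial factor with the imaging statistics.
    mean, std = z
    return s * std + mean

def cycle_loss(x):
    # Cycle-consistency: encoding then decoding should reproduce x.
    s = encode_anatomy(x)
    z = encode_modality(x)
    x_hat = decode(s, z)
    return np.abs(x - x_hat).mean()

x = rng.normal(size=(8, 8))   # toy "image"
loss = cycle_loss(x)
```

In the semi-supervised setting, the spatial factor doubles as the segmentation output: a supervised loss is applied to it on the few labelled images, while unlabelled images contribute only through the reconstruction constraint.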

Acknowledgements

This work was supported in part by the US National Institutes of Health (1R01HL136578-01) and UK EPSRC (EP/P022928/1). We also thank NVIDIA Corporation for donating a Titan X GPU.


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Agisilaos Chartsias (1)
  • Thomas Joyce (1)
  • Giorgos Papanastasiou (2)
  • Scott Semple (2)
  • Michelle Williams (2)
  • David Newby (2)
  • Rohan Dharmakumar (3)
  • Sotirios A. Tsaftaris (1)
  1. Institute for Digital Communications, School of Engineering, University of Edinburgh, Edinburgh, UK
  2. Edinburgh Imaging Facility QMRI, Centre for Cardiovascular Science, Edinburgh, UK
  3. Cedars-Sinai Medical Center, Los Angeles, USA