Representation Disentanglement for Multi-task Learning with Application to Fetal Ultrasound

  • Qingjie Meng
  • Nick Pawlowski
  • Daniel Rueckert
  • Bernhard Kainz
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11798)

Abstract

One of the biggest challenges for deep learning algorithms in medical image analysis is the indiscriminate mixing of image properties, e.g. artifacts and anatomy. These entangled image properties lead to a semantically redundant feature encoding for the relevant task and thus to poor generalization of deep learning algorithms. In this paper, we propose a novel representation disentanglement method to extract semantically meaningful and generalizable features for different tasks within a multi-task learning framework. Deep neural networks are utilized to ensure that the encoded features are maximally informative with respect to relevant tasks, while an adversarial regularization encourages these features to be disentangled and minimally informative about irrelevant tasks. We aim to use the disentangled representations to improve the generalization of deep neural networks. We demonstrate the advantages of the proposed method on synthetic data as well as fetal ultrasound images. Our experiments illustrate that our method is capable of learning disentangled internal representations, and that it outperforms baseline methods on multiple tasks, especially on images with new properties, e.g. previously unseen artifacts in fetal ultrasound.
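The abstract's objective (task heads that keep each encoding maximally informative about its own task, plus an adversarial term that makes it minimally informative about the other task) is concrete enough to sketch. Below is a minimal PyTorch sketch under stated assumptions: the module names (DisentangledMultiTaskNet, grad_reverse), layer sizes, loss weights, and the use of a gradient-reversal layer as the adversarial regularizer are all illustrative choices, not the authors' implementation.

# Minimal sketch of adversarially disentangled multi-task learning.
# All names and hyperparameters are illustrative assumptions; this is
# not the paper's architecture. Each task gets its own encoder; a task
# head keeps that encoding informative about the relevant task, while
# an adversarial head, trained through a gradient-reversal layer,
# penalizes information about the irrelevant task.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negated, scaled gradient on backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

class DisentangledMultiTaskNet(nn.Module):
    def __init__(self, n_classes_a, n_classes_b, feat_dim=64):
        super().__init__()
        def encoder():
            return nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, feat_dim), nn.ReLU(),
            )
        self.enc_a, self.enc_b = encoder(), encoder()   # one encoder per task
        self.head_a = nn.Linear(feat_dim, n_classes_a)  # relevant-task heads
        self.head_b = nn.Linear(feat_dim, n_classes_b)
        self.adv_a = nn.Linear(feat_dim, n_classes_b)   # predicts the *other* task
        self.adv_b = nn.Linear(feat_dim, n_classes_a)

    def forward(self, x, lam=1.0):
        za, zb = self.enc_a(x), self.enc_b(x)
        # Reversed gradients push each encoder to "forget" the other task
        # while the adversarial heads still learn to predict it.
        return (self.head_a(za), self.head_b(zb),
                self.adv_a(grad_reverse(za, lam)),
                self.adv_b(grad_reverse(zb, lam)))

# One illustrative training step on random stand-in data.
net = DisentangledMultiTaskNet(n_classes_a=2, n_classes_b=3)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()
x = torch.randn(8, 1, 64, 64)   # e.g. grayscale ultrasound patches
ya, yb = torch.randint(0, 2, (8,)), torch.randint(0, 3, (8,))
pa, pb, adv_a, adv_b = net(x)
loss = ce(pa, ya) + ce(pb, yb) + ce(adv_a, yb) + ce(adv_b, ya)
opt.zero_grad(); loss.backward(); opt.step()

Gradient reversal is one standard way to realize the min-max adversarial regularization the abstract describes: the adversarial heads are optimized to extract the irrelevant task's label from each encoding, while the reversed gradient trains the encoders to defeat them.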

Acknowledgments

We thank the Wellcome Trust (IEH Award [102431]), Nvidia (GPU donations), and Intel.


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Qingjie Meng (1)
  • Nick Pawlowski (1)
  • Daniel Rueckert (1)
  • Bernhard Kainz (1)

  1. Department of Computing, BioMedIA, Imperial College London, London, UK
