Multi-layer Domain Adaptation for Deep Convolutional Networks

  • Ozan Ciga
  • Jianan Chen
  • Anne Martel
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11795)

Abstract

Despite their success in many computer vision tasks, convolutional networks tend to require large amounts of labeled data to achieve generalization. Furthermore, performance is not guaranteed on a sample from an unseen domain at test time if the network was not exposed to similar samples from that domain at training time. This hinders the adoption of these techniques in clinical settings, where imaging data is scarce and where the intra- and inter-domain variance of the data can be substantial. We propose a domain adaptation technique, especially suitable for deep networks, that alleviates this requirement for labeled data. Our method utilizes gradient reversal layers [4] and Squeeze-and-Excite modules [6] to stabilize training in deep networks. The proposed method was applied to publicly available histopathology and chest X-ray databases and achieved superior performance to existing state-of-the-art networks with and without domain adaptation. Depending on the application, our method can improve multi-class classification accuracy by 5–20% compared to DANN introduced in [4].
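The gradient reversal layer of [4], on which this method builds, acts as the identity in the forward pass but negates (and scales) gradients in the backward pass, so that the feature extractor is trained adversarially against a domain classifier. The following is a minimal framework-free NumPy sketch of that behavior, not the authors' implementation; the class name and `lam` parameter are illustrative, and in practice this is written as a custom autograd op in a deep-learning framework.

```python
import numpy as np

class GradientReversal:
    """Sketch of a gradient reversal layer (Ganin & Lempitsky [4]).

    Forward pass: identity. Backward pass: incoming gradients are
    multiplied by -lam, so the feature extractor receives a gradient
    that *maximizes* the domain classifier's loss, encouraging
    domain-invariant features.
    """

    def __init__(self, lam=1.0):
        self.lam = lam  # scaling factor for the reversed gradient

    def forward(self, x):
        # Features pass through unchanged.
        return x

    def backward(self, grad_output):
        # Negate and scale the gradient flowing back from the
        # domain classifier toward the feature extractor.
        return -self.lam * grad_output
```

In a full DANN-style setup, this layer sits between the shared feature extractor and the domain classifier, while the label classifier receives ordinary (unreversed) gradients.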

Notes

Acknowledgments

This work was funded by Canadian Cancer Society (grant #705772) and NSERC RGPIN-2016-06283.

References

  1. Aresta, G., et al.: BACH: grand challenge on breast cancer histology images. Med. Image Anal. 56, 122–139 (2019)
  2. Arjovsky, M., Chintala, S., Bottou, L.: Wasserstein GAN. arXiv preprint arXiv:1701.07875 (2017)
  3. Baur, C., Albarqouni, S., Navab, N.: Semi-supervised deep learning for fully convolutional networks. In: Descoteaux, M., Maier-Hein, L., Franz, A., Jannin, P., Collins, D.L., Duchesne, S. (eds.) MICCAI 2017. LNCS, vol. 10435, pp. 311–319. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-66179-7_36
  4. Ganin, Y., Lempitsky, V.: Unsupervised domain adaptation by backpropagation. arXiv preprint arXiv:1409.7495 (2014)
  5. Gulrajani, I., Ahmed, F., Arjovsky, M., Dumoulin, V., Courville, A.C.: Improved training of Wasserstein GANs. In: Advances in Neural Information Processing Systems, pp. 5767–5777 (2017)
  6. Hu, J., Shen, L., Sun, G.: Squeeze-and-excitation networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7132–7141 (2018)
  7. ICIAR: ICIAR 2018 challenge (2018). https://iciar2018-challenge.grand-challenge.org/. Accessed 16 Aug 2019
  8. Maicas, G., Bradley, A.P., Nascimento, J.C., Reid, I., Carneiro, G.: Training medical image analysis systems like radiologists. In: Frangi, A.F., Schnabel, J.A., Davatzikos, C., Alberola-López, C., Fichtinger, G. (eds.) MICCAI 2018. LNCS, vol. 11070, pp. 546–554. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-00928-1_62
  9. Open-i: What is Open-i? (2018). https://openi.nlm.nih.gov/faq/. Accessed 16 Aug 2019
  10. Saito, K., Ushiku, Y., Harada, T.: Asymmetric tri-training for unsupervised domain adaptation. In: Proceedings of the 34th International Conference on Machine Learning, vol. 70, pp. 2988–2997. JMLR.org (2017)
  11. Shimodaira, H.: Improving predictive inference under covariate shift by weighting the log-likelihood function. J. Stat. Plann. Infer. 90(2), 227–244 (2000)
  12. Shu, R., Bui, H.H., Narui, H., Ermon, S.: A DIRT-T approach to unsupervised domain adaptation. arXiv preprint arXiv:1802.08735 (2018)

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Medical Biophysics, University of Toronto, Toronto, Canada