Combined Learning for Similar Tasks with Domain-Switching Networks

  • Daniel Bug
  • Dennis Eschweiler
  • Qianyu Liu
  • Justus Schock
  • Leon Weninger
  • Friedrich Feuerhake
  • Julia Schüler
  • Johannes Stegmaier
  • Dorit Merhof
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11768)

Abstract

We introduce a domain switch for deep neural networks that re-weights convolutional kernels for inputs from a known domain. The technique is designed to address recurring tasks across multiple domains that are known at runtime and to incorporate them into a single, domain-spanning network. We evaluate the approach on three distinct tasks: combined cell nuclei analysis across different stains and fluorescence images, facial landmark detection in grayscale and thermal infrared images, and the BraTS challenge, where we treat the different recording institutions as domains. We find that conventional U-nets trained on multiple domains perform similarly to domain-specific U-nets. Our method significantly improves the results in facial landmark detection, whereas no change is measured in the other two experiments compared to multi-domain U-nets.
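As a rough illustration of the idea, here is a minimal sketch, assuming PyTorch: a shared convolution whose output channels are re-weighted by a learned, domain-specific gate selected via the domain index known at runtime. The class name DomainSwitchConv2d, the per-channel gating form, and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DomainSwitchConv2d(nn.Module):
    """Illustrative sketch: a shared 3x3 convolution whose output
    channels are re-weighted by a learned, domain-specific gate."""

    def __init__(self, in_ch: int, out_ch: int, num_domains: int):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        # One gating vector per domain, initialised to 1 (identity re-weighting).
        self.gates = nn.Parameter(torch.ones(num_domains, out_ch))

    def forward(self, x: torch.Tensor, domain: int) -> torch.Tensor:
        # Select the gate for the known domain and broadcast over H and W.
        gate = self.gates[domain].view(1, -1, 1, 1)
        return self.conv(x) * gate

# Example: one layer shared by two domains (e.g. grayscale vs. thermal images).
layer = DomainSwitchConv2d(in_ch=1, out_ch=16, num_domains=2)
out = layer(torch.randn(4, 1, 64, 64), domain=0)  # batch of domain-0 inputs
```

In this sketch, only the small gating vectors are domain-specific while all convolutional weights stay shared, which is one plausible way to realise a single, domain-spanning network of the kind the abstract describes.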

Keywords

Deep learning · Multi-modality · Multi-domain

Notes

Acknowledgements

This work was partially supported by the Federal Ministry of Education and Research – BMBF, Germany (grant no. 031 B0006B) and by the German Research Foundation – DFG (grant no. ME3737/3-1).

References

  1. Bakas, S., et al.: Advancing the cancer genome atlas glioma MRI collections with expert segmentation labels and radiomic features. Sci. Data 4, 170117 (2017)
  2. Bug, D., Grote, A., Schüler, J., Feuerhake, F., Merhof, D.: Analyzing immunohistochemically stained whole-slide images of ovarian carcinoma. In: Bildverarbeitung für die Medizin 2017, pp. 173–178. Springer, Heidelberg (2017). https://doi.org/10.1007/978-3-662-54345-0_41
  3. Gadermayr, M., Appel, V., Klinkhammer, B.M., Boor, P., Merhof, D.: Which way round? A study on the performance of stain-translation for segmenting arbitrarily dyed histological images. In: Frangi, A.F., Schnabel, J.A., Davatzikos, C., Alberola-López, C., Fichtinger, G. (eds.) MICCAI 2018. LNCS, vol. 11071, pp. 165–173. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-00934-2_19
  4. Kopaczka, M., Kolk, R., Merhof, D.: A fully annotated thermal face database and its application for thermal facial expression recognition. In: 2018 IEEE International Instrumentation and Measurement Technology Conference (I2MTC), pp. 1–6. IEEE (2018)
  5. Kopaczka, M., Schock, J., Merhof, D.: Super-realtime facial landmark detection and shape fitting by deep regression of shape model parameters. arXiv preprint arXiv:1902.03459 (2019)
  6. Kumar, N., Verma, R., Sharma, S., Bhargava, S., Vahadane, A., Sethi, A.: A dataset and a technique for generalized nuclear segmentation for computational pathology. IEEE Trans. Med. Imaging 36(7), 1550–1560 (2017)
  7. Maška, M., et al.: A benchmark for comparison of cell tracking algorithms. Bioinformatics 30(11), 1609–1617 (2014)
  8. Menze, B.H., et al.: The multimodal brain tumor image segmentation benchmark (BRATS). IEEE Trans. Med. Imaging 34(10), 1993–2024 (2015)
  9. Myronenko, A.: 3D MRI brain tumor segmentation using autoencoder regularization. In: Crimi, A., Bakas, S., Kuijf, H., Keyvan, F., Reyes, M., van Walsum, T. (eds.) BrainLes 2018. LNCS, vol. 11384, pp. 311–320. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-11726-9_28
  10. Ronneberger, O., Fischer, P., Brox, T.: U-Net: convolutional networks for biomedical image segmentation. In: Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F. (eds.) MICCAI 2015. LNCS, vol. 9351, pp. 234–241. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-24574-4_28
  11. Sagonas, C., Antonakos, E., Tzimiropoulos, G., Zafeiriou, S., Pantic, M.: 300 faces in-the-wild challenge: database and results. Image Vis. Comput. 47, 3–18 (2016)
  12. Sagonas, C., Tzimiropoulos, G., Zafeiriou, S., Pantic, M.: 300 faces in-the-wild challenge: the first facial landmark localization challenge. In: Proceedings of the IEEE International Conference on Computer Vision Workshops, pp. 397–403 (2013)
  13. Sagonas, C., Tzimiropoulos, G., Zafeiriou, S., Pantic, M.: A semi-automatic methodology for facial landmark annotation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 896–903 (2013)
  14. Ulman, V., et al.: An objective comparison of cell-tracking algorithms. Nat. Methods 14(12), 1141 (2017)

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Daniel Bug (1), corresponding author
  • Dennis Eschweiler (1)
  • Qianyu Liu (1)
  • Justus Schock (1)
  • Leon Weninger (1)
  • Friedrich Feuerhake (2)
  • Julia Schüler (3)
  • Johannes Stegmaier (1)
  • Dorit Merhof (1)
  1. Institute of Imaging and Computer Vision, RWTH Aachen University, Aachen, Germany
  2. Institute for Pathology, Hannover Medical School, Hannover, Germany
  3. Charles River Discovery Research Services Germany GmbH, Freiburg, Germany
