Deep Learning-Based Bias Transfer for Overcoming Laboratory Differences of Microscopic Images

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12722)


The automated analysis of medical images is currently limited by technical and biological noise and bias. The same source tissue can be represented by vastly different images if the image acquisition or processing protocols vary. For an image analysis pipeline, it is crucial to compensate for such biases to avoid misinterpretations. Here, we evaluate, compare, and improve existing generative model architectures to overcome domain shifts for immunofluorescence (IF) and Hematoxylin and Eosin (H&E) stained microscopy images. To determine the performance of the generative models, the original and transformed images were segmented or classified by deep neural networks trained only on images of the target bias. In our analysis, U-Net CycleGANs trained with an additional identity loss and an MS-SSIM-based loss led to the best results for the IF samples, and Fixed-Point GANs trained with an additional structure loss led to the best results for the H&E stained samples. Adapting the bias of the samples significantly improved pixel-level segmentation of human kidney glomeruli and podocytes and improved the classification accuracy for human prostate biopsies by up to 14%.
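The composite generator objective described above (adversarial term plus cycle-consistency, identity, and a structure-preserving SSIM-based term) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: it uses a simplified single-scale, global-statistics SSIM rather than true multi-scale MS-SSIM, and the weighting factors `lam_cyc`, `lam_idt`, and `lam_ssim` are assumed placeholders.

```python
import numpy as np

def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified SSIM over global image statistics (the paper uses MS-SSIM)."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def generator_loss(real_a, rec_a, idt_a, fake_logits,
                   lam_cyc=10.0, lam_idt=5.0, lam_ssim=1.0):
    """Composite CycleGAN-style generator loss (illustrative weights).

    real_a      -- image from domain A
    rec_a       -- reconstruction A -> B -> A (cycle)
    idt_a       -- output of the B->A generator fed with a domain-A image
    fake_logits -- discriminator output on the translated image
    """
    adv = np.mean((fake_logits - 1.0) ** 2)        # LSGAN-style adversarial term
    cyc = np.mean(np.abs(real_a - rec_a))          # cycle-consistency (L1)
    idt = np.mean(np.abs(real_a - idt_a))          # identity loss
    structure = 1.0 - ssim(real_a, rec_a)          # SSIM-based structure term
    return adv + lam_cyc * cyc + lam_idt * idt + lam_ssim * structure
```

A perfect generator (reconstruction and identity output equal to the input, discriminator fully fooled) drives every term to zero; the SSIM term additionally penalizes reconstructions that match pixel-wise statistics but lose local structure, which is the motivation for adding it alongside the plain L1 cycle loss.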


CycleGAN · Fixed-Point GAN · Domain adaptation · H&E staining · Unsupervised learning · Immunofluorescence microscopy



This work was supported by DFG (SFB 1192 projects B8, B9 and C3) and by BMBF (eMed Consortia ‘Fibromap’).



Copyright information

© Springer Nature Switzerland AG 2021

Authors and Affiliations

  1. Institute of Medical Systems Biology, Center for Biomedical AI (bAIome), University Medical Center Hamburg-Eppendorf, Hamburg, Germany
  2. III. Department of Medicine, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
  3. Institute of Pathology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
  4. Institute of Experimental Medicine and Systems Biology, and Division of Nephrology and Clinical Immunology, RWTH Aachen University, Aachen, Germany
