Synth-by-Reg (SbR): Contrastive Learning for Synthesis-Based Registration of Paired Images

Part of the Lecture Notes in Computer Science book series (LNIP, volume 12965)

Abstract

Nonlinear inter-modality registration is often challenging due to the lack of objective functions that are good proxies for alignment. Here we propose a synthesis-by-registration method to convert this problem into an easier intra-modality task. We introduce a registration loss for weakly supervised image translation between domains that does not require perfectly aligned training data. This loss capitalises on a registration U-Net with frozen weights to drive a synthesis CNN towards the desired translation. We complement this loss with a structure-preserving constraint based on contrastive learning, which prevents blurring and content shifts due to overfitting. We apply this method to the registration of histological sections to MRI slices, a key step in 3D histology reconstruction. Results on two public datasets show improvements over registration based on mutual information (13% reduction in landmark error) and synthesis-based algorithms such as CycleGAN (11% reduction), and are comparable to registration with label supervision. Code and data are publicly available at https://github.com/acasamitjana/SynthByReg.
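The abstract outlines the training signal: a synthesis CNN translates histology into MRI contrast, a registration network with frozen weights aligns the synthetic image to the real MRI (turning the inter-modality problem into an intra-modality one), and a contrastive term keeps the synthetic image structurally faithful to its source. The PyTorch sketch below illustrates that idea under loose assumptions; the networks G, encoder and reg_net, the warp helper, the PatchNCE-style info_nce loss, the temperature and the loss weighting are toy placeholders chosen for illustration, not the authors' architectures or hyperparameters.

import torch
import torch.nn as nn
import torch.nn.functional as F


def info_nce(feat_fake, feat_src, tau=0.07):
    # PatchNCE-style contrastive loss: the feature at each spatial location of
    # the synthetic image is a positive for the same location in the source
    # image and a negative for every other location.
    b, c, h, w = feat_fake.shape
    q = F.normalize(feat_fake.flatten(2), dim=1)          # (B, C, HW)
    k = F.normalize(feat_src.flatten(2), dim=1)           # (B, C, HW)
    logits = torch.einsum('bci,bcj->bij', q, k) / tau     # (B, HW, HW)
    target = torch.arange(h * w).expand(b, -1)
    return F.cross_entropy(logits.reshape(-1, h * w), target.reshape(-1))


def warp(img, flow):
    # Bilinear warp of img by a dense displacement field flow (B, 2, H, W),
    # expressed in normalised [-1, 1] coordinates.
    b, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w),
                            indexing='ij')
    base = torch.stack((xs, ys), dim=-1).expand(b, -1, -1, -1)
    return F.grid_sample(img, base + flow.permute(0, 2, 3, 1),
                         align_corners=True)


# Toy placeholder networks (not the paper's architectures): G translates
# histology to MRI contrast, encoder extracts features for the contrastive
# term, and reg_net is an intra-modality registration network kept frozen.
G = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 1, 3, padding=1))
encoder = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
reg_net = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(16, 2, 3, padding=1))   # predicts a 2-D flow
for p in reg_net.parameters():
    p.requires_grad_(False)                               # frozen registration net

opt = torch.optim.Adam(list(G.parameters()) + list(encoder.parameters()), lr=2e-4)

histology = torch.rand(4, 1, 32, 32)    # stand-ins for a paired training batch
mri = torch.rand(4, 1, 32, 32)

for _ in range(2):                      # toy training loop
    fake_mri = G(histology)
    flow = reg_net(torch.cat([fake_mri, mri], dim=1))     # intra-modality registration
    loss_reg = F.l1_loss(warp(fake_mri, flow), mri)       # registration-driven synthesis loss
    loss_nce = info_nce(encoder(fake_mri), encoder(histology))  # structure preservation
    loss = loss_reg + loss_nce                            # illustrative 1:1 weighting
    opt.zero_grad()
    loss.backward()
    opt.step()

Because reg_net is frozen, gradients from the registration loss flow only into the generator, which is what steers the synthesis towards a contrast the registration network can align, while the contrastive term discourages blurring and content shifts.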

Keywords

  • Image synthesis
  • Inter-modality registration
  • Deformable registration
  • Contrastive estimation

Author information

Corresponding author

Correspondence to Adrià Casamitjana.

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Casamitjana, A., Mancini, M., Iglesias, J.E. (2021). Synth-by-Reg (SbR): Contrastive Learning for Synthesis-Based Registration of Paired Images. In: Svoboda, D., Burgos, N., Wolterink, J.M., Zhao, C. (eds.) Simulation and Synthesis in Medical Imaging. SASHIMI 2021. Lecture Notes in Computer Science, vol. 12965. Springer, Cham. https://doi.org/10.1007/978-3-030-87592-3_5

  • DOI: https://doi.org/10.1007/978-3-030-87592-3_5

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-87591-6

  • Online ISBN: 978-3-030-87592-3

  • eBook Packages: Computer Science, Computer Science (R0)