ContraReg: Contrastive Learning of Multi-modality Unsupervised Deformable Image Registration

  • Conference paper
Medical Image Computing and Computer Assisted Intervention – MICCAI 2022 (MICCAI 2022)

Abstract

Establishing voxelwise semantic correspondence across distinct imaging modalities is a foundational yet formidable computer vision task. Current multi-modality registration techniques maximize hand-crafted inter-domain similarity functions, are limited in modeling nonlinear intensity relationships and deformations, and may require significant re-engineering or underperform on new tasks, datasets, and domain pairs. This work presents ContraReg, an unsupervised contrastive representation learning approach to multi-modality deformable registration. By projecting learned multi-scale local patch features onto a jointly learned inter-domain embedding space, ContraReg obtains representations useful for non-rigid multi-modality alignment. Experimentally, ContraReg achieves accurate and robust results with smooth and invertible deformations across a series of baselines and ablations on a neonatal T1-T2 brain MRI registration task, with all methods validated over a wide range of deformation regularization strengths.
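To make the core ingredient concrete, the PyTorch sketch below illustrates a symmetric, patch-wise InfoNCE-style contrastive loss computed over embeddings of spatially corresponding patches from two modalities in a shared embedding space. It is a minimal illustration only: the projection network, patch sampling, multi-scale aggregation, feature dimensions, and temperature shown here are assumptions, not the paper's exact implementation.

```python
# Minimal sketch of a patch-wise InfoNCE contrastive loss between two modality
# embeddings projected into a shared space. All shapes and the temperature are
# illustrative assumptions, not ContraReg's actual configuration.
import torch
import torch.nn.functional as F

def patch_nce_loss(feat_a, feat_b, temperature=0.07):
    """feat_a, feat_b: (N, D) embeddings of N spatially corresponding patches
    from the moving and fixed modalities."""
    a = F.normalize(feat_a, dim=1)
    b = F.normalize(feat_b, dim=1)
    logits = a @ b.t() / temperature          # (N, N) cosine-similarity logits
    targets = torch.arange(a.size(0), device=a.device)
    # Each patch's positive is its spatial counterpart in the other modality;
    # every other patch in the batch serves as a negative.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Example: 256 sampled patches with 128-dimensional projected features per modality.
loss = patch_nce_loss(torch.randn(256, 128), torch.randn(256, 128))
```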



Funding

N. Dey and G. Gerig were partially supported by NIH 1R01HD088125-01A1. The dHCP data used in this study was funded by ERC Grant Agreement no. [319456].

Author information

Corresponding author

Correspondence to Neel Dey.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 1800 KB)

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Dey, N., Schlemper, J., Salehi, S.S.M., Zhou, B., Gerig, G., Sofka, M. (2022). ContraReg: Contrastive Learning of Multi-modality Unsupervised Deformable Image Registration. In: Wang, L., Dou, Q., Fletcher, P.T., Speidel, S., Li, S. (eds) Medical Image Computing and Computer Assisted Intervention – MICCAI 2022. MICCAI 2022. Lecture Notes in Computer Science, vol 13436. Springer, Cham. https://doi.org/10.1007/978-3-031-16446-0_7

  • DOI: https://doi.org/10.1007/978-3-031-16446-0_7

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-16445-3

  • Online ISBN: 978-3-031-16446-0

  • eBook Packages: Computer Science (R0)
