
Closing the Gap Between Deep and Conventional Image Registration Using Probabilistic Dense Displacement Networks

  • Mattias P. Heinrich
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11769)

Abstract

Nonlinear image registration continues to be a fundamentally important tool in medical image analysis. Diagnostic tasks, image-guided surgery and radiotherapy as well as motion analysis all rely heavily on accurate intra-patient alignment. Furthermore, inter-patient registration enables atlas-based segmentation or landmark localisation and shape analysis. When labelled scans are scarce and anatomical differences are large, conventional registration has often remained superior to deep learning methods, which have so far mainly dealt with relatively small or low-complexity deformations. We address this shortcoming by leveraging ideas from probabilistic dense displacement optimisation, which has excelled in many registration tasks with large deformations. We propose to design a network with approximate min-convolutions and mean field inference for differentiable displacement regularisation within a discrete weakly-supervised registration setting. By employing these meaningful and theoretically proven constraints, our learnable registration algorithm contains very few trainable weights (primarily for feature extraction) and is easier to train with few labelled scans. It is very fast in training and inference and achieves state-of-the-art accuracy for the challenging inter-patient registration of abdominal CT, outperforming previous deep learning approaches by 15% in Dice overlap.
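To make the regularisation described in the abstract concrete, the sketch below illustrates how approximate min-convolutions and mean-field inference over a discretised displacement space can be assembled from standard, fully differentiable pooling operations (here in PyTorch). It is a simplified 2-D toy version under assumed shapes and settings: the tensor sizes, kernel sizes, the two-iteration schedule and the helper names `approx_min_convolution`, `mean_field_step` and `regularise` are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal 2-D sketch (assumed shapes and kernels) of differentiable displacement
# regularisation using approximate min-convolutions and mean-field inference.

import torch
import torch.nn.functional as F

def approx_min_convolution(cost, kernel=3):
    """Approximate min-convolution over the displacement dimensions.

    cost: (B, L, H, W) dissimilarity volume, where L = D*D displacement labels
    arranged on a D x D grid (a 2-D toy version of the 3-D setting).
    A min-filter (negated max-pooling) followed by an average filter
    approximates the lower envelope of a quadratic regulariser.
    """
    B, L, H, W = cost.shape
    D = int(L ** 0.5)
    # reshape so the displacement grid becomes spatial dims we can pool over
    c = cost.permute(0, 2, 3, 1).reshape(B * H * W, 1, D, D)
    c = -F.max_pool2d(-c, kernel, stride=1, padding=kernel // 2)    # min-filter
    c = F.avg_pool2d(c, kernel, stride=1, padding=kernel // 2)      # smooth envelope
    return c.reshape(B, H, W, L).permute(0, 3, 1, 2)

def mean_field_step(cost, kernel=3):
    """One mean-field-style update: spatial message passing on label beliefs."""
    probs = F.softmax(-cost, dim=1)                                   # per-voxel beliefs
    msg = F.avg_pool2d(probs, kernel, stride=1, padding=kernel // 2)  # neighbour messages
    return cost - torch.log(msg + 1e-8)                              # update costs

def regularise(cost, iterations=2):
    """Alternate min-convolution and mean-field steps; fully differentiable."""
    for _ in range(iterations):
        cost = approx_min_convolution(cost)
        cost = mean_field_step(cost)
    return F.softmax(-cost, dim=1)  # probabilistic displacement output

# toy usage: 25 displacement labels (5x5 grid) on a 32x32 feature map
dissimilarity = torch.randn(1, 25, 32, 32)
displacement_probs = regularise(dissimilarity)
print(displacement_probs.shape)  # torch.Size([1, 25, 32, 32])
```

Because every step in this sketch is a pooling or softmax operation, gradients flow through the regularisation itself, which is what allows the few trainable weights (for feature extraction) to be learned end-to-end from sparse label supervision.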

Keywords

Registration · Deep learning · Probabilistic · Abdominal

Supplementary material

490281_1_En_6_MOESM1_ESM.m4v (1.1 MB)
Supplementary material 1 (m4v, 1087 KB)
490281_1_En_6_MOESM2_ESM.m4v (1.1 MB)
Supplementary material 2 (m4v, 1101 KB)


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Institute of Medical Informatics, Universität zu Lübeck, Lübeck, Germany
