
F3RNet: full-resolution residual registration network for deformable image registration

Original Article · International Journal of Computer Assisted Radiology and Surgery

Abstract

Purpose

Deformable image registration (DIR) is essential for many image-guided therapies. Recently, deep learning approaches have gained substantial popularity and success in DIR. Most of these approaches use a mono-stream, high-to-low/low-to-high (encoder-decoder) network structure and achieve satisfactory overall registration accuracy. However, accurate alignment of severely deformed local regions, which is crucial for pinpointing surgical targets, is often overlooked. Consequently, these approaches are not sensitive to hard-to-align regions, e.g., deformed liver lobes in intra-patient registration.

Methods

We propose a novel unsupervised registration network, the full-resolution residual registration network (F3RNet), for deformable registration of severely deformed organs. The proposed method combines two parallel processing streams in a residual learning fashion. One stream exploits full-resolution information to facilitate accurate voxel-level registration. The other stream learns deep multi-scale residual representations that provide robust recognition of corresponding structures. We also factorize the 3D convolutions to reduce the number of trainable parameters and improve network efficiency.
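The full architecture is specified in the body of the paper; as a rough, self-contained illustration of the two ideas named above, the PyTorch sketch below shows a factorized 3D convolution and a minimal two-stream residual unit that exchanges information between a full-resolution stream and a downsampled multi-scale stream. All module names, shapes, and design details here are our own assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FactorizedConv3d(nn.Module):
    """A 3x3x3 convolution factorized into 3x3x1 + 1x1x3.

    A full 3x3x3 kernel uses 27*C_in*C_out weights; the factorized pair
    uses (9 + 3)*C_in*C_out for equal channel widths, roughly a 2.25x
    reduction in parameters.
    """
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv_hw = nn.Conv3d(in_ch, out_ch, kernel_size=(3, 3, 1), padding=(1, 1, 0))
        self.conv_d = nn.Conv3d(out_ch, out_ch, kernel_size=(1, 1, 3), padding=(0, 0, 1))

    def forward(self, x):
        return self.conv_d(F.leaky_relu(self.conv_hw(x), 0.2))


class TwoStreamResidualUnit(nn.Module):
    """Couples a full-resolution stream z with a pooled multi-scale stream y.

    A pooled copy of z is fused into y; the updated y is upsampled and
    added back to z as a residual correction, so voxel-level detail and
    coarse context inform each other at every unit.
    """
    def __init__(self, ch, scale):
        super().__init__()
        self.scale = scale                      # pooling factor of the low-res stream
        self.pool = nn.MaxPool3d(scale)
        self.body = FactorizedConv3d(2 * ch, ch)
        self.to_full = nn.Conv3d(ch, ch, kernel_size=1)

    def forward(self, z, y):
        # fuse pooled full-resolution features into the multi-scale stream
        y = self.body(torch.cat([y, self.pool(z)], dim=1))
        # upsample and add a residual correction to the full-resolution stream
        z = z + self.to_full(
            F.interpolate(y, scale_factor=self.scale, mode="trilinear", align_corners=False)
        )
        return z, y
```

In an unsupervised setup such as the one described here, stacks of such units would process the concatenated fixed and moving volumes, a final convolution would regress a dense displacement field, and a differentiable spatial transformer would warp the moving image so that an image-similarity loss can be minimized without ground-truth deformations.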

Results

We validate the proposed method on a clinically acquired intra-patient abdominal CT-MRI dataset and a public inspiratory and expiratory thorax CT dataset. Experiments on both multimodal and unimodal registration demonstrate promising results compared to state-of-the-art approaches.
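The quantitative comparison appears in the full text; for context, registration accuracy in studies like this one is commonly summarized by the Dice overlap between anatomical labels warped by the predicted deformation and the corresponding fixed-image labels. A minimal version of that metric (our illustration, not the paper's evaluation code):

```python
import torch

def dice(warped_labels, fixed_labels, eps=1e-6):
    """Dice overlap between two binary label volumes of the same shape."""
    a, b = warped_labels.bool(), fixed_labels.bool()
    intersection = (a & b).sum().float()
    return (2.0 * intersection / (a.sum() + b.sum() + eps)).item()
```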

Conclusion

By combining high-resolution information and multi-scale representations in a highly interactive residual learning fashion, the proposed F3RNet achieves accurate overall and local registration. The run time for registering a pair of images is less than 3 s on a GPU. In future work, we will investigate how to cost-effectively process high-resolution information and fuse multi-scale representations.
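For reference, GPU run times like the figure quoted above are usually measured with CUDA events after a few warm-up passes, so that one-time costs (kernel selection, memory allocation) are excluded. The snippet below is a generic PyTorch timing pattern under that assumption; `model`, `fixed`, and `moving` are placeholders, not objects from the paper.

```python
import torch

def time_registration(model, fixed, moving, warmup=3, runs=10):
    """Average GPU inference time in milliseconds for one registration pass."""
    model.eval()
    with torch.no_grad():
        for _ in range(warmup):          # warm-up: excludes one-time setup costs
            model(fixed, moving)
        start = torch.cuda.Event(enable_timing=True)
        end = torch.cuda.Event(enable_timing=True)
        start.record()
        for _ in range(runs):
            model(fixed, moving)
        end.record()
        torch.cuda.synchronize()         # wait for all queued kernels to finish
    return start.elapsed_time(end) / runs
```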


Acknowledgements

This project was supported by the National Institutes of Health (Grant Nos. R01EB025964, R01DK119269, and P41EB015898), the National Key R&D Program of China (No. 2020AAA0108303), NSFC 41876098 and the Overseas Cooperation Research Fund of Tsinghua Shenzhen International Graduate School (Grant No. HW2018008).

Author information

Corresponding author

Correspondence to Xiu Li.

Ethics declarations

Conflict of interest

Jayender Jagadeesan owns equity in Navigation Sciences, Inc. He is a co-inventor of a navigation device, licensed to Navigation Sciences, that assists surgeons in tumor excision. Dr. Jagadeesan's interests were reviewed and are managed by BWH and Partners HealthCare in accordance with their conflict of interest policies.

Ethical approval

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.

Informed consent

Informed consent was obtained from all individual participants included in the study.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Xu, Z., Luo, J., Yan, J. et al. F3RNet: full-resolution residual registration network for deformable image registration. Int J CARS 16, 923–932 (2021). https://doi.org/10.1007/s11548-021-02359-4
