Towards Robust CT-Ultrasound Registration Using Deep Learning Methods

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11038)

Abstract

Multi-modal registration, especially of CT/MR to ultrasound (US), remains a challenge, as conventional similarity metrics such as mutual information do not match the imaging characteristics of ultrasound. The main motivation for this work is to investigate whether a deep learning network can directly estimate the displacement between a pair of multi-modal image patches, without explicitly using a similarity metric and an optimizer, the two main components of a conventional registration framework. The proposed DVNet is a fully convolutional neural network trained on a large set of artificially generated displacement vectors (DVs). The DVNet was evaluated on mono-modal and simulated multi-modal data, as well as on real CT and US liver slices (selected from 3D volumes). The results show that the DVNet is quite robust on the mono-modal and simulated multi-modal data, but does not yet work on the real CT and US images.
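The core idea of the abstract, a fully convolutional network that regresses a displacement vector directly from a pair of multi-modal patches, can be sketched as below. This is a minimal illustration in PyTorch: the layer counts, channel widths, activation choice, and the `DVNet` class itself are assumptions for exposition, not the architecture reported in the paper.

```python
import torch
import torch.nn as nn


class DVNet(nn.Module):
    """Illustrative sketch (not the paper's configuration): a fully
    convolutional network that takes a fixed patch (e.g. CT) and a moving
    patch (e.g. US) and regresses a 2D displacement vector (dx, dy)."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            # 2 input channels: the CT patch and the US patch, stacked
            nn.Conv2d(2, 32, kernel_size=3, padding=1),
            nn.LeakyReLU(0.1),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.LeakyReLU(0.1),
            # collapse spatial dimensions so the head stays convolutional
            nn.AdaptiveAvgPool2d(1),
        )
        # 1x1 convolution as a regression head for the displacement vector
        self.head = nn.Conv2d(64, 2, kernel_size=1)

    def forward(self, fixed, moving):
        x = torch.cat([fixed, moving], dim=1)   # (N, 2, H, W)
        return self.head(self.features(x)).flatten(1)  # (N, 2)


# Training would pair patches warped by known, artificially generated
# displacement vectors and minimize a regression loss against them:
net = DVNet()
fixed = torch.randn(4, 1, 32, 32)    # e.g. CT patches
moving = torch.randn(4, 1, 32, 32)   # e.g. US patches
pred_dv = net(fixed, moving)         # predicted (dx, dy) per patch pair
```

Because the network replaces both the similarity metric and the optimizer of a classical registration loop, a single forward pass yields a displacement estimate per patch pair.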

Keywords

CT · Ultrasound · Liver · Registration · CNN


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. Erasmus MC, Rotterdam, The Netherlands
  2. Delft University of Technology, Delft, The Netherlands