Abstract
Purpose
We propose to learn a 3D keypoint descriptor which we use to match keypoints extracted from full-body CT scans. Our method is inspired by 2D keypoint descriptor learning, which has been shown to outperform hand-crafted descriptors. Adapting such approaches to 3D images is challenging because of the lack of labelled training data and the high memory requirements of volumetric data.
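Once descriptors have been computed for two scans, keypoints are typically matched by nearest-neighbour search in descriptor space. The sketch below is an illustrative, hedged example of this standard step (the function name and the ratio-test threshold are our own choices, not the paper's implementation); it uses Lowe's ratio test to discard ambiguous matches:

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Match keypoints between two scans by nearest neighbour in
    descriptor space, rejecting ambiguous matches with Lowe's ratio test.
    desc_a: (Na, D) array, desc_b: (Nb, D) array of descriptors."""
    # Pairwise Euclidean distances between all descriptor pairs.
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=-1)
    nn = np.argsort(d, axis=1)[:, :2]            # two nearest neighbours per query
    idx = np.arange(len(desc_a))
    best, second = d[idx, nn[:, 0]], d[idx, nn[:, 1]]
    keep = best < ratio * second                 # ratio test: best must be clearly closer
    return [(i, int(nn[i, 0])) for i in np.flatnonzero(keep)]
```

A keypoint is kept only when its best match is markedly closer than the second-best, which filters out repetitive or uninformative structures.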
Method
We generate semi-synthetic training data. To this end, we first estimate the distribution of local affine inter-subject transformations using labelled anatomical landmarks on a small subset of the database. We then sample a large number of transformations from this distribution and warp unlabelled CT scans, for which we can subsequently establish reliable keypoint correspondences via guided matching. These correspondences serve as training data for our descriptor, which we represent by a CNN and train with the triplet loss using online triplet mining.
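The triplet loss with online mining mentioned above can be sketched as follows. This is a didactic NumPy version of batch-hard mining operating on precomputed embeddings, with an illustrative function name and margin; it is not the authors' training code, which operates inside a CNN training loop:

```python
import numpy as np

def batch_hard_triplet_loss(embeddings, labels, margin=0.2):
    """Triplet loss with online (batch-hard) mining: for each anchor,
    pick the hardest positive (same keypoint, largest distance) and the
    hardest negative (different keypoint, smallest distance) within the batch.
    embeddings: (N, D) descriptor vectors, labels: (N,) correspondence ids."""
    d = np.linalg.norm(embeddings[:, None] - embeddings[None, :], axis=-1)
    pos_mask = labels[:, None] == labels[None, :]
    np.fill_diagonal(pos_mask, False)            # an anchor is not its own positive
    neg_mask = labels[:, None] != labels[None, :]
    hardest_pos = np.where(pos_mask, d, -np.inf).max(axis=1)
    hardest_neg = np.where(neg_mask, d, np.inf).min(axis=1)
    # Hinge: push the hardest positive at least `margin` closer than the hardest negative.
    return np.maximum(hardest_pos - hardest_neg + margin, 0.0).mean()
```

Mining the hardest examples online within each batch avoids precomputing triplets and focuses the gradient on the pairs the current descriptor still confuses.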
Results
We carry out experiments on a reliability benchmark with synthetic data and on a registration task involving 20 CT volumes, using anatomical landmarks for evaluation. Our learned descriptor outperforms the 3D-SURF descriptor on both benchmarks while having a similar runtime.
Conclusion
We propose a new method to generate semi-synthetic training data and a new learned 3D keypoint descriptor. Experiments show an improvement over a hand-crafted descriptor. This is promising, as the literature has shown that jointly learning a detector and a descriptor yields a further performance boost.
Funding
This work was funded by the TOPACS ANR-19-CE45-0015 project of the French National Research Agency (ANR).
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Ethical approval
All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.
Informed consent
Informed consent was obtained by the VISCERAL project from all individual participants included in the study.
Cite this article
Loiseau–Witon, N., Kéchichian, R., Valette, S. et al. Learning 3D medical image keypoint descriptors with the triplet loss. Int J CARS 17, 141–146 (2022). https://doi.org/10.1007/s11548-021-02481-3