
Semi-supervised Keypoint Detector and Descriptor for Retinal Image Matching

  • Conference paper
  • In: Computer Vision – ECCV 2022 (ECCV 2022)
  • Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13681)
For retinal image matching (RIM), we propose SuperRetina, the first end-to-end method with a jointly trainable keypoint detector and descriptor. SuperRetina is trained in a novel semi-supervised manner. A small set of (nearly 100) images is incompletely labeled and used to supervise the network to detect keypoints on the vascular tree. To address the incompleteness of manual labeling, we propose Progressive Keypoint Expansion, which enriches the keypoint labels at each training epoch. By using a keypoint-based improved triplet loss as its description loss, SuperRetina produces highly discriminative descriptors at full input image size. Extensive experiments on multiple real-world datasets justify the viability of SuperRetina. Even when manual labeling is replaced by auto labeling, making the training process fully free of manual annotation, SuperRetina compares favorably against a number of strong baselines on two RIM tasks, i.e., image registration and identity verification.
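The description loss builds on the triplet idea: a keypoint's descriptor should lie closer to its match in the other image than to any non-matching descriptor. Below is a minimal NumPy sketch of the *standard* triplet loss only; the margin value and L2 distance are assumptions, and the paper's "improved" variant differs in details not reproduced here.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet loss over per-keypoint descriptor rows.

    anchor/positive/negative: arrays of shape (num_keypoints, dim).
    Pulls each anchor toward its positive and pushes it away from
    its negative by at least `margin` (illustrative values only).
    """
    d_pos = np.linalg.norm(anchor - positive, axis=-1)  # matched distance
    d_neg = np.linalg.norm(anchor - negative, axis=-1)  # non-matched distance
    return np.maximum(d_pos - d_neg + margin, 0.0).mean()
```

When the positive coincides with the anchor and the negative is far away, the hinge is inactive and the loss is zero; swapping the roles makes it positive.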



Footnotes

  1. As fundus images depict a small area of the retina, it is justified to apply the planar assumption in generating homographies [4, 31].

  3. We use bilinear upsampling, as the transposed convolutions originally used by U-Net are computationally more expensive and introduce unwanted checkerboard artifacts [13].

  4. Keypoint labeling requires little medical knowledge. The first author performed the labeling task in 4 working hours, which we believe was affordable.
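The planar assumption in footnote 1 means a pair of fundus images can be related by a single 3x3 homography fitted to matched keypoints. The sketch below shows the unnormalized direct linear transform (DLT) for this fit; it is an illustration only, and practical registration pipelines additionally use coordinate normalization and RANSAC to reject bad matches.

```python
import numpy as np

def estimate_homography(src, dst):
    """Fit a 3x3 homography H with dst ~ H @ src from >= 4 correspondences.

    Builds the standard DLT system A h = 0 (two rows per point pair)
    and takes the right singular vector of the smallest singular value.
    """
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.asarray(rows, dtype=float)
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix scale (and sign) so H[2, 2] == 1
```

With exact, noise-free correspondences the ground-truth homography is recovered up to the fixed scale.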

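The bilinear upsampling preferred over transposed convolutions in footnote 3 can be sketched in plain NumPy. This 2x, align_corners-style version is a hypothetical illustration of the operation, not the network's actual decoder code.

```python
import numpy as np

def upsample_bilinear_2x(x):
    """Bilinearly upsample a 2-D array by a factor of 2 (align_corners=True).

    Unlike a transposed convolution, this has no learned weights and
    cannot produce checkerboard artifacts.
    """
    h, w = x.shape
    # Sample positions in input coordinates, endpoints aligned.
    rows = np.linspace(0, h - 1, 2 * h)
    cols = np.linspace(0, w - 1, 2 * w)
    r0 = np.floor(rows).astype(int); r1 = np.minimum(r0 + 1, h - 1)
    c0 = np.floor(cols).astype(int); c1 = np.minimum(c0 + 1, w - 1)
    wr = (rows - r0)[:, None]  # vertical interpolation weights
    wc = (cols - c0)[None, :]  # horizontal interpolation weights
    top = (1 - wc) * x[np.ix_(r0, c0)] + wc * x[np.ix_(r0, c1)]
    bot = (1 - wc) * x[np.ix_(r1, c0)] + wc * x[np.ix_(r1, c1)]
    return (1 - wr) * top + wr * bot
```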

References

  1. Addison Lee, J., et al.: A low-dimensional step pattern analysis algorithm with application to multimodal retinal image registration. In: CVPR (2015)
  2. Aleem, S., Sheng, B., Li, P., Yang, P., Feng, D.D.: Fast and accurate retinal identification system: using retinal blood vasculature landmarks. IEEE Trans. Industr. Inf. 15(7), 4099–4110 (2018)
  3. Arandjelović, R., Zisserman, A.: Three things everyone should know to improve object retrieval. In: CVPR (2012)
  4. Cattin, P.C., Bay, H., van Gool, L., Székely, G.: Retina mosaicing using local features. In: MICCAI (2006)
  5. Chen, L.C., Papandreou, G., Kokkinos, I., Murphy, K., Yuille, A.L.: DeepLab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Trans. Pattern Anal. Mach. Intell. 40(4), 834–848 (2017)
  6. Dai, A., Chang, A.X., Savva, M., Halber, M., Funkhouser, T., Nießner, M.: ScanNet: richly-annotated 3D reconstructions of indoor scenes. In: CVPR (2017)
  7. DeTone, D., Malisiewicz, T., Rabinovich, A.: SuperPoint: self-supervised interest point detection and description. In: CVPR Workshops (2018)
  8. Hernandez-Matas, C., Zabulis, X., Argyros, A.A.: REMPE: registration of retinal images through eye modelling and pose estimation. IEEE J. Biomed. Health Inform. 24(12), 3362–3373 (2020)
  9. Hernandez-Matas, C., Zabulis, X., Triantafyllou, A., Anyfanti, P., Douma, S., Argyros, A.A.: FIRE: fundus image registration dataset. Model. Artif. Intell. Ophthalmol. 1(4), 16–28 (2017)
  10. Jiang, W., Trulls, E., Hosang, J., Tagliasacchi, A., Yi, K.M.: COTR: correspondence transformer for matching across images. In: ICCV (2021)
  11. Jonas, J.B., Xu, L., Wang, Y.: The Beijing eye study. Acta Ophthalmol. 87(3), 247–261 (2009)
  12. Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization. In: ICLR (2015)
  13. Laibacher, T., Weyde, T., Jalali, S.: M2U-Net: effective and efficient retinal vessel segmentation for real-world applications. In: CVPR Workshops (2019)
  14. Lajevardi, S.M., Arakala, A., Davis, S.A., Horadam, K.J.: Retina verification system based on biometric graph matching. IEEE Trans. Image Process. 22(9), 3625–3635 (2013)
  15. Lee, J.A., Liu, P., Cheng, J., Fu, H.: A deep step pattern representation for multimodal retinal image registration. In: ICCV (2019)
  16. Lin, T.Y., et al.: Microsoft COCO: common objects in context. In: ECCV (2014)
  17. Lowe, D.G.: Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 60(2), 91–110 (2004)
  18. Milletari, F., Navab, N., Ahmadi, S.A.: V-Net: fully convolutional neural networks for volumetric medical image segmentation. In: 3DV (2016)
  19. Oinonen, H., Forsvik, H., Ruusuvuori, P., Yli-Harja, O., Voipio, V., Huttunen, H.: Identity verification based on vessel matching from fundus images. In: ICIP (2010)
  20. Ortega, M., Penedo, M.G., Rouco, J., Barreira, N., Carreira, M.J.: Retinal verification using a feature points-based biometric pattern. EURASIP J. Adv. Signal Process. 2009(1), 1–13 (2009)
  21. Revaud, J., Weinzaepfel, P., de Souza, C.R., Humenberger, M.: R2D2: repeatable and reliable detector and descriptor. In: NeurIPS (2019)
  22. Rocco, I., Cimpoi, M., Arandjelović, R., Torii, A., Pajdla, T., Sivic, J.: Neighbourhood consensus networks. In: NeurIPS (2018)
  23. Rocco, I., Cimpoi, M., Arandjelović, R., Torii, A., Pajdla, T., Sivic, J.: NCNet: neighbourhood consensus networks for estimating image correspondences. IEEE Trans. Pattern Anal. Mach. Intell. 44(2), 1020–1034 (2022)
  24. Ronneberger, O., Fischer, P., Brox, T.: U-Net: convolutional networks for biomedical image segmentation. In: MICCAI (2015)
  25. Sarlin, P.E., DeTone, D., Malisiewicz, T., Rabinovich, A.: SuperGlue: learning feature matching with graph neural networks. In: CVPR (2020)
  26. Sattler, T., et al.: Benchmarking 6DOF outdoor visual localization in changing conditions. In: CVPR (2018)
  27. Schroff, F., Kalenichenko, D., Philbin, J.: FaceNet: a unified embedding for face recognition and clustering. In: CVPR (2015)
  28. Simon, C.: A new scientific method of identification. N. Y. State J. Med. 35(18), 901–906 (1935)
  29. Sun, J., Shen, Z., Wang, Y., Bao, H., Zhou, X.: LoFTR: detector-free local feature matching with transformers. In: CVPR (2021)
  30. Tian, Y., Yu, X., Fan, B., Wu, F., Heijnen, H., Balntas, V.: SOSNet: second order similarity regularization for local descriptor learning. In: CVPR (2019)
  31. Truong, P., Apostolopoulos, S., Mosinska, A., Stucky, S., Ciller, C., Zanet, S.D.: GLAMpoints: greedily learned accurate match points. In: ICCV (2019)
  32. Truong, P., Danelljan, M., Van Gool, L., Timofte, R.: Learning accurate dense correspondences and when to trust them. In: CVPR (2021)
  33. Wang, Y., et al.: A segmentation based robust deep learning framework for multimodal retinal image registration. In: ICASSP (2020)
  34. Wei, Q., et al.: Learn to segment retinal lesions and beyond. In: ICPR (2020)
  35. Wei, S.E., Ramakrishna, V., Kanade, T., Sheikh, Y.: Convolutional pose machines. In: CVPR (2016)
  36. Wei, W., et al.: Subfoveal choroidal thickness: the Beijing eye study. Ophthalmology 120(1), 175–180 (2013)



Acknowledgements

This work was supported by NSFC (No. 62172420, No. 62072463), BJNSF (No. 4202033), and the Public Computing Cloud, Renmin University of China.

Author information



Corresponding author

Correspondence to Xirong Li.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (PDF 4419 KB)


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Liu, J., Li, X., Wei, Q., Xu, J., Ding, D. (2022). Semi-supervised Keypoint Detector and Descriptor for Retinal Image Matching. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds) Computer Vision – ECCV 2022. ECCV 2022. Lecture Notes in Computer Science, vol 13681. Springer, Cham.


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-19802-1

  • Online ISBN: 978-3-031-19803-8

  • eBook Packages: Computer Science (R0)
