Imitation Learning Network for Fundus Image Registration Using a Divide-And-Conquer Approach

  • Siming Bayer (Email author)
  • Xia Zhong
  • Weilin Fu
  • Nishant Ravikumar
  • Andreas Maier
Conference paper
Part of the Informatik aktuell book series (INFORMAT)

Abstract

Comparison of the microvascular circulation in fundoscopic images is a non-invasive clinical indicator for the diagnosis and monitoring of diseases such as diabetes and hypertension. The differences between intra-patient images can be assessed quantitatively by registering serial acquisitions. Due to the variability of the images (e.g. contrast, luminosity) and the anatomical changes of the retina, the registration of fundus images remains a challenging task. Recently, several deep learning approaches have been proposed to register fundus images in an end-to-end fashion, achieving remarkable results. However, the results are difficult to interpret and analyze. In this work, we propose an imitation learning framework for the registration of 2D color funduscopic images for a wide range of applications such as disease monitoring, image stitching and super-resolution. We follow a divide-and-conquer approach to improve the interpretability of the proposed network, and analyze both the influence of the input image and the hyperparameters on the registration result. The results show that the proposed registration network reduces the initial target registration error by up to 95%.
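The reported 95% figure refers to the relative reduction of the target registration error (TRE), i.e. the mean distance between corresponding landmarks (e.g. vessel bifurcations) before and after registration. A minimal sketch of how such a reduction is computed, using made-up landmark coordinates and a simulated misalignment rather than the authors' actual data or network:

```python
import numpy as np

# Hypothetical corresponding landmarks (e.g. vessel bifurcations)
# in the fixed fundus image, in pixel coordinates.
fixed = np.array([[120.0, 80.0], [300.0, 150.0], [210.0, 260.0]])

# Simulated misalignment of the moving image: a pure translation.
moving = fixed + np.array([12.0, -8.0])

def tre(points_a, points_b):
    """Mean Euclidean distance between corresponding landmark pairs."""
    return float(np.mean(np.linalg.norm(points_a - points_b, axis=1)))

initial_tre = tre(moving, fixed)

# An imperfect registration that almost recovers the shift,
# leaving a small residual error per landmark.
registered = moving - np.array([11.5, -7.6])
final_tre = tre(registered, fixed)

reduction = 100.0 * (initial_tre - final_tre) / initial_tre
print(f"initial TRE: {initial_tre:.2f} px, "
      f"final TRE: {final_tre:.2f} px, "
      f"reduction: {reduction:.1f}%")
```

With these toy numbers the residual per-landmark error is about 0.64 px against an initial error of about 14.4 px, a reduction of roughly 95.6%; the evaluation protocol of the paper itself (landmark selection, transformation model) is not reproduced here.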

Copyright information

© Springer Fachmedien Wiesbaden GmbH, ein Teil von Springer Nature 2020

Authors and Affiliations

  • Siming Bayer¹ (Email author)
  • Xia Zhong¹
  • Weilin Fu¹
  • Nishant Ravikumar²
  • Andreas Maier¹
  1. Pattern Recognition Lab, FAU Erlangen-Nuremberg, Erlangen, Germany
  2. CISTIB, School of Computing and School of Medicine, University of Leeds, Leeds, United Kingdom