A Regenerated Feature Extraction Method for Cross-modal Image Registration

  • Jian Yang
  • Qi Wang
  • Xuelong Li
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10989)


Cross-modal image registration is an intractable problem in computer vision and pattern recognition. Inspired by the way humans gradually deepen their understanding during the cognitive process, we present a novel method to automatically register images of different modalities. Unlike most existing registration methods, which align images using a single type of feature or by directly combining multiple features, we employ a “regeneration” mechanism, coupled with dynamic routing, to adaptively detect and match features across modalities. Geometry-based maximally stable extremal regions (MSER) are first extracted to rapidly detect non-overlapping regions, which serve as the primitives of feature regeneration; from these regions, novel control points are generated with a salient image disks (SIDs) operator embedded in a sub-pixel iteration. A dynamic routing scheme then selects suitable features and matches the images. Experimental results on optical and multi-sensor images show that our method achieves better accuracy than state-of-the-art approaches.
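The pipeline described above can be illustrated with a minimal sketch. This is not the authors' implementation: the toy area-stability criterion below stands in for MSER, an intensity-weighted centroid stands in for the SID operator with sub-pixel iteration, the dynamic-routing step is omitted, and all function names are hypothetical.

```python
# Illustrative sketch only: stable-region detection -> control-point
# "regeneration" -> transform estimation from matched control points.
import numpy as np

def stable_regions(img, thresholds=range(50, 200, 10), max_var=0.5):
    """Toy MSER-like detector: keep threshold levels whose foreground
    area changes slowly as the threshold varies (a stability criterion)."""
    areas = [np.count_nonzero(img > t) for t in thresholds]
    levels = list(thresholds)
    stable = []
    for i in range(1, len(areas) - 1):
        if areas[i] == 0:
            continue
        variation = abs(areas[i + 1] - areas[i - 1]) / areas[i]
        if variation < max_var:
            stable.append(levels[i])
    return stable

def control_point(img, threshold):
    """'Regenerate' a control point as the intensity-weighted centroid of
    the thresholded region (a stand-in for the SID + sub-pixel step)."""
    ys, xs = np.nonzero(img > threshold)
    w = img[ys, xs].astype(float)
    return np.array([np.average(xs, weights=w), np.average(ys, weights=w)])

def estimate_affine(src, dst):
    """Least-squares 2x3 affine transform from matched control points."""
    A = np.hstack([src, np.ones((len(src), 1))])      # N x 3
    params, *_ = np.linalg.lstsq(A, dst, rcond=None)  # 3 x 2
    return params.T                                   # 2 x 3 affine matrix
```

Given control points detected independently in each modality, `estimate_affine` recovers the aligning transform; the paper's dynamic routing would additionally decide which feature type to trust for a given image pair.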


Keywords: Feature regeneration · MSER · SIDs · Image registration · Dynamic routing



Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. School of Computer Science and Center for OPTical IMagery Analysis and Learning (OPTIMAL), Northwestern Polytechnical University, Xi’an, People’s Republic of China
  2. Unmanned System Research Institute (USRI), Northwestern Polytechnical University, Xi’an, People’s Republic of China
  3. Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi’an, People’s Republic of China
