
Image Registration Based on Patch Matching Using a Novel Convolutional Descriptor

  • Wang Xie
  • Hongxia Gao
  • Zhanhong Chen
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11257)

Abstract

In this paper we introduce a novel feature descriptor based on deep learning that is trained to match patches of images depicting the same scene captured under different viewpoints and lighting conditions. Matching patches acquired under such varied circumstances is challenging. Our approach is motivated by the recent success of CNNs in classification tasks: we develop a model that maps a raw image patch to a low-dimensional feature vector. As our experiments show, the proposed descriptor clearly outperforms state-of-the-art descriptors and can serve as a direct replacement for SURF; the results further confirm that these techniques improve the performance of the proposed descriptor. We then propose an improved Random Sample Consensus (RANSAC) algorithm for removing false matching points. Finally, we show that our neural-network-based descriptor for image patch matching outperforms state-of-the-art methods on a number of benchmark datasets and can be used for high-quality image registration.
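To make the described pipeline concrete, the following is a minimal sketch, not the authors' published architecture or training procedure, of how a convolutional patch descriptor and RANSAC-based match filtering could be wired together. PyTorch and OpenCV, the 64×64 grayscale patch size, the 128-dimensional output, and the homography model are all illustrative assumptions.

```python
# Minimal sketch of a convolutional patch descriptor plus RANSAC filtering.
# Assumptions (not taken from the paper): PyTorch/OpenCV, 64x64 grayscale
# patches, a 128-D descriptor, and a homography as the registration model.
import cv2
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F


class PatchDescriptor(nn.Module):
    """Maps a raw grayscale patch to a low-dimensional, L2-normalised vector."""

    def __init__(self, dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                              # 64x64 -> 32x32
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                              # 32x32 -> 16x16
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),                      # -> (N, 128, 1, 1)
        )
        self.fc = nn.Linear(128, dim)

    def forward(self, patches: torch.Tensor) -> torch.Tensor:
        # patches: (N, 1, 64, 64) float tensor in [0, 1]
        x = self.features(patches).flatten(1)
        return F.normalize(self.fc(x), dim=1)             # unit-length descriptors


def filter_matches_ransac(pts_src: np.ndarray, pts_dst: np.ndarray,
                          thresh: float = 3.0):
    """Removes false matches by fitting a homography with RANSAC (OpenCV)."""
    H, mask = cv2.findHomography(pts_src, pts_dst, cv2.RANSAC, thresh)
    if mask is None:                                      # estimation failed
        return None, np.zeros(len(pts_src), dtype=bool)
    return H, mask.ravel().astype(bool)
```

In such a setup, descriptors extracted from detected keypoints in both images would be matched by nearest-neighbour search on their Euclidean distance, and the inliers surviving the RANSAC step would be used to estimate the final registration transform.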

Keywords

Feature descriptor · Deep learning · Patch matching


Acknowledgements

This work was supported by the Natural Science Foundation of China under Grant 61603105, the Fundamental Research Funds for the Central Universities under Grant 2015ZM128, and the Science and Technology Program of Guangzhou, China under Grants 201707010054 and 201704030072.

References

  1. Lowe, D.: Distinctive image features from scale-invariant key-points. IJCV 60(2), 91–110 (2004)
  2. Bay, H., Ess, A., Tuytelaars, T., Van Gool, L.: SURF: speeded up robust features. CVIU 110(3), 346–359 (2008)
  3. Hua, G., Brown, M., Winder, S.: Discriminant learning of local image descriptors. IEEE Trans. Pattern Anal. Mach. Intell. (2010)
  4. Trzcinski, T., Christoudias, C., Lepetit, V., Fua, P.: Learning image descriptors with the boosting-trick. In: NIPS, pp. 278–286 (2012)
  5. Trzcinski, T., Christoudias, M., Fua, P., Lepetit, V.: Boosting binary key-point descriptors. In: Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2013, Washington, DC, USA, pp. 2874–2881. IEEE Computer Society (2013)
  6. Verdie, Y., Yi, K., Fua, P., Lepetit, V.: TILDE: a temporally invariant learned detector. In: CVPR (2015)
  7. Strecha, C., Hansen, W., Van Gool, L., Fua, P., Thoennessen, U.: On benchmarking camera calibration and multi-view stereo for high resolution imagery. In: CVPR (2008)
  8. Ren, S., He, K., Girshick, R., Sun, J.: Faster R-CNN: towards real-time object detection with region proposal networks. In: Advances in Neural Information Processing Systems, vol. 28, pp. 91–99 (2015)
  9. Fischer, P., Dosovitskiy, A., Brox, T.: Descriptor matching with convolutional neural networks: a comparison to SIFT. arXiv (2014)
  10. Simo-Serra, E., Trulls, E., Ferraz, L., Kokkinos, I., Fua, P., Moreno-Noguer, F.: Discriminative learning of deep convolutional feature point descriptors. In: ICCV (2015)
  11. Han, X., Leung, T., Jia, Y., Sukthankar, R., Berg, A.: MatchNet: unifying feature and metric learning for patch-based matching. In: CVPR (2015)
  12. Russakovsky, O., et al.: ImageNet large scale visual recognition challenge. IJCV, 1–42 (2015)
  13. Brown, L.: A survey of image registration techniques. ACM Comput. Surv. (CSUR) 24(4), 325–376 (1992)
  14. Zitova, B., Flusser, J.: Image registration methods: a survey. Image Vis. Comput. 21(11), 977–1000 (2003)
  15. Lucas, B., Kanade, T.: An iterative image registration technique with an application to stereo vision (1981)
  16. Harris, C., Stephens, M.: A combined corner and edge detector. In: Alvey Vision Conference, Manchester, UK, vol. 15 (1988). https://doi.org/10.5244/c.2.23
  17. Lowe, D.: Distinctive image features from scale-invariant key-points. Int. J. Comput. Vis. 60, 91–110 (2004)
  18. Fischler, M., Bolles, R.: Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 24(6), 381–395 (1981)
  19. Rublee, E., Rabaud, V., Konolige, K., Bradski, G.: ORB: an efficient alternative to SIFT or SURF. In: ICCV (2011)
  20. Balntas, V., Johns, E., Tang, L., Mikolajczyk, K.: PN-Net: conjoined triple deep network for learning local image descriptors. arXiv preprint (2016)

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. School of Automation Science and Engineering, South China University of Technology, Guangzhou, People's Republic of China
