
Recovering Affine Features from Orientation- and Scale-Invariant Ones

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11361)

Abstract

An approach is proposed for recovering affine correspondences (ACs) from orientation- and scale-invariant features, e.g. SIFT. The method calculates the affine parameters consistent with a pre-estimated epipolar geometry from the point coordinates and from the scales and rotations obtained by the feature detector. The closed-form solution is given as the roots of a quadratic polynomial equation, thus yielding at most two real candidates and a fast procedure, i.e. <1 ms. It is shown, as a possible application, that the proposed algorithm allows estimating a homography for every single correspondence independently. It is validated both in our synthetic environment and on publicly available real-world datasets that the proposed technique leads to accurate ACs. Also, the estimated homographies have accuracy similar to that obtained by the state-of-the-art methods but, since only a single correspondence is required, the robust estimation, e.g. by Graph-Cut RANSAC, is an order of magnitude faster.
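As an illustration of the feature parameterization the method starts from, the sketch below (Python with OpenCV; img1 and img2 are hypothetical grayscale input images) extracts the per-keypoint SIFT scales and orientations and forms the similarity-only approximation of the local affine transformation. This covers only the input side of the pipeline: the paper's contribution, recovering the full affine parameters consistent with a pre-estimated epipolar geometry via the quadratic closed-form solution, is not reproduced here.

    # Minimal sketch: extract SIFT scales/orientations and form the
    # similarity-only approximation of the local affine transformation.
    # img1, img2 are assumed grayscale numpy arrays; names are illustrative.
    import cv2
    import numpy as np

    sift = cv2.SIFT_create()
    kps1, desc1 = sift.detectAndCompute(img1, None)
    kps2, desc2 = sift.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = matcher.match(desc1, desc2)

    def similarity_affine(kp1, kp2):
        # Scale ratio from the detected feature sizes and relative rotation
        # from the keypoint orientations (OpenCV angles are in degrees).
        q = kp2.size / kp1.size
        theta = np.deg2rad(kp2.angle - kp1.angle)
        c, s = np.cos(theta), np.sin(theta)
        return q * np.array([[c, -s],
                             [s,  c]])

    # 2x2 similarity approximations for each tentative correspondence;
    # the proposed method would instead recover affine matrices consistent
    # with a pre-estimated fundamental matrix.
    A_sim = [similarity_affine(kps1[m.queryIdx], kps2[m.trainIdx]) for m in matches]

In the application described in the abstract, each such correspondence, once upgraded to a full AC, would yield a homography on its own and could be fed one at a time to a robust estimator such as Graph-Cut RANSAC.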

Notes

Acknowledgement

D. Barath acknowledges the support of the OP VVV funded project CZ.02.1.01/0.0/0.0/16_019/0000765 and that of the Hungarian Scientific Research Fund (No. OTKA/NKFIH 120499).


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Centre for Machine Perception, Czech Technical University, Prague, Czech Republic
  2. Machine Perception Research Laboratory, MTA SZTAKI, Budapest, Hungary
