Machine Vision and Applications

Volume 25, Issue 3, pp 763–773

ReigSAC: fast discrimination of spurious keypoint correspondences on planar surfaces

  • Hugo Proença
Original Paper


Various methods have been proposed to detect and match special interest points (keypoints) in images, and some of them (e.g., SIFT and SURF) are among the most cited techniques in computer vision research. This paper describes an algorithm to discriminate between genuine and spurious keypoint correspondences on planar surfaces. We draw random samples from the set of correspondences, obtain a homography from each sample, and extract its principal eigenvector. Density estimation in that feature space then determines the most likely true transform, and this homography feeds a cost function that measures the goodness of each keypoint correspondence. The approach resembles the well-known RANSAC strategy; the key finding is that the principal eigenvectors of most genuine homographies tend to point in a similar direction. Hence, density estimation in the eigenspace dramatically reduces the number of transforms that must actually be evaluated to obtain reliable estimates. Our experiments, performed on hard image data sets, showed that the proposed approach yields effectiveness similar to the RANSAC strategy at a significantly lower computational cost, measured as the proportion of generated homographies that are actually evaluated.
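The procedure summarized above can be sketched in NumPy. This is an illustrative reconstruction, not the paper's implementation: the DLT homography fit, the Gaussian-kernel density estimate, the bandwidth value, and all function names are assumptions made for the sketch.

```python
import numpy as np

def fit_homography(src, dst):
    """Direct Linear Transform: estimate a 3x3 homography from >= 4 point pairs."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)          # null-space vector = homography entries
    return H / H[2, 2]

def principal_eigenvector(H):
    """Sign-normalised eigenvector of the largest-magnitude eigenvalue of H."""
    w, V = np.linalg.eig(H)
    v = np.real(V[:, np.argmax(np.abs(w))])
    v = v / np.linalg.norm(v)
    # fix the sign so that identical homographies map to identical vectors
    return v if v[np.argmax(np.abs(v))] >= 0 else -v

def correspondence_cost(H, src, dst):
    """Forward reprojection error of each correspondence under H."""
    p = np.column_stack([src, np.ones(len(src))]) @ H.T
    return np.linalg.norm(p[:, :2] / p[:, 2:3] - dst, axis=1)

def reigsac(src, dst, n_samples=300, bandwidth=0.05, rng=None):
    """Draw random 4-correspondence samples, fit a homography to each, and keep
    the hypothesis whose principal eigenvector lies at the mode of a Gaussian
    kernel density estimate over the eigenspace. Genuine homographies yield
    eigenvectors that cluster, so only the density mode needs full evaluation."""
    rng = np.random.default_rng(rng)
    hyps, eigvecs = [], []
    for _ in range(n_samples):
        idx = rng.choice(len(src), size=4, replace=False)
        try:
            H = fit_homography(src[idx], dst[idx])
            if not np.isfinite(H).all():
                continue                      # degenerate (near-singular) sample
            eigvecs.append(principal_eigenvector(H))
            hyps.append(H)
        except np.linalg.LinAlgError:
            continue
    E = np.stack(eigvecs)                     # shape (m, 3)
    d2 = ((E[:, None, :] - E[None, :, :]) ** 2).sum(-1)   # pairwise sq. distances
    scores = np.exp(-d2 / (2.0 * bandwidth ** 2)).sum(1)  # KDE value per hypothesis
    return hyps[int(np.argmax(scores))]
```

With mostly genuine correspondences, all-inlier samples reproduce (numerically) the same homography, so their eigenvectors pile up at one point of the eigenspace while contaminated samples scatter; the mode of the density therefore selects a clean hypothesis without scoring every generated transform against all correspondences.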


Keypoint detection · Keypoint matching · RANSAC



The financial support given by "FCT-Fundação para a Ciência e a Tecnologia" and "FEDER" in the scope of the PTDC/EIA/103945/2008 research project "NECOVID: Negative Covert Biometric Recognition" is acknowledged, as is the support given by IT-Instituto de Telecomunicações in the scope of the "NOISYRIS" research project.



Copyright information

© Springer-Verlag Berlin Heidelberg 2014

Authors and Affiliations

  1. Department of Computer Science, IT-Instituto de Telecomunicações, University of Beira Interior, Covilhã, Portugal
