Core Algorithm for Structural Verification of Keypoint Matches

  • Roman O. Malashin
Chapter
Part of the Intelligent Systems Reference Library book series (ISRL, volume 135)

Abstract

Outlier elimination is a crucial stage in keypoint-based methods, especially under extreme conditions. In this chapter, a fast and robust “Core” Structural Verification Algorithm (CSVA) is developed for a variety of applications and feature extraction methods. The proposed pipeline consists of many-to-one match exclusion, improved Hough clustering of keypoint matches, and a cluster verification procedure. The Hough clustering is improved by accurately incorporating the translation parameters of the similarity transform and by “partially ignoring” bin-boundary effects with two displaced accumulators. The cluster verification procedure relies on a modified RANSAC. It is also shown that the nearest neighbour ratio may eliminate too many inliers when two images are matched (especially under extreme conditions), and that simple many-to-one match exclusion is preferable. Theory and experiment confirm the suitability of the suggested parameters, algorithms, and modifications. The developed cluster analysis algorithms are both robust and computationally efficient: they exploit scene-specific information (the rigidity of objects in a scene), consume little memory, and need on average only 3 ms on a standard Intel i7 processor to verify 1,000 matches (orders of magnitude less than the time needed to generate those matches). With minor adaptation, CSVA has been successfully applied to practical tasks such as matching 3D indoor scenes, retrieving images of 3D scenes with a Bag of Words (BoW) approach, and matching aerial and space photographs with strong appearance changes caused by season, time of day, and viewpoint variation. Eliminating a large number of outliers with geometric constraints made all of these solutions reliable and accurate.
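
To make the pipeline described above concrete, the following is a minimal sketch, not the author's reference implementation, assuming OpenCV-style cv2.KeyPoint and cv2.DMatch structures (e.g. produced by a SIFT matcher). The bin sizes, the use of a single Hough accumulator (the chapter uses two displaced accumulators to soften bin-boundary effects), and OpenCV's stock RANSAC in place of the modified variant are all simplifying assumptions.

```python
import cv2
import numpy as np
from collections import defaultdict


def exclude_many_to_one(matches):
    """Keep only the best (lowest-distance) match for each keypoint of the second image."""
    best = {}
    for m in matches:
        if m.trainIdx not in best or m.distance < best[m.trainIdx].distance:
            best[m.trainIdx] = m
    return list(best.values())


def hough_cluster(matches, kp1, kp2, rot_bin_deg=30.0, loc_bin_px=64.0):
    """Vote each match into a coarse (rotation, log2-scale, dx, dy) accumulator.

    Single accumulator only; the chapter additionally uses a second, displaced
    accumulator to reduce the impact of bin boundaries.
    """
    clusters = defaultdict(list)
    for m in matches:
        p1, p2 = kp1[m.queryIdx], kp2[m.trainIdx]
        d_rot = (p2.angle - p1.angle) % 360.0          # rotation between patches
        d_log_scale = np.log2(p2.size / p1.size)       # one-octave scale bins
        dx = p2.pt[0] - p1.pt[0]
        dy = p2.pt[1] - p1.pt[1]
        key = (int(d_rot // rot_bin_deg), int(round(d_log_scale)),
               int(dx // loc_bin_px), int(dy // loc_bin_px))
        clusters[key].append(m)
    return clusters


def verify_largest_cluster(clusters, kp1, kp2, min_size=3):
    """Verify the most populated cluster with a RANSAC-fitted similarity model
    (plain OpenCV RANSAC stands in for the chapter's modified variant)."""
    cluster = max(clusters.values(), key=len)
    if len(cluster) < min_size:
        return []
    src = np.float32([kp1[m.queryIdx].pt for m in cluster])
    dst = np.float32([kp2[m.trainIdx].pt for m in cluster])
    model, inlier_mask = cv2.estimateAffinePartial2D(src, dst)  # RANSAC by default
    if model is None:
        return []
    return [m for m, ok in zip(cluster, inlier_mask.ravel()) if ok]
```

Running the three steps in sequence (exclude_many_to_one, then hough_cluster, then verify_largest_cluster) mirrors the overall flow of many-to-one exclusion, Hough clustering, and cluster verification for a pair of matched images.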

Keywords

Keypoint-based methods · Scene geometry · Structural analysis · Outlier elimination · Hough clustering · RANSAC · Bag of words · Nearest neighbour ratio · Aerospace images · 3D images

Notes

Acknowledgements

This work was supported by the Ministry of Education and Science of the Russian Federation and the Government of the Russian Federation, Grant 074-U01. Special thanks to Shounan An and Vadim Lutsiv for useful comments and suggestions.

Copyright information

© Springer International Publishing AG 2018

Authors and Affiliations

  1. National Research University of Information Technology, Mechanics and Optics, St. Petersburg, Russian Federation
  2. Pavlov Institute of Physiology, Russian Academy of Sciences, St. Petersburg, Russian Federation
