
Making Affine Correspondences Work in Camera Geometry Computation

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12356)

Abstract

Local features, e.g., SIFT and its affine and learned variants, provide region-to-region rather than point-to-point correspondences. This has recently been exploited to create new minimal solvers for classical problems such as homography, essential and fundamental matrix estimation. The main advantage of such solvers is their smaller sample size, e.g., only two instead of four matches are required to estimate a homography. Works proposing such solvers often claim a significant improvement in run-time thanks to fewer RANSAC iterations. We show that this argument is not valid in practice if the solvers are used naively. To overcome this, we propose guidelines for the effective use of region-to-region matches in a full model estimation pipeline. We propose a method for refining the local feature geometries by symmetric intensity-based matching, combine uncertainty propagation inside RANSAC with preemptive model verification, show a general scheme for computing the uncertainty of minimal solver results, and adapt the sample cheirality check for homography estimation. Our experiments show that affine solvers can achieve accuracy comparable to point-based solvers at faster run-times when following our guidelines. We make code available at https://github.com/danini/affine-correspondences-for-camera-geometry.
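
The run-time argument questioned in the abstract rests on the standard RANSAC stopping criterion: with sample size s, inlier ratio w, and required confidence p, roughly log(1-p)/log(1-w^s) random samples are needed before an all-inlier sample is drawn. A minimal Python sketch of this textbook formula (illustrative only, not taken from the paper or its released code) shows why halving the sample size for homography estimation shrinks the theoretical iteration count, even though, as the paper argues, this alone does not guarantee faster runs in practice:

    import math

    def ransac_iterations(inlier_ratio, sample_size, confidence=0.99):
        # Standard bound: number of random samples needed so that, with the
        # given confidence, at least one sample contains only inliers.
        p_all_inlier = inlier_ratio ** sample_size
        if p_all_inlier >= 1.0:
            return 1
        return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p_all_inlier))

    # Homography at 50% inliers: 4 point correspondences vs. 2 affine correspondences.
    print(ransac_iterations(0.5, 4))  # 72 samples with point correspondences
    print(ransac_iterations(0.5, 2))  # 17 samples with affine correspondences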

Acknowledgements

This research was supported by project Exploring the Mathematical Foundations of Artificial Intelligence (2018-1.2.1-NKP-00008), the Research Center for Informatics project CZ.02.1.01/0.0/0.0/16_019/0000765, the MSMT LL1901 ERC-CZ grant, the Swedish Foundation for Strategic Research (Semantic Mapping and Visual Navigation for Smart Robots), the Chalmers AI Research Centre (CHAIR) (VisLocLearn), the European Regional Development Fund under IMPACT No. CZ.02.1.01/0.0/0.0/15_003/0000468, EU H2020 ARtwin No. 856994, and EU H2020 SPRING No. 871245 projects.

Supplementary material

Supplementary material 1: 504452_1_En_42_MOESM1_ESM.pdf (PDF, 849 KB)


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Machine Perception Research Laboratory, SZTAKI, Budapest, Hungary
  2. VRG, Faculty of Electrical Engineering, Czech Technical University in Prague, Prague, Czech Republic
  3. Czech Institute of Informatics, Robotics and Cybernetics, CTU in Prague, Prague, Czech Republic
  4. Chalmers University of Technology, Gothenburg, Sweden
  5. Institute of Geodesy and Geoinformation, University of Bonn, Bonn, Germany
