Reliable Point Correspondences in Scenes Dominated by Highly Reflective and Largely Homogeneous Surfaces

  • Srimal Jayawardena
  • Stephen Gould
  • Hongdong Li
  • Marcus Hutter
  • Richard Hartley
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9008)

Abstract

Common Structure from Motion (SfM) tasks require reliable point correspondences between images taken from different views in order to estimate model parameters that describe the 3D scene geometry, for example when estimating the fundamental matrix from point correspondences using RANSAC. The amount of noise in the point correspondences drastically affects the estimation algorithm: the number of iterations needed for convergence grows exponentially with the level of noise. In scenes dominated by highly reflective and largely homogeneous surfaces, such as vehicle panels and buildings with a lot of glass, existing approaches yield a very high proportion of spurious point correspondences, and the number of iterations required by subsequent model estimation algorithms becomes intractable. We propose a novel method that uses descriptors evaluated at points along image edges to obtain a sufficiently high proportion of correct point correspondences. We show experimentally that our method recovers the epipolar geometry in scenes dominated by highly reflective and homogeneous surfaces better than common baseline methods on stereo images taken from considerably wide baselines.
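To make the iteration-count claim concrete, the sketch below uses the standard RANSAC sample-count formula (Fischler and Bolles), not anything specific to this paper: the expected number of iterations needed to draw at least one all-inlier minimal sample grows rapidly as the inlier ratio drops. The function name, the 7-point sample size for the fundamental matrix, and the example inlier ratios are illustrative assumptions, not figures from the paper.

```python
import math

def ransac_iterations(inlier_ratio, sample_size=7, confidence=0.99):
    """Expected number of RANSAC iterations needed to draw at least one
    all-inlier minimal sample with the given confidence.

    sample_size=7 corresponds to the 7-point fundamental matrix algorithm.
    """
    # Probability that a single random minimal sample contains no outliers.
    p_good_sample = inlier_ratio ** sample_size
    if p_good_sample <= 0.0:
        return float("inf")
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p_good_sample))

# Illustrative values only: dropping the inlier ratio from 0.5 to 0.25
# raises the required iteration count from a few hundred to tens of
# thousands for a 7-point sample, which is why a high proportion of
# spurious correspondences makes the estimation intractable.
for w in (0.9, 0.5, 0.25, 0.1):
    print(f"inlier ratio {w:.2f}: ~{ransac_iterations(w)} iterations")
```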


Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • Srimal Jayawardena (1)
  • Stephen Gould (2)
  • Hongdong Li (2)
  • Marcus Hutter (2)
  • Richard Hartley (2)

  1. Autonomous Systems Laboratory, CSIRO, Brisbane, Australia
  2. Research School of Computer Science, The ANU, Canberra, Australia