Good Edgels to Track: Beating the Aperture Problem with Epipolar Geometry

  • Tommaso Piccini
  • Mikael Persson
  • Klas Nordberg
  • Michael Felsberg
  • Rudolf Mester
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8926)

Abstract

An open issue in multiple-view geometry and structure from motion, when applied to real-life scenarios, is the sparsity of the matched key-points and of the reconstructed point cloud. We present an approach that can significantly increase the density of measured displacement vectors in a sparse matching or tracking setting by exploiting the partial information about the motion field provided by linear oriented image patches (edgels). Our approach assumes that the epipolar geometry of an image pair has already been computed, either in an earlier feature-based matching step or by a robustified differential tracker. We exploit key-points of a lower order, edgels, which cannot be matched uniquely in 2D but can be employed once a constraint on the motion is given. We present a method to extract edgels that can be tracked effectively under a known camera motion, and we show how a constrained version of the Lucas-Kanade tracking procedure can efficiently exploit the epipolar geometry to reduce the classical KLT optimization to a 1D search problem. The potential of the proposed methods is demonstrated by experiments on real driving sequences.
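
The central idea lends itself to a compact illustration: once the fundamental matrix of the image pair is known, a candidate edgel is kept only if its orientation is not parallel to its epipolar line (otherwise the aperture problem leaves the 1D search unconstrained), and its displacement is then estimated by matching the surrounding patch at positions along that line only. The sketch below is an illustrative reconstruction, not the authors' implementation: the function names, the SSD cost, the 20-degree orientation threshold, and the exhaustive 1-pixel search (standing in for a gradient-based 1D Lucas-Kanade refinement) are all assumptions.

    # Minimal sketch (assumptions: numpy, grayscale images as 2D arrays, a
    # fundamental matrix F mapping points in image 0 to epipolar lines in
    # image 1, integer pixel coordinates x0 = (col, row) away from the border).
    import numpy as np

    def epipolar_line(F, x):
        """Epipolar line l = F x in the second image, scaled so that
        (l[0], l[1]) is a unit normal to the line."""
        l = F @ np.array([x[0], x[1], 1.0])
        return l / np.hypot(l[0], l[1])

    def is_good_edgel(grad, line, min_angle_deg=20.0):
        """Keep an edgel only if its edge direction is sufficiently
        non-parallel to the epipolar line; otherwise the 1D search along
        the line is degenerate (the aperture problem again)."""
        d = np.array([-line[1], line[0]])           # unit direction of the line
        g = grad / (np.linalg.norm(grad) + 1e-12)   # image gradient at the edgel
        edge_dir = np.array([-g[1], g[0]])          # edge is normal to the gradient
        angle = np.degrees(np.arccos(np.clip(abs(edge_dir @ d), 0.0, 1.0)))
        return angle > min_angle_deg

    def track_edgel_1d(img0, img1, x0, F, half=7, search=20):
        """Exhaustive SSD search for the patch around x0 along its epipolar
        line in img1, with a 1 px step over t in [-search, search]."""
        l = epipolar_line(F, x0)
        d = np.array([-l[1], l[0]])                 # unit direction of the line
        p = np.array(x0, dtype=float)
        p -= (l[:2] @ p + l[2]) * l[:2]             # project x0 onto the line
        tmpl = img0[x0[1]-half:x0[1]+half+1, x0[0]-half:x0[0]+half+1].astype(float)
        best = (None, np.inf)
        for t in range(-search, search + 1):
            q = np.rint(p + t * d).astype(int)
            if (q - half < 0).any() or q[0] + half >= img1.shape[1] or q[1] + half >= img1.shape[0]:
                continue                            # search position leaves the image
            patch = img1[q[1]-half:q[1]+half+1, q[0]-half:q[0]+half+1].astype(float)
            cost = np.sum((patch - tmpl) ** 2)      # SSD cost along the 1D parameter t
            if cost < best[1]:
                best = (p + t * d, cost)
        return best                                 # (matched position, cost) or (None, inf)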

Keywords

Densification · Tracking · Epipolar geometry · Lucas-Kanade · Feature extraction · Edgels · Edges

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • Tommaso Piccini (1)
  • Mikael Persson (1)
  • Klas Nordberg (1)
  • Michael Felsberg (1)
  • Rudolf Mester (1, 2)
  1. CVL, ISY, Linköping University, Linköping, Sweden
  2. VSI Lab, C.S. Department, Goethe University, Frankfurt, Germany
