Journal of Real-Time Image Processing, Volume 11, Issue 4, pp. 785–798

Video Extruder: a semi-dense point tracker for extracting beams of trajectories in real time

  • Matthieu Garrigues
  • Antoine Manzanera (corresponding author)
  • Thierry M. Bernard
Special Issue Paper

Abstract

Two crucial aspects of general-purpose embedded visual point tracking are addressed in this paper. First, the algorithm should reliably track as many points as possible. Second, the computation should achieve real-time video processing, which is challenging on low-power embedded platforms. We propose a new multi-scale semi-dense point tracker called Video Extruder, whose purpose is to fill the gap between short-term, dense motion estimation (optical flow) and long-term, sparse salient point tracking. This paper presents a new detector, comprising a salience function with low computational complexity and a selection strategy that yields a large number of keypoints. Its density and reliability in mobile video scenarios are compared with those of the FAST detector. We then present a multi-scale matching strategy, combining regional coarse-to-fine matching with temporal prediction, which provides robustness to large camera and object accelerations. Filtering and merging strategies are then used to eliminate most of the erroneous or useless trajectories. Thanks to its high degree of parallelism, the proposed algorithm extracts beams of trajectories from the video very efficiently. We compare it with the state-of-the-art pyramidal Lucas–Kanade point tracker and show that, in short-range mobile video scenarios, it yields results of similar quality while being up to one order of magnitude faster. Three parallel implementations of the tracker are presented, on multi-core CPU, GPU, and ARM SoC platforms. On a commodity 2010 CPU, it can track 8,500 points in a 640 × 480 video at 150 Hz.
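
For intuition, the sketch below illustrates the two stages the abstract describes: semi-dense keypoint selection under a cheap salience measure, and multi-scale coarse-to-fine matching with a predicted offset. Every concrete choice here is an assumption made for illustration (gradient-norm salience, one keypoint per 8 × 8 cell, SSD block matching, a 2 × 2-averaging pyramid); it is not the paper's actual detector or matcher, which the article defines in detail.

```python
# Minimal sketch of a semi-dense tracking pipeline (illustrative only).
# Assumed simplifications, NOT the paper's method: gradient-norm salience,
# one keypoint per 8x8 cell, SSD block matching, 2x2-averaging pyramid.
import numpy as np

def build_pyramid(img, levels=3):
    """Coarse-to-fine pyramid by 2x2 block averaging (finest level first)."""
    pyr = [img.astype(np.float32)]
    for _ in range(levels - 1):
        p = pyr[-1]
        h, w = (p.shape[0] // 2) * 2, (p.shape[1] // 2) * 2
        pyr.append((p[0:h:2, 0:w:2] + p[1:h:2, 0:w:2]
                    + p[0:h:2, 1:w:2] + p[1:h:2, 1:w:2]) / 4.0)
    return pyr

def detect(img, max_pts=500, cell=8):
    """Keep the most salient pixel of each cell: a cheap way to obtain many,
    spatially well-distributed keypoints (semi-dense coverage)."""
    gy, gx = np.gradient(img.astype(np.float32))
    sal = gx * gx + gy * gy                     # stand-in salience function
    pts, (h, w) = [], sal.shape
    for y in range(0, h - cell, cell):
        for x in range(0, w - cell, cell):
            block = sal[y:y + cell, x:x + cell]
            dy, dx = np.unravel_index(np.argmax(block), block.shape)
            pts.append((float(block[dy, dx]), y + dy, x + dx))
    pts.sort(reverse=True)                      # strongest keypoints first
    return [(y, x) for _, y, x in pts[:max_pts]]

def match(prev, curr, y, x, dy0=0, dx0=0, r=3, search=4):
    """Best SSD match in curr for the (2r+1)^2 patch of prev at (y, x),
    searching +/-search pixels around the predicted offset (dy0, dx0)."""
    h, w = prev.shape
    if not (r <= y < h - r and r <= x < w - r):
        return None
    patch = prev[y - r:y + r + 1, x - r:x + r + 1]
    best, best_d = np.inf, None
    for dy in range(dy0 - search, dy0 + search + 1):
        for dx in range(dx0 - search, dx0 + search + 1):
            yy, xx = y + dy, x + dx
            if not (r <= yy < h - r and r <= xx < w - r):
                continue
            ssd = float(np.sum((patch - curr[yy - r:yy + r + 1,
                                             xx - r:xx + r + 1]) ** 2))
            if ssd < best:
                best, best_d = ssd, (dy, dx)
    return best_d

def track(prev_img, curr_img, points, levels=3):
    """Coarse-to-fine matching: estimate each displacement at the coarsest
    scale, then double and refine it at every finer scale."""
    prev_pyr = build_pyramid(prev_img, levels)
    curr_pyr = build_pyramid(curr_img, levels)
    tracks = []
    for (y, x) in points:
        dy = dx = 0
        ok = True
        for lvl in range(levels - 1, -1, -1):   # coarsest -> finest
            s = 2 ** lvl
            d = match(prev_pyr[lvl], curr_pyr[lvl], y // s, x // s, dy, dx)
            if d is None:
                ok = False                      # lost near a border
                break
            dy, dx = d
            if lvl > 0:
                dy, dx = 2 * dy, 2 * dx         # upscale to next level
        if ok:
            tracks.append(((y, x), (y + dy, x + dx)))
    return tracks

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame0 = rng.random((120, 160)).astype(np.float32)
    frame1 = np.roll(frame0, shift=(3, -5), axis=(0, 1))  # global motion
    pts = detect(frame0, max_pts=200)
    print(len(track(frame0, frame1, pts)), "points tracked")
```

The coarse-to-fine refinement is what buys robustness to large displacements: a ±4 pixel search at the coarsest of three levels covers roughly ±16 pixels at full resolution for the cost of three small searches, rather than one large one.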

Keywords

Point tracking · Optical flow · Beam of trajectories · Semi-dense · Real time

Acknowledgments

This work was part of a EUREKA-ITEA2 project and was funded by the French Ministry of Economy (General Directorate for Competitiveness, Industry and Services).

References

  1. Botella, G., Martín, H.J.A., Santos, M., Meyer-Baese, U.: FPGA-based multimodal embedded sensor system integrating low- and mid-level vision. Sensors 11(8), 8164–8179 (2011). doi:10.3390/s110808164, URL: http://www.mdpi.com/1424-8220/11/8/8164/
  2. Bouchafa, S., Zavidovique, B.: c-Velocity: a flow-cumulating uncalibrated approach for 3D plane detection. Int. J. Comput. Vis. 97(2), 148–166 (2012)
  3. Bouguet, J.Y.: Pyramidal implementation of the affine Lucas Kanade feature tracker: description of the algorithm. Intel Corporation (2001)
  4. Brostow, G.J., Shotton, J., Fauqueur, J., Cipolla, R.: Segmentation and recognition using structure from motion point clouds. In: European Conference on Computer Vision (ECCV'08), Part I, pp. 44–57 (2008)
  5. Chaudhry, R., Ravichandran, A., Hager, G., Vidal, R.: Histograms of oriented optical flow and Binet–Cauchy kernels on nonlinear dynamical systems for the recognition of human actions. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR'09), pp. 1932–1939 (2009)
  6. d’Angelo, E., Paratte, J., Puy, G., Vandergheynst, P.: Fast TV-L1 optical flow for interactivity. In: IEEE International Conference on Image Processing (ICIP'11), pp. 1925–1928. Brussels (2011)
  7. Doyle, D.D., Jennings, A.L., Black, J.T.: Optical flow background estimation for real-time pan/tilt camera object tracking. Measurement 48, 195–207 (2014). doi:10.1016/j.measurement.2013.10.025, URL: http://linkinghub.elsevier.com/retrieve/pii/S0263224113005241
  8. Farnebäck, G.: Two-frame motion estimation based on polynomial expansion. In: Image Analysis, Springer, pp. 363–370 (2003)
  9. Fassold, H., Rosner, J., Schallauer, P., Bailer, W.: Realtime KLT feature point tracking for high definition video. In: Computer Graphics, Computer Vision and Mathematics (GraVisMa'09), Plzen (2009)
  10. Garrigues, M., Manzanera, A.: Real time semi-dense point tracking. In: Campilho, A., Kamel, M. (eds.) International Conference on Image Analysis and Recognition (ICIAR 2012), Lecture Notes in Computer Science, vol. 7324, pp. 245–252. Springer, Aveiro (2012)
  11. Hoberock, J., Bell, N.: Thrust: a parallel template library, version 1.7.0 (2010). URL: http://thrust.github.io/
  12. Isard, M., Blake, A.: Condensation—conditional density propagation for visual tracking. Int. J. Comput. Vis. 29, 5–28 (1998)
  13. Nguyen, T., Manzanera, A.: Action recognition using bag of features extracted from a beam of trajectories. In: IEEE International Conference on Image Processing (ICIP'13), Melbourne (2013)
  14. Nguyen, T., Manzanera, A., Garrigues, M.: Motion trend patterns for action modelling and recognition. In: International Conference on Computer Analysis of Images and Patterns (CAIP'13), York (2013)
  15. Rabe, C., Franke, U., Koch, R.: Dense 3D motion field estimation from a moving observer in real time. In: Smart Mobile In-Vehicle Systems. Springer (2014). URL: http://link.springer.com/chapter/10.1007/978-1-4614-9120-0_2
  16. Rosten, E., Drummond, T.: Fusing points and lines for high performance tracking. In: IEEE International Conference on Computer Vision (ICCV'05), vol. 2, pp. 1508–1511 (2005)
  17. Rosten, E., Drummond, T.: Machine learning for high-speed corner detection. In: European Conference on Computer Vision (ECCV'06), vol. 1, pp. 430–443 (2006)
  18. Rosten, E., Porter, R., Drummond, T.: Faster and better: a machine learning approach to corner detection. IEEE Trans. Pattern Anal. Mach. Intell. 32, 105–119 (2010)
  19. Sand, P., Teller, S.: Particle video: long-range motion estimation using point trajectories. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR'06), pp. 2195–2202. New York (2006)
  20. Schmid, C., Mohr, R., Bauckhage, C.: Evaluation of interest point detectors. Int. J. Comput. Vis. 37(2), 151–172 (2000)
  21. Sekkati, H., Mitiche, A.: Joint optical flow estimation, segmentation, and 3D interpretation with level sets. Comput. Vis. Image Underst. 103(2), 89–100 (2006)
  22. Sinha, S.N., Frahm, J.M., Pollefeys, M., Genc, Y.: GPU-based video feature tracking and matching. In: EDGE, Workshop on Edge Computing Using New Commodity Architectures, vol. 278, p. 4321 (2006)
  23. Sinha, S.N., Frahm, J.M., Pollefeys, M., Genc, Y.: Feature tracking and matching in video using programmable graphics hardware. Mach. Vis. Appl. 22(1), 207–217 (2007)
  24. Tomasi, C., Kanade, T.: Detection and tracking of point features. Carnegie Mellon University Technical Report CMU-CS-91-132 (1991)
  25. Wang, H., Kläser, A., Schmid, C., Liu, C.-L.: Action recognition by dense trajectories. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR'11), pp. 3169–3176. Colorado Springs (2011)

Copyright information

© Springer-Verlag Berlin Heidelberg 2014

Authors and Affiliations

  • Matthieu Garrigues (1)
  • Antoine Manzanera (1), corresponding author
  • Thierry M. Bernard (1)

  1. ENSTA-ParisTech, Palaiseau Cedex, France
