Synchronizing Video Sequences from Temporal Epipolar Lines Analysis

  • Vincent Guitteny
  • Ryad Benosman
  • Christophe Charbuillet
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5259)


This paper addresses the synchronization of a multi-camera system observing dynamic scenes. The proposed method does not rely on local image features, which are generally not robust to occlusions and noise. Instead, a new approach is introduced that temporally aligns video sequences by analyzing the traces of moving objects, in either the frequency or the spatial domain. The method exploits the stereoscopic constraint, applying a temporal correlation to the evolution of epipolar lines over time. Experimental results on real data are presented, and the estimated temporal alignments are quantitatively evaluated against ground truth from an electronic timing device, under conditions of noise and occlusion.
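The core idea, recovering a time offset by correlating temporal signals derived from corresponding epipolar lines, can be sketched as follows. This is a minimal illustration under assumptions, not the authors' exact formulation: the motion-energy reduction along the line and both helper names (`epipolar_line_signal`, `estimate_time_offset`) are hypothetical.

```python
import numpy as np

def epipolar_line_signal(frames, line_pixels):
    """Reduce each frame to one scalar: mean absolute temporal change
    along a fixed epipolar line (an illustrative motion-energy proxy).
    frames: sequence of 2D grayscale arrays; line_pixels: (N, 2) array
    of integer (x, y) coordinates sampled along the epipolar line."""
    profiles = np.stack([f[line_pixels[:, 1], line_pixels[:, 0]] for f in frames])
    diffs = np.abs(np.diff(profiles, axis=0))  # frame-to-frame change
    return diffs.mean(axis=1)                  # one value per frame pair

def estimate_time_offset(signal_a, signal_b):
    """Estimate the integer frame offset between two temporal signals by
    (zero-mean, unit-variance) cross-correlation. A positive result means
    signal_a lags signal_b by that many frames."""
    a = (signal_a - signal_a.mean()) / (signal_a.std() + 1e-12)
    b = (signal_b - signal_b.mean()) / (signal_b.std() + 1e-12)
    corr = np.correlate(a, b, mode="full")
    return int(np.argmax(corr)) - (len(b) - 1)
```

A sub-frame offset could be recovered by interpolating the correlation peak; the paper's frequency-domain variant would instead compare the signals' spectra.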


Keywords: Video Sequence · Video Stream · Interest Point · Impulsive Noise · Epipolar Line





Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Vincent Guitteny (affiliation 1)
  • Ryad Benosman (affiliation 2)
  • Christophe Charbuillet (affiliation 2)

  2. UPMC Univ Paris 06, F-75005, Paris, France; CNRS, FRE 2507, ISIR, Institut des Systèmes Intelligents et de Robotique, Paris, France
