Feature-Based Multi-video Synchronization with Subframe Accuracy

  • A. Elhayek
  • C. Stoll
  • K. I. Kim
  • H.-P. Seidel
  • C. Theobalt
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7476)

Abstract

We present a novel algorithm for temporally synchronizing multiple videos capturing the same dynamic scene. Our algorithm relies on general image features and does not require explicitly tracking any specific object, making it applicable to general scenes with complex motion. This is facilitated by our new trajectory filtering and matching schemes, which correctly identify matching pairs of trajectories (inliers) from a large set of candidate matches, many of which are outliers. We find globally optimal synchronization parameters using a stable RANSAC-based optimization approach. For multi-video synchronization, the algorithm identifies an informative subset of video pairs, which prevents the RANSAC algorithm from being biased by outliers. Experiments on two-camera and multi-camera synchronization demonstrate the performance of our algorithm.
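To make the RANSAC-based search concrete, the following is a minimal illustrative sketch, not the paper's actual method: it assumes correspondences between two videos reduce to pairs of timestamps (t_a, t_b) related by a linear temporal model t_b = alpha * t_a + delta (frame-rate ratio and offset), and robustly fits that model despite outlier matches. The function names, the minimal sample size of two, and the inlier threshold are hypothetical choices for this sketch.

```python
import random

def fit_model(pairs):
    """Least-squares fit of t_b = alpha * t_a + delta over the given pairs."""
    n = len(pairs)
    sa = sum(a for a, _ in pairs)
    sb = sum(b for _, b in pairs)
    saa = sum(a * a for a, _ in pairs)
    sab = sum(a * b for a, b in pairs)
    denom = n * saa - sa * sa
    if denom == 0:          # degenerate sample (identical timestamps)
        return None
    alpha = (n * sab - sa * sb) / denom
    delta = (sb - alpha * sa) / n
    return alpha, delta

def ransac_sync(pairs, iters=2000, tol=0.25, seed=0):
    """Robustly estimate (alpha, delta) so that t_b ~ alpha * t_a + delta.

    pairs : list of (t_a, t_b) timestamp correspondences, possibly with outliers
    tol   : inlier threshold in time units (sub-frame tolerances give
            sub-frame accuracy in the refined estimate)
    """
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        sample = rng.sample(pairs, 2)   # minimal set for a 2-parameter model
        model = fit_model(sample)
        if model is None:
            continue
        alpha, delta = model
        inliers = [(a, b) for a, b in pairs if abs(alpha * a + delta - b) < tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = model, inliers
    # Refine on the full consensus set for a more accurate estimate.
    if len(best_inliers) >= 2:
        return fit_model(best_inliers)
    return best_model
```

For example, with thirty correct correspondences obeying t_b = t_a + 3.4 and a handful of mismatched trajectories, the refit on the consensus set recovers the sub-frame offset 3.4 despite the outliers. The paper's actual formulation operates on feature trajectories and epipolar geometry rather than raw timestamp pairs, but the sample-score-refine loop is the same RANSAC pattern.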

Keywords

Span tree, fundamental matrix, epipolar line, synchronization algorithm, feature trajectory.
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

References

  1. Ballan, L., Brostow, G.J., Puwein, J., Pollefeys, M.: Unstructured video-based rendering: interactive exploration of casually captured videos. In: ACM SIGGRAPH (2010)
  2. Wedge, D., Huynh, D.Q., Kovesi, P.: Motion guided video sequence synchronization. In: Narayanan, P.J., Nayar, S.K., Shum, H.-Y. (eds.) ACCV 2006, Part II. LNCS, vol. 3852, pp. 832–841. Springer, Heidelberg (2006)
  3. Stein, G.P.: Tracking from multiple view points: self-calibration of space and time. In: DARPA IU Workshop, pp. 521–527 (1998)
  4. Dai, C., Zheng, Y., Li, X.: Subframe video synchronization via 3D phase correlation. In: IEEE International Conference on Image Processing (2006)
  5. Caspi, Y., Simakov, D., Irani, M.: Feature-based sequence-to-sequence matching. Int. J. Comput. Vision 68, 53–64 (2006)
  6. Sinha, S.N., Pollefeys, M.: Synchronization and calibration of camera networks from silhouettes. In: ICPR (2004)
  7. Meyer, B., Stich, T., Pollefeys, M.: Subframe temporal alignment of non-stationary cameras. In: BMVC (2008)
  8. Hasler, N., Rosenhahn, B., Thormählen, T., Wand, M., Gall, J., Seidel, H.P.: Markerless motion capture with unsynchronized moving cameras. In: CVPR (2009)
  9. Shrestha, P., Weda, H., Barbieri, M., Sekulovski, D.: Synchronization of multiple video recordings based on still camera flashes. In: ACM Multimedia (2006)
  10. Pádua, F.L.C., Carceroni, R.L., Santos, G.A.M.R., Kutulakos, K.N.: Linear sequence-to-sequence alignment. IEEE Trans. Pattern Anal. Mach. Intell. 32, 304–320 (2010)
  11. Lowe, D.G.: Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vision 60, 91–110 (2004)
  12. Hartley, R., Zisserman, A.: Multiple View Geometry in Computer Vision, 2nd edn. Cambridge University Press (2004)

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • A. Elhayek¹
  • C. Stoll¹
  • K. I. Kim¹
  • H.-P. Seidel¹
  • C. Theobalt¹
  1. MPI Informatik, Germany