Fundamental Matrices from Moving Objects Using Line Motion Barcodes

  • Yoni Kasten
  • Gil Ben-Artzi
  • Shmuel Peleg
  • Michael Werman
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9906)


Computing the epipolar geometry between cameras with very different viewpoints is often difficult: the appearance of objects can vary greatly between views, making it hard to find corresponding feature points. Prior methods searched for corresponding epipolar lines using points on the convex hull of the silhouette of a single moving object; these methods fail when the scene includes multiple moving objects. This paper extends previous work to scenes with multiple moving objects by using “Motion Barcodes”, a temporal signature of lines. Corresponding epipolar lines have similar motion barcodes, so candidate pairs of corresponding epipolar lines are found by the similarity of their motion barcodes. As in previous methods, we assume that the cameras are relatively stationary and that moving objects have already been extracted using background subtraction.
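The idea described above — a binary temporal signature per line, matched across views by similarity — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the pixel-distance threshold, the mask format, and the function names are assumptions, and the similarity measure shown is plain normalized correlation.

```python
import numpy as np

def motion_barcode(line, masks):
    """Binary temporal signature of an image line.

    Entry t is 1 iff the line passes near a foreground (moving-object)
    pixel in frame t.  `line` is (a, b, c) for ax + by + c = 0, and
    `masks` is a sequence of binary background-subtraction masks.
    """
    a, b, c = line
    barcode = np.zeros(len(masks), dtype=np.uint8)
    for t, mask in enumerate(masks):
        ys, xs = np.nonzero(mask)          # foreground pixel coordinates
        if len(xs) == 0:
            continue                       # no moving object in this frame
        # perpendicular distance of each foreground pixel from the line
        d = np.abs(a * xs + b * ys + c) / np.hypot(a, b)
        barcode[t] = 1 if np.any(d < 1.0) else 0   # 1-pixel threshold (assumed)
    return barcode

def barcode_similarity(b1, b2):
    """Normalized correlation between two barcodes; a high value marks
    the pair of lines as a candidate corresponding epipolar-line pair."""
    b1 = b1 - b1.mean()
    b2 = b2 - b2.mean()
    denom = np.linalg.norm(b1) * np.linalg.norm(b2)
    return float(b1 @ b2 / denom) if denom > 0 else 0.0
```

In this sketch, lines whose barcodes correlate strongly across the two views would be kept as candidate epipolar-line pairs from which a fundamental matrix can then be estimated.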


Keywords: Fundamental matrix · Epipolar geometry · Motion barcodes · Epipolar lines · Multi-camera calibration



This research was supported by Google, by Intel ICRI-CI, by DFG, and by the Israel Science Foundation.



Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  • Yoni Kasten (1)
  • Gil Ben-Artzi (1)
  • Shmuel Peleg (1)
  • Michael Werman (1)
  1. School of Computer Science and Engineering, The Hebrew University of Jerusalem, Jerusalem, Israel
