Statistical Inference of Motion in the Invisible

  • Haroon Idrees
  • Imran Saleemi
  • Mubarak Shah
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7575)


This paper focuses on the unexplored problem of inferring the motion of objects that are invisible to all cameras in a multiple-camera setup. Rather than merely learning relationships between disjoint cameras, we take the next step and infer the exact spatiotemporal behavior of objects while they are invisible. Given object trajectories within the disjoint cameras' fields of view (FOVs), we introduce constraints on the behavior of objects as they travel through the unobservable areas that lie in between. These constraints include vehicle following (the trajectories of vehicles adjacent to each other at entry and exit are time-shifted relative to each other), collision avoidance (no two trajectories pass through the same location at the same time), and temporal smoothness (the allowable movements of vehicles are restricted by physical limits). The constraints are embedded in a generalized, global cost function for the entire scene that incorporates the influences of all objects; a bounded minimization using an interior point algorithm then yields trajectory representations that define the exact dynamics and behavior of objects while invisible. Finally, a statistical representation of motion in the entire scene is estimated to obtain probabilistic distributions over individual behaviors, such as turns, constant-velocity motion, deceleration to a stop, and acceleration from rest, for evaluation and visualization. Experiments are reported on real-world videos from multiple disjoint cameras in the NGSIM dataset, and both qualitative and quantitative analyses confirm the validity of our approach.
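The optimization described above can be illustrated with a minimal sketch: a vehicle's unobserved positions between its camera exit and re-entry are inferred by minimizing a toy scalar cost combining a temporal-smoothness term (penalizing acceleration) and a soft collision-avoidance term against a neighboring vehicle, subject to bounds. All variable names, weights, and the exponential proximity penalty are illustrative assumptions, not the paper's actual formulation, and SciPy's SLSQP solver stands in here for the interior point algorithm used by the authors.

```python
# Hedged sketch of inferring motion in an unobservable gap between cameras.
# Entry/exit positions, weights, and penalty forms are assumed for illustration.
import numpy as np
from scipy.optimize import minimize

T = 8                                 # number of invisible time steps (assumed)
entry, exit_ = 0.0, 14.0              # known 1-D positions at the gap boundaries
other = np.linspace(1.0, 15.0, T)     # fixed trajectory of a neighboring vehicle

def cost(x):
    # Full path: observed entry, unknown interior points, observed exit.
    path = np.concatenate(([entry], x, [exit_]))
    # Temporal smoothness: penalize squared second differences (acceleration).
    smooth = np.sum(np.diff(path, 2) ** 2)
    # Collision avoidance: soft barrier that grows as the vehicles coincide.
    avoid = np.sum(np.exp(-np.abs(x - other)))
    return smooth + 0.1 * avoid

# Initial guess: constant-velocity interpolation through the gap.
x0 = np.linspace(entry, exit_, T + 2)[1:-1]

# Bounded minimization (SLSQP as a stand-in for an interior point solver).
res = minimize(cost, x0, method="SLSQP",
               bounds=[(entry - 1.0, exit_ + 1.0)] * T)
inferred = res.x                      # estimated positions while invisible
```

The recovered `inferred` trajectory stays consistent with the observed boundary conditions while trading off smoothness against proximity to the neighboring vehicle; the paper's actual formulation couples all objects in the scene in one global cost.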


Keywords: Collision Avoidance · Multiple Camera · Interior Point Algorithm · Camera Network · Object Trajectory


References

  1. Stauffer, C., Tieu, K.: Automated multi-camera planar tracking correspondence modeling. In: CVPR (2003)
  2. Khan, S., Shah, M.: Consistent labeling of tracked objects in multiple cameras with overlapping fields of view. IEEE PAMI 25 (2003)
  3. Makris, D., Ellis, T., Black, J.: Bridging the gaps between cameras. In: CVPR (2004)
  4. Gheissari, N., Sebastian, T., Hartley, R.: Person reidentification using spatiotemporal appearance. In: CVPR, vol. 2 (2006)
  5. Hu, W., Hu, M., Zhou, X., Tan, T., Lou, J., Maybank, S.: Principal axis-based correspondence between multiple cameras for people tracking. IEEE PAMI 28(4) (2006)
  6. Gray, D., Tao, H.: Viewpoint invariant pedestrian recognition with an ensemble of localized features. In: Forsyth, D., Torr, P., Zisserman, A. (eds.) ECCV 2008, Part I. LNCS, vol. 5302, pp. 262–275. Springer, Heidelberg (2008)
  7. Loy, C.C., Xiang, T., Gong, S.: Multi-camera activity correlation analysis. In: CVPR (2009)
  8. Pellegrini, S., Ess, A., Schindler, K., van Gool, L.: You'll never walk alone: Modeling social behavior for multi-target tracking. In: ICCV (2009)
  9. Kikuchi, S., Chakroborty, P.: Car following model based on fuzzy inference system. Transport. Res. Record (1992)
  10. Nagel, K., Schreckenberg, M.: A cellular automaton model for freeway traffic. J. Phys. I France 2(12) (1992)
  11. Newell, G.: Nonlinear effects in the dynamics of car following. Ops. Res. 9(2) (1961)
  12. Javed, O., Shafique, K., Rasheed, Z., Shah, M.: Modeling inter-camera space-time and appearance relationships for tracking across non-overlapping views. CVIU 109 (2008)
  13. Huang, T., Russell, S.: Object identification in a Bayesian context. In: IJCAI (1997)
  14. Kettnaker, V., Zabih, R.: Bayesian multi-camera surveillance. In: CVPR (1999)
  15. Prosser, B., Gong, S., Xiang, T.: Multi-camera matching using bi-directional cumulative brightness transfer functions. In: BMVC (2008)
  16. Dockstader, S., Tekalp, A.: Multiple camera fusion for multi-object tracking. In: IEEE WMOT (2001)
  17. Soto, C., Song, B., Roy-Chowdhury, A.: Distributed multi-target tracking in a self-configuring camera network. In: CVPR (2009)
  18. Song, B., Kamal, A., Soto, C., Ding, C., Farrell, J., Roy-Chowdhury, A.: Tracking and activity recognition through consensus in distributed camera networks. IEEE TIP (2010)
  19. Kamal, A., Ding, C., Song, B., Farrell, J.A., Roy-Chowdhury, A.: A generalized Kalman consensus filter for wide area video networks. In: IEEE CDC (2011)
  20. Tieu, K., Dalley, G., Grimson, W.: Inference of non-overlapping camera network topology by measuring statistical dependence. In: ICCV (2005)
  21. Stauffer, C.: Learning to track objects through unobserved regions. In: WMVC (2005)
  22. Atev, S., Arumugam, H., Masoud, O., Janardan, R., Papanikolopoulos, N.: A vision-based approach to collision prediction at traffic intersections. IEEE TIT Systems 6(4) (2005)
  23. van den Berg, J., Guy, S., Lin, M., Manocha, D.: Reciprocal n-body collision avoidance. 70, 3–19 (2011)
  24. LaValle, S.M.: Planning Algorithms. Cambridge University Press (2006)
  25. Saleemi, I., Hartung, L., Shah, M.: Scene understanding by statistical modeling of motion patterns. In: CVPR (2010)
  26. Next Generation Simulation (NGSIM) dataset,

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Haroon Idrees ¹
  • Imran Saleemi ¹
  • Mubarak Shah ¹

  1. Computer Vision Lab, University of Central Florida, Orlando, USA
