Extraction of Long-Duration Moving Object Trajectories from Curtailed Tracks

Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 704)


Object tracking remains one of the critical challenges in visual surveillance, as it is difficult to track each moving object in a crowded scene. This paper proposes a new approach for tracking moving objects over longer durations. First, key points are tracked for short durations using a state-of-the-art feature tracker. Next, the resulting features are grouped and linked in the spatiotemporal domain. Finally, a single trajectory is created for each object or group of similar objects. We have tested the method on publicly available video datasets in which more than 100 people move randomly. The results reveal that the proposed method is highly effective at extracting long-duration trajectories from the curtailed tracklets produced by a short-duration feature tracker.
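The paper itself gives no implementation, but the core idea of the pipeline's linking stage can be sketched: short tracklets (e.g. from a KLT-style feature tracker) are chained into longer trajectories by matching each tracklet's end point to a later tracklet's start point within a spatiotemporal window. The class names, greedy matching strategy, and thresholds below are illustrative assumptions, not the authors' actual method:

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class Tracklet:
    """A short track: a start frame plus one (x, y) position per frame."""
    start_frame: int
    points: list

    @property
    def end_frame(self):
        return self.start_frame + len(self.points) - 1

def link_tracklets(tracklets, max_gap=5, max_dist=20.0):
    """Greedily chain short tracklets into longer trajectories.

    Two tracklets are linked when the second starts at most `max_gap`
    frames after the first ends and its first point lies within
    `max_dist` pixels of the first tracklet's last point.
    Returns a list of chains (each chain is a list of Tracklets).
    """
    tracklets = sorted(tracklets, key=lambda t: t.start_frame)
    used = [False] * len(tracklets)
    trajectories = []
    for i, t in enumerate(tracklets):
        if used[i]:
            continue
        used[i] = True
        chain, cur = [t], t
        extended = True
        while extended:
            extended = False
            for j, cand in enumerate(tracklets):
                if used[j]:
                    continue
                gap = cand.start_frame - cur.end_frame
                if 0 < gap <= max_gap:
                    ex, ey = cur.points[-1]
                    sx, sy = cand.points[0]
                    if hypot(sx - ex, sy - ey) <= max_dist:
                        used[j] = True
                        chain.append(cand)
                        cur = cand
                        extended = True
                        break
        trajectories.append(chain)
    return trajectories
```

A production system would replace the greedy endpoint test with the grouping and spatiotemporal linking the paper describes, but this conveys how curtailed tracklets become a single long-duration trajectory per object.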


Keywords: Object motion · Tracklets · Crowded scenes · Kanade-Lucas-Tomasi feature tracker · Short tracklets



Copyright information

© Springer Nature Singapore Pte Ltd. 2018

Authors and Affiliations

  1. Haldia Institute of Technology, Haldia, India
  2. IIT Bhubaneswar, Bhubaneswar, India
  3. National Institute of Technology Durgapur, Durgapur, India
  4. Indian Institute of Technology Roorkee, Roorkee, India
