
A Novel Probabilistic Projection Model for Multi-camera Object Tracking

  • Jiaxin Lin (corresponding author)
  • Chun Xiao
  • Disi Chen
  • Dalin Zhou
  • Zhaojie Ju
  • Honghai Liu
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11649)

Abstract

Correlation filter (CF)-based algorithms have achieved remarkable performance in object tracking over the past decades. They offer dense sampling at reduced computational cost thanks to the circulant-matrix structure of shifted samples. However, existing monocular tracking algorithms can hardly handle fast motion, which often causes tracking failure. In this paper, a novel probabilistic projection model for multi-camera object tracking using two Kinects is proposed. Once the target is judged lost by multimodal target detection, a point projection based on the probabilistic projection model is performed to recover a better tracking position for the target. The projection model performs well in the related experiments. Furthermore, compared with other popular methods, the proposed tracker built on the projection model is shown to accommodate fast motion more effectively and to achieve better tracking performance, promoting robotic autonomy.
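The dense-sampling advantage the abstract attributes to the circulant matrix comes from solving the filter's ridge regression element-wise in the Fourier domain, as in MOSSE/KCF-style trackers: all cyclic shifts of a patch are handled implicitly at FFT cost. The sketch below is illustrative only and does not reproduce the paper's implementation; `train_filter`, `detect`, and `gaussian_peak` are hypothetical names, assuming NumPy.

```python
import numpy as np

def gaussian_peak(h, w, cy, cx, sigma=2.0):
    """Desired response: a Gaussian centred on the target position."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))

def train_filter(x, y, lam=1e-2):
    """Learn a MOSSE-style correlation filter in the Fourier domain.

    The circulant structure of all cyclic shifts of patch x lets the
    ridge regression be solved element-wise after an FFT (dense sampling
    at O(n log n) cost, never forming the shift matrix explicitly).
    """
    X = np.fft.fft2(x)
    Y = np.fft.fft2(y)
    # Element-wise closed form: H* = (Y . conj(X)) / (X . conj(X) + lambda)
    return (Y * np.conj(X)) / (X * np.conj(X) + lam)

def detect(h_hat, z):
    """Correlate the learned filter with a new patch z; the peak of the
    real-valued response map gives the predicted target position."""
    resp = np.real(np.fft.ifft2(h_hat * np.fft.fft2(z)))
    return np.unravel_index(np.argmax(resp), resp.shape)
```

Because the response to a circularly shifted patch is the shifted response, the peak location tracks the target's translation between frames, which is exactly the property a higher-level loss-detection and re-projection scheme can build on.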

Keywords

Object tracking · Correlation filter · Multi-camera · Multimodal target detection · Projection
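The abstract's point projection between the two Kinects is not detailed here; as a rough illustration of the geometry such a model builds on, a pixel with depth can be lifted to 3-D in one camera and re-projected into the other given calibrated intrinsics and extrinsics. This is a hedged sketch assuming a pinhole camera model and known rigid transform between the cameras; `backproject` and `project_to_other_camera` are hypothetical helper names.

```python
import numpy as np

def backproject(u, v, depth, K):
    """Lift pixel (u, v) with metric depth to a 3-D point in the
    camera frame using pinhole intrinsics K (3x3)."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

def project_to_other_camera(p_a, R, t, K_b):
    """Map a 3-D point from camera A's frame into camera B's image:
    rigid transform with extrinsics (R, t), then pinhole projection."""
    p_b = R @ p_a + t                  # point expressed in B's frame
    uvw = K_b @ p_b                    # homogeneous image coordinates
    return uvw[:2] / uvw[2]            # perspective division
```

A probabilistic variant would attach an uncertainty (e.g. a covariance over the depth measurement and extrinsics) to this deterministic mapping rather than a single point.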


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Jiaxin Lin (1, 2) (corresponding author)
  • Chun Xiao (1)
  • Disi Chen (2)
  • Dalin Zhou (2)
  • Zhaojie Ju (2)
  • Honghai Liu (2)
  1. School of Automation, Wuhan University of Technology, Wuhan, China
  2. Intelligent Systems and Biomedical Robotics Group, School of Computing, University of Portsmouth, Portsmouth, UK
