Tracking and frame-rate enhancement for real-time 2D human pose estimation

Abstract

We propose a near real-time solution for frame-rate enhancement that enables existing sophisticated pose estimation solutions to run at elevated frame rates. Our approach couples a keypoint human pose estimator with optical flow through a multistage system of queues operating in a multi-threaded environment. As additional contributions, we propose a pose tracking solution and an approach to mitigating the errors introduced by optical flow. An error reduction of 30–35% is observed at practical pose-estimator frame rates (4–6 frames per second) while processing 1920×1080, 30 frames-per-second video at its native frame rate. At slower estimator frame rates, the error reduction grows to as much as 50%, permitting the use of cheaper hardware and the sharing of expensive hardware. Thus, while improving accuracy by enabling sophisticated pose estimation models to operate at above-par frame rates, our approach reduces the cost per frame by promoting efficient resource utilization.
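The multistage queue pipeline described in the abstract can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: the stage worker, the stand-in `pose_estimator`, and the stand-in `propagate_with_flow` are all hypothetical, whereas the real system would use an actual keypoint estimator (e.g. OpenPose) and optical flow (e.g. Lucas–Kanade) between keyframes. The key idea shown is that the slow estimator runs only on a subset of frames, while a fast propagation step fills in the intermediate frames, with thread-safe queues joining the stages.

```python
import queue
import threading

# A 30 fps stream with a ~5 fps estimator: run the full estimator on
# every 6th frame, propagate keypoints with optical flow in between.
ESTIMATE_EVERY = 6

def pose_estimator(frame):
    # Stand-in for a real keypoint pose estimator (slow, sparse).
    return {"keypoints": [(frame["idx"], frame["idx"])]}

def propagate_with_flow(prev_pose, frame):
    # Stand-in for optical-flow propagation (fast): shift each keypoint.
    return {"keypoints": [(x + 1, y) for (x, y) in prev_pose["keypoints"]]}

def estimator_stage(in_q, out_q):
    """Pipeline stage: consume frames, emit one pose per frame."""
    last_pose = None
    while True:
        frame = in_q.get()
        if frame is None:          # poison pill: shut the stage down
            out_q.put(None)
            return
        if frame["idx"] % ESTIMATE_EVERY == 0 or last_pose is None:
            last_pose = pose_estimator(frame)                  # keyframe
        else:
            last_pose = propagate_with_flow(last_pose, frame)  # in-between
        out_q.put((frame["idx"], last_pose))

# Queues connect the capture stage (main thread) to the worker stage.
frames_q, poses_q = queue.Queue(maxsize=8), queue.Queue()
worker = threading.Thread(target=estimator_stage, args=(frames_q, poses_q))
worker.start()

for idx in range(12):              # simulate 12 captured frames
    frames_q.put({"idx": idx})
frames_q.put(None)
worker.join()

results = []
while not poses_q.empty():
    item = poses_q.get()
    if item is not None:
        results.append(item)

print(len(results))  # -> 12: one pose per input frame
```

With a 4–6 fps estimator on a 30 fps stream, five or more of every six output poses come from the cheap propagation step, which is how the pipeline keeps native frame rate while the heavy model runs far slower.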

Corresponding author

Correspondence to Madhawa Vidanpathirana.

About this article

Cite this article

Vidanpathirana, M., Sudasingha, I., Vidanapathirana, J. et al. Tracking and frame-rate enhancement for real-time 2D human pose estimation. Vis Comput 36, 1501–1519 (2020). https://doi.org/10.1007/s00371-019-01757-9

Keywords

  • Human pose estimation
  • Real-time
  • Tracking
  • Optical flow
  • Frame-rate enhancement