
A New Relational Geometric Feature for Human Action Recognition

  • M. Vinagre
  • J. Aranda
  • A. Casals
Conference paper
Part of the Lecture Notes in Electrical Engineering book series (LNEE, volume 325)

Abstract

Pose-based features have been shown to outperform low-level appearance features in human action recognition. New RGB-D cameras provide the locations of human joints, from which geometric correspondences can be easily computed. In this article, a new geometric correspondence between joints, called the Trisarea feature, is presented. It is defined as the area of the triangle formed by three joints. Relevant triangles describing human pose are identified, and it is shown how the variation over time of the selected Trisarea features constitutes a descriptor of human action. Experimental results compare the approach with other methods and demonstrate how this Trisarea-based representation can be applied to human action recognition.
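The core geometric computation described above can be sketched as follows. This is a minimal illustration of the triangle-area calculation, not the authors' implementation; the function name and the representation of joints as 3-D coordinate triples are assumptions:

```python
def trisarea(j1, j2, j3):
    """Area of the triangle formed by three 3-D joint positions
    (half the magnitude of the cross product of two edge vectors).
    Illustrative sketch; joints are assumed to be (x, y, z) tuples."""
    # edge vectors from joint j1
    ux, uy, uz = (j2[i] - j1[i] for i in range(3))
    vx, vy, vz = (j3[i] - j1[i] for i in range(3))
    # cross product u x v, whose norm is twice the triangle area
    cx = uy * vz - uz * vy
    cy = uz * vx - ux * vz
    cz = ux * vy - uy * vx
    return 0.5 * (cx * cx + cy * cy + cz * cz) ** 0.5

# e.g. a unit right triangle in the xy-plane has area 0.5
area = trisarea((0, 0, 0), (1, 0, 0), (0, 1, 0))
```

Evaluating such areas for a fixed set of joint triples in every frame of a skeleton sequence would yield the time-varying Trisarea signals that the paper uses as an action descriptor.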

Keywords

Pose-based feature · Action descriptor · Action recognition

Notes

Acknowledgments

This work has been carried out under project IPRES, DPI2011-29660-C04-01 of the Spanish National Research Program, with partial FEDER funds. It has also been partially funded by Fundació La Caixa through the Recercaixa 2012 research program.


Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. Robotics Group, Institute for Bioengineering of Catalonia, Barcelona, Spain
  2. Universitat Politècnica de Catalunya, BarcelonaTech, Barcelona, Spain
