
Dynamic Perceptual Attribute-Based Hidden Conditional Random Fields for Gesture Recognition

  • Gang Hu
  • Qigang Gao
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9164)

Abstract

The demand for gesture/action recognition technologies has increased in recent years. State-of-the-art gesture/action recognition systems typically use low-level features or intermediate bag-of-features representations as gesture/action descriptors. Such methods ignore the spatial and temporal information carried by the shape and internal structure of the targets. Dynamic Perceptual Attributes (DPAs) are a set of descriptors of a gesture’s perceptual properties, and their contextual relations reveal the intrinsic structure of gestures/actions. This paper applies a hidden conditional random field (HCRF) model built on DPAs to describe complex human gestures and facilitate recognition tasks. Experimental results show that our model outperforms state-of-the-art methods.
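For illustration only, the sketch below (not code from the paper) shows the standard HCRF classification step: scoring each gesture class by marginalizing over hidden-state sequences with a log-space forward pass, assuming each frame is summarized by a fixed-length DPA feature vector. All parameter names, shapes, and the random toy data are assumptions made for this example.

    import numpy as np
    from scipy.special import logsumexp

    def sequence_score(x, w_state, w_trans, w_class, y):
        """Log-potential of class y: forward pass marginalizing hidden states in log-space."""
        T, _ = x.shape
        node = x @ w_state.T + w_class[y]            # (T, H): per-frame hidden-state scores
        alpha = node[0]
        for t in range(1, T):
            alpha = node[t] + logsumexp(alpha[:, None] + w_trans[y], axis=0)
        return logsumexp(alpha)

    def classify(x, params, n_classes):
        """Return argmax_y p(y | x) and the log-posteriors over all classes."""
        scores = np.array([sequence_score(x, *params, y) for y in range(n_classes)])
        return int(scores.argmax()), scores - logsumexp(scores)

    # Toy usage with random parameters (training, e.g. by gradient ascent on the
    # conditional log-likelihood, is omitted here).
    rng = np.random.default_rng(0)
    D, H, C, T = 8, 4, 3, 20                         # feature dim, hidden states, classes, frames
    params = (rng.normal(size=(H, D)),               # w_state: hidden-state/observation weights
              rng.normal(size=(C, H, H)),            # w_trans: class-specific transition weights
              rng.normal(size=(C, H)))               # w_class: class/hidden-state compatibility
    x = rng.normal(size=(T, D))                      # one sequence of per-frame DPA vectors
    label, log_post = classify(x, params, C)

The per-frame feature vectors here stand in for the DPA descriptors; the class-specific transition weights are what let the model capture the temporal structure among attributes that the abstract refers to.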

Keywords

Perceptual features · Gesture recognition · Shape extraction · HCRF


Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. Faculty of Computer Science, Dalhousie University, Halifax, Canada
