Learning and Association of Features for Action Recognition in Streaming Video

  • Binu M. Nair
  • Vijayan K. Asari
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8888)

Abstract

We propose a novel framework that learns and associates local motion pattern manifolds in streaming video using generalized regression neural networks (GRNNs) to facilitate real-time human action recognition. The motivation is to determine an individual's action even before the action cycle has been completed. The GRNNs are trained to model the regression function from the input local motion-shape patterns to patterns in a latent action space. This manifold learning makes the framework invariant to varying sequence lengths and action states. The latent action basis is computed using empirical orthogonal function (EOF) analysis, and the association of local temporal patterns to an action class at runtime follows a probabilistic formulation: the selected class is the one whose action basis yields the closest GRNN estimate. Experimental results on two datasets, KTH and UCF Sports, show accuracies above 90% obtained from only 15 to 25 frames.
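The two building blocks named in the abstract have standard formulations: a GRNN (Specht, 1991) is a Gaussian-kernel-weighted average of training targets, and an EOF basis is the set of leading principal components of a mean-centred data matrix. The sketch below illustrates both in plain numpy; the function names, the choice of bandwidth, and the way the pieces are combined are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def grnn_predict(X_train, Y_train, x, sigma=0.5):
    """GRNN (Specht, 1991): a Gaussian-kernel-weighted average of the
    training targets Y_train, weighted by the distance of query x to
    each training input. sigma is the kernel bandwidth (assumed here)."""
    d2 = np.sum((X_train - x) ** 2, axis=1)   # squared distances to query
    w = np.exp(-d2 / (2.0 * sigma ** 2))      # Gaussian kernel weights
    return (w @ Y_train) / (w.sum() + 1e-12)  # weighted mean of targets

def eof_basis(S, k=3):
    """Leading EOF basis of a feature matrix S (rows = frames): the top-k
    right singular vectors of the mean-centred matrix, i.e. PCA."""
    Sc = S - S.mean(axis=0)
    _, _, Vt = np.linalg.svd(Sc, full_matrices=False)
    return Vt[:k]                             # k orthonormal basis vectors
```

In a per-class setup like the one the abstract describes, one GRNN per action class would map motion-shape features to coordinates in that class's EOF space, and the runtime association would favour the class whose reconstruction from its own basis is closest to the GRNN estimate.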

Keywords

Empirical Orthogonal Function, Action Recognition, Generalized Regression Neural Network, Empirical Orthogonal Function Analysis, Human Action Recognition
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

References

  1. Chin, T.J., Wang, L., Schindler, K., Suter, D.: Extrapolating learned manifolds for human activity recognition. In: IEEE International Conference on Image Processing, ICIP 2007, vol. 1, pp. 381–384 (2007)
  2. Holmstrom, I.: Analysis of time series by means of empirical orthogonal functions. Tellus 22(6), 638–647 (1970)
  3. Jiang, Z., Lin, Z., Davis, L.: Recognizing human actions by learning and matching shape-motion prototype trees. IEEE Transactions on Pattern Analysis and Machine Intelligence 34(3), 533–547 (2012)
  4. Laptev, I., Marszalek, M., Schmid, C., Rozenfeld, B.: Learning realistic human actions from movies. In: IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2008, pp. 1–8 (2008)
  5. Liao, S., Chung, A.: Face recognition with salient local gradient orientation binary patterns. In: 16th IEEE International Conference on Image Processing (ICIP), pp. 3317–3320 (November 2009)
  6. Nair, B., Asari, V.: Time invariant gesture recognition by modelling body posture space. In: Jiang, H., Ding, W., Ali, M., Wu, X. (eds.) IEA/AIE 2012. LNCS, vol. 7345, pp. 124–133. Springer, Heidelberg (2012)
  7. Nair, B., Asari, V.: Regression based learning of human actions from video using HOF-LBP flow patterns. In: 2013 IEEE International Conference on Systems, Man, and Cybernetics (SMC), pp. 4342–4347 (October 2013)
  8. Rodriguez, M.D., Ahmed, J., Shah, M.: Action MACH: a spatio-temporal maximum average correlation height filter for action recognition. In: IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2008, pp. 1–8 (June 2008)
  9. Saghafi, B., Rajan, D.: Human action recognition using pose-based discriminant embedding. Signal Processing: Image Communication 27(1), 96–111 (2012)
  10. Schuldt, C., Laptev, I., Caputo, B.: Recognizing human actions: a local SVM approach. In: Proceedings of the 17th International Conference on Pattern Recognition, ICPR 2004, vol. 3, pp. 32–36 (August 2004)
  11. Shao, L., Chen, X.: Histogram of body poses and spectral regression discriminant analysis for human action categorization. In: Proc. BMVC, pp. 88.1–88.11 (2010)
  12. Specht, D.: A general regression neural network. IEEE Transactions on Neural Networks 2(6), 568–576 (1991)
  13. Wang, J., Chen, Z., Wu, Y.: Action recognition with multiscale spatio-temporal contexts. In: 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3185–3192 (2011)
  14. Yeffet, L., Wolf, L.: Local trinary patterns for human action recognition. In: 2009 IEEE 12th International Conference on Computer Vision, pp. 492–497 (September 2009)
  15. Yu, L., Liu, H.: Feature selection for high-dimensional data: a fast correlation-based filter solution. In: Proceedings of the Twentieth International Conference on Machine Learning (ICML 2003), pp. 856–863 (2003)
  16. Yuan, C., Li, X., Hu, W., Ling, H., Maybank, S.: 3D R transform on spatio-temporal interest points for action recognition. In: 2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 724–730 (2013)

Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Binu M. Nair (1)
  • Vijayan K. Asari (1)

  1. ECE Department, University of Dayton, Dayton, USA