International Journal of Computer Vision, Volume 79, Issue 3, pp 299–318

Unsupervised Learning of Human Action Categories Using Spatial-Temporal Words

DOI: 10.1007/s11263-007-0122-4

Cite this article as:
Niebles, J.C., Wang, H. & Fei-Fei, L. Int J Comput Vis (2008) 79: 299. doi:10.1007/s11263-007-0122-4

Abstract

We present a novel unsupervised learning method for human action categories. A video sequence is represented as a collection of spatial-temporal words by extracting space-time interest points. The algorithm automatically learns the probability distributions of the spatial-temporal words and of the intermediate topics corresponding to human action categories. This is achieved by using latent topic models such as the probabilistic Latent Semantic Analysis (pLSA) model and Latent Dirichlet Allocation (LDA). Thanks to these probabilistic models, our approach can handle noisy feature points arising from dynamic backgrounds and moving cameras. Given a novel video sequence, the algorithm can categorize and localize the human action(s) contained in the video. We test our algorithm on three challenging datasets: the KTH human motion dataset, the Weizmann human action dataset, and a recent dataset of figure skating actions. Our results reflect the promise of such a simple approach. In addition, our algorithm can recognize and localize multiple actions in long and complex video sequences containing multiple motions.
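To make the pipeline in the abstract concrete: a pLSA-style topic model treats each video d as a mixture over latent topics z, so that the probability of a spatial-temporal word w is P(w|d) = Σ_z P(w|z) P(z|d). The sketch below illustrates the same bag-of-words idea end to end; it is a minimal sketch under stated assumptions, not the authors' implementation. The array descriptors_per_video is a hypothetical stand-in for descriptors extracted at space-time interest points, and scikit-learn's KMeans and LatentDirichletAllocation substitute for the paper's own codebook construction and pLSA/LDA inference.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical input: one array of local space-time descriptors per video,
# each of shape (n_interest_points, descriptor_dim). In the paper these come
# from a space-time interest point detector; here they are random placeholders.
rng = np.random.default_rng(0)
descriptors_per_video = [rng.normal(size=(rng.integers(50, 200), 72))
                         for _ in range(20)]

# Step 1: build a codebook of "spatial-temporal words" by vector-quantizing
# all descriptors from all videos.
n_words = 100
codebook = KMeans(n_clusters=n_words, n_init=10, random_state=0)
codebook.fit(np.vstack(descriptors_per_video))

# Step 2: represent each video as a bag-of-words histogram over the codebook.
def to_histogram(descriptors):
    words = codebook.predict(descriptors)
    return np.bincount(words, minlength=n_words)

histograms = np.array([to_histogram(d) for d in descriptors_per_video])

# Step 3: learn latent topics from the word counts; in the unsupervised
# setting each topic is intended to correspond to one action category.
lda = LatentDirichletAllocation(n_components=6, random_state=0)
topic_mixture = lda.fit_transform(histograms)  # per-video topic proportions

# Categorize each video by its dominant topic.
predicted_action = topic_mixture.argmax(axis=1)
print(predicted_action)

Localization, which this sketch omits, would follow by assigning each interest point to its most likely topic given the learned word-topic distributions.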

Keywords

Action categorization · Bag of words · Spatio-temporal interest points · Topic models · Unsupervised learning

Copyright information

© Springer Science+Business Media, LLC 2008

Authors and Affiliations

  • Juan Carlos Niebles (1, 2)
  • Hongcheng Wang (3)
  • Li Fei-Fei (4)

  1. Department of Electrical Engineering, Princeton University, Engineering Quadrangle, Princeton, USA
  2. Robotics and Intelligent Systems Group, Universidad del Norte, Barranquilla, Colombia
  3. United Technologies Research Center (UTRC), East Hartford, USA
  4. Department of Computer Science, Princeton University, Princeton, USA