International Journal of Computer Vision, Volume 121, Issue 1, pp 5–25

Spatially Coherent Interpretations of Videos Using Pattern Theory

  • Fillipe D. M. de Souza
  • Sudeep Sarkar
  • Anuj Srivastava
  • Jingyong Su

DOI: 10.1007/s11263-016-0913-6

Cite this article as:
de Souza, F.D.M., Sarkar, S., Srivastava, A. et al. Int J Comput Vis (2017) 121: 5. doi:10.1007/s11263-016-0913-6

Abstract

Activity interpretation in videos results not only in recognition or labeling of dominant activities, but also in semantic descriptions of scenes. Towards this broader goal, we present a combinatorial approach that assumes availability of algorithms for detecting and labeling objects and basic actions in videos, albeit with some errors. Given these uncertain labels and detected objects, we link them into interpretable structures using domain knowledge, under the framework of Grenander's general pattern theory. Here a semantic description is built using basic units, termed generators, that represent either objects or actions. These generators have multiple out-bonds, each associated with different types of domain semantics, spatial constraints, and image evidence. The generators combine, according to a set of pre-defined combination rules that capture domain semantics, to form larger configurations that represent video interpretations. This framework derives its representational power from flexibility in the size and structure of configurations. We impose a probability distribution on the configuration space, with inferences generated using a Markov chain Monte Carlo-based simulated annealing process. The primary advantage of the approach is that it handles known challenges (appearance variabilities, errors in object labels, object clutter, simultaneous events, etc.) without the need for exponentially large labeled training data. Experimental results demonstrate its ability to successfully provide interpretations under clutter and the simultaneity of events. They show: (1) a performance increase of more than 30 % over other state-of-the-art approaches using more than 5000 video units from the Breakfast Actions dataset, and (2) an overall recall and precision improvement of more than 50 and 100 %, respectively, on the YouCook dataset.
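The generator-bond-configuration pipeline described above can be illustrated with a minimal sketch. Everything here is a toy assumption for illustration: the `Generator` class, the hard-coded compatibility table standing in for domain knowledge, and the bond-energy values do not reflect the paper's actual energy model, which also incorporates spatial constraints and image evidence.

```python
import math
import random

# A generator is a basic unit: an object or action label that can bond
# with other generators (simplified; the paper's generators also carry
# out-bond types, spatial constraints, and image evidence).
class Generator:
    def __init__(self, label, kind):
        self.label = label  # e.g. "knife", "cut"
        self.kind = kind    # "object" or "action"

def bond_energy(g1, g2):
    # Toy stand-in for domain semantics: an action bonds favorably with
    # an object it semantically supports (assumed lookup table).
    compatible = {("cut", "knife"), ("pour", "bottle")}
    pair = (g1.label, g2.label)
    return -1.0 if pair in compatible or pair[::-1] in compatible else 1.0

def energy(config):
    # A configuration is a list of bonded generator pairs; its energy is
    # the sum of bond energies (lower = more plausible interpretation).
    return sum(bond_energy(a, b) for a, b in config)

def propose(config, pool):
    # Local MCMC move: add or remove one random bond.
    new = list(config)
    if new and random.random() < 0.5:
        new.pop(random.randrange(len(new)))
    else:
        new.append((random.choice(pool), random.choice(pool)))
    return new

def anneal(pool, steps=2000, t0=2.0):
    # Simulated annealing over configuration space: Metropolis acceptance
    # with a decreasing temperature schedule.
    config, e = [], 0.0
    for k in range(steps):
        t = t0 / (1 + k)
        cand = propose(config, pool)
        ce = energy(cand)
        # Always accept downhill; accept uphill with probability exp(-dE/t).
        if ce <= e or random.random() < math.exp((e - ce) / t):
            config, e = cand, ce
    return config, e
```

Run on a small pool of generators, the annealer settles on configurations dominated by semantically compatible action-object bonds, mirroring (in miniature) how inference selects a low-energy interpretation of the scene.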

Keywords

Activity detection · Pattern theory · Graphical methods · Compositional approach

Copyright information

© Springer Science+Business Media New York 2016

Authors and Affiliations

  • Fillipe D. M. de Souza (1)
  • Sudeep Sarkar (1)
  • Anuj Srivastava (2)
  • Jingyong Su (3)
  1. Department of Computer Science & Engineering, University of South Florida, Tampa, USA
  2. Department of Statistics, Florida State University, Tallahassee, USA
  3. Department of Mathematics & Statistics, Texas Tech University, Lubbock, USA
