Gesture Saliency: A Context-Aware Analysis

  • Matei Mancas
  • Donald Glowinski
  • Gualtiero Volpe
  • Paolo Coletta
  • Antonio Camurri
Conference paper

DOI: 10.1007/978-3-642-12553-9_13

Part of the Lecture Notes in Computer Science book series (LNCS, volume 5934)
Cite this paper as:
Mancas M., Glowinski D., Volpe G., Coletta P., Camurri A. (2010) Gesture Saliency: A Context-Aware Analysis. In: Kopp S., Wachsmuth I. (eds) Gesture in Embodied Communication and Human-Computer Interaction. GW 2009. Lecture Notes in Computer Science, vol 5934. Springer, Berlin, Heidelberg

Abstract

This paper presents a motion attention model that analyzes gesture saliency using context-related information at three different levels. At the first level, motion features are compared within the spatial context of the current video frame; at the intermediate level, salient behavior is analyzed over a short temporal context; at the third level, the computation of saliency is extended to longer time windows. An attention/saliency index is computed at each of the three levels based on an information-theoretic approach. This model can be considered a preliminary step towards context-aware expressive gesture analysis.
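
The abstract does not give the exact formulation, but information-theoretic attention models of this kind typically score a feature by its rarity (self-information, -log p) relative to a context. The following is a minimal, hypothetical sketch of such a rarity-based index evaluated over three nested context windows; all function names, window sizes, and feature choices are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def self_information_saliency(features, bins=16):
    """Rarity-based saliency: values that are rare within the given
    context carry high self-information (-log2 p) and are salient."""
    hist, edges = np.histogram(features, bins=bins)
    probs = hist / hist.sum()
    idx = np.clip(np.digitize(features, edges[1:-1]), 0, bins - 1)
    p = np.maximum(probs[idx], 1e-12)   # avoid log(0)
    return -np.log2(p)                  # "surprise" in bits, per sample

# Three context levels (sizes are illustrative):
#   level 1: spatial context   -> features of all regions in the current frame
#   level 2: short temporal    -> features pooled over the last few frames
#   level 3: long temporal     -> features pooled over the whole sequence
rng = np.random.default_rng(0)
motion_magnitude = rng.gamma(2.0, 1.0, size=300)   # e.g. per-region motion energy

frame_saliency = self_information_saliency(motion_magnitude[-30:])    # level 1
short_saliency = self_information_saliency(motion_magnitude[-150:])   # level 2
long_saliency  = self_information_saliency(motion_magnitude)          # level 3
```

In this sketch the same rarity measure is simply re-estimated over progressively longer contexts; the paper's model may weight or combine the levels differently.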

Keywords

Visual attention · Expressive gesture · Context-aware analysis


Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Matei Mancas (1)
  • Donald Glowinski (2)
  • Gualtiero Volpe (2)
  • Paolo Coletta (2)
  • Antonio Camurri (2)
  1. F.P.Ms/IT Research Center/TCTS Lab, University of Mons, Mons, Belgium
  2. INFOMUS Lab, University of Genoa, Italy
