Measuring Similarity in the Semantic Representation of Moving Objects in Video

  • Miyoung Cho
  • Dan Song
  • Chang Choi
  • Pankoo Kim
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4092)

Abstract

More and more researchers are concentrating on spatio-temporal relationships in the video retrieval process. However, existing work is largely limited to trajectory-based or content-based retrieval, and information is seldom retrieved at the semantic level. To satisfy naive users' requirements from a common point of view, this paper proposes a novel approach to motion recognition from the aspect of semantic meaning. The issue is addressed through a hierarchical model that explains how human language interacts with motions. In the experimental part, we evaluate the new approach using a trajectory distance based on spatial relations to distinguish conceptual similarity, and we obtain satisfactory results.
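The abstract does not spell out how the trajectory distance based on spatial relations is computed, so the following is only a minimal, hypothetical sketch rather than the authors' formulation. It assumes a moving object's trajectory can be summarised as a sequence of qualitative topological relations (e.g. disjoint, meet, overlap, inside) between the object and a reference region, and compares two such sequences with a normalised edit distance. The names levenshtein and trajectory_similarity and the example trajectories are illustrative assumptions only.

```python
# Hypothetical sketch (not the paper's method): compare two trajectories
# expressed as sequences of qualitative topological relations by measuring
# how many edits turn one relation sequence into the other.

def levenshtein(a, b):
    """Edit distance between two sequences of relation labels."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))          # distances for the previous row
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return prev[n]

def trajectory_similarity(traj_a, traj_b):
    """Normalise the edit distance into a similarity score in [0, 1]."""
    longest = max(len(traj_a), len(traj_b)) or 1
    return 1.0 - levenshtein(traj_a, traj_b) / longest

if __name__ == "__main__":
    # Two hypothetical trajectories of an object relative to a region:
    # one object enters the region, the other passes through and leaves.
    approach = ["disjoint", "meet", "overlap", "inside"]
    pass_by  = ["disjoint", "meet", "overlap", "meet", "disjoint"]
    print(trajectory_similarity(approach, pass_by))  # 0.6
```

Under these assumptions, trajectories that share a long common prefix of spatial relations score higher, which is one simple way to make "conceptually similar motions" measurable; the paper's actual distance may differ.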

Keywords

Spatial Relation · Semantic Similarity · Semantic Representation · Topological Relation · Link Type


References

  1. Miller, G.A.: Introduction to WordNet: An On-line Lexical Database. International Journal of Lexicography (1990)
  2.
  3. Levin, B.: English Verb Classes and Alternations. University of Chicago Press, Chicago (1993)
  4. Li, J.Z., Tamer Ozsu, M., Szafron, D.: Modeling of Moving Objects in a Video Database. In: Proceedings of the International Conference on Multimedia Computing and Systems, pp. 336–343 (1997)
  5. Shim, C.-B., Chang, J.-W.: Spatio-temporal Representation and Retrieval Using Moving Object’s Trajectories. In: ACM Multimedia Workshops, pp. 209–212 (2000)
  6. Erwig, M., Schneider, M.: Query-By-Trace: Visual Predicate Specification in Spatio-Temporal Databases. In: 5th IFIP Conf. on Visual Databases (2000)
  7. Aghbari, Z., Kaneko, K., Makinouchi, A.: Modeling and Querying Videos by Content Trajectories. In: Proceedings of the International Conference on Multimedia and Expo, pp. 463–466 (2000)
  8. Chen, P.-Y., Chen, A.L.P.: Video Retrieval Based on Video Motion Tracks of Moving Objects. In: Proceedings of SPIE, vol. 5307, pp. 550–558 (2003)
  9. Hongeng, S., Nevatia, R., Bremond, F.: Video-based Event Recognition: Activity Representation and Probabilistic Recognition Methods. Computer Vision and Image Understanding 96(2), 129–162 (2004)
  10. Cho, M., Song, D., Choi, C., Choi, J., Park, J., Kim, P.-K.: Comparison Between Motion Verbs Using Similarity Measure for the Semantic Representation of Moving Object. In: Sundaram, H., Naphade, M., Smith, J.R., Rui, Y. (eds.) CIVR 2006. LNCS, vol. 4071, pp. 281–290. Springer, Heidelberg (2006)

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Miyoung Cho (1)
  • Dan Song (1)
  • Chang Choi (1)
  • Pankoo Kim (2)
  1. Dept. of Computer Science, Chosun University, Korea
  2. Dept. of CSE, Chosun University, Korea