A Hybrid Framework for Detecting the Semantics of Concepts and Context
Semantic understanding of multimedia content necessitates models for the semantics of concepts, context, and structure. We propose a hybrid framework that combines discriminant or generative models for concepts with generative models for structure and context. Using the TREC Video 2002 benchmark corpus, we show that robust models can be built for several diverse visual semantic concepts. We use a novel factor-graph framework to model inter-conceptual context for 12 semantic concepts of the corpus. Using the sum-product algorithm for exact or approximate inference in these factor graph multinets, we attempt to correct errors made during isolated concept detection by enforcing high-level constraints. This results in a significant improvement in overall detection performance: enforcing the probabilistic context model improves detection by 22% with the global multinet, while its factored approximation improves it by 18% over the baseline concept detection. This improvement is achieved without any additional training data or separate annotations.
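The core idea of context enforcement can be illustrated with a minimal sketch. The snippet below runs sum-product inference on a toy two-node factor graph: two binary concept variables with hypothetical isolated detector scores, coupled by a pairwise compatibility factor. The concept names ("outdoors", "sky") and all numeric values are illustrative assumptions, not taken from the paper; with only two variables, the sum-product algorithm reduces to computing the joint and marginalizing.

```python
import numpy as np

# Hypothetical isolated detector posteriors P(concept) for two concepts;
# format is [P(absent), P(present)]. Values are made up for illustration.
p_outdoors = np.array([0.4, 0.6])
p_sky = np.array([0.7, 0.3])

# Pairwise compatibility factor encoding inter-conceptual context:
# "sky" tends to co-occur with "outdoors". psi[outdoors, sky] is an
# assumed table; in practice it would be learned from annotations.
psi = np.array([[0.9, 0.1],
                [0.3, 0.7]])

# Exact sum-product inference on this two-variable factor graph:
# form the (unnormalized) joint, normalize, then marginalize.
joint = psi * np.outer(p_outdoors, p_sky)
joint /= joint.sum()

post_outdoors = joint.sum(axis=1)  # context-corrected marginal
post_sky = joint.sum(axis=0)
```

In a full multinet, messages would instead be passed iteratively between variable nodes and function nodes, but the local computation at each factor has exactly this multiply-and-marginalize form.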
Keywords: Detection Performance, Average Precision, Function Node, Semantic Concept, Variable Node