Measuring Multi-modality Similarities Via Subspace Learning for Cross-Media Retrieval

  • Hong Zhang
  • Jianguang Weng
Conference paper

DOI: 10.1007/11922162_111

Part of the Lecture Notes in Computer Science book series (LNCS, volume 4261)
Cite this paper as:
Zhang H., Weng J. (2006) Measuring Multi-modality Similarities Via Subspace Learning for Cross-Media Retrieval. In: Zhuang Y., Yang SQ., Rui Y., He Q. (eds) Advances in Multimedia Information Processing - PCM 2006. PCM 2006. Lecture Notes in Computer Science, vol 4261. Springer, Berlin, Heidelberg

Abstract

Cross-media retrieval is an interesting research problem, which seeks to break through the limitation of modality so that users can query multimedia objects with examples of a different modality. To retrieve across media, the similarity between media objects with heterogeneous low-level features must first be measured. This paper proposes a novel approach that learns both intra- and inter-media correlations among multi-modality feature spaces and constructs an MLE semantic subspace containing multimedia objects of different modalities. In addition, relevance feedback strategies are developed to enhance the efficiency of cross-media retrieval from both short- and long-term perspectives. Experiments show that the results of our approach are encouraging and its performance is effective.
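The abstract does not detail the paper's MLE semantic subspace construction, but the general idea of mapping heterogeneous feature spaces into one shared subspace where cross-modal similarity becomes meaningful can be illustrated with a classic baseline: Canonical Correlation Analysis (CCA). The sketch below is not the authors' method; it is a minimal, self-contained CCA implementation in NumPy, with hypothetical toy "image" and "text" views generated from a shared latent variable.

```python
import numpy as np

def cca_projections(X, Y, k, reg=1e-3):
    """Tiny regularized CCA: find projections Wx, Wy such that the columns
    of X @ Wx and Y @ Wy are maximally correlated (a shared subspace)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    # Regularized covariance matrices of each view and the cross-covariance
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    # Whitening transforms: Lx satisfies Lx.T @ Cxx @ Lx = I (likewise Ly)
    Lx = np.linalg.cholesky(np.linalg.inv(Cxx))
    Ly = np.linalg.cholesky(np.linalg.inv(Cyy))
    # SVD of the whitened cross-covariance yields the canonical directions
    U, _, Vt = np.linalg.svd(Lx.T @ Cxy @ Ly)
    Wx = Lx @ U[:, :k]
    Wy = Ly @ Vt.T[:, :k]
    return Wx, Wy

# Hypothetical toy data: two modalities driven by a shared 2-D "semantic" latent
rng = np.random.default_rng(0)
Z = rng.normal(size=(200, 2))                                       # shared semantics
X = Z @ rng.normal(size=(2, 5)) + 0.1 * rng.normal(size=(200, 5))   # "image" features
Y = Z @ rng.normal(size=(2, 4)) + 0.1 * rng.normal(size=(200, 4))   # "text" features

Wx, Wy = cca_projections(X, Y, k=2)

# Cross-media query: compare an image example against all text objects
# by cosine similarity inside the shared subspace.
q = (X[0] - X.mean(axis=0)) @ Wx
T = (Y - Y.mean(axis=0)) @ Wy
sims = T @ q / (np.linalg.norm(T, axis=1) * np.linalg.norm(q) + 1e-12)
```

Once both modalities live in the same low-dimensional subspace, a query of one modality can rank objects of another by ordinary vector similarity; relevance feedback, as in the paper, would then refine either the subspace or the ranking over time.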

Keywords

Cross-media retrieval · Heterogeneous · Multi-modality · Correlation


Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Hong Zhang (1)
  • Jianguang Weng (1)
  1. The Institute of Artificial Intelligence, Zhejiang University, Hangzhou, P.R. China
