Measuring Multi-modality Similarities Via Subspace Learning for Cross-Media Retrieval
- Cite this paper as:
- Zhang H., Weng J. (2006) Measuring Multi-modality Similarities Via Subspace Learning for Cross-Media Retrieval. In: Zhuang Y., Yang SQ., Rui Y., He Q. (eds) Advances in Multimedia Information Processing - PCM 2006. PCM 2006. Lecture Notes in Computer Science, vol 4261. Springer, Berlin, Heidelberg
Cross-media retrieval is an interesting research problem that seeks to break through the limitation of modality, so that users can query multimedia objects with examples of a different modality. To enable cross-media retrieval, the problem of measuring similarity between media objects with heterogeneous low-level features must be solved. This paper proposes a novel approach that learns both intra- and inter-media correlations among multi-modality feature spaces and constructs an MLE semantic subspace containing multimedia objects of different modalities. In addition, relevance feedback strategies are developed to enhance the efficiency of cross-media retrieval from both short- and long-term perspectives. Experiments show that the results of our approach are encouraging and the performance is effective.
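The abstract's core idea is to map heterogeneous feature spaces (e.g., image and audio features) into one shared subspace where cross-modal similarity can be measured directly. The paper's exact construction (the MLE semantic subspace) is not given in this preview; as a hedged illustration only, the sketch below uses canonical correlation analysis (CCA), a standard technique for learning such a shared subspace from inter-media correlations. All names (`cca_subspace`, the toy data) are hypothetical and do not come from the paper.

```python
import numpy as np

def cca_subspace(X, Y, dim, reg=1e-3):
    """Illustrative CCA: learn projections mapping two modalities'
    features into a common `dim`-dimensional subspace.
    X: (n, dx) features of modality 1; Y: (n, dy) features of modality 2,
    row i of X and row i of Y describe the same multimedia object."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    # Regularized within- and cross-modality covariances
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    # Whiten each modality via Cholesky factors, then SVD the
    # whitened cross-covariance to get the correlated directions
    Lx = np.linalg.cholesky(Cxx)
    Ly = np.linalg.cholesky(Cyy)
    T = np.linalg.inv(Lx) @ Cxy @ np.linalg.inv(Ly).T
    U, S, Vt = np.linalg.svd(T)
    A = np.linalg.inv(Lx).T @ U[:, :dim]   # projection for modality 1
    B = np.linalg.inv(Ly).T @ Vt.T[:, :dim]  # projection for modality 2
    return A, B

# Toy example: two modalities generated from a shared latent semantic factor
rng = np.random.default_rng(0)
n = 500
z = rng.normal(size=(n, 2))                       # shared semantics
X = z @ rng.normal(size=(2, 10)) + 0.1 * rng.normal(size=(n, 10))
Y = z @ rng.normal(size=(2, 8)) + 0.1 * rng.normal(size=(n, 8))

A, B = cca_subspace(X, Y, dim=2)
Xp = (X - X.mean(axis=0)) @ A   # both modalities now live in the
Yp = (Y - Y.mean(axis=0)) @ B   # same 2-D subspace and are comparable
corr = np.corrcoef(Xp[:, 0], Yp[:, 0])[0, 1]
```

Once both modalities are projected, a cross-media query reduces to nearest-neighbor search in the shared subspace, regardless of the query example's modality.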
Keywords: Cross-media Retrieval · Heterogeneous Multi-modality · Correlation