Graph Embedding Learning for Cross-Modal Information Retrieval

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10636)

Abstract

The aim of cross-modal retrieval is to learn mappings that project samples from different modalities into a common space where similarity among instances can be measured. To pursue a common subspace, traditional approaches tend to solve for exact projection matrices, yet it is unrealistic to fully model multimodal data with linear projections alone. In this paper, we propose a novel graph embedding learning framework that directly approximates the projected manifold and exploits both label information and local geometric structure. It avoids explicit eigenvector decomposition by iterating random walks on the graph. Sampling strategies are adopted to generate training pairs that fully explore inter- and intra-modality relations within the data. Moreover, the graph embedding is learned in a semi-supervised manner, which helps discriminate the underlying representations of different classes. Experimental results on the Wikipedia dataset show that the proposed framework is effective and outperforms other state-of-the-art methods on cross-modal retrieval.
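The paper itself does not include code; as a rough illustration of the random-walk and pair-sampling idea the abstract describes (in the style of DeepWalk-like graph embeddings), consider the sketch below. The toy graph, node names, and all function names are invented for illustration and are not taken from the paper:

```python
import random

def random_walks(adj, num_walks=10, walk_len=5, seed=0):
    """Generate truncated random walks over an adjacency-list graph."""
    rng = random.Random(seed)
    walks = []
    for _ in range(num_walks):
        for start in adj:
            walk = [start]
            while len(walk) < walk_len:
                nbrs = adj[walk[-1]]
                if not nbrs:
                    break
                walk.append(rng.choice(nbrs))
            walks.append(walk)
    return walks

def context_pairs(walks, window=2):
    """Slide a window over each walk to produce (node, context) training pairs,
    the kind of pairs a skip-gram-style embedding model would be trained on."""
    pairs = []
    for walk in walks:
        for i, u in enumerate(walk):
            for j in range(max(0, i - window), min(len(walk), i + window + 1)):
                if j != i:
                    pairs.append((u, walk[j]))
    return pairs

# Hypothetical multimodal graph: image nodes ("i*") and text nodes ("t*")
# linked by shared class labels and local (k-NN style) similarity.
adj = {
    "i1": ["t1", "i2"], "i2": ["i1", "t2"],
    "t1": ["i1", "t2"], "t2": ["t1", "i2"],
}
walks = random_walks(adj)
pairs = context_pairs(walks)
```

Walks starting from image nodes naturally cross into text nodes and back, so the sampled pairs mix inter- and intra-modality neighbors; an embedding trained on such pairs places semantically related instances from both modalities nearby, without any eigenvector decomposition.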

Keywords

Graph embedding learning · Cross-modal retrieval · Semi-supervised learning

Notes

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under grants 61371148 and 61771145. The authors would like to thank Jiayan Cao for his help with architecture modeling and implementation.

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Department of Electronic Engineering, Fudan University, Shanghai, China