Machine Learning

Volume 81, Issue 1, pp 21–35

Large scale image annotation: learning to rank with joint word-image embeddings

Authors

  • Jason Weston
    • Google
  • Samy Bengio
    • Google
  • Nicolas Usunier
    • Université Paris 6, LIP6
Article

DOI: 10.1007/s10994-010-5198-3

Cite this article as:
Weston, J., Bengio, S. & Usunier, N. Mach Learn (2010) 81: 21. doi:10.1007/s10994-010-5198-3

Abstract

Image annotation datasets are becoming larger and larger, with tens of millions of images and tens of thousands of possible annotations. We propose a strongly performing method that scales to such datasets by simultaneously learning to optimize precision at k of the ranked list of annotations for a given image and learning a low-dimensional joint embedding space for both images and annotations. Our method both outperforms several baseline methods and, in comparison to them, is faster and consumes less memory. We also demonstrate how our method learns an interpretable model, where annotations with alternate spellings or even languages are close in the embedding space. Hence, even when our model does not predict the exact annotation given by a human labeler, it often predicts similar annotations, a fact that we try to quantify by measuring the newly introduced “sibling” precision metric, where our method also obtains excellent results.
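The abstract outlines the core idea: images and annotations are mapped into a shared low-dimensional space and scored against each other, with training driven by a ranking objective. The toy sketch below illustrates that setup with linear maps and a simple pairwise hinge update; the matrix names (V, W), the dimensions, the margin, and the plain SGD update are illustrative assumptions and do not reproduce the paper's precision-at-k loss.

    # Minimal sketch of a joint word-image embedding scorer with a pairwise
    # ranking update. Images are embedded by V, each annotation has a row of W,
    # and the score is their dot product. The hinge margin and SGD update are
    # assumptions for illustration, not the paper's exact training procedure.
    import numpy as np

    rng = np.random.default_rng(0)

    n_labels, d_image, d_embed = 1000, 512, 100          # assumed sizes
    V = 0.01 * rng.standard_normal((d_embed, d_image))    # image -> embedding map
    W = 0.01 * rng.standard_normal((n_labels, d_embed))   # one embedding per annotation

    def score(x, j):
        """Dot product between the embedded image x and annotation j."""
        return float(W[j] @ (V @ x))

    def pairwise_update(x, pos, lr=0.01, margin=1.0):
        """One SGD step on a hinge ranking loss: the correct annotation `pos`
        should score at least `margin` above a randomly sampled negative."""
        global V
        neg = rng.integers(n_labels)
        if neg == pos:
            return
        phi = V @ x                                  # embedded image
        loss = margin - W[pos] @ phi + W[neg] @ phi
        if loss > 0:                                 # violated pair: push apart
            grad_phi = W[neg] - W[pos]
            W[pos] += lr * phi
            W[neg] -= lr * phi
            V -= lr * np.outer(grad_phi, x)

    # Toy usage: a random image vector whose ground-truth annotation is index 42.
    x = rng.standard_normal(d_image)
    for _ in range(100):
        pairwise_update(x, pos=42)
    print(score(x, 42))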

Keywords

Large scale · Image annotation · Learning to rank · Embedding

Copyright information

© The Author(s) 2010