Object Recognition as Machine Translation: Learning a Lexicon for a Fixed Image Vocabulary

  • P. Duygulu
  • K. Barnard
  • J. F. G. de Freitas
  • D. A. Forsyth
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2353)

Abstract

We describe a model of object recognition as machine translation. In this model, recognition is a process of annotating image regions with words. First, images are segmented into regions, which are classified into region types using a variety of features. A mapping between region types and the keywords supplied with the images is then learned, using a method based on EM. This process is analogous to learning a lexicon from an aligned bitext. For the implementation we describe, these words are nouns taken from a large vocabulary. On a large test set, the method can predict numerous words with high accuracy. Simple methods identify words that cannot be predicted well. We show how to cluster words that are individually difficult to predict into clusters that can be predicted well; for example, we cannot predict the distinction between train and locomotive using the current set of features, but we can predict the underlying concept. The method is trained on a substantial collection of images. Extensive experimental results illustrate the strengths and weaknesses of the approach.
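
The lexicon-learning step parallels parameter estimation for a simple statistical translation model (in the spirit of IBM Model 1): each keyword supplied with an image is assumed to be "translated" from one of the image's region types, and EM alternates between estimating which region a word aligns to and re-estimating the word-given-region probabilities. The sketch below is illustrative, not the authors' implementation; the representation of images as (blob indices, word indices) pairs and names such as learn_lexicon are assumptions made here.

import numpy as np

def learn_lexicon(images, n_blobs, n_words, n_iters=20):
    """Estimate t[w, b] ~ p(word w | blob type b) by EM.

    images: iterable of (blobs, words) pairs, where blobs is a list of
    region-type indices and words is a list of keyword indices for one image.
    """
    t = np.full((n_words, n_blobs), 1.0 / n_words)  # uniform start
    for _ in range(n_iters):
        counts = np.zeros_like(t)
        for blobs, words in images:
            for w in words:
                # E-step: posterior probability that each blob in this
                # image is the one that "translates" to word w.
                p = t[w, blobs]
                total = p.sum()
                if total == 0:
                    continue
                for b, pb in zip(blobs, p / total):
                    counts[w, b] += pb
        # M-step: renormalize so each blob's word distribution sums to 1;
        # blob types never observed keep their previous distribution.
        totals = counts.sum(axis=0, keepdims=True)
        t = np.where(totals > 0, counts / np.where(totals > 0, totals, 1.0), t)
    return t

def annotate(blobs, t):
    # Annotation: label each region with its most probable word.
    return [int(t[:, b].argmax()) for b in blobs]

Once trained, thresholding the learned table and merging words whose columns are similar corresponds to the refusal and word-clustering steps mentioned above (e.g., collapsing train and locomotive into one predictable concept).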

Keywords

Object recognition · correspondence · EM algorithm

Copyright information

© Springer-Verlag Berlin Heidelberg 2002

Authors and Affiliations

  • P. Duygulu (1)
  • K. Barnard (1)
  • J. F. G. de Freitas (2)
  • D. A. Forsyth (1)
  1. Computer Science Division, U.C. Berkeley, Berkeley
  2. Department of Computer Science, University of British Columbia, Vancouver
