Object Recognition as Machine Translation: Learning a Lexicon for a Fixed Image Vocabulary
We describe a model of object recognition as machine translation. In this model, recognition is a process of annotating image regions with words. First, images are segmented into regions, which are classified into region types using a variety of features. A mapping between region types and the keywords supplied with the images is then learned using a method based on EM. This process is analogous to learning a lexicon from an aligned bitext. In the implementation we describe, the words are nouns drawn from a large vocabulary. On a large test set, the method predicts numerous words with high accuracy, and simple methods identify words that cannot be predicted well. We show how to group words that are individually difficult to predict into clusters that can be predicted well: for example, we cannot predict the distinction between train and locomotive using the current set of features, but we can predict the underlying concept. The method is trained on a substantial collection of images, and extensive experimental results illustrate the strengths and weaknesses of the approach.
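The lexicon-learning step described above is closely related to IBM Model 1 word alignment: each image's region types ("blobs") form one side of an aligned bitext and its keywords the other, and EM alternates between soft blob-to-word assignments and re-estimated translation probabilities. A minimal sketch of that EM loop, assuming images have already been reduced to pairs of discrete blob and word tokens; the function name and input format are hypothetical, not from the paper:

```python
from collections import defaultdict

def learn_lexicon(images, n_iters=20):
    """Estimate p(word | blob) by EM, IBM Model 1 style.

    images: list of (blob_tokens, word_tokens) pairs, where each image
    has been segmented and its regions quantized into discrete blob
    tokens (a hypothetical preprocessing step).
    """
    blobs = sorted({b for bs, _ in images for b in bs})
    words = sorted({w for _, ws in images for w in ws})
    # Uniform initialisation: every blob is equally likely to emit every word.
    t = {b: {w: 1.0 / len(words) for w in words} for b in blobs}
    for _ in range(n_iters):
        count = defaultdict(lambda: defaultdict(float))  # expected co-occurrence counts
        total = defaultdict(float)                       # per-blob normalisers
        # E-step: soft-assign each word in each image to that image's blobs.
        for bs, ws in images:
            for w in ws:
                z = sum(t[b][w] for b in bs)
                for b in bs:
                    c = t[b][w] / z
                    count[b][w] += c
                    total[b] += c
        # M-step: re-estimate p(word | blob) from the expected counts.
        t = {b: {w: count[b][w] / total[b] for w in count[b]} for b in blobs}
    return t
```

Annotation then amounts to emitting, for each blob in a new image, the word maximising the learned `t[blob][word]`; on a toy aligned set of three images, EM disambiguates which blob goes with which word even though each image supplies only an unordered bag of keywords.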
Keywords: object recognition, correspondence, EM algorithm