Logistic Regression of Generic Codebooks for Semantic Image Retrieval
This paper addresses automatic image annotation with keywords so that images can be retrieved through text searches. Our approach models keywords such as 'mountain' and 'city' in terms of visual features extracted from images. In contrast to other algorithms, each keyword-specific model considers not only its own training data but also the whole training set, exploiting correlations among visual features to refine itself. The algorithm first clusters the visual features extracted from the full image set, captures their salient structure (e.g. a mixture of clusters or patterns), and represents it as a generic codebook. Keywords associated with images in the training set are then encoded as linear combinations of patterns from this generic codebook. We evaluate the validity of our approach in an image retrieval scenario on two distinct large datasets of real-world photos with corresponding manual annotations.
Keywords: Gaussian Mixture Model, Image Retrieval, Machine Translation, Average Precision, Retrieval Performance
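The two-stage idea described in the abstract, a shared generic codebook followed by per-keyword logistic regression, can be sketched as follows. This is an illustrative reconstruction, not the authors' exact implementation: the feature dimensions, the number of codebook components, and the synthetic data are all assumptions, and images are represented here by their average posterior over codebook components.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for visual features extracted from all training images
# (e.g. colour/texture descriptors); shapes are illustrative only.
features = rng.normal(size=(500, 8))        # 500 feature vectors, 8 dims each
image_ids = rng.integers(0, 100, size=500)  # which image each vector came from
n_images, n_patterns = 100, 16

# Step 1: generic codebook -- a Gaussian mixture fitted to the FULL feature
# set, shared across all keyword models.
codebook = GaussianMixture(n_components=n_patterns, random_state=0).fit(features)

# Step 2: represent each image as a mixture of codebook patterns, here by
# averaging the posterior responsibilities of its feature vectors.
resp = codebook.predict_proba(features)
image_repr = np.zeros((n_images, n_patterns))
for i in range(n_images):
    mask = image_ids == i
    if mask.any():
        image_repr[i] = resp[mask].mean(axis=0)

# Step 3: per-keyword model -- logistic regression encodes the keyword as a
# linear combination of the generic codebook patterns.
has_mountain = rng.integers(0, 2, size=n_images)  # toy binary annotations
model = LogisticRegression(max_iter=1000).fit(image_repr, has_mountain)

# Ranking images by P(keyword | image) supports text-based retrieval.
scores = model.predict_proba(image_repr)[:, 1]
ranking = np.argsort(-scores)
```

Because the codebook is estimated from the full training set rather than per keyword, each keyword model benefits from structure learned across all images, which is the key contrast with independently trained keyword models.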