Automatic Image Annotation Based on WordNet and Hierarchical Ensembles
Automatic image annotation is the process of automatically labeling image content with a predefined set of keywords, which serve as descriptors of the image's high-level semantics, so as to enable semantic image retrieval via keywords. A serious problem in this task is unsatisfactory annotation performance caused by the semantic gap between visual content and keywords. To address this problem, we present a new approach that incorporates lexical semantics into the image annotation process. During training, given a set of images labeled with keywords, we first build a basic visual vocabulary consisting of visual terms (extracted from each image to represent its content) and their associated keywords, using K-means clustering combined with semantic constraints obtained from WordNet. The statistical correlation between visual terms and keywords is then modeled by a two-level hierarchical ensemble composed of probabilistic SVM classifiers and a co-occurrence language model. During annotation, given an unlabeled image, the most likely keywords are predicted from the posterior probability of each keyword given each visual term at the first-level classifier ensemble; the second-level language model then refines the annotation quality using word co-occurrence statistics derived from the annotated keywords of the training images. We carried out experiments on a medium-sized image collection from Corel Stock Photo CDs. The results show that this method outperforms several traditional annotation methods by about 7% in average precision, demonstrating the feasibility and effectiveness of the proposed approach.
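The two-stage annotation step described above can be sketched in Python. This is a minimal illustration, not the paper's implementation: the per-visual-term keyword posteriors are assumed to come from the first-level probabilistic SVM ensemble, the co-occurrence table from training annotations, and the linear interpolation weight `alpha` is a hypothetical choice for combining the two levels.

```python
from collections import defaultdict

def annotate(visual_term_posteriors, cooccurrence, top_k=3, alpha=0.7):
    """Sketch of the two-level annotation pipeline.

    visual_term_posteriors: list of dicts, one per visual term in the image,
        each mapping keyword -> P(keyword | visual term), as produced by the
        first-level classifier ensemble.
    cooccurrence: dict mapping (word1, word2) -> co-occurrence score derived
        from the training annotations (the second-level language model).
    """
    # Level 1: aggregate the per-visual-term posteriors into initial scores.
    scores = defaultdict(float)
    for posterior in visual_term_posteriors:
        for word, p in posterior.items():
            scores[word] += p / len(visual_term_posteriors)

    # Level 2: refine each score by how strongly the word co-occurs with the
    # other candidate words (a simple linear interpolation, for illustration).
    refined = {}
    for word, score in scores.items():
        coherence = sum(cooccurrence.get((word, other), 0.0)
                        for other in scores if other != word)
        if len(scores) > 1:
            coherence /= len(scores) - 1
        refined[word] = alpha * score + (1 - alpha) * coherence

    return sorted(refined, key=refined.get, reverse=True)[:top_k]
```

For example, a keyword that receives a middling classifier score but co-occurs strongly with the other candidates (e.g. "grass" alongside "tiger" in Corel-style annotations) is promoted by the second stage, which is the refinement effect the abstract describes.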