Automatic Image Annotation Based on WordNet and Hierarchical Ensembles

  • Wei Li
  • Maosong Sun
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3878)

Abstract

Automatic image annotation is the task of automatically labeling image content with a pre-defined set of keywords, which serve as descriptors of high-level image semantics and thus enable keyword-based semantic image retrieval. A persistent problem in this task is unsatisfactory annotation performance caused by the semantic gap between visual content and keywords. To address this problem, we present a new approach that incorporates lexical semantics into the annotation process. In the training phase, given a set of images labeled with keywords, we first build a basic visual vocabulary of visual terms (extracted from each image to represent its content) and their associated keywords, using K-means clustering combined with semantic constraints obtained from WordNet; we then model the statistical correlation between visual terms and keywords with a two-level hierarchical ensemble composed of probabilistic SVM classifiers and a co-occurrence language model. In the annotation phase, given an unlabeled image, the first-level classifier ensemble predicts the most likely keywords via the posterior probability of each keyword given each visual term, and the second-level language model then refines the annotation using word co-occurrence statistics derived from the keywords of the training images. Experiments on a medium-sized image collection from Corel Stock Photo CDs show that this method outperforms several traditional annotation methods by about 7% in average precision, demonstrating the feasibility and effectiveness of the proposed approach.
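The two-level scheme described in the abstract can be illustrated with a small sketch: first-level classifiers yield per-visual-term keyword posteriors, which are averaged over an image's visual terms; a second-level co-occurrence model then boosts keywords that frequently co-occur with other high-scoring keywords in the training captions. This is a toy illustration, not the authors' implementation: the posteriors, captions, the averaging step, and the interpolation weight `alpha` are all assumptions made for the example.

```python
from collections import Counter
from itertools import combinations

# Hypothetical first-level posteriors P(keyword | visual term), as would be
# produced by probabilistic SVM classifiers (e.g. Platt-scaled outputs).
posteriors = {
    "v1": {"tiger": 0.6, "grass": 0.3, "water": 0.1},
    "v2": {"grass": 0.5, "tiger": 0.3, "sky": 0.2},
}

# Hypothetical training captions used to estimate keyword co-occurrence.
training_captions = [
    ["tiger", "grass"],
    ["tiger", "grass", "water"],
    ["sky", "water"],
]

def first_level_scores(visual_terms):
    """Average keyword posteriors over all visual terms of the image."""
    scores = Counter()
    for v in visual_terms:
        for w, p in posteriors.get(v, {}).items():
            scores[w] += p / len(visual_terms)
    return scores

def cooccurrence_counts(captions):
    """Count unordered keyword pairs appearing in the same caption."""
    counts = Counter()
    for caption in captions:
        for a, b in combinations(sorted(set(caption)), 2):
            counts[(a, b)] += 1
    return counts

def refine(scores, counts, alpha=0.5):
    """Second level: interpolate each keyword's score with the co-occurrence
    support it receives from the other candidate keywords."""
    refined = {}
    for w, s in scores.items():
        support = sum(
            counts.get(tuple(sorted((w, u))), 0) * scores[u]
            for u in scores if u != w
        )
        refined[w] = (1 - alpha) * s + alpha * support / max(len(scores) - 1, 1)
    return refined

scores = first_level_scores(["v1", "v2"])
refined = refine(scores, cooccurrence_counts(training_captions))
top3 = sorted(refined, key=refined.get, reverse=True)[:3]
print(top3)  # the three highest-scoring keywords after refinement
```

Note how the refinement step can promote a keyword such as "water", which receives only weak visual evidence but co-occurs with the strong candidates "tiger" and "grass" in the training captions.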


References

  1. Barnard, K., Duygulu, P., de Freitas, N., Forsyth, D., Blei, D., Jordan, M.I.: Matching words and pictures. Journal of Machine Learning Research 3, 1107–1135 (2003)
  2. Duygulu, P., Barnard, K., de Freitas, N., Forsyth, D.: Object recognition as machine translation: Learning a lexicon for a fixed image vocabulary. In: Heyden, A., Sparr, G., Nielsen, M., Johansen, P. (eds.) ECCV 2002. LNCS, vol. 2353, pp. 97–112. Springer, Heidelberg (2002)
  3. Jeon, J., Lavrenko, V., Manmatha, R.: Automatic image annotation and retrieval using cross-media relevance models. In: Proceedings of the 26th International ACM SIGIR Conference, pp. 119–126 (2003)
  4. Mori, Y., Takahashi, H., Oka, R.: Image-to-word transformation based on dividing and vector quantizing images with words. In: Proceedings of the First International Workshop on Multimedia Intelligent Storage and Retrieval Management (1999)
  5. Chang, E., Goh, K., Sychay, G., Wu, G.: CBSA: Content-based soft annotation for multimodal image retrieval using Bayes point machines. IEEE Transactions on Circuits and Systems for Video Technology, Special Issue on Conceptual and Dynamical Aspects of Multimedia Content Descriptions 13(1), 26–38 (2003)
  6. Shi, J., Malik, J.: Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence 22(8), 888–905 (2000)
  7. Li, J., Wang, J.Z.: Automatic linguistic indexing of pictures by a statistical modeling approach. IEEE Transactions on PAMI 25(10), 1075–1088 (2003)
  8. Lavrenko, V., Manmatha, R., Jeon, J.: A model for learning the semantics of pictures. In: Proceedings of the 16th Annual Conference on Neural Information Processing Systems (2004)
  9. Chang, C.-C., Lin, C.-J.: LIBSVM: a library for support vector machines. Software (2001), available at http://www.csie.ntu.edu.tw/~cjlin/libsvm
  10. Hsu, C.-W., Lin, C.-J.: A comparison of methods for multi-class support vector machines. IEEE Transactions on Neural Networks, 415–425 (2002)
  11. Cusano, C., Ciocca, G., Schettini, R.: Image annotation using SVM. In: Proceedings of SPIE-IS&T Electronic Imaging. SPIE, vol. 5304, pp. 330–338 (2004)
  12. Fellbaum, C.: WordNet: An Electronic Lexical Database. MIT Press, Cambridge (1998)
  13. Pedersen, T., Patwardhan, S., Michelizzi, J.: WordNet::Similarity - measuring the relatedness of concepts. In: Proceedings of the Nineteenth National Conference on Artificial Intelligence, AAAI 2004 (2004)
  14. Jin, R., Chai, J.Y., Si, L.: Effective automatic image annotation via a coherent language model and active learning. In: Proceedings of the ACM International Conference on Multimedia, pp. 892–899. ACM, New York (2004)
  15. Barnard, K., Forsyth, D.A.: Learning the semantics of words and pictures. In: Proceedings of the International Conference on Computer Vision, pp. 408–415 (2001)
  16. Fan, J., Gao, Y., Luo, H.: Multi-level annotation of natural scenes using dominant image components and semantic concepts. In: Proceedings of the ACM International Conference on Multimedia, pp. 540–547. ACM, New York (2004)
  17. Wagstaff, K., Cardie, C., Rogers, S., Schroedl, S.: Constrained k-means clustering with background knowledge. In: Proceedings of the 18th ICML, pp. 577–584 (2001)
  18. Blei, D., Jordan, M.I.: Modeling annotated data. In: Proceedings of the 26th International ACM SIGIR Conference, pp. 127–134 (2003)

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Wei Li (1)
  • Maosong Sun (1)
  1. State Key Lab of Intelligent Technology and Systems, Department of Computer Science and Technology, Tsinghua University, Beijing, China
