A Study of Vocabularies for Image Annotation

  • Allan Hanbury
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4816)

Abstract

In order to evaluate image annotation and object categorisation algorithms, ground truth in the form of a set of images correctly annotated with text describing each image is required. Statistics on the WordNet categories of keywords collected from recent automated image annotation and object categorisation publications and evaluation campaigns are presented. These statistics provide a snapshot of keywords used to train and test current image annotation systems as well as information on the usefulness of WordNet for categorising them.
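The kind of WordNet-based categorisation of keywords described above can be sketched with standard tools. The following is a minimal illustration only, not the paper's method: it assumes NLTK with the WordNet corpus installed (neither is mentioned in the paper), uses a hypothetical keyword list, and tallies the WordNet lexicographer category (e.g. noun.animal, noun.artifact) of the first noun sense of each keyword.

    # Minimal sketch (assumption: NLTK with the WordNet corpus installed).
    # Counts the WordNet lexicographer category of each keyword's first noun sense.
    from collections import Counter
    from nltk.corpus import wordnet as wn

    keywords = ["dog", "car", "sky", "grass", "aeroplane"]  # illustrative keywords only

    counts = Counter()
    for kw in keywords:
        synsets = wn.synsets(kw, pos=wn.NOUN)
        if synsets:
            counts[synsets[0].lexname()] += 1  # category of the most frequent sense
        else:
            counts["not in WordNet"] += 1      # e.g. proper nouns absent from WordNet

    for category, n in counts.most_common():
        print(category, n)

Keywords with no noun synset fall into a separate bucket, which is one simple way to gauge how well WordNet covers a given annotation vocabulary.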


Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Allan Hanbury
  1. Pattern Recognition and Image Processing Group (PRIP), Institute of Computer Aided Automation, Vienna University of Technology, Favoritenstraße 9/1832, A-1040 Vienna, Austria