Empirical Investigations on Benchmark Tasks for Automatic Image Annotation

  • Ville Viitaniemi
  • Jorma Laaksonen
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4781)

Abstract

Automatic image annotation aims at labeling images with keywords. In this paper we investigate three annotation benchmark tasks used in the literature to evaluate the performance of annotation systems. We empirically compare the first two of the tasks, the 5000 Corel images task and the Corel categories task, by applying a family of annotation system configurations derived from our PicSOM image content analysis framework. We establish an empirical correspondence between performance levels in the tasks by studying the performance of our system configurations, along with figures reported in the literature. We also consider the ImageCLEF 2006 Object Annotation Task, which has previously been found difficult. By experimenting with the data, we gain insight into the reasons that make the ImageCLEF task difficult. In the course of our experiments, we demonstrate that in these three tasks the PicSOM system, based on fusion of numerous global image features, outperforms the other considered annotation methods.
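Annotation benchmarks such as the 5000 Corel images task are commonly scored with per-keyword precision and recall over the test set. The sketch below illustrates that scoring scheme in Python; the function name and toy data are our own illustration, not taken from the paper.

```python
from collections import defaultdict

def per_word_precision_recall(ground_truth, predictions):
    """Per-keyword precision and recall over a test set.

    ground_truth, predictions: parallel lists of keyword sets, one per image.
    For each keyword w:
      recall(w)    = correctly annotated images / images whose ground truth has w
      precision(w) = correctly annotated images / images the system labels with w
    """
    gt_count = defaultdict(int)    # images whose ground truth contains w
    pred_count = defaultdict(int)  # images the system annotates with w
    correct = defaultdict(int)     # images where w is both predicted and true
    for gt, pred in zip(ground_truth, predictions):
        for w in gt:
            gt_count[w] += 1
        for w in pred:
            pred_count[w] += 1
        for w in gt & pred:
            correct[w] += 1
    words = set(gt_count)
    precision = {w: (correct[w] / pred_count[w] if pred_count[w] else 0.0)
                 for w in words}
    recall = {w: correct[w] / gt_count[w] for w in words}
    return precision, recall

# Toy example: two test images, each with two ground-truth keywords
gt = [{"sky", "water"}, {"sky", "grass"}]
pred = [{"sky", "grass"}, {"sky", "water"}]
p, r = per_word_precision_recall(gt, pred)
# "sky" is predicted correctly for both images; "water" and "grass" for neither
```

Benchmark results are then typically summarized by the means of these per-keyword values and by the number of keywords with nonzero recall.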

Keywords

Training Image · Annotation System · Image Annotation · Annotation Task · Benchmark Task
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Ville Viitaniemi¹
  • Jorma Laaksonen¹
  1. Adaptive Informatics Research Centre, Helsinki University of Technology, P.O. Box 5400, FIN-02015 TKK, Finland