
Semantic Annotation of Image Groups with Self-organizing Maps

  • Markus Koskela
  • Jorma Laaksonen
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3568)

Abstract

Automatic image annotation has recently attracted considerable attention as a method for facilitating semantic indexing and text-based retrieval of visual content. In this paper, we propose the use of multiple Self-Organizing Maps for modeling various semantic concepts and annotating new input images automatically. The effect of the semantic gap is compensated for by annotating multiple images concurrently, which enables more accurate estimation of the semantic concepts’ distributions. The presented method is applied to annotating images from a freely available database consisting of images of different semantic categories.
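
To make the general idea concrete, the following Python sketch illustrates one way to annotate a group of images with Self-Organizing Maps: train a small SOM on visual features, form a hit distribution on the map surface for each semantic concept, and score an unannotated group by summing the distributions' values at the group's best-matching units. This is a minimal illustration, not the authors' PicSOM implementation; all function names, parameters, and the toy feature data are assumptions introduced for this example.

```python
# Illustrative sketch only: a simple rectangular SOM with per-concept
# hit distributions, used to annotate a group of images concurrently.
import numpy as np

def train_som(data, rows=10, cols=10, n_iter=2000, lr0=0.5, sigma0=3.0, seed=0):
    """Train a small rectangular SOM with a Gaussian neighborhood function."""
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    weights = rng.normal(size=(rows, cols, dim))
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                indexing="ij"), axis=-1)
    for t in range(n_iter):
        x = data[rng.integers(len(data))]
        # Best-matching unit (BMU) for this sample.
        d = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(d), d.shape)
        # Linearly decaying learning rate and neighborhood radius.
        frac = t / n_iter
        lr = lr0 * (1.0 - frac)
        sigma = sigma0 * (1.0 - frac) + 0.5
        h = np.exp(-np.sum((grid - np.array(bmu)) ** 2, axis=-1) / (2 * sigma ** 2))
        weights += lr * h[..., None] * (x - weights)
    return weights

def bmu_of(weights, x):
    d = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(np.argmin(d), d.shape)

def concept_distribution(weights, concept_features):
    """Normalized hit histogram of a concept's example images on the SOM
    surface, read here as a rough class distribution for that concept."""
    hist = np.zeros(weights.shape[:2])
    for x in concept_features:
        hist[bmu_of(weights, x)] += 1.0
    return hist / hist.sum()

def annotate_group(weights, concept_dists, group_features):
    """Score each concept for a *group* of images by summing the concept
    distributions' values at the group's BMUs; annotating several images
    concurrently smooths out single-image noise."""
    scores = {c: 0.0 for c in concept_dists}
    for x in group_features:
        bmu = bmu_of(weights, x)
        for c, dist in concept_dists.items():
            scores[c] += dist[bmu]
    return max(scores, key=scores.get), scores

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Toy "visual features": two concepts with different feature statistics.
    sunset = rng.normal(loc=+1.0, size=(200, 8))
    forest = rng.normal(loc=-1.0, size=(200, 8))
    som = train_som(np.vstack([sunset, forest]))
    dists = {"sunset": concept_distribution(som, sunset),
             "forest": concept_distribution(som, forest)}
    new_group = rng.normal(loc=+1.0, size=(5, 8))   # five unannotated images
    label, scores = annotate_group(som, dists, new_group)
    print(label, scores)
```

In practice the paper's setting uses several SOMs trained on different MPEG-7 visual features rather than the single toy map above, but the scoring step, accumulating concept-distribution values over a whole image group, follows the same pattern.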

Keywords

Image Retrieval · Relevance Feedback · Semantic Concept · Salient Object · Image Group

Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Markus Koskela¹
  • Jorma Laaksonen¹

  1. Laboratory of Computer and Information Science, Helsinki University of Technology, Finland