Recognizing Objects and Scenes in News Videos

  • Muhammet Baştan
  • Pınar Duygulu
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4071)


We propose a new approach to recognizing objects and scenes in news videos, motivated by the availability of large video collections. This approach treats the recognition problem as the translation of visual elements to words. The correspondences between visual elements and words are learned using methods adapted from statistical machine translation, and are used to predict words for particular image regions (region naming), for entire images (auto-annotation), or to associate the automatically generated speech transcript text with the correct video frames (video alignment). Experimental results are presented on the TRECVID 2004 data set, which consists of about 150 hours of news videos associated with manual annotations and speech transcript text. The results show that retrieval performance can be improved by associating visual and textual elements. In addition, an extensive analysis of features is provided and a method to combine features is proposed.
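The core of the translation approach can be illustrated with a small sketch. Discrete visual tokens ("blobs", e.g. cluster labels of region features) and annotation words are treated as a parallel corpus, and the lexicon p(word | blob) is estimated with IBM Model 1 style EM; a region is then named by the most probable word for its blob. This is a minimal illustration under simplifying assumptions (toy data, bag-of-tokens images, no NULL blob); the function names are hypothetical, not from the paper.

```python
from collections import defaultdict

def train_ibm1(pairs, iterations=10):
    """Estimate translation probabilities t[(word, blob)] = p(word | blob)
    with IBM Model 1 style EM. `pairs` is a list of (blobs, words) tuples,
    one per annotated image."""
    words_vocab = {w for _, words in pairs for w in words}
    # Uniform initialisation over co-occurring (word, blob) pairs.
    t = {}
    init = 1.0 / len(words_vocab)
    for blobs, words in pairs:
        for b in blobs:
            for w in words:
                t[(w, b)] = init
    for _ in range(iterations):
        count = defaultdict(float)   # expected (word, blob) alignment counts
        total = defaultdict(float)   # normaliser per blob
        # E-step: distribute each word's count over the blobs in its image.
        for blobs, words in pairs:
            for w in words:
                z = sum(t[(w, b)] for b in blobs)
                for b in blobs:
                    c = t[(w, b)] / z
                    count[(w, b)] += c
                    total[b] += c
        # M-step: renormalise to get new translation probabilities.
        for (w, b) in t:
            t[(w, b)] = count[(w, b)] / total[b]
    return t

def name_region(blob, t, vocabulary):
    """Region naming: the most probable word for a given visual token."""
    return max(vocabulary, key=lambda w: t.get((w, blob), 0.0))

# Toy annotated images: each blob consistently co-occurs with its word.
pairs = [
    (["b_sky", "b_sea"],   ["sky", "sea"]),
    (["b_sky", "b_grass"], ["sky", "grass"]),
    (["b_sea", "b_grass"], ["sea", "grass"]),
]
t = train_ibm1(pairs, iterations=20)
print(name_region("b_sky", t, ["sky", "sea", "grass"]))  # → sky
```

Auto-annotation follows the same lexicon: rank the vocabulary by the sum of p(word | blob) over all blobs in a frame and emit the top words.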


Keywords: Automatic Speech Recognition · Manual Annotation · Mean Average Precision · News Video · Statistical Machine Translation





Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Muhammet Baştan (1)
  • Pınar Duygulu (1)
  1. Department of Computer Engineering, Bilkent University, Ankara, Turkey
