Linguistic Estimation of Topic Difficulty in Cross-Language Image Retrieval

  • Michael Grubinger
  • Clement Leung
  • Paul Clough
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4022)


Selecting suitable topics to assess system effectiveness is a crucial part of any benchmark, particularly one for retrieval systems. It involves establishing a range of example search requests (or topics) that test various aspects of the retrieval systems under evaluation. To assist with topic selection, we present a measure of topic difficulty for cross-language image retrieval. This measure enabled us to ground the topic generation process for ImageCLEF 2005 within a methodical and reliable framework. This paper describes the measure, provides concrete examples for every aspect of topic complexity, and analyses the topics used in the ImageCLEF 2003, 2004 and 2005 ad-hoc tasks.


Keywords: Image Retrieval, Query Expansion, Mean Average Precision, Translation Quality, Search Request




Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Michael Grubinger (1)
  • Clement Leung (1)
  • Paul Clough (2)
  1. School of Computer Science and Mathematics, Victoria University, Melbourne, Australia
  2. Department of Information Studies, Sheffield University, Sheffield, UK
