Informativeness for Adhoc IR Evaluation: A Measure that Prevents Assessing Individual Documents

  • Romain Deveaud
  • Véronique Moriceau
  • Josiane Mothe
  • Eric SanJuan
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9626)


Informativeness measures have been used in interactive information retrieval and automatic summarization evaluation. Indeed, as opposed to adhoc retrieval, these two tasks cannot rely on the Cranfield evaluation paradigm, in which retrieved documents are compared against static lists of documents judged relevant to the query. In this paper, we explore the use of informativeness measures to evaluate the adhoc task. The advantage of the proposed evaluation framework is that it does not rely on an exhaustive reference and can be used in a changing environment in which new documents appear and for which relevance has not been assessed. We show that the correlation between the official system rankings and the informativeness measure is particularly high for most of the TREC adhoc tracks.


Keywords: Information retrieval · Evaluation · Informativeness · Adhoc retrieval



Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Romain Deveaud (1, 3)
  • Véronique Moriceau (2)
  • Josiane Mothe (1)
  • Eric SanJuan (1, 3)
  1. IRIT-CNRS UMR5505, Université de Toulouse, Toulouse, France
  2. LIMSI-CNRS, Université de Paris-Sud, Université de Paris-Saclay, Paris, France
  3. LIA, Agorantic, Université d’Avignon, Avignon, France
