Evaluating Scalability in Information Retrieval with Multigraded Relevance

  • Amélie Imafouo
  • Michel Beigbeder
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4182)


From the user’s point of view, in large-scale environments it is desirable to have Information Retrieval Systems (IRS) that retrieve documents according to their relevance levels. Relevance levels have been studied in several previous Information Retrieval (IR) works, while a few other IR works have addressed the relationship between IRS effectiveness and collection size. These latter works used standard IR measures on collections of increasing size to analyze the scalability of IRS effectiveness. In this work, we bring together these two IR issues (multigraded relevance and scalability) by designing new metrics for evaluating the ability of an IRS to rank documents according to their relevance levels as collection size increases.
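
The paper’s own scalability-oriented metrics are not reproduced in this abstract. For context only, the sketch below shows a standard graded-relevance measure of the same family, discounted cumulated gain (DCG) with normalisation, as popularised by Järvelin and Kekäläinen; the exponential gain function and the toy judgment levels are illustrative assumptions, not values taken from the paper.

```python
# Illustrative sketch only: a standard cumulated-gain measure for graded
# relevance (DCG / nDCG), shown for background. This is NOT the
# scalability-oriented metrics introduced in the paper.
import math

def dcg(relevance_levels, gain=lambda level: 2 ** level - 1):
    """Discounted cumulated gain of a ranked list of graded judgments.

    `relevance_levels` lists the judged level of each retrieved document,
    in rank order (e.g. 0 = not relevant, 1 = relevant, 2 = highly relevant).
    """
    return sum(gain(level) / math.log2(rank + 2)  # rank 0 gets discount log2(2) = 1
               for rank, level in enumerate(relevance_levels))

def ndcg(relevance_levels):
    """DCG normalised by the ideal ordering (documents sorted by level)."""
    ideal = dcg(sorted(relevance_levels, reverse=True))
    return dcg(relevance_levels) / ideal if ideal > 0 else 0.0

# Toy example: a run that ranks a highly relevant document (level 2) first.
print(ndcg([2, 0, 1, 0]))  # ~0.96 for this ranking
```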


Keywords: Information Retrieval, Information Gain, Information Retrieval System, Relevance Level, Gain Function





Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Amélie Imafouo (1)
  • Michel Beigbeder (1)
  1. Ecole Nationale Supérieure des Mines of Saint-Etienne, Saint-Etienne, France
