Cumulated Relative Position: A Metric for Ranking Evaluation

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 7488)

Abstract

The development of multilingual and multimedia information access systems calls for proper evaluation methodologies to ensure that they meet the expected user requirements and provide the desired effectiveness. IR research offers a strong evaluation methodology and a range of evaluation metrics, such as MAP and (n)DCG. In this paper we propose a new metric for ranking evaluation, the Cumulated Relative Position (CRP). We start from the observation that a document of a given degree of relevance may be ranked too early or too late with respect to the ideal ranking of documents for a query. Its relative position may be negative, indicating too early a ranking; zero, indicating a correct ranking; or positive, indicating too late a ranking. By cumulating these relative positions we obtain, at each rank, the net effect of document displacements: the CRP. We first define the metric formally and then discuss its properties, its relationship to prior metrics, and its visualization. Finally, we propose different visualizations of the CRP, exploiting a test collection to demonstrate its behavior.
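The idea described in the abstract can be sketched in code. This is a minimal illustration based only on the abstract's description, not on the paper's formal definition: a document's relative position is taken to be zero when its rank falls within the span of ranks that its relevance grade occupies in the ideal ranking, negative when it appears before that span (too early), and positive when it appears after it (too late); the CRP is the running sum of these values. The function names and the span-based handling of ties are assumptions for illustration.

```python
def relative_position(grades):
    """Relative position per rank (sketch). `grades` lists the relevance
    grade of each retrieved document, in ranked order; higher = more
    relevant. A document is in place (RP = 0) when its rank lies within
    the span its grade occupies in the ideal ranking."""
    ideal = sorted(grades, reverse=True)
    # span of ideal ranks (1-based, inclusive) occupied by each grade
    spans = {}
    for rank, g in enumerate(ideal, start=1):
        lo, hi = spans.get(g, (rank, rank))
        spans[g] = (min(lo, rank), max(hi, rank))
    rp = []
    for rank, g in enumerate(grades, start=1):
        lo, hi = spans[g]
        if rank < lo:
            rp.append(rank - lo)   # ranked too early: negative
        elif rank > hi:
            rp.append(rank - hi)   # ranked too late: positive
        else:
            rp.append(0)           # within its ideal span: correct
    return rp

def cumulated_relative_position(grades):
    """CRP at each rank: the running sum of relative positions."""
    out, total = [], 0
    for v in relative_position(grades):
        total += v
        out.append(total)
    return out

# An ideal ranking yields CRP = 0 at every rank:
print(cumulated_relative_position([2, 1, 0]))  # [0, 0, 0]
# Swapping the top two documents displaces each by one rank:
print(cumulated_relative_position([1, 2, 0]))  # [-1, 0, 0]
```

In this sketch the CRP curve stays at zero for a perfect ranking and dips negative while too-early documents are outstanding, which matches the abstract's reading of negative as "too early" and positive as "too late".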




Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Angelini, M. et al. (2012). Cumulated Relative Position: A Metric for Ranking Evaluation. In: Catarci, T., Forner, P., Hiemstra, D., Peñas, A., Santucci, G. (eds) Information Access Evaluation. Multilinguality, Multimodality, and Visual Analytics. CLEF 2012. Lecture Notes in Computer Science, vol 7488. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-33247-0_13

  • DOI: https://doi.org/10.1007/978-3-642-33247-0_13

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-33246-3

  • Online ISBN: 978-3-642-33247-0

  • eBook Packages: Computer Science, Computer Science (R0)
