CLEF 2012: Information Access Evaluation. Multilinguality, Multimodality, and Visual Analytics, pp. 112–123
Cumulated Relative Position: A Metric for Ranking Evaluation
Abstract
The development of multilingual and multimedia information access systems calls for proper evaluation methodologies to ensure that they meet the expected user requirements and provide the desired effectiveness. IR research offers a strong evaluation methodology and a range of evaluation metrics, such as MAP and (n)DCG. In this paper, we propose a new metric for ranking evaluation, the Cumulated Relative Position (CRP). We start from the observation that a document of a given degree of relevance may be ranked too early or too late with respect to the ideal ranking of documents for a query. Its relative position may be negative, indicating too-early ranking; zero, indicating correct ranking; or positive, indicating too-late ranking. By cumulating these relative positions we obtain, at each ranked position, the net effect of document displacements: the CRP. We first define the metric formally and then discuss its properties, its relationship to prior metrics, and its visualization. Finally, we propose different visualizations of CRP, exploiting a test collection to demonstrate its behavior.
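The abstract's idea can be illustrated with a small sketch. Under the assumption (hypothetical here; the paper gives the formal definition) that a document's relative position is zero when its rank falls inside the interval of positions its relevance grade occupies in the ideal ranking, and otherwise the signed distance to the nearest end of that interval, CRP at rank j is the running sum of these relative positions:

```python
def relative_positions(run_grades):
    """Signed displacement of each ranked document w.r.t. the ideal ranking.

    run_grades: relevance grades of the retrieved documents, in ranked order.
    Returns one relative position per rank: negative = too early,
    zero = correctly placed, positive = too late.
    """
    # Ideal ranking: the same grades sorted in decreasing order of relevance.
    ideal = sorted(run_grades, reverse=True)

    # For each grade, the interval [lo, hi] of 1-based positions it
    # occupies in the ideal ranking.
    intervals = {}
    for pos, g in enumerate(ideal, start=1):
        lo, hi = intervals.get(g, (pos, pos))
        intervals[g] = (min(lo, pos), max(hi, pos))

    rps = []
    for pos, g in enumerate(run_grades, start=1):
        lo, hi = intervals[g]
        if pos < lo:            # ranked earlier than any ideal position
            rps.append(pos - lo)
        elif pos > hi:          # ranked later than any ideal position
            rps.append(pos - hi)
        else:                   # within the ideal interval: no displacement
            rps.append(0)
    return rps


def crp(run_grades):
    """Cumulated Relative Position: running sum of relative positions."""
    total, curve = 0, []
    for rp in relative_positions(run_grades):
        total += rp
        curve.append(total)
    return curve
```

For example, with grades on a 0–2 scale, a perfect ranking such as `[2, 2, 1, 0]` yields a flat CRP curve of zeros, while a misordered run such as `[1, 2, 0, 2]` produces a curve that dips negative where documents arrive too early and climbs back as displaced relevant documents appear too late.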
Keywords
Information Retrieval · Relevant Document · Mean Average Precision · Test Collection · Relevance Degree