Improving Ranking Evaluation Employing Visual Analytics

  • Marco Angelini
  • Nicola Ferro
  • Giuseppe Santucci
  • Gianmaria Silvello
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8138)


In order to satisfy diverse user needs and support challenging tasks, it is essential to provide automated tools for examining system behavior, both visually and analytically. This paper presents an analytical model for examining rankings produced by IR systems, based on the discounted cumulative gain (DCG) family of metrics, together with visualizations for performing failure and “what-if” analyses.
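The DCG family of metrics mentioned in the abstract can be illustrated with a short sketch. This is the common log2-discount formulation of DCG and its normalized variant nDCG, not the specific analytical model developed in the paper; the function names `dcg` and `ndcg` are illustrative:

```python
import math

def dcg(gains, cutoff=None):
    """Discounted cumulative gain: each rank's relevance gain is
    discounted by log2 of its (1-based) rank position."""
    if cutoff is not None:
        gains = gains[:cutoff]
    # i is 0-based, so rank i+1 gets a discount of log2(i+2);
    # the gain at rank 1 is divided by log2(2) = 1, i.e. undiscounted.
    return sum(g / math.log2(i + 2) for i, g in enumerate(gains))

def ndcg(gains, cutoff=None):
    """Normalize by the DCG of the ideal (descending-gain) ranking,
    yielding a value in [0, 1]."""
    ideal = dcg(sorted(gains, reverse=True), cutoff)
    return dcg(gains, cutoff) / ideal if ideal > 0 else 0.0
```

For example, a ranking whose gains are already in descending order achieves `ndcg(...) == 1.0`, while swapping a highly relevant document to a lower rank lowers the score, which is what makes the metric useful for the failure and “what-if” analyses the paper targets.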


Keywords: Information Retrieval · Failure Analysis · Ranking Model · Information Retrieval System · Ranking List




References

  1. Angelini, M., Ferro, N., Santucci, G., Silvello, G.: Visual Interactive Failure Analysis: Supporting Users in Information Retrieval Evaluation. In: Proc. of the 4th Information Interaction in Context Symposium (IIiX 2012), pp. 194–203. ACM, New York (2012)
  2. Berkhin, P.: A Survey of Clustering Data Mining Techniques. In: Kogan, J., Nicholas, C., Teboulle, M. (eds.) Grouping Multidimensional Data, pp. 25–71. Springer, Heidelberg (2006)
  3. Geng, X., Liu, T.-Y., Qin, T., Li, H.: Feature Selection for Ranking. In: Kraaij, W., de Vries, A.P., Clarke, C.L.A., Fuhr, N., Kando, N. (eds.) Proc. 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2007), pp. 407–414. ACM Press, New York (2007)
  4. Harman, D., Buckley, C.: Overview of the Reliable Information Access Workshop. Information Retrieval 12(6), 615–641 (2009)
  5. Järvelin, K., Kekäläinen, J.: Cumulated Gain-Based Evaluation of IR Techniques. ACM Transactions on Information Systems (TOIS) 20(4), 422–446 (2002)
  6. Liu, T.-Y.: Learning to Rank for Information Retrieval. Foundations and Trends in Information Retrieval 3(3), 225–331 (2009)
  7. Liu, T.-Y., Xu, J., Qin, T., Xiong, W., Li, H.: LETOR: Benchmark Dataset for Research on Learning to Rank for Information Retrieval. In: Joachims, T., Li, H., Liu, T.-Y., Zhai, C. (eds.) SIGIR 2007 Workshop on Learning to Rank for Information Retrieval (2007)
  8. Teevan, J., Dumais, S.T., Horvitz, E.: Potential for Personalization. ACM Transactions on Computer-Human Interaction (TOCHI) 17(1), 1–31 (2010)
  9. van Rijsbergen, C.J.: Information Retrieval, 2nd edn. Butterworths, London (1979)

Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Marco Angelini (2)
  • Nicola Ferro (1)
  • Giuseppe Santucci (2)
  • Gianmaria Silvello (1)
  1. University of Padua, Italy
  2. “La Sapienza” University of Rome, Italy
