Overview of the INEX 2008 Entity Ranking Track

  • Gianluca Demartini
  • Arjen P. de Vries
  • Tereza Iofciu
  • Jianhan Zhu
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5631)

Abstract

In many contexts a search engine user would prefer to retrieve entities instead of just documents. Example queries include “Italian nobel prize winners”, “Formula 1 drivers that won the Monaco Grand Prix”, or “German spoken Swiss cantons”. The XML Entity Ranking (XER) track at INEX provides a forum aimed at standardizing evaluation procedures for entity retrieval. This paper describes the XER tasks and the evaluation procedure used at the XER track in 2008, focusing on the sampled pooling strategy first applied this year. We conclude with a brief discussion of the predominant participant approaches and their effectiveness.


References

  1. de Vries, A.P., Vercoustre, A.-M., Thom, J.A., Craswell, N., Lalmas, M.: Overview of the INEX 2007 Entity Ranking Track. In: Fuhr, N., Kamps, J., Lalmas, M., Trotman, A. (eds.) INEX 2007. LNCS, vol. 4862, pp. 245–251. Springer, Heidelberg (2008)
  2. Geva, S., Kamps, J., Trotman, A. (eds.): Advances in Focused Retrieval: 7th International Workshop of the Initiative for the Evaluation of XML Retrieval (INEX 2008). LNCS, vol. 5631. Springer, Heidelberg (2009)
  3. Jiang, J., Lu, W., Rong, X., Gao, Y.: Adapting Expert Search Models to Rank Entities. In: Geva et al. [2]
  4. Kaptein, R., Kamps, J.: Finding Entities in Wikipedia using Links and Categories. In: Geva et al. [2]
  5. Craswell, N., de Vries, A.P., Soboroff, I.: Overview of the TREC 2005 Enterprise Track. In: Proc. of TREC 2005 (2006)
  6. Rode, H., Hiemstra, D., de Vries, A.P., Serdyukov, P.: Efficient XML and Entity Retrieval with PF/Tijah: CWI and University of Twente at INEX 2008. In: Geva et al. [2]
  7. Vercoustre, A.-M., Pehcevski, J., Naumovski, V.: Topic Difficulty Prediction in Entity Ranking. In: Geva et al. [2]
  8. Yilmaz, E., Aslam, J.A.: Estimating Average Precision with Incomplete and Imperfect Judgments. In: Yu, P.S., Tsotras, V.J., Fox, E.A., Liu, B. (eds.) CIKM 2006, pp. 102–111. ACM Press, New York (2006)
  9. Yilmaz, E., Kanoulas, E., Aslam, J.A.: A Simple and Efficient Sampling Method for Estimating AP and NDCG. In: Myaeng, S.-H., Oard, D.W., Sebastiani, F., Chua, T.-S., Leong, M.-K. (eds.) SIGIR 2008, pp. 603–610. ACM, New York (2008)
  10. Zhu, J., de Vries, A.P., Demartini, G., Iofciu, T.: Relation Retrieval for Entities and Experts. In: Future Challenges in Expertise Retrieval (fCHER 2008), SIGIR 2008 Workshop, Singapore (July 2008)

Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Gianluca Demartini (1)
  • Arjen P. de Vries (2)
  • Tereza Iofciu (1)
  • Jianhan Zhu (3)
  1. L3S Research Center, Leibniz Universität Hannover, Hannover, Germany
  2. CWI & Delft University of Technology, The Netherlands
  3. University College London, Ipswich, UK