Overview of the INEX 2010 Ad Hoc Track

  • Paavo Arvola
  • Shlomo Geva
  • Jaap Kamps
  • Ralf Schenkel
  • Andrew Trotman
  • Johanna Vainio
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6932)


This paper gives an overview of the INEX 2010 Ad Hoc Track. The main goals of the Ad Hoc Track were threefold. The first goal was to study focused retrieval under resource-restricted conditions, such as a small-screen mobile device or a document summary on a hit-list. This leads to variants of the focused retrieval tasks that address the impact of result length and reading effort, treating focused retrieval as a form of "snippet" retrieval. The second goal was to extend the ad hoc retrieval test collection on the INEX 2009 Wikipedia Collection with additional topics and judgments; for this reason the Ad Hoc Track topics and assessment procedure stayed unchanged. The third goal was to examine the trade-off between effectiveness and efficiency by continuing the Efficiency Track as a task within the Ad Hoc Track. The INEX 2010 Ad Hoc Track featured four tasks: the Relevant in Context Task, the Restricted Relevant in Context Task, the Restricted Focused Task, and the Efficiency Task. We discuss the setup of the track and the results for each of the four tasks.


Keywords: Test Collection · Context Task · Relevant Text · Focus Task · Passage Retrieval





Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Paavo Arvola (1)
  • Shlomo Geva (2)
  • Jaap Kamps (3)
  • Ralf Schenkel (4)
  • Andrew Trotman (5)
  • Johanna Vainio (1)

  1. University of Tampere, Tampere, Finland
  2. Queensland University of Technology, Brisbane, Australia
  3. University of Amsterdam, Amsterdam, The Netherlands
  4. Max-Planck-Institut für Informatik, Saarbrücken, Germany
  5. University of Otago, Dunedin, New Zealand
