
On the Contributions of Topics to System Evaluation

  • Conference paper
Advances in Information Retrieval (ECIR 2011)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 6611)

Abstract

We consider the selection of good subsets of topics for system evaluation. It has previously been suggested that some individual topics, and some subsets of topics, are better for system evaluation than others: given limited resources, choosing the best subset of topics may give significantly better prediction of overall system effectiveness than (for example) choosing random subsets. Earlier experimental results are extended, with particular reference to generalisation: the ability of a subset of topics, selected on the basis of one collection of system runs, to perform well in evaluating another collection of system runs. Generalisability turns out to be hard to establish; it is not at all clear that it is possible to identify subsets of topics that are good for general evaluation.
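The selection problem described in the abstract can be sketched in a few lines. This is a minimal illustration under assumed conventions, not the paper's actual method: the function names (`subset_quality`, `best_random_subset`), the use of Kendall's tau-a as the ranking-agreement measure, and the random candidate search are all choices made here for clarity. The input is assumed to be a matrix where `scores[s][t]` is the effectiveness (e.g. average precision) of system `s` on topic `t`.

```python
import random
from itertools import combinations


def kendall_tau(a, b):
    """Kendall's tau-a rank correlation between two equal-length score lists."""
    pairs = list(combinations(range(len(a)), 2))
    concordant = sum(1 for i, j in pairs if (a[i] - a[j]) * (b[i] - b[j]) > 0)
    discordant = sum(1 for i, j in pairs if (a[i] - a[j]) * (b[i] - b[j]) < 0)
    return (concordant - discordant) / len(pairs)


def subset_quality(scores, topics):
    """How well mean effectiveness over `topics` predicts the ranking of
    systems by mean effectiveness over the full topic set."""
    full = [sum(row) / len(row) for row in scores]
    sub = [sum(row[t] for t in topics) / len(topics) for row in scores]
    return kendall_tau(full, sub)


def best_random_subset(scores, k, candidates=500, seed=0):
    """Exhaustive search over k-subsets is exponential in k, so sample
    random candidates and keep the one that best reproduces the full
    ranking of systems."""
    rng = random.Random(seed)
    n_topics = len(scores[0])
    best = max(
        (tuple(rng.sample(range(n_topics), k)) for _ in range(candidates)),
        key=lambda s: subset_quality(scores, s),
    )
    return best, subset_quality(scores, best)
```

A generalisation test in the paper's sense would select `best` on one matrix of system runs and then compute `subset_quality` for that same subset against a second, held-out matrix of runs; the abstract's finding is that high quality on the first matrix does not reliably carry over to the second.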



Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Robertson, S. (2011). On the Contributions of Topics to System Evaluation. In: Clough, P., et al. Advances in Information Retrieval. ECIR 2011. Lecture Notes in Computer Science, vol 6611. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-20161-5_14

  • DOI: https://doi.org/10.1007/978-3-642-20161-5_14

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-20160-8

  • Online ISBN: 978-3-642-20161-5
