Crowdsourcing Assessments for XML Ranked Retrieval

  • Omar Alonso
  • Ralf Schenkel
  • Martin Theobald
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5993)

Abstract

Crowdsourcing has attracted considerable attention as a viable approach to conducting IR evaluations. Through a series of experiments on INEX data, this paper shows that crowdsourcing can be a good alternative for relevance assessment in the context of XML retrieval.
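At the heart of such an evaluation is the comparison of judgments collected from crowd workers against official assessments. The sketch below is a minimal illustration, not the paper's actual methodology: it aggregates per-document worker votes by majority and measures simple agreement with gold INEX-style labels. All function names, variable names, and data values are hypothetical.

```python
from collections import Counter

def majority_vote(labels):
    """Aggregate one document's worker labels (e.g., 'relevant'/'not relevant')
    into a single crowd judgment by taking the most frequent label.
    Ties are broken arbitrarily."""
    return Counter(labels).most_common(1)[0][0]

def agreement(crowd_judgments, gold_judgments):
    """Fraction of (topic, document) pairs on which the aggregated crowd
    label matches the official (gold) assessment."""
    shared = set(crowd_judgments) & set(gold_judgments)
    if not shared:
        return 0.0
    matches = sum(crowd_judgments[key] == gold_judgments[key] for key in shared)
    return matches / len(shared)

# Hypothetical data: worker votes per (topic, document) pair and gold labels.
worker_votes = {
    ("topic_544", "doc_12"): ["relevant", "relevant", "not relevant"],
    ("topic_544", "doc_37"): ["not relevant", "not relevant", "not relevant"],
}
gold = {
    ("topic_544", "doc_12"): "relevant",
    ("topic_544", "doc_37"): "not relevant",
}

crowd = {key: majority_vote(votes) for key, votes in worker_votes.items()}
print(f"Agreement with official assessments: {agreement(crowd, gold):.2f}")
```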



Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Omar Alonso (1)
  • Ralf Schenkel (1, 2)
  • Martin Theobald (1)
  1. Max-Planck-Institut für Informatik, Saarbrücken, Germany
  2. Saarland University, Saarbrücken, Germany
