Crowdsourcing Assessments for XML Ranked Retrieval
Crowdsourcing has attracted considerable attention as a practical approach to conducting IR evaluations. Through a series of experiments on INEX data, this paper shows that crowdsourcing can be a good alternative to expert judging for relevance assessment in the context of XML retrieval.
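The abstract only states the high-level claim, but the underlying comparison is typically set up by collecting several crowd judgments per document, aggregating them, and measuring agreement against the official assessments. The sketch below illustrates one common variant of that pipeline, majority-vote aggregation followed by Cohen's kappa against expert labels; all data, identifiers, and the choice of majority voting are illustrative assumptions, not details taken from the paper.

```python
from collections import Counter

def majority_vote(labels):
    """Aggregate one document's worker labels (e.g. 0/1 relevance) by majority."""
    return Counter(labels).most_common(1)[0][0]

def cohen_kappa(a, b):
    """Chance-corrected agreement between two equal-length label sequences."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Expected agreement if the two assessors labeled independently.
    pa, pb = Counter(a), Counter(b)
    expected = sum((pa[c] / n) * (pb[c] / n) for c in set(a) | set(b))
    return (observed - expected) / (1 - expected)

# Hypothetical data: three worker judgments per document, plus expert gold labels.
worker_labels = {"doc1": [1, 1, 0], "doc2": [0, 0, 0], "doc3": [1, 0, 1]}
gold = {"doc1": 1, "doc2": 0, "doc3": 0}

crowd = [majority_vote(worker_labels[d]) for d in gold]
expert = [gold[d] for d in gold]
print("kappa(crowd, expert) =", round(cohen_kappa(crowd, expert), 3))
```

On this toy data the crowd matches the expert on two of three documents, giving a kappa of 0.4; in a real study the same computation would run over the full INEX topic pool.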