Encyclopedia of Database Systems

Living Edition
| Editors: Ling Liu, M. Tamer Özsu

Cost and Quality Trade-Offs in Crowdsourcing

Living reference work entry
DOI: https://doi.org/10.1007/978-1-4899-7993-3_80658-1

Definition

In crowdsourcing, some tasks are performed by the crowd for enjoyment [8] or social reward [6]. However, arbitrary tasks are seldom enjoyable, and social reward is usually tied to specific platforms, such as Wikipedia (https://en.wikipedia.org/wiki/Main_Page) and Stack Overflow (http://stackoverflow.com/). Thus, for an arbitrary task, a requester typically needs to offer an incentive (i.e., the cost of the task) to motivate workers to perform it. The cost per task is usually paid as financial compensation, often a few cents per task. The quality of a crowdsourcing task is usually measured as accuracy. Since workers are humans who may make errors when performing tasks, the results returned by the crowd contain errors as a consequence. The trade-off between cost and quality refers to the relationship between the financial incentive offered and the resulting performance.
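To make the trade-off concrete, the sketch below shows how task replication with majority voting, a common quality-control technique (not one prescribed by this entry), trades cost against accuracy. It is a minimal sketch assuming binary tasks, independent workers with a uniform accuracy p, and an illustrative per-assignment price; all names and parameter values are hypothetical, not taken from the entry.

    from math import comb

    def majority_vote_accuracy(p, n):
        # Probability that a majority of n independent workers, each
        # correct with probability p, returns the right answer to a
        # binary task. n is kept odd so that ties cannot occur.
        assert n % 2 == 1, "use an odd number of workers to avoid ties"
        k_min = n // 2 + 1  # smallest winning number of correct votes
        return sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
                   for k in range(k_min, n + 1))

    PRICE_PER_ASSIGNMENT = 0.02  # hypothetical: a few cents per task
    WORKER_ACCURACY = 0.7        # hypothetical uniform worker accuracy

    for n in (1, 3, 5, 7, 9):
        acc = majority_vote_accuracy(WORKER_ACCURACY, n)
        cost = n * PRICE_PER_ASSIGNMENT
        print(f"n={n}: cost=${cost:.2f}, expected accuracy={acc:.3f}")

Under these assumptions, each additional assignment raises the expected accuracy with diminishing returns (0.700, 0.784, 0.837 for n = 1, 3, 5) while the cost grows linearly, which is the essence of the cost-quality trade-off.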

Historical Background

Wikip...


Recommended Reading

  1. Ariely D, Gneezy U, Loewenstein G, Mazar N. Large stakes and big mistakes. Rev Econ Stud. 2009;76:451–69.
  2. Faradani S, Hartmann B, Ipeirotis PG. What's the right price? Pricing tasks for finishing on time. In: Proceedings of the 2011 AAAI conference on artificial intelligence; 2011. p. 26–31.
  3. Gneezy U, Rustichini A. Pay enough or don't pay at all. Q J Econ. 2000;115(3):791–810.
  4. Kazai G. An exploration of the influence that task parameters have on the performance of crowds. In: CrowdConf; 2010.
  5. Mason W, Watts DJ. Financial incentives and the performance of crowds. In: ACM SIGKDD workshop on human computation; 2009. p. 100–08.
  6. Nov O, Naaman M, Ye C. What drives content tagging: the case of photos on Flickr. In: CHI; 2008. p. 1097–1110.
  7. Snow R, O'Connor B, Jurafsky D, Ng AY. Cheap and fast – but is it good? Evaluating non-expert annotations for natural language tasks. In: Conference on empirical methods in natural language processing; 2008. p. 254–63.
  8. von Ahn L. Games with a purpose. Computer. 2006;39(6):92–4.
  9. Xie H, Lui JCS, Jiang JW, Chen W. Incentive mechanism and protocol design for crowdsourcing systems. In: Proceedings of the 2014 annual Allerton conference on communication, control, and computing. IEEE; 2014. p. 140–47.
  10. Xintong G, Hongzhi W, Song Y, Hong G. Brief survey of crowdsourcing for data mining. Expert Syst Appl. 2014;41:7987–94.

Copyright information

© Springer Science+Business Media LLC 2018

Authors and Affiliations

  1. The Hong Kong University of Science and Technology, Hong Kong, China

Section editors and affiliations

  • Reynold Cheng, Computer Science, The University of Hong Kong, Hong Kong, China