Risks and Rewards of Crowdsourcing Marketplaces

  • Jesse Chandler
  • Gabriele Paolacci
  • Pam Mueller

Abstract

Crowdsourcing has become an increasingly popular means of flexibly deploying large amounts of human computational power. The present chapter investigates the role of microtask labor marketplaces in managing human and hybrid human-machine computing. Labor marketplaces offer many advantages that, in combination, allow human intelligence to be allocated across projects rapidly and efficiently and information to be transmitted effectively between market participants. Human computation comes with a set of challenges distinct from those of machine computation, including increased unsystematic error (e.g., mistakes) and systematic error (e.g., cognitive biases), both of which can be exacerbated when motivation is low, incentives are misaligned, and task requirements are poorly communicated. We provide specific guidance on how to ameliorate these issues through task design, workforce selection, data cleaning, and aggregation.
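
As a minimal illustration of the aggregation step mentioned in the abstract, the Python sketch below shows how redundant labels collected from several workers can be reduced to a single consensus label per item by majority vote, which tends to average out unsystematic worker error. The aggregate_labels helper and the example data are hypothetical and not drawn from the chapter.

    from collections import Counter

    def aggregate_labels(worker_labels):
        """Return a majority-vote consensus label for each item.

        worker_labels maps an item id to the list of labels assigned by
        different workers; taking the most common label per item averages
        out unsystematic errors made by individual workers.
        """
        return {item: Counter(labels).most_common(1)[0][0]
                for item, labels in worker_labels.items()}

    # Three workers label two images; one worker mislabels "img_2".
    labels = {
        "img_1": ["cat", "cat", "cat"],
        "img_2": ["dog", "cat", "dog"],
    }
    print(aggregate_labels(labels))  # {'img_1': 'cat', 'img_2': 'dog'}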

Keywords

Work Ability · Multiple Choice Question · Piece Rate · Individual Requester · Online Marketplace

Copyright information

© Springer Science+Business Media New York 2013

Authors and Affiliations

  • Jesse Chandler (1)
  • Gabriele Paolacci (2)
  • Pam Mueller (3)
  1. University of Michigan/PRIME Research, Ann Arbor, USA
  2. Erasmus University Rotterdam, Rotterdam, Netherlands
  3. Princeton University, Princeton, USA
