Business & Information Systems Engineering

Volume 57, Issue 3, pp 167–179

How (not) to Incent Crowd Workers

Payment Schemes and Feedback in Crowdsourcing
  • Tim Straub
  • Henner Gimpel
  • Florian Teschner
  • Christof Weinhardt
Research Paper

Abstract

Crowdsourcing is gaining momentum: in digital workplaces such as Amazon Mechanical Turk, oDesk, Clickworker, 99designs, or InnoCentive, it is easy to distribute human work to hundreds or thousands of freelancers. In these crowdsourcing settings, one challenge is to properly incent worker effort to create value. Common incentive schemes are piece rate payments and rank-order tournaments among workers. Tournaments may or may not disclose a worker’s current competitive position via a leaderboard. Following an exploratory approach, we derive a model of worker performance in rank-order tournaments and present a series of real effort studies, using experimental techniques on an online labor market, to test the model and to compare dyadic tournaments to piece rate payments. The data suggest that, on average, dyadic tournaments do not improve performance over a simple piece rate for simple and short crowdsourcing tasks. Furthermore, giving feedback on the competitive position in such tournaments tends to be negatively related to workers’ performance. This relation is partially mediated by task completion and moderated by the strength of the competition: when playing against strong competitors, feedback is associated with workers quitting the task altogether and thus showing lower performance; when the competitors are weak, workers tend to complete the task but reduce their effort. Overall, individual piece rate payments are the simplest to communicate and implement, while the performance they incent is on par with that of more complex dyadic tournaments.
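To make the compared incentive schemes concrete, the following is a minimal formal sketch of per-task pay under a piece rate and under a dyadic rank-order tournament, in the spirit of Lazear and Rosen's tournament model; the symbols x_i, p, W, and L are illustrative assumptions, not notation taken from the article.

% Hedged sketch, not the paper's own formalization.
% x_i : output of worker i (e.g., correctly completed task units)
% p   : piece rate paid per unit of output
% W, L: winner and loser prizes in the dyadic tournament, with W > L
\[
  \pi_i^{\mathrm{piece}} = p \, x_i
  \qquad\qquad
  \pi_i^{\mathrm{tourn}} =
  \begin{cases}
    W & \text{if } x_i > x_j, \\
    L & \text{if } x_i \le x_j,
  \end{cases}
\]
% where j is the single competitor that worker i is paired with.
% Piece-rate pay rises with every unit of own output, whereas tournament
% pay depends only on outperforming j; a leaderboard corresponds to
% revealing the sign of x_i - x_j while the task is still running.

Under this reading, the studies vary (a) whether workers are paid the piece rate or the tournament prizes and (b) whether the leaderboard information is disclosed, the latter being the feedback manipulation discussed above.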

Keywords

Crowdsourcing · Online labor · Incentives · Exploratory study · Experimental techniques · Real effort task · Rank-order tournament · Piece rate · Feedback


Copyright information

© Springer Fachmedien Wiesbaden 2015

Authors and Affiliations

  • Tim Straub (1)
  • Henner Gimpel (2)
  • Florian Teschner (3)
  • Christof Weinhardt (1)

  1. Institute of Information Systems and Marketing, Karlsruhe Service Research Institute (KSRI), Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany
  2. Research Center Finance and Information Management, Project Group Business and Information Systems Engineering of Fraunhofer FIT, University of Augsburg, Augsburg, Germany
  3. Institute of Information Systems and Marketing, Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany
