What? How? Where? A Survey of Crowdsourcing

  • Xu Yin
  • Wenjie Liu
  • Yafang Wang
  • Chenglei Yang
  • Lin Lu
Conference paper
Part of the Lecture Notes in Electrical Engineering book series (LNEE, volume 269)


A crowdsourcing system recruits an undefined group of people to accomplish tasks proposed by a requester, quickly and at low cost. Crowdsourcing has made great contributions in many fields. Building a crowdsourcing system faces many challenges, such as how to motivate people, how to decompose and assign tasks, how to control quality, and how to aggregate contributions. In this paper, we explain what crowdsourcing is and propose three necessary characteristics as criteria for judging whether a system is a crowdsourcing system. Then, we introduce solutions to these challenges. Finally, we discuss where crowdsourcing can be applied and choose two fields to illustrate its usefulness.
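To make the aggregation challenge above concrete: a common baseline (not a method proposed in this paper) is to assign each task to several workers redundantly and take a majority vote over their answers. A minimal sketch, assuming labels are simple categorical answers:

```python
from collections import Counter

def majority_vote(labels):
    """Aggregate redundant worker answers for one task by majority vote.

    labels: list of answers submitted by different workers.
    Returns the most frequent answer (ties broken arbitrarily).
    """
    if not labels:
        raise ValueError("no labels to aggregate")
    return Counter(labels).most_common(1)[0][0]

# Example: three workers label the same image
print(majority_vote(["cat", "cat", "dog"]))  # prints: cat
```

Real systems refine this baseline, for example by weighting votes with per-worker reliability estimates or filtering workers with gold-standard questions.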


Keywords: Crowdsourcing · Survey



This work was partly supported by the National Natural Science Foundation of China (61272243, 61202146, 61003149) and the Shandong Provincial Natural Science Foundation, China (ZR2010FQ011, ZR2012FQ026).



Copyright information

© Springer Science+Business Media Dordrecht 2014

Authors and Affiliations

  • Xu Yin¹
  • Wenjie Liu¹
  • Yafang Wang¹
  • Chenglei Yang¹
  • Lin Lu¹
  1. School of Computer Science and Technology, Shandong University, Jinan, China
