
Paid Crowdsourcing, Low Income Contributors, and Subjectivity

  • Giannis Haralabopoulos
  • Christian Wagner
  • Derek McAuley
  • Ioannis Anagnostopoulos
Conference paper
Part of the IFIP Advances in Information and Communication Technology book series (IFIPAICT, volume 560)

Abstract

Scientific projects that require human computation often resort to crowdsourcing. Interested individuals complete a crowdsourcing task and thereby contribute towards the project's goals. To motivate participation and engagement, scientists use a variety of reward mechanisms. The most common motivation, and the one that yields the fastest results, is a monetary reward. By offering monetary rewards, scientists reach a wider audience of potential participants. Since the payment is typically below the minimum wage of developed economies, users from developing countries are more eager to participate. In subjective tasks, i.e. tasks whose answers cannot be validated as simply right or wrong, monetary incentives can conflict with the much-needed quality of submissions. We perform a subjective crowdsourcing task, emotion annotation, and compare the quality of answers from contributors of varying income levels, based on Gross Domestic Product (GDP). The results indicate a different contribution process between contributors from different GDP regions. Low-income contributors, possibly driven by the monetary incentive, submit low-quality answers at a higher pace, while high-income contributors provide diverse answers at a slower pace.

Keywords

Crowdsourcing · Demographics · Monetary rewards · Subjectivity


Copyright information

© IFIP International Federation for Information Processing 2019

Authors and Affiliations

  1. University of Nottingham, Nottingham, UK
  2. University of Thessaly, Volos, Greece
