
What You Sow, So Shall You Reap! Toward Preselection Mechanisms for Macrotask Crowdsourcing

  • Ujwal Gadiraju
  • Mengdie Zhuang
Chapter
Part of the Human–Computer Interaction Series (HCIS) book series

Abstract

Crowdsourcing marketplaces have flourished over the last decade, providing a new source of income for hundreds of thousands of people around the globe. Unlike microtasks, which are simple, rely on innate human intelligence, and offer small amounts of monetary compensation, the work available on freelancing platforms or in the form of macrotasks typically requires a skilled workforce and considerably more time to complete, but comes with relatively larger and commensurate rewards. Forming efficient collaborations among workers and finding experts are therefore crucial for ensuring the quality of macrotasks. Worker preselection can be used to ensure that desirable workers participate in the tasks available in crowdsourcing marketplaces. In this chapter, we describe two novel preselection mechanisms that have been shown to be effective in microtask crowdsourcing, and we discuss how these mechanisms can be applied to macrotasks.

Keywords

Crowdsourcing · Macrotasks · Microtasks · Preselection · Self-assessment · Behavior · Accuracy · Performance · Quality · Workers


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. L3S Research Center, Leibniz Universität Hannover, Hannover, Germany
  2. University College London, London, UK
