Results of a Survey About the Perceived Task Similarities in Micro Task Crowdsourcing Systems
Recommender mechanisms can support the assignment of jobs in crowdsourcing platforms. The use of recommendations can improve the quality of the outcome for both worker and requester. As a preceding study shows, workers expect to be recommended tasks similar to ones they have previously completed. Such similarities between tasks have to be identified and analyzed in order to create task recommendation systems that fulfil the workers’ requirements. However, how workers characterize task similarity was left open in that previous study. Therefore, this work provides an empirical study of how workers perceive similarities between tasks. Different similarity aspects (e.g., the complexity, the required action, or the requester of the task) are evaluated with respect to their usefulness, and the results are discussed. Worker characteristics such as age, experience, and country of origin are taken into account to determine how different worker groups judge the similarity aspects of tasks.
Keywords: Crowdsourcing · Recommender systems · User survey
This work is supported by the Deutsche Forschungsgemeinschaft (DFG) under Grants STE 866/9-2, RE 2593/3-2, in the project “Design und Bewertung neuer Mechanismen für Crowdsourcing”.