Results of a Survey About the Perceived Task Similarities in Micro Task Crowdsourcing Systems

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11406)


Recommender mechanisms can support the assignment of jobs in crowdsourcing platforms, and their use can improve the quality of the outcome for both workers and requesters. As a preceding study shows, workers expect to receive recommendations for tasks similar to those they have previously completed. To create task recommendation systems that fulfil the workers’ requirements, such similarities between tasks have to be identified and analyzed. However, how workers characterize task similarity was left open in the previous study. This work therefore provides an empirical study of how workers perceive the similarities between tasks. Different similarity aspects (e.g., the complexity, the required action, or the requester of a task) are evaluated with respect to their usefulness, and the results are discussed. Worker characteristics such as age, experience, and country of origin are taken into account to determine how different worker groups judge the similarity aspects of tasks.


Keywords: Crowdsourcing · Recommender systems · User survey



This work is supported by the Deutsche Forschungsgemeinschaft (DFG) under Grants STE 866/9-2, RE 2593/3-2, in the project “Design und Bewertung neuer Mechanismen für Crowdsourcing”.



Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Multimedia Communications Lab, Technische Universität Darmstadt, Darmstadt, Germany
