
Results of a Survey About the Perceived Task Similarities in Micro Task Crowdsourcing Systems

  • Conference paper
In: Behavioral Analytics in Social and Ubiquitous Environments (MUSE 2015, MSM 2015, MSM 2016)

Abstract

Recommender mechanisms can support the assignment of jobs in crowdsourcing platforms. Recommendations can improve the quality of the outcome for both workers and requesters. As a preceding study shows, workers expect to be recommended tasks that are similar to ones they have previously completed. To build task recommendation systems that meet these expectations, such similarities between tasks have to be identified and analyzed. How workers characterize task similarity was left open in the previous study. This work therefore provides an empirical study of how workers perceive similarities between tasks. Different similarity aspects (e.g., the complexity, the required action, or the requester of a task) are evaluated for their usefulness, and the results are discussed. Worker characteristics such as age, experience, and country of origin are taken into account to determine how different worker groups judge the similarity aspects of tasks.
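To make the notion of aspect-based task similarity concrete, the following is a minimal sketch (not from the paper) of how a recommender might combine per-aspect similarities — here requester, required action, and complexity, with illustrative weights — into a single score for ranking candidate tasks. All names, scales, and weights are assumptions for illustration only.

```python
# Hypothetical sketch: weighted combination of per-aspect task similarities,
# as a task recommender in a micro-task platform might compute it.
from dataclasses import dataclass


@dataclass
class Task:
    requester: str
    action: str        # e.g. "tagging", "survey", "transcription"
    complexity: int    # assumed ordinal scale: 1 (easy) .. 5 (hard)


def aspect_similarity(a: Task, b: Task) -> dict:
    """Per-aspect similarity scores, each in [0, 1]."""
    return {
        # categorical aspects: exact match or no match
        "requester": 1.0 if a.requester == b.requester else 0.0,
        "action": 1.0 if a.action == b.action else 0.0,
        # ordinal aspect: 1 minus the normalized distance on the 1..5 scale
        "complexity": 1.0 - abs(a.complexity - b.complexity) / 4.0,
    }


def task_similarity(a: Task, b: Task, weights: dict) -> float:
    """Weighted average of the per-aspect similarities."""
    sims = aspect_similarity(a, b)
    total = sum(weights.values())
    return sum(weights[k] * sims[k] for k in sims) / total


# Example: a previously completed task vs. a candidate from the same
# requester with the same action but higher complexity.
done = Task("acme", "tagging", 2)
candidate = Task("acme", "tagging", 4)
weights = {"requester": 1.0, "action": 2.0, "complexity": 1.0}
score = task_similarity(done, candidate, weights)  # -> 0.875
```

A survey like the one reported here would, in effect, inform which aspects belong in such a score and how heavily each should be weighted for a given worker group.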


Notes

  1. www.mturk.com.
  2. www.microworkers.com.
  3. http://www.kom.tu-darmstadt.de/~schnitze/files/msm_www16_survey.pdf.



Acknowledgements

This work is supported by the Deutsche Forschungsgemeinschaft (DFG) under Grants STE 866/9-2, RE 2593/3-2, in the project “Design und Bewertung neuer Mechanismen für Crowdsourcing”.

Author information

Correspondence to Steffen Schnitzer or Christoph Rensing.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Schnitzer, S., Neitzel, S., Schmidt, S., Rensing, C. (2019). Results of a Survey About the Perceived Task Similarities in Micro Task Crowdsourcing Systems. In: Atzmueller, M., Chin, A., Lemmerich, F., Trattner, C. (eds) Behavioral Analytics in Social and Ubiquitous Environments (MUSE 2015, MSM 2015, MSM 2016). Lecture Notes in Computer Science, vol 11406. Springer, Cham. https://doi.org/10.1007/978-3-030-34407-8_6


  • DOI: https://doi.org/10.1007/978-3-030-34407-8_6

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-33906-7

  • Online ISBN: 978-3-030-34407-8

  • eBook Packages: Computer Science (R0)
