Task Recommendation for Crowd Worker

Chapter in: Intelligent Crowdsourced Testing

Abstract

A wealth of prior literature has documented the gap in decision support between task requesters and workers on crowdsourcing platforms. On the one hand, many platforms allow requesters to review worker performance data and to set qualification criteria in order to control the quality of crowd submissions. On the other hand, workers are usually given very limited support during task selection and completion. In particular, most workers must manually browse a long list of open tasks before deciding which ones to sign up for. Manual task selection is not only time-consuming but also tends to be suboptimal, owing to subjective, ad hoc worker behavior.
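The manual selection problem described above is what an automated task recommender aims to replace. As a minimal illustrative sketch (not the chapter's actual method; the scoring function, field names, and weights below are hypothetical assumptions), ranking a worker's open tasks by a reward-weighted skill-match score might look like this:

# Hypothetical sketch of task recommendation for a crowd worker.
# All names and the scoring scheme are illustrative assumptions,
# not the method described in this chapter.
from dataclasses import dataclass
import math

@dataclass
class Task:
    task_id: str
    reward: float                # payment offered, in USD
    skills: dict[str, float]     # required skill -> required level (0..1)

def match_score(worker_skills: dict[str, float], task: Task) -> float:
    """Cosine similarity between the worker's skill vector and the
    task's requirement vector, scaled by the task's reward."""
    keys = set(worker_skills) | set(task.skills)
    dot = sum(worker_skills.get(k, 0.0) * task.skills.get(k, 0.0) for k in keys)
    norm_w = math.sqrt(sum(v * v for v in worker_skills.values()))
    norm_t = math.sqrt(sum(v * v for v in task.skills.values()))
    if norm_w == 0.0 or norm_t == 0.0:
        return 0.0
    return (dot / (norm_w * norm_t)) * task.reward

def recommend(worker_skills: dict[str, float], open_tasks: list[Task],
              top_k: int = 5) -> list[Task]:
    """Return the top-k open tasks for this worker, best match first."""
    return sorted(open_tasks,
                  key=lambda t: match_score(worker_skills, t),
                  reverse=True)[:top_k]

if __name__ == "__main__":
    worker = {"ui_testing": 0.9, "android": 0.7}
    tasks = [
        Task("t1", reward=2.0, skills={"ui_testing": 0.8, "android": 0.5}),
        Task("t2", reward=5.0, skills={"ios": 0.9}),
    ]
    for t in recommend(worker, tasks):
        print(t.task_id, round(match_score(worker, t), 3))

In practice, a recommender of this kind would learn the worker's skill vector from past submissions and outcomes rather than take it as given; the point of the sketch is only to show how automated ranking replaces manual browsing.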

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this chapter

Cite this chapter

Wang, Q., Chen, Z., Wang, J., Feng, Y. (2022). Task Recommendation for Crowd Worker. In: Intelligent Crowdsourced Testing. Springer, Singapore. https://doi.org/10.1007/978-981-16-9643-5_5

  • DOI: https://doi.org/10.1007/978-981-16-9643-5_5

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-16-9642-8

  • Online ISBN: 978-981-16-9643-5

