Abstract
A wealth of prior literature has shown the gap in decision support provided to task requesters and workers on crowdsourcing platforms. On the one hand, many platforms allow requesters to inspect worker performance data and to set qualification criteria in order to control the quality of crowd submissions. On the other hand, workers usually receive very limited support throughout the task selection and completion processes. In particular, most workers must manually browse through a long list of open tasks before deciding which ones to sign up for. Manual task selection is not only time-consuming but also tends to be suboptimal due to subjective, ad hoc worker behaviors.
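To make the motivation concrete, the following is a minimal, purely illustrative sketch of what worker-side decision support could look like: open tasks are ranked for a worker rather than browsed manually. The `Task` fields, the skill-overlap score weighted by reward, and all identifiers are assumptions for illustration only, not the recommendation approach developed in this chapter.

```python
# Illustrative sketch (hypothetical names, not the chapter's method):
# rank open tasks for a worker by skill overlap weighted by reward,
# so the worker does not have to scan the full task list manually.
from dataclasses import dataclass

@dataclass
class Task:
    task_id: str
    required_skills: set[str]
    reward: float  # hypothetical per-task reward

def rank_tasks(worker_skills: set[str], open_tasks: list[Task], top_k: int = 5) -> list[Task]:
    """Score each task by Jaccard skill overlap times reward; return the top_k tasks."""
    def score(task: Task) -> float:
        if not task.required_skills:
            return 0.0
        jaccard = len(worker_skills & task.required_skills) / len(worker_skills | task.required_skills)
        return jaccard * task.reward
    return sorted(open_tasks, key=score, reverse=True)[:top_k]

if __name__ == "__main__":
    tasks = [
        Task("t1", {"android", "ui-testing"}, reward=1.2),
        Task("t2", {"ios"}, reward=0.8),
        Task("t3", {"android", "api-testing"}, reward=1.0),
    ]
    for t in rank_tasks({"android", "ui-testing"}, tasks):
        print(t.task_id)
```

Even a simple ranking of this kind replaces ad hoc browsing with an ordered shortlist; the chapter itself develops a more principled recommendation approach.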