Quality Assessment of Crowdwork via Eye Gaze: Towards Adaptive Personalized Crowdsourcing

  • Conference paper
Human-Computer Interaction – INTERACT 2021 (INTERACT 2021)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 12933)

Abstract

A significant challenge in creating efficient and fair crowdsourcing platforms is the rapid assessment of crowdwork quality. When a crowdworker lacks the skill, motivation, or understanding to complete a task adequately, the efficacy of the platform suffers. While this may seem to be a problem only for task providers, in reality the burden is increasingly shifted onto crowdworkers: for example, task providers may decline to pay crowdworkers after evaluating the submitted results. In this paper, we propose methods for quickly evaluating the quality of crowdwork from eye gaze information by estimating the correct answer rate. We find that the method using features generated by self-supervised learning (SSL) performs best, with a mean absolute error of 0.09. These results demonstrate the potential of eye gaze information to facilitate adaptive, personalized crowdsourcing platforms.
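The pipeline the abstract describes — mapping gaze-derived feature vectors to an estimated correct answer rate and scoring the estimate by mean absolute error — can be illustrated with a minimal sketch. This is not the paper's method: the SSL feature extractor is not reproduced, the "gaze features" below are synthetic stand-ins, and a simple closed-form ridge regression takes the place of whatever model the authors trained.

```python
import numpy as np

# Illustrative sketch only. Synthetic stand-ins replace the paper's
# SSL-derived gaze features; a ridge regressor predicts each worker's
# correct answer rate, and we report mean absolute error (MAE).
rng = np.random.default_rng(0)

n_sessions, n_features = 200, 16
X = rng.normal(size=(n_sessions, n_features))  # stand-in gaze embeddings
w_true = rng.normal(size=n_features)

# Correct answer rate in [0, 1], loosely dependent on the features.
y = np.clip(0.5 + 0.05 * (X @ w_true) + rng.normal(scale=0.05, size=n_sessions),
            0.0, 1.0)

# Hold out 20% of sessions for evaluation.
split = int(0.8 * n_sessions)
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

# Closed-form ridge regression: w = (X^T X + lam * I)^{-1} X^T y
lam = 1.0
A = X_tr.T @ X_tr + lam * np.eye(n_features)
w = np.linalg.solve(A, X_tr.T @ y_tr)

# Predictions are clipped to the valid rate range before scoring.
pred = np.clip(X_te @ w, 0.0, 1.0)
mae = float(np.mean(np.abs(pred - y_te)))
print(f"test MAE: {mae:.3f}")
```

The MAE here is a property of the synthetic data, not of the paper's model; the sketch only shows the shape of the evaluation (per-session features in, a rate in [0, 1] out, absolute error averaged over held-out sessions).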



Acknowledgments

This work was supported in part by the JST CREST (Grant No. JPMJCR16E1), JSPS Grant-in-Aid for Scientific Research (20H04213, 20KK0235), Grand challenge of the iLDi, and OPU Keyproject.

Author information


Correspondence to Md. Rabiul Islam.


Copyright information

© 2021 IFIP International Federation for Information Processing

About this paper


Cite this paper

Islam, M.R., et al. (2021). Quality Assessment of Crowdwork via Eye Gaze: Towards Adaptive Personalized Crowdsourcing. In: Ardito, C., et al. (eds.) Human-Computer Interaction – INTERACT 2021. Lecture Notes in Computer Science, vol. 12933. Springer, Cham. https://doi.org/10.1007/978-3-030-85616-8_8

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-85615-1

  • Online ISBN: 978-3-030-85616-8

  • eBook Packages: Computer Science, Computer Science (R0)
