
Interplay of Game Incentives, Player Profiles and Task Difficulty in Games with a Purpose

  • Gloria Re Calegari
  • Irene Celino
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11313)

Abstract

How can multiple factors be taken into account when evaluating a Game with a Purpose (GWAP)? How are player behaviour and participation influenced by different incentives? How does player engagement affect accuracy in solving tasks? In this paper, we present a detailed investigation of multiple factors affecting the evaluation of a GWAP and we show how they impact the achieved results. We inform our study with the experimental assessment of a GWAP designed to solve a multinomial classification task.
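The paper does not detail its accuracy metrics or aggregation method in this abstract; as a purely illustrative sketch (all identifiers, labels, and data structures below are hypothetical, not taken from the paper), per-player accuracy in a multinomial classification GWAP is commonly estimated against gold-standard tasks, and task answers aggregated by plurality voting:

```python
from collections import Counter, defaultdict

# Hypothetical contribution records: (player_id, task_id, label),
# plus gold-standard labels for a subset of tasks (illustrative only).
contributions = [
    ("alice", "img_01", "city"),
    ("alice", "img_02", "stars"),
    ("bob",   "img_01", "city"),
    ("bob",   "img_02", "aurora"),
    ("carol", "img_01", "black"),
]
gold = {"img_01": "city", "img_02": "stars"}

def player_accuracy(contributions, gold):
    """Fraction of a player's answers on gold-standard tasks matching the gold label."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for player, task, label in contributions:
        if task in gold:
            total[player] += 1
            correct[player] += int(label == gold[task])
    return {p: correct[p] / total[p] for p in total}

def majority_label(contributions, task_id):
    """Aggregate a multinomial classification task by simple plurality voting."""
    votes = Counter(label for _, task, label in contributions if task == task_id)
    label, _ = votes.most_common(1)[0]
    return label

if __name__ == "__main__":
    print(player_accuracy(contributions, gold))    # {'alice': 1.0, 'bob': 0.5, 'carol': 0.0}
    print(majority_label(contributions, "img_01")) # 'city'
```

This is only a minimal baseline; the paper's own analysis relates such accuracy measures to incentives, player profiles, and task difficulty.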

Notes

Acknowledgments

This work is partially supported by the STARS4ALL project (H2020-688135), co-funded by the European Commission. We thank all the Night Knights players who contributed to the classification task solution and allowed us to perform this work.


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Cefriel – Politecnico di Milano, Milan, Italy
