Methods for Engaging and Evaluating Users of Human Computation Systems


Abstract

One of the most significant challenges facing human computation systems is motivating participation at the scale required to produce high-quality data. This chapter discusses methods for designing the task interface, motivating users and evaluating the system, using as an example Phrase Detectives, a game-with-a-purpose for collecting data on anaphoric co-reference in text.
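
To make the evaluation idea concrete: systems of this kind typically collect several independent judgements per item and aggregate them before measuring quality. The sketch below is a minimal illustration of majority-vote aggregation with a per-item agreement score; the function name, item identifiers and labels are hypothetical examples, not taken from Phrase Detectives or the chapter itself.

```python
from collections import Counter

def aggregate_annotations(annotations):
    """Aggregate multiple player judgements per item by majority vote.

    annotations: dict mapping item id -> list of labels from players.
    Returns dict mapping item id -> (winning label, agreement ratio),
    where the agreement ratio is the fraction of players who chose
    the winning label.
    """
    results = {}
    for item, labels in annotations.items():
        label, votes = Counter(labels).most_common(1)[0]
        results[item] = (label, votes / len(labels))
    return results

# Hypothetical example: three players each mark the antecedent of a pronoun.
votes = {
    "doc1-mention7": ["the parrot", "the parrot", "the shopkeeper"],
    "doc1-mention9": ["the parrot", "the parrot", "the parrot"],
}
for item, (label, agreement) in aggregate_annotations(votes).items():
    print(f"{item}: {label} (agreement {agreement:.2f})")
```

Items with low agreement can then be flagged for further judgements or expert review, which is one common way such systems trade participation volume against data quality.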

Copyright information

© Springer Science+Business Media New York 2013

Authors and Affiliations

  • Jon Chamberlain (1)
  • Udo Kruschwitz (1)
  • Massimo Poesio (1)

  1. University of Essex, Colchester, England
