Methods for Engaging and Evaluating Users of Human Computation Systems

Abstract

One of the most significant challenges facing Human Computation Systems is how to motivate participation at the scale required to produce high-quality data. This chapter discusses methods that can be used to design the task interface, motivate users and evaluate the system, using as an example Phrase Detectives, a game-with-a-purpose for collecting data on anaphoric coreference in text.


Notes

  1. http://www.wikipedia.org

  2. http://www.galaxyzoo.org

  3. http://openmind.media.mit.edu

  4. http://conceptnet.media.mit.edu

  5. http://www.google.com/recaptcha

  6. https://www.mturk.com

  7. http://www.phrasedetectives.com

  8. Anaphoric coreference is a type of linguistic reference where one expression depends on another referential element. An example would be the relation between the entity ‘Jon’ and the pronoun ‘his’ in the text ‘Jon rode his bike to school.’

  9. http://www.facebook.com

  10. http://www.usability.gov/guidelines

  11. http://www.google.co.uk/analytics

  12. http://mashable.com/2010/08/02/stats-time-spent-online

  13. http://www.appdata.com

  14. http://www.infosolutionsgroup.com/2010_PopCap_Social_Gaming_Research_Results.pdf

  15. http://www.lightspeedresearch.com/press-releases/it’s-game-on-for-facebook-users

  16. Since the initial development of PD, Facebook has changed how posts are displayed. Posts from the game now appear on the user’s profile and in a news ticker.
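Footnote 8 defines anaphoric coreference, and the abstract notes that the challenge is producing high-quality data from many players. As an illustration only (this is a hypothetical sketch, not Phrase Detectives' actual data model or aggregation method), a coreference judgment can be represented as a link from a pronoun to an earlier markable, with several players' answers combined by majority vote:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class Markable:
    """A mention span in the text, identified by character offsets."""
    start: int
    end: int
    surface: str

# The example text from footnote 8.
text = "Jon rode his bike to school."
jon = Markable(0, 3, "Jon")
his = Markable(9, 12, "his")

# Each player judges which earlier markable (if any) 'his' refers to;
# None stands for a "not anaphoric" answer. Judgments from several
# players are aggregated by simple majority vote.
player_judgments = [jon, jon, None, jon]

winner, votes = Counter(player_judgments).most_common(1)[0]
print(winner.surface if winner else "non-referring", votes)  # → Jon 3
```

In practice a game would weight votes by player reliability rather than counting them equally, but the majority-vote baseline shows the basic shape of turning many noisy judgments into one annotation.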


Acknowledgements

The original Phrase Detectives game was funded as part of the EPSRC AnaWiki project, EP/F00575X/1.

Author information

Correspondence to Jon Chamberlain.


Copyright information

© 2013 Springer Science+Business Media New York

About this chapter

Cite this chapter

Chamberlain, J., Kruschwitz, U., Poesio, M. (2013). Methods for Engaging and Evaluating Users of Human Computation Systems. In: Michelucci, P. (ed.) Handbook of Human Computation. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-8806-4_54

  • DOI: https://doi.org/10.1007/978-1-4614-8806-4_54

  • Publisher Name: Springer, New York, NY

  • Print ISBN: 978-1-4614-8805-7

  • Online ISBN: 978-1-4614-8806-4

  • eBook Packages: Computer Science (R0)
