Towards Crowdsourcing and Cooperation in Linguistic Resources

Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 505)

Abstract

Linguistic resources can be populated with data through approaches such as crowdsourcing and gamification when motivated people are involved. However, current crowdsourcing genre taxonomies lack the concept of cooperation, which is the principal element of modern video games and may potentially drive the annotators’ interest. This survey of crowdsourcing taxonomies and cooperation in linguistic resources provides recommendations on using cooperation in existing genres of crowdsourcing, and offers evidence of the efficiency of cooperation using a popular Russian linguistic resource created through crowdsourcing as an example.

Keywords

Games with a purpose · Mechanized labor · Wisdom of the crowd · Gamification · Crowdsourcing · Cooperation · Linguistic resources

Notes

Acknowledgments

This work is supported by the Russian Foundation for the Humanities, project no. 13-04-12020 “New Open Electronic Thesaurus for Russian”, and by the Program of the Government of the Russian Federation, no. 02.A03.21.0006 of 27.08.2013.

The author would like to thank Dmitry Granovsky for the extended statistical information collected from http://opencorpora.org/. The author is also grateful to the anonymous referees who offered very useful comments on the present paper.

References

  1. Biemann, C.: Creating a system for lexical substitutions from scratch using crowdsourcing. Lang. Resour. Eval. 47(1), 97–122 (2013)
  2. Sabou, M., Bontcheva, K., Derczynski, L., Scharl, A.: Corpus annotation through crowdsourcing: towards best practice guidelines. In: Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC 2014), European Language Resources Association (ELRA), pp. 859–866 (2014)
  3. Lofi, C., Selke, J., Balke, W.T.: Information extraction meets crowdsourcing: a promising couple. Datenbank-Spektrum 12(2), 109–120 (2012)
  4. Quinn, A.J., Bederson, B.B.: A Taxonomy of Distributed Human Computation. Human-Computer Interaction Lab Tech Report, University of Maryland (2009)
  5. Yuen, M.C., Chen, L.J., King, I.: A survey of human computation systems. In: International Conference on Computational Science and Engineering (CSE 2009), vol. 4, pp. 723–728. IEEE (2009)
  6. Sabou, M., Bontcheva, K., Scharl, A.: Crowdsourcing research opportunities: lessons from natural language processing. In: Proceedings of the 12th International Conference on Knowledge Management and Knowledge Technologies, pp. 17:1–17:8. ACM (2012)
  7. Sabou, M., Scharl, A., Michael, F.: Crowdsourced knowledge acquisition: towards hybrid-genre workflows. Int. J. Semant. Web Inf. Syst. 9(3), 14–41 (2013)
  8. Zwass, V.: Co-creation: toward a taxonomy and an integrated research perspective. Int. J. Electron. Commer. 15(1), 11–48 (2010)
  9. Erickson, T.: Some thoughts on a framework for crowdsourcing. In: CHI 2011 Workshop on Crowdsourcing and Human Computation (2011)
  10. Suendermann, D., Pieraccini, R.: Crowdsourcing for industrial spoken dialog systems. In: Eskénazi, M., Levow, G.A., Meng, H., Parent, G., Suendermann, D. (eds.) Crowdsourcing for Speech Processing: Applications to Data Collection, Transcription and Assessment, pp. 280–302. John Wiley & Sons, Ltd (2013)
  11. Wang, A., Hoang, C.D.V., Kan, M.Y.: Perspectives on crowdsourcing annotations for natural language processing. Lang. Resour. Eval. 47(1), 9–31 (2013)
  12. Kohn, A.: No Contest: The Case Against Competition. Houghton Mifflin Harcourt, New York (1992)
  13. Wilkinson, D.M., Huberman, B.A.: Cooperation and quality in Wikipedia. In: Proceedings of the 2007 International Symposium on Wikis, pp. 157–164. ACM (2007)
  14. Arazy, O., Nov, O.: Determinants of Wikipedia quality: the roles of global and local contribution inequality. In: Proceedings of the 2010 ACM Conference on Computer Supported Cooperative Work, pp. 233–236. ACM (2010)
  15. Budzise-Weaver, T., Chen, J., Mitchell, M.: Collaboration and crowdsourcing: the cases of multilingual digital libraries. Electron. Libr. 30(2), 220–232 (2012)
  16. Ranj Bar, A., Maheswaran, M.: Case study: integrity of Wikipedia articles. In: Confidentiality and Integrity in Crowdsourcing Systems. SpringerBriefs in Applied Sciences and Technology, pp. 59–66. Springer International Publishing (2014)
  17. Poesio, M., Chamberlain, J., Kruschwitz, U., Robaldo, L., Ducceschi, L.: Phrase Detectives: utilizing collective intelligence for internet-scale language resource creation. ACM Trans. Interact. Intell. Syst. 3(1), 3:1–3:44 (2013)
  18. Yasseri, T., Sumi, R., Rung, A., Kornai, A., Kertész, J.: Dynamics of conflicts in Wikipedia. PLOS ONE 7(6), e38869 (2012)
  19. Braslavski, P., Ustalov, D., Mukhin, M.: A spinning wheel for YARN: user interface for a crowdsourced thesaurus. In: Proceedings of the Demonstrations at the 14th Conference of the European Chapter of the Association for Computational Linguistics, Association for Computational Linguistics, pp. 101–104 (2014)
  20. Bocharov, V., Alexeeva, S., Granovsky, D., Protopopova, E., Stepanova, M., Surikov, A.: Crowdsourcing morphological annotation. In: Computational Linguistics and Intellectual Technologies: Papers from the Annual Conference “Dialogue”, RGGU, pp. 109–124 (2013)

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. Krasovsky Institute of Mathematics and Mechanics, Ekaterinburg, Russia
  2. Ural Federal University, Ekaterinburg, Russia
  3. NLPub, Ekaterinburg, Russia
