Machine-Crowd Annotation Workflow for Event Understanding Across Collections and Domains

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9678)

Abstract

People need context to process the massive amount of information available online. Context is often provided by the specific event being reported. The multitude of data streams that mention events produces a vast amount of redundancy and a diversity of perspectives. This poses challenges both to humans, who must reduce the information overload and consume only the meaningful information, and to machines, which must generate concise overviews of the events. For machines to generate such overviews, they need to be taught to understand events. The goal of this research project is to investigate whether combining machine output with crowd perspectives boosts the event understanding of state-of-the-art natural language processing tools and improves their event detection. To answer this question, we propose an end-to-end research methodology covering machine processing, the definition of experimental data and setup, the gathering of event semantics, and the evaluation of results. We present preliminary results that indicate crowdsourcing is a reliable approach for (1) linking events and their related entities in cultural heritage collections and (2) identifying salient event features (i.e., relevant mentions and sentiments) in online data. We also provide an evaluation plan for the overall research methodology of crowdsourcing event semantics across modalities and domains.
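
The abstract describes a workflow in which machine-extracted event candidates are refined with crowd perspectives. As a rough illustration only, the Python sketch below shows one way such a combination step could look: crowd votes on candidate event mentions are turned into agreement scores and used to accept, reject, or flag machine output for further annotation. The data structures, thresholds, and scoring rule are illustrative assumptions and do not reproduce the paper's actual CrowdTruth-based metrics.

```python
# Minimal sketch (not the authors' implementation) of one machine-crowd step:
# candidate event mentions from an NLP pipeline are scored with crowd
# judgments; low-agreement mentions are flagged instead of discarded.
from collections import defaultdict
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class CrowdJudgment:
    worker_id: str
    mention: str       # candidate event mention shown to the worker
    is_event: bool     # worker's judgment: does the span denote an event?


def mention_relevance(judgments: List[CrowdJudgment]) -> Dict[str, float]:
    """Fraction of workers who judged each candidate mention as an event."""
    votes: Dict[str, List[bool]] = defaultdict(list)
    for j in judgments:
        votes[j.mention].append(j.is_event)
    return {m: sum(v) / len(v) for m, v in votes.items()}


def combine(machine_mentions: List[str],
            crowd_scores: Dict[str, float],
            accept: float = 0.7,
            review: float = 0.3) -> Dict[str, str]:
    """Label each machine-extracted mention using crowd agreement (illustrative thresholds)."""
    labels = {}
    for m in machine_mentions:
        score = crowd_scores.get(m, 0.0)
        if score >= accept:
            labels[m] = "event"
        elif score >= review:
            labels[m] = "ambiguous"   # disagreement: keep for further annotation
        else:
            labels[m] = "not-event"
    return labels


if __name__ == "__main__":
    judgments = [
        CrowdJudgment("w1", "opening of the Rijksmuseum", True),
        CrowdJudgment("w2", "opening of the Rijksmuseum", True),
        CrowdJudgment("w3", "opening of the Rijksmuseum", False),
        CrowdJudgment("w1", "the Rijksmuseum", False),
        CrowdJudgment("w2", "the Rijksmuseum", False),
    ]
    machine_mentions = ["opening of the Rijksmuseum", "the Rijksmuseum"]
    print(combine(machine_mentions, mention_relevance(judgments)))
```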

Keywords

Crowdsourcing · Event extraction · Machine-human computation · Information extraction · Event semantics annotation

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  1. Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
  2. IBM Center for Advanced Studies Benelux, Amsterdam, The Netherlands
