Abstract
In this paper we present datasets of Facebook comment threads on mainstream media posts in Slovene and English, developed within the Slovene national project FRENK (an acronym for "FRENK - Raziskave Elektronske Nespodobne Komunikacije", English: "Research on Electronic Inappropriate Communication"). The datasets cover two topics, migrants and LGBT, and are manually annotated for different types of socially unacceptable discourse (SUD). The main advantages of these datasets over existing ones are an identical sampling procedure, which produces comparable data across languages, and an annotation schema that distinguishes six types of SUD and five targets at which SUD is directed. We describe the sampling and annotation procedures, and analyze the annotation distributions and inter-annotator agreements. We consider these datasets an important milestone in understanding and combating SUD in both languages.
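The inter-annotator agreement analysis mentioned above can be illustrated with a minimal sketch. The snippet below computes raw observed agreement and Cohen's kappa for two annotators over nominal labels; the label names and annotations are hypothetical placeholders for illustration only, not the actual FRENK schema or data, and Cohen's kappa is just one of several chance-corrected agreement measures used in such analyses.

```python
from collections import Counter

def cohens_kappa(ann1, ann2):
    """Chance-corrected agreement between two annotators on nominal labels."""
    assert len(ann1) == len(ann2) and ann1
    n = len(ann1)
    # Observed agreement: fraction of items both annotators labelled identically.
    p_o = sum(a == b for a, b in zip(ann1, ann2)) / n
    # Expected chance agreement, from each annotator's label distribution.
    c1, c2 = Counter(ann1), Counter(ann2)
    p_e = sum(c1[lbl] / n * c2[lbl] / n for lbl in c1.keys() & c2.keys())
    return (p_o - p_e) / (1 - p_e)

# Hypothetical annotations of six comments (labels are illustrative only).
a1 = ["none", "none", "violence", "offensive", "none", "violence"]
a2 = ["none", "offensive", "violence", "offensive", "none", "none"]
print(round(cohens_kappa(a1, a2), 3))  # → 0.478
```

For more than two annotators, or annotations with missing values, Krippendorff's alpha (as described in the content-analysis literature) generalizes this idea; the two-annotator kappa shown here is simply the easiest case to follow.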
Notes
- 10. While in this paper we describe the annotation results for Slovene and English only, an annotation campaign over Croatian data is already under way, and plans exist to annotate Dutch and French data as well.
- 19. To use this service from May 2018 onwards, users have to pass a screening process that would quite likely not succeed for harvesting purposes; however, our collection was performed in October 2017, before this restrictive change in policy.
- 21. As always, these results have to be taken with caution and not as final, as other factors might have produced this difference: (1) in Slovenia, the referendum on same-sex marriage was held during the period covered by these Facebook posts, and (2) most of the LGBT-related content comes from Nova24TV, which is, as already mentioned, a medium on the right side of the political spectrum. The latter demonstrably has an impact: this source has socially acceptable comments in only 42% of cases, while the other two sources have 57% and 62% non-SUD comments on this topic. Both of the other sources still have, however, a higher percentage of SUD comments than the English average.
Acknowledgement
The work described in this paper was funded by the Slovenian Research Agency within the national basic research project “Resources, methods and tools for the understanding, identification and classification of various forms of socially unacceptable discourse in the information society” (J7-8280, 2017–2020) and the Slovenian-Flemish bilateral basic research project “Linguistic landscape of hate speech on social media” (N06-0099, 2019–2023).
© 2019 Springer Nature Switzerland AG
Cite this paper
Ljubešić, N., Fišer, D., Erjavec, T. (2019). The FRENK Datasets of Socially Unacceptable Discourse in Slovene and English. In: Ekštein, K. (ed.) Text, Speech, and Dialogue. TSD 2019. Lecture Notes in Computer Science, vol. 11697. Springer, Cham. https://doi.org/10.1007/978-3-030-27947-9_9
Print ISBN: 978-3-030-27946-2
Online ISBN: 978-3-030-27947-9