Generating Cross-Domain Text Classification Corpora from Social Media Comments

  • Benjamin Murauer
  • Günther Specht
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11696)

Abstract

In natural language processing (NLP), cross-domain text classification problems such as cross-topic, cross-genre, or cross-language authorship attribution are characterized by differing contexts for training and testing data. That is, learning algorithms trained on the specific properties of the training data must make predictions on test data with substantially different properties. To date, the corpora used for analyses of cross-domain problems have been limited in size and variation, reducing the expressive power and generalizability of the proposed solutions. In this paper, we present a methodological framework and toolset for dynamically creating cross-domain datasets from millions of Reddit comments. We show that different types of cross-domain datasets, such as cross-topic or cross-lingual corpora, can be constructed, and we demonstrate a wide variety of use cases, including previously unfeasible analyses like cross-lingual authorship attribution on original, non-translated texts. Using state-of-the-art authorship attribution methods, we show the potential of a cross-topic corpus generated by our framework in comparison to the corpora used in related approaches, enabling research previously limited by corpus availability.
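The cross-topic construction described above can be illustrated with a minimal sketch. The idea, as the abstract states it, is to treat each subreddit as a topic and to split an author's comments so that training and test texts come from different subreddits. The field names (`author`, `subreddit`, `body`) follow the public Reddit comment dump schema; the function name and its interface are hypothetical and not taken from the paper's toolset.

```python
# Hypothetical sketch: build a cross-topic authorship corpus from Reddit
# comments by using subreddits as topics. Training and test data for each
# author are drawn from two *different* subreddits, so topic cues cannot
# leak between splits.
from collections import defaultdict

def cross_topic_split(comments, train_topic, test_topic, min_per_topic=1):
    """Return (train, test) lists of (author, text) pairs. Only authors
    with at least `min_per_topic` comments in both subreddits are kept,
    so every author appears in both splits."""
    by_author_topic = defaultdict(lambda: defaultdict(list))
    for c in comments:
        by_author_topic[c["author"]][c["subreddit"]].append(c["body"])

    train, test = [], []
    for author, topics in by_author_topic.items():
        if (len(topics.get(train_topic, [])) >= min_per_topic
                and len(topics.get(test_topic, [])) >= min_per_topic):
            train += [(author, text) for text in topics[train_topic]]
            test += [(author, text) for text in topics[test_topic]]
    return train, test

# Toy example: "carol" writes in only one subreddit and is dropped.
comments = [
    {"author": "alice", "subreddit": "askscience", "body": "a1"},
    {"author": "alice", "subreddit": "movies", "body": "a2"},
    {"author": "bob", "subreddit": "askscience", "body": "b1"},
    {"author": "bob", "subreddit": "movies", "body": "b2"},
    {"author": "carol", "subreddit": "askscience", "body": "c1"},
]
train, test = cross_topic_split(comments, "askscience", "movies")
```

The same grouping generalizes to the other dataset types the paper mentions: substituting a per-comment language field for `subreddit` would yield a cross-lingual split on original, non-translated texts.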

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Universität Innsbruck, Innsbruck, Austria
