Effect of the Text Size on Stylometry—Application on Arabic Religious Texts
In stylometry, two important technical questions arise: first, does text size affect authorship attribution performance? And second, what effect does the language have on that attribution? To address these questions, we conducted several authorship attribution experiments on text documents of varying sizes, ranging from 100 to 3000 words per document. For that purpose, a dedicated Arabic dataset (the A4P corpus) was built. The corpus is made available to the scientific community and is well suited to stylometry, since its genre and theme are fairly homogeneous. Two types of features are investigated, character n-grams and words, in association with several classifiers, namely: SVM, MLP, linear regression, the Stamatatos distance, and the Manhattan distance. Two scores are proposed for the experiments: the "Score of Good Attribution" and the "Robustness against Size Reduction" ratio. The results are quite interesting, showing that the minimum text size required for a fair authorship attribution depends on the feature and classification method employed. For the evaluation task, a specific authorship attribution application was conducted on seven religious books, where the main purpose was to check whether the Quran and the Hadith could have the same author. The results clearly show that these two books should have two different authors.
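The distance-based classifiers mentioned above can be illustrated with a minimal sketch of character n-gram attribution. This is not the authors' exact pipeline; it only assumes the standard approach of comparing relative-frequency profiles of character n-grams with a Manhattan (L1) distance, assigning an unknown text to the nearest candidate author:

```python
from collections import Counter

def char_ngrams(text, n=3):
    # Overlapping character n-grams of the text.
    return [text[i:i + n] for i in range(len(text) - n + 1)]

def profile(text, n=3):
    # Relative-frequency profile of character n-grams.
    counts = Counter(char_ngrams(text, n))
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

def manhattan(p, q):
    # Manhattan (L1) distance over the union of n-gram keys.
    keys = set(p) | set(q)
    return sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

def attribute(unknown, candidates, n=3):
    # Assign the unknown text to the candidate author whose
    # training profile is nearest in Manhattan distance.
    u = profile(unknown, n)
    return min(candidates,
               key=lambda author: manhattan(u, profile(candidates[author], n)))
```

With short texts the profiles become sparse, which is precisely why attribution accuracy degrades as document size shrinks: two samples by the same author may share few n-grams at all.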
Keywords: Natural language processing · Authorship attribution · Stylometry · Performance versus size · Classifiers · Arabic language
We warmly thank the research team of Dr. Juola and the Al-Waraq library.