Text Punctuation: An Inter-annotator Agreement Study
Spoken language is a phenomenon that is hard to annotate accurately. One of the most ambiguous tasks is inserting punctuation marks into a spoken language transcription. The punctuation marks used often depend on how the annotators understand the content of the transcription, and this understanding may differ because spoken language often lacks the clear structure inherent to written language, owing to the spontaneity of utterances or to skipping between ideas.
We therefore suspect that inserting commas into a spoken language transcription is a highly ambiguous task with low inter-annotator agreement (IAA). Low IAA also means that using Gold Truth (GT) annotations to evaluate automatic algorithms is questionable, as already discussed in [7, 8].
In this paper we analyze the IAA within a group of annotators and propose methods to increase it. We also propose and evaluate a reformulation of the classical GT annotations for cases where multiple annotations are available.
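As background for the IAA analysis described above, the sketch below computes Cohen's kappa, a standard chance-corrected agreement measure, for two annotators deciding whether a comma follows each token. This is an illustrative example only; the paper does not specify which agreement coefficient it uses, and the annotation vectors here are invented toy data.

```python
def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators over binary comma slots.

    Each element is 1 if the annotator placed a comma after the
    corresponding token, 0 otherwise.
    """
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of slots where both annotators agree.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement from each annotator's marginal comma rate.
    p_a = sum(labels_a) / n
    p_b = sum(labels_b) / n
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (observed - expected) / (1 - expected)

# Hypothetical annotations for an 8-token utterance.
ann1 = [0, 1, 0, 0, 1, 0, 1, 0]
ann2 = [0, 1, 0, 1, 1, 0, 0, 0]
print(round(cohens_kappa(ann1, ann2), 3))  # → 0.467
```

Kappa values near 0 indicate chance-level agreement, which is the scenario the paper argues makes single-annotator GT evaluation questionable.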
Keywords: Comma adding · Spoken language · Inter-annotator agreement
We are very grateful to the students who did the annotation work. This work was supported by the Student's Grant Scheme at the Technical University of Liberec (SGS 2016), by the Ministry of Education of the Czech Republic within the LINDAT-Clarin project LM2015071, and by the Grant Agency of the Czech Republic within the project 15-13277S.
- 1. Boháč, M., Blavka, K., Kuchařová, M., Škodová, S.: Post-processing of the recognized speech for web presentation of large audio archive. In: 2012 35th International Conference on Telecommunications and Signal Processing (TSP), pp. 441–445, July 2012
- 2. Boháč, M., Nouza, J., Blavka, K.: Investigation on most frequent errors in large-scale speech recognition applications. In: Sojka, P., Horák, A., Kopeček, I., Pala, K. (eds.) TSD 2012. LNCS, vol. 7499, pp. 520–527. Springer, Heidelberg (2012). doi:10.1007/978-3-642-32790-2_63
- 3. Kolář, J., Švec, J., Psutka, J.: Automatic punctuation annotation in Czech broadcast news speech. In: 9th Conference Speech and Computer (2004)
- 5. Kovář, V., Horák, A., Jakubíček, M.: Syntactic analysis as pattern matching: the SET parsing system. In: Proceedings of 4th Language and Technology Conference, Wydawnictwo Poznańskie, Poznań, Poland, pp. 978–983 (2009)
- 7. Kovář, V.: Evaluating natural language processing tasks with low inter-annotator agreement: the case of corpus applications. In: Recent Advances in Slavonic Natural Language Processing, RASLAN 2016, pp. 127–134 (2016)
- 8. Kovář, V., Jakubíček, M., Horák, A.: On evaluation of natural language processing tasks - is gold standard evaluation methodology a good solution? In: Proceedings of the ICAART 2016, vol. 2, pp. 540–545. SCITEPRESS (2016)
- 9. Mihajlik, P., Fegyó, T., Németh, B., Tüske, Z., Trón, V.: Towards automatic transcription of large spoken archives in agglutinating languages – Hungarian ASR for the MALACH Project. In: Matoušek, V., Mautner, P. (eds.) TSD 2007. LNCS, vol. 4629, pp. 342–349. Springer, Heidelberg (2007). doi:10.1007/978-3-540-74628-7_45
- 10. Nouza, J., Červa, P., Ždánský, J., et al.: Speech-to-text technology to transcribe and disclose 100,000+ hours of bilingual documents from historical Czech and Czechoslovak radio archive. In: INTERSPEECH 2014, pp. 964–968 (2014)
- 11. Petkevič, V.: Kontrola české gramatiky (Czech grammar checker). Studie z aplikované lingvistiky - Studies in Applied Linguistics 5(2), 48–66 (2014)