Interaction of Information Content and Frequency as Predictors of Verbs’ Lengths

  • Michael Richter
  • Yuki Kyogoku
  • Max Kölbl
Conference paper
Part of the Lecture Notes in Business Information Processing book series (LNBIP, volume 353)

Abstract

The topic of this paper is the interaction of Average Information Content (IC) and frequency of aspect-coded verbs as predictors of the verbs’ lengths in linear mixed-effects models. For the 30 languages under investigation, IC and frequency do not have a simultaneous positive impact on the length of verb forms: the effect of IC is high when the effect of frequency is low, and vice versa. This is an indication of Uniform Information Density [13, 14, 15, 16]. In addition, the predictors IC and frequency yield high correlations between predicted and actual verb lengths.
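The modelling setup can be illustrated with a small sketch. The paper presumably relies on lme4 in R [5]; the Python/statsmodels version below is only an illustrative assumption, not the authors’ code: verb length is regressed on IC and log frequency as fixed effects, with a random intercept per language, and the fit is evaluated by correlating predicted with actual lengths. All column names (length, ic, log_freq, language) and the simulated data are hypothetical.

```python
# Hypothetical sketch of the kind of model described in the abstract:
# a linear mixed-effects model of verb length with IC and log frequency
# as fixed effects and a random intercept per language. The toy data and
# column names are assumptions for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
data = pd.DataFrame({
    "language": rng.choice(["de", "ru", "fi"], size=n),   # grouping factor
    "ic": rng.gamma(shape=2.0, scale=1.5, size=n),        # average information content (bits)
    "log_freq": rng.normal(loc=8.0, scale=2.0, size=n),   # log corpus frequency
})
# Simulate lengths loosely following the hypothesised pattern:
# longer forms for higher IC, shorter forms for higher frequency.
data["length"] = (4 + 0.8 * data["ic"] - 0.3 * data["log_freq"]
                  + rng.normal(scale=1.0, size=n)).clip(lower=1)

# Random intercept per language; IC and frequency as fixed effects.
model = smf.mixedlm("length ~ ic + log_freq", data, groups=data["language"])
result = model.fit()
print(result.summary())

# Correlation between predicted and actual lengths, analogous to the
# evaluation mentioned in the abstract.
print(np.corrcoef(result.fittedvalues, data["length"])[0, 1])
```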

Keywords

Information Content · Frequency · Linear mixed models · Economy in interactions · Aspect

Acknowledgments

This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – project number: 357550571.

References

  1. Nivre, J., Agić, Z., Ahrenberg, L., et al.: Universal Dependencies 2.0 – CoNLL 2017 shared task development and test data. LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics (ÚFAL) (2017)
  2. Cohen Priva, U.: Using information content to predict phone deletion. In: Proceedings of the 27th West Coast Conference on Formal Linguistics, pp. 90–98. Cascadilla Proceedings Project (2008)
  3. Piantadosi, S.T., Tily, H., Gibson, E.: Word lengths are optimized for efficient communication. PNAS 108(9), 3526–3529 (2011)
  4. Zipf, G.: The Psycho-Biology of Language. Houghton Mifflin, Boston (1935)
  5. Bates, D., Mächler, M., Bolker, B., Walker, S.: Fitting linear mixed-effects models using lme4. J. Stat. Softw. 67(1), 1–48 (2015)
  6. Greenberg, J.H.: Language Universals With Special Reference to Feature Hierarchies. Mouton, The Hague (1966)
  7. Croft, W.: Verbs: Aspect and Causal Structure. Oxford University Press, Oxford (2012)
  8. Haspelmath, M.: Creating economical patterns in language change. In: Good, J. (ed.) Semantics and Contextual Expressions, pp. 185–214. Oxford University Press (2008)
  9. Haspelmath, M., Calude, A., Spagnol, M., Narrog, H., Bamyaci, E.: Coding causal noncausal verb alternations: a form–frequency correspondence explanation. J. Linguist. 50(3), 587–625 (2014)
  10. Levshina, N.: Communicative efficiency and syntactic predictability: a cross-linguistic study based on the Universal Dependencies corpora. In: Proceedings of the NoDaLiDa 2017 Workshop on Universal Dependencies (UDW 2017) (2017)
  11. Celano, G.A., Richter, M., Voll, R., Heyer, G.: Aspect coding asymmetries of verbs: the case of Russian. In: Barbaresi, A., Biber, H., Neubarth, F., Osswald, R. (eds.) KONVENS 2018. Proceedings of the 14th Conference on Natural Language Processing, pp. 34–39 (2018)
  12. Shannon, C.E., Weaver, W.: A mathematical theory of communication. Bell Syst. Tech. J. 27, 623–656 (1948)
  13. Jaeger, T.F.: Redundancy and reduction: speakers manage syntactic information density. Cogn. Psychol. 61(1), 23–62 (2010). https://doi.org/10.1016/j.cogpsych.2010.02.002
  14. Levy, R., Jaeger, T.F.: Speakers optimize information density through syntactic reduction. In: Proceedings of the 20th Conference on Neural Information Processing Systems (NIPS) (2007)
  15. Aylett, M., Turk, A.: The smooth signal redundancy hypothesis: a functional explanation for relationships between redundancy, prosodic prominence, and duration in spontaneous speech. Lang. Speech 47(1), 31–56 (2004)
  16. Genzel, D., Charniak, E.: Entropy rate constancy in text. In: Proceedings of ACL, pp. 199–206 (2002)
  17. Crocker, M.W., Demberg, V., Teich, E.: Information density and linguistic encoding (IDeaL). KI Künstliche Intelligenz 30(1), 77–81 (2016)
  18. Agrawal, A., Agarwal, S., Husain, S.: Role of expectation and working memory constraints in Hindi comprehension: an eyetracking corpus analysis. J. Eye Mov. Res. 10(2), 1–15 (2017)
  19. Hale, J.: A probabilistic Earley parser as a psycholinguistic model. In: Proceedings of NAACL (2001)
  20. Altmann, G., Kamide, Y.: Incremental interpretation at verbs: restricting the domain of subsequent reference. Cognition 73(3), 247–264 (1999)
  21. Levy, R.: Expectation-based syntactic comprehension. Cognition 106(3), 1126–1177 (2008)
  22. Velupillai, V.: Zero Coding in Tense-Aspect Systems of Creole Languages. John Benjamins, Amsterdam (2012)
  23. Vendler, Z.: Linguistics in Philosophy. Cornell University Press, Ithaca (1967)
  24. Bohnemeyer, J., Swift, M.: Event realization and default aspect. Linguist. Philos. 27(3), 263–296 (2004)
  25. Levy, R.: Memory and surprisal in human sentence comprehension. In: van Gompel, R.P.G. (ed.) Sentence Processing, pp. 78–114. Psychology Press, Hove (2013)
  26. Collins, M.X.: Information density and dependency length as complementary cognitive models. J. Psycholinguist. Res. 43(5), 651–681 (2014)

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Natural Language Processing Group, Universität Leipzig, Leipzig, Germany