
A Deep Learning Approach to Solving Morphological Analogies

  • Conference paper
Case-Based Reasoning Research and Development (ICCBR 2022)

Abstract

Analogical proportions are statements of the form “A is to B as C is to D”. They support analogical inference and provide a logical framework to address learning, transfer, and explainability concerns. This logical framework finds useful applications in AI and natural language processing (NLP). In this paper, we address the problem of solving morphological analogies using a retrieval approach named ANNr. Our deep learning framework encodes structural properties of analogical proportions and relies on a specifically designed embedding model that captures the morphological characteristics of words. We demonstrate that ANNr outperforms the state of the art on 11 languages. We analyze the results of ANNr on Navajo and Georgian, the languages on which the model performs worst and best, respectively, to explore potential correlations between the mistakes of ANNr and linguistic properties.
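
To give a concrete picture of the retrieval setting described in the abstract, the following Python (PyTorch) sketch illustrates the general idea rather than the authors' exact architecture: a regression network maps the embeddings of A, B, and C to a predicted embedding of D, and the solution is retrieved as the vocabulary word whose embedding is closest, here under Euclidean distance (cf. note 3 below). The names AnalogyRegressor and solve_by_retrieval, and the network layout, are illustrative assumptions, not taken from the paper.

    import torch
    import torch.nn as nn

    class AnalogyRegressor(nn.Module):
        # Illustrative stand-in for the analogy-solving network:
        # maps (e_A, e_B, e_C) to a predicted embedding e_D_hat.
        def __init__(self, dim):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(3 * dim, dim), nn.ReLU(), nn.Linear(dim, dim)
            )

        def forward(self, e_a, e_b, e_c):
            return self.net(torch.cat([e_a, e_b, e_c], dim=-1))

    def solve_by_retrieval(model, e_a, e_b, e_c, vocab_emb, vocab_words):
        # Predict e_D, then return the word whose embedding is nearest (Euclidean).
        with torch.no_grad():
            e_d_hat = model(e_a, e_b, e_c)
            dists = torch.cdist(e_d_hat.unsqueeze(0), vocab_emb).squeeze(0)
            return vocab_words[dists.argmin().item()]

In such a setting the regression network would typically be trained by minimizing \(\text{MSE}(e_D, \widehat{e_D})\) between target and predicted embeddings; note 2 below explains why a low MSE alone does not guarantee good retrieval.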

This research was partially supported by TAILOR, a project funded by the EU Horizon 2020 research and innovation program under GA No. 952215, and by the Inria Project Lab “Hybrid Approaches for Interpretable AI” (HyAIAI).


Notes

  1. Except in this paragraph, we use retrieval in its general meaning throughout this article, not in the sense it has in CBR.

  2. An example of embedding space collapse: moving all the embeddings into a smaller area of the embedding space by multiplying them by \(10^{-5}\) minimizes \(\text{MSE}(e_D, \widehat{e_D})\) but does not improve retrieval performance, as the relative distances between embeddings do not change (a small numerical illustration follows these notes).

  3. We experimented with both Euclidean and cosine distance; the former gave slightly better results in most cases, though the difference is not significant.

  4. https://github.com/EMarquer/nn-morpho-analogy-iccbr.

  5. The samples were randomly selected using a fixed random seed.

  6. By freezing we mean that the parameters of the model are not updated.

  7. By convergence we mean that there is no improvement in the development set loss.
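
As a concrete illustration of the collapse phenomenon in note 2, the short Python (PyTorch) sketch below uses made-up embeddings (the vocabulary size, dimension, and noise level are arbitrary, not taken from the paper). Uniformly rescaling the whole embedding space by a small factor drastically lowers the MSE between a predicted and a target embedding, yet Euclidean nearest-neighbour retrieval returns the same word, since the relative ordering of distances is unchanged.

    import torch

    torch.manual_seed(0)
    vocab = torch.randn(1000, 64)             # hypothetical word embeddings
    target = vocab[42]                        # true e_D
    pred = target + 0.1 * torch.randn(64)     # imperfect prediction e_D_hat

    for scale in (1.0, 1e-5):
        v, t, p = vocab * scale, target * scale, pred * scale
        mse = torch.mean((p - t) ** 2)
        retrieved = torch.cdist(p.unsqueeze(0), v).argmin().item()
        print(f"scale={scale:g}  MSE={mse:.3e}  retrieved index={retrieved}")
    # The MSE shrinks by a factor of 10^10, but the retrieved index is the same
    # in both runs: a lower MSE does not by itself mean better retrieval.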


Author information


Corresponding author

Correspondence to Esteban Marquer.



Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Marquer, E., Alsaidi, S., Decker, A., Murena, P.A., Couceiro, M. (2022). A Deep Learning Approach to Solving Morphological Analogies. In: Keane, M.T., Wiratunga, N. (eds) Case-Based Reasoning Research and Development. ICCBR 2022. Lecture Notes in Computer Science, vol 13405. Springer, Cham. https://doi.org/10.1007/978-3-031-14923-8_11


  • DOI: https://doi.org/10.1007/978-3-031-14923-8_11

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-14922-1

  • Online ISBN: 978-3-031-14923-8

  • eBook Packages: Computer Science, Computer Science (R0)
