Abstract
Analogical learning is a lazy learning mechanism that maps input forms (e.g. strings) to output forms, exploiting analogies identified in the training material. It has proven effective in a number of Natural Language Processing (NLP) tasks, such as machine translation. One challenge with this approach is solving so-called analogical equations. In this paper, we investigate how structured learning can be used to learn to solve formal analogical equations. We evaluate our learning procedure on several test sets and show that we improve upon fair baselines.
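For intuition, a formal analogical equation [a : b :: c : x] asks for the string x that stands to c as b stands to a. The following is a minimal illustrative sketch, not the method of the paper: it handles only a single prefix/suffix commutation, whereas full string-analogy solvers (e.g. Lepage's algorithm) handle arbitrary interleaved commutations. The function name is ours.

```python
def solve_equation(a, b, c):
    """Solve the formal analogical equation a : b :: c : x for strings.

    Restricted sketch: only a single prefix/suffix commutation is handled.
    Returns None when this restricted solver finds no solution.
    """
    # Case 1: b extends a by a suffix -> append that suffix to c.
    if b.startswith(a):
        return c + b[len(a):]
    # Case 2: a and b share a common prefix p, i.e. a = p + s_a, b = p + s_b.
    # If c ends with s_a, swap that suffix for s_b.
    p = 0
    while p < min(len(a), len(b)) and a[p] == b[p]:
        p += 1
    s_a, s_b = a[p:], b[p:]
    if s_a and c.endswith(s_a):
        return c[: -len(s_a)] + s_b
    return None


print(solve_equation("walk", "walked", "jump"))       # jumped
print(solve_equation("reader", "reading", "looker"))  # looking
```

Equations whose commutations interleave inside the strings (the higher-degree analogies discussed in the notes below) defeat this restricted solver and motivate the learned approach.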
Notes
1. Over 3 million vectors of dimension 300, for words seen at least 5 times; trained with the skip-gram model on the large Google News corpus.
2. The first valid index is 0.
3. The definition immediately follows from Theorem 1.
4.
5.
6. The degree of an analogy roughly correlates with the number of commutations among the strings involved; the higher the degree, the harder the analogy is to solve.
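Note 1 above refers to skip-gram word embeddings; over such vectors, semantic analogies are commonly solved with the vector-offset (3CosAdd) method: find the word whose vector is closest to b - a + c by cosine similarity. A minimal sketch follows; the tiny 2-d vocabulary is invented purely for illustration and bears no relation to real embeddings.

```python
import numpy as np

def solve_3cosadd(a, b, c, vocab):
    """Return the word whose vector is closest (by cosine) to b - a + c."""
    target = vocab[b] - vocab[a] + vocab[c]
    target = target / np.linalg.norm(target)
    best, best_sim = None, -np.inf
    for word, vec in vocab.items():
        if word in (a, b, c):  # exclude the query words, as is standard
            continue
        sim = vec @ target / np.linalg.norm(vec)
        if sim > best_sim:
            best, best_sim = word, sim
    return best


# Toy 2-d vectors, invented for illustration only.
vocab = {
    "man":   np.array([1.0, 0.0]),
    "woman": np.array([1.0, 1.0]),
    "king":  np.array([3.0, 0.0]),
    "queen": np.array([3.0, 1.0]),
    "apple": np.array([0.0, 3.0]),
}
print(solve_3cosadd("man", "woman", "king", vocab))  # queen
```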
Acknowledgments
This work has been partly funded by the Natural Sciences and Engineering Research Council of Canada. We thank the reviewers for their constructive comments.
Copyright information
© 2018 Springer Nature Switzerland AG
Cite this paper
Rhouma, R., Langlais, P. (2018). Experiments in Learning to Solve Formal Analogical Equations. In: Cox, M., Funk, P., Begum, S. (eds.) Case-Based Reasoning Research and Development. ICCBR 2018. Lecture Notes in Computer Science, vol. 11156. Springer, Cham. https://doi.org/10.1007/978-3-030-01081-2_40
Print ISBN: 978-3-030-01080-5
Online ISBN: 978-3-030-01081-2