Integration of the Dual Approaches in the Distributional Learning of Context-Free Grammars
Recently, several “distributional learning” algorithms have been proposed and have achieved considerable success in learning various subclasses of context-free grammars. Distributional learning models and exploits the relation between strings and the contexts in which they form grammatical sentences of the target language. There are two main approaches. One, which we call primal, constructs nonterminals whose languages are characterized by sets of strings. The other, which we call dual, uses sets of contexts to characterize the language of each nonterminal of the conjectured grammar. This paper shows how these opposite approaches can be integrated into single learning algorithms that learn quite rich classes of context-free grammars.
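The string/context relation that distributional learning exploits can be made concrete with a small sketch. The following Python fragment (hypothetical helper names; not code from the paper) computes, for a substring s and a finite sample of sentences, the set of contexts (l, r) such that lsr is a sentence of the sample. The primal approach characterizes a nonterminal by the strings sharing such context sets; the dual approach characterizes it by the contexts themselves.

```python
def contexts_of(s, sample):
    """Contexts (l, r) such that l + s + r is a sentence in the sample.

    This is the finite, sample-based approximation of the
    string/context relation used in distributional learning.
    """
    result = set()
    for w in sample:
        # Scan every occurrence of s inside w and record its context.
        for i in range(len(w) - len(s) + 1):
            if w[i:i + len(s)] == s:
                result.add((w[:i], w[i + len(s):]))
    return result

# A finite sample drawn from the language {a^n b^n : n >= 1}.
sample = {"ab", "aabb", "aaabbb"}

# Primal view: the string "ab" is characterized by the contexts it occurs in.
# Dual view: each context (l, r) is characterized by the strings it accepts.
print(contexts_of("ab", sample))
```

On this sample, "ab" occurs with the contexts ("", ""), ("a", "b"), and ("aa", "bb"), exactly the kind of evidence from which primal algorithms build nonterminals for strings and dual algorithms build them from contexts.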