Original Article

Journal of Logic, Language and Information, Volume 13, Issue 4, pp 439-455

Learning Local Transductions Is Hard

  • Martin Jansche, Center for Computational Learning Systems, Columbia University


Local deterministic string-to-string transductions arise in natural language processing (NLP) tasks such as letter-to-sound translation or pronunciation modeling. This class of transductions is a simple generalization of morphisms of free monoids; learning local transductions is essentially the same as inference of certain monoid morphisms. However, learning even a highly restricted class of morphisms, the so-called fine morphisms, leads to intractable problems: deciding whether a hypothesized fine morphism is consistent with observations is an NP-complete problem; and maximizing classification accuracy of the even smaller class of alphabetic substitution morphisms is APX-hard. These theoretical results provide some justification for using the kinds of heuristics that are commonly used for this learning task.
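To make the model class concrete, here is a minimal illustrative sketch (not from the paper) of how a morphism of free monoids acts on strings: the morphism is fully determined by the image of each symbol and extends to whole strings by concatenation. The toy alphabet and mappings are hypothetical, and the definition of "fine" assumed here (each symbol maps to a string of length at most one) is one common reading.

```python
def apply_morphism(h, s):
    """Extend a symbol-level map h to a string homomorphism:
    h(s) is the concatenation of the images of the symbols of s."""
    return "".join(h[c] for c in s)

# Hypothetical toy mappings in a letter-to-sound style:
subst = {"a": "A", "b": "B", "c": "K"}  # alphabetic substitution: each
                                        # symbol maps to exactly one symbol
fine = {"a": "A", "b": "", "c": "K"}    # fine (as assumed here): each symbol
                                        # maps to at most one symbol, so
                                        # symbols may also be deleted

print(apply_morphism(subst, "abc"))  # -> ABK
print(apply_morphism(fine, "abc"))   # -> AK

# Homomorphism property: h(xy) == h(x) + h(y)
assert apply_morphism(subst, "ab" + "c") == \
       apply_morphism(subst, "ab") + apply_morphism(subst, "c")
```

Applying a fixed morphism is straightforward; the hardness results in the article concern the learning problems built on top of this class, such as checking consistency with observed string pairs and maximizing classification accuracy.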

Key words

Boolean satisfiability, combinatorial optimization, formal languages, letter-to-sound rules, machine learning, natural language processing, NP completeness, rational transductions