Machine Learning, Volume 44, Issue 1, pp. 9–35

Learning DFA from Simple Examples

  • Rajesh Parekh
  • Vasant Honavar

DOI: 10.1023/A:1010822518073

Cite this article as:
Parekh, R. & Honavar, V. Machine Learning (2001) 44: 9. doi:10.1023/A:1010822518073

Abstract

Efficient learning of DFA is a challenging research problem in grammatical inference. It is known that both exact and approximate (in the PAC sense) identifiability of DFA is hard. Pitt posed the following open research problem: “Are DFA PAC-identifiable if examples are drawn from the uniform distribution, or some other known simple distribution?” (Pitt, in Lecture Notes in Artificial Intelligence, 397, pp. 18–44, Springer-Verlag, 1989). We demonstrate that the class of DFA whose canonical representations have logarithmic Kolmogorov complexity is efficiently PAC learnable under the Solomonoff-Levin universal distribution m. We prove that the class of DFA is efficiently learnable under the PACS (PAC learning with simple examples) model (Denis, D'Halluin & Gilleron, STACS'96—Proceedings of the 13th Annual Symposium on the Theoretical Aspects of Computer Science, pp. 231–242, 1996), wherein positive and negative examples are sampled according to the universal distribution conditional on a description of the target concept. Further, we show that any concept that is learnable under Gold's model of learning from characteristic samples, Goldman and Mathias' polynomial teachability model, or the model of learning from example-based queries is also learnable under the PACS model.
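The intuition behind "simple examples" can be illustrated with a toy sketch. Since Kolmogorov complexity is uncomputable, the sketch below (a hypothetical illustration, not the paper's construction) substitutes string length for description length, sampling labeled examples for a small target DFA with probability decaying exponentially in length, so short, simple strings dominate the sample:

```python
import itertools
import random

def dfa_accepts(s):
    # Toy target DFA over {a, b}: accept strings with an even number of a's.
    state = 0
    for ch in s:
        if ch == 'a':
            state ^= 1  # toggle parity on each 'a'
    return state == 0

def strings_up_to(alphabet, max_len):
    # Enumerate all strings over the alphabet up to a length bound.
    for n in range(max_len + 1):
        for tup in itertools.product(alphabet, repeat=n):
            yield ''.join(tup)

def sample_simple_examples(k, max_len=6, seed=0):
    # Crude stand-in for the universal distribution: weight each string x
    # by 4^(-|x|), so shorter ("simpler") strings are sampled far more often.
    rng = random.Random(seed)
    universe = list(strings_up_to('ab', max_len))
    weights = [4.0 ** -len(x) for x in universe]
    picks = rng.choices(universe, weights=weights, k=k)
    return [(x, dfa_accepts(x)) for x in picks]

sample = sample_simple_examples(10)
```

A learner in this setting receives such length-biased labeled pairs; the paper's point is that, under the genuine universal distribution conditioned on the target's description, a characteristic sample for the DFA appears with non-negligible probability.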

Keywords: DFA inference, exact identification, characteristic sets, PAC learning, collusion

Copyright information

© Kluwer Academic Publishers 2001

Authors and Affiliations

  • Rajesh Parekh, Blue Martini Software, San Mateo, USA
  • Vasant Honavar, Department of Computer Science, Iowa State University, Ames, USA