Learning decision lists and trees with equivalence-queries

  • Hans Ulrich Simon
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 904)


This paper is concerned with the model of learning with equivalence queries, which was introduced by Angluin in [2]. We show that decision lists and decision trees of bounded rank are polynomially learnable in this model. If there are N base functions, then N^2 queries are sufficient for learning lists. For learning trees of rank r, (1+o(1))N^{2r} queries are sufficient. We also investigate the problem of learning a shortest representation of a target decision list. Let k-DL denote the class of decision lists with boolean terms of maximum length k as base functions. We show that shortest representations for lists from 1-DL can be learned efficiently. The corresponding questions for k ≥ 2 are open, although we are able to show some related (but weaker) results. For instance, we present an algorithm which efficiently learns shortest representations of boolean 2-CNF or 2-DNF formulas.
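The abstract does not spell out the learning protocol, so the following is a minimal sketch of how equivalence-query learning of a 1-decision list can proceed, using the standard demotion scheme that yields at most N^2 counterexamples (the paper's own algorithm is not reproduced here; the variable names and the toy target list are illustrative assumptions):

```python
import itertools

# Illustrative simulation of equivalence-query learning for 1-decision lists.
# The learner proposes a hypothesis; the oracle either confirms equivalence or
# returns a counterexample. On a counterexample, the rule that fired wrongly
# is demoted one level. (Sketch only; target list below is made up.)

n = 4  # number of boolean variables in the demo

# Base functions for 1-DL: every literal x_i / !x_i, plus the constant "true".
CONDS = [("true", lambda x: True)]
for i in range(n):
    CONDS.append((f"x{i}", lambda x, i=i: x[i] == 1))
    CONDS.append((f"!x{i}", lambda x, i=i: x[i] == 0))
COND = dict(CONDS)

# Candidate items: each base function paired with each output bit. N = |items|.
ITEMS = [(name, b) for name, _ in CONDS for b in (0, 1)]
N = len(ITEMS)
level = [0] * N  # hypothesis = items sorted by level; first firing item decides

TARGET = [("x0", 1), ("!x2", 0), ("x3", 1), ("true", 0)]  # hidden 1-decision list

def target_eval(x):
    return next(b for name, b in TARGET if COND[name](x))

def hyp_eval(x):
    # First item (in level order, index as tie-break) whose condition fires.
    return next((ITEMS[i][1], i)
                for i in sorted(range(N), key=level.__getitem__)
                if COND[ITEMS[i][0]](x))

queries = 0
while True:
    queries += 1  # one equivalence query: oracle searches for a counterexample
    cex = next((x for x in itertools.product((0, 1), repeat=n)
                if hyp_eval(x)[0] != target_eval(x)), None)
    if cex is None:
        break  # oracle answers "equivalent"
    level[hyp_eval(cex)[1]] += 1  # demote the rule that answered wrongly
```

Each item is demoted at most L times, where L is the length of the target list, so the number of counterexamples is bounded by N·L ≤ N^2, which matches the query bound stated in the abstract.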


Keywords (machine-generated, not supplied by the authors): Base Function, Target Function, Learn Decision Tree, Alternation Level, Decision List




References

  1. A. V. Aho, M. R. Garey, and J. D. Ullman. The transitive reduction of a directed graph. SIAM Journal on Computing, 1(2):131–137, 1972.
  2. Dana Angluin. Queries and concept learning. Machine Learning, 2(4):319–342, 1988.
  3. Avrim Blum. Separating distribution-free and mistake-bound learning models over the boolean domain. In Proceedings of the 31st Symposium on Foundations of Computer Science, pages 211–220, 1990.
  4. Avrim Blum. Rank-r decision trees are a subclass of r-decision lists. Information Processing Letters, 42:183–185, 1992.
  5. Andrzej Ehrenfeucht and David Haussler. Learning decision trees from random examples. Information and Computation, 82(3):231–246, 1989.
  6. M. R. Garey and D. S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. Freeman, San Francisco, 1979.
  7. Thomas Hancock, Tao Jiang, Ming Li, and John Tromp. Lower bounds on learning decision lists and trees. Draft.
  8. David Helmbold, Robert Sloan, and Manfred K. Warmuth. Learning nested differences of intersection-closed concept classes. Machine Learning, 5:165–196, 1990.
  9. N. Jones, Y. Lien, and W. Laaser. New problems complete for nondeterministic log space. Math. Systems Theory, 10:1–17, 1976.
  10. Nick Littlestone. Personal communication.
  11. Nick Littlestone. Learning quickly when irrelevant attributes abound: a new linear threshold algorithm. Machine Learning, 2(4):245–318, 1988.
  12. Kurt Mehlhorn. Data Structures and Algorithms 2: Graph Algorithms and NP-Completeness, volume 2 of EATCS Monographs on Theoretical Computer Science. Springer-Verlag, Berlin, 1984.
  13. Ronald Rivest. Learning decision lists. Machine Learning, 2(3):229–246, 1987.
  14. Leslie G. Valiant. A theory of the learnable. Communications of the ACM, 27(11):1134–1142, 1984.

Copyright information

© Springer-Verlag Berlin Heidelberg 1995

Authors and Affiliations

  • Hans Ulrich Simon
    1. Fachbereich Informatik, Universität Dortmund, Dortmund
