Learning from incomplete boundary queries using split graphs and hypergraphs

Extended abstract
  • Robert H. Sloan
  • György Turán
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1208)

Abstract

We consider learnability with membership queries in the presence of incomplete information. In the incomplete boundary query model introduced by Blum et al. [7], it is assumed that membership queries on instances near the boundary of the target concept may receive a “don't know” answer.
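The boundary-query model can be made concrete with a small sketch (our own illustration, not code from the paper): an oracle that answers membership queries truthfully, except that on instances whose Hamming ball of the given radius crosses the concept boundary it may answer "don't know". The name `incomplete_oracle` and the brute-force neighborhood scan are assumptions made here purely for illustration.

```python
from itertools import product

def incomplete_oracle(target, radius, x):
    """Sketch of a membership oracle with an incomplete boundary:
    if flipping at most `radius` bits of x can change the target's
    value, the instance lies in the boundary region and the oracle
    may answer "don't know" (None); otherwise it answers target(x)."""
    n = len(x)
    values = set()
    # examine every point within Hamming distance `radius` of x
    for flips in product([0, 1], repeat=n):
        if sum(flips) <= radius:
            y = tuple(xi ^ fi for xi, fi in zip(x, flips))
            values.add(target(y))
    if len(values) > 1:
        return None  # boundary region: "don't know" is allowed
    return target(x)
```

For example, with the majority function on three bits and radius 1, the all-ones instance is answered truthfully, while an instance adjacent to the boundary may receive "don't know".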

We show that zero-one threshold functions are efficiently learnable in this model. The learning algorithm uses split graphs when the boundary region has radius 1, and their generalization to split hypergraphs (for which we give a split-finding algorithm) when the boundary region has constant radius greater than 1. We use a notion of indistinguishability of concepts that is appropriate for this model.
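As background on the split graphs used in the radius-1 case (an illustrative sketch, not the paper's split-finding algorithm): a graph is *split* if its vertices can be partitioned into a clique and an independent set, and split graphs are recognizable from the degree sequence alone via the Hammer–Simeone criterion. The function name `is_split` is our own.

```python
def is_split(degrees):
    """Test whether a graphic degree sequence belongs to a split graph,
    by the Hammer-Simeone criterion: with degrees d sorted
    non-increasingly, let m = max{i : d[i] >= i - 1}; the graph is
    split iff sum(d[1..m]) == m*(m-1) + sum(d[m+1..n])."""
    d = sorted(degrees, reverse=True)
    n = len(d)
    m = max(i for i in range(1, n + 1) if d[i - 1] >= i - 1)
    return sum(d[:m]) == m * (m - 1) + sum(d[m:])
```

For instance, the path on three vertices (degrees 2, 1, 1) is split, while the 4-cycle (degrees 2, 2, 2, 2) is the standard non-split example.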

References

  1. D. Angluin. Queries and concept learning. Machine Learning, 2(4):319–342, Apr. 1988.
  2. D. Angluin and M. Kriķis. Learning with malicious membership queries and exceptions. In Proc. 7th Annu. ACM Workshop on Comput. Learning Theory, pages 57–66. ACM Press, New York, NY, 1994.
  3. D. Angluin, M. Kriķis, R. H. Sloan, and G. Turán. Malicious omissions and errors in answers to membership queries. Machine Learning. To appear.
  4. D. Angluin and P. Laird. Learning from noisy examples. Machine Learning, 2(4):343–370, 1988.
  5. D. Angluin and D. K. Slonim. Randomly fallible teachers: learning monotone DNF with an incomplete membership oracle. Machine Learning, 14(1):7–26, 1994.
  6. M. Anthony, G. Brightwell, D. Cohen, and J. Shawe-Taylor. On exact specification by examples. In Proc. 5th Annu. Workshop on Comput. Learning Theory, pages 311–318. ACM Press, New York, NY, 1992.
  7. A. Blum, P. Chalasani, S. A. Goldman, and D. K. Slonim. Learning with unreliable boundary queries. In Proc. 8th Annu. Conf. on Comput. Learning Theory, pages 98–107. ACM Press, New York, NY, 1995.
  8. N. Bshouty, T. Hancock, L. Hellerstein, and M. Karpinski. An algorithm to learn read-once threshold formulas, and transformations between learning models. Computational Complexity, 4:37–61, 1994.
  9. S. Földes and P. L. Hammer. Split graphs. Congressus Numerantium, 19:311–315, 1977.
  10. S. A. Goldman and H. D. Mathias. Learning k-term DNF formulas with an incomplete membership oracle. In Proc. 5th Annu. Workshop on Comput. Learning Theory, pages 77–84. ACM Press, New York, NY, 1992.
  11. S. A. Goldman and R. H. Sloan. Can PAC learning algorithms tolerate random attribute noise? Algorithmica, 14:70–84, 1995.
  12. M. C. Golumbic. Algorithmic Graph Theory and Perfect Graphs. Computer Science and Applied Mathematics. Academic Press, New York, 1980.
  13. Q. P. Gu and A. Maruoka. Learning monotone boolean functions by uniformly distributed examples. SIAM J. Comput., 21:587–599, 1992.
  14. T. Hegedüs. On training simple neural networks and small-weight neurons. In Computational Learning Theory: Eurocolt '93, volume New Series Number 53 of The Institute of Mathematics and its Applications Conference Series, pages 69–82, Oxford, 1994. Oxford University Press.
  15. K. J. Lang and E. B. Baum. Query learning can work poorly when a human oracle is used. In International Joint Conference on Neural Networks, Beijing, 1992.
  16. N. Littlestone. Learning quickly when irrelevant attributes abound: a new linear-threshold algorithm. Machine Learning, 2:285–318, 1988.
  17. N. V. R. Mahadev and U. N. Peled. Threshold Graphs and Related Topics, volume 56 of Annals of Discrete Mathematics. Elsevier Science B.V., Amsterdam, The Netherlands, 1995.
  18. U. Peled. Personal communication.
  19. L. Pitt and L. Valiant. Computational limitations on learning from examples. J. ACM, 35:965–984, 1988.
  20. Y. Sakakibara. On learning from queries and counterexamples in the presence of noise. Inform. Proc. Lett., 37:279–284, 1991.
  21. R. H. Sloan. Four types of noise in data for PAC learning. Inform. Proc. Lett., 54:157–162, 1995.
  22. R. H. Sloan and G. Turán. Learning with queries but incomplete information. In Proc. 7th Annu. ACM Workshop on Comput. Learning Theory, pages 237–245. ACM Press, New York, NY, 1994.
  23. L. G. Valiant. Learning disjunctions of conjunctions. In Proc. 9th International Joint Conference on Artificial Intelligence, vol. 1, pages 560–566, Los Angeles, California, 1985. International Joint Committee for Artificial Intelligence.

Copyright information

© Springer-Verlag Berlin Heidelberg 1997

Authors and Affiliations

  • Robert H. Sloan (1)
  • György Turán (2, 3)
  1. Dept. of EE & Computer Science, University of Illinois at Chicago, Chicago, USA
  2. Dept. of Mathematics, Stat., & Computer Science, University of Illinois at Chicago, USA
  3. Research Group on Artificial Intelligence, Hungarian Academy of Sciences, Hungary