
On the Relationship between Models for Learning in Helpful Environments

  • Rajesh Parekh
  • Vasant Honavar
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1891)

Abstract

The PAC model and its equivalents are widely accepted models for the polynomial learnability of concept classes. However, negative results abound in the PAC framework: concept classes such as deterministic finite state automata (DFA) are not efficiently learnable in the PAC model. The PAC model's requirement of learnability under all conceivable distributions could be considered too stringent a restriction for practical applications. Several models for learning in more helpful environments have been proposed in the literature, including learning from example-based queries [2], online learning allowing a bounded number of mistakes [14], learning with the help of teaching sets [7], learning from characteristic sets [5], and learning from simple examples [12,4]. Several concept classes that are not learnable in the standard PAC model have been shown to be learnable in these models. In this paper we identify the relationships between these different learning models. We also address the issue of unnatural collusion between the teacher and the learner that can potentially trivialize the task of learning in helpful environments.
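To make the mistake-bound (online) model of [14] concrete, the sketch below gives a minimal Python rendering of the classical elimination algorithm for monotone disjunctions over n Boolean variables, which makes at most n mistakes on any example sequence consistent with such a disjunction. This is a textbook illustration of the model, not an algorithm from this paper; the function names and the toy driver are our own.

```python
# Minimal sketch of the mistake-bound (online) learning model of [14]:
# the classical elimination algorithm for monotone disjunctions over n
# Boolean variables. On data consistent with a monotone disjunction it
# never errs on positive examples and makes at most n mistakes overall.
# Names and the toy driver are illustrative, not taken from the paper.

def elimination_learner(n, examples):
    """Online learner; `examples` yields (x, label) with x a 0/1 tuple."""
    hypothesis = set(range(n))   # start with every variable in the disjunction
    mistakes = 0
    for x, label in examples:
        prediction = any(x[i] for i in hypothesis)
        if prediction != label:
            mistakes += 1
            # Only false positives occur; drop every variable set to 1,
            # none of which can belong to the target disjunction.
            hypothesis -= {i for i in range(n) if x[i]}
    return hypothesis, mistakes

if __name__ == "__main__":
    # Target concept: x0 OR x2 over n = 4 variables (illustrative only).
    stream = [
        ((0, 1, 0, 1), False),   # forces one mistake: x1 and x3 are eliminated
        ((1, 0, 0, 0), True),
        ((0, 0, 1, 0), True),
        ((0, 0, 0, 0), False),
    ]
    h, m = elimination_learner(4, stream)
    print(f"hypothesis variables: {sorted(h)}, mistakes: {m}")
```

Each mistake removes at least one variable from the hypothesis, which gives the mistake bound of n; this is the kind of "helpful environment" guarantee the abstract contrasts with distribution-free PAC learning.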

Keywords

Models of learning · Query learning · Mistake-bounded learning · PAC learning · Teaching sets · Characteristic samples · DFA learning


References

  1. Angluin, D.: Learning regular sets from queries and counterexamples. Information and Computation 75, 87–106 (1987)
  2. Angluin, D.: Queries and concept learning. Machine Learning 2(4), 319–342 (1988)
  3. Castro, J., Guijarro, D.: Query, PACS and simple-PAC learning. Technical Report LSI-98-2-R, Universitat Politècnica de Catalunya, Spain (1998)
  4. Denis, F., D'Halluin, C., Gilleron, R.: PAC learning with simple examples. In: STACS 1996 – Proceedings of the 13th Annual Symposium on Theoretical Aspects of Computer Science, pp. 231–242 (1996)
  5. Gold, E.M.: Complexity of automaton identification from given data. Information and Control 37(3), 302–320 (1978)
  6. Goldman, S., Mathias, H.: Teaching a smarter learner. In: Proceedings of the Workshop on Computational Learning Theory (COLT 1993), pp. 67–76. ACM Press, New York (1993)
  7. Goldman, S., Mathias, H.: Teaching a smarter learner. Journal of Computer and System Sciences 52, 255–267 (1996)
  8. Haussler, D., Kearns, M., Littlestone, N., Warmuth, M.: Equivalence of models for polynomial learnability. Information and Computation 95, 129–161 (1991)
  9. de la Higuera, C.: Characteristic sets for polynomial grammatical inference. In: Miclet, L., de la Higuera, C. (eds.) ICGI 1996. LNCS (LNAI), vol. 1147, pp. 59–71. Springer, Heidelberg (1996)
  10. Jackson, J., Tomkins, A.: A computational model of teaching. In: Proceedings of the Workshop on Computational Learning Theory (COLT 1992), pp. 319–326. ACM Press, New York (1992)
  11. Kearns, M., Valiant, L.G.: Cryptographic limitations on learning Boolean formulae and finite automata. In: Proceedings of the 21st Annual ACM Symposium on Theory of Computing, New York, pp. 433–444 (1989)
  12. Li, M., Vitányi, P.: Learning simple concepts under simple distributions. SIAM Journal on Computing 20(5), 911–935 (1991)
  13. Li, M., Vitányi, P.: An Introduction to Kolmogorov Complexity and its Applications, 2nd edn. Springer, New York (1997)
  14. Littlestone, N.: Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm. Machine Learning 2, 285–318 (1988)
  15. Oncina, J., García, P.: Inferring regular languages in polynomial update time. In: Pérez, N., et al. (eds.) Pattern Recognition and Image Analysis, pp. 49–61. World Scientific, Singapore (1992)
  16. Parekh, R., Honavar, V.: Simple DFA are polynomially probably exactly learnable from simple examples. In: Proceedings of the Sixteenth International Conference on Machine Learning (ICML 1999), Bled, Slovenia, pp. 298–306 (1999)
  17. Parekh, R.G., Honavar, V.G.: Learning DFA from simple examples. In: Li, M. (ed.) ALT 1997. LNCS (LNAI), vol. 1316, pp. 116–131. Springer, Heidelberg (1997)
  18. Parekh, R.G., Nichitiu, C., Honavar, V.G.: A polynomial time incremental algorithm for regular grammar inference. In: Honavar, V.G., Slutzki, G. (eds.) ICGI 1998. LNCS (LNAI), vol. 1433, pp. 37–49. Springer, Heidelberg (1998)
  19. Pitt, L., Warmuth, M.K.: Reductions among prediction problems: on the difficulty of predicting automata. In: Proceedings of the 3rd IEEE Conference on Structure in Complexity Theory, pp. 60–69 (1988)
  20. Valiant, L.: A theory of the learnable. Communications of the ACM 27, 1134–1142 (1984)

Copyright information

© Springer-Verlag Berlin Heidelberg 2000

Authors and Affiliations

  • Rajesh Parekh (1)
  • Vasant Honavar (2)
  1. Blue Martini Software, San Mateo, USA
  2. Department of Computer Science, Iowa State University, Ames, USA
