Learning and consistency

Chapter in: Algorithmic Learning for Knowledge-Based Systems, Lecture Notes in Computer Science (LNAI), vol. 961.

Abstract

In designing learning algorithms it seems quite reasonable to construct them in such a way that all the data the algorithm has already obtained are correctly and completely reflected in the hypothesis the algorithm outputs on these data. However, this approach may totally fail: it may make the learning problem unsolvable, or it may exclude any efficient solution of it.

Therefore we study several types of consistent learning in recursion-theoretic inductive inference. We show that these types are not of universal power. We give “lower bounds” on this power. We characterize these types by some versions of decidability of consistency with respect to suitable “non-standard” spaces of hypotheses.

Then we investigate the problem of learning consistently in polynomial time. In particular, we present a natural learning problem and prove that it can be solved in polynomial time if and only if the algorithm is allowed to work inconsistently.
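The notion of consistency discussed above can be made concrete with a toy sketch (not from the chapter itself): a learner that, after each new data point, outputs the first hypothesis in an enumerated class that agrees with all examples seen so far. The hypothesis class and data below are made-up illustrations; real inductive inference works over recursive functions and infinite data presentations.

```python
def consistent_learner(hypotheses, data):
    """Identification by enumeration: after each example, output the name
    of the first hypothesis that agrees with ALL examples seen so far."""
    guesses = []
    seen = []
    for x, y in data:
        seen.append((x, y))
        for name, h in hypotheses:
            if all(h(a) == b for a, b in seen):
                guesses.append(name)
                break
        else:
            guesses.append(None)  # no hypothesis in the class is consistent
    return guesses

# Hypothetical hypothesis class: a few simple total functions on integers.
H = [("zero", lambda x: 0),
     ("identity", lambda x: x),
     ("square", lambda x: x * x)]

# Data drawn from the "square" function, presented one example at a time.
data = [(0, 0), (1, 1), (2, 4)]
print(consistent_learner(H, data))  # -> ['zero', 'identity', 'square']
```

Each intermediate hypothesis is consistent with the data seen so far, and the learner converges once enough data rules out the earlier candidates. The chapter's point is that demanding this property at every step can make learning unsolvable or inefficient, whereas an inconsistent learner may succeed in polynomial time.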



Editors: Klaus P. Jantke, Steffen Lange

Copyright information

© 1995 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Wiehagen, R., Zeugmann, T. (1995). Learning and consistency. In: Jantke, K.P., Lange, S. (eds) Algorithmic Learning for Knowledge-Based Systems. Lecture Notes in Computer Science, vol 961. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-60217-8_1

  • DOI: https://doi.org/10.1007/3-540-60217-8_1

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-60217-0

  • Online ISBN: 978-3-540-44737-5
