Abstract
We introduce Dynamic Modeling, a new inductive inference paradigm. Within this learning paradigm, for example, a function h learns a function g iff, in each iteration i, h and g both produce output, h receives as input the sequence of all of g's outputs from prior iterations, g receives all of h's outputs from prior iterations, and, from some iteration on, h's outputs are programs for the output sequence of g.
Dynamic Modeling provides an idealization of, for example, a social interaction in which h seeks to discover program models of the behavior g exhibits while the two interact, and h openly discloses its sequence of candidate program models to g to see how g responds.
Sample results: every g can be so learned by some h; there are g that can be learned by an h only if g can also learn that h back; there are extremely secretive h which cannot be learned back by any g they learn, but which, nonetheless, succeed in learning infinitely many g; quadratic-time learnability is strictly more powerful than linear-time learnability.
This latter result, as well as others, follows immediately from general correspondence theorems obtained from a unified approach to the paradigms within inductive inference.
Many proofs, some sophisticated, employ machine self-reference, a.k.a. recursion theorems.
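The interaction protocol described above can be made concrete with a toy sketch (our own illustration, not from the paper): here g ignores h and simply emits a constant, while h conjectures, at each iteration, a program (modeled as a Python callable) for g's output sequence based on g's prior outputs.

```python
# Toy illustration of the Dynamic Modeling interaction protocol.
# The learner h and the learnee g below are hypothetical examples,
# not constructions from the paper.

def g(h_outputs):
    # g ignores h's prior outputs and emits the constant 7 forever.
    return 7

def h(g_outputs):
    # h conjectures a "program" for g's output sequence: the constant
    # function whose value is g's most recent output (0 if none yet).
    c = g_outputs[-1] if g_outputs else 0
    return lambda i: c

def interact(h, g, rounds):
    """Run the protocol: in each iteration both parties produce an
    output, each having seen only the other's prior outputs."""
    h_outs, g_outs = [], []
    for _ in range(rounds):
        h_next = h(g_outs)   # h sees g's outputs from prior iterations
        g_next = g(h_outs)   # g sees h's outputs from prior iterations
        h_outs.append(h_next)
        g_outs.append(g_next)
    return h_outs, g_outs

h_outs, g_outs = interact(h, g, rounds=10)
# From iteration 1 on, h's conjectures compute g's output sequence,
# so h learns g in the sense of the paradigm.
assert all(h_outs[-1](i) == g_outs[i] for i in range(10))
```

In the real paradigm the conjectures are programs in an acceptable programming system and g's behavior may depend on h's disclosures; the sketch only shows the shape of the iterated exchange.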
© 2008 Springer-Verlag Berlin Heidelberg
Cite this paper
Case, J., Kötzing, T. (2008). Dynamic Modeling in Inductive Inference. In: Freund, Y., Györfi, L., Turán, G., Zeugmann, T. (eds.) Algorithmic Learning Theory. ALT 2008. Lecture Notes in Computer Science, vol. 5254. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-87987-9_33
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-87986-2
Online ISBN: 978-3-540-87987-9