
Dynamic Modeling in Inductive Inference

  • Conference paper
Algorithmic Learning Theory (ALT 2008)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 5254)

Abstract

Introduced is a new inductive inference paradigm, Dynamic Modeling. Within this learning paradigm, for example, function h learns function g iff, in the i-th iteration, h and g both produce output, h gets the sequence of all of g’s outputs from prior iterations as input, g gets the sequence of all of h’s outputs from prior iterations as input, and, from some iteration on, h’s outputs are programs for the output sequence of g.

Dynamic Modeling provides an idealization of, for example, a social interaction in which h seeks to discover program models of g’s behavior it sees in interacting with g, and h openly discloses to g its sequence of candidate program models to see what g says back.
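The round-by-round information flow of this interaction can be sketched concretely. Below is an illustrative simulation of finitely many rounds of the protocol; the function names and the toy h and g are hypothetical, not from the paper, and in the paradigm itself the interaction runs over all infinitely many iterations, with h emitting actual programs rather than description strings:

```python
def interact(h, g, iterations):
    """Run h and g against each other for the given number of rounds.

    In iteration i, each participant receives the sequence of ALL of the
    other's outputs from prior iterations, and produces its next output.
    """
    h_outputs, g_outputs = [], []
    for _ in range(iterations):
        next_h = h(tuple(g_outputs))  # h sees only g's prior outputs
        next_g = g(tuple(h_outputs))  # g sees only h's prior outputs
        h_outputs.append(next_h)
        g_outputs.append(next_g)
    return h_outputs, g_outputs

# Toy participants: g reacts to the length of h's history; h emits a
# placeholder "program" (a description string) for g's observed outputs.
g = lambda history: len(history)
h = lambda history: f"program for {list(history)}"

hs, gs = interact(h, g, 3)
# gs == [0, 1, 2]; hs[2] == "program for [0, 1]"
```

In the paradigm, h succeeds iff, from some iteration on, every output of h is a program computing g’s entire output sequence; the loop above only makes explicit what each participant sees in each round.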

Sample results: every g can be so learned by some h; there are g that can only be learned by an h if g can also learn that h back; there are extremely secretive h which cannot be learned back by any g they learn, but which, nonetheless, succeed in learning infinitely many g; quadratic-time learnability is strictly more powerful than linear-time learnability.

This latter result, as well as others, follows immediately from general correspondence theorems obtained from a unified approach to the paradigms within inductive inference.

Many proofs, some sophisticated, employ machine self-reference, a.k.a. recursion theorems.




Copyright information

© 2008 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Case, J., Kötzing, T. (2008). Dynamic Modeling in Inductive Inference. In: Freund, Y., Györfi, L., Turán, G., Zeugmann, T. (eds) Algorithmic Learning Theory. ALT 2008. Lecture Notes in Computer Science, vol 5254. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-87987-9_33

  • DOI: https://doi.org/10.1007/978-3-540-87987-9_33

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-87986-2

  • Online ISBN: 978-3-540-87987-9

  • eBook Packages: Computer Science (R0)
