Learning with growing quality
The “quality” of learning usually grows with experience. We formalize this phenomenon within a recursion-theoretic framework. We consider the learning of total recursive functions by an algorithmic device (an inductive inference machine) and describe the “quality” of learning in two ways: as the probability with which the machine identifies a given function correctly, and as the density of the set of arguments on which the hypothesis produced by the machine coincides with the function being identified. We prove that in both cases there exist classes of sets of total recursive functions such that, for each of these sets, the “quality” with which a learning device can identify an arbitrary function from the set grows with the number of other functions the device is trying to identify at the same time; that is, these classes are identifiable only by learning devices whose learning capability improves with practice.
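The density notion mentioned above can be made concrete with a small sketch (hypothetical, not code from the paper): for a target function f and a hypothesis h, the fraction of arguments x < n on which h agrees with f approximates, for large n, the asymptotic density of the agreement set that the abstract refers to.

```python
def agreement_density(f, h, n):
    """Fraction of arguments x in [0, n) where hypothesis h agrees with target f.

    The density used in the paper is the limiting value of this ratio as
    n grows; here we can only compute finite approximations.
    """
    if n <= 0:
        raise ValueError("n must be positive")
    return sum(1 for x in range(n) if f(x) == h(x)) / n

# Illustrative example: a hypothesis that is wrong exactly on perfect squares.
f = lambda x: x * x
h = lambda x: x * x + (1 if int(x ** 0.5) ** 2 == x else 0)

# Squares below n number about sqrt(n), so the agreement density tends to 1:
# there are 100 squares below 10000, giving a ratio of 0.99 at n = 10000.
print(agreement_density(f, h, 10000))
```

A hypothesis of this kind is "almost correct" in the density sense even though it disagrees with f on infinitely many arguments.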