# Encyclopedia of Machine Learning

2010 Edition
Editors: Claude Sammut, Geoffrey I. Webb

# Connections Between Inductive Inference and Machine Learning

• John Case
• Sanjay Jain
Reference work entry
DOI: https://doi.org/10.1007/978-0-387-30164-8_160

## Definition

Inductive inference is a theoretical framework for modeling learning in the limit. Here we discuss some results from inductive inference that are relevant to the machine learning community.

The mathematical/theoretical area called inductive inference, also known as computability-theoretic learning and learning in the limit (Jain, Osherson, Royer, & Sharma, 1999; Odifreddi, 1999), typically (though, as will be seen below, not always) involves the situation depicted in (1) just below.
$$\textrm{Data } d_0, d_1, d_2, \ldots \;\xrightarrow{\text{In}}\; M \;\xrightarrow{\text{Out}}\; \textrm{Programs } e_0, e_1, e_2, \ldots \qquad (1)$$
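To make (1) concrete, the following minimal sketch in Python illustrates Gold-style identification by enumeration, one classical instance of the paradigm. This example is ours, not from the entry: the hypothesis class (linear functions $f(x) = ax + b$ over the naturals) and all identifier names are chosen purely for illustration. The learner M sees ever-longer finite portions of the graph of an unknown target function and, after each datum, conjectures the least index (a "program") in a fixed enumeration of the class that is consistent with everything seen so far.

```python
from typing import Callable, List, Tuple

def decode(n: int) -> Tuple[int, int]:
    """Cantor-style unpairing: map a single index n to coefficients (a, b)."""
    d = 0
    while (d + 1) * (d + 2) // 2 <= n:  # find the diagonal a + b = d holding n
        d += 1
    a = n - d * (d + 1) // 2
    return a, d - a

def hypothesis(n: int) -> Callable[[int], int]:
    """The n-th 'program' in our enumeration: x -> a*x + b with (a, b) = decode(n)."""
    a, b = decode(n)
    return lambda x: a * x + b

def learner(data: List[Tuple[int, int]]) -> int:
    """M in display (1): conjecture the least index consistent with the data so far.
    (Diverges if no hypothesis fits; inductive inference assumes the target
    comes from the class when judging success.)"""
    n = 0
    while any(hypothesis(n)(x) != y for x, y in data):
        n += 1
    return n

# Simulate the limit process: feed M the graph of a target in the class and
# watch its conjectures e_0, e_1, e_2, ... stabilize.
def target(x: int) -> int:
    return 3 * x + 2  # lies in the class, so M must succeed

data: List[Tuple[int, int]] = []
for x in range(6):
    data.append((x, target(x)))
    print(f"after {len(data)} data point(s), conjecture e = {learner(data)}")
```

With this target the printed conjectures are 3, 18, 18, ...: finitely many mind changes followed by convergence to a single fixed correct program, which is the kind of success criterion (Ex-identification) studied in the works cited below (e.g., Case & Smith, 1983).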
## References

1. Ambainis, A., Case, J., Jain, S., & Suraj, M. (2004). Parsimony hierarchies for inductive inference. Journal of Symbolic Logic, 69, 287–328.
2. Angluin, D., Gasarch, W., & Smith, C. (1989). Training sequences. Theoretical Computer Science, 66(3), 255–272.
3. Angluin, D. (1980). Finding patterns common to a set of strings. Journal of Computer and System Sciences, 21, 46–62.
4. Arikawa, S., Shinohara, T., & Yamamoto, A. (1992). Learning elementary formal systems. Theoretical Computer Science, 95, 97–113.
5. Bain, M., & Sammut, C. (1999). A framework for behavioural cloning. In K. Furukawa, S. Muggleton, & D. Michie (Eds.), Machine intelligence 15. Oxford: Oxford University Press.
6. Baluja, S., & Pomerleau, D. (1995). Using the representation in a neural network’s hidden layer for task specific focus of attention. Technical Report CMU-CS-95-143, School of Computer Science, CMU, May 1995. Appears in Proceedings of the 1995 IJCAI.
7. Bartlett, P., Ben-David, S., & Kulkarni, S. (1996). Learning changing concepts by exploiting the structure of change. In Proceedings of the ninth annual conference on computational learning theory, Desenzano del Garda, Italy. New York: ACM Press.
8. Bartlmae, K., Gutjahr, S., & Nakhaeizadeh, G. (1997). Incorporating prior knowledge about financial markets through neural multitask learning. In Proceedings of the fifth international conference on neural networks in the capital markets.
9. Bārzdiņš, J. (1974a). Inductive inference of automata, functions and programs. In Proceedings of the international congress of mathematicians, Vancouver (pp. 771–776).
10. Bārzdiņš, J. (1974b). Two theorems on the limiting synthesis of functions. In Theory of algorithms and programs (Vol. 210, pp. 82–88). Riga: Latvian State University.
11. Blum, L., & Blum, M. (1975). Toward a mathematical theory of inductive inference. Information and Control, 28, 125–155.
12. Blum, A., & Chalasani, P. (1992). Learning switching concepts. In Proceedings of the fifth annual conference on computational learning theory, Pittsburgh, Pennsylvania (pp. 231–242). New York: ACM Press.
13. Bratko, I., & Muggleton, S. (1995). Applications of inductive logic programming. Communications of the ACM, 38(11), 65–70.
14. Bratko, I., Urbančič, T., & Sammut, C. (1998). Behavioural cloning of control skill. In R. S. Michalski, I. Bratko, & M. Kubat (Eds.), Machine learning and data mining: Methods and applications (pp. 335–351). New York: Wiley.
15. Brazma, A., Ukkonen, E., & Vilo, J. (1996). Discovering unbounded unions of regular pattern languages from positive examples. In Proceedings of the seventh international symposium on algorithms and computation (ISAAC’96), Lecture notes in computer science (Vol. 1178, pp. 95–104). Berlin: Springer-Verlag.
16. Breiman, L. (1996). Bagging predictors. Machine Learning, 24(2), 123–140.
17. Breiman, L. (2001). Random forests. Machine Learning, 45(1), 5–32.
18. Caruana, R. (1993). Multitask connectionist learning. In Proceedings of the 1993 connectionist models summer school (pp. 372–379). NJ: Lawrence Erlbaum.
19. Caruana, R. (1996). Algorithms and applications for multitask learning. In Proceedings of the thirteenth international conference on machine learning (pp. 87–95). San Francisco, CA: Morgan Kaufmann.
20. Case, J. (1994). Infinitary self-reference in learning theory. Journal of Experimental and Theoretical Artificial Intelligence, 6, 3–16.
21. Case, J. (1999). The power of vacillation in language learning. SIAM Journal on Computing, 28(6), 1941–1969.
22. Case, J. (2007). Directions for computability theory beyond pure mathematical recursion theory. In D. Gabbay, S. Goncharov, & M. Zakharyaschev (Eds.), Mathematical problems from applied logic II. New logics for the XXIst century, International Mathematical Series (Vol. 5). New York: Springer.
23. Case, J., & Lynes, C. (1982). Machine inductive inference and language identification. In M. Nielsen & E. Schmidt (Eds.), Proceedings of the ninth international colloquium on automata, languages and programming, Lecture notes in computer science (Vol. 140, pp. 107–115). Berlin: Springer-Verlag.
24. Case, J., & Smith, C. (1983). Comparison of identification criteria for machine inductive inference. Theoretical Computer Science, 25, 193–220.
25. Case, J., & Suraj, M. (2010). Weakened refutability for machine learning of higher order definitions. (Working paper for eventual journal submission).
26. Case, J., Jain, S., Kaufmann, S., Sharma, A., & Stephan, F. (2001). Predictive learning models for concept drift (Special Issue for ALT’98). Theoretical Computer Science, 268, 323–349.
27. Case, J., Jain, S., Lange, S., & Zeugmann, T. (1999). Incremental concept learning for bounded data mining. Information and Computation, 152, 74–110.
28. Case, J., Jain, S., Montagna, F., Simi, G., & Sorbi, A. (2005). On learning to coordinate: Random bits help, insightful normal forms, and competency isomorphisms (Special issue for selected learning theory papers from COLT’03, FOCS’03, and STOC’03). Journal of Computer and System Sciences, 71(3), 308–332.
29. Case, J., Jain, S., Martin, E., Sharma, A., & Stephan, F. (2006). Identifying clusters from positive data. SIAM Journal on Computing, 36(1), 28–55.
30. Case, J., Jain, S., Ott, M., Sharma, A., & Stephan, F. (2000). Robust learning aided by context (Special Issue for COLT’98). Journal of Computer and System Sciences, 60, 234–257.
31. Case, J., Jain, S., & Sharma, A. (1996). Machine induction without revolutionary changes in hypothesis size. Information and Computation, 128, 73–86.
32. Case, J., Jain, S., Stephan, F., & Wiehagen, R. (2004). Robust learning – rich and poor. Journal of Computer and System Sciences, 69(2), 123–165.
33. Case, J., Ott, M., Sharma, A., & Stephan, F. (2002). Learning to win process-control games watching gamemasters. Information and Computation, 174(1), 1–19.
34. Cenzer, D., & Remmel, J. (1992). Recursively presented games and strategies. Mathematical Social Sciences, 24, 117–139.
35. Chen, K. (1982). Tradeoffs in the inductive inference of nearly minimal size programs. Information and Control, 52, 68–86.
36. de Garis, H. (1990a). Genetic programming: Building nanobrains with genetically programmed neural network modules. In IJCNN: International joint conference on neural networks (Vol. 3, pp. 511–516). Piscataway, NJ: IEEE Service Center.
37. de Garis, H. (1990b). Genetic programming: Modular neural evolution for Darwin machines. In M. Caudill (Ed.), IJCNN-90-WASH DC: International joint conference on neural networks (Vol. 1, pp. 194–197). Hillsdale, NJ: Lawrence Erlbaum Associates.
38. de Garis, H. (1991). Genetic programming: Building artificial nervous systems with genetically programmed neural network modules. In B. Souček & The IRIS Group (Eds.), Neural and intelligent systems integration: Fifth and sixth generation integrated reasoning information systems (Chap. 8, pp. 207–234). New York: Wiley.
39. Devaney, M., & Ram, A. (1994). Dynamically adjusting concepts to accommodate changing contexts. In M. Kubat & G. Widmer (Eds.), Proceedings of the ICML-96 pre-conference workshop on learning in context-sensitive domains, Bari, Italy (Journal submission).
40. Dietterich, T., Hild, H., & Bakiri, G. (1995). A comparison of ID3 and backpropagation for English text-to-speech mapping. Machine Learning, 18(1), 51–80.
41. Fahlman, S. (1991). The recurrent cascade-correlation architecture. In R. Lippmann, J. Moody, & D. Touretzky (Eds.), Advances in neural information processing systems (Vol. 3, pp. 190–196). San Mateo, CA: Morgan Kaufmann Publishers.
42. Freivalds, R. (1975). Minimal Gödel numbers and their identification in the limit. In Lecture notes in computer science (Vol. 32, pp. 219–225). Berlin: Springer-Verlag.
43. Freund, Y., & Mansour, Y. (1997). Learning under persistent drift. In S. Ben-David (Ed.), Proceedings of the third European conference on computational learning theory (EuroCOLT’97), Lecture notes in artificial intelligence (Vol. 1208, pp. 94–108). Berlin: Springer-Verlag.
44. Fulk, M. (1990). Robust separations in inductive inference. In Proceedings of the 31st annual symposium on foundations of computer science (pp. 405–410). St. Louis, Missouri. Washington, DC: IEEE Computer Society.
45. Harding, S. (Ed.). (1976). Can theories be refuted? Essays on the Duhem-Quine thesis. Dordrecht: Kluwer Academic Publishers.
46. Helmbold, D., & Long, P. (1994). Tracking drifting concepts by minimizing disagreements. Machine Learning, 14, 27–46.
47. Hildebrand, F. (1956). Introduction to numerical analysis. New York: McGraw-Hill.
48. Jain, S. (1999). Robust behaviorally correct learning. Information and Computation, 153(2), 238–248.
49. Jain, S., & Sharma, A. (1997). Elementary formal systems, intrinsic complexity, and procrastination. Information and Computation, 132, 65–84.
50. Jain, S., & Sharma, A. (2002). Mind change complexity of learning logic programs. Theoretical Computer Science, 284(1), 143–160.
51. Jain, S., Osherson, D., Royer, J., & Sharma, A. (1999). Systems that learn: An introduction to learning theory (2nd ed.). Cambridge, MA: MIT Press.
52. Jain, S., Smith, C., & Wiehagen, R. (2001). Robust learning is rich. Journal of Computer and System Sciences, 62(1), 178–212.
53. Kilpeläinen, P., Mannila, H., & Ukkonen, E. (1995). MDL learning of unions of simple pattern languages from positive examples. In P. Vitányi (Ed.), Computational learning theory, second European conference, EuroCOLT’95, Lecture notes in artificial intelligence (Vol. 904, pp. 252–260). Berlin: Springer-Verlag.
54. Kinber, E. (1977). On a theory of inductive inference. In Lecture notes in computer science (Vol. 56, pp. 435–440). Berlin: Springer-Verlag.
55. Kinber, E., Smith, C., Velauthapillai, M., & Wiehagen, R. (1995). On learning multiple concepts in parallel. Journal of Computer and System Sciences, 50, 41–52.
56. Krishna Rao, M. (1996). A class of Prolog programs inferable from positive data. In S. Arikawa & A. Sharma (Eds.), Seventh international conference on algorithmic learning theory (ALT’96), Lecture notes in artificial intelligence (Vol. 1160, pp. 272–284). Berlin: Springer-Verlag.
57. Krishna Rao, M. (2000). Some classes of Prolog programs inferable from positive data (Special Issue for ALT’96). Theoretical Computer Science A, 241, 211–234.
58. Krishna Rao, M. (2004). Inductive inference of term rewriting systems from positive data. In S. Ben-David, J. Case, & A. Maruoka (Eds.), Algorithmic learning theory: Fifteenth international conference (ALT’2004), Lecture notes in artificial intelligence (Vol. 3244, pp. 69–82). Berlin: Springer-Verlag.
59. Krishna Rao, M. (2005). A class of Prolog programs with non-linear outputs inferable from positive data. In S. Jain, H. U. Simon, & E. Tomita (Eds.), Algorithmic learning theory: Sixteenth international conference (ALT’2005), Lecture notes in artificial intelligence (Vol. 3734, pp. 312–326). Berlin: Springer-Verlag.
60. Krishna Rao, M., & Sattar, A. (1998). Learning from entailment of logic programs with local variables. In M. Richter, C. Smith, R. Wiehagen, & T. Zeugmann (Eds.), Ninth international conference on algorithmic learning theory (ALT’98), Lecture notes in artificial intelligence (Vol. 1501, pp. 143–157). Berlin: Springer-Verlag.
61. Kubat, M. (1992). A machine learning based approach to load balancing in computer networks. Cybernetics and Systems, 23, 389–400.
62. Kummer, M., & Ott, M. (1996). Learning branches and learning to win closed recursive games. In Proceedings of the ninth annual conference on computational learning theory, Desenzano del Garda, Italy. New York: ACM Press.
63. Lange, S., & Wiehagen, R. (1991). Polynomial time inference of arbitrary pattern languages. New Generation Computing, 8, 361–370.
64. Lavrač, N., & Džeroski, S. (1994). Inductive logic programming: Techniques and applications. New York: Ellis Horwood.
65. Maler, O., Pnueli, A., & Sifakis, J. (1995). On the synthesis of discrete controllers for timed systems. In Proceedings of the annual symposium on the theoretical aspects of computer science, LNCS (Vol. 900, pp. 229–242). Berlin: Springer-Verlag.
66. Matwin, S., & Kubat, M. (1996). The role of context in concept learning. In M. Kubat & G. Widmer (Eds.), Proceedings of the ICML-96 pre-conference workshop on learning in context-sensitive domains, Bari, Italy (pp. 1–5).
67. Maye, A., Hsieh, C., Sugihara, G., & Brembs, B. (2007). Order in spontaneous behavior. PLoS ONE, May 2007. See: http://brembs.net/spontaneous/
68. Mishra, N., Ron, D., & Swaminathan, R. (2004). A new conceptual clustering framework. Machine Learning, 56(1–3), 115–151.
69. Mitchell, T. (1997). Machine learning. New York: McGraw Hill.
70. Mitchell, T., Caruana, R., Freitag, D., McDermott, J., & Zabowski, D. (1994). Experience with a learning personal assistant. Communications of the ACM, 37, 80–91.
71. Montagna, F., & Osherson, D. (1999). Learning to coordinate: A recursion theoretic perspective. Synthese, 118, 363–382.
72. Muggleton, S., & De Raedt, L. (1994). Inductive logic programming: Theory and methods. Journal of Logic Programming, 19/20, 669–679.
73. Odifreddi, P. (1999). Classical recursion theory (Vol. II). Amsterdam: Elsevier.
74. Osherson, D., Stob, M., & Weinstein, S. (1986). Systems that learn: An introduction to learning theory for cognitive and computer scientists. Cambridge, MA: MIT Press.
75. Ott, M., & Stephan, F. (2002). Avoiding coding tricks by hyperrobust learning. Theoretical Computer Science, 284(1), 161–180.
76. Pitt, L., & Reinke, R. (1988). Criteria for polynomial-time (conceptual) clustering. Machine Learning, 2, 371–396.
77. Popper, K. (1992). Conjectures and refutations: The growth of scientific knowledge. New York: Basic Books.
78. Pratt, L., Mostow, J., & Kamm, C. (1991). Direct transfer of learned information among neural networks. In Proceedings of the 9th national conference on artificial intelligence (AAAI-91), Anaheim, California. Menlo Park, CA: AAAI Press.
79. Rogers, H. (1987). Theory of recursive functions and effective computability. New York: McGraw-Hill (Reprinted, MIT Press, 1987).
80. Salomaa, A. (1994a). Patterns (The formal language theory column). EATCS Bulletin, 54, 46–62.
81. Salomaa, A. (1994b). Return to patterns (The formal language theory column). EATCS Bulletin, 55, 144–157.
82. Sejnowski, T., & Rosenberg, C. (1986). NETtalk: A parallel network that learns to read aloud. Technical Report JHU-EECS-86-01, Johns Hopkins University.
83. Shimozono, S., Shinohara, A., Shinohara, T., Miyano, S., Kuhara, S., & Arikawa, S. (1994). Knowledge acquisition from amino acid sequences by machine learning system BONSAI. Transactions of Information Processing Society of Japan, 35, 2009–2018.
84. Shinohara, T. (1983). Inferring unions of two pattern languages. Bulletin of Informatics and Cybernetics, 20, 83–88.
85. Shinohara, T., & Arikawa, S. (1995). Pattern inference. In K. P. Jantke & S. Lange (Eds.), Algorithmic learning for knowledge-based systems, Lecture notes in artificial intelligence (Vol. 961, pp. 259–291). Berlin: Springer-Verlag.
86. Smullyan, R. (1961). Theory of formal systems. In Annals of Mathematics Studies (Vol. 47). Princeton, NJ: Princeton University Press.
87. Šuc, D. (2003). Machine reconstruction of human control strategies. Frontiers in artificial intelligence and applications (Vol. 99). Amsterdam: IOS Press.
88. Thomas, W. (1995). On the synthesis of strategies in infinite games. In Proceedings of the annual symposium on the theoretical aspects of computer science, LNCS (Vol. 900, pp. 1–13). Berlin: Springer-Verlag.
89. Thrun, S. (1996). Is learning the n-th thing any easier than learning the first? In Advances in neural information processing systems, 8. San Mateo, CA: Morgan Kaufmann.
90. Thrun, S., & Sullivan, J. (1996). Discovering structure in multiple learning tasks: The TC algorithm. In Proceedings of the thirteenth international conference on machine learning (ICML-96) (pp. 489–497). San Francisco, CA: Morgan Kaufmann.
91. Tsung, F., & Cottrell, G. (1989). A sequential adder using recurrent networks. In IJCNN-89-WASHINGTON DC: International joint conference on neural networks, June 18–22 (Vol. 2, pp. 133–139). Piscataway, NJ: IEEE Service Center.
92. Waibel, A. (1989a). Connectionist glue: Modular design of neural speech systems. In D. Touretzky, G. Hinton, & T. Sejnowski (Eds.), Proceedings of the 1988 connectionist models summer school (pp. 417–425). San Mateo, CA: Morgan Kaufmann.
93. Waibel, A. (1989b). Consonant recognition by modular construction of large phonemic time-delay neural networks. In D. S. Touretzky (Ed.), Advances in neural information processing systems I (pp. 215–223). San Mateo, CA: Morgan Kaufmann.
94. Wallace, C. (2005). Statistical and inductive inference by minimum message length (Information Science and Statistics). New York: Springer (Posthumously published).
95. Wallace, C., & Dowe, D. (1999). Minimum message length and Kolmogorov complexity (Special Issue on Kolmogorov Complexity). Computer Journal, 42(4), 270–283. http://comjnl.oxfordjournals.org/cgi/reprint/42/4/270.
96. Widmer, G., & Kubat, M. (1996). Learning in the presence of concept drift and hidden contexts. Machine Learning, 23, 69–101.
97. Wiehagen, R. (1976). Limes-Erkennung rekursiver Funktionen durch spezielle Strategien [Limit identification of recursive functions by special strategies]. Elektronische Informationsverarbeitung und Kybernetik, 12, 93–99.
98. Wiehagen, R., & Zeugmann, T. (1994). Ignoring data may be the only way to learn efficiently. Journal of Experimental and Theoretical Artificial Intelligence, 6, 131–144.
99. Wright, K. (1989). Identification of unions of languages drawn from an identifiable class. In R. Rivest, D. Haussler, & M. Warmuth (Eds.), Proceedings of the second annual workshop on computational learning theory, Santa Cruz, California (pp. 328–333). San Mateo, CA: Morgan Kaufmann Publishers.
100. Wrobel, S. (1994). Concept formation and knowledge revision. Dordrecht: Kluwer Academic Publishers.
101. Zeugmann, T. (1986). On Bārzdiņš’ conjecture. In K. P. Jantke (Ed.), Analogical and inductive inference, Proceedings of the international workshop, Lecture notes in computer science (Vol. 265, pp. 220–227). Berlin: Springer-Verlag.
102. Zeugmann, T. (1998). Lange and Wiehagen’s pattern language learning algorithm: An average case analysis with respect to its total learning time. Annals of Mathematics and Artificial Intelligence, 23, 117–145.