Abstract
Inductive inference is a branch of computational learning theory that deals with learning in the limit. Although the field is mostly theoretical, it has produced results of use to practical machine learning, including work on multitask or context-sensitive learning, the learnability of elementary formal systems, behavioral cloning, learning to coordinate, and geometric clustering. Results in these areas also often give insight into the limitations of science.
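The core notion of learning in the limit can be illustrated by identification by enumeration: a learner sees ever more input/output data of an unknown function drawn from a known enumerable class and, after each data point, conjectures the first hypothesis in the enumeration consistent with everything seen so far; on enough data, the conjectures stabilize on a correct hypothesis. The following is a minimal sketch of this idea, assuming for illustration a small hypothetical class of linear functions (the class, names, and ranges are illustrative, not from the entry):

```python
def make_class():
    # Enumerable hypothesis class (illustrative): linear functions
    # f(x) = a*x + b with small integer coefficients.
    return [(a, b) for a in range(5) for b in range(5)]

def learn_in_limit(data_stream, hypotheses):
    """Identification by enumeration: after each data point, conjecture
    the first hypothesis in the enumeration consistent with all data
    seen so far. The sequence of conjectures converges in the limit."""
    seen = []
    conjectures = []
    for x, y in data_stream:
        seen.append((x, y))
        for a, b in hypotheses:
            if all(a * xi + b == yi for xi, yi in seen):
                conjectures.append((a, b))
                break
    return conjectures

# Target function f(x) = 3x + 2, presented as a stream of data points.
stream = [(x, 3 * x + 2) for x in range(10)]
guesses = learn_in_limit(stream, make_class())
# After enough data, the conjecture stabilizes on (3, 2).
```

Formal inductive inference studies exactly when such convergence is possible for computable learners over classes of computable functions or languages, where the enumeration trick above no longer always suffices.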
Recommended Reading
Ambainis A, Case J, Jain S, Suraj M (2004) Parsimony hierarchies for inductive inference. J Symb Logic 69:287–328
Angluin D, Gasarch W, Smith C (1989) Training sequences. Theor Comput Sci 66(3):255–272
Angluin D (1980) Finding patterns common to a set of strings. J Comput Syst Sci 21:46–62
Arikawa S, Shinohara T, Yamamoto A (1992) Learning elementary formal systems. Theor Comput Sci 95:97–113
Bain M, Sammut C (1999) A framework for behavioural cloning. In: Furakawa K, Muggleton S, Michie D (eds) Machine intelligence, vol 15. Oxford University Press, Oxford
Baluja S, Pomerleau D (1995) Using the representation in a neural network’s hidden layer for task specific focus of attention. Technical report CMU-CS-95-143, School of Computer Science, CMU, May 1995. Appears in proceedings of the 1995 IJCAI
Bartlett P, Ben-David S, Kulkarni S (1996) Learning changing concepts by exploiting the structure of change. In: Proceedings of the ninth annual conference on computational learning theory, Desenzano del Garda. ACM Press, New York
Bartlmae K, Gutjahr S, Nakhaeizadeh G (1997) Incorporating prior knowledge about financial markets through neural multitask learning. In: Refenes APN, Burgess AN, Moody JE (eds) Decision technologies for computational finance. Proceedings of the fifth international conference on computational finance. Kluwer Academic, pp 425–432
Bārzdiņš J (1974a) Inductive inference of automata, functions and programs. In: Proceedings of the international congress of mathematicians, Vancouver, pp 771–776
Bārzdiņš J (1974b) Two theorems on the limiting synthesis of functions. In: Theory of algorithms and programs, vol 210. Latvian State University, Riga, pp 82–88
Blum L, Blum M (1975) Toward a mathematical theory of inductive inference. Inf Control 28:125–155
Blum A, Chalasani P (1992) Learning switching concepts. In: Proceedings of the fifth annual conference on computational learning theory, Pittsburgh. ACM Press, New York, pp 231–242
Bratko I, Muggleton S (1995) Applications of inductive logic programming. Commun ACM 38(11):65–70
Bratko I, Urbančič T, Sammut C (1998) Behavioural cloning of control skill. In: Michalski RS, Bratko I, Kubat M (eds) Machine learning and data mining: methods and applications. Wiley, New York, pp 335–351
Brazma A, Ukkonen E, Vilo J (1996) Discovering unbounded unions of regular pattern languages from positive examples. In: Proceedings of the seventh international symposium on algorithms and computation (ISAAC’96). Lecture notes in computer science, vol 1178. Springer, Berlin, pp 95–104
Breiman L (1996) Bagging predictors. Mach Learn 24(2):123–140
Breiman L (2001) Random forests. Mach Learn 45(1):5–32
Caruana R (1993) Multitask connectionist learning. In: Proceedings of the 1993 connectionist models summer school. Lawrence Erlbaum, Hillsdale, pp 372–379
Caruana R (1996) Algorithms and applications for multitask learning. In: Proceedings 13th international conference on machine learning. Morgan Kaufmann, San Francisco, pp 87–95
Case J (1994) Infinitary self-reference in learning theory. J Exp Theor Artif Intell 6:3–16
Case J (1999) The power of vacillation in language learning. SIAM J Comput 28(6):1941–1969
Case J (2007) Directions for computability theory beyond pure mathematical recursion theory. In: Gabbay D, Goncharov S, Zakharyaschev M (eds) Mathematical problems from applied logic II. New logics for the twenty-first century. International mathematical series, vol 5. Springer, New York
Case J, Kötzing T (2009) Difficulties in forcing fairness of polynomial time inductive inference. In: Gavalda R, Lugosi G, Zeugmann T, Zilles S (eds) 20th international conference on algorithmic learning theory (ALT’09). LNAI, vol 5809. Springer, Berlin, pp 263–277
Case J, Lynes C (1982) Machine inductive inference and language identification. In: Nielsen M, Schmidt E (eds) Proceedings of the 9th international colloquium on automata, languages and programming. Lecture notes in computer science, vol 140. Springer, Berlin, pp 107–115
Case J, Smith C (1983) Comparison of identification criteria for machine inductive inference. Theor Comput Sci 25:193–220
Case J, Suraj M (2007) Weakened refutability for machine learning of higher order definitions. Working paper for eventual journal submission
Case J, Jain S, Kaufmann S, Sharma A, Stephan F (2001) Predictive learning models for concept drift (special issue for ALT’98). Theor Comput Sci 268:323–349
Case J, Jain S, Lange S, Zeugmann T (1999) Incremental concept learning for bounded data mining. Inf Comput 152:74–110
Case J, Jain S, Montagna F, Simi G, Sorbi A (2005) On learning to coordinate: random bits help, insightful normal forms, and competency isomorphisms (special issue for selected learning theory papers from COLT’03, FOCS’03, and STOC’03). J Comput Syst Sci 71(3):308–332
Case J, Jain S, Martin E, Sharma A, Stephan F (2006) Identifying clusters from positive data. SIAM J Comput 36(1):28–55
Case J, Jain S, Ott M, Sharma A, Stephan F (2000) Robust learning aided by context (special issue for COLT’98). J Comput Syst Sci 60:234–257
Case J, Jain S, Sharma A (1996) Machine induction without revolutionary changes in hypothesis size. Inf Comput 128:73–86
Case J, Jain S, Stephan F, Wiehagen R (2004) Robust learning – rich and poor. J Comput Syst Sci 69(2):123–165
Case J, Ott M, Sharma A, Stephan F (2002) Learning to win process-control games watching gamemasters. Inf Comput 174(1):1–19
Cenzer D, Remmel J (1992) Recursively presented games and strategies. Math Soc Sci 24:117–139
Chen K (1982) Tradeoffs in the inductive inference of nearly minimal size programs. Inf Control 52:68–86
de Garis H (1990a) Genetic programming: building nanobrains with genetically programmed neural network modules. In: IJCNN: international joint conference on neural networks, vol 3. IEEE Service Center, Piscataway, pp 511–516
de Garis H (1990b) Genetic programming: modular neural evolution for Darwin machines. In: Caudill M (ed) IJCNN-90-WASH DC; international joint conference on neural networks, vol 1. Lawrence Erlbaum Associates, Hillsdale, pp 194–197
de Garis H (1991) Genetic programming: building artificial nervous systems with genetically programmed neural network modules. In: Souček B, The IRIS group (eds) Neural and intelligent systems integration: fifth and sixth generation integrated reasoning information systems, Chap. 8. Wiley, New York, pp 207–234
Devaney M, Ram A (1994) Dynamically adjusting concepts to accommodate changing contexts. In: Kubat M, Widmer G (eds) Proceedings of the ICML-96 pre-conference workshop on learning in context-sensitive domains, Bari. Journal submission
Dietterich T, Hild H, Bakiri G (1995) A comparison of ID3 and backpropagation for English text-to-speech mapping. Mach Learn 18(1):51–80
Fahlman S (1991) The recurrent cascade-correlation architecture. In: Lippmann R, Moody J, Touretzky D (eds) Advances in neural information processing systems, vol 3. Morgan Kaufmann Publishers, San Mateo, pp 190–196
Freivalds R (1975) Minimal Gödel numbers and their identification in the limit. Lecture notes in computer science, vol 32. Springer, Berlin, pp 219–225
Freund Y, Mansour Y (1997) Learning under persistent drift. In: Ben-David S, (ed) Proceedings of the third European conference on computational learning theory (EuroCOLT’97). Lecture notes in artificial intelligence, vol 1208. Springer, Berlin, pp 94–108
Fulk M (1990) Robust separations in inductive inference. In: Proceedings of the 31st annual symposium on foundations of computer science. IEEE Computer Society, St. Louis, pp 405–410
Harding S (ed) (1976) Can theories be refuted? Essays on the Duhem-Quine thesis. Kluwer Academic Publishers, Dordrecht
Helmbold D, Long P (1994) Tracking drifting concepts by minimizing disagreements. Mach Learn 14:27–46
Hildebrand F (1956) Introduction to numerical analysis. McGraw-Hill, New York
Jain S (1999) Robust behaviorally correct learning. Inf Comput 153(2):238–248
Jain S, Sharma A (1997) Elementary formal systems, intrinsic complexity, and procrastination. Inf Comput 132:65–84
Jain S, Sharma A (2002) Mind change complexity of learning logic programs. Theor Comput Sci 284(1):143–160
Jain S, Osherson D, Royer J, Sharma A (1999) Systems that learn: an introduction to learning theory, 2nd edn. MIT Press, Cambridge, MA
Jain S, Smith C, Wiehagen R (2001) Robust learning is rich. J Comput Syst Sci 62(1):178–212
Kilpeläinen P, Mannila H, Ukkonen E (1995) MDL learning of unions of simple pattern languages from positive examples. In: Vitányi P (ed) Computational learning theory, second European conference, EuroCOLT’95. Lecture notes in artificial intelligence, vol 904. Springer, Berlin, pp 252–260
Kinber E (1977) On a theory of inductive inference. Lecture notes in computer science, vol 56. Springer, Berlin, pp 435–440
Kinber E, Smith C, Velauthapillai M, Wiehagen R (1995) On learning multiple concepts in parallel. J Comput Syst Sci 50:41–52
Krishna Rao M (1996) A class of prolog programs inferable from positive data. In: Arikawa A, Sharma A (eds) Seventh international conference on algorithmic learning theory (ALT’ 96). Lecture notes in artificial intelligence, vol 1160. Springer, Berlin, pp 272–284
Krishna Rao M (2000) Some classes of prolog programs inferable from positive data (Special Issue for ALT’96). Theor Comput Sci A 241:211–234
Krishna Rao M (2004) Inductive inference of term rewriting systems from positive data. In: Ben-David S, Case J, Maruoka A (eds) Algorithmic learning theory: fifteenth international conference (ALT’2004). Lecture notes in artificial intelligence, vol 3244. Springer, Berlin, pp 69–82
Krishna Rao M (2005) A class of prolog programs with non-linear outputs inferable from positive data. In: Jain S, Simon HU, Tomita E (eds) Algorithmic learning theory: sixteenth international conference (ALT’2005). Lecture notes in artificial intelligence, vol 3734. Springer, Berlin, pp 312–326
Krishna Rao M, Sattar A (1998) Learning from entailment of logic programs with local variables. In: Richter M, Smith C, Wiehagen R, Zeugmann T (eds) Ninth international conference on algorithmic learning theory (ALT’98). Lecture notes in artificial intelligence, vol 1501. Springer, Berlin, pp 143–157
Kubat M (1992) A machine learning based approach to load balancing in computer networks. Cybern Syst 23:389–400
Kummer M, Ott M (1996) Learning branches and learning to win closed recursive games. In: Proceedings of the ninth annual conference on computational learning theory, Desenzano del Garda. ACM Press, New York
Lange S, Wiehagen R (1991) Polynomial time inference of arbitrary pattern languages. New Gener Comput 8:361–370
Lavrač N, Džeroski S (1994) Inductive logic programming: techniques and applications. Ellis Horwood, New York
Maler O, Pnueli A, Sifakis J (1995) On the synthesis of discrete controllers for timed systems. In: Proceedings of the annual symposium on the theoretical aspects of computer science. LNCS, vol 900. Springer, Berlin, pp 229–242
Matwin S, Kubat M (1996) The role of context in concept learning. In: Kubat M, Widmer G (eds) Proceedings of the ICML-96 pre-conference workshop on learning in context-sensitive domains, Bari, pp 1–5
Maye A, Hsieh C, Sugihara G, Brembs B (2007) Order in spontaneous behavior. PLoS One, May 2007. http://brembs.net/spontaneous/
Mishra N, Ron D, Swaminathan R (2004) A new conceptual clustering framework. Mach Learn 56(1–3):115–151
Mitchell T (1997) Machine learning. McGraw Hill, New York
Mitchell T, Caruana R, Freitag D, McDermott J, Zabowski D (1994) Experience with a learning, personal assistant. Commun ACM 37:80–91
Montagna F, Osherson D (1999) Learning to coordinate: a recursion theoretic perspective. Synthese 118:363–382
Muggleton S, De Raedt L (1994) Inductive logic programming: theory and methods. J Logic Program 19/20:629–679
Odifreddi P (1999) Classical recursion theory, vol II. Elsevier, Amsterdam
Osherson D, Stob M, Weinstein S (1986) Systems that learn: an introduction to learning theory for cognitive and computer scientists. MIT Press, Cambridge, MA
Ott M, Stephan F (2002) Avoiding coding tricks by hyperrobust learning. Theor Comput Sci 284(1):161–180
Pitt L, Reinke R (1988) Criteria for polynomial-time (conceptual) clustering. Mach Learn 2:371–396
Popper K (1992) Conjectures and refutations: the growth of scientific knowledge. Basic Books, New York
Pratt L, Mostow J, Kamm C (1991) Direct transfer of learned information among neural networks. In: Proceedings of the 9th national conference on artificial intelligence (AAAI-91), Anaheim. AAAI press, Menlo Park
Rogers H (1987) Theory of recursive functions and effective computability. McGraw Hill, New York. (Reprinted, MIT Press, 1987)
Salomaa A (1994a) Patterns (The formal language theory column). EATCS Bull 54:46–62
Salomaa A (1994b) Return to patterns (The formal language theory column). EATCS Bull 55:144–157
Sejnowski T, Rosenberg C (1986) NETtalk: a parallel network that learns to read aloud. Technical report JHU-EECS-86-01, Johns Hopkins University
Shimozono S, Shinohara A, Shinohara T, Miyano S, Kuhara S, Arikawa S (1994) Knowledge acquisition from amino acid sequences by machine learning system BONSAI. Trans Inf Process Soc Jpn 35:2009–2018
Shinohara T (1983) Inferring unions of two pattern languages. Bull Inf Cybern 20:83–88
Shinohara T, Arikawa A (1995) Pattern inference. In: Jantke KP, Lange S (eds) Algorithmic learning for knowledge-based systems. Lecture notes in artificial intelligence, vol 961. Springer, Berlin, pp 259–291
Smullyan R (1961) Theory of formal systems. Annals of mathematics studies, vol 47. Princeton University Press, Princeton
Šuc D (2003) Machine reconstruction of human control strategies. Frontiers in artificial intelligence and applications, vol 99. IOS Press, Amsterdam
Thomas W (1995) On the synthesis of strategies in infinite games. In: Proceedings of the annual symposium on the theoretical aspects of computer science. LNCS, vol 900. Springer, Berlin, pp 1–13
Thrun S (1996) Is learning the n-th thing any easier than learning the first? In: Advances in neural information processing systems, vol 8. Morgan Kaufmann, San Mateo
Thrun S, Sullivan J (1996) Discovering structure in multiple learning tasks: the TC algorithm. In: Proceedings of the thirteenth international conference on machine learning (ICML-96). Morgan Kaufmann, San Francisco, pp 489–497
Tsung F, Cottrell G (1989) A sequential adder using recurrent networks. In: IJCNN-89-WASHINGTON DC: international joint conference on neural networks, 18–22 June, vol 2. IEEE Service Center, Piscataway, pp 133–139
Waibel A (1989a) Connectionist glue: modular design of neural speech systems. In: Touretzky D, Hinton G, Sejnowski T (eds) Proceedings of the 1988 connectionist models summer school. Morgan Kaufmann, San Mateo, pp 417–425
Waibel A (1989b) Consonant recognition by modular construction of large phonemic time-delay neural networks. In: Touretzky DS (ed) Advances in neural information processing systems I. Morgan Kaufmann, San Mateo, pp 215–223
Wallace C (2005) Statistical and inductive inference by minimum message length. Information science and statistics. Springer, New York. Posthumously published
Wallace C, Dowe D (1999) Minimum message length and Kolmogorov complexity (special issue on Kolmogorov complexity). Comput J 42(4):270–283. http://comjnl.oxfordjournals.org/cgi/reprint/42/4/270
Widmer G, Kubat M (1996) Learning in the presence of concept drift and hidden contexts. Mach Learn 23:69–101
Wiehagen R (1976) Limes-Erkennung rekursiver Funktionen durch spezielle Strategien. Elektronische Informationsverarbeitung und Kybernetik 12:93–99
Wiehagen R, Zeugmann T (1994) Ignoring data may be the only way to learn efficiently. J Exp Theor Artif Intell 6:131–144
Wright K (1989) Identification of unions of languages drawn from an identifiable class. In: Rivest R, Haussler D, Warmuth M (eds) Proceedings of the second annual workshop on computational learning theory, Santa Cruz. Morgan Kaufmann Publishers, San Mateo, pp 328–333
Wrobel S (1994) Concept formation and knowledge revision. Kluwer Academic Publishers, Dordrecht
Zeugmann T (1986) On Bārzdiņš’ conjecture. In: Jantke KP (ed) Proceedings of the international workshop on analogical and inductive inference. Lecture notes in computer science, vol 265. Springer, Berlin, pp 220–227
Zeugmann T (1998) Lange and Wiehagen’s pattern language learning algorithm: an average case analysis with respect to its total learning time. Ann Math Artif Intell 23:117–145
© 2017 Springer Science+Business Media New York
About this entry
Case, J., Jain, S. (2017). Connections Between Inductive Inference and Machine Learning. In: Sammut, C., Webb, G.I. (eds) Encyclopedia of Machine Learning and Data Mining. Springer, Boston, MA. https://doi.org/10.1007/978-1-4899-7687-1_52
Print ISBN: 978-1-4899-7685-7; Online ISBN: 978-1-4899-7687-1