Connections Between Inductive Inference and Machine Learning

  • Reference work entry
Encyclopedia of Machine Learning

Definition

Inductive inference is a theoretical framework for modeling learning in the limit. Here we discuss some results in inductive inference that are relevant to the machine learning community.

The mathematical/theoretical area called Inductive Inference, also known as computability-theoretic learning and learning in the limit (Jain, Osherson, Royer, & Sharma, 1999; Odifreddi, 1999), typically, but, as will be seen below, not always, involves the situation depicted in (1) just below.

$$\textrm{Data } d_{0}, d_{1}, d_{2}, \ldots \;\xrightarrow{\text{In}}\; M \;\xrightarrow{\text{Out}}\; \textrm{Programs } e_{0}, e_{1}, e_{2}, \ldots$$
(1)
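As a minimal illustration of scheme (1), here is a sketch of "identification by enumeration," a classical strategy in this area. The learner M receives the data points f(0), f(1), f(2), ... one at a time and, after each datum, outputs the index of the first hypothesis in a fixed enumeration that is consistent with everything seen so far. When the class contains the target, the sequence of conjectures converges, i.e., M learns f in the limit. The hypothesis class and helper names below are ours, chosen for illustration; they are not from the entry.

```python
def learner_conjectures(hypotheses, data):
    """After each data point (x, f(x)), yield the index of the first
    hypothesis consistent with all data seen so far."""
    seen = []
    for x, y in enumerate(data):
        seen.append((x, y))
        for e, h in enumerate(hypotheses):
            if all(h(a) == b for a, b in seen):
                yield e
                break

# A toy hypothesis class: the constant-0 function, the identity,
# and the squaring function.
H = [lambda n: 0, lambda n: n, lambda n: n * n]
target = H[2]                          # the function to be learned
data = [target(n) for n in range(6)]   # 0, 1, 4, 9, 16, 25

print(list(learner_conjectures(H, data)))  # [0, 1, 2, 2, 2, 2]
```

The conjectures change twice and then stabilize on index 2: convergence in the limit, even though the learner never announces that it is done.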

Let \(\mathbb{N} =\) the set of nonnegative integers. Strings inside computers, including program strings, computer reals, and other data structures, are finite bit strings and hence can be coded into \(\mathbb{N}\). Therefore, mathematically at least, it is without...
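The coding of structured data into \(\mathbb{N}\) can be made concrete with the standard Cantor pairing function, a computable bijection from \(\mathbb{N} \times \mathbb{N}\) onto \(\mathbb{N}\); iterating it codes tuples, and hence finite data structures, as single numbers. This sketch is ours, not part of the entry.

```python
def pair(x, y):
    """Cantor pairing: a bijection from N x N onto N."""
    return (x + y) * (x + y + 1) // 2 + y

def unpair(z):
    """Inverse of pair: recover (x, y) from z."""
    # w is the largest index with the triangular number w*(w+1)/2 <= z.
    w = int(((8 * z + 1) ** 0.5 - 1) / 2)
    while (w + 1) * (w + 2) // 2 <= z:   # guard against float rounding
        w += 1
    t = w * (w + 1) // 2
    y = z - t
    return w - y, y

print(pair(3, 5))    # 41
print(unpair(41))    # (3, 5)
```

Because `pair` and `unpair` are total computable inverses, any argument about learners over bit strings can be recast, without loss of generality, as an argument about learners over \(\mathbb{N}\).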


Recommended Reading

  • Ambainis, A., Case, J., Jain, S., & Suraj, M. (2004). Parsimony hierarchies for inductive inference. Journal of Symbolic Logic, 69, 287–328.

  • Angluin, D., Gasarch, W., & Smith, C. (1989). Training sequences. Theoretical Computer Science, 66(3), 255–272.

  • Angluin, D. (1980). Finding patterns common to a set of strings. Journal of Computer and System Sciences, 21, 46–62.

  • Arikawa, S., Shinohara, T., & Yamamoto, A. (1992). Learning elementary formal systems. Theoretical Computer Science, 95, 97–113.

  • Bain, M., & Sammut, C. (1999). A framework for behavioural cloning. In K. Furakawa, S. Muggleton, & D. Michie (Eds.), Machine intelligence 15. Oxford: Oxford University Press.

  • Baluja, S., & Pomerleau, D. (1995). Using the representation in a neural network’s hidden layer for task specific focus of attention. Technical Report CMU-CS-95-143, School of Computer Science, CMU, May 1995. Appears in Proceedings of the 1995 IJCAI.

  • Bartlett, P., Ben-David, S., & Kulkarni, S. (1996). Learning changing concepts by exploiting the structure of change. In Proceedings of the ninth annual conference on computational learning theory, Desenzano del Garda, Italy. New York: ACM Press.

  • Bartlmae, K., Gutjahr, S., & Nakhaeizadeh, G. (1997). Incorporating prior knowledge about financial markets through neural multitask learning. In Proceedings of the fifth international conference on neural networks in the capital markets.

  • Bārzdiņš, J. (1974a). Inductive inference of automata, functions and programs. In Proceedings of the international congress of mathematicians, Vancouver (pp. 771–776).

  • Bārzdiņš, J. (1974b). Two theorems on the limiting synthesis of functions. In Theory of algorithms and programs (Vol. 210, pp. 82–88). Latvian State University, Riga.

  • Blum, L., & Blum, M. (1975). Toward a mathematical theory of inductive inference. Information and Control, 28, 125–155.

  • Blum, A., & Chalasani, P. (1992). Learning switching concepts. In Proceedings of the fifth annual conference on computational learning theory, Pittsburgh, Pennsylvania (pp. 231–242). New York: ACM Press.

  • Bratko, I., & Muggleton, S. (1995). Applications of inductive logic programming. Communications of the ACM, 38(11), 65–70.

  • Bratko, I., Urbančič, T., & Sammut, C. (1998). Behavioural cloning of control skill. In R. S. Michalski, I. Bratko, & M. Kubat (Eds.), Machine learning and data mining: Methods and applications (pp. 335–351). New York: Wiley.

  • Brazma, A., Ukkonen, E., & Vilo, J. (1996). Discovering unbounded unions of regular pattern languages from positive examples. In Proceedings of the seventh international symposium on algorithms and computation (ISAAC’96), Lecture notes in computer science (Vol. 1178, pp. 95–104). Berlin: Springer-Verlag.

  • Breiman, L. (1996). Bagging predictors. Machine Learning, 24(2), 123–140.

  • Breiman, L. (2001). Random forests. Machine Learning, 45(1), 5–32.

  • Caruana, R. (1993). Multitask connectionist learning. In Proceedings of the 1993 connectionist models summer school (pp. 372–379). NJ: Lawrence Erlbaum.

  • Caruana, R. (1996). Algorithms and applications for multitask learning. In Proceedings 13th international conference on machine learning (pp. 87–95). San Francisco, CA: Morgan Kaufmann.

  • Case, J. (1994). Infinitary self-reference in learning theory. Journal of Experimental and Theoretical Artificial Intelligence, 6, 3–16.

  • Case, J. (1999). The power of vacillation in language learning. SIAM Journal on Computing, 28(6), 1941–1969.

  • Case, J. (2007). Directions for computability theory beyond pure mathematical. In D. Gabbay, S. Goncharov, & M. Zakharyaschev (Eds.), Mathematical problems from applied logic II. New logics for the XXIst century, International Mathematical Series (Vol. 5). New York: Springer.

  • Case, J., & Lynes, C. (1982). Machine inductive inference and language identification. In M. Nielsen & E. Schmidt (Eds.), Proceedings of the 9th International Colloquium on Automata, Languages and Programming, Lecture notes in computer science (Vol. 140, pp. 107–115). Berlin: Springer-Verlag.

  • Case, J., & Smith, C. (1983). Comparison of identification criteria for machine inductive inference. Theoretical Computer Science, 25, 193–220.

  • Case, J., & Suraj, M. (2010). Weakened refutability for machine learning of higher order definitions, 2010. (Working paper for eventual journal submission).

  • Case, J., Jain, S., Kaufmann, S., Sharma, A., & Stephan, F. (2001). Predictive learning models for concept drift (Special Issue for ALT’98). Theoretical Computer Science, 268, 323–349.

  • Case, J., Jain, S., Lange, S., & Zeugmann, T. (1999). Incremental concept learning for bounded data mining. Information and Computation, 152, 74–110.

  • Case, J., Jain, S., Montagna, F., Simi, G., & Sorbi, A. (2005). On learning to coordinate: Random bits help, insightful normal forms, and competency isomorphisms (Special issue for selected learning theory papers from COLT’03, FOCS’03, and STOC’03). Journal of Computer and System Sciences, 71(3), 308–332.

  • Case, J., Jain, S., Martin, E., Sharma, A., & Stephan, F. (2006). Identifying clusters from positive data. SIAM Journal on Computing, 36(1), 28–55.

  • Case, J., Jain, S., Ott, M., Sharma, A., & Stephan, F. (2000). Robust learning aided by context (Special Issue for COLT’98). Journal of Computer and System Sciences, 60, 234–257.

  • Case, J., Jain, S., & Sharma, A. (1996). Machine induction without revolutionary changes in hypothesis size. Information and Computation, 128, 73–86.

  • Case, J., Jain, S., Stephan, F., & Wiehagen, R. (2004). Robust learning – rich and poor. Journal of Computer and System Sciences, 69(2), 123–165.

  • Case, J., Ott, M., Sharma, A., & Stephan, F. (2002). Learning to win process-control games watching gamemasters. Information and Computation, 174(1), 1–19.

  • Cenzer, D., & Remmel, J. (1992). Recursively presented games and strategies. Mathematical Social Sciences, 24, 117–139.

  • Chen, K. (1982). Tradeoffs in the inductive inference of nearly minimal size programs. Information and Control, 52, 68–86.

  • de Garis, H. (1990a). Genetic programming: Building nanobrains with genetically programmed neural network modules. In IJCNN: International Joint Conference on Neural Networks (Vol. 3, pp. 511–516). Piscataway, NJ: IEEE Service Center.

  • de Garis, H. (1990b). Genetic programming: Modular neural evolution for Darwin machines. In M. Caudill (Ed.), IJCNN-90-WASH DC; International joint conference on neural networks (Vol. 1, pp. 194–197). Hillsdale, NJ: Lawrence Erlbaum Associates.

  • de Garis, H. (1991). Genetic programming: Building artificial nervous systems with genetically programmed neural network modules. In B. Soušek, & The IRIS group (Eds.), Neural and intelligent systems integration: Fifth and sixth generation integrated reasoning information systems (Chap. 8, pp. 207–234). New York: Wiley.

  • Devaney, M., & Ram, A. (1994). Dynamically adjusting concepts to accommodate changing contexts. In M. Kubat, G. Widmer (Eds.), Proceedings of the ICML-96 Pre-conference workshop on learning in context-sensitive domains, Bari, Italy (Journal submission).

  • Dietterich, T., Hild, H., & Bakiri, G. (1995). A comparison of ID3 and backpropagation for English text-to-speech mapping. Machine Learning, 18(1), 51–80.

  • Fahlman, S. (1991). The recurrent cascade-correlation architecture. In R. Lippmann, J. Moody, and D. Touretzky (Eds.), Advances in neural information processing systems (Vol. 3, pp. 190–196). San Mateo, CA: Morgan Kaufmann Publishers.

  • Freivalds, R. (1975). Minimal Gödel numbers and their identification in the limit. In Lecture notes in computer science (Vol. 32, pp. 219–225). Berlin: Springer-Verlag.

  • Freund, Y., & Mansour, Y. (1997). Learning under persistent drift. In S. Ben-David (Ed.), Proceedings of the third European conference on computational learning theory (EuroCOLT’97), Lecture notes in artificial intelligence (Vol. 1208, pp. 94–108). Berlin: Springer-Verlag.

  • Fulk, M. (1990). Robust separations in inductive inference. In Proceedings of the 31st annual symposium on foundations of computer science (pp. 405–410). St. Louis, Missouri. Washington, DC: IEEE Computer Society.

  • Harding, S. (Ed.). (1976). Can theories be refuted? Essays on the Duhem-Quine thesis. Dordrecht: Kluwer Academic Publishers.

  • Helmbold, D., & Long, P. (1994). Tracking drifting concepts by minimizing disagreements. Machine Learning, 14, 27–46.

  • Hildebrand, F. (1956). Introduction to numerical analysis. New York: McGraw-Hill.

  • Jain, S. (1999). Robust behaviorally correct learning. Information and Computation, 153(2), 238–248.

  • Jain, S., & Sharma, A. (1997). Elementary formal systems, intrinsic complexity, and procrastination. Information and Computation, 132, 65–84.

  • Jain, S., & Sharma, A. (2002). Mind change complexity of learning logic programs. Theoretical Computer Science, 284(1), 143–160.

  • Jain, S., Osherson, D., Royer, J., & Sharma, A. (1999). Systems that learn: An introduction to learning theory (2nd ed.). Cambridge, MA: MIT Press.

  • Jain, S., Smith, C., & Wiehagen, R. (2001). Robust learning is rich. Journal of Computer and System Sciences, 62(1), 178–212.

  • Kilpeläinen, P., Mannila, H., & Ukkonen, E. (1995). MDL learning of unions of simple pattern languages from positive examples. In P. Vitányi (Ed.), Computational learning theory, second European conference, EuroCOLT’95, Lecture notes in artificial intelligence (Vol. 904, pp. 252–260). Berlin: Springer-Verlag.

  • Kinber, E. (1977). On a theory of inductive inference. In Lecture notes in computer science (Vol. 56, pp. 435–440). Berlin: Springer-Verlag.

  • Kinber, E., Smith, C., Velauthapillai, M., & Wiehagen, R. (1995). On learning multiple concepts in parallel. Journal of Computer and System Sciences, 50, 41–52.

  • Krishna Rao, M. (1996). A class of prolog programs inferable from positive data. In A. Arikawa & A. Sharma (Eds.), Seventh international conference on algorithmic learning theory (ALT’ 96), Lecture notes in artificial intelligence (Vol. 1160, pp. 272–284). Berlin: Springer-Verlag.

  • Krishna Rao, M. (2000). Some classes of prolog programs inferable from positive data (Special Issue for ALT’96). Theoretical Computer Science A, 241, 211–234.

  • Krishna Rao, M. (2004). Inductive inference of term rewriting systems from positive data. In S. Ben-David, J. Case, & A. Maruoka (Eds.), Algorithmic learning theory: Fifteenth international conference (ALT’ 2004), Lecture notes in artificial intelligence (Vol. 3244, pp. 69–82). Berlin: Springer-Verlag.

  • Krishna Rao, M. (2005). A class of prolog programs with non-linear outputs inferable from positive data. In S. Jain, H. U. Simon, & E. Tomita (Eds.), Algorithmic learning theory: Sixteenth international conference (ALT’ 2005), Lecture notes in artificial intelligence (Vol. 3734, pp. 312–326). Berlin: Springer-Verlag.

  • Krishna Rao, M., & Sattar, A. (1998). Learning from entailment of logic programs with local variables. In M. Richter, C. Smith, R. Wiehagen, & T. Zeugmann (Eds.), Ninth international conference on algorithmic learning theory (ALT’ 98), Lecture notes in artificial intelligence (Vol. 1501, pp. 143–157). Berlin: Springer-Verlag.

  • Kubat, M. (1992). A machine learning based approach to load balancing in computer networks. Cybernetics and Systems, 23, 389–400.

  • Kummer, M., & Ott, M. (1996). Learning branches and learning to win closed recursive games. In Proceedings of the ninth annual conference on computational learning theory, Desenzano del Garda, Italy. New York: ACM Press.

  • Lange, S., & Wiehagen, R. (1991). Polynomial time inference of arbitrary pattern languages. New Generation Computing, 8, 361–370.

  • Lavrač, N., & Džeroski, S. (1994). Inductive logic programming: Techniques and applications. New York: Ellis Horwood.

  • Maler, O., Pnueli, A., & Sifakis, J. (1995). On the synthesis of discrete controllers for timed systems. In Proceedings of the annual symposium on the theoretical aspects of computer science, LNCS (Vol. 900, pp. 229–242). Berlin: Springer-Verlag.

  • Matwin, S., & Kubat, M. (1996). The role of context in concept learning. In M. Kubat & G. Widmer (Eds.), Proceedings of the ICML-96 pre-conference workshop on learning in context-sensitive domains, Bari, Italy (pp. 1–5).

  • Maye, A., Hsieh, C., Sugihara, G., & Brembs, B. (2007). Order in spontaneous behavior. PLoS One, May, 2007. See: http://brembs.net/spontaneous/

  • Mishra, N., Ron, D., & Swaminathan, R. (2004). A new conceptual clustering framework. Machine Learning, 56(1–3), 115–151.

  • Mitchell, T. (1997). Machine learning. New York: McGraw Hill.

  • Mitchell, T., Caruana, R., Freitag, D., McDermott, J., & Zabowski, D. (1994). Experience with a learning personal assistant. Communications of the ACM, 37, 80–91.

  • Montagna, F., & Osherson, D. (1999). Learning to coordinate: A recursion theoretic perspective. Synthese, 118, 363–382.

  • Muggleton, S., & De Raedt, L. (1994). Inductive logic programming: Theory and methods. Journal of Logic Programming, 19/20, 669–679.

  • Odifreddi, P. (1999). Classical recursion theory (Vol. II). Amsterdam: Elsevier.

  • Osherson, D., Stob, M., & Weinstein, S. (1986). Systems that learn: An introduction to learning theory for cognitive and computer scientists. Cambridge, MA: MIT Press.

  • Ott, M., & Stephan, F. (2002). Avoiding coding tricks by hyperrobust learning. Theoretical Computer Science, 284(1), 161–180.

  • Pitt, L., & Reinke, R. (1988). Criteria for polynomial-time (conceptual) clustering. Machine Learning, 2, 371–396.

  • Popper, K. (1992). Conjectures and refutations: The growth of scientific knowledge. New York: Basic Books.

  • Pratt, L., Mostow, J., & Kamm, C. (1991). Direct transfer of learned information among neural networks. In Proceedings of the 9th national conference on artificial intelligence (AAAI-91), Anaheim, California. Menlo Park, CA: AAAI press.

  • Rogers, H. (1987). Theory of recursive functions and effective computability. New York: McGraw Hill (Reprinted, MIT Press, 1987).

  • Salomaa, A. (1994a). Patterns (The formal language theory column). EATCS Bulletin, 54, 46–62.

  • Salomaa, A. (1994b). Return to patterns (The formal language theory column). EATCS Bulletin, 55, 144–157.

  • Sejnowski, T., & Rosenberg, C. (1986). NETtalk: A parallel network that learns to read aloud. Technical Report JHU-EECS-86-01, Johns Hopkins University.

  • Shimozono, S., Shinohara, A., Shinohara, T., Miyano, S., Kuhara, S., & Arikawa, S. (1994). Knowledge acquisition from amino acid sequences by machine learning system BONSAI. Transactions of Information Processing Society of Japan, 35, 2009–2018.

  • Shinohara, T. (1983). Inferring unions of two pattern languages. Bulletin of Informatics and Cybernetics, 20, 83–88.

  • Shinohara, T., & Arikawa, A. (1995). Pattern inference. In K. P. Jantke & S. Lange (Eds.), Algorithmic learning for knowledge-based systems, Lecture notes in artificial intelligence (Vol. 961, pp. 259–291). Berlin: Springer-Verlag.

  • Smullyan, R. (1961). Theory of formal systems. In Annals of Mathematics Studies (Vol. 47). Princeton, NJ: Princeton University Press.

  • Šuc, D. (2003). Machine reconstruction of human control strategies. Frontiers in artificial intelligence and applications (Vol. 99). Amsterdam: IOS Press.

  • Thomas, W. (1995). On the synthesis of strategies in infinite games. In Proceedings of the annual symposium on the theoretical aspects of computer science, LNCS (Vol. 900, pp. 1–13). Berlin: Springer-Verlag.

  • Thrun, S. (1996). Is learning the n-th thing any easier than learning the first? In Advances in neural information processing systems, 8. San Mateo, CA: Morgan Kaufmann.

  • Thrun, S., & Sullivan, J. (1996). Discovering structure in multiple learning tasks: The TC algorithm. In Proceedings of the thirteenth international conference on machine learning (ICML-96) (pp. 489–497). San Francisco, CA: Morgan Kaufmann.

  • Tsung, F., & Cottrell, G. (1989). A sequential adder using recurrent networks. In IJCNN-89-WASHINGTON DC: International joint conference on neural networks June 18–22 (Vol. 2, pp. 133–139). Piscataway, NJ: IEEE Service Center.

  • Waibel, A. (1989a). Connectionist glue: Modular design of neural speech systems. In D. Touretzky, G. Hinton, & T. Sejnowski (Eds.), Proceedings of the 1988 connectionist models summer school (pp. 417–425). San Mateo, CA: Morgan Kaufmann.

  • Waibel, A. (1989b). Consonant recognition by modular construction of large phonemic time-delay neural networks. In D. S. Touretzky (Ed.), Advances in neural information processing systems I (pp. 215–223). San Mateo, CA: Morgan Kaufmann.

  • Wallace, C. (2005). Statistical and inductive inference by minimum message length. (Information Science and Statistics). New York: Springer (Posthumously published).

  • Wallace, C., & Dowe, D. (1999). Minimum message length and Kolmogorov complexity (Special Issue on Kolmogorov Complexity). Computer Journal, 42(4), 123–155. http://comjnl.oxfordjournals.org/cgi/reprint/42/4/270.

  • Widmer, G., & Kubat, M. (1996). Learning in the presence of concept drift and hidden contexts. Machine Learning, 23, 69–101.

  • Wiehagen, R. (1976). Limes-Erkennung rekursiver Funktionen durch spezielle Strategien. Elektronische Informationsverarbeitung und Kybernetik, 12, 93–99.

  • Wiehagen, R., & Zeugmann, T. (1994). Ignoring data may be the only way to learn efficiently. Journal of Experimental and Theoretical Artificial Intelligence, 6, 131–144.

  • Wright, K. (1989). Identification of unions of languages drawn from an identifiable class. In R. Rivest, D. Haussler, & M. Warmuth (Eds.), Proceedings of the second annual workshop on computational learning theory, Santa Cruz, California (pp. 328–333). San Mateo, CA: Morgan Kaufmann Publishers.

  • Wrobel, S. (1994). Concept formation and knowledge revision. Dordrecht: Kluwer Academic Publishers.

  • Zeugmann, T. (1986). On Bārzdiņš’ conjecture. In K. P. Jantke (Ed.), Analogical and inductive inference, Proceedings of the international workshop, Lecture notes in computer science (Vol. 265, pp. 220–227). Berlin: Springer-Verlag.

  • Zeugmann, T. (1998). Lange and Wiehagen’s pattern language learning algorithm: An average case analysis with respect to its total learning time. Annals of Mathematics and Artificial Intelligence, 23, 117–145.

Copyright information

© 2011 Springer Science+Business Media, LLC

About this entry

Cite this entry

Case, J., Jain, S. (2011). Connections Between Inductive Inference and Machine Learning. In: Sammut, C., Webb, G.I. (eds) Encyclopedia of Machine Learning. Springer, Boston, MA. https://doi.org/10.1007/978-0-387-30164-8_160
