Machine learning, inductive reasoning, and reliability of generalisations

  • Petr Spelda
Original Article


Abstract

The present paper shows how statistical learning theory and machine learning models can be used to enhance understanding of AI-related epistemological issues concerning inductive reasoning and the reliability of generalisations. Towards this aim, the paper proceeds as follows. First, it expounds Price's dual image of representation in terms of the notions of e-representations and i-representations that constitute subject naturalism. For Price, this is not a strictly anti-representationalist position but rather a dualist one (e- and i-representations). Second, the paper links this debate with machine learning, arguing that statistical learning theory becomes a more viable epistemological tool once it abandons the perspective of object naturalism. The paper then argues that machine learning grounds a form of knowing that can be understood in terms of e- and i-representation learning. Third, this synthesis shows a way of analysing inductive reasoning in terms of the reliability of generalisations stemming from a structure of e- and i-representations. In the age of Artificial Intelligence, connecting Price's dual view of representation with Deep Learning provides an epistemological way forward and perhaps even an approach to how knowing is possible.
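The reliability of generalisations invoked in the abstract can be given a minimal computational sketch in the spirit of statistical learning theory (Vapnik and Chervonenkis 1971; Harman and Kulkarni 2007). The toy target concept, sample sizes, and helper functions below are illustrative assumptions of this sketch, not material from the paper: a learner selects a threshold rule by empirical risk minimisation, and the gap between its training error and its error on fresh data narrows as the sample grows, because the hypothesis class of one-dimensional threshold classifiers has low VC dimension.

```python
import random

random.seed(0)

TRUE_THRESHOLD = 0.5  # hypothetical target concept: label = 1 iff x >= 0.5


def sample(n):
    """Draw n labelled points from the (noiseless) target concept."""
    xs = [random.random() for _ in range(n)]
    return [(x, int(x >= TRUE_THRESHOLD)) for x in xs]


def error(t, data):
    """Fraction of points misclassified by the rule 'predict 1 iff x >= t'."""
    return sum(int((x >= t) != bool(y)) for x, y in data) / len(data)


def fit_threshold(data):
    """Empirical risk minimisation over thresholds placed at sample points."""
    candidates = [x for x, _ in data] + [0.0, 1.0]
    return min(candidates, key=lambda t: error(t, data))


# The train/test gap - an empirical proxy for the reliability of the
# learned generalisation - tends to shrink as the sample size grows.
for n in (10, 100, 1000):
    train, test = sample(n), sample(10_000)
    t = fit_threshold(train)
    gap = abs(error(t, train) - error(t, test))
    print(f"n={n:5d}  learned threshold={t:.3f}  generalisation gap={gap:.4f}")
```

The sketch illustrates the paper's point only at the level of intuition: a uniform convergence guarantee over a restricted hypothesis class is what licenses treating the learned rule as a reliable generalisation.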


Keywords: Representation · Object naturalism · Subject naturalism · Machine learning · Statistical learning theory · Deep learning



Acknowledgements

This research was supported by the Charles University research centres programme no. UNCE/HUM/037, "The Human-Machine Nexus and Its Implications for the International Order", and by grant VVZ C4SS 52-04 at Metropolitan University Prague. I would like to express my gratitude to both anonymous reviewers and to Dr. Vit Stritecky for the time they invested in making insightful suggestions that greatly improved the paper.


References

  1. Armstrong DM (1978) A theory of universals: volume 2: universals and scientific realism. Cambridge University Press, Cambridge
  2. Bartlett PL, Mendelson S (2002) Rademacher and Gaussian complexities: risk bounds and structural results. J Mach Learn Res 3:463–482
  3. Goodfellow I, Bengio Y, Courville A (2016) Deep learning. The MIT Press, Cambridge
  4. Goodman N (1953) Fact, fiction, and forecast. Harvard University Press, Cambridge
  5. Harman G, Kulkarni S (2007) Reliable reasoning: induction and statistical learning theory. The MIT Press, Cambridge
  6. Hernández-Orallo J (2017) The measure of all minds: evaluating natural and artificial intelligence. Cambridge University Press, Cambridge
  7. Hernández-Orallo J, Dowe DL (2010) Measuring universal intelligence: towards an anytime intelligence test. Artif Intell 174(18):1508–1539
  8. Hornik K, Stinchcombe M, White H (1989) Multilayer feedforward networks are universal approximators. Neural Netw 2:359–366
  9. Lewis DK (1983) New work for a theory of universals. Aust J Philos 61(4):343–377
  10. Nagel T (1974) What is it like to be a bat? Philos Rev 83(4):435–450
  11. Nagel T (1986) The view from nowhere. Oxford University Press, New York
  12. Price H (2011) Naturalism without mirrors. Oxford University Press, New York
  13. Price H (2013) Expressivism, pragmatism and representationalism. Cambridge University Press, Cambridge
  14. Quine WVO (1951) Two dogmas of empiricism. Philos Rev 60(1):20–43
  15. Schmidhuber J (2015) Deep learning in neural networks: an overview. Neural Netw 61:85–117
  16. Shea N (2007) Content and its vehicles in connectionist systems. Mind Lang 22(3):246–269
  17. Vapnik VN (1995) The nature of statistical learning theory. Springer, New York
  18. Vapnik VN, Chervonenkis AY (1971) On the uniform convergence of relative frequencies of events to their probabilities. Theory Probab Appl 16(2):264–280
  19. Witten IH, Frank E, Hall MA, Pal CJ (2016) Data mining: practical machine learning tools and techniques, 4th edn. Morgan Kaufmann, Cambridge
  20. Zhang C, Bengio S, Hardt M, Recht B, Vinyals O (2017) Understanding deep learning requires rethinking generalization. In: ICLR 2017

Copyright information

© Springer-Verlag London Ltd., part of Springer Nature 2018

Authors and Affiliations

  1. Periculum: Charles University Research Centre of Excellence, Charles University Prague, Praha 5 – Jinonice, Czech Republic