
Minds and Machines, Volume 28, Issue 4, pp 667–688

Computational Functionalism for the Deep Learning Era

  • Ezequiel López-Rubio

Abstract

Deep learning is a kind of machine learning that takes place in a certain type of artificial neural network known as a deep network. Artificial deep networks, which exhibit many similarities with biological ones, have consistently shown human-like performance in many intelligent tasks. This raises the question of whether that performance is caused by such similarities. After reviewing the structure and learning processes of artificial and biological neural networks, we outline two closely related reasons for the success of deep learning: the extraction of successively higher-level features and the multiple-layer structure. We then offer some indications about how this heated debate should be framed. Next, the value of artificial deep networks as models of the human brain is assessed from the similarity perspective of model representation. Finally, a new version of computational functionalism is proposed that addresses the specificity of deep neural computation better than classic, program-based computational functionalism.
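To make the abstract's notion of "extraction of successively higher-level features" across a "multiple-layer structure" concrete, the following is a minimal illustrative sketch, not taken from the paper: a feedforward deep network in Python with NumPy, where the layer sizes, ReLU activations, and random weights are assumptions chosen purely for illustration. Each layer re-describes the previous layer's output, so deeper activations play the role of higher-level features of the raw input.

```python
import numpy as np

def relu(x):
    """Elementwise rectified linear activation."""
    return np.maximum(0.0, x)

def deep_forward(x, weights, biases):
    """Propagate an input through a stack of layers.

    Returns the list of per-layer feature vectors; entries later in
    the list correspond to successively higher-level features.
    """
    features = []
    h = x
    for W, b in zip(weights, biases):
        h = relu(W @ h + b)
        features.append(h)
    return features

# Illustrative example: a 4-layer network applied to a random input vector.
rng = np.random.default_rng(0)
layer_sizes = [64, 32, 16, 8, 4]   # input dimension followed by 4 hidden layers
weights = [rng.normal(scale=0.1, size=(m, n))
           for n, m in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(m) for m in layer_sizes[1:]]

x = rng.normal(size=layer_sizes[0])  # stand-in for a raw sensory input
for depth, h in enumerate(deep_forward(x, weights, biases), start=1):
    print(f"layer {depth}: feature vector of dimension {h.shape[0]}")
```

In a trained network the weights would of course be learned rather than random; the sketch only shows how the layered composition itself yields a hierarchy of feature representations.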

Keywords

Computational functionalism · Artificial intelligence · Machine learning · Neuroscience

Notes

Acknowledgements

The author wishes to thank the editor and the anonymous reviewers for their constructive feedback on the manuscript. He is also grateful to David Teira (Universidad Nacional de Educación a Distancia, Madrid, Spain) and Emanuele Ratti (University of Notre Dame) for their valuable comments. Finally, he is indebted to José Muñoz-Pérez, José Luis Pérez-de-la-Cruz and Lawrence Mandow (Universidad de Málaga, Spain) for sharing with him their views on Artificial Intelligence.


Copyright information

© Springer Nature B.V. 2018

Authors and Affiliations

  1. Departamento de Lenguajes y Ciencias de la Computación, Universidad de Málaga (UMA), Málaga, Spain
  2. Departamento de Lógica, Historia y Filosofía de la Ciencia, Universidad Nacional de Educación a Distancia (UNED), Madrid, Spain