Deep Intelligence: What AI Should Learn from Nature’s Imagination

Cognitive Computation

Abstract

Artificial intelligence (AI) has recently seen explosive growth and remarkable successes in several application areas. However, it is becoming clear that the methods that have made this possible are subject to several limitations that may inhibit progress towards replicating the more general intelligence seen in humans and other animals. In contrast to current AI methods, which focus on specific tasks and rely on large amounts of offline data and on extensive, slow, and mostly supervised learning, this natural intelligence is quick, versatile, agile, and open-ended. This position paper brings together ideas from neuroscience, evolutionary and developmental biology, and complex systems to analyze why such natural intelligence is possible in animals, and suggests that AI should exploit the same strategies to move in a fundamentally different direction. In particular, it argues that integrated embodiment, modularity, synergy, developmental learning, and evolution are key enablers of natural intelligence and should be at the core of AI systems as well. This analysis leads to the description of a biologically grounded deep intelligence (DI) framework for understanding natural intelligence and for developing a new approach to building more versatile, autonomous, and integrated AI. The paper concludes that the dominant AI paradigm of today is unlikely to lead to truly natural general intelligence, and that something like the biologically inspired DI framework is needed to achieve it.
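
To make the enablers named above concrete, here is a minimal, self-contained toy sketch in Python of how modular controllers ("synergies"), a developmental curriculum that unlocks modules gradually, and an outer evolutionary loop might be composed. This is an illustrative assumption for this edit, not the paper's actual DI architecture: the fitting task, the function names, and all parameters are hypothetical.

    import random

    random.seed(0)

    # Toy "behavior" the agent must eventually produce (hypothetical target).
    TARGET = [0.2, -0.5, 0.9, -0.1]

    def behavior(genome, stage):
        """Developmental gating: only the first stage+1 modules are active."""
        return genome[:stage + 1] + [0.0] * (len(genome) - stage - 1)

    def fitness(genome, stage):
        """Negative squared error on the components unlocked so far."""
        b = behavior(genome, stage)
        return -sum((bi - ti) ** 2 for bi, ti in zip(b, TARGET[:stage + 1]))

    def evolve(pop_size=30, generations=40, n_modules=4):
        # A genome is one gain per module -- a crude stand-in for a synergy.
        pop = [[random.uniform(-1, 1) for _ in range(n_modules)]
               for _ in range(pop_size)]
        for gen in range(generations):
            # Curriculum: unlock one more module every 10 generations.
            stage = min(n_modules - 1, gen // 10)
            pop.sort(key=lambda g: fitness(g, stage), reverse=True)
            parents = pop[:pop_size // 2]
            # Offspring are module-local mutations of the parents.
            children = [[g + random.gauss(0, 0.1) for g in p] for p in parents]
            pop = parents + children
        pop.sort(key=lambda g: fitness(g, n_modules - 1), reverse=True)
        return pop[0]

    print([round(g, 2) for g in evolve()])

In this sketch the curriculum gates which modules can contribute, so early generations solve a low-dimensional version of the task before later modules are unlocked: a crude analogue of "starting small" in development, wrapped inside selection over modular parameters.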


Data Availability

Data sharing not applicable to this article as no datasets were generated or analyzed during the current study.


Author information

Corresponding author

Correspondence to Ali A. Minai.

Ethics declarations

Ethical Approval

This article does not contain any studies with human participants or animals.

Informed Consent

The work in this paper did not involve any studies requiring informed consent.

Conflict of Interest

The author declares no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Minai, A.A. Deep Intelligence: What AI Should Learn from Nature’s Imagination. Cogn Comput 16, 2389–2404 (2024). https://doi.org/10.1007/s12559-023-10124-9
