Abstract
In this position paper, I outline the caveats of the current artificial intelligence (AI) field, which is driven by deep learning (DL) and large data volumes. Although AI/DL has demonstrated huge potential and attracted enormous investment globally, it faces serious problems: not only must a system collect huge datasets and spend enormous time and resources being trained on them, but the trained system also cannot deal effectively with data it has never encountered before (novel data). From a human perspective, any current AI/DL system is completely unintelligent: it is able to represent information, but it has no awareness of what this information means. As an alternative, I propose Neuromorphic Cognitive Learning Systems (NCLS), close imitations of animal and human brains that can address the limitations of AI/DL and achieve true artificial general intelligence. Like human and animal brains, NCLS are unparalleled in their ability to rapidly, and on their own, adapt to and learn from changing and unexpected environmental contingencies with very limited resources. I describe how NCLS-driven AI, inspired by human and animal brains, can pave the way to new computing technologies with the potential to revolutionize industry, the economy, and society. It is my strong belief that NCLS research will have a major impact on real-time autonomous systems, enabling them to achieve human-like intelligence capabilities.
Data Availability
No datasets were generated or analysed during the current study.
Funding
This work was supported by the EU HORIZON 2020 Project ULTRACEPT under Grant 778062.
Author information
Contributions
V.C. wrote the manuscript.
Ethics declarations
Competing Interests
The authors declare no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
Cutsuridis, V. Neuromorphic Cognitive Learning Systems: The Future of Artificial Intelligence?. Cogn Comput (2024). https://doi.org/10.1007/s12559-024-10308-x