Neuromorphic Cognitive Learning Systems: The Future of Artificial Intelligence?

Correspondence · Published in Cognitive Computation

Abstract

In this position paper, I outline the caveats of the current artificial intelligence (AI) field driven by deep learning (DL) and large data volumes. Although AI/DL has demonstrated huge potential and attracted huge investments globally, it faces serious problems: it not only requires huge datasets and enormous time and resources for training, but the trained system also cannot deal effectively with novel data it has never encountered before. From a human perspective, any current AI/DL system is completely unintelligent: it can represent information but has no awareness of what that information means. As an alternative, I propose Neuromorphic Cognitive Learning Systems (NCLS), close imitations of animal and human brains, able to address the AI/DL limitations and achieve true artificial general intelligence. Like human and animal brains, NCLS are unparalleled in their ability to rapidly, and on their own, adapt and learn from changing and unexpected environmental contingencies with very limited resources. I describe how NCLS-driven AI inspired by human and animal brains can pave the way to new computing technologies with the potential to revolutionize industry, economy and society. It is my strong belief that NCLS investigations will have a major impact on real-time autonomous systems achieving human-like intelligence capabilities.
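The on-line, resource-frugal learning claimed for NCLS is typically realized with spiking neurons and local plasticity rules. As a minimal illustrative sketch (not the paper's own model), the following simulates a single leaky integrate-and-fire neuron whose input weights adapt via a pair-based spike-timing-dependent plasticity (STDP) rule; every constant and variable name here is an assumption chosen for demonstration:

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron with pair-based STDP.
# All parameters are illustrative assumptions, not values from the paper.
rng = np.random.default_rng(0)

n_in = 50                        # number of presynaptic spike trains
T = 500                          # simulation steps (1 ms each)
tau_m = 20.0                     # membrane time constant (ms)
v_thresh = 1.0                   # firing threshold
a_plus, a_minus = 0.01, 0.012    # STDP potentiation / depression amplitudes
tau_trace = 20.0                 # plasticity trace time constant (ms)

w = rng.uniform(0.0, 0.1, n_in)  # synaptic weights
pre_trace = np.zeros(n_in)       # presynaptic eligibility traces
post_trace = 0.0                 # postsynaptic eligibility trace
v = 0.0                          # membrane potential
spikes_out = 0

for t in range(T):
    pre = rng.random(n_in) < 0.02        # Poisson-like input spikes this step
    v += (-v / tau_m) + w @ pre          # leaky integration of weighted input
    # Decay plasticity traces, then bump them on the spikes that occurred
    pre_trace *= np.exp(-1.0 / tau_trace)
    post_trace *= np.exp(-1.0 / tau_trace)
    pre_trace[pre] += 1.0
    if v >= v_thresh:                    # postsynaptic spike: reset and learn
        v = 0.0
        spikes_out += 1
        post_trace += 1.0
        w += a_plus * pre_trace          # LTP: pre spikes shortly before post
    w[pre] -= a_minus * post_trace       # LTD: pre spikes shortly after post
    np.clip(w, 0.0, 0.5, out=w)         # keep weights bounded

print(f"output spikes: {spikes_out}, mean weight: {w.mean():.3f}")
```

The key point the sketch makes is that learning is local and continuous: each weight update uses only quantities available at that synapse at that moment, with no stored dataset and no global error backpropagation.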

Data Availability

No datasets were generated or analysed during the current study.


Funding

This work was supported by the EU HORIZON 2020 Project ULTRACEPT under Grant 778062.

Author information

Contributions

V.C. wrote the manuscript.

Corresponding author

Correspondence to Vassilis Cutsuridis.

Ethics declarations

Competing Interests

The author declares no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Cutsuridis, V. Neuromorphic Cognitive Learning Systems: The Future of Artificial Intelligence?. Cogn Comput (2024). https://doi.org/10.1007/s12559-024-10308-x
