
Nonlinear Computing and Nonlinear Artificial Intelligence

  • Behnam Kia
  • William Ditto
Conference paper
Part of the Understanding Complex Systems book series (UCS)

Abstract

The importance and necessity of nonlinearity in artificial intelligence (AI) and deep learning are well understood. A multi-layer neural network with linear activation functions is equivalent to a single layer of neurons. It is the nonlinearity of the activation functions that adds complexity to each layer, transforming the network into a universal computing machine that can approximate any continuous function. However, nonlinearity, and the complexity it creates, has not been sufficiently investigated in AI and modern deep learning systems. NC State University's Nonlinear Artificial Intelligence Lab focuses on nonlinearity and the complexity that comes with it, and investigates how these can serve as an engine of artificial intelligence. We pursue our research at different levels with different goals. In this article we explain our approach and present an overview of our results.
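As a minimal illustration of the point about linear activations (an illustrative NumPy sketch, not code from the Nonlinear Artificial Intelligence Lab; the layer sizes and the choice of ReLU are assumptions), the example below shows that composing two linear layers is exactly equivalent to a single linear layer, whereas inserting a nonlinear activation breaks that equivalence:

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical weight matrices for a two-layer network (sizes are illustrative).
    W1 = rng.standard_normal((8, 4))   # layer 1: 4 inputs -> 8 hidden units
    W2 = rng.standard_normal((3, 8))   # layer 2: 8 hidden units -> 3 outputs
    x  = rng.standard_normal(4)        # an arbitrary input vector

    # With linear activations, the two layers collapse into the single matrix W2 @ W1.
    two_layer_linear = W2 @ (W1 @ x)
    single_layer     = (W2 @ W1) @ x
    print(np.allclose(two_layer_linear, single_layer))      # True

    # A nonlinear activation (ReLU here) prevents this collapse.
    relu = lambda z: np.maximum(z, 0.0)
    two_layer_nonlinear = W2 @ relu(W1 @ x)
    print(np.allclose(two_layer_nonlinear, single_layer))   # generally False

Running the sketch prints True for the purely linear composition and, in general, False once the ReLU is inserted, which is the property the abstract appeals to.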


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Nonlinear Artificial Intelligence Lab, North Carolina State University, Raleigh, USA
