Review of State-of-the-Art in Deep Learning Artificial Intelligence

Abstract

The current state of the art in Deep Learning (DL) based artificial intelligence (AI) is reviewed. Special emphasis is placed on comparing the level of concrete AI systems with human abilities, to show what remains to be done to achieve human-level AI. Several estimates are proposed for comparing the current “intellectual level” of AI systems with the human level; among them is the relation of Shannon’s lower-bound estimate of human word perplexity to recent progress in natural-language AI modeling. Relations between the operation of DL architectures and the principles of living neural information processing are discussed. The problem of AI risks and benefits is also reviewed, with arguments from both sides.
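
To make the perplexity comparison concrete, the sketch below converts Shannon’s per-character entropy bounds for printed English [9] into word-level perplexity figures that can be set against the perplexities reported for neural language models [12]. It is a minimal illustration: the 0.6–1.3 bits-per-character bounds and the 5.5 characters per word (average word length plus the following space) are the commonly quoted values from Shannon’s prediction experiments, and both the numbers and the helper function are assumptions for illustration, not a calculation taken from the paper.

    # Illustrative sketch (assumed values, not the paper's own calculation):
    # word perplexity = 2 ** (bits per word), where bits per word is obtained
    # by scaling Shannon's per-character entropy by an assumed 5.5 chars/word.

    def word_perplexity(bits_per_char: float, chars_per_word: float = 5.5) -> float:
        return 2.0 ** (bits_per_char * chars_per_word)

    for bpc in (0.6, 1.3):  # Shannon's lower and upper per-character estimates
        print(f"{bpc} bits/char -> word perplexity ~{word_perplexity(bpc):.0f}")
    # ~10 at 0.6 bits/char and ~142 at 1.3 bits/char, a range that brackets
    # the perplexities reported for strong neural language models.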


References

  1. https://www.frontiersin.org/research-topics/6714/toward-and-beyond-human-level-ai.

  2. Yudkowsky, E., Artificial Intelligence As a Positive and Negative Factor in Global Risk. https://intelligence.org/files/AIPosNegFactor.pdf.

  3. Muehlhauser, L., What is AGI? https://intelligence.org/2013/08/11/what-is-agi/.

  4. Grace, K., Algorithmic Progress in Six Domains. https://intelligence.org/files/AlgorithmicProgress.pdf.

  5. Schmidhuber, J., Deep Learning in Neural Networks: An Overview. http://arxiv.org/abs/1404.7828.

  6. Dunin-Barkowski, W. and Solovyeva, K., Pavlov Principle and Brain Reverse Engineering, 2018 Conference on Computational Intelligence in Bioinformatics and Computational Biology, CIBCB, May 30–June 2, 2018, Saint Louis, USA, Paper # 37, 5 p.

  7. Pavlov, I., Conditioned reflexes, in Twenty Years of Experience of Objective Studies of Higher Nervous Activity (Behavior) of Animals by I.P. Pavlov, 10th ed. (1st ed. in 1923), Moscow: Nauka, 1973, pp. 485–502 [in Russian].

  8. Gatys, L.A., Ecker, A.S., and Bethge, M., A Neural Algorithm of Artistic Style. http://arxiv.org/abs/1508.06576.

  9. Shannon, C.E., Prediction and Entropy of Written English. http://languagelog.ldc.upenn.edu/myl/Shannon1950.pdf.

  10. http://cs.fit.edu/~mmahoney/dissertation/entropy1.html.

  11. Ghosh, S., Vinyals, O., Strope, B., Roy, S., Dean, T., and Heck, L., Contextual LSTM (CLSTM) Models for Large Scale NLP Tasks. http://arxiv.org/abs/1602.06291.

  12. Jozefowicz, R., Vinyals, O., Schuster, M., Shazeer, N., and Wu, Y., Exploring the Limits of Language Modeling. http://arxiv.org/abs/1602.02410.

  13. Huszár, F., How (not) to Train Your Generative Model: Scheduled Sampling, Likelihood, Adversary? http://arxiv.org/abs/1511.05101.

  14. Theis, L., van den Oord, A., and Bethge, M., A Note on the Evaluation of Generative Models. http://arxiv.org/abs/1511.01844.

  15. Bowman, S.R., Vilnis, L., Vinyals, O., Dai, A.M., Jozefowicz, R., and Bengio, S., Generating Sentences from a Continuous Space. http://arxiv.org/abs/1511.06349.

  16. Arjovsky, M., Chintala, S., and Bottou, L., Wasserstein GAN. https://arxiv.org/pdf/1701.07875.pdf.

  17. https://www.cs.sfu.ca/~anoop/papers/pdf/jhu-ws03-report.pdf.

  18. Shen, S., Cheng, Y., He, Z., He, W., Wu, H., Sun, M., and Liu, Y., Minimum Risk Training for Neural Machine Translation. http://arxiv.org/abs/1512.02433.

  19. Bahdanau, D., Cho, K., and Bengio, Y., Neural Machine Translation by Jointly Learning to Align and Translate. http://arxiv.org/abs/1409.0473.

  20. Sennrich, R., Haddow, B., and Birch, A., Improving Neural Machine Translation Models with Monolingual Data. http://arxiv.org/abs/1511.06709.

  21. Devlin, J., Zbib, R., Huang, Z., Lamar, T., Schwartz, R., and Makhoul, J., Fast and Robust Neural Network Joint Models for Statistical Machine Translation. http://acl2014.org/acl2014/P14-1/pdf/P14-1129.pdf.

  22. Lample, G., Denoyer, L., and Ranzato, M.A., Unsupervised Machine Translation Using Monolingual Corpora Only. http://arxiv.org/abs/1711.00043.

  23. He, K., Zhang, X., Ren, S., and Sun, J., Identity Mappings in Deep Residual Networks. http://arxiv.org/abs/1603.05027.

  24. http://karpathy.github.io/2014/09/02/what-i-learned-from-competing-against-a-convnet-on-imagenet/.

  25. He, K., Zhang, X., Ren, S., and Sun, J., Deep Residual Learning for Image Recognition. http://arxiv.org/abs/1512.03385.

  26. Szegedy, C., Ioffe, S., and Vanhoucke, V., Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning. http://arxiv.org/abs/1602.07261.

  27. Huang, G., Sun, Y., Liu, Z., Sedra, D., and Weinberger, K., Deep Networks with Stochastic Depth. http://arxiv.org/abs/1603.09382.

  28. http://myungjun-youn-demo.readthedocs.org/en/latest/tutorial/imagenet_full.html.

  29. Kokkinos, I., Pushing the Boundaries of Boundary Detection Using Deep Learning. http://arxiv.org/abs/1511.07386.

  30. http://vision.stanford.edu/pdf/karpathy14.pdf.

  31. Ng, J.Y.-H., Hausknecht, M., Vijayanarasimhan, S., Vinyals, O., Monga, R., and Toderici, G., Beyond Short Snippets: Deep Networks for Video Classification. http://arxiv.org/abs/1503.08909.

  32. http://www.bloomberg.com/news/articles/2015-12-08/why-2015-was-a-breakthrough-year-in-artificial-intelligence.

  33. https://github.com/soumith/convnet-benchmarks.

  34. Han, S., Liu, X., Mao, H., Pu, J., Pedram, A., Horowitz, M.A., and Dally, W.J., EIE: Efficient Inference Engine on Compressed Deep Neural Network. http://arxiv.org/abs/1602.01528.

  35. Xiong, C., Merity, S., and Socher, R., Dynamic Memory Networks for Visual and Textual Question Answering. http://arxiv.org/abs/1603.01417.

  36. Antol, S., Agrawal, A., Lu, J., Mitchell, M., Batra, D., Zitnick, C.L., and Parikh, D., VQA: Visual Question Answering. http://arxiv.org/abs/1505.00468.

  37. Yang, Z., He, X., Gao, J., Deng, L., and Smola, A., Stacked Attention Networks for Image Question Answering. http://arxiv.org/abs/1511.02274.

  38. http://competitions.codalab.org/competitions/3221#results.

  39. https://github.com/samim23/NeuralTalkAnimator.

  40. Pan, P., Xu, Z., Yang, Y., Wu, F., and Zhuang, Y., Hierarchical Recurrent Neural Encoder for Video Representation with Application to Captioning. http://arxiv.org/abs/1511.03476.

  41. Yu, H., Wang, J., Huang, Z., Yang, Y., and Xu, W., Video Paragraph Captioning Using Hierarchical Recurrent Neural Networks. http://arxiv.org/abs/1510.07712.

  42. Mansimov, E., Parisotto, E., Ba, J.L., and Salakhutdinov, R., Generating Images from Captions with Attention. http://arxiv.org/abs/1511.02793.

  43. Yan, X., Yang, J., Sohn, K., and Lee, H., Attribute2Image: Conditional Image Generation from Visual Attribute. http://arxiv.org/abs/1512.00570.

  44. Pandey, G. and Dukkipati, A., Variational methods for Conditional Multimodal Learning: Generating Human Faces from Attributes. http://arxiv.org/abs/1603.01801.

  45. http://venturebeat.com/2015/05/28/google-says-its-speech-recognition-technology-now-has-only-an-8-word-error-rate/.

  46. Wu, Z., Jiang, Y.-G., Wang, X., Ye, H., Xue, X., and Wang, J., Fusing Multi-Stream Deep Networks for Video Classification. http://arxiv.org/abs/1509.06086.

  47. Mao, J., Xu, W., Yang, Y., Wang, J., Huang, Z., and Yuille, A., Deep Captioning with Multimodal Recurrent Neural Networks (m-RNN). http://arxiv.org/abs/1412.6632.

  48. Kahou, S.E., Bouthillier, X., Lamblin, P., Gulcehre, C., Michalski, V., Konda, K., Jean, S., Froumenty, P., Dauphin, Y., Boulanger-Lewandowski, N., Ferrari, R.C., Mirza, M., Warde-Farley, D., Courville, A., Vincent, P., Memisevic, R., Pal, C., and Bengio, Y., EmoNets: Multimodal Deep Learning Approaches for Emotion Recognition in Video. http://arxiv.org/abs/1503.01800.

  49. Moon, S., Kim, S., and Wang, H., Multimodal Transfer Deep Learning with Applications in Audio-Visual Recognition. http://arxiv.org/abs/1412.3121.

  50. Rohrbach, A., Rohrbach, M., Hu, R., Darrell, T., and Schiele, B., Grounding of Textual Phrases in Images by Reconstruction. http://arxiv.org/abs/1511.03745.

  51. Yang, Y., Li, Y., Fermuller, C., and Aloimonos, Y., Neural Self Talk: Image Understanding via Continuous Questioning and Answering. http://arxiv.org/abs/1512.03460.

  52. Silver, D., Schrittwieser, J., et al., Mastering the game of Go without human knowledge, Nature, 2017, vol. 550, pp. 354–359.

  53. Gu, S., Lillicrap, T.P., Sutskever, I., and Levine, S., Continuous Deep Q-Learning with Model-Based Acceleration. http://arxiv.org/abs/1603.00748.

  54. Schulman, J., Moritz, P., Levine, S., Jordan, M., and Abbeel, P., High-Dimensional Continuous Control Using Generalized Advantage Estimation. http://arxiv.org/abs/1506.02438.

  55. Lillicrap, T.P., Hunt, J.J., Pritzel, A., Heess, N., Erez, T., Tassa, Y., Silver, D., and Wierstra, D., Continuous Control with Deep Reinforcement Learning. http://arxiv.org/abs/1509.02971.

  56. Finn, C., Levine, S., and Abbeel, P., Guided Cost Learning: Deep Inverse Optimal Control via Policy Optimization. http://arxiv.org/abs/1603.00448.

  57. Hausknecht, M. and Stone, P., Deep Reinforcement Learning in Parameterized Action Space. http://arxiv.org/abs/1511.04143.

  58. Heess, N., Wayne, G., Silver, D., Lillicrap, T., Tassa, Y., and Erez, T., Learning Continuous Control Policies by Stochastic Value Gradients. http://arxiv.org/abs/1510.09142.

  59. Heess, N., Hunt, J.J., Lillicrap, T.P., and Silver, D., Memory-Based Control with Recurrent Neural Networks. http://arxiv.org/abs/1512.04455.

  60. Balduzzi, D. and Ghifary, M., Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies. http://arxiv.org/abs/1509.03005.

  61. Dulac-Arnold, G., Evans, R., Sunehag, P., and Coppin, B., Reinforcement Learning in Large Discrete Action Spaces. http://arxiv.org/abs/1512.07679.

  62. http://togelius.blogspot.ru/2016/01/why-video-games-are-essential-for.html.

  63. He, J., Chen, J., He, X., Gao, J., Li, L., Deng, L., and Ostendorf, M., Deep Reinforcement Learning with an Action Space Defined by Natural Language. http://arxiv.org/abs/1511.04636.

  64. Narasimhan, K., Kulkarni, T., and Barzilay, R., Language Understanding for Text-Based Games Using Deep Reinforcement Learning. http://arxiv.org/abs/1506.08941.

  65. Sutton, R.S. and Barto, A.G., Reinforcement Learning: An Introduction, Massachusetts: MIT Press, 2018.

  66. Dream. https://en.wikipedia.org/wiki/Dream.

  67. Radford, A., Metz, L., and Chintala, S., Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. http://arxiv.org/abs/1511.06434.

  68. https://www.facebook.com/yann.lecun/posts/10153269667222143.

  69. Kiros, R., Zhu, Y., Salakhutdinov, R., Zemel, R.S., Torralba, A., Urtasun, R., and Fidler, S., Skip-Thought Vectors. http://arxiv.org/abs/1506.06726.

  70. Kingma, D.P. and Welling, M., Auto-Encoding Variational Bayes. http://arxiv.org/abs/1312.6114.

  71. Rezende, D.J., Mohamed, S., and Wierstra, D., Stochastic Backpropagation and Approximate Inference in Deep Generative Models. http://arxiv.org/abs/1401.4082.

  72. Eslami, S.M.A., Heess, N., Weber, T., Tassa, Y., Kavukcuoglu, K., and Hinton, G.E., Attend, Infer, Repeat: Fast Scene Understanding with Generative Models. http://arxiv.org/abs/1603.08575.

  73. https://en.wikipedia.org/wiki/Tetra-amelia_syndrome.

  74. https://en.wikipedia.org/wiki/Hirotada_Ototake.

  75. https://en.wikipedia.org/wiki/Nick_Vujicic.

  76. https://en.wikipedia.org/wiki/Prince_Randian.

  77. http://googleresearch.blogspot.ru/2016/03/deep-learning-for-robots-learning-from.html.

  78. https://kaiserfamilyfoundation.files.wordpress.com/2013/04/8010.pdf.

  79. http://bbc.com/news/technology-28677674.

  80. https://faculty.washington.edu/chudler/facts.html.

  81. Kennedy, D.N., Lange, N., et al., Gyri of the human neocortex, Cereb. Cortex, 1998, vol. 8, pp. 372–384.

  82. Maruoka, H., Nakagawa, N., Tsuruno, S., Sakai, S., Yoneda, T., and Hosoya, T., Lattice system of functionally distinct cell types in the neocortex, Science, 2017, vol. 358, pp. 610–615.

  83. Vinyals, O. and Le, Q., A Neural Conversational Model. http://arxiv.org/abs/1506.05869.

  84. Ghosh, S., Vinyals, O., Strope, B., Roy, S., Dean, T., and Heck, L., Contextual LSTM (CLSTM) Models for Large Scale NLP Tasks. http://arxiv.org/abs/1602.06291.

  85. Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., and Riedmiller, M., Playing Atari with Deep Reinforcement Learning. http://arxiv.org/abs/1312.5602.

  86. Lake, B.M., Ullman, T.D., Tenenbaum, J.B., and Gershman, S.J., Building machines that learn and think like people, Behav. Brain Sci., 2017, vol. 40, p. 72. doi: 10.1017/S0140525X16001837

  87. Eberle, A.L., Mikula, S., Schalek, R., Lichtman, J., Knothe Tate, M.L., and Zeidler, D., High-Resolution, High-Throughput Imaging with a Multibeam Scanning Electron Microscope. http://onlinelibrary.wiley.com/doi/10.1111/jmi.12224/pdf.

  88. http://news.harvard.edu/gazette/story/2016/01/28m-challenge-figure-out-why-brains-are-so-good-at-learning/.

  89. Kasthuri, N., Hayworth, K.J., et al., Saturated Reconstruction of a Volume of Neocortex. https://www.mcb.harvard.edu/mcb_files/media/editor_uploads/2015/07/PIIS0092867415008247.pdf.

  90. Lillicrap, T.P., Cownden, D., Tweed, D.B., and Akerman, C.J., Random Feedback Weights Support Learning in Deep Neural Networks. http://arxiv.org/abs/1411.0247.

  91. Lillicrap, T.P., Cownden, D., Tweed, D.B., and Akerman, C.J., Random synaptic feedback weights support error backpropagation for deep learning, Nat. Commun., 2016, vol. 7, no. 13276, p. 10.

  92. Hebb, D., The Organization of Behavior, New York: Wiley, 1949.

  93. Nøkland, A., Direct Feedback Alignment Provides Learning in Deep Neural Networks. https://arxiv.org/pdf/1609.01596.pdf.

  94. Karandashev, I.M. and Dunin-Barkowski, W.L., Computational verification of approximate probabilistic estimates of operational efficiency of random neural networks, Opt. Mem. Neural Networks, 2015, vol. 24, pp. 8–17.

  95. Solovyeva, K.P., Karandashev, I.M., Zhavoronkov, A., and Dunin-Barkowski, W.L., Models of innate neural attractors and their applications for neural information processing, Front. Syst. Neurosci., 2016, vol. 9, no. 176, p. 13.

  96. Gilmer, J., Raffel, C., Schoenholz, S.S., Raghu, M., and Sohl-Dickstein, J., Explaining the Learning Dynamics of Direct Feedback Alignment, Workshop Track–ICLR 2017, p. 4.

  97. Bengio, Y. and Fischer, A., Early Inference in Energy-Based Models Approximates Back-Propagation. http://arxiv.org/abs/1510.02777.

  98. Baldi, P. and Sadowski, P., The Ebb and Flow of Deep Learning: a Theory of Local Learning. http://arxiv.org/abs/1506.06472.

  99. Bengio, Y., Lee, D.-H., Bornschein, J., and Lin, Z., Towards Biologically Plausible Deep Learning. http://arxiv.org/abs/1502.04156.

  100. Liao, Q., Leibo, J.Z., and Poggio, T., How Important is Weight Symmetry in Backpropagation? http://arxiv.org/abs/1510.05067.

  101. Bengio, Y., Mesnard, T., Fischer, A., Zhang, S., and Wu, Y., STDP As Presynaptic Activity Times Rate of Change of Postsynaptic Activity. http://arxiv.org/abs/1509.05936.

  102. Ollivier, Y., Tallec, C., and Charpiat, G., Training Recurrent Networks Online without Backtracking. http://arxiv.org/abs/1507.07680.

  103. Andrychowicz, M. and Kurach, K., Learning Efficient Algorithms with Hierarchical Attentive Memory. http://arxiv.org/abs/1602.03218.

  104. Zaremba, W. and Sutskever, I., Reinforcement Learning Neural Turing Machines. http://arxiv.org/abs/1505.00521.

  105. Neelakantan, A., Le, Q.V., and Sutskever, I., Neural Programmer: Inducing Latent Programs with Gradient Descent. http://arxiv.org/abs/1511.04834.

  106. Fu, J., Luo, H., Feng, J., Low, K.H., and Chua, T.-S., DrMAD: Distilling Reverse-Mode Automatic Differentiation for Optimizing Hyperparameters of Deep Neural Networks. http://arxiv.org/abs/1601.00917.

  107. Loshchilov, I. and Hutter, F., Online Batch Selection for Faster Training of Neural Networks. http://arxiv.org/abs/1511.06343.

  108. Bengio, E., Bacon, P.-L., Pineau, J., and Precup, D., Conditional Computation in Neural Networks for faster models. http://arxiv.org/abs/1511.06297.

  109. Shah, V., Asteris, M., Kyrillidis, A., and Sanghavi, S., Trading-Off Variance and Complexity in Stochastic Gradient Descent. http://arxiv.org/abs/1603.06861.

  110. Arjovsky, M., Shah, A., and Bengio, Y., Unitary Evolution Recurrent Neural Networks. http://arxiv.org/abs/1511.06464.

  111. Novikov, A., Podoprikhin, D., Osokin, A., and Vetrov, D., Tensorizing Neural Networks. http://arxiv.org/abs/1509.06569.

  112. Rhu, M., Gimelshein, N., Clemons, J., Zulfiqar, A., and Keckler, S.W., Virtualizing Deep Neural Networks for Memory-Efficient Neural Network Design. http://arxiv.org/abs/1602.08124.

  113. Chen, T., Goodfellow, I., and Shlens, J., Net2Net: Accelerating Learning via Knowledge Transfer. http://arxiv.org/abs/1511.05641.

  114. Wei, T., Wang, C., Rui, Y., and Chen, C.W., Network Morphism. http://arxiv.org/abs/1603.01670.

  115. https://github.com/BVLC/caffe/wiki/Model-Zoo.

  116. http://myungjun-youn-demo.readthedocs.org/en/latest/pretrained.html.

  117. http://www.vlfeat.org/matconvnet/pretrained/.

  118. Pascanu, R., Gulcehre, C., Cho, K., and Bengio, Y., How to Construct Deep Recurrent Neural Networks. http://arxiv.org/abs/1312.6026.

  119. Zhang, S., Wu, Y., Che, T., Lin, Z., Memisevic, R., Salakhutdinov, R., and Bengio, Y., Architectural Complexity Measures of Recurrent Neural Networks. http://arxiv.org/abs/1602.08210.

  120. Cooijmans, T., Ballas, N., Laurent, C., and Courville, A., Recurrent Batch Normalization. http://arxiv.org/abs/1603.09025.

  121. Semeniuta, S., Severyn, A., and Barth, E., Recurrent Dropout without Memory Loss. http://arxiv.org/abs/1603.05118.

  122. Serban, I.V., Sordoni, A., Bengio, Y., Courville, A., and Pineau, J., Building End-to-End Dialogue Systems Using Generative Hierarchical Neural Network Models. http://arxiv.org/abs/1507.04808.

  123. Yao, K., Zweig, G., and Peng, B., Attention with Intention for a Neural Network Conversation Model. http://arxiv.org/abs/1510.08565.

  124. Sordoni, A., Bengio, Y., Vahabi, H., Lioma, C., Simonsen, J.G., and Nie, J., A Hierarchical Recurrent Encoder-Decoder for Generative Context-Aware Query Suggestion. http://arxiv.org/abs/1507.02221.

  125. Graves, A., Adaptive Computation Time for Recurrent Neural Networks. http://arxiv.org/abs/1603.08983.

  126. Gokmen, T. and Vlasov, Yu., Acceleration of Deep Neural Network Training with Resistive Cross-Point Devices. http://arxiv.org/abs/1603.07341.

  127. Negrov, D., Karandashev, I., Shakirov, V., Matveev, Yu., Dunin-Barkowski, W., and Zenkevich, A., An approximate backpropagation learning rule for memristor-based neural networks using synaptic plasticity, Neurocomputing, 2017, vol. 237, pp. 193–199.

  128. http://www.technologyreview.com/news/544421/googles-quantum-dream-machine/.

  129. Lloyd, S., Mohseni, M., and Rebentrost, P., Quantum Algorithms for Supervised and Unsupervised Machine Learning. http://arxiv.org/abs/1307.0411.

  130. Bernien, H., Schwartz, S., Keesling, A., Levine, H., Omran, A., Pichler, H., Choi, S., Zibrov, A.S., Endres, M., Greiner, M., Vuletic, V., and Lukin, M.D., Probing Many-Body Dynamics on a 51-Atom Quantum Simulator. http://arxiv.org/abs/1707.04344.

  131. https://twitter.com/andrewyng/status/693182932530262016.

  132. Quora, Ten Things Everyone Should Know About Machine Learning, Forbes, 2017, September 6.

  133. Jastrzebski, S., Leśniak, D., and Czarnecki, W.M., Learning to SMILE(S). http://arxiv.org/abs/1602.06289.

  134. Dunin-Barkowski, W.L., Theory of cerebellum, Lectures on Neuroinformatics, Moscow: MEPHI, 2010, pp. 15–48 [in Russian].

  135. Saggar, M., Quintin, E.M., et al., Pictionary-based fMRI paradigm to study the neural correlates of spontaneous improvisation and figural creativity, Sci. Rep., 2015, vol. 5, Article no. 10894, p. 11.

  136. http://blogs.nvidia.com/blog/2015/03/19/riding-the-ai-rocket-top-artificial-intelligence-researcher-says-robots-wont-kill-us-all/.

  137. http://www.macleans.ca/society/science/the-meaning-of-alphago-the-ai-program-that-beat-a-go-champ/.

  138. http://www.vetta.org/2011/12/goodbye-2011-hello-2012/.

  139. http://slatestarcodex.com/2015/05/22/ai-researchers-on-ai-risk/.

  140. Müller, V.C. and Bostrom, N., Future Progress in Artificial Intelligence: A Survey of Expert Opinion. https://intelligence.org/files/AlgorithmicProgress.pdf.

  141. http://rebrain.2045.com.

  142. http://www.facebook.com/groups/467062423469736/permalink/573838239458820/.

  143. Loosemore, R., The Maverick Nanny with a Dopamine Drip: Debunking Fallacies in the Theory of AI Motivation. http://richardloosemore.com/docs/2014a_MaverickNanny_rpwl.pdf.

  144. Loosemore, R., Defining Benevolence in the context of Safe AI. http://ieet.org/index.php/IEET/more/loosemore20141210.

  145. http://www.facebook.com/groups/467062423469736/permalink/532404873602157/.

  146. Yudkowsky, E., Coherent Extrapolated Volition. https://intelligence.org/files/CEV.pdf.

  147. Bostrom, N., Superintelligence: Paths, Dangers, Strategies, Oxford: Oxford Univ. Press, 2014.

  148. https://intelligence.org/our-research/.

  149. http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html.

  150. http://blog.samaltman.com/machine-intelligence-part-1.

  151. August, M. and Ni, X., Using Recurrent Neural Networks to Optimize Dynamical Decoupling for Quantum Memory. http://arxiv.org/abs/1604.00279.

  152. http://www.cnet.com/news/quantum-science-is-so-weird-that-ai-is-choosing-the-experiments/.

  153. http://singularityhub.com/2015/12/20/inside-openai-will-transparency-protect-us-from-artificial-intelligence-run-amok/.

  154. http://en.wikipedia.org/wiki/Storm_botnet.

  155. http://lesswrong.com/lw/691/qa_with_shane_legg_on_risks_from_ai/.

  156. https://backchannel.com/how-elon-musk-and-y-combinator-plan-to-stop-computers-from-taking-over-17e0e27dd02a.

  157. http://www.nickbostrom.com/superintelligentwill.pdf.

  158. de Grey, A. and Rae, M., Ending Aging: The Rejuvenation Breakthroughs That Could Reverse Human Aging in Our Lifetime, New York: St. Martin’s Press, 2008.

  159. Yampolskiy, R., Taxonomy of Pathways to Dangerous AI. http://arxiv.org/abs/1511.03246.

  160. Ji, Y., Cohn, T., Kong, L., Dyer, C., and Eisenstein, J., Document Context Language Models. http://arxiv.org/abs/1511.03962.

  161. http://host.robots.ox.ac.uk:8080/leaderboard/displaylb.php?challengeid=11&compid=4.

  162. Hannun, A., Case, C., Casper, J., Catanzaro, B., Diamos, G., Elsen, E., Prenger, R., Satheesh, S., Sengupta, S., Coates, A., and Ng, A.Y., Deep Speech: Scaling up End-to-End Speech Recognition. http://arxiv.org/abs/1412.5567.

  163. Amodei, D., Anubhai, R., et al., Deep Speech 2: End-to-End Speech Recognition in English and Mandarin. http://arxiv.org/abs/1512.02595.

  164. https://github.com/hanzhanggit/StackGAN.

  165. https://habrahabr.ru/company/parallels/blog/331726/.

Author information

Correspondence to W. L. Dunin-Barkowski.

Cite this article

Shakirov, V.V., Solovyeva, K.P. & Dunin-Barkowski, W.L. Review of State-of-the-Art in Deep Learning Artificial Intelligence. Opt. Mem. Neural Networks 27, 65–80 (2018). https://doi.org/10.3103/S1060992X18020066
