AI-Completeness: Using Deep Learning to Eliminate the Human Factor



Computational complexity is a discipline of computer science and mathematics that classifies computational problems by their inherent difficulty and relates the resulting classes to one another. P is the class of problems solvable in polynomial time on a deterministic Turing machine, while solutions to NP problems can be verified in polynomial time, though it remains unknown whether they can also be solved in polynomial time. A polynomial-time algorithm for any NP-complete problem would yield one for every problem in NP. The artificial-intelligence analogue is the class of AI-complete problems, for which no complete mathematical formalization yet exists. In this chapter we analyse computational classes to better understand possible formalizations of AI-complete problems and to ask whether a universal procedure, such as a Turing test, could exist for all AI-complete problems. To illustrate how modern computer science deals with computational complexity barriers in practice, we present several deep-learning strategies built on optimization methods, showing that the inability to solve a problem from a higher computational class exactly does not preclude a satisfactory solution using state-of-the-art machine-learning techniques. We compare these methods with philosophical arguments and psychological research on human performance on analogous NP-complete problems, to support the claim that an exact and provably correct way of solving AI-complete problems is not required in order to possibly achieve strong AI.
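The abstract's central trade-off, that a problem intractable to solve exactly can still admit a satisfactory polynomial-time answer, can be illustrated with a minimal sketch (an illustrative example, not taken from the chapter): a greedy nearest-neighbour heuristic for the travelling salesman problem, the NP-hard problem against which the chapter compares human performance.

```python
import math

def tour_length(points, tour):
    """Total length of the closed tour visiting points in the given order."""
    return sum(
        math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
        for i in range(len(tour))
    )

def nearest_neighbour_tour(points, start=0):
    """Greedy O(n^2) heuristic: always move to the closest unvisited city.

    Finding the optimal tour is NP-hard, but this polynomial-time
    approximation is often good enough in practice -- the same kind of
    trade-off the chapter argues applies to AI-complete problems.
    """
    unvisited = set(range(len(points)))
    unvisited.remove(start)
    tour = [start]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour

cities = [(0, 0), (0, 1), (1, 1), (1, 0), (2, 0)]
tour = nearest_neighbour_tour(cities)
print(tour, tour_length(cities, tour))  # e.g. [0, 1, 2, 3, 4] 6.0
```

The heuristic gives no optimality guarantee, only an empirically acceptable tour in quadratic time, which mirrors how deep-learning methods trade exactness for tractability on hard problem classes.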



© Springer Nature Switzerland AG 2020

Authors and Affiliations

University of Zagreb, Zagreb, Croatia
