• Tiansi Dong
Part of the Studies in Computational Intelligence book series (SCI, volume 910)


Research methodology in Artificial Intelligence (AI) comprises two competing paradigms: the symbolic approach and the connectionist approach. The symbolic approach is based on symbolic structures and rules, in which thinking is viewed as symbol manipulation. Associated with this paradigm are features such as logical reasoning, serial processing, discreteness, localized representation, and left-brain function. The connectionist approach is inspired by the physiology of the brain, in which thinking is viewed as the fusion and transfer of information across a large network of neurons.
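The contrast between the two paradigms can be made concrete with a minimal sketch (illustrative only; the names and rules below are not from the chapter): a symbolic system answers by looking up explicit rules over discrete symbols, while a connectionist unit, in the spirit of a McCulloch-Pitts threshold neuron, answers by thresholding a weighted sum of numeric inputs.

```python
# Symbolic paradigm: explicit rules over discrete symbols.
# Thinking as symbol manipulation: the answer is read off a rule base.
rules = {("bird", "can_fly"): True, ("penguin", "can_fly"): False}

def symbolic_infer(entity, predicate):
    """Return the rule's verdict, or None if no rule applies."""
    return rules.get((entity, predicate))

# Connectionist paradigm: a McCulloch-Pitts style threshold unit.
# Thinking as information fusion: inputs are weighted, summed, and thresholded.
def neuron(inputs, weights, threshold=0.5):
    """Fire (return 1) iff the weighted sum of inputs exceeds the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > threshold else 0

print(symbolic_infer("penguin", "can_fly"))  # -> False (an explicit exception rule)
print(neuron([1, 0, 1], [0.4, 0.9, 0.3]))    # weighted sum 0.7 > 0.5, so -> 1
```

The symbolic lookup is discrete and localized (one rule, one answer), whereas the neuron's output is a global function of all inputs and degrades gracefully as weights change, which is precisely the dichotomy the paragraph above describes.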



Copyright information

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021

Authors and Affiliations

  1. ML2R Competence Center for Machine Learning Rhine-Ruhr, MLAI Lab, AI Foundations Group, Bonn-Aachen International Center for Information Technology (b-it), University of Bonn, Bonn, Germany
