The Criteria, Challenges, and the Back-Propagation Method

  • Tiansi Dong
Chapter
Part of the Studies in Computational Intelligence book series (SCI, volume 910)

Abstract

In this chapter, we describe our task of symbol spatialization and list its criteria and challenges. We show that, despite its seemingly magic power, the back-propagation method is not the right tool to fulfill these criteria.
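For readers unfamiliar with the method discussed here, the following is a minimal, self-contained sketch of back-propagation: a one-hidden-layer network fit to XOR by gradient descent. It is purely illustrative; the network size, learning rate, and loss are arbitrary choices and do not reflect the chapter's setting.

```python
import numpy as np

# Toy data: the XOR function, a classic non-linearly-separable task.
rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

# One hidden layer of 8 sigmoid units, one sigmoid output unit.
W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward():
    h = sigmoid(X @ W1 + b1)    # hidden activations
    out = sigmoid(h @ W2 + b2)  # network output
    return h, out

_, out0 = forward()
initial_loss = float(np.mean((out0 - y) ** 2))

lr = 0.5
for _ in range(5000):
    h, out = forward()
    # Backward pass: chain rule through the squared-error loss.
    # d_out and d_h are the error signals at each layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent parameter updates.
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

_, out_final = forward()
final_loss = float(np.mean((out_final - y) ** 2))
print(initial_loss, final_loss)
```

The backward pass propagates the output error through the same weights used in the forward pass, which is the core of the method; the training loss decreases from its initial value as the updates accumulate.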


Copyright information

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021

Authors and Affiliations

  1. ML2R Competence Center for Machine Learning Rhine-Ruhr, MLAI Lab, AI Foundations Group, Bonn-Aachen International Center for Information Technology (b-it), University of Bonn, Bonn, Germany