Commonsense Reasoning Using Theorem Proving and Machine Learning

  • Sophie Siebert
  • Claudia Schon
  • Frieder Stolzenburg
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11713)

Abstract

Commonsense reasoning is a difficult task for a computer to handle. Current algorithms score around 80% on benchmarks, but they usually rely on machine learning, which lacks explainability. Therefore, we propose to combine machine learning with automated theorem proving. Automated theorem proving allows us to derive new knowledge in an explainable way, but it suffers from the inevitable incompleteness of existing background knowledge; we alleviate this problem by using machine learning. In this paper, we present our approach, which combines an automated theorem prover, large existing ontologies of background knowledge, and machine learning. We report first experimental results and identify an insufficient amount of training data and a lack of background knowledge as the reasons why our system does not stand out much from the baseline.
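The pipeline described in the abstract can be illustrated with a minimal, purely hypothetical sketch: for a COPA-style problem (a premise and two candidate alternatives), a stub "inference" step expands the premise with background knowledge, and a simple lexical-overlap score stands in for the learned ranking model. The knowledge base, function names, and scoring here are illustrative assumptions, not the paper's actual system (which uses a theorem prover over large ontologies and a trained neural model).

```python
# Hypothetical sketch of the combined reasoning pipeline.
# A real system would derive consequences with a theorem prover over
# ontologies such as Adimen-SUMO or ConceptNet, and rank alternatives
# with a learned model; here both steps are replaced by toy stand-ins.

# Toy background knowledge: word -> related concepts (illustrative only).
BACKGROUND = {
    "rain": {"wet", "water", "umbrella", "cloud"},
    "sun": {"dry", "warm", "light"},
}

def expand(premise_words):
    """Stand-in for theorem-prover inference: add related concepts."""
    expanded = set(premise_words)
    for word in premise_words:
        expanded |= BACKGROUND.get(word, set())
    return expanded

def score(premise, alternative):
    """Stand-in for the learned model: overlap with expanded knowledge."""
    knowledge = expand(premise.lower().split())
    return len(knowledge & set(alternative.lower().split()))

def choose(premise, alternatives):
    """Pick the alternative best supported by the expanded premise."""
    return max(alternatives, key=lambda alt: score(premise, alt))

best = choose("the rain started",
              ["the street got wet", "the street got dry"])
print(best)  # -> the street got wet
```

The design point this toy mirrors is the division of labor: symbolic expansion makes the supporting knowledge explicit (and hence explainable), while the scoring step, learned in the real system, tolerates gaps in that knowledge.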

Keywords

Commonsense reasoning · Causal reasoning · Machine learning · Theorem proving · Large background knowledge

References

  1. Álvez, J., Lucio, P., Rigau, G.: Adimen-SUMO: reengineering an ontology for first-order reasoning. Int. J. Semant. Web Inform. Syst. (IJSWIS) 8(4), 80–116 (2012)
  2. Basile, V., Cabrio, E., Schon, C.: KNEWS: using logical and lexical semantics to extract knowledge from natural language. In: Proceedings of the European Conference on Artificial Intelligence (ECAI) (2016)
  3. Bender, M., Pelzer, B., Schon, C.: System description: E-KRHyper 1.4. In: Bonacina, M.P. (ed.) CADE 2013. LNCS (LNAI), vol. 7898, pp. 126–134. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-38574-2_8
  4. Bengio, Y.: Practical recommendations for gradient-based training of deep architectures. In: Montavon, G., Orr, G.B., Müller, K.-R. (eds.) Neural Networks: Tricks of the Trade. LNCS, vol. 7700, pp. 437–478. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-35289-8_26
  5. Bos, J.: Is there a place for logic in recognizing textual entailment? Perspect. Semant. Represent. Text. Inference 9, 27–44 (2013)
  6. Church, K.W., Hanks, P.: Word association norms, mutual information, and lexicography. Comput. Linguist. 16(1), 22–29 (1989)
  7. Devlin, J., Chang, M., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding. CoRR abs/1810.04805, Cornell University Library (2018). http://arxiv.org/abs/1810.04805
  8. Diederich, J., Tickle, A.B., Geva, S.: Quo vadis? Reliable and practical rule extraction from neural networks. In: Koronacki, J., Ras, Z.W., Wierzchon, S.T., Kacprzyk, J. (eds.) Advances in Machine Learning I. Studies in Computational Intelligence, vol. 262, pp. 479–490. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-05177-7_24. Dedicated to the Memory of Professor Ryszard S. Michalski
  9. Doran, D., Schulz, S., Besold, T.R.: What does explainable AI really mean? A new conceptualization of perspectives. In: Besold, T.R., Kutz, O. (eds.) Proceedings of the First International Workshop on Comprehensibility and Explanation in AI and ML 2017, Co-located with the 16th International Conference of the Italian Association for Artificial Intelligence (AI*IA 2017). CEUR Workshop Proceedings, vol. 2071. CEUR-WS.org, Bari (2018). http://ceur-ws.org/Vol-2071/CExAIIA_2017_paper_2.pdf
  10. Furbach, U., Schon, C., Stolzenburg, F., Weis, K.H., Wirth, C.P.: The RatioLog project: rational extensions of logical reasoning. KI 29(3), 271–277 (2015). https://doi.org/10.1007/s13218-015-0377-9
  11. d’Avila Garcez, A.S., Broda, K., Gabbay, D.M.: Symbolic knowledge extraction from trained neural networks: a sound approach. Artif. Intell. 125(1–2), 155–207 (2001). https://doi.org/10.1016/S0004-3702(00)00077-1
  12. d’Avila Garcez, A.S., Zaverucha, G.: The connectionist inductive learning and logic programming system. Appl. Intell. 11(1), 59–77 (1999). https://doi.org/10.1023/A:1008328630915
  13. Goodfellow, I., Bengio, Y., Courville, A.: Deep Learning. Adaptive Computation and Machine Learning. MIT Press, Cambridge (2016). http://www.deeplearningbook.org
  14. Gordon, A.S., Bejan, C.A., Sagae, K.: Commonsense causal reasoning using millions of personal stories. In: Twenty-Fifth AAAI Conference on Artificial Intelligence (2011)
  15. Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Comput. 9(8), 1735–1780 (1997). https://doi.org/10.1162/neco.1997.9.8.1735
  16. Hoder, K., Voronkov, A.: Sine qua non for large theory reasoning. In: Bjørner, N., Sofronie-Stokkermans, V. (eds.) CADE 2011. LNCS (LNAI), vol. 6803, pp. 299–314. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-22438-6_23
  17. Holzinger, A.: Explainable AI (ex-AI). Inform. Spekt. 41(2), 138–143 (2018). https://doi.org/10.1007/s00287-018-1102-5. Aktuelles Schlagwort, in German
  18. Lenat, D.B.: CYC: a large-scale investment in knowledge infrastructure. Commun. ACM 38(11), 33–38 (1995)
  19. Liu, H., Singh, P.: ConceptNet - a practical commonsense reasoning tool-kit. BT Technol. J. 22(4), 211–226 (2004)
  20. Miller, G.A.: WordNet: a lexical database for English. Commun. ACM 38(11), 39–41 (1995)
  21. Mostafazadeh, N., Roth, M., Louis, A., Chambers, N., Allen, J.: LSDSem 2017 shared task: the story cloze test. In: Proceedings of the 2nd Workshop on Linking Models of Lexical, Sentential and Discourse-level Semantics, pp. 46–51 (2017)
  22. Mueller, E.T.: Commonsense Reasoning, 2nd edn. Morgan Kaufmann, San Francisco (2014)
  23. Niles, I., Pease, A.: Towards a standard upper ontology. In: Proceedings of the International Conference on Formal Ontology in Information Systems, pp. 2–9. ACM (2001)
  24. Ostermann, S., Roth, M., Modi, A., Thater, S., Pinkal, M.: SemEval-2018 task 11: machine comprehension using commonsense knowledge. In: Proceedings of the 12th International Workshop on Semantic Evaluation, pp. 747–757 (2018)
  25. Radford, A., Narasimhan, K., Salimans, T., Sutskever, I.: Improving language understanding by generative pre-training. Technical report, OpenAI (2018). http://openai.com/blog/language-unsupervised/
  26. Ramos, J., et al.: Using TF-IDF to determine word relevance in document queries. In: Proceedings of the First Instructional Conference on Machine Learning, Piscataway, NJ, USA, vol. 242, pp. 133–142 (2003)
  27. Roemmele, M., Bejan, C.A., Gordon, A.S.: Choice of plausible alternatives: an evaluation of commonsense causal reasoning. In: AAAI Spring Symposium: Logical Formalizations of Commonsense Reasoning, pp. 90–95 (2011)
  28. Siebert, S., Stolzenburg, F.: CoRg: commonsense reasoning using a theorem prover and machine learning. In: Benzmüller, C., Parent, X., Steen, A. (eds.) Selected Student Contributions and Workshop Papers of LuxLogAI 2018. Kalpa Publications in Computing, vol. 10, pp. 20–26. EasyChair (2019). Deduktionstreffen 2018, Luxembourg. https://doi.org/10.29007/lt5p
  29. Speer, R., Chin, J., Havasi, C.: ConceptNet 5.5: an open multilingual graph of general knowledge. In: AAAI Conference on Artificial Intelligence, pp. 4444–4451 (2017). http://aaai.org/ocs/index.php/AAAI/AAAI17/paper/view/14972
  30. Suchanek, F.M., Kasneci, G., Weikum, G.: YAGO: a large ontology from Wikipedia and WordNet. Web Semant. 6(3), 203–217 (2008). https://doi.org/10.1016/j.websem.2008.06.001
  31. Tan, M., Santos, C.D., Xiang, B., Zhou, B.: LSTM-based deep learning models for non-factoid answer selection. CoRR abs/1511.04108, Cornell University Library (2015). http://arxiv.org/abs/1511.04108

Copyright information

© IFIP International Federation for Information Processing 2019

Authors and Affiliations

  1. Automation and Computer Sciences Department, Harz University of Applied Sciences, Wernigerode, Germany
  2. Institute for Web Science and Technologies, Universität Koblenz-Landau, Koblenz, Germany