Abstract
A robot’s ability to provide explanatory descriptions of its decisions and beliefs promotes effective collaboration with humans. Providing such transparency in decision making is particularly challenging in integrated robot systems that include knowledge-based reasoning methods and data-driven learning algorithms. Towards addressing this challenge, our architecture couples the complementary strengths of non-monotonic logical reasoning with incomplete commonsense domain knowledge, deep learning, and inductive learning. During reasoning and learning, the architecture enables a robot to provide on-demand explanations of its decisions, beliefs, and the outcomes of hypothetical actions, in the form of relational descriptions of relevant domain objects, attributes, and actions. The architecture’s capabilities are illustrated and evaluated in the context of scene understanding tasks and planning tasks performed using simulated images and images from a physical robot manipulating tabletop objects. Experimental results indicate the ability to reliably acquire and merge new information about the domain in the form of constraints, and to provide accurate explanations in the presence of noisy sensing and actuation.
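The architecture itself is detailed in the body of the paper; as a rough illustration of the reasoning style the abstract refers to, the Python sketch below shows a single default rule with an exception (the non-monotonic part) and an explanation assembled as a relational description of the supporting facts. This is a minimal sketch, not the paper's implementation (which couples Answer Set Prolog-style reasoning with deep and inductive learning); all relation and object names (tall, wide_base, unstable, tower1) are hypothetical.

```python
# Minimal, self-contained sketch of default (non-monotonic) reasoning
# with a relational explanation trace. Hypothetical names throughout;
# this is NOT the paper's implementation.
from dataclasses import dataclass


@dataclass(frozen=True)
class Fact:
    relation: str        # e.g., "tall", "unstable"
    args: tuple          # domain objects, e.g., ("tower1",)


def describe(item) -> str:
    """Render a fact (or a rule label) as a relational description."""
    if isinstance(item, str):
        return item
    return f"{item.relation}({', '.join(item.args)})"


def infer(observations: set) -> tuple[set, list]:
    """Apply one illustrative default: tall structures are unstable
    unless their base is known to be wide. Because the conclusion is
    withdrawn when wide_base(X) is later observed, the inference is
    non-monotonic. Each conclusion is stored with its support so it
    can be explained on demand."""
    beliefs, support = set(observations), []
    for fact in observations:
        if fact.relation == "tall":
            (obj,) = fact.args
            if Fact("wide_base", (obj,)) not in observations:  # exception
                conclusion = Fact("unstable", (obj,))
                beliefs.add(conclusion)
                support.append((conclusion,
                                [fact, "default: tall(X) => unstable(X)"]))
    return beliefs, support


def explain(query: Fact, support: list) -> str:
    """Answer 'why do you believe query?' with the recorded premises."""
    for conclusion, premises in support:
        if conclusion == query:
            return (f"{describe(query)} because "
                    + ", ".join(describe(p) for p in premises))
    return f"no recorded support for {describe(query)}"


if __name__ == "__main__":
    obs = {Fact("tall", ("tower1",))}
    _, support = infer(obs)
    print(explain(Fact("unstable", ("tower1",)), support))
    # New knowledge defeats the default: the belief is retracted.
    beliefs, _ = infer(obs | {Fact("wide_base", ("tower1",))})
    assert Fact("unstable", ("tower1",)) not in beliefs
```

Running the script prints a relational justification of the form "unstable(tower1) because tall(tower1), default: tall(X) => unstable(X)", and the final assertion shows the conclusion being retracted once the exception is observed, the same belief-revision behavior the abstract attributes to the architecture.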
Notes
1. For a recent debate on whether interpretability is needed in machine learning, see: https://www.youtube.com/watch?v=93Xv8vJ2acI.
Acknowledgments
This work was supported in part by the Asian Office of Aerospace Research and Development award FA2386-16-1-4071 and the U.S. Office of Naval Research Science of Autonomy Award N00014-17-1-2434. The opinions expressed are those of the authors.
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this paper
Mota, T., Sridharan, M., Leonardis, A. (2020). Integrated Commonsense Reasoning and Deep Learning for Transparent Decision Making in Robotics. In: Bassiliades, N., Chalkiadakis, G., de Jonge, D. (eds) Multi-Agent Systems and Agreement Technologies. EUMAS 2020, AT 2020. Lecture Notes in Computer Science, vol 12520. Springer, Cham. https://doi.org/10.1007/978-3-030-66412-1_14
Print ISBN: 978-3-030-66411-4
Online ISBN: 978-3-030-66412-1