
Integrated Commonsense Reasoning and Deep Learning for Transparent Decision Making in Robotics

  • Conference paper
  • In: Multi-Agent Systems and Agreement Technologies (EUMAS 2020, AT 2020)

Abstract

A robot’s ability to provide explanatory descriptions of its decisions and beliefs promotes effective collaboration with humans. Providing such transparency in decision making is particularly challenging in integrated robot systems that combine knowledge-based reasoning methods with data-driven learning algorithms. To address this challenge, our architecture couples the complementary strengths of non-monotonic logical reasoning with incomplete commonsense domain knowledge, deep learning, and inductive learning. During reasoning and learning, the architecture enables a robot to provide on-demand explanations of its decisions, its beliefs, and the outcomes of hypothetical actions, in the form of relational descriptions of relevant domain objects, attributes, and actions. The architecture’s capabilities are illustrated and evaluated on scene understanding and planning tasks performed using simulated images and images from a physical robot manipulating tabletop objects. Experimental results indicate the ability to reliably acquire and merge new domain information in the form of constraints, and to provide accurate explanations in the presence of noisy sensing and actuation.
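To make the non-monotonic reasoning component concrete, the minimal sketch below (not the authors’ actual encoding) expresses a tabletop default, “an object is believed to be on the table unless it is known to be held”, in Answer Set Programming and computes the resulting beliefs through the clingo Python bindings. The object names and literals are illustrative assumptions.

    # Minimal sketch (illustrative, not the paper's encoding): a
    # non-monotonic tabletop default solved with the clingo bindings.
    import clingo

    PROGRAM = """
    object(cup1). object(cup2).
    holds(in_hand(cup2)).   % observed exception to the default
    % Default: an object is on the table unless it is known to be in hand.
    holds(on_table(X)) :- object(X), not holds(in_hand(X)).
    """

    def main():
        ctl = clingo.Control()
        ctl.add("base", [], PROGRAM)
        ctl.ground([("base", [])])

        def on_model(model):
            # The answer set is the robot's current set of beliefs;
            # printing it is a crude stand-in for a relational explanation.
            print("Beliefs:", [str(atom) for atom in model.symbols(shown=True)])

        ctl.solve(on_model=on_model)

    if __name__ == "__main__":
        main()

Running the sketch yields one answer set in which holds(on_table(cup1)) is true but holds(on_table(cup2)) is not; retracting the in_hand observation non-monotonically restores the default conclusion for cup2, which is the kind of belief revision the architecture must explain.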


Notes

  1. For a recent debate on whether interpretability is needed in machine learning, see: https://www.youtube.com/watch?v=93Xv8vJ2acI.


Acknowledgments

This work was supported in part by the Asian Office of Aerospace Research and Development award FA2386-16-1-4071 and the U.S. Office of Naval Research Science of Autonomy Award N00014-17-1-2434. Reported opinions are those of the authors.

Author information

Corresponding author

Correspondence to Mohan Sridharan.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Mota, T., Sridharan, M., Leonardis, A. (2020). Integrated Commonsense Reasoning and Deep Learning for Transparent Decision Making in Robotics. In: Bassiliades, N., Chalkiadakis, G., de Jonge, D. (eds.) Multi-Agent Systems and Agreement Technologies (EUMAS/AT 2020). Lecture Notes in Computer Science, vol. 12520. Springer, Cham. https://doi.org/10.1007/978-3-030-66412-1_14

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-66412-1_14

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-66411-4

  • Online ISBN: 978-3-030-66412-1

  • eBook Packages: Computer Science (R0)
