The Dynamics of Human-Agent Trust with POMDP-Generated Explanations

  • Ning Wang
  • David V. Pynadath
  • Susan G. Hill
  • Chirag Merchant
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10498)

Abstract

Partially Observable Markov Decision Processes (POMDPs) enable optimized decision making by robots, agents, and other autonomous systems. This quantitative optimization can also be a limitation in human-agent interaction, as the resulting autonomous behavior, while possibly optimal, is often impenetrable to human teammates, leading to improper trust and, subsequently, disuse or misuse of such systems [1].
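To make the abstract's POMDP framing concrete, the following is a minimal, hypothetical sketch of the two operations at the core of POMDP decision making: Bayesian belief updating over hidden state, and action selection by expected reward under that belief. The toy domain, state names, and all model parameters below are illustrative assumptions and are not the models used in the paper.

```python
# Illustrative sketch only: a discrete POMDP belief update and a greedy
# one-step expected-reward action choice. All names and numbers here
# (states, T, O, R) are hypothetical, not taken from the paper.
import numpy as np

# Hypothetical two-state world: the robot's sensor is "reliable" or "faulty".
states = ["reliable", "faulty"]
actions = ["use_sensor", "ask_human"]
observations = ["reading_ok", "reading_noisy"]

# T[a][s, s']: transition probabilities (a static world is assumed here).
T = {a: np.eye(2) for a in actions}

# O[a][s', o]: probability of observing o after action a lands in state s'.
O = {
    "use_sensor": np.array([[0.9, 0.1],    # reliable -> mostly clean readings
                            [0.3, 0.7]]),  # faulty   -> mostly noisy readings
    "ask_human":  np.array([[0.5, 0.5],    # asking the human reveals nothing
                            [0.5, 0.5]]),  # about the sensor's state
}

# R[a][s]: immediate reward for taking action a in hidden state s.
R = {
    "use_sensor": np.array([1.0, -2.0]),   # costly if the sensor is faulty
    "ask_human":  np.array([0.2, 0.2]),    # safe but slow
}

def belief_update(belief, action, obs_idx):
    """Bayesian update: b'(s') proportional to O(o|s',a) * sum_s T(s'|s,a) b(s)."""
    predicted = T[action].T @ belief              # predict next-state distribution
    unnormalized = O[action][:, obs_idx] * predicted
    return unnormalized / unnormalized.sum()

def best_action(belief):
    """Greedy one-step choice: maximize expected immediate reward under b."""
    return max(actions, key=lambda a: float(R[a] @ belief))

belief = np.array([0.5, 0.5])                     # uniform prior over states
belief = belief_update(belief, "use_sensor",
                       observations.index("reading_noisy"))
print(belief, best_action(belief))                # belief shifts toward "faulty"
```

In an explanation-generation setting like the one the paper studies, the agent could surface quantities from this loop, such as its posterior belief or the expected reward of each candidate action, as the content of its explanations to human teammates.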


References

  1. Parasuraman, R., Riley, V.: Humans and automation: Use, misuse, disuse, abuse. Human Factors 39(2), 230–253 (1997)
  2. Wang, N., Pynadath, D.V.: Building trust in a human-robot team. In: Proceedings of the Interservice/Industry Training, Simulation and Education Conference (2015)
  3. Wang, N., Pynadath, D.V., Hill, S.G.: The impact of POMDP-generated explanations on trust and performance in human-robot teams. In: Proceedings of the International Joint Conference on Autonomous Agents and MultiAgent Systems (2016)

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Ning Wang¹
  • David V. Pynadath¹
  • Susan G. Hill²
  • Chirag Merchant¹
  1. University of Southern California Institute for Creative Technologies, Los Angeles, USA
  2. U.S. Army Research Laboratory, Adelphi, USA