Shaping Trust Through Transparent Design: Theoretical and Experimental Guidelines
The current research examines transparency as a means to enable trust in automated systems. Commercial pilots (N = 13) interacted with an automated aid for emergency landings. The aid provided decision support during a complex task in which pilots were instructed to land several aircraft simultaneously. Three transparency conditions were used to examine the impact of transparency on pilots' trust in the tool: baseline (the existing tool interface), value (the tool provided a numeric value for the likely success of a particular airport for that aircraft), and logic (the tool provided the rationale for its recommendation). Trust was highest in the logic condition, consistent with prior studies in this area. Implications for design are discussed in terms of promoting understanding of the rationale behind automated recommendations.
Keywords: Trust · Transparency · Automation