Shaping Trust Through Transparent Design: Theoretical and Experimental Guidelines

  • Joseph B. Lyons
  • Garrett G. Sadler
  • Kolina Koltai
  • Henri Battiste
  • Nhut T. Ho
  • Lauren C. Hoffmann
  • David Smith
  • Walter Johnson
  • Robert Shively
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 499)

Abstract

The current research discusses transparency as a means to enable trust in automated systems. Commercial pilots (N = 13) interacted with an automated aid for emergency landings. The aid provided decision support during a complex task in which pilots were instructed to land several aircraft simultaneously. Three transparency conditions were used to examine the impact of transparency on pilots' trust in the tool: baseline (the existing tool interface), value (the tool displayed a numeric estimate of the likelihood that a recommended airport would yield a successful landing for that aircraft), and logic (the tool displayed the rationale for its recommendation). Trust was highest in the logic condition, consistent with prior studies in this area. Implications for design are discussed in terms of promoting understanding of the rationale behind automated recommendations.
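
The three transparency conditions lend themselves to a brief illustration. The following is a minimal sketch of how a recommendation display might be parameterized by condition; all names (Transparency, Recommendation, render_recommendation) and the example values are hypothetical and are not drawn from the actual Emergency Landing Planner interface described in the paper.

```python
# Hypothetical sketch of the three transparency conditions from the abstract.
from dataclasses import dataclass
from enum import Enum


class Transparency(Enum):
    BASELINE = "baseline"  # recommendation only, as in the existing tool
    VALUE = "value"        # adds a numeric likelihood of a successful landing
    LOGIC = "logic"        # adds the rationale behind the recommendation


@dataclass
class Recommendation:
    airport: str
    success_probability: float  # e.g., 0.87
    rationale: str              # e.g., "longest suitable runway in range"


def render_recommendation(rec: Recommendation, condition: Transparency) -> str:
    """Format a landing recommendation for the given transparency condition."""
    lines = [f"Divert to {rec.airport}"]
    if condition is Transparency.VALUE:
        lines.append(f"Estimated landing success: {rec.success_probability:.0%}")
    elif condition is Transparency.LOGIC:
        lines.append(f"Why: {rec.rationale}")
    return "\n".join(lines)


if __name__ == "__main__":
    rec = Recommendation(
        airport="KSFO",
        success_probability=0.87,
        rationale="longest suitable runway reachable with current fuel and weather",
    )
    for condition in Transparency:
        print(f"--- {condition.value} ---")
        print(render_recommendation(rec, condition))
```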

Keywords

Trust · Transparency · Automation

References

  1. Onnasch, L., Wickens, C.D., Li, H., Manzey, D.: Human performance consequences of stages and levels of automation: an integrated meta-analysis. Hum Factors 56, 476–488 (2014)
  2. Lee, J.D., See, K.A.: Trust in automation: designing for appropriate reliance. Hum Factors 46, 50–80 (2004)
  3. Lyons, J.B., Stokes, C.K.: Human-human reliance in the context of automation. Hum Factors 54, 111–120 (2012)
  4. Chen, J.Y.C., Barnes, M.J.: Human-agent teaming for multirobot control: a review of the human factors issues. IEEE Transactions on Human-Machine Systems, 13–29 (2014)
  5. Hoff, K.A., Bashir, M.: Trust in automation: integrating empirical evidence on factors that influence trust. Hum Factors 57, 407–434 (2015)
  6. Lyons, J.B.: Being transparent about transparency: a model for human-robot interaction. In: Sofge, D., Kruijff, G.J., Lawless, W.F. (eds.) Trust and Autonomous Systems: Papers from the AAAI Spring Symposium (Technical Report SS-13-07). AAAI Press, Menlo Park, CA (2013)
  7. Mercado, J.E., Rupp, M.A., Chen, J.Y.C., Barnes, M.J., Barber, D., Procci, K.: Intelligent agent transparency in human-agent teaming for multi-UxV management. Human Factors (in press)
  8. Wang, L., Jamieson, G.A., Hollands, J.G.: Trust and reliance on an automated combat identification system. Hum Factors 51, 281–291 (2009)
  9. Lyons, J.B., Koltai, K.S., Ho, N.T., Johnson, W.B., Smith, D.E., Shively, J.R.: Engineering trust in complex automated systems. Ergonomics in Design 24, 13–17 (2016)
  10. Meuleau, N., Plaunt, C., Smith, D., Smith, C.: Emergency landing planner for damaged aircraft. In: Proceedings of the Scheduling and Planning Applications Workshop (2008)
  11. Brandt, S.L., Lachter, J., Battiste, V., Johnson, W.: Pilot situation awareness and its implications for single pilot operations: analysis of a human-in-the-loop study. Procedia Manufacturing 3, 3017–3024 (2015)

Copyright information

© Springer International Publishing Switzerland 2017

Authors and Affiliations

  • Joseph B. Lyons (1)
  • Garrett G. Sadler (2)
  • Kolina Koltai (2)
  • Henri Battiste (2)
  • Nhut T. Ho (2)
  • Lauren C. Hoffmann (2)
  • David Smith (3)
  • Walter Johnson (3)
  • Robert Shively (3)

  1. Air Force Research Laboratory, Dayton, USA
  2. NVH Human Systems Integration, Los Angeles, USA
  3. NASA Ames Research Center, Los Angeles, USA