Learning Trustworthy Behaviors Using an Inverse Trust Metric

Chapter in: Robust Intelligence and Trust in Autonomous Systems

Abstract

The addition of a robot to a human team can be beneficial if the robot can perform important tasks, provide additional skills, or otherwise help the team achieve its goals. However, if the human team members do not trust the robot, they may underutilize it or excessively monitor its behavior. We present an algorithm that allows a robot to estimate its own trustworthiness based on interactions with a team member and to adapt its behavior in an attempt to increase that trustworthiness. The robot learns as it performs behavior adaptation, increasing the efficiency of future adaptation. We compare our approach for inverse trust estimation and behavior adaptation to a variant that does not learn. Our results, in a simulated robotics environment, show that both approaches can identify trustworthy behaviors, but the learning approach does so significantly faster.
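
For context, the sketch below illustrates one way an inverse trust estimate of this kind could be maintained: successful task completions raise the robot's estimate of its own trustworthiness, failures and operator interruptions lower it, and a sustained drop triggers behavior adaptation (see also Note 1). The class, event labels, and threshold are illustrative assumptions rather than the authors' formulation.

```python
# Minimal sketch of an inverse trust estimate; an illustrative assumption,
# not the chapter's exact metric or implementation.
class InverseTrustEstimator:
    """Tracks how trustworthy the robot believes its current behavior appears."""

    def __init__(self, drop_threshold=-3.0):
        self.score = 0.0                  # running estimate on a hypothetical scale
        self.drop_threshold = drop_threshold

    def observe(self, event):
        """Update the estimate from one interaction with the operator."""
        if event == "task_completed":
            self.score += 1.0             # completions are positive evidence
        elif event in ("task_failed", "operator_interrupt"):
            self.score -= 1.0             # failures/interruptions are negative evidence
        return self.score

    def should_adapt(self):
        """True when the current behavior looks untrustworthy enough to replace."""
        return self.score <= self.drop_threshold


# Hypothetical usage: evaluate the current behavior until adaptation is triggered.
estimator = InverseTrustEstimator()
for event in ["task_completed", "operator_interrupt",
              "task_failed", "task_failed", "operator_interrupt"]:
    estimator.observe(event)
    if estimator.should_adapt():
        print("Trust estimate low; switching to a new candidate behavior.")
        break
```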

Notes

  1. An interruption could also result from the operator identifying a more important task for the robot to perform, and a failure could result from an unachievable task. The robot works under the assumption that such situations are rare and that most failures and interruptions stem from poor performance.

  2. We plan to validate these findings in a series of user studies.

Acknowledgment

Thanks to the United States Naval Research Laboratory and the Office of Naval Research for supporting this research.

Author information

Corresponding author

Correspondence to Michael W. Floyd.

Copyright information

© 2016 Springer Science+Business Media (outside the USA)

About this chapter

Cite this chapter

Floyd, M.W., Drinkwater, M., Aha, D.W. (2016). Learning Trustworthy Behaviors Using an Inverse Trust Metric. In: Mittu, R., Sofge, D., Wagner, A., Lawless, W. (eds) Robust Intelligence and Trust in Autonomous Systems. Springer, Boston, MA. https://doi.org/10.1007/978-1-4899-7668-0_3

Download citation

  • DOI: https://doi.org/10.1007/978-1-4899-7668-0_3

  • Publisher Name: Springer, Boston, MA

  • Print ISBN: 978-1-4899-7666-6

  • Online ISBN: 978-1-4899-7668-0

  • eBook Packages: Computer Science, Computer Science (R0)
