
How Much Do You Trust Me? Learning a Case-Based Model of Inverse Trust

  • Conference paper
Case-Based Reasoning Research and Development (ICCBR 2014)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 8765)


Abstract

Robots can be important additions to human teams if they improve team performance by providing new skills or improving existing skills. However, to get the full benefit of a robot, the team must trust it and use it appropriately. We present an agent algorithm that allows a robot to estimate its own trustworthiness and adapt its behavior in an attempt to increase trust. It uses case-based reasoning to store previous behavior adaptations and reuses this information to perform future adaptations. We compare case-based behavior adaptation to behavior adaptation that does not learn and show that it significantly reduces the number of behaviors that must be evaluated before a trustworthy behavior is found. Our evaluation is conducted in a simulated robotics environment and involves a movement scenario and a patrolling/threat-detection scenario.
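The full algorithm is given in the paper itself; the Python sketch below is only a hypothetical illustration of the retrieve/reuse/retain loop the abstract describes, in which a case pairs a task context and a behavior with the inverse-trust estimate that behavior achieved. All class names, the feature encodings, the Euclidean similarity measure, and the trust threshold are assumptions made for illustration, not the authors' implementation.

```python
from dataclasses import dataclass
from math import dist
from typing import List, Optional


@dataclass
class Case:
    """One stored adaptation: the context a behavior was tried in, the
    behavior's parameters, and the inverse-trust estimate it achieved."""
    scenario: List[float]   # numeric features describing the task context
    behavior: List[float]   # parameters of the behavior that was executed
    trust: float            # estimated operator trust after execution


class CaseBasedAdapter:
    """Toy retrieve/reuse/retain loop for trust-guided behavior adaptation."""

    def __init__(self, trust_threshold: float = 0.8) -> None:
        self.case_base: List[Case] = []
        self.trust_threshold = trust_threshold

    def retrieve(self, scenario: List[float]) -> Optional[Case]:
        """Return the stored case whose scenario is closest (Euclidean distance)."""
        if not self.case_base:
            return None
        return min(self.case_base, key=lambda c: dist(c.scenario, scenario))

    def propose(self, scenario: List[float], default: List[float]) -> List[float]:
        """Reuse: start from the nearest case's behavior if it proved trustworthy,
        otherwise fall back to a default behavior."""
        nearest = self.retrieve(scenario)
        if nearest is not None and nearest.trust >= self.trust_threshold:
            return list(nearest.behavior)
        return list(default)

    def retain(self, scenario: List[float], behavior: List[float], trust: float) -> None:
        """Retain the executed behavior and its estimated trust as a new case."""
        self.case_base.append(Case(list(scenario), list(behavior), trust))


# Hypothetical usage: a patrol scenario described by two features, a behavior
# described by (speed, sensor sensitivity), and a trust estimate in [0, 1].
adapter = CaseBasedAdapter()
adapter.retain(scenario=[0.2, 0.9], behavior=[0.5, 0.8], trust=0.85)
print(adapter.propose(scenario=[0.25, 0.85], default=[1.0, 0.5]))  # reuses the stored behavior
```

The point of the sketch is only to show where a case base slots into the adaptation cycle; the paper's retrieval, trust estimation, and adaptation steps are considerably richer than this.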







Copyright information

© 2014 Springer International Publishing Switzerland

About this paper

Cite this paper

Floyd, M.W., Drinkwater, M., Aha, D.W. (2014). How Much Do You Trust Me? Learning a Case-Based Model of Inverse Trust. In: Lamontagne, L., Plaza, E. (eds) Case-Based Reasoning Research and Development. ICCBR 2014. Lecture Notes in Computer Science (LNAI), vol. 8765. Springer, Cham. https://doi.org/10.1007/978-3-319-11209-1_10


  • DOI: https://doi.org/10.1007/978-3-319-11209-1_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-11208-4

  • Online ISBN: 978-3-319-11209-1

