
Autonomous Agents and Multi-Agent Systems, Volume 30, Issue 1, pp. 136–173

Evaluation of a trust-modulated argumentation-based interactive decision-making tool

  • Elizabeth I. Sklar
  • Simon Parsons
  • Zimi Li
  • Jordan Salvit
  • Senni Perumal
  • Holly Wall
  • Jennifer Mangels

Abstract

The interactive ArgTrust application is a decision-making tool built on an underlying formal system of argumentation in which the evidence that influences a recommendation, or conclusion, is modulated according to the trust that the user places in that evidence. This paper presents the design and analysis of a user study intended to evaluate the effectiveness of ArgTrust in a collaborative human–agent decision-making task. The results show that interacting with ArgTrust helped users consider their decisions more carefully than they did without the tool.
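The formal machinery behind ArgTrust is not reproduced on this page. Purely as illustration, the sketch below shows one simple way trust values can modulate argumentation: an attack only succeeds ("defeats") when the attacker is at least as trusted as its target, and acceptability is then computed as the grounded extension of the resulting defeat graph. This is a minimal, hypothetical sketch, not the authors' system; all names and numbers are invented for the example.

```python
# Minimal sketch (not ArgTrust itself): trust values decide which attacks
# succeed, then a grounded-style labelling decides which arguments survive.

def defeats(attacks, trust):
    """Keep only attacks whose source is at least as trusted as its target."""
    return {(a, b) for (a, b) in attacks if trust[a] >= trust[b]}

def grounded_extension(arguments, defeat):
    """Accept arguments whose every defeater is rejected; reject arguments
    defeated by an accepted argument; repeat until nothing changes."""
    accepted, rejected = set(), set()
    changed = True
    while changed:
        changed = False
        for arg in arguments:
            if arg in accepted or arg in rejected:
                continue
            attackers = {a for (a, b) in defeat if b == arg}
            if attackers <= rejected:       # all defeaters ruled out
                accepted.add(arg)
                changed = True
            elif attackers & accepted:      # defeated by an accepted argument
                rejected.add(arg)
                changed = True
    return accepted

if __name__ == "__main__":
    # Hypothetical scenario: trust in each argument's source, on a [0, 1] scale.
    trust = {"scout_report": 0.9, "rumour": 0.3, "sensor_log": 0.7}
    attacks = {("rumour", "scout_report"), ("sensor_log", "rumour")}
    print(grounded_extension(trust.keys(), defeats(attacks, trust)))
    # {'scout_report', 'sensor_log'}: the low-trust rumour cannot defeat the
    # scout report, and is itself defeated by the more trusted sensor log.
```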

Keywords

Argumentation · Trust · Human–agent interaction


Acknowledgments

This research was funded under Army Research Laboratory Cooperative Agreement Number W911NF-09-2-0053, by the National Science Foundation under grant #1117761, and by the National Security Agency under the Science of Security Lablet grant (SoSL). Additional funding was provided by a University of Liverpool Research Fellowship and by a Fulbright-King's College London Scholar Award. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the funders. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon.


Copyright information

© The Author(s) 2015

Authors and Affiliations

  • Elizabeth I. Sklar (1)
  • Simon Parsons (1)
  • Zimi Li (2)
  • Jordan Salvit (2)
  • Senni Perumal (3)
  • Holly Wall (2)
  • Jennifer Mangels (4)

  1. Department of Computer Science, University of Liverpool, Liverpool, UK
  2. Department of Computer Science, Graduate Center, City University of New York, New York, USA
  3. Raytheon BBN Technologies, Cambridge, USA
  4. Department of Psychology, Baruch College, City University of New York, New York, USA
