Abstract
Threats are used in persuasive negotiation dialogues when a proponent agent tries to persuade his opponent to accept a proposal. Depending on the information the proponent has modeled about his opponent(s), he may generate more than one threat, in which case he has to evaluate them in order to select the most adequate one to send. One way to evaluate the generated threats is to calculate their strength, i.e., the persuasive force of each threat. Related work considers mainly two criteria for this evaluation: the certainty level of the beliefs that compose the threat and the importance of the opponent's goal. This article studies the components of threats and proposes further criteria that improve their evaluation and lead to the selection of more effective threats during the dialogue. Thus, the contribution of this paper is a model for calculating the strength of threats that is based mainly on the status of the opponent's goal and the credibility of the proponent. The model is empirically evaluated, and the results show that it outperforms previous work in terms of both the number of exchanged arguments and the number of reached agreements.
Notes
When an agent uses rhetorical arguments to back his proposals, the negotiation is called persuasive negotiation [38].
A computational formalization of the BBGP model can be found in [30].
Minimal means that there is no \(\mathcal {S}' \subset \mathcal {S}\) such that \(\mathcal {S}\vdash h\) and consistent means that it is not the case that \(\mathcal {S}\vdash h\) and \(\mathcal {S}\vdash \lnot h\), for any h [25].
The number of exchanged threats determines the number of cycles needed to reach an agreement, since the former is half of the latter.
Recall that reputation is evidence of an agent's past behavior with respect to his opponents. We assume that this value is already estimated and is not private information; thus, the reputation value of an agent is visible to any other agent. On the other hand, the "accurate" value of the credibility of an agent P with respect to an opponent O—whose threshold is \({\mathtt {THRES}}(O)\)—is given by \({\mathtt {ACCUR\_CRED}}(P,O) = {\mathtt {REP}}(P) - {\mathtt {THRES}}(O)\).
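The accurate-credibility formula above can be sketched as follows. This is a minimal illustration, not code from the paper; the function name and example values are hypothetical.

```python
def accurate_credibility(reputation: float, threshold: float) -> float:
    """Accurate credibility of a proponent P with respect to an opponent O:
    ACCUR_CRED(P, O) = REP(P) - THRES(O), where REP(P) is P's public
    reputation value and THRES(O) is O's threshold."""
    return reputation - threshold

# Hypothetical example: a proponent with reputation 0.8 facing an opponent
# whose threshold is 0.5 has an accurate credibility of 0.3.
print(round(accurate_credibility(0.8, 0.5), 2))
```

A positive result means the proponent's reputation exceeds the opponent's threshold; a negative result means it falls short of it.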
According to Meyer [29], an emotional agent is an artificial system designed in such a manner that emotions play a role. Thus, the agent's emotional state may determine his actions, or part of them.
In the literature, there are two approaches to ordering the parts of a persuasive message: (i) the anti-climax approach claims that it is better to present the most important part of the argument first, and (ii) the climax approach claims that the most crucial or important evidence has to be kept until the end of the message. According to O'Keefe [33], it is more beneficial to arrange the arguments following the climax approach.
A social environment is a communication environment in which agents interact in a coordinated manner [32].
References
Abdul-Rahman A, Hailes S (2000) Supporting trust in virtual communities. In: Proceedings of the 33rd annual Hawaii international conference on system sciences. IEEE, pp 9–18
Amgoud L (2003) A formal framework for handling conflicting desires. In: Nielsen TD, Zhang NL (eds) Symbolic and quantitative approaches to reasoning with uncertainty. Springer, Berlin, pp 552–563
Amgoud L, Parsons S, Maudet N (2000) Arguments, dialogue, and negotiation. In: Horn W (ed) Proceedings of the 14th European Conference on Artificial Intelligence (ECAI’00). IOS Press, Amsterdam, The Netherlands, pp 338–342
Amgoud L, Prade H (2004) Threat, reward and explanatory arguments: generation and evaluation. In: Proceedings of the ECAI workshop on computational models of natural argument, pp 73–76
Amgoud L, Prade H (2005) Formal handling of threats and rewards in a negotiation dialogue. In: Proceedings of the 4th international joint conference on autonomous agents and multiagent systems (AAMAS). ACM, pp 529–536
Amgoud L, Prade H (2005) Handling threats, rewards, and explanatory arguments in a unified setting. Int J Intell Syst 20(12):1195–1218
Amgoud L, Prade H (2006) Formal handling of threats and rewards in a negotiation dialogue. In: Parsons S, Maudet N, Moraitis P, Rahwan I (eds) Argumentation in multi-agent systems. Springer, Berlin, pp 88–103
Artz D, Gil Y (2007) A survey of trust in computer science and the semantic web. Web Semant Sci Serv Agents World W Web 5(2):58–71
Atkinson K, Bench-Capon T, McBurney P (2005) Persuasive political argument. In: Proceedings of the fifth international workshop on computational models of natural argument (CMNA), pp 44–51
Atkinson K, Bench-Capon TJ, McBurney P (2005) Multi-agent argumentation for edemocracy. In: Proceedings of the 3rd European conference on multi-agent systems (EUMAS), pp 35–46
Baarslag T, Hendrikx MJ, Hindriks KV, Jonker CM (2016) Learning about the opponent in automated bilateral negotiation: a comprehensive survey of opponent modeling techniques. Auton Agents Multi-Agent Syst 30(5):849–898
Braet AC (1992) Ethos, pathos and logos in Aristotle’s Rhetoric: a re-examination. Argumentation 6(3):307–320
Bratman M (1987) Intention, plans, and practical reason. Cambridge University Press, Cambridge
Castelfranchi C, Paglieri F (2007) The role of beliefs in goal dynamics: Prolegomena to a constructive theory of intentions. Synthese 155(2):237–263
Dasgupta P (2000) Trust as a commodity. Trust Mak Break Coop Relat 4:49–72
Demirdöğen ÜD (2010) The roots of research in (political) persuasion: ethos, pathos, logos and the Yale studies of persuasive communications. Int J Soc Inq 3(1):189–201
Dimopoulos Y, Moraitis P (2014) Advances in argumentation based negotiation. In: Negotiation and argumentation in multi-agent systems: fundamentals, theories, systems and applications, vol 44, pp 82–125
Dong-Huynha T, Jennings N, Shadbolt N (2004) Fire: an integrated trust and reputation model for open multi-agent systems. In: ECAI 2004: 16th European conference on artificial intelligence, August 22–27, 2004, Valencia, Spain: including Prestigious Applications of Intelligent Systems (PAIS 2004): proceedings, vol 110, p 18
Fogg BJ (1998) Persuasive computers: perspectives and research directions. In: Proceedings of the SIGCHI conference on human factors in computing systems. ACM Press, pp 225–232
Guerini M, Castelfranchi C (2006) Promises and threats in persuasion. In: CMNA VI-computational models of natural argument, pp 14–21
Hadjinikolis C, Modgil S, Black E (2015) Building support-based opponent models in persuasion dialogues. In: International workshop on theory and applications of formal argumentation. Springer, pp 128–145
Hadjinikolis C, Siantos Y, Modgil S, Black E, McBurney P (2013) Opponent modelling in persuasion dialogues. In: Proceedings of the 23rd international joint conference on artificial intelligence (IJCAI), pp 164–170
Higgins C, Walker R (2012) Ethos, logos, pathos: strategies of persuasion in social/environmental reports. Account Forum 36:194–208
Hovland CI (1957) The order of presentation in persuasion. Yale University Press, New Haven
Hunter A (2010) Base logics in argumentation. In: Baroni P, Cerutti F, Giacomin M, Simari GR (eds) Proceedings of the 2010 conference on Computational Models of Argument: Proceedings of COMMA 2010. IOS Press, Amsterdam, The Netherlands, pp 275–286
Hunter A (2015) Modelling the persuadee in asymmetric argumentation dialogues for persuasion. In: Proceedings of the 24th international joint conference on artificial intelligence, pp 3055–3061
Hunter A (2018) Invited Talk: Computational Persuasion with Applications in Behaviour Change. In: Arai S, Kojima K, Mineshima K, Bekki D, Satoh K, Ohta Y (eds) New Frontiers in Artificial Intelligence. JSAI-isAI 2017. Lecture Notes in Computer Science, vol 10838. Springer, Cham
Medić A (2012) Survey of computer trust and reputation models-the literature overview. Int J Inf Commun Technol Res 2(3):254–275
Meyer JJC (2006) Reasoning about emotional agents. Int J Intell Syst 21(6):601–619
Morveli-Espinoza M, Possebom A, Puyol-Gruart J, Tacla CA (2019) Argumentation-based intention formation process. DYNA 86(208):82–91
Morveli-Espinoza M, Possebom AT, Tacla CA (2016) Construction and strength calculation of threats. In: Computational models of argument—proceedings of COMMA 2016, Potsdam, Germany, 12–16 September, 2016, pp 403–410
Odell JJ, Parunak HVD, Fleischer M, Brueckner S (2002) Modeling agents and their environment. In: International workshop on agent-oriented software engineering. Springer, pp 16–31
O'Keefe DJ (2016) Persuasion: theory and research, 3rd edn. SAGE Publications, Inc., Thousand Oaks, CA
Pinyol I, Sabater-Mir J (2013) Computational trust and reputation models for open multi-agent systems: a review. Artif Intell Rev 40(1):1–25
Poggi I (2005) The goals of persuasion. Pragmat Cogn 13:297–336
Rahwan I, Ramchurn SD, Jennings NR, Mcburney P, Parsons S, Sonenberg L (2003) Argumentation-based negotiation. Knowl Eng Rev 18(04):343–375
Ramchurn SD, Huynh D, Jennings NR (2004) Trust in multi-agent systems. Knowl Eng Rev 19(1):1–25
Ramchurn SD, Jennings NR, Sierra C (2003) Persuasive negotiation for autonomous agents: a rhetorical approach. In: IJCAI workshop on computational models of natural argument, pp 9–17
Rao AS, Georgeff MP et al (1995) BDI agents: from theory to practice. ICMAS 95:312–319
Rienstra T, Thimm M, Oren N (2013) Opponent models with uncertainty for strategic argumentation. In: Proceedings of the 23rd international joint conference on artificial intelligence (IJCAI), pp 332–338
Sabater J, Sierra C (2001) Regret: a reputation model for gregarious societies. In: Proceedings of the 4th workshop on deception fraud and trust in agent societies, vol 70. pp 61–69
Sierra C, Jennings NR, Noriega P, Parsons S (1998) A framework for argumentation-based negotiation. In: Intelligent agents IV agent theories, architectures, and languages. Springer, pp 177–192
Sycara KP (1990) Persuasive argumentation in negotiation. Theory Decis 28(3):203–242
Verheij B (1999) Automated argument assistance for lawyers. In: Proceedings of the 7th international conference on artificial intelligence and law. ACM, pp 43–52
Verheij B (2003) Artificial argument assistants for defeasible argumentation. Artif Intell 150(1–2):291–324
Walton D (2005) Fundamentals of critical argumentation. Cambridge University Press, Cambridge
Yu B, Singh MP (2000) A social mechanism of reputation management in electronic communities. In: International workshop on cooperative information agents. Springer, pp 154–165
Acknowledgements
Mariela Morveli Espinoza is funded by CAPES (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior). The authors are very grateful to Prof. Juan Carlos Nieves for making a number of important suggestions for improving the paper.
This is an extended version of a short-paper originally presented at the Computational Models of Argument Conference, COMMA’16 [31].
Morveli Espinoza, M., Possebom, A.T. & Tacla, C.A. On the calculation of the strength of threats. Knowl Inf Syst 62, 1511–1538 (2020). https://doi.org/10.1007/s10115-019-01399-2