
Trusting the messenger because of the message: feedback dynamics from information quality to source evaluation


Abstract

Information provided by a source should be assessed by an intelligent agent on the basis of several criteria: most notably, its content and the trust one has in its source. In turn, the observed quality of information should feed back on the assessment of its source, and such feedback should be distributed intelligently among different features of the source (e.g., competence and sincerity). We propose a formal framework in which trust is treated as a multi-dimensional concept relativized to the sincerity of the source and its competence with respect to specific domains: both these aspects influence the assessment of the information, and both determine a feedback on the trustworthiness degree of its source. We provide a framework to describe the combined effects of competence and sincerity on the perceived quality of information, and we focus on the feedback dynamics from information quality to source evaluation, highlighting the role that uncertainty reduction and social comparison play in determining the amount and the distribution of feedback.
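
As a rough illustration of the bookkeeping such a framework implies, the sketch below keeps sincerity and per-domain competence as separate degrees, combines them into an assessed quality for a piece of information, and feeds the observed quality back onto both dimensions. All names, the min-combination, and the update rule are assumptions made for this sketch, not the paper's actual formalism.

```python
from dataclasses import dataclass, field


@dataclass
class Source:
    """Hypothetical multi-dimensional trust record for one source."""
    sincerity: float = 0.5                           # degree in [0, 1]
    competence: dict = field(default_factory=dict)   # domain -> degree in [0, 1]

    def competence_in(self, domain: str) -> float:
        return self.competence.get(domain, 0.5)      # neutral prior for unknown domains


def assessed_quality(src: Source, domain: str) -> float:
    """Toy combination: a report is only as good as the source is both
    sincere (willing to tell the truth) and competent (able to know it)."""
    return min(src.sincerity, src.competence_in(domain))


def feedback(src: Source, domain: str, observed_quality: float, rate: float = 0.2) -> None:
    """Toy feedback step: nudge each dimension toward the observed quality.
    The paper argues that feedback should be distributed intelligently across
    dimensions; here it is split evenly, purely for illustration."""
    src.sincerity += rate * (observed_quality - src.sincerity)
    c = src.competence_in(domain)
    src.competence[domain] = c + rate * (observed_quality - c)


alice = Source(sincerity=0.8, competence={"weather": 0.6})
print(assessed_quality(alice, "weather"))            # 0.6
feedback(alice, "weather", observed_quality=0.2)     # the report turned out to be poor
print(alice.sincerity, alice.competence["weather"])  # 0.68 0.52
```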


Notes

  1. Sincerity and competence are not the only features required to assess the trustworthiness of sources, so we propose to focus on them only as a useful first approximation. Demolombe (2001) emphasized the importance for a source to be not only correct (if it asserts something, then that information is true), but also endowed with a complete knowledge of a given domain (if something is true, the source is informed of it) and willing to share such knowledge (if the source is informed of something, then it shares that information with others). Moreover, it is also essential that the information provided by the source is relevant for the receiver’s goals, and that the latter has reasons to trust the source not to deliver useless news, even if they happen to be correct and complete (for discussion of trust in relevance, see Paglieri and Castelfranchi 2012). Discussing these further dimensions of information dynamics is left to future work.

  2. \(c_{d}^{+}\) and \(c_{d}^{-}\) obey the property, typical of necessities and beliefs, that \(c_{d}^{+} > 0 \Rightarrow c_{d}^{-} = 0\) and, vice versa, \(c_{d}^{-} > 0 \Rightarrow c_{d}^{+} = 0\) (a toy illustration of this constraint is sketched right after these notes).

  3. con(A) represents the conclusion of argument A.

  4. Note that these principles are based on a number of assumptions (most notably, a high level of independence and a low probability of collusion among sources), and thus are not meant to be universally valid. Rather, they exemplify how simple rules of thumb can be identified to regulate feedback distribution, even without any explicit representation of context or of the agents' mental states (a toy example of such a rule is sketched after these notes). Testing their validity across various communicative situations (e.g., how much collusion is required to make these heuristics ineffective?) is left as future work.
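
A toy illustration of the constraint recalled in note 2: at most one of \(c_{d}^{+}\) and \(c_{d}^{-}\) can be strictly positive, as with necessity degrees. The helper below simply enforces the constraint by flooring the weaker degree to zero; that resolution rule is an assumption made for the example, not something prescribed by the paper.

```python
def bipolar(c_plus: float, c_minus: float) -> tuple[float, float]:
    """Enforce the property typical of necessities and beliefs: if one degree
    is strictly positive, the opposite degree must be zero. The weaker of the
    two inputs is floored to zero (an illustrative choice)."""
    if c_plus > c_minus:
        return c_plus, 0.0
    if c_minus > c_plus:
        return 0.0, c_minus
    return 0.0, 0.0  # tie (e.g. both 0): treat as ignorance about the domain


print(bipolar(0.7, 0.0))  # (0.7, 0.0)
print(bipolar(0.3, 0.6))  # (0.0, 0.6)
```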
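
And a toy example of the kind of rule of thumb mentioned in note 4, for distributing a feedback amount among several concordant sources: the more independent the sources are presumed to be, the more the credit (or blame) is diluted among them, while suspected collusion leaves each source bearing the full amount. The function and its dampening rule are assumptions made for illustration, not the principles proposed in the paper.

```python
def distribute_feedback(total: float, sources: list[str],
                        independence: float = 1.0) -> dict[str, float]:
    """Toy rule of thumb: split a feedback amount among concordant sources.
    independence = 1.0 -> sources are treated as independent, so each gets an
    equal share of the total; independence = 0.0 -> likely collusion, so the
    agreement of many sources adds nothing and each gets the full amount."""
    n = len(sources)
    if n == 0:
        return {}
    share = total * (independence / n + (1.0 - independence))
    return {s: share for s in sources}


print(distribute_feedback(0.6, ["A", "B", "C"], independence=1.0))  # 0.2 each
print(distribute_feedback(0.6, ["A", "B", "C"], independence=0.0))  # 0.6 each
```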

References

  • Abdul-Rahman A, Hailes S (1997) A distributed trust model. In: Proceedings of the 1997 workshop on new security paradigms (NSPW 1997). ACM, New York, pp 48–60


  • Castelfranchi C (1997) Representation and integration of multiple knowledge sources: issues and questions. In: Cantoni V, Di Gesù V, Setti A, Tegolo D (eds) Human & machine perception: information fusion. Plenum, New York, pp 235–254


  • Castelfranchi C, Falcone R (2010) Trust theory: a socio-cognitive and computational model. Wiley, New York


  • Conte R, Paolucci M (2002) Reputation in artificial societies: social beliefs for social order. Kluwer Academic, Dordrecht


  • da Costa Pereira C, Tettamanzi A, Villata S (2011) Changing one's mind: erase or rewind? In: Walsh T (ed) IJCAI. IJCAI/AAAI Press, Menlo Park, pp 164–171


  • Demolombe R (2001) To trust information sources: a proposal for a modal logical framework. In: Castelfranchi C, Tan Y-H (eds) Trust and deception in virtual societies. Kluwer Academic, Dordrecht, pp 111–124


  • Dix J, Parsons S, Prakken H, Simari GR (2009) Research challenges for argumentation. Comput Sci Res Dev 23(1):27–34


  • Dubois D, Prade H (2008) An introduction to bipolar representations of information and preference. Int J Intell Syst 23:866–877


  • Dung PM (1995) On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artif Intell 77(2):321–358


  • Etuk A, Norman T, Sensoy M (2012) Reputation-based trust evaluations through diversity. In: Proceedings of the 15th international workshop on trust in agent societies, pp 13–24


  • Fullam K, Barber KS (2004) Using policies for information valuation to justify beliefs. In: AAMAS. IEEE Comput Soc, Los Alamitos, pp 404–411


  • Fullam K, Barber KS (2007) Dynamically learning sources of trust information: experience vs. reputation. In: AAMAS, p 164


  • Gambetta D (1988) Can we trust trust? In: Trust: making and breaking cooperative relations, pp 213–238


  • Koster A, Mir JS, Schorlemmer M (2013) Argumentation and trust. In: Ossowski S (ed) Agreement technologies. Springer, Berlin, pp 441–451


  • Liau C-J (2003) Belief, information acquisition, and trust in multi-agent systems—a modal logic formulation. Artif Intell 149(1):31–60


  • Lorini E, Demolombe R (2008) From binary trust to graded trust in information sources: a logical perspective. In: AAMAS-TRUST, pp 205–225


  • Matt P-A, Morge M, Toni F (2010) Combining statistics and arguments to compute trust. In: AAMAS, pp 209–216


  • Mir JS (2003) Trust and reputation for agent societies. PhD thesis, CSIC

  • Mui L (2002) Computational models of trust and reputation: agents, evolutionary games, and social networks. PhD thesis, Massachusetts Institute of Technology, Cambridge, MA, USA

  • Paglieri F, Castelfranchi C (2012) Trust in relevance. In: Ossowski S, Toni F, Vouros GA (eds) AT. CEUR workshop proceedings, vol 918, pp 332–346. CEUR-WS.org


  • Parsons S, McBurney P, Sklar E (2010) Reasoning about trust using argumentation: a position paper. In: ArgMAS


  • Parsons S, Tang Y, Sklar E, McBurney P, Cai K (2011) Argumentation-based reasoning in agents with varying degrees of trust. In: AAMAS, pp 879–886


  • Prade H (2007) A qualitative bipolar argumentative view of trust. In: SUM, pp 268–276


  • Rahwan I, Simari G (eds) (2009) Argumentation in artificial intelligence. Springer, Berlin


  • Stranders R, de Weerdt M, Witteveen C (2007) Fuzzy argumentation for trust. In: CLIMA, pp 214–230


  • Tang Y, Cai K, Sklar E, McBurney P, Parsons S (2010) A system of argumentation for reasoning about trust. In: EUMAS


  • Teacy WTL, Patel J, Jennings NR, Luck M (2006) TRAVOS: trust and reputation in the context of inaccurate information sources. Auton Agents Multi-Agent Syst 12(2):183–198


  • Toulmin S (1958) The uses of argument. Cambridge University Press, Cambridge


  • Villata S, Boella G, Gabbay DM, van der Torre L (2011) Arguing about the trustworthiness of the information sources. In: ECSQARU, pp 74–85


  • Walton D, Reed C, Macagno F (2008) Argumentation schemes. Cambridge University Press, Cambridge


  • Wang Y, Singh MP (2007) Formal trust model for multiagent systems. In: Veloso MM (ed) IJCAI, pp 1551–1556


  • Zadeh LA (1978) Fuzzy sets as a basis for a theory of possibility. Fuzzy Sets Syst 1:3–28



Author information


Corresponding author

Correspondence to Fabio Paglieri.


About this article

Cite this article

Paglieri, F., Castelfranchi, C., da Costa Pereira, C. et al. Trusting the messenger because of the message: feedback dynamics from information quality to source evaluation. Comput Math Organ Theory 20, 176–194 (2014). https://doi.org/10.1007/s10588-013-9166-x
