Trust in Multi-Agent Systems
Research in Multi-Agent Systems has revealed that agents must enter into a relationship voluntarily in order to collaborate; otherwise the collaborative effort may fail [1,2]. When examining this problem, trust becomes the focus in promoting the ability to collaborate, yet trust itself is defined from several perspectives. Trust between agents within a Multi-Agent System may be analogous to the trust required between humans. A Trust, Negotiation, Communication model, currently being developed, is based around trust and may be used as a basis for future research and the ongoing development of Multi-Agent Systems (MAS).
This paper discusses how the architecture of an agent could be designed to give it the ability to foster trust between agents and therefore to dynamically organise within a team environment or across distributed systems to enhance individual abilities. The Trust, Negotiation, Communication (TNC) model is a proposed building block that provides an agent with the mechanisms to develop a formal trust network through cooperative, confederated, or collaborative associations. The model is conceptual; discussion is therefore limited to the basic framework.
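The TNC model itself is conceptual and prescribes no particular implementation. Purely as an illustration of the idea that collaboration is voluntary and mediated by trust, the sketch below shows one hypothetical way an agent might maintain trust values for its peers and use a threshold to decide whether to enter a collaborative relationship; the class name, update rule, and threshold are assumptions, not part of the model.

```python
class TrustfulAgent:
    """Illustrative agent that tracks trust in its peers.

    All names and thresholds here are hypothetical; the TNC model
    is conceptual and does not prescribe this design.
    """

    def __init__(self, name, trust_threshold=0.5):
        self.name = name
        self.trust_threshold = trust_threshold
        self.trust = {}  # peer name -> trust value in [0, 1]

    def record_interaction(self, peer, success, learning_rate=0.2):
        # Move trust toward 1.0 after a successful interaction and
        # toward 0.0 after a failed one (simple exponential update).
        current = self.trust.get(peer, 0.5)  # unknown peers start neutral
        target = 1.0 if success else 0.0
        self.trust[peer] = current + learning_rate * (target - current)

    def will_collaborate(self, peer):
        # Collaboration is voluntary: the agent only enters a
        # relationship with peers it trusts above its threshold.
        return self.trust.get(peer, 0.5) >= self.trust_threshold


agent = TrustfulAgent("a")
agent.record_interaction("b", success=True)   # trust in b rises to 0.6
agent.record_interaction("c", success=False)  # trust in c falls to 0.4
print(agent.will_collaborate("b"))  # True
print(agent.will_collaborate("c"))  # False
```

A full TNC agent would layer negotiation and communication mechanisms on top of such a trust store; this fragment covers only the trust bookkeeping.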
Keywords: Multi-Agent System, Communication Layer, Agent Architecture, Team Formation, Controller Agent
- 3. Urlings, P.: Teaming Human and Machine. PhD thesis, School of Electrical and Information Engineering, University of South Australia (2003)
- 4. Mann, R.: Interpersonal styles and group development. American Journal of Psychology 81, 137–140 (1970)
- 6. Prinzel, L.J.: The relationship of self-efficacy and complacency in pilot-automation interaction. Technical Report TM-2002-211925, NASA, Langley Research Center, Hampton, Virginia (2002)
- 8. Frankel, C.B., Bedworth, M.D.: Control, estimation and abstraction in fusion architectures: Lessons from human information processing. In: Proceedings of the Third International Conference on Information Fusion (FUSION 2000), vol. 1, pp. MOC5-3–MOC5-10 (2000)
- 9. Kelly, C., Boardman, M., Goillau, P., Jeannot, E.: Guidelines for trust in future ATM systems: A literature review. Technical Report 030317-01, European Organisation for the Safety of Air Navigation (May 2003)
- 10. Hoffman, R.R.: Whom (or what) do you (mis)trust?: Historical reflections on the psychology and sociology of information technology. In: Proceedings of the Fourth Annual Symposium on Human Interaction with Complex Systems, pp. 28–36 (1998)
- 11. Cahill, V., Gray, E., Seigneur, J.M., Jensen, C.D., Chen, Y., Shand, B., Dimmock, N., Twigg, A., Bacon, J., English, C., Wagealla, W., Terzis, S., Nixon, P., Serugendo, G.D.M., Bryce, C., Carbone, M., Krukow, K., Nielson, M.: Using trust for secure collaboration in uncertain environments. IEEE Pervasive Computing 2, 52–61 (2003)
- 14. Wooldridge, M., Jennings, N.: Agent theories, architectures, and languages: A survey. In: Wooldridge, M.J., Jennings, N.R. (eds.) ECAI 1994 and ATAL 1994. LNCS, vol. 890, p. 403. Springer, Heidelberg (1995)
- 15. Consoli, A., Tweedale, J.W., Jain, L.: The link between agent coordination and cooperation. In: 10th International Conference on Knowledge-Based Intelligent Information and Engineering Systems, Bournemouth, England. LNCS. Springer, Heidelberg (2006)
- 16. Cohen, M.S.: A situation specific model of trust to decision aids. Cognitive Technologies (2000)