Emergence and Stability of Collaborations Among Rational Agents
Autonomous agents interacting in an open world can be considered to be primarily driven by self-interest. In this paper, we evaluate the hypothesis that self-interested agents with complementary expertise can learn to recognize cooperation possibilities and develop stable, mutually beneficial coalitions that are resistant to exploitation by malevolent agents. Previous work in this area has prescribed a strategy of reciprocal behavior for promoting and sustaining cooperation among self-interested agents. That work considered only task completion time as the cost metric. To represent more realistic domains, we expand the cost metric to include both time of delivery and quality of work. In contrast to previous work, we use heterogeneous agents with varying expertise for different job types. This necessitates incorporating the novel aspect of learning about others' capabilities within the reciprocity framework. We also present a new mechanism in which agents base their decisions both on historical data and on expectations of future interactions: a decision procedure that compares the current cost of helping with the expected future savings from interacting with the agent requesting help.
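The decision mechanism described above, in which an agent weighs the immediate cost of helping against discounted expected future savings from the requester, can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual algorithm; the class name, the `discount` parameter, and the use of past net savings as the estimate of future savings are all assumptions introduced here.

```python
# Hypothetical sketch of a reciprocity-based helping decision.
# All names and parameters are illustrative, not taken from the paper.
from dataclasses import dataclass, field

@dataclass
class ReciprocityAgent:
    """Self-interested agent that helps another agent only when the
    expected future savings from that agent outweigh the current cost."""
    discount: float = 0.9  # assumed weight on expected future savings
    balances: dict = field(default_factory=dict)  # net savings history per agent

    def record_interaction(self, other: str, savings: float) -> None:
        # Positive savings: `other` helped us; negative: we bore a cost for them.
        self.balances[other] = self.balances.get(other, 0.0) + savings

    def expected_future_savings(self, other: str) -> float:
        # Simple assumed estimator: past net savings predict future ones;
        # agents with no positive history promise no future savings.
        return max(self.balances.get(other, 0.0), 0.0)

    def should_help(self, other: str, current_cost: float) -> bool:
        # Help iff discounted expected future savings exceed the cost now.
        return self.discount * self.expected_future_savings(other) > current_cost
```

Under this sketch, an agent with a favorable interaction history is granted help up to the point where the immediate cost exhausts the discounted expectation, while strangers and exploiters (who never reciprocate) are refused, which is one way stability against malevolent agents could arise.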