Emergence and Stability of Collaborations Among Rational Agents

  • Sandip Sen
  • Partha Sarathi Dutta
  • Sabyasachi Saha
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2782)

Abstract

Autonomous agents interacting in an open world can be considered to be primarily driven by self-interest. In this paper, we evaluate the hypothesis that self-interested agents with complementary expertise can learn to recognize cooperation possibilities and develop stable, mutually beneficial coalitions that are resistant to exploitation by malevolent agents. Previous work in this area prescribed a strategy of reciprocal behavior for promoting and sustaining cooperation among self-interested agents, but considered only task completion time as the cost metric. To represent more realistic domains, we expand the cost metric to include both time of delivery and quality of work. In contrast to previous work, we use heterogeneous agents with varying expertise for different job types, which necessitates incorporating the novel aspect of learning about others' capabilities within the reciprocity framework. We also present a new mechanism in which agents base their decisions both on historical data and on expectations of future interactions: a decision mechanism that compares the current cost of helping with the expected future savings from interaction with the agent requesting help.
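The decision mechanism described above can be sketched as follows. This is a minimal illustration, not the authors' actual algorithm: the class name, the per-partner bookkeeping, and the `discount` parameter are all assumptions introduced for the sketch. The core idea from the abstract is preserved: an agent helps a requester only when the expected future savings from that partner, estimated here from the history of their interactions, exceed the current cost of helping.

```python
from dataclasses import dataclass, field

@dataclass
class ReciprocityAgent:
    """Illustrative reciprocity-based helper (names and estimator are assumptions).

    Tracks, per partner, the savings received from that partner's past help
    and the cost incurred helping them, then uses the discounted net balance
    as a crude estimate of expected future savings.
    """
    savings_from: dict = field(default_factory=dict)  # partner -> savings received so far
    cost_to: dict = field(default_factory=dict)       # partner -> cost of help given so far
    discount: float = 0.9                             # weight on expected future interactions

    def expected_future_savings(self, partner: str) -> float:
        # Hypothetical estimator: a partner's historical net contribution,
        # discounted; strangers and net exploiters yield zero expectation.
        past_net = self.savings_from.get(partner, 0.0) - self.cost_to.get(partner, 0.0)
        return self.discount * max(past_net, 0.0)

    def should_help(self, partner: str, current_cost: float) -> bool:
        # Help iff expected future savings from this partner outweigh the cost now.
        return self.expected_future_savings(partner) > current_cost

    def record_help_given(self, partner: str, cost: float) -> None:
        self.cost_to[partner] = self.cost_to.get(partner, 0.0) + cost

    def record_help_received(self, partner: str, savings: float) -> None:
        self.savings_from[partner] = self.savings_from.get(partner, 0.0) + savings
```

Under this sketch an agent with no interaction history offers no help, so exploitative agents that never reciprocate cannot accumulate a positive balance and are refused, which is the resistance-to-exploitation property the abstract claims.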



Copyright information

© Springer-Verlag Berlin Heidelberg 2003

Authors and Affiliations

  • Sandip Sen (University of Tulsa, Tulsa, USA)
  • Partha Sarathi Dutta (University of Southampton, UK)
  • Sabyasachi Saha (University of Tulsa, Tulsa, USA)
