Social Intelligence for Computers

Making Artificial Entities Creative in their Interactions
  • Juliette Rouchier
Part of the Multiagent Systems, Artificial Societies, and Simulated Organizations book series (MASA, volume 3)

Abstract

I review the two main principles that have been developed to coordinate artificial agents in multi-agent systems. The first is based on elaborate communication among highly cognitive agents. The other is eco-resolution, in which very simple agents have no awareness of the existence of others. Both approaches fail to produce a social life close to that of humans in terms of creativity or the exchange of abstractions. Humans can build new ways of communicating, even with unknown entities, because they assume that the other is able to give meaning to messages, and they can transfer a protocol from one social field to another. If we want social intelligence to be creative, a first step would be to have agents that are willing to communicate and that know more than one reason, and more than one way, to do so.
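The contrast between the two coordination principles can be made concrete. The following minimal sketch (all class and method names are illustrative, not from the chapter) shows an eco-resolution-style agent that reacts only to local stimuli with no representation of other agents, alongside a communication-based agent that maintains beliefs about others and coordinates through explicit structured messages:

```python
class EcoAgent:
    """Eco-resolution style: reacts only to its local environment and
    has no representation of other agents."""

    def __init__(self, position):
        self.position = position

    def step(self, local_pressure):
        # Flee when local pressure (e.g. crowding) is high; otherwise stay.
        if local_pressure > 0.5:
            self.position += 1
        return self.position


class CognitiveAgent:
    """Communication-based style: maintains beliefs about others and
    coordinates through explicit structured messages."""

    def __init__(self, name):
        self.name = name
        self.beliefs = {}  # what this agent believes about other agents
        self.inbox = []

    def send(self, other, performative, content):
        # Messages carry an explicit intent (the performative), not just data.
        other.inbox.append({"from": self.name,
                            "performative": performative,
                            "content": content})

    def process_inbox(self):
        # Update beliefs about each sender from its message content.
        for msg in self.inbox:
            self.beliefs[msg["from"]] = msg["content"]
        self.inbox.clear()


# The eco-agent coordinates implicitly, through the environment alone:
ant = EcoAgent(position=0)
ant.step(local_pressure=0.9)  # crowded, so it moves away

# The cognitive agents coordinate explicitly, through messages:
a, b = CognitiveAgent("a"), CognitiveAgent("b")
a.send(b, performative="inform", content="task-1 done")
b.process_inbox()
```

The eco-agent never refers to another agent at all, while the cognitive agent's single fixed performative illustrates the chapter's point: such protocols work, but the agents cannot invent a new reason or way to communicate beyond what was designed in.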

Keywords

Social Life · Intelligent Agent · Cognitive Agent · Social Intelligence · Agent Passing

Copyright information

© Kluwer Academic Publishers 2002
