Artificial Intelligence and Law, Volume 7, Issue 1, pp 17–35

Norms in artificial decision making

Authors

  • Magnus Boman
    • The DECIDE Research Group, Department of Computer and Systems Sciences, Stockholm University and the Royal Institute of Technology
Article

DOI: 10.1023/A:1008311429414

Cite this article as:
Boman, M. Artificial Intelligence and Law (1999) 7: 17. doi:10.1023/A:1008311429414

Abstract

A method for forcing norms onto individual agents in a multi-agent system is presented. The agents under study are supersoft agents: autonomous artificial agents programmed to represent and evaluate vague and imprecise information. Agents are further assumed to act in accordance with advice obtained from a normative decision module, with which they can communicate. Norms act as global constraints on the evaluations performed in the decision module and hence no action that violates a norm will be suggested to any agent. Further constraints on action may then be added locally. The method strives to characterise real-time decision making in agents, in the presence of risk and uncertainty.

Keywords

norm · constraint · real-time decision making · decisions with risk · decisions under uncertainty · vague information · policy · social space
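The architecture sketched in the abstract (norms acting as global constraints on the decision module's evaluations, with further constraints added locally, so that no norm-violating action is ever suggested to an agent) can be illustrated with a minimal sketch. All names here (NormativeDecisionModule, suggest, the interval-valued utilities standing in for vague and imprecise information) are hypothetical assumptions for illustration, not the paper's actual formalism.

```python
# Minimal, hypothetical sketch: norms are global constraints that filter the
# set of actions before evaluation, so no norm-violating action is suggested
# to any agent; agent-local constraints are applied afterwards. Interval-valued
# utilities (lower, upper) stand in for vague/imprecise information -- an
# assumption made for this sketch only.

from typing import Callable, Iterable, Optional, Tuple

Action = str
Constraint = Callable[[Action], bool]   # returns True if the action is permitted
Utility = Tuple[float, float]           # (lower bound, upper bound) on utility


class NormativeDecisionModule:
    def __init__(self, norms: Iterable[Constraint]):
        # Global constraints shared by every agent that consults the module.
        self.norms = list(norms)

    def suggest(
        self,
        actions: Iterable[Action],
        evaluate: Callable[[Action], Utility],
        local_constraints: Iterable[Constraint] = (),
    ) -> Optional[Action]:
        local_constraints = list(local_constraints)
        # 1. Norms act globally: discard every action that violates any norm.
        permitted = [a for a in actions if all(norm(a) for norm in self.norms)]
        # 2. Further constraints on action may then be added locally.
        permitted = [a for a in permitted if all(c(a) for c in local_constraints)]
        if not permitted:
            return None  # no norm-compliant action exists
        # 3. Suggest the action with the best pessimistic (lower-bound) utility,
        #    breaking ties on the optimistic (upper) bound.
        return max(permitted, key=lambda a: (evaluate(a)[0], evaluate(a)[1]))


if __name__ == "__main__":
    # Example: a norm forbids "defect"; the asking agent locally rules out "wait".
    module = NormativeDecisionModule(norms=[lambda a: a != "defect"])
    utilities = {"cooperate": (0.4, 0.9), "defect": (0.8, 1.0), "wait": (0.2, 0.5)}
    print(module.suggest(utilities, evaluate=utilities.get,
                         local_constraints=[lambda a: a != "wait"]))
    # -> "cooperate": "defect" is excluded by the norm, "wait" by the local constraint.
```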

Copyright information

© Kluwer Academic Publishers 1999