Abstract
As we witness a growing number of increasingly autonomous software agents that interact with us, or operate on our behalf, under circumstances that are not fully known in advance, we argue that there is a need to provide these agents with moral reasoning capabilities. In the current literature on behaviour constraints and multi-agent (software) systems (MAS), three topics can be distinguished. The first concerns the analysis of various forms of restraint and their basis; this topic lies at the core of moral philosophy. The second concerns the formal specification of, and reasoning about, such constraints; research on this topic focuses predominantly on logic, mostly modal logic and defeasible logic. The third is the MAS- and implementation-related topic of designing a working system in which rules can be enforced and deviant behaviour can be detected.
Here we argue that all three topics need to be addressed and strongly integrated. The moral-philosophical analysis is needed to provide a detailed conceptualization of the various forms of behaviour constraint and direction; this analysis goes beyond what is usual in the more technical, design-focused literature. The (modal) logic provides the rigour required to ultimately allow implementation. The implementation itself is the ultimate objective. We outline the three components and demonstrate how they can be integrated. We do not intend, or claim, that this moral reasoning is on a par with human moral reasoning. Our claim is that the analysis of human moral reasoning may provide a useful model for constraining software agent behaviour. Equally important, such reasoning is recognizable by humans, which is an important characteristic when it comes to ‘human–artificial agent’ interaction. Recognizing and understanding the precise basis for the behaviour constraint in the artificial entity will make the agent more trustworthy, which, in turn, will facilitate the acceptance of, and the interaction with, artificial agents.
Rights and permissions
Open Access This is an open access article distributed under the terms of the Creative Commons Attribution Noncommercial License (https://creativecommons.org/licenses/by-nc/2.0), which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.
Cite this article
Wiegel, V., van den Berg, J. Combining Moral Theory, Modal Logic and MAS to Create Well-Behaving Artificial Agents. Int J of Soc Robotics 1, 233–242 (2009). https://doi.org/10.1007/s12369-009-0023-5