Abstract
There has been considerable discussion in the past about the assumptions and basis of different ethical rules. For instance, it is commonplace to say that ethical rules are default rules, which means that they tolerate exceptions. Some authors argue that morality can only be grounded in particular cases, while others defend the existence of general principles related to ethical rules. Our purpose here is not to justify either position, but to try to model general ethical rules with artificial intelligence formalisms and to compute logical consequences of different ethical theories. More precisely, this is an attempt to show that progress in non-monotonic logics, which simulate default reasoning, could provide a way to formalize different ethical conceptions. From a technical point of view, the model developed in this paper makes use of the Answer Set Programming (ASP) formalism. It is applied comparatively to different ethical systems with respect to their attitude towards lying. The advantages of such a formalization are two-fold: firstly, to clarify ideas and assumptions, and, secondly, to use solvers to derive consequences of different ethical conceptions automatically, which can help in a rigorous comparison of ethical theories.
Abbreviations
- ASP: Answer Set Programming
- PROLOG: Programming in Logic
- RFID: Radio Frequency Identification
References
A. Aaby, Computational Ethics. Technical Report, 2005.
Aristotle, Nicomachean Ethics. Oxford University Press, Oxford, 2002.
I. Asimov, I, Robot. Spectra, New York, 2004.
C. Baral, Knowledge Representation, Reasoning and Declarative Problem Solving. Cambridge University Press, Cambridge, 2003.
B. Constant and I. Kant, Le droit de mentir. Mille et une nuits no. 426, 2003.
B. Constant, Des réactions politiques (1797), in De la force du gouvernement actuel de la France et de la nécessité de s'y rallier (1796). Éditions Flammarion, Collection Champs, Paris, 1988.
M. Davis and H. Putnam, A Computing Procedure for Quantification Theory. Journal of the ACM 7: 201–215, 1960.
N. Dershowitz, Computing with Rewrite Rules. Information and Control 65: 122–157, 1985.
L. Floridi and J. Sanders, On the Morality of Artificial Agents. Minds and Machines 14(3): 349–379, 2004.
G. Harman, Application of Statistical Learning Theory to Machines and People. Jean Nicod Lectures, 2005a.
G. Harman, Moral Particularism and Transduction. Philosophical Issues, 15, 2005b.
I. Kant, On a Putative Right to Lie from Love of Mankind (1798), in The Metaphysics of Morals. Cambridge University Press, Cambridge.
I. Kant, Critique of Practical Reason. Cambridge University Press, Cambridge, 1997.
I. Kant, Groundwork of the Metaphysics of Morals. Cambridge University Press, Cambridge, 1998.
J. McCarthy, Circumscription: A Form of Non-Monotonic Reasoning. Artificial Intelligence 13: 27–39 and 171–172, 1980.
D. McDermott and J. Doyle, Non-Monotonic Logic I. Artificial Intelligence 13: 41–72, 1980.
R. Reiter, A Logic for Default Reasoning. Artificial Intelligence 13: 81–132, 1980.
A. Robinson, Generalized Resolution Principle. In E. Dale and D. Michie, editors, Machine Intelligence, Vol. 3, pp. 77–94. American Elsevier, New York, 1968.
V. Vapnik, The Nature of Statistical Learning Theory, 2nd edition. Springer, New York, 2000.
P. Väyrynen, Moral Generalism: Enjoy in Moderation. Ethics 116: 707–741, 2006.
Appendices
Annex
This annex contains the AnsProlog* programs that model the three ethical conceptions discussed above, namely those of Aristotle, Kant and Constant.
Programming using AnsProlog*
AnsProlog* is an efficient implementation of ASP; its easy-to-use graphical user interface makes it accessible to everyone. Interested readers can run these programs by downloading the AnsProlog* solver and GUI available from Chitta Baral's homepage at http://www.baral.us/bookone/.
The AnsProlog* syntax and semantics are clearly described in Chitta Baral's book in the reference list. In brief, the syntax is inspired by the Edinburgh Prolog syntax: variables are strings whose first character is an uppercase letter, while constants are strings that begin with an underscore or a lowercase letter.
The ASP rule \({\rho: L_0 \,or\, L_1 \,or \ldots or\, L_k \leftarrow L_{k+1},\, L_{k+2}, \ldots, L_m,\ \hbox{not}\,L_{m+1}, \ldots, \hbox{not}\,L_n}\), where the \({L_i}\) are literals, i.e. atoms or negations of atoms, translates into the following AnsProlog* rule: \({L_{0} \,\vert\, L_{1} \,\vert \ldots \vert\, L_{k} \;{:-}\; L_{k+1},\, L_{k+2}, \ldots, L_{m},\ \hbox{not}\,L_{m+1}, \ldots, \hbox{not}\,L_{n}}\).
An ASP program is composed of a set of ASP rules ρ; consequently, an AnsProlog* program consists of a set of AnsProlog* rules.
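This syntactic correspondence can be made concrete with a small Python sketch (a hypothetical helper written for illustration; the rule it prints is an invented example, not one of the annex rules):

```python
def to_ansprolog(heads, pos, neg):
    """Render a ground ASP rule (heads, positive body, negated body)
    in AnsProlog* surface syntax: H1 | ... | Hk :- B1, ..., not C1, ..."""
    body = list(pos) + ["not " + lit for lit in neg]
    head = " | ".join(heads)
    if body:
        return (head + " :- " if head else ":- ") + ", ".join(body) + "."
    return head + "."

# An invented disjunctive rule with one positive and one negated body literal.
print(to_ansprolog(["just(a)", "unjust(a)"], ["act(a)"], ["exempt(a)"]))
# → just(a) | unjust(a) :- act(a), not exempt(a).
```

Facts and integrity constraints fall out as the limiting cases: an empty body yields `head.`, and an empty head yields `:- body.`.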
AnsProlog* implementation of the Aristotelian ethical conception
#domain action(A; AA; C; CC).
#domain goal(G).
#domain person(P; PP).
action(tell(P, lie); tell(P, truth); murder; eat(P); discuss(P)).
person("I"; paul; peter).
goal(answer_question(P)).
worse(tell(P, lie), A):- neq(A, tell(P, lie)),
neq(A, murder).
worse(murder, A):- neq(A, murder).
consequence(A, A).
consequence(tell("I", truth), murder).
not_worst_consequence(A, C):- consequence(A, C),
consequence(A, CC), worse(CC, C), not worse(C, CC).
worst_consequence(A, C) :- consequence(A, C),
not not_worst_consequence(A, C).
solve_goal("I", answer_question("I"), tell("I", lie)).
solve_goal("I", answer_question("I"), tell("I", truth)).
act(P, G, A):- solve_goal(P, G, A), not unjust(A).
:- act(P, G, A), act(P, G, AA), neq(A, AA).
obliged(P):- act(P, answer_question(P), A).
:- not obliged("I").
unjust(A):- worst_consequence(A, C),
worst_consequence(AA, CC),
worse(C, CC), not just(A).
just(A):- worst_consequence(A, C),
worst_consequence(AA, CC),
worse(CC, C), not unjust(A).
The predicate worse defines the ethical preferences. In the first experiment, it is specified that to murder is worse than to lie. As a consequence, in all solutions, murder is unjust. However, if we replace the rule:
“worse(tell(P, lie), A):- neq(A, tell(P, lie)),
neq(A, murder).”
by the rule: “worse(tell(P, lie), A):- neq(A, tell(P, lie)).”
then some answer sets (i.e. some possible worlds) assert that lying is unjust while others assert that murder is unjust.
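How such mutually blocking defaults give rise to several answer sets can be checked with a naive Python sketch of the stable-model (Gelfond–Lifschitz) semantics. The two-rule program below is an invented toy reduction of the lie/murder alternative, not the annex program itself:

```python
from itertools import chain, combinations

def least_model(rules):
    """Least model of a negation-free ground program, by forward chaining."""
    m = set()
    changed = True
    while changed:
        changed = False
        for head, pos, _ in rules:
            if pos <= m and head not in m:
                m.add(head)
                changed = True
    return m

def is_answer_set(candidate, rules):
    """Gelfond-Lifschitz test: drop rules whose negated body intersects the
    candidate, erase remaining negation, compare with the least model."""
    reduct = [(h, p, frozenset()) for h, p, n in rules if not (n & candidate)]
    return least_model(reduct) == candidate

def answer_sets(rules):
    """Enumerate all answer sets of a tiny ground program by brute force."""
    atoms = sorted(set(chain.from_iterable({h} | p | n for h, p, n in rules)))
    for k in range(len(atoms) + 1):
        for combo in combinations(atoms, k):
            if is_answer_set(set(combo), rules):
                yield set(combo)

# Toy version of the mutually blocking defaults:
# lying is unjust unless murder is, and conversely.
program = [
    ("unjust_lie",    frozenset(), frozenset({"unjust_murder"})),
    ("unjust_murder", frozenset(), frozenset({"unjust_lie"})),
]
print(list(answer_sets(program)))  # two answer sets, one per possible world
```

Each answer set plays the role of a possible world: one contains `unjust_lie`, the other `unjust_murder`, mirroring the behaviour of the modified Aristotelian program.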
AnsProlog* implementation of the Kantian ethical conception
The Kantian ethical conception dispenses with ethical preferences and replaces them with the categorical imperative, which says that the maxim of my will has to be universalizable. In the case of lying, if I claim a right to lie, I have to consider a world where everybody could lie. The consequence would be that trusting anyone would become impossible.
#domain action(A; AA; C; CC).
#domain goal(G).
#domain person(P; PP).
#domain speech_act(S; SS).
action(tell(P, lie); tell(P, truth); murder; eat(P);
discuss(P)).
person("I"; paul; peter).
goal(answer_question(P)).
speech_act(lie; truth).
consequence(A, A).
consequence(tell("I", truth), murder).
consequence(tell(peter, truth), murder).
solve_goal(P, answer_question(P), tell(P, lie)).
solve_goal(P, answer_question(P), tell(P, truth)).
act(P, G, A):- solve_goal(P, G, A), maxim_will(P, G, A), not unjust(A).
maxim_will("I", answer_question("I"), tell("I", lie)).
maxim_will(P, answer_question(P), tell(P, S)):-
maxim_will("I", answer_question("I"), tell("I", S)), not maxim_will(P, answer_question(P), tell(P, SS)), neq(S, SS).
untrust(P):- maxim_will(P, G, tell(P, lie)).
trust(P) :- not untrust(P).
obliged(P):- act(P, answer_question(P), A).
:- not obliged("I").
AnsProlog* implementation of the Constant ethical conception
The third ethical conception is Constant's. It has to be noted that the predicate "tell" has one more argument than in the previous models: it now represents a communication act between two actors, an emitter and a receptor.
#domain action(A; AA; C; CC).
#domain goal(G).
#domain person(P; PP).
#domain speech_act(S; SS).
action(tell(P, PP, lie); tell(P, PP, truth); murder;
eat(P); discuss(P)).
person("I"; paul; peter; murderer).
goal(answer_question(P, PP)).
speech_act(lie; truth).
worse(tell(P, PP, lie), A):- neq(A, tell(P, PP, lie)).
worse(murder, A):- neq(A, murder).
consequence(A, A).
consequence(tell("I", murderer, truth), murder).
consequence(tell(peter, murderer, truth), murder).
not_worst_consequence(A, C):- consequence(A, C),
consequence(A, CC), worse(CC, C), not worse(C, CC).
worst_consequence(A, C):- consequence(A, C),
not not_worst_consequence(A, C).
solve_goal(P, answer_question(P, PP), tell(P, PP, lie)).
solve_goal(P, answer_question(P, PP), tell(P, PP, truth)).
act(P, G, A):- solve_goal(P, G, A), principle(P, G, A).
principle(P, answer_question(P, PP), tell(P, PP, truth)):-
not not_deserve(PP, tell(P, PP, truth)).
principle(P, answer_question(P, PP), tell(P, PP, lie)):-
not_deserve(PP, tell(P, PP, truth)).
not_deserve(PP, tell(P, PP, truth)):-
worst_consequence(tell(P, PP, truth), C),
worse(C, tell(P, PP, lie)).
obliged(P):- act(P, answer_question(P, PP), A).
:- not obliged("I").
Ganascia, JG. Modelling ethical rules of lying with Answer Set Programming. Ethics Inf Technol 9, 39–47 (2007). https://doi.org/10.1007/s10676-006-9134-y