Minds and Machines, Volume 21, Issue 2, pp 261–274

Computational Meta-Ethics

Towards the Meta-Ethical Robot

Abstract

It has been argued that ethically correct robots should be able to reason about right and wrong. In order to do so, they must have a set of do’s and don’ts at their disposal. However, such a list may be inconsistent, incomplete or otherwise unsatisfactory, depending on the reasoning principles that one employs. For this reason, it might be desirable if robots were to some extent able to reason about their own reasoning—in other words, if they had some meta-ethical capacities. In this paper, we sketch how one might go about designing robots that have such capacities. We show that the field of computational meta-ethics can profit from the same tools as have been used in computational metaphysics.

Keywords

Automated moral reasoning · Computational meta-ethics

Introduction

A continually growing number of computers/robots is being deployed on the battlefield, in hospitals, in law enforcement, in electronic business negotiation and other ethically sensitive areas. It is desirable that the computers/robots in these areas behave in an ethically correct manner. It might be argued that they can only be guaranteed to do so if they can reason about right and wrong on the basis of a set of moral standards: only “explicit ethical agents” (as they have been called; see Moor 2006) can be expected to behave in an ethically correct manner. However, sets of moral standards can be inconsistent, incomplete, or inappropriate in view of other sets of standards; it would therefore be desirable if robots equipped with such standards were to some extent able to reason about them—in other words, if they had some capacity for meta-ethical reasoning.

These considerations suggest that when one wants to design ethically correct robots one should not only explore the field of automated moral reasoning, but also the field of automated moral meta-reasoning, in the sense of reasoning about moral reasoning. The development track that we foresee is roughly as follows: informal moral reasoning (reasoning about obligations, permissions and prohibitions)—formal moral reasoning (deontic logic)—automated moral reasoning (computational deontic logic)—automated reasoning about automated moral reasoning (computational meta-ethics).

Along this development track, the area of automated moral reasoning has already received some attention. Computational deontic logic has been discussed in the context of computer-supported computer ethics (Hoven and Lokhorst 2002). This work has been used in books on the design of moral machines that can distinguish right from wrong (Wallach and Allen 2008) and engineering autonomous (unsupervised) moral robots that are capable of carrying out lethal behavior (Arkin 2009). Similar research has been described in papers on moral reasoning by ethically correct robots (Arkoudas et al. 2005; Bringsjord et al. 2006).

However, the subject of automated moral meta-reasoning has not received much attention so far. We intend to take some cautious first steps in this area. We are interested in machines that reason about moral reasoning—meta-ethical robots—for two reasons. First, it is an intellectual challenge to think about how one might go about constructing such machines, a challenge that might be called philosophical in the sense that philosophers (inspired by Carnap 1937) are sometimes fond of saying: “anything you can do, I can do meta.” Second, it seems important for practical applications. Ethical robots need ethical standards. However, these standards may need examining, either during the design process or in the deployment stage. To the extent that meta-ethical reasoning can be delegated to computers/robots, the results will be both more easily obtainable and more reliable than they would be if this reasoning were left to the humans who design or deploy them.

We will proceed on the assumption that one can use some of the same computational tools in computational meta-ethics as have been used in computational metaphysics (Fitelson and Zalta 2007) and in meta-reasoning about different systems of modal logic (Rabe et al. 2009). We make this assumption because we have little reason to believe that meta-ethics is essentially different from (let alone more difficult than) metaphysics or metamathematics.

Design of a Meta-Ethical Robot

Nine Modules

For the purpose of illustration, we envisage a robot with nine modules:
  1. Seven non-deontic logical modules.
  2. One deontic module.
  3. One meta-logical module.
The seven non-deontic modules have the following functions.
  1. Module \({\varvec{\mathfrak{S}}_{\rm T}^{\circ}}\) implements system \({\varvec{\mathfrak{S}}_{\rm T}^{\circ}}\), which we define as the weakest traditionally strict classical logic closed under strict modus ponens. \({\varvec{\mathfrak{S}}_{\rm T}^{\circ}}\) has the same language as the propositional calculus, except that there is an additional unary connective \(\square\), read as “necessarily,” and the definitions \(A\rightarrow B\mathop{=}\limits^{\hbox{df}}\square(A\supset B)\) and \(A\leftrightarrow B\mathop{=}\limits^{\hbox{df}} (A\rightarrow B)\,\&(B\rightarrow A)\), where → is strict implication, \(\supset\) classical material implication, and \(\leftrightarrow\) strict equivalence. The axioms and rules of inference are as follows (Chellas and Segerberg 1996):
    • PL The set of all tautologies.

    • \(\square{\text {PL}}\;\{{\square{A}:A \in {\text {PL}}}\} \).

    • US Uniform substitution.

    • \(\hbox{RRSE}_{\rm T}\) From \(A\leftrightarrow B\) and F to infer \(F^{A/B}\), where \(F^{A/B}\) is the result of replacing one occurrence of A in F by B.

    • MP From A and \(A\supset B\) to infer B.

    • SMP From A and \(A\rightarrow B\) to infer B.

    All Lewis systems of strict implication (as studied in Zeman 1973, for example) are traditionally strict classical logics closed under strict modus ponens. Some of these systems and the relations between them are as follows:
    $$ {{\mathfrak{S}}}_{\rm T}^{\circ} \subset {\bf S0.9}^\circ \subset \left\{ \begin{array}{l} {\bf S0.9}\\ {\bf S1}^{\circ}\\ \end{array} \right\} \subset {\bf S1} \subset {\bf S2}\subset {\bf S3}\subset {\bf S4}\subset {\bf S5}. $$
     
  2. Module \({\bf \L}_{\aleph_0}\) implements Łukasiewicz’s infinite-valued logic \({\bf \L}_{\aleph_0}\), which has the following axioms and rules (Malinowski 2001):
    1. \(A\rightarrow (B\rightarrow A)\)
    2. \((A\rightarrow B)\rightarrow ((B\rightarrow C)\rightarrow (A\rightarrow C))\)
    3. \(((A\rightarrow B)\rightarrow B)\rightarrow ((B\rightarrow A)\rightarrow A)\)
    4. \((\sim A \rightarrow \sim B)\rightarrow (B\rightarrow A)\)
    5. \(((A\rightarrow B)\rightarrow (B\rightarrow A))\rightarrow (B\rightarrow A)\)
    6. From A and \(A\rightarrow B\) to infer B.

    Definitions: \(A\lor B\mathop{=}\limits^{\hbox{df}} (A\rightarrow B)\rightarrow B\), \(A\,\& B\mathop{=}\limits^{\hbox{df}}\sim(\sim A\lor \sim B)\), \(A\leftrightarrow B\mathop{=}\limits^{\hbox{df}}(A\rightarrow B)\,\&(B\rightarrow A)\). \({\bf \L}_{\aleph_0}\) is the weakest of all Łukasiewicz systems, in the sense that A is a theorem of \({\bf \L}_{\aleph_0}\) if and only if A is a theorem of all finite-valued Łukasiewicz calculi \({{\bf \L}_n, n\geq 2, n\in{\mathbb{N}}}\). Formally, if Th(S) is the set of theorems of S, then \({Th({\bf \L}_{\aleph_0}) = \bigcap\{Th({\bf \L}_n)\colon n\geq 2, n\in{\mathbb{N}}\}}\) (Malinowski 2001). These systems are nowadays popular in the field of fuzzy logic.

     
  3. Module H implements Heyting’s system of intuitionistic logic H, which has the following axioms and rules (Dalen 2001):
    1. \(A\rightarrow (B\rightarrow A)\)
    2. \((A\rightarrow (B \rightarrow C)) \rightarrow ((A\rightarrow B)\rightarrow (A\rightarrow C))\)
    3. \((A \,\& B) \rightarrow A\)
    4. \((A \,\& B) \rightarrow B\)
    5. \(A\rightarrow (B \rightarrow (A \,\& B))\)
    6. \(A\rightarrow (A\lor B)\)
    7. \(B\rightarrow (A\lor B)\)
    8. \((A\rightarrow C)\rightarrow ((B\rightarrow C)\rightarrow ((A\lor B)\rightarrow C))\)
    9. \(\sim A \rightarrow (A \rightarrow B)\)
    10. \((A \rightarrow B) \rightarrow ( (A\rightarrow \sim B) \rightarrow \sim A)\)
    11. From A and \(A\rightarrow B\) to infer B.
    Intuitionistic logic is related to minimal logic, which we will not study separately.
     
  4. Module R implements the relevant system R, which has the following axioms and rules (Dunn and Restall 2002):
    1. \(A\rightarrow A\)
    2. \((A\rightarrow B)\rightarrow ((C \rightarrow A)\rightarrow (C\rightarrow B))\)
    3. \((A\rightarrow (B \rightarrow C)) \rightarrow ((A\rightarrow B)\rightarrow (A\rightarrow C))\)
    4. \(A\rightarrow ((A\rightarrow B)\rightarrow B)\)
    5. \((A \,\& B) \rightarrow A\)
    6. \((A \,\& B) \rightarrow B\)
    7. \(((A\rightarrow B)\,\& (A\rightarrow C)) \rightarrow (A\rightarrow (B \,\& C))\)
    8. \(A\rightarrow (A\lor B)\)
    9. \(B\rightarrow (A\lor B)\)
    10. \(((A\rightarrow C)\,\& (B\rightarrow C))\rightarrow ((A\lor B)\rightarrow C)\)
    11. \((A\,\&(B\lor C))\rightarrow ((A\,\& B)\lor (A\,\& C))\)
    12. \((A\rightarrow\sim B)\rightarrow (B\rightarrow \sim A)\)
    13. \(\sim\sim A\rightarrow A\)
    14. From A and \(A\rightarrow B\) to infer B.
    15. From A and B to infer \(A\,\& B\).
    Relevance logic is a predecessor of linear logic, which we will not study separately.
     
  5. Module RM implements the relevant system RM, which is defined as R plus the axiom \(A\rightarrow(A\rightarrow A)\).

     
  6. Module RM3 implements the relevant system RM3, which is defined as R plus the axioms \((\sim A\,\& B)\rightarrow (A\rightarrow B)\) and \(A\lor (A\rightarrow B)\) (Dunn and Restall 2002).

     
  7. Module KR implements the relevant system KR, which is defined as R plus the axiom \((A\,\&\sim A) \rightarrow B\) (Dunn and Restall 2002). Systems R, RM, RM3 and KR are related as follows: \({\bf R} \subset {\bf RM} \subset {\bf RM3}\), \({\bf R} \subset {\bf KR}\), \({\bf RM} \nsubseteq {\bf KR}\), \({\bf RM3} \nsubseteq {\bf KR}\), \({\bf KR} \nsubseteq {\bf RM}\), \({\bf KR} \nsubseteq {\bf RM3}\). System KR is famous for being the first relevance logic that was shown to be undecidable.
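As an aside on module 2 above: the finite-valued calculi \({\bf \L}_n\) can be checked mechanically by brute force over their matrices. The following sketch verifies that Łukasiewicz’s axiom 1 always takes the designated value in Ł3 (truth values 0, 1, 2 with 2 designated, and x → y valued as min(2, 2 − x + y); the matrix presentation is standard, but the script itself is ours):

```shell
#!/bin/sh
# Verify that Lukasiewicz's axiom 1, A -> (B -> A), always takes the
# designated value 2 in the 3-valued matrix L3, where the value of
# x -> y is min(2, 2 - x + y).
imp() {
    v=$((2 - $1 + $2))
    if [ "$v" -gt 2 ]; then v=2; fi
    echo "$v"
}
ok=yes
for a in 0 1 2; do
    for b in 0 1 2; do
        [ "$(imp "$a" "$(imp "$b" "$a")")" -eq 2 ] || ok=no
    done
done
echo "axiom 1 designated in L3: $ok"   # prints: axiom 1 designated in L3: yes
```

The same loop, run over all axioms and all \(n\), is in effect what a matrix-based countermodel search does: a formula is refuted as soon as some valuation gives it an undesignated value.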

     
The deontic module Mally embodies the principles of Mally’s deontic logic, the very first system of deontic logic, originally published in 1926 (Lokhorst 2008). We use the following symbolic language:
  • OA: it is obligatory that A.

  • u: that which is unconditionally obligatory.

  • \({\circledR}_{{A}}{{B}} \mathop{=}\limits^{\hbox{df}} A \rightarrow O B\): A requires B.

Mally’s deontic principles are as follows:
  1. \(({\circledR}_{{A}}{{B}}\,\& (B\rightarrow C) )\rightarrow {\circledR}_{{A}}{{C}}\)
  2. \(({\circledR}_{{A}}{{B}}\,\& {\circledR}_{{A}}{{C}})\rightarrow {\circledR}_{{A}}{{(B\,\&\, C)}}\)
  3. \({\circledR}_{{A}}{{B}}\leftrightarrow O (A\rightarrow B)\)
  4. \(Ou\)
  5. \(\sim{\circledR}_{u}\sim u\)
Similar postulates have been proposed by others (Anderson 1967, Castañeda 1981). Mally identified → with material implication, but we let its logical properties depend on the context: → is identical with strict implication in the context of \({\varvec{\mathfrak{S}}_{\hbox {T}}^{\circ}}\), with Łukasiewicz implication in the context of \({\bf \L}_{\aleph_0}\), with intuitionistic implication in the context of H, and with relevant implication in the context of R, RM, RM3 and KR.
Finally, the meta-logical module Meta is the meta-ethical monitor. It selects suitable logical modules according to the following criteria:
  1. Formula T1, \(A\rightarrow O A\) (if A is the case, then A is obligatory), is undesirable.
  2. Formula T2, \(O A\rightarrow A\) (if A is obligatory, then A is the case), is undesirable.
  3. Formula T3, \(O(A \lor B)\rightarrow (O A\lor O B)\) (if it is obligatory that either A or B, then either A is obligatory or B is obligatory), is undesirable.
  4. Formula D, \(O A\rightarrow \sim O\sim A\) (if A is obligatory, then the negation of A is not obligatory), is desirable.
We owe these criteria both to Mally himself and to some of his commentators (see Lokhorst 2008 and Lokhorst 2011).

The nine modules we have mentioned embody insights that were available around 1930. We confine our attention to these ideas because logicians have put forward so many proposals since then that we cannot possibly take all of them into account.

Flexibility of Architecture

The architecture of the robotic reasoning system is flexible (modifiable).
  • The non-deontic modules can be turned on and off.

  • The deontic and meta-ethical modules are fixed.

Modus Operandi

In order to evaluate the logical modules, module Meta is equipped with theorem provers and model generators (which produce counterexamples to formulas). These programs work either concurrently on one processor or in parallel on multiple processors. If the logic under examination is decidable, then given a formula A, either a theorem prover or a model generator will produce a result (if the programs work as advertised).

The Unix shell script displayed in Fig. 1 illustrates this process. The countermodel generator MaGIC (Slaney 2008) and the theorem prover Prover9 (McCune 2008) run concurrently in the background (as indicated by the ampersand & at the end of the respective command lines) until one of them terminates. The value of the argument of sleep is irrelevant because only one of the two programs can terminate if the inputs to the programs are equivalent, the logical systems are consistent and the programs work correctly.
Fig. 1

Simple shell script to execute MaGIC and Prover9 concurrently
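The script of Fig. 1 is not reproduced here; the following is a sketch of the scheme it describes. The backgrounded jobs and the terminating `sleep` come from the description above, but the polling loop is our assumption, and the actual `prover9` and `magic` invocations are replaced by `sleep`/`echo` stand-ins so that the sketch is self-contained:

```shell
#!/bin/sh
# Sketch of the Fig. 1 scheme: a theorem prover and a countermodel
# generator race in the background; the first to terminate settles the
# formula.  The stand-ins below simulate MaGIC finding a countermodel
# (here: immediately) before Prover9 finds a proof.
race() {
    ( sleep 3; echo "PROVER9: proof found" ) &  # stand-in for prover9
    p1=$!
    ( echo "MAGIC: countermodel found" ) &      # stand-in for magic
    p2=$!
    # Poll until one of the two jobs has exited (cf. sleep in Fig. 1).
    while kill -0 "$p1" 2>/dev/null && kill -0 "$p2" 2>/dev/null; do
        sleep 1
    done
    # Terminate the loser; on consistent, equivalent inputs at most one
    # of the two programs can succeed.
    kill "$p1" "$p2" 2>/dev/null
    wait "$p1" "$p2" 2>/dev/null || true
}

race   # prints only the countermodel generator's verdict
```

Whichever job finishes first determines the verdict; the other is killed before it can print anything.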

The scores for the four reference formulas T1, T2, T3 and D determine whether module Meta accepts or rejects the logic under examination. Module Meta accepts the logic as soon as each of the undesirable formulas (T1, T2 and T3) is shown to be invalid by some countermodel generator and the desirable formula D is shown to be derivable by some theorem prover. Module Meta rejects the logic as soon as some undesirable formula (T1, T2 or T3) is shown to be derivable by some theorem prover or the desirable formula D is shown to be invalid by some countermodel generator. Module Meta remains in a state of indecision as long as the logic under examination has not been accepted or rejected. This state of indecision may last forever in the case of R and KR because these systems are undecidable. This would be fatal for a robot on the battlefield or in a hospital, but we have not encountered this situation in the cases we are examining.
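Meta's acceptance rule can be stated as a small decision procedure. The following sketch encodes each benchmark verdict as one of the strings derivable, invalid or unknown (this encoding, and the function name, are ours):

```shell
#!/bin/sh
# Decide the status of a logic from the verdicts for T1, T2, T3 and D.
# Each argument is "derivable" (a prover found a proof), "invalid"
# (a model generator found a countermodel) or "unknown" (still running).
meta_verdict() {  # usage: meta_verdict T1 T2 T3 D
    t1=$1; t2=$2; t3=$3; d=$4
    # Reject as soon as an undesirable formula is derivable,
    # or the desirable formula D is invalid.
    if [ "$t1" = derivable ] || [ "$t2" = derivable ] || \
       [ "$t3" = derivable ] || [ "$d" = invalid ]; then
        echo reject
    # Accept once T1, T2 and T3 are all invalid and D is derivable.
    elif [ "$t1" = invalid ] && [ "$t2" = invalid ] && \
         [ "$t3" = invalid ] && [ "$d" = derivable ]; then
        echo accept
    # Otherwise remain undecided (possibly forever, for undecidable logics).
    else
        echo undecided
    fi
}

meta_verdict derivable derivable derivable derivable   # prints: reject
meta_verdict invalid invalid invalid derivable         # prints: accept
meta_verdict invalid invalid unknown derivable         # prints: undecided
```

Note that a single derivable undesirable formula suffices for rejection, so the procedure can often stop long before all four questions have been settled.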

Meta-Ethical Reasoning Process

In the example that we are going to describe, module Meta contains one theorem prover, namely Prover9 (McCune 2008), and two countermodel generators, namely Mace4 (McCune 2008) and MaGIC (Slaney 2008). The latter program is especially suitable in the case of relevant systems, but it was useful for the refutation of T3 in H as well.

The meta-ethical reasoning process proceeds as follows. The robot starts with modules \({\varvec{\mathfrak{S}}_{\hbox {T}}^{\circ}}\) and Mally. It connects these to module Meta (Fig. 2).
Fig. 2

Meta-ethical reasoning process, stage 1: probing Mally and \({\varvec{\mathfrak{S}}_{\rm T}^{\circ}}\)

Theorem prover Prover9 quickly derives theorems T1, T2, T3 and D. The countermodel generators produce no result. From this, the robot concludes that \({\varvec{\mathfrak{S}}_{\rm T}^{\circ}}\) plus Mally is unacceptable in the light of its meta-ethical standards (see Lokhorst 2010 and Lokhorst 2011 for the details). This concludes the case of \({\varvec{\mathfrak{S}}_{\rm T}^{\circ}}\). The case of \({\bf \L}_{\aleph_0}\) is more difficult. It takes Prover9 several hours to prove that T1 and T2 are theorems of \({\bf \L}_{\aleph_0}\) plus Mally’s axioms; the derivations of these formulas are about a hundred lines long. The derivations of T1 and T2 are presented in the Appendix, if only to demonstrate that it is not advisable to do this kind of reasoning without computer assistance. The remaining cases (H and the relevant systems) are relatively easy, both for computers and humans (even though the refutation of T3 in H required some human ingenuity, as described in Lokhorst 2011).
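For concreteness, here is one way the \({\bf \L}_{\aleph_0}\)-plus-Mally problem might be posed to Prover9, generated from a here-document. The first-order encoding (a theoremhood predicate t, function symbols i, n, o and k for implication, negation, O and conjunction, and the constant c standing for Mally's u) is our assumption; the paper does not show its input files:

```shell
#!/bin/sh
# Write a Prover9 input file for Lukasiewicz logic plus Mally's axioms,
# with T1 (A -> OA) as the goal.  The encoding is an assumed sketch:
# t(x) = "x is a theorem", i = implication, n = negation, o = O,
# k = conjunction (defined), c = Mally's constant u.
cat > mally_L.in <<'EOF'
formulas(assumptions).
  t(i(x,y)) & t(x) -> t(y).                      % modus ponens
  t(i(x,i(y,x))).                                % L1
  t(i(i(x,y),i(i(y,z),i(x,z)))).                 % L2
  t(i(i(i(x,y),y),i(i(y,x),x))).                 % L3
  t(i(i(n(x),n(y)),i(y,x))).                     % L4
  t(i(i(i(x,y),i(y,x)),i(y,x))).                 % L5
  k(x,y) = n(i(i(n(x),n(y)),n(y))).              % A & B := ~(~A v ~B)
  t(i(k(i(x,o(y)),i(y,z)),i(x,o(z)))).           % M1
  t(i(k(i(x,o(y)),i(x,o(z))),i(x,o(k(y,z))))).   % M2
  t(i(i(x,o(y)),o(i(x,y)))).                     % M3, left to right
  t(i(o(i(x,y)),i(x,o(y)))).                     % M3, right to left
  t(o(c)).                                       % M4: Ou
  t(n(i(c,o(n(c))))).                            % M5
end_of_list.
formulas(goals).
  t(i(a,o(a))).                                  % T1: A -> OA
end_of_list.
EOF
grep -c end_of_list mally_L.in   # prints: 2
```

Running `prover9 -f mally_L.in` on such a file is what takes the hours reported above; an analogous file with T2 as the goal is obtained by changing the single goal formula.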
The results for the seven systems and the four benchmark formulas are summed up in the following table, in which + indicates derivability and − invalidity:

[Table: derivability (+) or invalidity (−) of T1 \(A\rightarrow O A\), T2 \(O A\rightarrow A\), T3 \(O(A\lor B)\rightarrow(O A\lor O B)\) and D \(O A\rightarrow\sim O\sim A\) in D1–D5 plus each of \({\varvec{\mathfrak{S}}_{\hbox {T}}^{\circ}}\), \({\bf \L}_{\aleph_0}\), H, R, RM, RM3 and KR.]
The table shows that only KR is acceptable in view of the meta-ethical standards employed by the robot.

Conclusion

We have shown that computational meta-ethics is to some extent feasible, using current, off-the-shelf, generic, freely available open-source software.

The above design of a meta-ethical robot is nothing but a “proof of concept.” Further work is needed. We mention the following themes.
  • Different logical systems, both deontic and non-deontic.

  • Different theorem provers and model generators, for example the award-winning Vampire and Paradox (Rabe et al. 2009).

  • Different meta-ethical criteria, for example, acceptability and unacceptability of alternative deontic principles. Freedom from “is-ought fallacies” would be an example. Is-ought fallacies are formulas of the form \(A\rightarrow O B\) or \(A\rightarrow\sim O B\), where A and B contain no occurrences of O and u. It can be proven that R plus Mally’s axioms is the only system on our list that avoids such fallacies, but we doubt whether the computer is clever enough to see this.

  • Different ways of reasoning about (moral) reasoning, for example in terms of computational complexity and computational tractability.

  • More expressive languages, for example languages with perception, knowledge, action, multiple agents, strategies, intention and time (see Hoven and Lokhorst 2002 and Lokhorst and Hoven 2011 for further discussion). In this context, it is interesting to know that large parts of multimodal correspondence theory have recently been mechanized, which makes it easier to study the interactions of modalities such as belief, seeing to it that, possibility, obligation and temporal notions (Georgiev et al. 2006).

  • On the fringe of logic and beyond logic: probabilistic reasoning, moral belief revision, the moral frame problem, non-monotonic reasoning and non-inferential moral judgment, for example case-based judgment and pattern recognition with neural networks (see Gärdenfors 2005 for more about these topics).

In other words, the design of a fully fledged meta-ethical robot is still a long way off. However, developments in the recent past suggest at least one direction in which we can set out on the long road that lies ahead.

Notes

Open Access

This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.

References

  1. Anderson, A. R. (1967). Some nasty problems in the formal logic of ethics. Noûs, 1, 345–360.
  2. Arkin, R. C. (2009). Governing lethal behavior in autonomous robots. Boca Raton, FL: Chapman & Hall.
  3. Arkoudas, K., Bringsjord, S., & Bello, P. (2005). Toward ethical robots via mechanized deontic logic. In Machine ethics: Papers from the AAAI fall symposium, Technical Report FS-05-06. Menlo Park, CA: AAAI Press.
  4. Bringsjord, S., Arkoudas, K., & Bello, P. (2006). Toward a general logicist methodology for engineering ethically correct robots. IEEE Intelligent Systems, 21(4), 38–44.
  5. Carnap, R. (1937). The logical syntax of language. London: Routledge and Kegan Paul.
  6. Castañeda, H.-N. (1981). The paradoxes of deontic logic: The simplest solution to all of them in one fell swoop. In R. Hilpinen (Ed.), New studies in deontic logic. Dordrecht: Reidel.
  7. Chellas, B. F., & Segerberg, K. (1996). Modal logics in the vicinity of S1. Notre Dame Journal of Formal Logic, 37, 1–24.
  8. Dunn, J. M., & Restall, G. (2002). Relevance logic. In D. Gabbay & F. Guenthner (Eds.), Handbook of philosophical logic (2nd ed., Vol. 6). Dordrecht: Kluwer.
  9. Fitelson, B., & Zalta, E. N. (2007). Steps toward a computational metaphysics. Journal of Philosophical Logic, 36, 227–247.
  10. Gärdenfors, P. (2005). The dynamics of thought. Dordrecht: Springer.
  11. Georgiev, D., Tinchev, T., & Vakarelov, D. (2006). SQEMA (an algorithm for computing first-order correspondences in modal logic), version 0.9.8, September 2006. http://www.fmi.uni-sofia.bg/fmi/logic/sqema/.
  12. Lokhorst, G. J. C. (2008). Mally’s deontic logic. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy. plato.stanford.edu/entries/mally-deontic/.
  13. Lokhorst, G. J. C. (2010). Where did Mally go wrong? Lecture Notes in Artificial Intelligence, 6181, 247–258.
  14. Lokhorst, G. J. C. (2011). Where did Mally go wrong? Journal of Applied Logic (accepted).
  15. Lokhorst, G. J. C., & van den Hoven, M. J. (2011). Responsibility for robots. In P. Lin, K. Abney, & G. Bekey (Eds.), Robot ethics: The ethical and social implications of robotics. Cambridge, MA: MIT Press (forthcoming).
  16. Malinowski, G. (2001). Many-valued logics. In L. Goble (Ed.), The Blackwell guide to philosophical logic. Oxford: Blackwell.
  17. McCune, W. (2008). Prover9 and Mace4, version 2008-11A, November 2008. http://www.cs.unm.edu/~mccune/prover9/.
  18. Moor, J. H. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4), 18–21.
  19. Rabe, F., Pudlák, P., Sutcliffe, G., & Shen, W. (2009). Solving the $100 modal logic challenge. Journal of Applied Logic, 7, 113–130.
  20. Slaney, J. K. (2008). MaGIC (Matrix Generator for Implication Connectives), version 2.2.1, November 2008. users.rsise.anu.edu.au/~jks/magic.html.
  21. Van Dalen, D. (2001). Intuitionistic logic. In L. Goble (Ed.), The Blackwell guide to philosophical logic. Oxford: Blackwell.
  22. Van den Hoven, M. J., & Lokhorst, G. J. C. (2002). Deontic logic and computer-supported computer ethics. Metaphilosophy, 33, 376–386. Reprinted in J. H. Moor & T. W. Bynum (Eds.) (2002), CyberPhilosophy. Oxford: Blackwell.
  23. Wallach, W., & Allen, C. (2008). Moral machines: Teaching robots right from wrong. Oxford: Oxford University Press.
  24. Zeman, J. J. (1973). Modal logic: The Lewis-Modal systems. Oxford: The Clarendon Press. http://www.clas.ufl.edu/users/jzeman/modallogic/.

Copyright information

© The Author(s) 2011

Authors and Affiliations

  1. Section of Philosophy, Faculty of Technology, Policy and Management, Delft University of Technology, GA Delft, The Netherlands
