Formalizing preference utilitarianism in physical world models
Abstract
Most ethical work is done at a low level of formality. This makes practical moral questions inaccessible to the formal and natural sciences and can lead to misunderstandings in ethical discussion. In this paper, we use Bayesian inference to introduce a formalization of preference utilitarianism in physical world models, specifically cellular automata. Even though our formalization is not immediately applicable, it is a first step toward providing ethics, and ultimately the question of how to “make the world better”, with a formal basis.
Keywords
Preference utilitarianism · Formalization · Artificial life · (Machine) ethics
1 Introduction
Usually, ethical imperatives are not formulated precisely enough to study them and their realization mathematically. (McLaren 2011, p. 297; Gips 2011, p. 251) In particular, it is impossible to implement them on an intelligent machine to make it behave benevolently in our universe, the subject of a field known as Friendly AI (e.g. see Yudkowsky 2001, p. 2) or machine ethics (e.g. see Anderson and Anderson 2011, p. 1). Whereas existing formalizations of utilitarian ethics have been applied successfully in economics, they are incomplete because of their dualistic world model, in which agents are assumed to be ontologically fundamental.
We describe the problem of informality in ethics and the shortcomings of previous dualist approaches to formalizing utilitarian ethics (Sect. 2).
We justify cellular automata as a world model, use Bayes’ theorem to extract utility functions from a given space-time embedded agent and introduce a formalization of preference utilitarianism (Sect. 3).
We compare our approach with existing work in ethics, game theory and artificial intelligence (Sect. 4). Our formalization is novel but nevertheless relates to a growing movement to treat agents as embedded in their environment.
2 The problem of formalizing ethics in physical systems
Discussion of informally specified moral imperatives can be difficult because different readers interpret the texts describing the imperative differently. Formalizing moral imperatives could therefore augment informal ethical discussion. (Gips 2011, p. 251; Anderson 2011; Dennett 2006; Moor 2011, p. 19)
Furthermore, science and engineering answer formally described questions and solve well-specified tasks, but are not immediately applicable to the informal question of how to make the world “better”.
This problem has been identified in economics and game theory, which has led to some very useful formalizations of utilitarianism (e.g. Harsanyi 1982).
The classic agent–environment model
In a non-dualist, physical world model, two questions that the classic agent–environment model simply assumes away have to be answered:
What objects are ethically relevant? (What are the agents of our non-dualist world?)
What is a space-time embedded agent’s or, more generally, an object’s utility function?
3 A Bayesian approach to formalizing preference utilitarianism in physical systems
3.1 Cellular automata as non-dualist world models
To overcome the described problems of dualist approaches to utilitarianism, we first have to choose a new, physical setting for our ethical imperative. Instead of employing string theory or other contemporary theoretical frameworks, we choose a model that is much simpler to handle formally: cellular automata. These have sometimes even been pointed out as candidates for modeling our own universe, (Wolfram 2002, ch. 9; Schmidhuber 1999; Zuse 1967, 1969) but even if physics eventually proves cellular automata to be a wrong model, they may still be of instrumental value for the purpose of this paper. (compare Downey 2012, pp. 70f., 77–79; Hawking and Mlodinow 2010, ch. 8)
For detailed introductions to classic cellular automata with neighbor-based rules, see Wolfram (2002) or Shiffman (2012, ch. 7) for a purely informal and Wolfram (1983) for a slightly more technical treatment that focuses on one-dimensional cellular automata. In Sect. 3.1.1, we will consider a generalized and relatively simple formalism, which is not limited to rules that only depend on neighbors of a cell.
In cellular automata, it is immediately clear that a (preference) utilitarian morality has to answer the questions that are otherwise avoided by assuming a set of agents and their utility functions to be given from the start. The setting also frees us from many ethical intuitions that we build up specifically for our own living situations and reduces moral intuition to its very fundamentals.
A state of a two-dimensional cellular automaton. It is very unclear what agents are and which preferences they have. Adapted from http://en.wikipedia.org/wiki/Conway%27s_Game_of_Life#mediaviewer/File:Conways_game_of_life_breeder
3.1.1 A formal introduction to cellular systems
We now introduce some very basic notation and terminology of cellular systems, a generalization of classic cellular automata, thus setting the scene for our ethical imperative.
For given sets A and B, let \(A^B\) denote the set of functions from B to A. A cellular system is a triple (C, S, d) of a countable set of cells C, a finite set of cell states S and a function \(d: S^C \rightarrow S^C\) that maps a world state \(s:C\rightarrow S\) onto its successor. A world thus consists of a set of cells that can take different values and a function that models deterministic state transitions.2
Cells of cellular systems do not necessarily have to be on a regular grid and computing new states does not have to be done via neighbor-based lookup tables. This makes formalization much easier.
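As a minimal illustration (not part of the formalism itself), the triple (C, S, d) can be sketched in code for a finite toy world; the cell set, state set and rule below are arbitrary choices, and the rule is deliberately not neighbor-based.

```python
# A minimal sketch of a cellular system (C, S, d) with a finite cell set so that
# it can be simulated; the particular rule is illustrative only.

CELLS = range(8)     # C: the (here finite) set of cells
STATES = {0, 1}      # S: the finite set of cell states

def d(state):
    """Transition function d: S^C -> S^C. Each cell takes the XOR of its own
    value and the value of a fixed 'partner' cell elsewhere in the world,
    so the rule is not a neighbor-based lookup table."""
    return {c: state[c] ^ state[(3 * c) % len(CELLS)] for c in CELLS}

s = {c: c % 2 for c in CELLS}   # a world state s: C -> S
s_next = d(s)                   # its successor under d
```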
Before anything else, however, we have to define structures, which represent objects in our cellular systems. A space \({Spc}\subseteq C\) in a cellular system (C, S, d) is a finite subset of the set of cells C. A structure \({str}\) on a space \({Spc}\) is a function \({str}:{Spc}\rightarrow S\) that maps the cells of the space onto cell values.
A history is a function \(h:{\mathbb {N}}\rightarrow S^C\) that maps natural numbers as time steps onto states of the system. For example, the history \(h_s\) of an initial state s can then be defined recursively by \(h_s(n)=d(h_s(n-1))\) for \(n\ge 1\) with the base case \(h_s(0)=s\).
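Continuing the same toy sketch, the history \(h_s\) and the restriction of a world state to a space can be written as follows; the space and time step chosen below are arbitrary.

```python
# Sketch continuing the toy cellular system above: compute h_s(n) by iterating d,
# and read off a structure as the restriction of a world state to a space Spc.

def history(s, d, n):
    """Return h_s(n), i.e. the world state after n applications of d to s."""
    state = dict(s)
    for _ in range(n):
        state = d(state)
    return state

SPACE = {2, 3, 4}                                # Spc: a finite subset of C
state_at_5 = history(s, d, 5)                    # h_s(5)
structure = {c: state_at_5[c] for c in SPACE}    # str: Spc -> S at time step 5
```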
3.2 Posterior probabilities and the priority of a (given) goal to a given agent
Before extracting preferences from a given structure, we have to decide on a model of preferences. Preferences themselves are mere orderings of alternative outcomes, or of lotteries over these outcomes, where the outcomes in our case are entire histories \(h\in (S^C)^{{\mathbb {N}}}\). The problem is that this makes it difficult to compare two outcomes when the preferences of multiple individuals are involved. To be able to make such comparisons, we move from orderings to utility functions \(u:(S^C)^{{\mathbb {N}}} \rightarrow {\mathbb {R}}\) that map histories of the world onto their (cardinal) utilities.3 This makes it possible to add up the utilities of different individuals and then compare the sums among outcomes. It by no means “solves” the problem of interpersonal comparison; rather, it makes the problem more explicit. For example, a given set of preferences is represented equally well by u and \(2\cdot u\), but ceteris paribus \(2\cdot u\) will make the preferences more significant in summation. Different approaches to the problem have been proposed. (Hammond 1989) In this paper we will ignore it (or hope that the fair treatment in determining all individuals’ utility functions induces moral permissibility).

Now we ask the question: Does a particular structure str want to maximize some utility function u?
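Before turning to that question, the comparison problem just mentioned can be illustrated with toy numbers; the histories and utility values below are made up and not derived from any cellular system.

```python
# Toy illustration of the interpersonal-comparison problem: u and a positively
# scaled copy of u represent the same individual preferences, but they weigh
# differently once utilities are summed across individuals.

u_alice = {"history_A": 1.0, "history_B": 3.0}   # Alice prefers B to A
u_bob = {"history_A": 2.0, "history_B": 1.0}     # Bob prefers A to B

def total(utilities, h):
    """Sum the utilities of all individuals for history h."""
    return sum(u[h] for u in utilities)

# With these representations the sum favours history B ...
assert total([u_alice, u_bob], "history_B") > total([u_alice, u_bob], "history_A")

# ... but scaling Bob's (equally valid) representation flips the verdict.
u_bob_scaled = {h: 3 * v for h, v in u_bob.items()}
assert total([u_alice, u_bob_scaled], "history_A") > total([u_alice, u_bob_scaled], "history_B")
```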
It is fruitful to think about how one would approach such questions in our world when encountering some very odd organism. At least one possible approach would be to put it into different situations or environments and observe what it does to them. If the structure increases the value of some potential utility function across different environments, it seems as if this utility function represents an aspect of the structure’s preferences.4
However, for some utility functions an increase in value is nothing special, and it might be mere coincidence that the structure in question increases them as well. For example, it is usually not considered a structure’s preference to increase entropy, even if entropy increases in environments containing the structure, because an increase in entropy is extremely common with or without it.
Also, we feel that some utility functions are less likely than others a priori, e.g. because they are very complex or specific.
But how can we formally capture these notions?
One natural approach is to ask for the (posterior) probability P(u|str@i) that the utility function u is a goal of the structure str at time step i. The hypothesis conditioned on can be read in at least two ways:
The utility function u is the goal of structure str.
Maximizing u was the goal of an entity that chose str.
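By Bayes’ theorem, and as a sketch that uses the assumption (discussed in the footnotes) that the candidate utility functions are mutually exclusive and collectively exhaustive, this posterior can be written as

\[
P(u \mid {str}@i) \;=\; \frac{P({str}@i \mid u)\,P(u)}{P({str}@i)}
\;=\; \frac{P({str}@i \mid u)\,P(u)}{\sum_{u'} P({str}@i \mid u')\,P(u')}.
\]

Neither the likelihood P(str@i|u) nor the prior P(u) is fixed by the cellular system itself.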
Nonetheless, there seem to be canonical approaches. P(str@i|u) should be understood as the probability that str is chosen at time step i by an approximately rational agent that wants to maximize u. So structures that are better at maximizing u, or more suitable for it, should receive higher values of P(str@i|u). This corresponds to the assumption of (approximate) rationality in Dennett’s intentional stance. (Dennett 1989, pp. 21, 49f.) Unfortunately, the debate about causal and evidential decision theory (e.g. Peterson 2009, ch. 9) shows that formalizing the notion of rational choice is difficult.
The prior over utility functions P(u), on the other hand, should denote the “intrinsic plausibility” of a goal u. That does not have to mean defining and excluding “evil” or “banal” utility functions. In the preference-extraction context, utility functions are models or hypotheses that explain the behavior of a structure, and Solomonoff’s formalization of Occam’s razor is often cited as a universal prior distribution over hypotheses. (Legg 1997) It assigns lower probability to complicated hypotheses (utility functions), i.e. ones that require more symbols to describe in some programming language, than to simpler ones.
Finally, note how Bayes’ theorem captures our intuitions from above, especially under probability distributions similar to the suggested ones: When some structure str maximizes some utility function u very well, then P(str@i|u), and thereby the relevance of the utility function to the object, increases. If, on the other hand, many other structures are comparably good, then the probability of each one being chosen given the utility function is smaller (because the probabilities of all possible structures on a given space and time step sum to 1), and the probability of the utility function being a real preference decreases with it. Finally, multiplying by P(u) penalizes abstruse utility functions, e.g. utility functions that are specifically tailored to be fulfilled by the structure in question.
3.3 An individual structure’s welfare function
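Given the posterior P(u|str@i) from Sect. 3.2, a natural welfare measure for a structure str at time step i, applied to a history h, is the posterior-weighted sum over candidate utility functions; written out only as a sketch consistent with the surrounding discussion:

\[
E\!\left[u_{{str}@i}\right](h) \;=\; \sum_{u} P(u \mid {str}@i)\,u(h).
\]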
We call this term expected utility, because that is the common name for a sum of utilities weighted by their probabilities. However, the term usually suggests that there is also an actual utility. In our case of ascribing preferences to physical objects, no such thing exists. We only imagine that there are some real utility or welfare functions and that Bayesian inference is used to find them. In fact, the structure itself is all that exists, and thus the expected utility is as actual as it gets.
The sum in the term for expected utility ranges over an uncountably infinite set and can therefore converge only if at most countably many summands are non-zero.7 Some other concerns are described in footnote 9 and addressed in footnote 10.
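As a purely computational sketch, the posterior and the expected utility can be approximated by restricting attention to a finite, hand-picked set of candidate utility functions; the prior and likelihood functions below are placeholders for the informally described P(u) and P(str@i|u), and all names are illustrative.

```python
# Sketch: posterior over a finite set of candidate utility functions and the
# resulting expected utility of a history for one structure at one time step.
# 'likelihood' plays the role of P(str@i | u) and 'prior' the role of P(u);
# both are placeholders that a user of this sketch would have to supply.

def posterior(structure, i, candidates, prior, likelihood):
    """Approximate P(u | str@i) over a finite candidate set via Bayes' theorem."""
    weights = {u: likelihood(structure, i, u) * prior(u) for u in candidates}
    total = sum(weights.values())
    return {u: w / total for u, w in weights.items()} if total > 0 else weights

def expected_utility(h, structure, i, candidates, prior, likelihood):
    """Posterior-weighted sum of u(h) over the candidate utility functions."""
    post = posterior(structure, i, candidates, prior, likelihood)
    return sum(p * u(h) for u, p in post.items())
```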
3.4 Summing over all agents
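Only a sketch of this step is given here: assuming the expected utility from Sect. 3.3, the global welfare of a history h can be obtained by summing the expected utilities of all structures that occur in h, over all (finite) spaces and all time steps:

\[
W(h) \;=\; \sum_{i \in \mathbb{N}} \;\; \sum_{\substack{{Spc} \subseteq C \\ {Spc}\ \text{finite}}} \;\; \sum_{u} P\!\left(u \,\middle|\, h(i)|_{{Spc}}@i\right) u(h),
\]

where \(h(i)|_{{Spc}}\) denotes the structure obtained by restricting the state \(h(i)\) to the space Spc. Constant “don’t care” utility functions contribute equally to every history and therefore do not affect comparisons between histories.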
4 Related work
Preferentist utilitarianism became a common form of utilitarianism in the second half of the 20th century, with Hare and Singer as its best-known proponents. However, the intuitions underlying the presented formalization differ from the most common ethical intuitions in preference utilitarianism. Since our formal preference utilitarianism is not meant to describe a decision procedure for humans (or, more generally and in Hare’s (1981, pp. 44f.) terminology, non-“archangels”), we do not consider an application-oriented utilitarianism like Hare’s two-level consequentialism. (Hare 1981, p. 25ff.) Also, most preference utilitarians ascribe preferences only to humans (or abstract agents) and do not prioritize among individuals, (Harsanyi 1982, p. 46) or they use a small number of classes of moral standing. (Singer 1993, pp. 101ff., 283f.) Whereas some have pointed out that a variety of behavior and even trivial systems can be viewed from an “intentional stance”, (Dennett 1971, 1989, especially pp. 29f.; compare Hofstadter 2007, pp. 52ff.) only relatively recent articles in preference utilitarianism have discussed the connection between goal-directed behavior and ethically relevant preferences and, given the universality of the former, pointed out the potential universality of the latter. (Tomasik 2015b, ch. 7; Tomasik 2015a, ch. 4, 6; Tomasik 2015c) This idea is an important step when formalizing preference utilitarianism, because otherwise one would have to define moral standing in terms of other, usually binary, notions: being alive, the ability to suffer (Bentham 1823, ch. 17 note 122), personhood (Gruen 2014, ch. 1), free will, sentience and (self-)consciousness (Singer 1993, pp. 101ff.) or the capacity for moral judgment. However, all of these seem very difficult to define (universally) in physical systems in the intended binary sense.11 Also, continuous definitions of these terms are often connected with goal-directed behavior. (Tomasik 2015a, ch. 4; Wolfram 2002, p. 1136)
In Artificial Intelligence, the idea of learning preferences has become more popular, e.g. see Fürnkranz and Hüllermeier (2010) and Nielsen and Jensen (2004) for technical treatments or Bostrom (2014, pp. 192ff.) for an introduction in the context of making an AI do what the engineers value. However, most of the time, the agent is still presumed to be separated from the environment.
Nonetheless, the idea of evaluating space-time embedded intelligence, which is closely related to the probability distribution P(str@i|u), is beginning to be established in artificial (general) intelligence. (Orseau and Ring 2012)
5 Conclusion
Because of the sums over all structures, possible utility functions and states, and because of incomputable ingredients like Solomonoff’s prior in P(u), our formalization is incomputable in theory and in practice. So even in simulations of cellular automata our formalization is not immediately applicable.
Computing our global welfare function in the real world is even more difficult, because it requires full information about the world at the particle level. Also, the formalization would first have to be translated into the physical laws of our universe.
The difficulty of applying our formalization is by no means relevant only to actually using it as a moral imperative. It is also relevant to discussing the formalization from a normative standpoint: Even though the derivation of our formalization is plausible, it may still differ significantly from intuition. There could, for example, be some kind of trivial agents with trivial preferences that dominate the comparison of different histories. Because the formalization’s incomputability makes it difficult to assess whether such problems are present, further work on its potential flaws is necessary. Based on such discussion, our formalization may be revised or even discarded. In any case, we could learn a lot from its shortcomings, especially given the formalization’s simplicity and plausible derivation.
We outlined how P(str@i|u) and P(u) could be determined in principle. However, they need to be specified more formally, which in the case of P(str@i|u) seems to require a solution to the problem of normative decision theory. Some problems of our formalization could inspire additional refinements of these distributions.
Footnotes
For introductions to and ethical discussions of the underlying notion of preference utilitarianism see Tomasik (2015a, b).
The choice of deterministic systems was made primarily to simplify the formalization. It appears to be unproblematic to transfer formal preference utilitarianism to non-deterministic systems, but defining non-deterministic cellular automata themselves is a little more difficult.
Other codomains of utility functions seem possible as long as they are subsets of a totally ordered vector space over \({\mathbb {R}}\). Intervals like [0, 1] seem specifically suitable, because they avoid problems of infinite utility and allow for normalization. (Isbell 1959)
Alternatively, one can try to avoid this hypothetical experiment by predicting the organism’s behavior. For example, one could try to ask the organism what it would do or infer its typical behavior from its internals.
Including i in the data is important, because otherwise identical structures at different points in time would have identical utility functions. This is a problem when the utility function u is applied to the whole history, because then structures cannot have preferences about themselves (“personal happiness”) without also having preferences about all other identical structures (at the same place). An alternative would be to apply utility functions only to the part of the history from the point of the existence of the structure onwards, so that identical structures at different points in time have equal utility functions that are applied differently. However, this seems to neglect that the past can depend on an agent’s action in the present, as illustrated in Newcomb’s paradox by Nozick (1969).
Intuitively, some structures can have more than one utility function, while others have no utility function at all. One way to model this would be to understand different utility functions as events in separate sample spaces. So, the sum \(\sum _u P(u|{str})\) could vary among different structures str. A similar scenario is the inference of multiple diseases from a set of symptoms. (Charniak 1983) While some individuals may have no diseases or preferences at all, others may be thought of as having more than one disease or utility function. In more technical terms, for each utility function there would be a sample space of having that utility function and not having that utility function.
In this paper however, we will assume mutual exclusivity and collective exhaustiveness of utility functions. All utility functions live in the same sample space and thus \(\sum _u P(u|{str}) =1\) for all structures str. This does not mean that all structures have equal moral standing: The idea is that “meaningless” structures str have high P(u|str) only for constant utility functions u, i.e. for “don’t care”-utility functions, which are irrelevant for decision making.
If Solomonoff’s prior is chosen for P(u), all incomputable utility functions have zero probability. Since the set of computable functions is countable, only countably many summands could possibly be non-zero.
Specifically, the Riemann series theorem states that a conditionally convergent series can be rearranged so that it converges to any given value or diverges.
Acknowledgments
I am grateful to Brian Tomasik for giving me important comments that led me to systematize my formalization. I also thank Adrian Hutter for an interesting discussion on the formalization, as well as Alina Mendt, Duncan Murray, Henry Heinemann, Juliane Kraft and Nils Weller for reading and commenting on earlier versions of the paper. I owe thanks to the two anonymous reviewers whose comments and suggestions helped improve and clarify this manuscript.
Copyright information
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.



