Formalizing preference utilitarianism in physical world models
Abstract
Most ethical work is done at a low level of formality. This makes practical moral questions inaccessible to formal and natural sciences and can lead to misunderstandings in ethical discussion. In this paper, we use Bayesian inference to introduce a formalization of preference utilitarianism in physical world models, specifically cellular automata. Even though our formalization is not immediately applicable, it is a first step in providing ethics and ultimately the question of how to “make the world better” with a formal basis.
Keywords
Preference utilitarianism · Formalization · Artificial life · (Machine) ethics
1 Introduction
Usually, ethical imperatives are not formulated with sufficient precision to study them and their realization mathematically. (McLaren 2011, p. 297; Gips 2011, p. 251) In particular, it is impossible to implement them on an intelligent machine to make it behave benevolently in our universe, which is the subject of a field known as Friendly AI (e.g. see Yudkowsky 2001, p. 2) or machine ethics (e.g. see Anderson and Anderson 2011, p. 1). Whereas existing formalizations of utilitarian ethics have been successfully applied to economics, they are incomplete due to the nature of their dualistic world model in which agents are assumed to be ontologically fundamental.

We describe the problem of informality in ethics and the shortcomings of previous dualist approaches to formalizing utilitarian ethics (Sect. 2).

We justify cellular automata as a world model, use Bayes’ theorem to extract utility functions from a given spacetime embedded agent and introduce a formalization of preference utilitarianism (Sect. 3).

We compare our approach with existing work in ethics, game theory and artificial intelligence (Sect. 4). Our formalization is novel but nevertheless relates to a growing movement to treat agents as embedded into the environment.
2 The problem of formalizing ethics in physical systems
Discussion of informally specified moral imperatives can be difficult due to different interpretations of the texts describing the imperative. Thus, formalizing moral imperatives could augment informal ethical discussion. (Gips 2011, p. 251; Anderson 2011; Dennett 2006; Moor 2011, p. 19)
Furthermore, science and engineering answer formally described questions and solve well-specified tasks, but they are not immediately applicable to the informal question of how to make the world “better”.
This problem has been identified in economics and game theory, which has led to some very useful formalizations of utilitarianism (e.g. Harsanyi 1982).

What objects are ethically relevant? (What are the agents of our non-dualist world?)

What is a spacetime embedded agent’s or, more generally, an object’s utility function?
3 A Bayesian approach to formalizing preference utilitarianism in physical systems
3.1 Cellular automata as non-dualist world models
To overcome the described problems of dualist approaches to utilitarianism, we first have to choose a new, physical setting for our ethical imperative. Instead of employing string theory or other contemporary theoretical frameworks, we choose a model that is much simpler to handle formally: cellular automata. These have sometimes even been pointed out to be candidates for modeling our own universe, (Wolfram 2002, ch. 9; Schmidhuber 1999; Zuse 1967, 1969) but even if physics proves cellular automata to be a wrong model, they may still be of instrumental value for the purposes of this paper. (compare Downey 2012, pp. 70f., 77–79; Hawking and Mlodinow 2010, ch. 8)
For detailed introductions to classic cellular automata with neighbor-based rules, see Wolfram (2002) or Shiffman (2012, ch. 7) for a purely informal and Wolfram (1983) for a slightly more technical treatment that focuses on one-dimensional cellular automata. In Sect. 3.1.1, we will consider a generalized and relatively simple formalism, which is not limited to rules that only depend on neighbors of a cell.
In cellular automata, it is immediately clear that for a (preference) utilitarian morality we have to answer the questions that are avoided by assuming a set of agents and their utility functions to be known from the beginning. This setting also frees us from many ethical intuitions that we have built up specifically for our own living situations and reduces moral intuition to its very fundamentals.
3.1.1 A formal introduction to cellular systems
We now introduce some very basic notation and terminology of cellular systems, a generalization of classic cellular automata, thus setting the scene for our ethical imperative.
For given sets A and B, let \(A^B\) denote the set of functions from B to A. A cellular system is a triple (C, S, d) of a countable set of cells C, a finite set of cell states S and a function \(d\!: S^C \rightarrow S^C\) that maps a world state \(s\!:C\rightarrow S\) onto its successor. So basically, a world consists of a set of cells that can have different values and a function that models deterministic state transitions.^{2}
Cells of cellular systems do not necessarily have to be on a regular grid and computing new states does not have to be done via neighborbased lookup tables. This makes formalization much easier.
But before anything else, we have to define structures which represent objects in our cellular systems. A space \({Spc}\subseteq C\) in a cellular system (C, S, d) is a finite subset of the set of cells C. A structure str on a space Spc is a function \({str} :{Spc}\rightarrow S\) that maps the cells of the space onto cell values.
A history is a function \(h:{\mathbb {N}}\rightarrow S^C\) that maps natural numbers as time steps onto states of the system. For example, the history \(h_s\) of an initial state s can then be defined recursively by \(h_s(n)=d(h_s(n-1))\) for \(n\ge 1\) with the base case \(h_s(0)=s\).
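To make the formalism concrete, the following sketch implements a cellular system (C, S, d) and the history \(h_s\) for the elementary rule 110 on a small ring of cells. The language (Python) and all names (`RULE_110`, `d`, `history`) are our own illustrative choices, not part of the formalization.

```python
# Sketch: a cellular system (C, S, d) with cells C = {0, ..., 6} on a
# ring, states S = {0, 1}, and d induced by the elementary rule 110.
RULE_110 = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
            (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

def d(s):
    """The successor function d: S^C -> S^C (cyclic boundary)."""
    n = len(s)
    return [RULE_110[(s[(i - 1) % n], s[i], s[(i + 1) % n])]
            for i in range(n)]

def history(s, steps):
    """The history h_s with h_s(0) = s and h_s(n) = d(h_s(n - 1))."""
    h = [s]
    for _ in range(steps):
        h.append(d(h[-1]))
    return h

h = history([0, 0, 0, 1, 0, 0, 0], steps=3)  # h[i] is the state h_s(i)
```

Here d happens to be a neighbor-based rule only for familiarity; the definition above allows any function \(S^C \rightarrow S^C\).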
3.2 Posterior probabilities and the priority of a (given) goal to a given agent
Before extracting preferences from a given structure, we have to decide on a model of preferences. Preferences themselves are mere orderings of alternative outcomes or lotteries over these outcomes with the outcomes being entire histories \(h\in (S^C) ^{{\mathbb {N}}}\) in our case. The problem is that this makes it difficult to compare two outcomes when the preferences of multiple individuals are involved. To be able to make such comparisons, we move from orderings to utility functions \(u:( S^C)^{{\mathbb {N}}} \rightarrow {\mathbb {R}}\) that map histories of the world onto their (cardinal) utilities.^{3} This will make it possible to just add up the utilities of different individuals and then compare the sum among outcomes. This by no means “solves” the problem of interpersonal comparison. Rather, it makes it more explicit. For example, a given set of preferences is represented equally well by u and \(2\cdot u\), but ceteris paribus \(2\cdot u\) will make the preferences more significant in summation. Different approaches to the problem have been proposed. (Hammond 1989) In this paper we will ignore the problem (or hope that the fair treatment in determining all individuals’ utility functions induces moral permissibility). Now we ask the question: Does a particular structure str want to maximize some utility function u?
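The move from orderings to cardinal utilities, and the scaling problem it makes explicit, can be illustrated with a minimal sketch; the two individuals, their utility functions and the toy "histories" here are entirely hypothetical.

```python
# Toy illustration: two histories and two individuals' cardinal
# utility functions over them.
h1 = ("state_a", "state_b")
h2 = ("state_c", "state_d")

def u_alice(h):  # prefers h1
    return 1.0 if h == h1 else 0.0

def u_bob(h):    # prefers h2
    return 1.0 if h == h2 else 0.0

def total(h, weights=(1.0, 1.0)):
    """Summed utility of history h across both individuals."""
    wa, wb = weights
    return wa * u_alice(h) + wb * u_bob(h)

# With equal weights the histories tie; doubling Alice's utility
# function (which represents the *same* ordinal preferences) breaks
# the tie -- the interpersonal-comparison problem made explicit.
tie = total(h1) == total(h2)
scaled = total(h1, weights=(2.0, 1.0)) > total(h2, weights=(2.0, 1.0))
```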
It is fruitful to think about how one would approach such questions in our world, when encountering some very odd organism. At least one possible approach would be to put it into different situations or environments and see what it does to them. If the structure increases some potential utility function in different environments, it seems as if this utility function represents an aspect of the structure’s preferences.^{4}
However, for some utility functions it is not very special that their values are increased and then it might just be coincidence that the structure in question also does so. For example, it is usually not considered a structure’s preference to increase entropy even if entropy increases in environments including this structure, because an increase in entropy is extremely common with or without the structure.
Also, we feel that some utility functions are less likely than others by themselves, e.g. because they are very complex or specific.
But how can we formally capture these notions?

The utility function u is the goal of structure str.

Maximizing u was the goal of an entity that chose str.
Nonetheless, there seem to be canonical approaches. \(P({str}@i \mid u)\) should be understood as the probability that str is chosen at time step i by an approximately rational agent that wants to maximize u. So, structures that are better at maximizing or more suitable for u should receive higher \(P({str}@i \mid u)\) values. This corresponds to the assumption of (approximate) rationality in Dennett’s intentional stance. (Dennett 1989, pp. 21, 49f.) Unfortunately, the debate about causal and evidential decision theory (e.g. Peterson 2009, ch. 9) shows that formalizing the notion of rational choice is difficult.
The prior of utility functions P(u) on the other hand should denote the “intrinsic plausibility” of a goal u. That does not have to mean defining and excluding “evil” or “banal” utility functions. In the preference extraction context, utility functions are models or hypotheses that explain the behavior of a structure. And Solomonoff’s formalization of Occam’s razor is often cited as a universal prior distribution of hypotheses. (Legg 1997) It assumes complicated hypotheses (utility functions), i.e. ones that require more symbols to be described in some programming language, to be less likely than simpler ones.
Finally, note how Bayes’ theorem captures our intuitions from above, especially when assuming probability distributions similar to the suggested ones: When some structure str maximizes some utility function u very well, then \(P({str}@i \mid u)\) and thereby the relevance of the utility function to the object would increase. On the other hand, if many other structures are comparably good, then the probability for each one to be chosen when given the utility function is smaller (due to the sum of the probabilities of all possible structures on a given space and time step being 1) and the probability of the utility function being a real preference would decrease with it. Finally, multiplying by P(u) catches abstruse utility functions, e.g. utility functions that are specifically suited to be fulfilled by the structure in question.
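As an illustration of this Bayesian reasoning, the following toy computation assumes a finite set of structures and utility functions, a softmax likelihood as a crude stand-in for "chosen by an approximately rational agent", and a simplicity-favoring prior; none of these concrete choices (nor the names `scores`, `u_simple`, `u_complex`) are prescribed by the formalization.

```python
import math

# Hypothetical scores: how well each structure maximizes each utility
# function. Under u_simple only str_a does well; under u_complex all
# three structures do equally well.
scores = {
    "u_simple":  {"str_a": 3.0, "str_b": 0.0, "str_c": 0.0},
    "u_complex": {"str_a": 3.0, "str_b": 3.0, "str_c": 3.0},
}
prior = {"u_simple": 0.7, "u_complex": 0.3}  # P(u), favoring simplicity

def likelihood(s, u):
    """P(str | u): softmax over all structures, so it sums to 1."""
    z = sum(math.exp(v) for v in scores[u].values())
    return math.exp(scores[u][s]) / z

def posterior(u, s):
    """P(u | str) via Bayes' theorem."""
    num = likelihood(s, u) * prior[u]
    den = sum(likelihood(s, v) * prior[v] for v in prior)
    return num / den

# str_a maximizes u_simple distinctively; under u_complex many other
# structures are comparably good, so each is less likely to be the
# chosen one, and the posterior favors u_simple for str_a.
winner = posterior("u_simple", "str_a") > posterior("u_complex", "str_a")
```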
3.3 An individual structure’s welfare function
We call this term expected utility, because this expression is generally used for adding utilities weighted by their likelihood, which is a common concept. However, the term usually suggests that there is also an actual utility. In our case of ascribing preferences to physical objects, however, no such thing exists. We only imagine there to be some real utility or welfare functions, which we use Bayesian inference to find. But in fact, the structure itself is all there exists, and thus the expected utility is as actual as possible.
The sum in the term for expected utility is over an uncountably infinite set, which can only converge when only countably many summands are nonzero.^{7} Some other concerns are described in footnote 9 and addressed in footnote 10.
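A minimal sketch of this expected-utility term, truncating the sum over utility functions to two hypothetical candidates with assumed posterior weights (in the formalization the sum ranges over all utility functions, with only countably many nonzero summands):

```python
# Expected utility of a history for one structure: a weighted sum of
# candidate utility functions, weighted by their posterior P(u | str@i).
def u_constant(h):   # a "don't care" utility function
    return 5.0

def u_caring(h):
    return float(h.count("good"))

posterior = [(0.9, u_constant), (0.1, u_caring)]  # (P(u | str@i), u)

def expected_utility(h):
    return sum(p * u(h) for p, u in posterior)

h1 = ["good", "good", "bad"]
h2 = ["bad", "bad", "bad"]

# The constant component shifts both values equally, so only the
# "caring" component matters when comparing histories.
better = expected_utility(h1) > expected_utility(h2)
```

Note how the constant utility function, despite carrying most of the posterior weight, is irrelevant to the comparison of histories.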
3.4 Summing over all agents
4 Related work
Preferentist utilitarianism has become a common form of utilitarianism in the second half of the 20th century, with the best known proponents being Hare and Singer. However, the intuitions underlying the presented formalization are different from the most common ethical intuitions in preference utilitarianism. Since our formal preference utilitarianism is not meant to describe a decision procedure for humans (or, more generally and in Hare’s (1981, pp. 44f.) terminology, non-“archangels”), we do not consider an application-oriented utilitarianism like Hare’s two-level consequentialism. (Hare 1981, p. 25ff.) Also, most preference utilitarians ascribe preferences only to humans (or abstract agents) and do not include prioritization among individuals, (Harsanyi 1982, p. 46) or they use a low number of classes of moral standing. (Singer 1993, pp. 101ff., 283f.) Whereas some have pointed out that a variety of behavior and even trivial systems can be viewed from an “intentional stance”, (Dennett 1971, 1989, especially pp. 29f.; compare Hofstadter 2007, pp. 52ff.) only relatively recent articles in preference utilitarianism have discussed the connection between goal-directed behavior and ethically relevant preferences and, given the universality of the former, pointed out the potential universality of the latter. (Tomasik 2015b, ch. 7; Tomasik 2015a, ch. 4, 6; Tomasik 2015c) This idea is an important step when formalizing preference utilitarianism, because otherwise one would have to define moral standing depending on other, usually binary, notions: being alive, the ability to suffer (Bentham 1823, ch. 17, note 122), personhood (Gruen 2014, ch. 1), free will, sentience and (self-)consciousness (Singer 1993, pp. 101ff.) or the ability of moral judgment. However, all of them seem to be very difficult to define (universally) in physical systems in the intended binary sense.^{11} Also, continuous definitions of these terms are often connected with goal-directed behavior. (Tomasik 2015a, ch. 4; Wolfram 2002, p. 1136)
In Artificial Intelligence, the idea of learning preferences has become more popular, e.g. see Fürnkranz and Hüllermeier (2010) and Nielsen and Jensen (2004) for technical treatments or Bostrom (2014, pp. 192ff.) for an introduction in the context of making an AI do what the engineers value. However, most of the time, the agent is still presumed to be separated from the environment.
Nonetheless, the idea of evaluating spacetime-embedded intelligence is beginning to be established in artificial (general) intelligence, (Orseau and Ring 2012) which is closely related to the probability distribution \(P({str}@i \mid u)\).
5 Conclusion

Through sums over all structures, possible utility functions and states, and through the application of incomputable concepts like Solomonoff’s prior in P(u), our formalization is incomputable in both theory and practice. So even in simulations of cellular automata, our formalization is not immediately applicable.

Computing our global welfare function in the real world is even more difficult, because it requires full information about the world at the particle level. Also, the formalization must first be translated into the physical laws of our universe.

The difficulty of applying our formalization is by no means relevant only to actually using it as a moral imperative. It is also relevant to discussing our formalization from a normative standpoint: Even though the derivation of our formalization is plausible, it may still differ significantly from intuition. There could be some kind of trivial agents with trivial preferences that dominate the comparison of different histories. Because the formalization’s incomputability makes it difficult to assess whether such problems are present, further work on its potential flaws is necessary. Based on such discussion, our formalization may be revised or even discarded. In any case, we could learn a lot from its shortcomings, especially due to the formalization’s simplicity and plausible derivation.

We outlined how \(P({str}@i \mid u)\) and P(u) could be determined in principle. However, they need to be specified more formally, which in the case of \(P({str}@i \mid u)\) seems to require a solution to the problem of normative decision theory. Some problems of our formalization could inspire additional refinements of these distributions.
Footnotes
 1.
 2.
The choice of deterministic systems was made primarily to simplify the formalization. It appears to be unproblematic to transfer formal preference utilitarianism to nondeterministic systems, but defining nondeterministic cellular automata themselves is a little more difficult.
 3.
Other codomains of utility functions seem possible as long as they are subsets of a totally ordered vector space over \({\mathbb {R}}\). Intervals like [0, 1] seem specifically suitable, because they avoid problems of infinite utility and allow for normalization. (Isbell 1959)
 4.
Alternatively, one can try to avoid this hypothetical experiment by predicting the organism’s behavior. For example, one could try to ask the organism what it would do or infer its typical behavior from its internals.
 5.
Including i into the data is important, because otherwise identical structures at different points in time would have identical utility functions. This is a problem when the utility function u is applied to the whole history, because then structures cannot have preferences about themselves (“personal happiness”) without also having preferences about all other identical structures (at the same place). An alternative would be to apply utility functions only to the part of the history from the point of the existence of the structure onwards, so that identical structures at different points in time have equal utility functions that are applied differently. However, this seems to neglect that the past can depend on the action of an agent in the present, as illustrated in Newcomb’s paradox by Nozick (1969).
 6.
Intuitively, some structures can have more than one utility function, while others have no utility function at all. One way to model this would be to understand different utility functions as events in separate sample spaces. So, the sum \(\sum _u P(u \mid {str})\) could vary among different structures str. A similar scenario is the inference of multiple diseases from a set of symptoms. (Charniak 1983) While some individuals may have no diseases or preferences at all, others may be thought of as having more than one disease or utility function. In more technical terms, for each utility function there would be a sample space of having that utility function and not having that utility function.
In this paper however, we will assume mutual exclusivity and collective exhaustiveness of utility functions. All utility functions live in the same sample space and thus \(\sum _u P(u \mid {str}) =1\) for all structures str. This does not mean that all structures have equal moral standing: The idea is that “meaningless” structures str have high \(P(u \mid {str})\) only for constant utility functions u, i.e. for “don’t care” utility functions, which are irrelevant for decision making.
 7.
If Solomonoff’s prior is chosen for P(u), all incomputable utility functions have zero probability. Since the set of computable functions is countable, only countably many summands could possibly be nonzero.
 8.
More precisely, but less elegantly, one could write
$$\begin{aligned} \sum _i \sum _{{Spc}\in Fin(C)} \sum _u u(h)\, P(u \mid h(i)_{{Spc}}@i), \end{aligned}$$
where \(Fin(C):=\{ A \subseteq C \mid |A| \in {\mathbb {N}} \}\) is the set of finite subsets of C and \(h(i)_{{Spc}} :{Spc}\rightarrow S:c\mapsto h(i)(c)\) is the restriction of the state h(i) to the space Spc and therefore the structure on that space.
 9.
Specifically, the Riemann series theorem states that any conditionally convergent series can be reordered to have arbitrary values.
 10.
It is very important to differentiate the series from its value. Otherwise, one may identify the series with positive or negative infinity or as being undefined. Two infinite values of the series would then not be comparable anymore, which Bostrom (2011) identified as a problem for (consequentialist) ethics. But this problem can sometimes be eliminated by comparing the series itself to another. In this particular case, a history h is better than another history \(h'\), if
$$\begin{aligned} \sum _i \sum _{{Spc}} \sum _u u(h)\, P(u \mid h(i)_{{Spc}}@i) - u(h')\, P(u \mid h'(i)_{{Spc}}@i) > 0, \end{aligned}$$
where \(h(i)_{{Spc}} :{Spc} \rightarrow S: c \mapsto h(i)(c)\) denotes the restriction of h(i) to Spc, i.e. the structure on Spc in the state h(i). If no such relation can be established, then the two histories are arguably incomparable or may be called approximately equally good. Again, the ordering could be important in some cases, see footnote 9.
 11.
Notes
Acknowledgments
I am grateful to Brian Tomasik for giving me important comments that led me to systematize my formalization. I also thank Adrian Hutter for an interesting discussion on the formalization, as well as Alina Mendt, Duncan Murray, Henry Heinemann, Juliane Kraft and Nils Weller for reading and commenting on earlier versions of the paper. I owe thanks to the two anonymous reviewers whose comments and suggestions helped improve and clarify this manuscript.
References
 Anderson, S. L. (2011). How machines might help us achieve breakthroughs in ethical theory and inspire us to behave better. In M. Anderson & S. L. Anderson (Eds.), Machine ethics (pp. 524–530). Cambridge: Cambridge University Press.
 Anderson, M., & Anderson, S. L. (Eds.). (2011). Machine ethics. Cambridge: Cambridge University Press. http://www.cambridge.org/cr/academic/subjects/computerscience/artificialintelligenceandnaturallanguageprocessing/machineethics.
 Anderson, M., Anderson, S. L., & Armen, C. (2004). Towards machine ethics. In AAAI-04 Workshop on agent organizations: Theory and practice. Menlo Park: American Association for Artificial Intelligence. http://www.researchgate.net/publication/259656154_Towards_Machine_Ethics.
 Arneson, R. J. (1998). What, if anything, renders all humans morally equal? http://philosophyfaculty.ucsd.edu/faculty/rarneson/singer.pdf.
 Bentham, J. (1823). An introduction to the principles of morals and legislation. Oxford: Clarendon Press. http://www.econlib.org/library/Bentham/bnthPMLCover.html.
 Bostrom, N. (2011). Infinite ethics. Analysis and Metaphysics, 10, 9–59.
 Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies (1st ed.). Oxford: Oxford University Press.
 Charniak, E. (1983). The Bayesian basis of common sense medical diagnosis. In AAAI-83 Proceedings, Washington, DC.
 Dennett, D. (1971). Intentional systems. The Journal of Philosophy, 68(4), 87–106. http://www.jstor.org/stable/2025382.
 Dennett, D. (1989). The intentional stance. Cambridge: MIT Press.
 Dennett, D. (2006). Computers as prostheses for the imagination. Talk at The International Computers and Philosophy Conference, Laval, France.
 Downey, A. B. (2012). Think complexity. Sebastopol: O’Reilly.
 Emmeche, C. (1997). Center for the Philosophy of Nature and Science Studies, Niels Bohr Institute. http://www.nbi.dk/~emmeche/cePubl/97e.defLife.v3f.html.
 Ettinger, R. C. W. (2009). Youniverse: Toward a self-centered philosophy of immortalism and cryonics. Boca Raton: Universal Publishers.
 Fürnkranz, J., & Hüllermeier, E. (Eds.). (2010). Preference learning. Berlin: Springer.
 Gips, J. (2011). Towards the ethical robot. In M. Anderson & S. L. Anderson (Eds.), Machine ethics (1st ed., pp. 244–253). Cambridge: Cambridge University Press.
 Gruen, L. (2014). The moral status of animals. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy. http://plato.stanford.edu/archives/fall2014/entries/moralanimal.
 Hammond, P. J. (1989). Interpersonal comparisons of utility: Why and how they are and should be made. http://homepages.warwick.ac.uk/~ecsgaj/icuSurvey.pdf.
 Hare, R. M. (1981). Moral thinking: Its levels, method, and point. Oxford: Clarendon Press.
 Harsanyi, J. C. (1982). Morality and the theory of rational behaviour. In A. Sen & B. Williams (Eds.), Utilitarianism and beyond (ch. 2, pp. 39–62). Cambridge: Cambridge University Press. http://ebooks.cambridge.org/chapter.jsf?bid=CBO9780511611964&cid=CBO9780511611964A009&tabName=Chapter.
 Hawking, S., & Mlodinow, L. (2010). The grand design. New York: Bantam Books.
 Hofstadter, D. (2007). I am a strange loop. New York: Basic Books.
 Isbell, J. R. (1959). Absolute games. In A. W. Tucker & R. D. Luce (Eds.), Contributions to the theory of games (Vol. 4, pp. 357–396). Princeton: Princeton University Press.
 Legg, S. (1997). Solomonoff induction. MA thesis, University of Auckland. http://www.vetta.org/documents/legg1996solomonoffinduction.pdf.
 McLaren, B. N. (2011). Computational models of ethical reasoning: Challenges, initial steps, and future directions. In M. Anderson & S. L. Anderson (Eds.), Machine ethics (1st ed., pp. 297–315). Cambridge: Cambridge University Press.
 Moor, J. H. (2011). The nature, importance, and difficulty of machine ethics. In M. Anderson & S. L. Anderson (Eds.), Machine ethics (ch. 1, pp. 13–20). Cambridge: Cambridge University Press.
 Nielsen, T. D., & Jensen, F. V. (2004). Learning a decision maker’s utility function from (possibly) inconsistent behavior. Artificial Intelligence, 160(1–2), 53–78. http://www.sciencedirect.com/science/article/pii/S0004370204001328.
 Nozick, R. (1969). Newcomb’s problem and two principles of choice. In N. Rescher et al. (Eds.), Essays in honor of Carl G. Hempel (pp. 114–146). Berlin: Springer. http://faculty.arts.ubc.ca/rjohns/nozick_newcomb.pdf.
 Olshausen, B. A. (2004). Bayesian probability theory. http://redwood.berkeley.edu/bruno/npb163/bayes.pdf.
 Orseau, L., & Ring, M. (2012). Space-time embedded intelligence. In J. Bach, B. Goertzel, & M. Iklé (Eds.), Artificial general intelligence (Vol. 5, p. 391). http://agiconference.org/2012/wpcontent/uploads/2012/12/paper_76.pdf.
 Peterson, M. (2009). An introduction to decision theory. Cambridge: Cambridge University Press.
 Robert, C. P. (1994). The Bayesian choice: A decision-theoretic motivation (1st ed.). Berlin: Springer.
 Schmidhuber, J. (1999). A computer scientist’s view of life, the universe, and everything. http://arxiv.org/pdf/quantph/9904050v1.pdf.
 Shiffman, D. (2012). In S. Fry (Ed.), The nature of code. http://natureofcode.com/book/.
 Singer, P. (1993). Practical ethics (2nd ed.). Cambridge: Cambridge University Press.
 Tomasik, B. (2015a). Do video-game characters matter morally? http://www.webcitation.org/6X7w4AJnb.
 Tomasik, B. (2015b). Hedonistic vs. preference utilitarianism. http://www.webcitation.org/6X7vepyJP.
 Tomasik, B. (2015c). Is there suffering in fundamental physics? http://www.webcitation.org/6X7vte3XP.
 Wolfram, S. (1983). Cellular automata. Los Alamos Science, 9, 2–21. http://www.stephenwolfram.com/publications/academic/cellularautomata.pdf.
 Wolfram, S. (2002). A new kind of science. Champaign: Wolfram Media. http://www.wolframscience.com/nksonline/toc.html.
 Yudkowsky, E. (2001). Creating friendly AI 1.0: The analysis and design of benevolent goal architectures. Machine Intelligence Research Institute. http://intelligence.org/files/CFAI.pdf.
 Zuse, K. (1967). Rechnender Raum. Elektronische Datenverarbeitung, 8, 336–344. ftp://ftp.idsia.ch/pub/juergen/zuse67scan.pdf.
 Zuse, K. (1969). Rechnender Raum. Brunswick: Vieweg & Sohn.
Copyright information
Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.