The pervert’s dilemma arises inevitably from the emergence of sophisticated information technology. For this reason, it makes sense that our approach to unpacking the problem should also take an informational viewpoint. That is, we should understand A and B as agents acting, and their actions as taking place, within some kind of informational environment. From this perspective, the question to ask becomes: what type of information is relevant in making a moral judgement regarding the pervert’s dilemma? Or, to put the question more formally: what is the relevant Level of Abstraction (LoA) for approaching this problem?
The method of LoA is a philosophical mode of inquiry developed by Floridi (2008), inspired by Formal Methods in Computer Science. A LoA refers to the extent to which an entity has been “abstracted” from its natural, unique context. A person, with her almost infinite complexity, can for instance be reduced to her physical attributes. At this level, in turn, we may introduce a number of variables, such as height h. When the variable h is defined using, say, the metric system, it becomes an observable: something we can measure and use as a means to compare the heights of different persons. A LoA can thus be described as a collection of observables, that is, a set of “possible values and outcomes” (Floridi 2013, p. 31) that enables comparison between entities, whether technological, moral (e.g. between alternative moral actions) or logical.
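To make this concrete, here is a minimal sketch in Python, offered as a toy representation rather than Floridi’s own formalism: an entity is modelled as a bundle of attributes, and a LoA as the set of observables we choose to read off it. All attribute values, and Bob’s occupation in particular, are invented for illustration.

```python
# Toy model: entities as attribute bundles, a LoA as a set of observables.
alice = {"height_m": 1.68, "species": "human", "occupation": "waitress"}
bob   = {"height_m": 1.80, "species": "human", "occupation": "teacher"}

def same_at(loa: set[str], x: dict, y: dict) -> bool:
    # Two entities are indistinguishable at a LoA iff they agree on
    # every observable that the LoA makes available.
    return all(x[o] == y[o] for o in loa)

print(same_at({"species"}, alice, bob))      # True: identical at this LoA
print(same_at({"occupation"}, alice, bob))   # False: they differ at this LoA
print(same_at({"height_m"}, alice, bob))     # False: the observable h discriminates
```

The sketch illustrates that sameness and difference are not absolute but relative to the chosen observables: the very same pair of entities can be indistinguishable at one LoA and distinct at another.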
This is basically just to say that without a common frame of reference, a specification of what information is relevant, it is impossible to make a comparison. Since an entity comprises an enormous number of possible data, Alice can be a mother, a waitress, an American and a human; depending on the LoA, some of these will be relevant and others will not. At the LoA of Family Relations, “mother” becomes a relevant observable; at the LoA of Career it is more relevant that she is a waitress. It follows that higher LoAs allow for broader generalisation, since the particularities of the analysed system have been reduced. At lower levels, however, generalisation is much more difficult, since each case has its unique properties. This means that two entities may be the same or different depending on the LoA we apply. At the LoA of Species, there is no difference between Alice and Bob. At the LoA of Career (lower than Species), on the other hand, they may differ. Consider for instance the following example given by Floridi (2011, p. 553):
Whether a hospital transformed now into a school is still the same building seems a very idle question to ask, if one does not specify in which context and for which purpose the question is formulated, and therefore what the required observables are that would constitute the right LoA at which the relevant answer may be correctly provided. If the question is asked in order to get there, for example, then the relevant observable is “location” and the answer is yes, they are the same building. If the question is asked in order to understand what happens inside, then “social function” is the relevant observable and therefore the answer is obviously no, they are very different.
The difference between any two things thus depends on which observables we choose to focus on. Note, however, that the method of LoA is in no way a relativist approach. A question is always asked for a purpose, as a request for some specific information, and for that specific purpose there are more or less appropriate LoAs. For instance, the true answer to the question “Is this the hospital?” is very different for someone in need of a doctor than for someone interested in nineteenth-century architecture. This is because a different LoA is required in order to generate a proper response, i.e. different observables come into question. The same principle applies to moral judgements. Two options may seem equally permissible at one LoA, but different at another. Let me provide an example:
Consider the question of whether it is morally permissible for Alice to break a strike. At the LoA of Nationality (Alice as a citizen of her country), she should arguably break the strike to get industry rolling again; but at the LoA of Class (Alice as a member of a union), she should not. Then again, at the LoA of Family (Alice as a mother) she is morally obligated to break the strike so that she can feed her children. Wittgenstein famously pointed out that we will not find the “real” artichoke by peeling off its leaves (1958, §164). Likewise, we will not find Alice’s “real” obligation regarding the strike by stripping her of all her roles (mother, worker, citizen), that is, at a very high LoA. It is only in her capacity in such roles that she has any moral obligations in the first place. Some actions, such as murder, can be morally evaluated at a very high LoA. Given that we know that it is indeed a case of murder, and not manslaughter or mere self-defence, we need to know very little else in order to state that it is wrong, because murder is wrong almost independently of its context. But other actions, or aspects of actions, require a much lower LoA to qualify for ethical evaluation.
To further illustrate the importance of “roles” (i.e. observables) in moral judgements, let us consider another example: is it morally impermissible for Alice to call Bob the N-word behind his back? I believe most people would require more information before answering this question. In this case, the relevant LoA is undoubtedly that of Race. If Alice is white and Bob is black, then the answer is yes. However, if Alice also happens to be black, then the answer is probably no. The moral status of the action thus depends on the social relations between the categories (observables) at the LoA in question: not so much on the relationship between Alice and Bob as individuals as on the relationship between the societal groups to which they belong. In the present case, the history of slavery and racism simply cannot be abstracted away when making a moral judgement. Even though the act may not harm Bob as an individual, most people would agree that it is bad for black people as a collective identity to be referred to in such terms.
Now consider the case of hate crimes. A hate crime involves two types of harm: one directed at the individual who is immediately harmed by the action, and one directed at the group or collective identity of which the individual is part. While the former is visible even at very high LoAs, the latter can only be detected at a lower LoA. Moreover, an action that fails to produce the former may in certain instances still lead to the latter. Corvino (2002, p. 218) provides an illuminating real-life example:
Some years ago I attended a large Southern university where one of the local fraternities annually held an “Old South Ball.” The fraternity, which was notorious for its white-only membership, would hire black students to pose as “slaves” at the ball for the sake of verisimilitude. Needless to say, this event regularly provoked a serious outcry within the campus community. While some defended the fraternity on the grounds that the black actors were willfully (though, to many minds inexplicably) participating, most thought that the event involved a serious failure on the part of all participants to adopt an appropriate attitude toward slavery. The fact that these actors were paid well was beside the point.
While the Old South Ball failed to produce the first type of harm mentioned above, it surely produced the second, and anyone who fails to appreciate this also fails to make an adequate moral assessment. What Corvino describes is a clash between two LoAs: one which focuses on the individuals involved, and one which focuses on the relationship between the collective identities involved. Both sides are right, but the latter level is arguably more relevant, because it engages more ethically adequate observables. The lower LoA here carries what Patridge (2011, p. 307) calls an incorrigible social meaning. That is, the “range of reasonable interpretations” is limited, so that “anyone who has a proper understanding of and is properly sensitive to the moral landscape” will find it objectionable. In the case of the Old South Ball, the proper understanding of the moral landscape is that which considers the harm that arises from a system of actions, rather than from a series of isolated events.
Essentially, this is to say that the ethical significance of a series of actions may in some cases amount to more than the sum of its individual parts. A more formalised way of expressing the same argument is through the concept of Distributed Morality (DM) (Floridi 2012), which analyses ethics from the viewpoint of Multi-Agent Systems (MAS). A MAS is an assemblage of several human actors, machines, virtual environments and even mere concepts. Because of the distributed nature of such a system, it may be difficult to allocate responsibility for the consequences of the MAS working as a unit. To describe this, DM draws inspiration from the notion of distributed knowledge in epistemology. Floridi (2012, p. 729) provides an illuminating example:
Consider the case in which A knows only that [P ∨ Q], e.g. that “the car is in the garage or Jill got it”, whereas B only knows that ¬P, i.e. that “the car is not in the garage”. Neither A nor B knows that Q; only the supra-agent (with “supra” as in “supranational”) C = A ∪ B knows that Q. It is the aggregation of A’s and B’s epistemic states that leads to C knowing that Q.
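The aggregation in the quoted example can be captured in a small possible-worlds sketch, a toy model rather than Floridi’s formal apparatus: an agent knows a proposition just in case it holds in every world compatible with what that agent knows, and the supra-agent’s compatible worlds are the intersection of its members’.

```python
from itertools import product

# Worlds assign truth values to P ("the car is in the garage")
# and Q ("Jill got it").
worlds = list(product([True, False], repeat=2))

knows_A = [(p, q) for p, q in worlds if p or q]  # A knows only P ∨ Q
knows_B = [(p, q) for p, q in worlds if not p]   # B knows only ¬P

def knows(accessible, proposition):
    # An agent knows a proposition iff it holds in every world
    # compatible with what the agent knows.
    return all(proposition(p, q) for p, q in accessible)

Q = lambda p, q: q
assert not knows(knows_A, Q)  # A alone does not know Q
assert not knows(knows_B, Q)  # B alone does not know Q

# The supra-agent C = A ∪ B pools both constraints: its compatible
# worlds are the intersection of A's and B's.
knows_C = [w for w in knows_A if w in knows_B]
assert knows(knows_C, Q)      # distributed knowledge: C knows Q
```

Only one world, that in which the car is not in the garage and Jill got it, survives both constraints, which is why Q emerges at the level of the supra-agent although neither member knows it.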
The same logic applies to morality. That is, although its components may be individually morally permissible, the aggregate outcome, the moral analogue of Q, can still be morally impermissible. The actions of agents A and B can both be neutral, yet their combined consequences devastating. For example, fire is (at least under appropriate circumstances of pressure and gravity) nothing but fuel, oxygen and heat combined. Yet the damage caused by a fire is not the sum of the damage caused by fuel, oxygen and heat in isolation. Thus, when we consider the morality of an action, we must also attend to the system in which the action takes place, that is, the lower LoA. Lighting a cigarette may be disastrous if you are at a gas station, yet the isolated action is per se (relatively) harmless. In some cases, it may be impossible to isolate the role of a single unit in building the totality (a so-called Sorites paradox). For instance, 100,000 grains of sand is certainly a heap, and removing one grain does not change that. Yet repeating the removal of one grain will ultimately leave you with a single grain, which is obviously not a heap. Here, it is the system of removals (the MAS), not any of the individual actions in themselves, that turns the heap into a non-heap. Thus, a series of actions that have little or no moral significance when viewed in isolation may amount to a morally impermissible phenomenon when combined.
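The non-additivity at stake here can be put in the same sketch-like terms, with harm values invented purely for illustration: each component action scores zero harm in isolation, yet the harm of the combination is not the sum of the component harms.

```python
def harm(actions: frozenset) -> int:
    # Invented values: harm emerges only from the full combination (the MAS).
    if {"fuel", "oxygen", "heat"} <= actions:
        return 100  # the fire
    return 0

components = ["fuel", "oxygen", "heat"]
individual_total = sum(harm(frozenset({a})) for a in components)

assert individual_total == 0                           # each action is negligible alone
assert harm(frozenset(components)) > individual_total  # the MAS exceeds the sum of its parts
```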
In fact, even a series of benevolent actions may cause harm when combined, while ill-intentioned actions may amount to something good, depending on the constitution of the MAS. Adam Smith’s theory of the market economy is a good example: individual actors acting in self-interest produce benefits for society as a whole. It is not the sum of the moral significance of individual actions that matters, but their impact as a MAS. It follows that some alternatives will seem equally morally permissible when considered at the level of individuals, but will differ once we consider the MAS of which they are part (see de Font-Reaulx 2017 for a similar argument applied to discrimination).