
Skyrms on the possibility of universal deception

Philosophical Studies

Everybody’s been telling me practically the whole truth. What I want’s some impractical son of a gun that’ll shoot the works.

–Lieutenant Guild to Nick Charles

Abstract

In the Groundwork, Immanuel Kant famously argued that it would be self-defeating for everyone to follow a maxim of lying whenever it is to his or her advantage. In his recent book Signals, Brian Skyrms claims that Kant was wrong about the impossibility of universal deception. Skyrms argues that there are Lewisian signaling games in which the sender always sends a signal that deceives the receiver. I show here that these purportedly deceptive signals simply fail to make the receiver as epistemically well off as she could have been. Since the receiver is not actually misled, Kant would not have considered these games to be examples of deception, much less universal deception. However, I argue that there is an important sense of deception, endorsed by Roderick Chisholm and Thomas Feehan in their seminal work on the topic, under which Skyrms has shown that universal deception is possible.


Notes

  1. Although Kant focused specifically on “lying promises” in the Groundwork, the argument clearly applies to lying generally (see Mahon 2006a, p. 660). The argument was intended to show why it is wrong to lie, but I do not address this moral question in this paper.

  2. In fact, it would not even be possible to lie if (a) you have to intend to deceive in order to lie and (b) you cannot intend to do what you take to be impossible. However, while most philosophers think that lying requires an intention to deceive, some philosophers (e.g., Carson 2010, pp. 20–22) do not.

  3. I modify the case slightly and fill in some numbers. Skyrms (2010, p. 81) describes a different three-state signaling game that makes the same point.

  4. Or as Skyrms (2010, p. 77) puts it, it conveys, “I am the kind who sends this signal.”

  5. Although his book was published a year after Skyrms’s book, Parfit’s manuscript had been circulating for several years before that.

  6. Sober (1994, pp. 81–82) describes a different signaling game (also involving Batesian mimicry) which shows that there can be a lot of deception at equilibrium. The equilibrium is in mixed strategies. Most of the time, the sender sends a signal that deceives the receiver, but not all of the time.

  7. Skyrms just gives the payoffs for this game. The story about the cabinet is my own attempt to make the game slightly more intuitive.

  8. I am assuming here that the receiver flips a fair coin to decide which act to perform when two acts are tied with the highest expected payoff. However, there are other ways that the receiver might break ties. For instance, it might be his policy to open the Left half rather than the Right half whenever there is a tie. In that case, if Top-Left is the true state, the receiver will be made worse off by opening the Top half upon receipt of Signal 1. (He will get 8 rather than 10.) However, under this policy for breaking ties, the sender does not always profit at the expense of the receiver. For instance, if Top-Right is the true state, the receiver will be made better off by performing Act 3 on receipt of Signal 1. (He will get 8 rather than 0.) So, the LRTB game would still not be an example of universal deception on Skyrms’s analysis.
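
To make the tie-breaking arithmetic in this note concrete, here is a minimal Python sketch. The payoff matrix is a reconstruction from the figures quoted in this note and in note 28 (10 for opening the correct Left/Right half, 8 for the correct Top/Bottom half, 0 otherwise); it is an assumption for illustration, not necessarily the matrix Skyrms or the paper actually uses.

```python
# A minimal sketch of the receiver's decision in the LRTB game. The payoffs
# below are hypothetical numbers reconstructed from the figures quoted in
# notes 8 and 28 (10 for the correct Left/Right half, 8 for the correct
# Top/Bottom half, 0 otherwise); the paper's actual matrix may differ.

STATES = ["Top-Left", "Top-Right", "Bottom-Left", "Bottom-Right"]
ACTS = ["open-Left", "open-Right", "open-Top", "open-Bottom"]

PAYOFF = {  # receiver's payoff in each state for each act
    "Top-Left":     {"open-Left": 10, "open-Right": 0,  "open-Top": 8, "open-Bottom": 0},
    "Top-Right":    {"open-Left": 0,  "open-Right": 10, "open-Top": 8, "open-Bottom": 0},
    "Bottom-Left":  {"open-Left": 10, "open-Right": 0,  "open-Top": 0, "open-Bottom": 8},
    "Bottom-Right": {"open-Left": 0,  "open-Right": 10, "open-Top": 0, "open-Bottom": 8},
}

def best_acts(posterior):
    """All acts tied for the highest expected payoff under a posterior."""
    ep = {a: sum(posterior[s] * PAYOFF[s][a] for s in STATES) for a in ACTS}
    top = max(ep.values())
    return [a for a in ACTS if ep[a] == top]

# Under a uniform prior, open-Left and open-Right tie at 5, so a
# tie-breaking rule (a fair coin, or "always open Left") is needed.
print(best_acts({s: 0.25 for s in STATES}))   # ['open-Left', 'open-Right']

# After Signal 1 ("I am a Top sender"), open-Top is uniquely best at 8.
print(best_acts({"Top-Left": 0.5, "Top-Right": 0.5,
                 "Bottom-Left": 0.0, "Bottom-Right": 0.0}))  # ['open-Top']
```

On these assumed numbers, the "always open Left" policy gives the receiver 10 without the signal when Top-Left is the true state (and 0 when Top-Right is). That is why Signal 1, which moves him to open-Top for a payoff of 8, hurts him in the first case and helps him in the second, just as the note says.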

  9. Similarly, Skyrms (2010, p. 76) notes that, in the firefly game, “the receiving males are led to actions that they would not take if they could directly observe the state.”

  10. I have confirmed this in personal correspondence with Skyrms.

  11. Sober (1994, p. 73) refers to them as examples of “evolutionary lying.” I have referred to them elsewhere (see Fallis 2011, p. 211) as examples of “evolutionary disinformation” or “adaptive disinformation.”

  12. We need a more careful argument for this claim than Peter Godfrey-Smith provides. In his review of Skyrms’s book, he defends this claim by offering a competing analysis of deception and then arguing that the senders in Skyrms’s examples do not deceive the receiver on this analysis. According to Godfrey-Smith (2011, p. 1295), “there is a difference between the maintaining and the non-maintaining uses of the signal. Some uses contribute to stabilization of the sender-receiver configuration and some, if more common, would undermine it. Those ones are deceptive.” But it turns out that the senders in Skyrms’s examples do deceive the receiver on this competing analysis of deception. For instance, contra Godfrey-Smith, Signal 1 sent in State 2 of Skyrms’s (2010, p. 81) three-state signaling game is a non-maintaining use of the signal. If the initial probability of State 2 were to exceed 2/3 (and the other two states remained equally likely), then the receiver would go back to doing what he would have done if he had not received the signal. Similarly, in the LRTB game, if Top-Left senders became sufficiently common, then the receiver would start opening the Left half rather than the Top half on receipt of their signal. So, these are deceptive signals on Godfrey-Smith’s analysis of deception.
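
Here is a minimal sketch of how Godfrey-Smith’s "if more common" test plays out in the LRTB case mentioned at the end of this note, reusing the hypothetical payoffs from the sketch after note 8. The 80% threshold it exhibits is an artifact of those assumed numbers, not a figure from Skyrms or Godfrey-Smith.

```python
# Godfrey-Smith's non-maintaining test applied to the LRTB game, with the
# hypothetical note-8 payoffs. Let q be the share of Signal-1 senders who
# are Top-Left; on receipt of Signal 1 the posterior is q on Top-Left and
# 1 - q on Top-Right.
def best_act_on_signal_1(q):
    expected = {"open-Left": 10 * q, "open-Right": 10 * (1 - q),
                "open-Top": 8.0, "open-Bottom": 0.0}
    return max(expected, key=expected.get)

for q in (0.5, 0.7, 0.9):
    print(q, best_act_on_signal_1(q))
# 0.5 open-Top; 0.7 open-Top; 0.9 open-Left. Once Top-Left senders exceed
# 80% of Signal-1 senders, the receiver switches to the Left half, so the
# Top-Left use of Signal 1 is non-maintaining in Godfrey-Smith's sense.
```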

  13. In fact, even if he ends up epistemically better off in one respect, the receiver might end up epistemically worse overall. For instance, as a result of reading an unreliable textbook, you might acquire one true belief and a hundred false ones.

  14. When there are just two possible states, an increase in the probability of a false state requires a decrease in the probability of the true state as well. Of course, even when there are three or more possible states, we can partition the possible states into just two sets. However, this only helps us determine whether or not a signal is misleading if one particular partition is privileged. For instance, while the Bad firefly’s signal decreases the probability of the set containing the true state if we consider {{Good}, {Bad, Ugly}}, it increases the probability of the set containing the true state if we consider {{Bad}, {Good, Ugly}}.
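
A toy Bayesian update makes the partition point vivid. The uniform prior and the assumption that only Good fireflies and mimicking Bad fireflies send the signal are stipulations for illustration; the real ecology fixes different numbers.

```python
# Partition-relativity of "misleading" in the firefly case, with an assumed
# uniform prior over {Good, Bad, Ugly} and a signal sent only by Good
# fireflies and by Bad fireflies mimicking them.
prior = {"Good": 1 / 3, "Bad": 1 / 3, "Ugly": 1 / 3}
senders = {"Good", "Bad"}

# Bayesian update on receipt of the mimicking signal.
total = sum(prior[s] for s in senders)
posterior = {s: (prior[s] / total if s in senders else 0.0) for s in prior}

# The true state is Bad. Under {{Good}, {Bad, Ugly}}, the set containing the
# true state falls from 2/3 to 1/2 (misleading); under {{Bad}, {Good, Ugly}},
# it rises from 1/3 to 1/2 (not misleading).
print(posterior["Bad"] + posterior["Ugly"])  # 0.5, down from 2/3
print(posterior["Bad"])                      # 0.5, up from 1/3
```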

  15. For instance, Fallis (2006, p. 103) describes a case where, if you learn what cards your opponent in a poker game actually holds, it will be rational for you to call a bet that, as a matter of fact, you will lose.

  16. Note that the fact that the male firefly descends does not necessarily mean that he holds the false belief that he is dealing with a Good firefly. In any event, as noted above, Skyrms does not characterize the epistemic states of receivers in terms of categorical beliefs.

  17. Kant makes similar statements elsewhere in his published work (see Mahon 2006b, pp. 429–430).

  18. Strictly speaking, Jennifer Lackey only counts actively concealing information, but not passively withholding information, as deception. But why should it matter that someone’s state of ignorance is brought about by an act of omission rather than by an act of commission (see Chisholm and Feehan 1977, pp. 144–145; Carson 2010, p. 56; Mahon 2007, p. 188)? For instance, it is not clear that there is a moral difference between doing and allowing (epistemic) harm (see Howard-Snyder 2011).

  19. Game theorists who, like Skyrms, characterize the epistemic states of receivers in terms of probabilities (rather than in terms of categorical beliefs) should have little reason to resist this broader notion of deception. In that context, there is not really a clear dividing line between ignorance and false belief.

  20. In all of these cases, information is being withheld about a proposition that the victim has not explicitly considered and, thus, holds no belief about. So, these are not cases of maintaining or strengthening a false belief (which would uncontroversially count as deception). For instance, as Mahon (2006b, p. 440) points out, nothing in Langton’s example requires that Dora acquire a false belief. It is enough that Dora is “ignorant rather than [positively] deceived about the use to which the cake is to be put.”

  21. Although there are some philosophers (e.g., Carson 2010, pp. 20–22) who think that lying does not require an intention to deceive, Ekman agrees with the majority that it does.

  22. Similarly, even if it contributes causally to someone having a false belief, an act (of commission or of omission) does not necessarily constitute deception. If you do not intend (or at least, systematically benefit from) this outcome, this person has just been accidentally misled.

  23. Intentional manipulation may not be the only thing that can make withholding information deceptive. For instance, withholding information from someone who has a reasonable expectation that such information will be provided might be sufficient for deception (cf. Carson 2010, p. 54). But as long as withholding information in order to manipulate someone’s decisions is also sufficient for deception, the signals in Skyrms’s examples will count as being deceptive.

  24. There are acts of reticence that do not violate the Categorical Imperative (see Mahon 2006b, pp. 435–436). But there are also acts of reticence that do violate the Categorical Imperative (see Mahon 2006b, p. 438).

  25. Dora presumably knows that something will be done with the cake and that Rae has not told her what that is. But she clearly does not know that it is anything she would be concerned about. Otherwise, Rae would probably have had to mislead Dora in order to get her to participate in the cake baking.

  26. Of course, we do not usually know how our attention is misdirected by magicians. By contrast, the receiver in the LRTB game knows exactly how the sender has deceived him (viz., by not conveying the whole truth). However, magicians can sometimes deceive us even when we know exactly how they are doing it. For instance, Penn and Teller are famous for fully explaining to their audience how some of their tricks are done (see Teller 2012). Yet they still manage to deceive their audience. (We still don’t see how the trick is done!)

  27. Admittedly, the Top-Left sender cannot send such a signal when the game is at equilibrium. Indeed, it may be that any signal sent at equilibrium in a signaling game that does not convey the whole truth does increase the probability of a false state. But that does not make it all right for an analysis of deception to include this requirement. Skyrms (2010, p. 78) himself is interested in studying what happens when games are not at equilibrium as well as what happens when they are.

  28. The expected payoff for opening the Top half would still be greater than the expected payoff of the other three possible actions that the receiver can perform. This is not something that the Top-Left sender would be able to bring about without leaving the receiver epistemically worse off than he could have been. If the receiver knew the whole truth, he would open the Left half and the sender would only get 2 rather than 10.

  29. Like any analysis of deception that attempts to replace the “intentionality” requirement with a “systematic benefit to the sender” requirement, Skyrms’s analysis is also open to the following sort of counter-example. Although deceptive messages typically benefit the sender, people sometimes deceive even when it is not in their best interest. For instance, in order to avoid embarrassment, people often lie to their doctors about their diet, about how much they exercise, or about what medications they are taking (see Reddy 2013). And if their doctors are misled, it can lead to incorrect treatment recommendations that can harm the patient. However, since it is irrational to send such messages, they are not counter-examples in the context of signaling games. Whenever we assume that the players are intentional agents, we assume that they are rational.

  30. We sometimes say that someone has suffered a cost when she does not get what she deserves or what she has a right to. But that clearly does not apply here.

  31. This analysis only requires that there is some amount of additional knowledge such that the sender would do worse. As the variation on the LRTB game indicates, a signal can still be deceptive even if there is also some amount of additional knowledge such that the sender would do even better.

  32. My analysis also counts the variation on the LRTB game as a case of universal deception. For instance, a Top-Left-Front sender benefits from conveying partial information (viz., that she is a Top sender) and she would end up worse off if the receiver knew more (viz., that she is a Left sender as well).

  33. The production costs include opportunity costs (e.g., all the useful things that I could be doing instead of writing an essay about my summer vacation).

  34. Although Skyrms has not given an example of universal “positive deception,” it might nevertheless be possible to do so. The argument at the end of Sect. 2 above only shows that a sender cannot always make the receiver worse off all things considered. It might be possible for a sender to always make the receiver epistemically worse off. After all, a signal that makes a receiver epistemically worse off can sometimes make him better off all things considered. Just imagine a signal that reverses the effect of the Bad firefly’s signal.


Acknowledgments

For helpful feedback on earlier versions of this paper, I would like to thank Jason Alexander, Brandon Ashby, Jeff Barrett, Luc Bovens, Tom Carson, Tony Doyle, Paul Faulkner, Jerry Gaus, Peter Godfrey-Smith, Sandy Goldberg, Terry Horgan, Chris Howard, Peter Lewis, Justin Lillge, Christian List, Kay Mathiesen, Greg McWhirter, Eliot Michaelson, Philip Nickel, Andrew Peet, Alexander Pruss, Brian Skyrms, Elliott Sober, Katie Steele, Andreas Stokke, Bill Talbott, Chad Van Schoelandt, Elliott Wagner, Dan Zelinski, and audiences at the Freedom Center of the University of Arizona, at the Eindhoven University of Technology, at the London School of Economics, and at the Pacific Division Meeting of the American Philosophical Association.

Author information


Correspondence to Don Fallis.


Cite this article

Fallis, D. Skyrms on the possibility of universal deception. Philos Stud 172, 375–397 (2015). https://doi.org/10.1007/s11098-014-0308-x
