Toward a formal analysis of deceptive signaling

Abstract

Deception has long been an important topic in philosophy (see Augustine in Treatises on various subjects, New York, Fathers of the Church, 1952; Kant in Practical philosophy, Cambridge University Press, Cambridge, 1996; Chisholm and Feehan in J Philos 74: 143–159, 1977; Mahon in Int J Appl Philos 21: 181–194, 2007; Carson in Lying and deception, Oxford University Press, New York, 2010). However, the traditional analysis of the concept, which requires that a deceiver intentionally cause her victim to have a false belief, rules out the possibility of much deception in the animal kingdom. Cognitively unsophisticated species, such as fireflies and butterflies, have simply evolved to mislead potential predators and/or prey. To capture such cases of “functional deception,” several researchers (e.g., Sober, From a biological point of view, Cambridge University Press, Cambridge, 1994; Hauser in: Whiten, Byrne (eds) Machiavellian intelligence II, Cambridge University Press, Cambridge, pp 112–143, 1997; Searcy and Nowicki, The evolution of animal communication, Princeton University Press, Princeton, 2005; Skyrms, Signals, Oxford University Press, Oxford, 2010) have endorsed the broader view that deception requires only that a deceiver benefit from sending a misleading signal. Moreover, in order to facilitate game-theoretic study of deception in the context of Lewisian sender-receiver games, Brian Skyrms has proposed an influential formal analysis of this view. Such formal analyses have the potential to enhance our philosophical understanding of deception in humans as well as animals. However, as we argue in this paper, Skyrms’s analysis, as well as two recently proposed alternative analyses (viz., Godfrey-Smith in Review of Signals: Evolution, learning, and information by Brian Skyrms, Mind, 120: 1288–1297, 2011; McWhirter in Brit J Philos Sci 67: 757–780, 2016), is seriously flawed and can lead us to draw unwarranted conclusions about deception.

Notes

  1. Skyrms (2010, p. 80) equates the terms misleading information and misinformation. We just use the first term here as it fits better with the philosophical literature on deception.

  2. We might even want to talk about deceiving simple machines as well as deceiving living creatures (see Lynch 2001, pp. 13–14).

  3. Even though they may not be the receiver’s credences, these are the probabilities from the standpoint of the receiver. The probabilities of the possible states of the world from an objective standpoint (or from the standpoint of the sender who has observed the true state of the world) are all either zero or one.

  4. Ideally, we are able to analyze what a phenomenon is without having to explain why that phenomenon occurs. But deceptive signals are distinguished from merely misleading signals precisely because there is some explanation for why they are sent.

  5. Artiga and Paternotte also point to instances of intentional human deception where the sender does not benefit in any respect (see also Fallis 2015b, pp. 412–413). Since it eschews talk of intentionality in favor of talk of costs and benefits, the framework of sender-receiver games may not be able to capture such instances of deception. But humans typically do intend to mislead others because they benefit from others being misled (see Smith 2005). Thus, the “sender benefit” requirement applies to the vast majority of instances of human deception.

  6. These constraints are not essential to the framework (see Martínez 2015, pp. 223–227; McWhirter 2016, p. 760). Indeed, signaling costs play an important role in many explanations of honesty in animal signaling (see Searcy and Nowicki 2005, pp. 9–10).

  7. We assume that the location where the female and the male encounter each other does nothing to give away her type.

  8. Again, this constraint is not essential to the framework.

  9. Throughout this paper, we round expected payoffs to the nearest tenth.

  10. Lewis restricts the term signaling system to pairs of strategies that are equilibria of a game (see Skyrms 2010, p. 7). But as Skyrms (2010, p. 78) points out, “a lot of life is lived out of equilibrium ... deception is one of the forces that drive[s] the system to equilibrium.” Thus, for purposes of analyzing deception, we will not adopt this restriction.

  11. This constraint is also not essential to the framework. It simply ensures that our examples of deception do not depend on the receiver being ill-informed or behaving irrationally. After all, deception should be possible (e.g., at the poker table) even if the receiver knows that the sender is engaged in deception. Also, if the sender can do no better given the receiver’s strategy, this constraint ensures that the signaling system is an equilibrium.

  12. It is certainly possible for two pieces of evidence to be individually misleading, but jointly beneficial. The way this generally works is that the two pieces of evidence are each misleading on their own, but the second piece is not misleading to someone who is already in possession of the first piece. However, in this example, evidence \(e_3\) is misleading on Skyrms’s analysis even after we have gotten evidence \(e_1\).

  13. Shea et al. (forthcoming, Sect. 4.4) also contend that sending M1 in S2 in Signaling System 1.1 is “merely a case of strategic withholding of information by the sender, a phenomenon quite distinct from deception.” For the same reason, Bruner’s (2015, p. 660) proposed example of deception is not a case of deception.

  14. Skyrms (2010, p. 36) uses the Kullback-Leibler divergence to measure the informational distance between probability distributions. When the first probability distribution is the one that assigns probability 1 to the true state of the world and probability 0 to all of the false states, this is equivalent to using the logarithmic rule to measure the inaccuracy of the second probability distribution (see Godfrey-Smith 2011, p. 1293). So, it would make sense for Skyrms to appeal to this rule in his analysis of misleadingness, which would amount to saying that a signal is misleading if and only if it diminishes the probability of the state in which it is sent. Several formal epistemologists (e.g., Levinstein 2012; Roche & Shogenji forthcoming) have defended the logarithmic rule as a measure of inaccuracy. But many others (e.g., Joyce 2009; Pettigrew 2016) have defended the Brier rule. For purposes of this paper, we remain agnostic on the issue as nothing here hangs on the outcome of this debate.
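
    As a sanity check on the equivalence just mentioned, here is a minimal numerical sketch (the three-state distribution below is purely illustrative): when the first distribution puts probability 1 on the true state, the Kullback-Leibler divergence to the receiver’s distribution coincides with the logarithmic inaccuracy score, i.e., the negative log of the probability assigned to the true state.

          import math

          def kl_divergence(p, q):
              # D_KL(p || q) = sum_i p_i * log(p_i / q_i), with the convention 0 * log(0/q) = 0
              return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

          def log_inaccuracy(q, true_index):
              # Logarithmic rule: inaccuracy is the negative log of the probability given to the true state
              return -math.log(q[true_index])

          q = [0.5, 0.3, 0.2]        # illustrative receiver distribution; state 0 is the true state
          truth = [1.0, 0.0, 0.0]    # point mass on the true state

          print(kl_divergence(truth, q))   # ~0.693
          print(log_inaccuracy(q, 0))      # ~0.693, the same value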

  15. Skyrms (2010, p. 77) does go on to say, “But it is only a half-truth.” However, as Godfrey-Smith (2011, p. 1295) points out, “to tell half the truth is not to tell a half-truth.”

  16. Although it is possible to mislead someone by withholding information, it is not always misleading (see Carson 2010, pp. 56–57). Nevertheless, withholding information can be sneaky even when it merely prevents someone from ending up epistemically better off.

  17. The sender might be able to cook up some completely new signal. But such a signal would not change the probabilities since signals have no meaning outside of the context of an overall signaling strategy (see below).

  18. Basically, we can think of the states as urns containing different colored balls representing the different possible signals (see Skyrms 2010, pp. 13–14). On each play of the game, nature chooses an urn and a ball is chosen at random from that urn in order to determine which signal is sent. Before the next play of the game, balls of the chosen color are added to (removed from) the urn, where the number of balls added (removed) depends on the payoffs.
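
    For readers who want to see these dynamics concretely, here is a minimal sketch of the urn scheme just described, under assumed parameters (a two-state, two-signal game with a fixed receiver rule and a payoff of 1 for a correct guess; these specifics are our own illustration, not Skyrms’s examples):

          import random

          # One urn per state; the weights are the numbers of "signal balls" in each urn.
          urns = {"S1": {"M1": 1.0, "M2": 1.0}, "S2": {"M1": 1.0, "M2": 1.0}}
          receiver_guess = {"M1": "S1", "M2": "S2"}   # assumed fixed receiver rule

          def draw_signal(state):
              # Draw a ball at random from the state's urn, in proportion to the weights
              urn = urns[state]
              return random.choices(list(urn), weights=list(urn.values()))[0]

          for _ in range(10000):
              state = random.choice(["S1", "S2"])          # nature picks a state
              signal = draw_signal(state)                  # a ball is drawn from that state's urn
              payoff = 1.0 if receiver_guess[signal] == state else 0.0
              urns[state][signal] += payoff                # add balls of the drawn colour, scaled by the payoff

          print(urns)   # in this simple setup the urns typically drift toward honest signaling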

  19. In order to compute the expected payoff of the sender’s strategy, an assumption must be made about how the receiver will respond. As noted in Sect. 3, we assume that the receiver’s strategy is his best response to the sender’s strategy. Thus, if Skyrms’s analysis does not guarantee that it is no accident that the signal is sent when the receiver’s strategy is his best response, it does not guarantee that it is no accident that the signal is sent at all.

  20. Strangely enough, there are examples that satisfy Skyrms’s formal analysis of deception, but where always revealing the whole truth has a higher expected payoff than the strategy that involves sending the misleading signal. For instance, suppose that the sender in Game 2 adopts the strategy:

          S1 \(\rightarrow \) M1(4/5), M2(1/5)

          S2 \(\rightarrow \) M2(1/2), M1(1/2)

    And suppose that the receiver adopts the strategy:

          M1 \(\rightarrow \) A1

          M2 \(\rightarrow \) A2

    In that case, sending M1 in S2 is deception on Skyrms’s analysis, but the expected payoff to the sender of adopting this strategy is less than the expected payoff for always revealing the whole truth (5.8 rather than 6).
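
    The expected payoffs quoted here are computed in the standard way: weight the sender’s payoff by the probability of each state and by the probability of each signal in that state, given the receiver’s response. The following sketch shows the computation with placeholder state probabilities and payoffs (they are not the actual Game 2 values, which are given in the main text):

          # Placeholder parameters; substitute Game 2's actual values to reproduce
          # the 5.8 vs. 6 comparison quoted above.
          state_prob = {"S1": 0.5, "S2": 0.5}
          sender_payoff = {("S1", "A1"): 8, ("S1", "A2"): 2,
                           ("S2", "A1"): 6, ("S2", "A2"): 4}
          receiver_act = {"M1": "A1", "M2": "A2"}   # the receiver's strategy above

          def expected_payoff(sender_strategy):
              # sender_strategy maps each state to a probability distribution over signals
              total = 0.0
              for state, p_state in state_prob.items():
                  for signal, p_signal in sender_strategy[state].items():
                      total += p_state * p_signal * sender_payoff[(state, receiver_act[signal])]
              return total

          mixed = {"S1": {"M1": 0.8, "M2": 0.2}, "S2": {"M1": 0.5, "M2": 0.5}}
          honest = {"S1": {"M1": 1.0}, "S2": {"M2": 1.0}}
          print(expected_payoff(mixed), expected_payoff(honest))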

  21. The story of the lost keys apparently derives from a story about the Sufi master, Mulla Nasrudin (see Shah 1966, p. 24).

  22. In this case, Skyrms’s formal analysis gets the right result (albeit for the wrong reasons).

  23. According to the standard measures of inaccuracy that do take into account falsity distributions, such as the Brier rule and the spherical rule, symmetric distributions are (ceteris paribus) more accurate.
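
    A small worked comparison illustrates the point (the numbers are illustrative; the probability assigned to the true state is held fixed at 0.5): under both the Brier rule and the spherical rule, spreading the remaining probability evenly over the false states yields a lower (better) inaccuracy score than concentrating it on a single false state.

          def brier(dist, true_index):
              # Brier inaccuracy: squared distance from the indicator vector of the true state
              return sum((p - (1.0 if i == true_index else 0.0)) ** 2 for i, p in enumerate(dist))

          def spherical(dist, true_index):
              # Spherical inaccuracy: 1 minus the true state's probability divided by the Euclidean norm
              norm = sum(p ** 2 for p in dist) ** 0.5
              return 1 - dist[true_index] / norm

          symmetric = [0.5, 0.25, 0.25]    # 0.5 on the true state, rest spread evenly
          asymmetric = [0.5, 0.5, 0.0]     # 0.5 on the true state, rest concentrated on one false state

          print(brier(symmetric, 0), brier(asymmetric, 0))          # 0.375 < 0.5
          print(spherical(symmetric, 0), spherical(asymmetric, 0))  # ~0.18 < ~0.29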

  24. Not every signal sent at this equilibrium is deceptive. For instance, M2 sent in S2 is not even misleading. So, it is still not the example of universal deception that Skyrms (2010, p. 81) was looking for.

  25. Thus, there is a reason for the sender to adopt this strategy regardless of whether the receiver’s strategy is his best response.

  26. Similarly, in our second example of deception below, the payoff to the receiver is lower than his payoff if he knew the true state (0 rather than 10).

  27. Skyrms (2010, p. 76) himself is actually ambivalent on the issue. He writes that “one could argue over whether the clause about the detriment of the receiver should be included ... I do not think that much hangs on the choice.”

  28. It might seem like the “sender benefit” requirement itself rules out the possibility of altruistic deception. But even though altruism requires benefiting someone else at a cost to oneself, the standard explanation for such behavior is that it benefits one’s genes (see Skyrms 2010, p. 25). And as noted in Sect. 2, we and Skyrms include benefits to the sender’s genes as part of sender benefit.

  29. The sender may not have had time yet to develop the capacity to send more than two different signals.

  30. The male may not initially realize that the Amphibious female lies sometimes. But even once he does catch on, it will not lead him to alter his strategy.

  31. The sender might be able to cook up some completely new signal. But since such a signal would not have any meaning for the receiver, the probabilities would not change. In that case, the receiver would play A3 and the sender would still get a lower payoff (2 rather than 5).

  32. Moreover, sending M1 in S2 in Signaling System 3.2 is reinforced at the level of full strategies if the sender had previously adopted the signaling strategy from Signaling System 3.1. The signaling strategy from Signaling System 3.2 has a higher expected payoff than the signaling strategy from Signaling System 3.1 (7.6 rather than 7.3).

  33. Even though sending M1 in S2 in Signaling System 3.2 is misleading and it is no accident that M1 is sent in S2, this example does not count as deception on Skyrms’s analysis. The payoff to the sender is lower than her payoff if the receiver knew that the true state was S2 (5 rather than 10). So, this example indicates that Skyrms’s analysis is too narrow (as well as being too broad) and, thus, that it is not safe to use Skyrms’s analysis to make universal claims about deception.

  34. Shea et al. (forthcoming, Sect. 4.5) offer yet another non-Skyrmsian analysis of deception. But the only example of “bona fide deception” that they provide also counts as deception on the Skyrmsian view. The receiver is misled (as sending M1 in S2 shifts the probabilities from (1/4, 3/4) to (1/2, 1/2)) and the sender benefits (as sending M1 in S2 is part of an equilibrium strategy).
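
    For concreteness, one sender strategy that would produce the probability shift quoted here (the specific likelihoods are our own illustration, not necessarily those of Shea et al.’s example): if M1 is always sent in S1 but only a third of the time in S2, Bayesian updating takes the prior (1/4, 3/4) to the posterior (1/2, 1/2) on receipt of M1.

          prior = {"S1": 0.25, "S2": 0.75}
          p_m1_given = {"S1": 1.0, "S2": 1.0 / 3.0}   # illustrative likelihoods; any 3:1 ratio works

          evidence = sum(prior[s] * p_m1_given[s] for s in prior)
          posterior = {s: prior[s] * p_m1_given[s] / evidence for s in prior}
          print(posterior)   # {'S1': 0.5, 'S2': 0.5}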

  35. Martínez (2015, pp. 219–221) suggests a different way of cashing out the notion of non-maintainingness. He claims that sending signal M in state S is a non-maintaining use if and only if the receiver would shift from a separating strategy to a pooling strategy if M were always sent in S. That is, the receiver starts out responding differently to some signals and, as the probability of M being sent in S increases, ends up responding the same way to all signals. However, this analysis seems to give the wrong result in at least some cases. For instance, Martínez describes a signaling system in which he claims that sending M1 in S2 is a non-maintaining use. But on his own analysis, it is not a non-maintaining use. It is true that, when the probability that M1 is sent in S2 is equal to 1, the receiver always performs the same act. But that does not mean that the receiver has adopted a pooling strategy. In the original signaling system, the sender always sends M1 in the other states. Thus, when the probability that M1 is sent in S2 increases to 1, the only signal that the receiver ever gets is M1. But the receiver would do something different if he ever did receive some other signal. It is not the case that the receiver “stops listening” to what the sender says. In contrast, the analysis that we suggest in the text gives the correct result that sending M1 in S2 in this signaling system is a non-maintaining use.

  36. Just as the sender might randomly choose which signal to send with a certain probability in each state, the receiver might randomly choose which act to perform with a certain probability in response to each signal. Thus, A might be a mixture rather than a pure act.

  37. Martínez (2015, pp. 221–223) also claims to have found a signaling system that involves “deception without non-maintaining uses of signals.” However, we think that sending M3 in S1 in this signaling system is a non-maintaining use. When the probability that M3 is sent in S1 increases to 1, the receiver does not adopt a pooling strategy. But he does abandon his previous response to M3 and always performs A2 when he gets M3.

  38. In order for deception to occur on McWhirter’s analysis, there must be two or more senders. Similarly, there might be two or more receivers. Just for the sake of simplicity, we assume here that there is a single receiver.

  39. Similarly, M1 sent in S2 in Signaling System 3.2 shows that misuse is not necessary for deception.

  40. For extremely helpful feedback on earlier versions of this material, we would like to thank Jeff Barrett, Justin Bruner, Terry Horgan, Kay Mathiesen, Brian Skyrms, Rory Smead, Eyal Tal, Dan Zelinski, two anonymous referees, and audiences at the Freedom Center, University of Arizona, the School of Information, University of Arizona, and the Department of Logic and Philosophy of Science, University of California, Irvine.

References

  • Artiga, M., & Paternotte, C. (forthcoming). Deception: A functional account. Philosophical Studies.

  • Augustine (1952). Treatises on various subjects. New York: Fathers of the Church.

  • Bell, J. B., & Whaley, B. (1991). Cheating and deception. New Brunswick: Transaction Publishers.

  • Bruner, J. P. (2015). Disclosure and information transfer in signaling games. Philosophy of Science, 82, 649–666.

  • Carson, T. L. (2010). Lying and deception. New York: Oxford University Press.

  • Chisholm, R. M., & Feehan, T. D. (1977). The intent to deceive. Journal of Philosophy, 74, 143–159.

  • Earman, J. (1992). Bayes or bust? Cambridge: MIT Press.

  • Fallis, D. (2007). Attitudes toward epistemic risk and the value of experiments. Studia Logica, 86, 215–246.

  • Fallis, D. (2009). What is lying? Journal of Philosophy, 106, 29–56.

  • Fallis, D. (2015a). Skyrms on the possibility of universal deception. Philosophical Studies, 172, 375–397.

  • Fallis, D. (2015b). What is disinformation? Library Trends, 63, 401–426.

  • Fallis, D., & Lewis, P. J. (2016). The Brier rule is not a good measure of epistemic utility (and other useful facts about epistemic betterness). Australasian Journal of Philosophy, 94, 576–590.

  • Godfrey-Smith, P. (2011). Review of Signals: Evolution, learning, and information by Brian Skyrms. Mind, 120, 1288–1297.

  • Hauser, M. D. (1997). Minding the behaviour of deception. In A. Whiten & R. W. Byrne (Eds.), Machiavellian intelligence II (pp. 112–143). Cambridge: Cambridge University Press.

  • Joyce, J. M. (2009). Accuracy and coherence: Prospects for an alethic epistemology of partial belief. In F. Huber & C. Schmidt-Petri (Eds.), Degrees of belief (pp. 263–297). Dordrecht: Springer.

  • Kant, I. (1996). Practical philosophy. Cambridge: Cambridge University Press.

  • Levinstein, B. A. (2012). Leitgeb and Pettigrew on accuracy and updating. Philosophy of Science, 79, 413–424.

  • Lewis, D. (1969). Convention. Cambridge: Harvard University Press.

  • Lynch, C. A. (2001). When documents deceive: Trust and provenance as new factors for information retrieval in a tangled web. Journal of the American Society for Information Science and Technology, 52, 12–17.

  • Mackie, J. L. (1977). Ethics: Inventing right and wrong. New York: Penguin Books.

  • Mahon, J. E. (2007). A definition of deceiving. International Journal of Applied Philosophy, 21, 181–194.

  • Martínez, M. (2015). Deception in sender-receiver games. Erkenntnis, 80, 215–227.

  • McWhirter, G. (2016). Behavioural deception and formal models of communication. British Journal for the Philosophy of Science, 67, 757–780.

  • Pettigrew, R. (2016). Accuracy and the laws of credence. Oxford: Oxford University Press.

  • Roche, W., & Shogenji, T. (forthcoming). Information and inaccuracy. British Journal for the Philosophy of Science.

  • Ruse, M. (1986). Taking Darwin seriously. New York: Basil Blackwell.

  • Searcy, W. A., & Nowicki, S. (2005). The evolution of animal communication. Princeton: Princeton University Press.

  • Shah, I. (1966). The exploits of the incomparable Mulla Nasrudin. New York: Simon and Schuster.

  • Shea, N., Godfrey-Smith, P., & Cao, R. (forthcoming). Content in simple signalling systems. British Journal for the Philosophy of Science.

  • Skyrms, B. (2010). Signals. Oxford: Oxford University Press.

  • Smead, R. (2014). Deception and the evolution of plasticity. Philosophy of Science, 81, 852–865.

  • Smith, D. L. (2005). Natural-born liars. Scientific American Mind, 16, 16–23.

  • Sober, E. (1994). From a biological point of view. Cambridge: Cambridge University Press.

  • Staffel, J. (2011). Reply to Sorensen, ‘knowledge-lies’. Analysis, 71, 300–302.

  • Wagner, E. O. (2012). Deterministic chaos and the evolution of meaning. British Journal for the Philosophy of Science, 63, 547–575.

Author information

Correspondence to Peter J. Lewis.

Cite this article

Fallis, D., Lewis, P.J. Toward a formal analysis of deceptive signaling. Synthese 196, 2279–2303 (2019). https://doi.org/10.1007/s11229-017-1536-3
