(Noisy) communication

Quantitative Marketing and Economics volume 5, pages 211–237 (2007)


Communication is central to many settings in marketing and economics. A focal attribute of communication is miscommunication. We model this key characteristic as noise in the messages communicated, so that the sender of a message is uncertain about its perception by the receiver, and then identify the strategic consequences of miscommunication. We study a model where competing senders (of different types) can invest in improving the precision of the informative but noisy message they send to a receiver, and find that there exists a separating equilibrium where senders’ types are completely revealed. Thus, although communication is noisy it delivers perfect results in equilibrium. This result stems from the fact that a sender’s willingness to invest in improving the precision of their messages can itself serve as a signal. Interestingly, the content of the messages is ignored by the receiver in such a signaling equilibrium, but plays a central role by shaping her beliefs off the equilibrium path (and thus enables separation between the types). This result also illustrates the uniqueness of the signaling model presented here. Unlike other signaling models, the suggested model does not require that the costs and benefits of the senders be correlated with their types to achieve separation. The model’s results have implications for various marketing communication tools such as advertising and sales forces.



  1.

    See the following articles in the New York Times: “Bush, Facing Criticism, Abandons Debate Stance” (September 9, 2000); “One Debate Down, Three to Go” (September 10, 2000); “Dropping All of His Objections, Bush Agrees to Panel’s Debates” (September 15, 2000); and, “Candidates Agree on Formats for Three Debates” (September 17, 2000).

  2.

    For example, in their textbook on advertising, Belch and Belch (2007) write: “Throughout the communication process, the message is subject to extraneous factors that can distort or interfere with its reception. This unplanned distortion or interference is known as noise.”

  3.

    Put differently, in any signaling model there is more than one sender’s type. However, in a standard signaling model the number of senders is limited to one. Thus, these models do not accommodate the possibility that there is more than one player whose actions are observed by the receiver.

  4.

    While the model here explores communication in general, Anand and Shachar (2006) study the specific application of advertising. As a result, that paper focuses on a quite different mechanism of information transmission. Specifically, while in the study here message precision serves as the signal, in the other study the signaling is done via media selection (i.e., targeting).

  5.

    Bhardwaj et al. (2005) show that a monopolist who offers a high quality product prefers to give up control over the message and allow the buyer to ask whatever she wants about the product. The revelation format facilitates the use of price as a signaling mechanism.

  6.

    An important strand of this literature, going back to Crawford and Sobel (1982), examines how information is strategically transmitted through messages in cheap talk games.

  7.

    For example, the information theory of communication (Shannon 1948) and subsequent papers in economics that draw on it, notably the theory of teams (Marschak and Radner 1972). In all those studies agents are non-strategic.

  8.

    The setup of our model is different in other aspects, such as the cost of sending a message. Exceptions are those that focus on deception, in which the cost of a non-truthful message is higher than the cost of a truthful one; see, for example, Sanchirico (2001) and Deneckere and Severinov (2003).

  9.

    Exceptions are Hertzendorf and Overgaard (2001) and Daughety and Reinganum (2000).

  10.

    In other words, by ignoring the cases in which both senders are of the same type (either L or H), we focus on the interesting scenario in which there is some difference between the two senders and thus one of them is better than the other from the receiver’s point of view.

  11.

    (a) Of course, when the message is so simple, the probability of miscommunication is minuscule. However, in reality messages are rarely so simple.

    (b) Like other signaling models, L can try to deceive the receiver by imitating H’s actions. However, he is not allowed to deceive in the message content. Still, as pointed out, the perception of the message is random and thus there is still a chance that the receiver will interpret a message from L as saying “I am H.”

  12.

    The last statement is true when at least one player sends a message. When neither one of them sends a message the probability of being selected is, obviously, \(\frac{1}{2}\) for each of the senders.

  13.

    Recall that a high q implies that the content of the message is more likely to indicate that this player is L.

  14.

    (a) Another way to think about the limits of the interval for c is the following. They are based on the probability that L is selected when either sender deviates from his equilibrium strategy (which is 0.5 if H deviates and 1 − q if L deviates). This reflects the fact that both senders in this game are competing over the “market share” of L.

    (b) This interval is at a higher cost level than the interval for the non-strategic equilibrium. This is not surprising: in the non-strategic equilibrium the cost should be low enough for H to send a message. In the separating equilibrium, it should be high enough to deter L from mimicking.

  15.

    See http://www.vanquish.com/press/ps_clean_email.shtml.

  16.

    For example, in many situations ad agencies find it difficult to increase the precision of their ads, because they need to ensure that the ad is memorable, attractive, etc. As the ad becomes more precise, essential aspects of memorability and attractiveness are likely to suffer.

  17.

    Notice, for example, that if \(C(q)=c(q-0.5)^{2}\), then \(C^{\prime }(0.5)=0\) for any cost parameter.

  18.

    Recall that \(q^{0}\) is the precision selected by H in the non-strategic equilibrium and that it satisfies the equation \(C^{\prime }(q^{0})=1\).

  19.

    Notice that the suggested explanation differs from the signaling theory of advertising presented by Kihlstrom and Riordan (1984) and Milgrom and Roberts (1986) in various significant ways. For example, while the standard theory assumes that the content of the ads is empty, the suggested model focuses on informative advertising. Furthermore, while in the standard model the separation is enabled by repeated purchases, here we do not require repeated purchases and the separation is based on the usefulness of ads’ content off the equilibrium path.

    Interestingly, Zhao (2000) shows that under a certain condition the result of the studies mentioned above is reversed and higher advertising is associated with a lower quality firm. The condition is that advertising spending does not serve only as a signal but also as a determinant of the size of the market (via raising awareness). Furthermore, Desai (2000) demonstrates that advertising spending can serve as a signal of quality to the retailer.

  20.

    See discussion in Bhardwaj et al. (2005).

  21.

    For example, according to Forrester Research, 26% of online retailers currently allow individuals to input product reviews on the firm’s website. Until recently, many firms reviewed these reviews and rejected those that were negative about the firm’s product. Some firms are now changing their policy. For example, web retailer Overstock.com “had been relying on its merchandising group—the employees responsible for deciding which products to sell on the site—to monitor reviews submitted by customers, but found that the group tended to approve only positive reviews. In January, the Salt Lake City-based company changed the monitoring responsibilities to its marketing team. The company now says it posts both positive and negative comments. ‘We learned that customers won’t trust the site if there are only positive reviews,’ says Tad Martin, senior vice president of merchandising and operations at Overstock.” (“Giving Reviews the Thumbs Down,” Wall Street Journal, August 4, 2005).

  22.

    We thank Dina Mayzlin for this example.

  23.

    It is well known that political candidates occasionally increase the noisiness of their messages in order to maximize their winning probability. The rationale behind such a strategy is related to the “median voter theorem,” which suggests minimal differentiation between the candidates.

  24.

    If \(q_{2}<q_{1}\), the receiver bases her decision only on \(m_{1}\) (i.e., the perceived content of the more precise message). The probability that \(m_{1}=-1\) (i.e., indicating that 2 is H) is \(1-q_{1}\). Thus, \(E(d_{2}|q_{2},q_{1})=1-q_{1}\).

    If \(q_{2}=q_{1}\), the probability that \(m_{1}=-1\) and \(m_{2}=1\) (i.e., both messages indicate that 2 is H and the receiver selects 2) is \((1-q_{1})^{2}\), and the probability that the messages contradict one another (in this case the receiver selects one of the senders randomly) is \(2q_{1}(1-q_{1})\). Thus, \(E(d_{2}|q_{2},q_{1})=(1-q_{1})^{2}+\frac{1}{2}\,2q_{1}(1-q_{1})=1-q_{1}\).
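The algebra in this footnote is easy to verify mechanically. A minimal sketch in Python (assuming, as in the text, that each message is perceived correctly with probability equal to its precision, and that ties are broken by a fair coin; the function name is ours):

```python
# Check footnote 24: sender 1 is H and sends with precision q1,
# sender 2 is L. d2 = 1 means the receiver selects sender 2.

def expected_d2_equal_precision(q1):
    """E(d_2 | q_2 = q_1): both messages are equally precise."""
    both_indicate_2 = (1 - q1) ** 2   # both messages misperceived
    contradict = 2 * q1 * (1 - q1)    # messages disagree -> fair coin
    return both_indicate_2 + 0.5 * contradict

# The footnote's claim: this collapses to 1 - q1, the same probability
# as when the receiver uses only the more precise message.
for q1 in (0.6, 0.75, 0.9):
    assert abs(expected_d2_equal_precision(q1) - (1 - q1)) < 1e-12
```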


  1. Anand, B., & Shachar, R. (2005). Advertising, the Matchmaker, http://www.tau.ac.il/~rroonn/Papers/match05.pdf.

  2. Anand, B., & Shachar, R. (2006). Targeted Advertising as a Signal. Mimeo.

  3. Ayres, I., & Funk, M. (2002). Marketing privacy: A solution for the blight of telemarketing (and Spam and Junk Mail). Mimeo.

  4. Belch, G., & Belch, M. (2007). Advertising and promotion: An integrated marketing communication perspective. New York, USA: McGraw-Hill/Irwin.


  5. Bhardwaj, P., Yuxin, C., & Godes, D. (2005). Buyer versus seller-initiated information revelation. Mimeo.

  6. Bull, J., & Watson, J. (2004). Hard evidence and mechanism design. Mimeo.

  7. Crawford, V., & Sobel, J. (1982). Strategic information transmission. Econometrica, 50(5), 1431–1451.


  8. Daughety, A., & Reinganum, J. (2000). Appealing judgments. Rand Journal of Economics, 31(3), 502–525.


  9. Deneckere, R., & Severinov, S. (2003). Mechanism design and communication costs. Mimeo.

  10. Desai, P. (2000). Multiple messages to retain retailers: Signaling new product demand. Marketing Science, 19(4), 381–389.


  11. Fishman, M., & Hagerty, K. (1990). The optimal amount of discretion to allow in disclosure. Quarterly Journal of Economics, 105(2), 427–444.


  12. Fudenberg, D., & Tirole, J. (1991). Game theory. Cambridge, USA: The MIT Press.


  13. Glazer, J., & Rubinstein, A. (2000). Debates and decisions, on a rationale of argumentation. Games and Economic Behavior, 36(2), 158–173.


  14. Hertzendorf, M. N., & Overgaard, P. (2001). Price competition and advertising signals - signaling by competing senders. Journal of Economics and Management Strategy, 10(4), 621–662.


  15. Jacoby, J., & Hoyer, W. (1982). Viewer miscomprehension of televised communication: Selected findings. Journal of Marketing, 46(4), 12–26.


  16. Jacoby, J., & Hoyer, W. (1989). The comprehension/miscomprehension of print communication: Selected findings. Journal of Consumer Research, 15(4), 434–443.


  17. Kihlstrom, R., & Riordan, M. (1984). Advertising as a signal. Journal of Political Economy, 92(3), 427–450.


  18. Lipman, B., & Seppi, D. (1995). Robust inference in communication games with partial provability. Journal of Economic Theory, 66(2), 370–405.


  19. Marschak, J., & Radner, R. (1972). An economic theory of teams. New Haven: Yale University Press.


  20. Milgrom, P., & Roberts, J. (1986). Price and advertising signals of product quality. Journal of Political Economy, 94(4), 796–821.


  21. Okuno-Fujiwara, M., Postlewaite, A., & Suzumura, K. (1990). Strategic information revelation. Review of Economic Studies, 57(1), 25–47.


  22. Sanchirico, C. (2001). Relying on the information of interested—and potentially dishonest—parties. American Law and Economics Review, 3(2), 320–357.


  23. Shannon, C. (1948). A mathematical theory of communication. Bell System Technical Journal, 27, 379–423.


  24. Zhao, H. (2000). Raising awareness and signaling quality to uninformed consumers: A price-advertising model. Marketing Science, 19(4), 390–396.


Acknowledgements


We are grateful to Adam Brandenburger, Eddie Dekel, Chaim Fershtman, Elon Kohlberg, Sridhar Moorthy, Barry Nalebuff, Ariel Pakes, Ben Polak, Julio Rotemberg, Ariel Rubinstein, Dennis Yao, and seminar participants at Hebrew University, Tel-Aviv University, Washington University, Yale University, and various conferences for helpful comments. Our editor, Rajiv Lal, and reviewers were very helpful with their constructive comments and guidance and we appreciate their effort. Anand gratefully acknowledges financial support from the Division of Research at Harvard Business School.

Author information



Corresponding author

Correspondence to Ron Shachar.



Appendix A: Uniqueness of the simple model

There are two types of separating equilibria: (1) those in which senders’ strategies depend on their types, and (2) those in which senders’ strategies do not depend on their types.

In Appendix A.1 we show that for the first case there is a unique sequential equilibrium (the one presented in Section 2) in which c > 1 − q.

In Appendix A.2 we show that for the second case there is a unique sequential equilibrium in which c < 1 − q.

Thus, for c > 1 − q there is a unique separating (sequential) equilibrium as stated in Proposition 2.

A.1 Strategies depend on senders’ types

Here, we study a separating equilibrium in which senders’ strategies depend on their types.

We start by characterizing the only set of consistent beliefs in such a separating equilibrium. Then we show that for these beliefs and the given parameter values, there is a unique separating equilibrium.

In any separating equilibrium, either H sends a message and L does not, or the reverse is true. We consider each case in turn.

Case 1

H sends a message, and L does not.

Let H send a message with probability \(1-\varepsilon _{H}\), and L send a message with probability \(\varepsilon _{L}\) (both \(\varepsilon _{H}\) and \(\varepsilon _{L}\) are greater than 0). Recall that the prior probability that player s is H is \(\mu _{s}^{0}(r_{s})\). Then (using Bayes’ rule) the receiver’s beliefs at each of her four information sets are:

$$\mu ^{\varepsilon }_{s} = {\left\{ {\begin{array}{*{20}l} {{\frac{{(1 - \varepsilon _{H} )(1 - \varepsilon _{L} )\mu ^{0}_{s} (r_{s} )}}{{(1 - \varepsilon _{H} )(1 - \varepsilon _{L} )\mu ^{0}_{s} (r_{s} ) + \varepsilon _{H} \varepsilon _{L} (1 - \mu ^{0}_{s} (r_{s} ))}}} \hfill} & {{{\text{if}}} \hfill} & {{A_{s} = (1,0)} \hfill} \\ {{\frac{{\varepsilon _{H} \varepsilon _{L} \mu ^{0}_{s} (r_{s} )}}{{\varepsilon _{H} \varepsilon _{L} \mu ^{0}_{s} (r_{s} ) + (1 - \varepsilon _{H} )(1 - \varepsilon _{L} )(1 - \mu ^{0}_{s} (r_{s} ))}}} \hfill} & {{{\text{if}}} \hfill} & {{A_{s} = (0,1)} \hfill} \\ {{\frac{{(1 - \varepsilon _{H} )\varepsilon _{L} \mu ^{0}_{s} (r_{s} )}}{{(1 - \varepsilon _{H} )\varepsilon _{L} \mu ^{0}_{s} (r_{s} ) + (1 - \varepsilon _{H} )\varepsilon _{L} (1 - \mu ^{0}_{s} (r_{s} ))}}} \hfill} & {{{\text{if}}} \hfill} & {{A_{s} = (1,1)} \hfill} \\ {{\frac{{\varepsilon _{H} (1 - \varepsilon _{L} )\mu ^{0}_{s} (r_{s} )}}{{\varepsilon _{H} (1 - \varepsilon _{L} )\mu ^{0}_{s} (r_{s} ) + (1 - \varepsilon _{L} )\varepsilon _{H} (1 - \mu ^{0}_{s} (r_{s} ))}}} \hfill} & {{{\text{if}}} \hfill} & {{A_{s} = (0,0)} \hfill} \\ \end{array} } \right\}}$$

where \(A_{s}\) is an indicator vector in which the first variable is equal to 1 if s sends a message and zero otherwise, and the second variable is equal to 1 if s’s competitor sends a message and zero otherwise.

It is straightforward to show that the limit of \(\mu _{s}^{\varepsilon }\) as \(\varepsilon _{H},\varepsilon _{L}\rightarrow 0\) is:

$$ \mu _{s}=\left\{ \begin{array}{c} {\kern20pt}1\;\text{ if }\;A_{s}=(1,0) \\ {\kern20pt}0\;\text{ if }\;A_{s}=(0,1) \\ \mu _{s}^{0}(r_{s})\;\text{ if }\;A_{s}=(1,1) \\[2pt] \mu _{s}^{0}(r_{s})\;\text{ if }\;A_{s}=(0,0) \end{array} \right\} $$

which are exactly the beliefs in (B).
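The convergence of these beliefs can be checked numerically. A minimal sketch (the prior \(\mu _{s}^{0}\) and the tremble sizes below are hypothetical values chosen for illustration):

```python
# Numeric check that the four Bayes-rule beliefs above converge to
# (1, 0, mu0, mu0) as the trembles vanish.

def beliefs(eps_H, eps_L, mu0):
    """Posterior that sender s is H at each information set A_s = (own, rival)."""
    def post(p_H, p_L):  # p_H = P(A_s | s is H), p_L = P(A_s | s is L)
        return p_H * mu0 / (p_H * mu0 + p_L * (1 - mu0))
    return {
        (1, 0): post((1 - eps_H) * (1 - eps_L), eps_H * eps_L),
        (0, 1): post(eps_H * eps_L, (1 - eps_H) * (1 - eps_L)),
        (1, 1): post((1 - eps_H) * eps_L, (1 - eps_H) * eps_L),
        (0, 0): post(eps_H * (1 - eps_L), (1 - eps_L) * eps_H),
    }

mu0 = 0.3                      # hypothetical prior mu_s^0(r_s)
mu = beliefs(1e-8, 1e-8, mu0)  # small trembles
assert abs(mu[(1, 0)] - 1.0) < 1e-6
assert abs(mu[(0, 1)] - 0.0) < 1e-6
assert abs(mu[(1, 1)] - mu0) < 1e-12   # type-dependent terms cancel
assert abs(mu[(0, 0)] - mu0) < 1e-12
```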

Recall that under these beliefs, the following table represents the net payoff functions of both senders.

Now, one can see that when c is not in the interval [1 − q, 0.5], there is no separating equilibrium in which H sends a message and L does not: (a) when c < 1 − q, sender L finds it profitable to imitate H; (b) when c > 0.5, sender H deviates.
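The two deviation conditions can be checked mechanically. A minimal sketch (assuming, as in the model, that a sender’s net payoff is his selection probability minus the messaging cost c, with selection by message content when both send and a fair coin when neither sends; the function name is ours):

```python
# Deviation check for Case 1 (H sends, L does not) under beliefs (B).

def is_separating(q, c):
    """True iff neither type gains by deviating from (H sends, L silent)."""
    H_eq = 1.0 - c          # only H sends: believed to be H, selected for sure
    H_dev = 0.5             # H goes silent: nobody sends, fair-coin selection
    L_eq = 0.0              # L silent: the lone sender is believed to be H
    L_dev = (1.0 - q) - c   # L mimics: both send, content decides; L wins w.p. 1-q
    return H_eq >= H_dev and L_eq >= L_dev

q = 0.8   # message precision; the interval for c is [1-q, 0.5] = [0.2, 0.5]
assert is_separating(q, c=0.3)
assert not is_separating(q, c=0.1)   # c < 1-q: L profitably imitates H
assert not is_separating(q, c=0.6)   # c > 0.5: H prefers not to send
```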

Case 2

L sends a message, and H does not.

In this case, it is easy to show that the only consistent beliefs are:

$$ \mu _{s}=\left\{ \begin{array}{c} {\kern20pt}0\;\text{ if }\;A_{s}=(1,0) \\ {\kern20pt}1\;\text{ if }\;A_{s}=(0,1) \\ \mu _{s}^{0}(r_{s})\;\text{ if }\;A_{s}=(1,1) \\[2pt] \mu _{s}^{0}(r_{s})\;\text{ if }\;A_{s}=(0,0) \end{array} \right\} $$

The following table represents the net payoff functions of both senders.

It is clear that in this case, it is not optimal for either sender to send a message.

A.2 Strategies do not depend on senders’ types

Lemma 5

A separating equilibrium in which senders’ strategies do not depend on their types exists if and only if 1 − q > c.


Without loss of generality, consider the case where sender 1 sends a message and sender 2 does not.

First, we characterize beliefs. To obtain consistent beliefs, we describe the players’ strategies.

Player 1 sends a message with probability \(1-\varepsilon _{1H}\) if he is H and \(1-\varepsilon _{1L}\) if he is L. Player 2 sends a message with probability \(\varepsilon _{2H}\) if he is H and \(\varepsilon _{2L}\) if he is L. Denote the prior probability that player 1 is H by p. Then (using Bayes’ rule) the beliefs that player 1 is H are:

$$ \mu _{1}^{\varepsilon }=\left\{\begin{array}{*{20}r}{\frac{(1-\varepsilon _{1H})(1-\varepsilon _{2L})p}{(1-\varepsilon _{1H})(1-\varepsilon_{2L})p+(1-\varepsilon _{1L})(1-\varepsilon _{2H})(1-p)}\;\text{ if }\;A_{1}=(1,0)} \\{\frac{\varepsilon _{1H}\varepsilon _{2L}p}{\varepsilon _{1H}\varepsilon_{2L}p+\varepsilon _{1L}\varepsilon _{2H}(1-p)}\;\text{ if }\;A_{1}=(0,1)} \\{\frac{(1-\varepsilon _{1H})\varepsilon _{2L}p}{(1-\varepsilon_{1H})\varepsilon _{2L}p+(1-\varepsilon _{1L})\varepsilon _{2H}(1-p)}\;\text{ if }\;A_{1}=(1,1)} \\{\frac{\varepsilon _{1H}(1-\varepsilon _{2L})p}{\varepsilon _{1H}(1-\varepsilon _{2L})p+\varepsilon_{1L}(1-\varepsilon _{2H})(1-p)} \;\text{ if }\;A_{1}=(0,0)}\end{array}\right\} $$

This can be rewritten as:

$$ \mu _{1}^{\varepsilon }=\left\{\begin{array}{*{20}r}{\frac{p}{p+\frac{(1-\varepsilon _{1L})(1-\varepsilon _{2H})}{(1-\varepsilon_{1H})(1-\varepsilon _{2L})}(1-p)}\;\text{ if }\;A_{1}=(1,0)} \\{\frac{p}{p+\frac{\varepsilon _{1L}\varepsilon _{2H}}{\varepsilon_{1H}\varepsilon _{2L}}(1-p)}\;\text{ if }\;A_{1}=(0,1)} \\{\frac{p}{p+\frac{(1-\varepsilon _{1L})\varepsilon _{2H}}{(1-\varepsilon_{1H})\varepsilon _{2L}}(1-p)}\;\text{ if }\;A_{1}=(1,1)} \\{\frac{p}{p+\frac{\varepsilon _{1L}(1-\varepsilon _{2H})}{\varepsilon _{1H}(1-\varepsilon_{2L})}(1-p)}\;\text{ if }\;A_{1}=(0,0)}\end{array}\right\} $$

It is clear that the limit of \(\mu _{1}^{\varepsilon }\) for \(A_{1}=(1,0)\) is p. The limits of the other elements can be either 0 or 1 (depending on the ratios \(\frac{\varepsilon _{1L}}{\varepsilon _{1H}}\) and \(\frac{\varepsilon _{2H}}{\varepsilon _{2L}}\)).
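This ratio dependence is easy to see numerically. A minimal sketch (the tremble sequences below are hypothetical choices for illustration):

```python
# The limit of the A_1 = (1,1) belief depends on the tremble ratios.

def belief_11(eps_1H, eps_1L, eps_2H, eps_2L, p):
    ratio = ((1 - eps_1L) * eps_2H) / ((1 - eps_1H) * eps_2L)
    return p / (p + ratio * (1 - p))

p, t = 0.4, 1e-6   # hypothetical prior and tremble scale
# eps_2H vanishing faster than eps_2L (ratio -> 0) pushes the belief to 1 ...
assert belief_11(t, t, t**2, t, p) > 0.999
# ... while eps_2L vanishing faster (ratio -> infinity) pushes it to 0.
assert belief_11(t, t, t, t**2, p) < 0.001
```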

Using these beliefs, we can now check for optimality of sender strategies. Note that we can ignore the case where the limit of \(\mu _{1}^{\varepsilon }\) for A 1 = (0,0) is 1, since in this case, it is optimal for any type of player 1 to deviate. Thus, we only focus on cases where the limit of \(\mu _{1}^{\varepsilon }\) for A 1 = (0,0) is 0.

Thus, we are interested in two cases [notice also that the limit of \(\mu _{1}^{\varepsilon }\) for A 1 = (0,1) is irrelevant for the Nash equilibrium].

Case 1

μ is given by:

$$ \mu =\left\{ \begin{array}{r} p \quad\text{if}\quad A_{1}=(1,0) \\ \text{Not relevant} \quad\text{if}\quad A_{1}=(0,1) \\ 0 \quad\text{if}\quad A_{1}=(1,1) \\ 0 \quad\text{if}\quad A_{1}=(0,0) \end{array} \right\} $$

In this case, the expected net payoffs to senders are:

Irrespective of the choice by nature, there is no (separating) equilibrium in this case: (a) If p > c, then it is optimal for 2 to deviate. (b) But if p < c, then it is optimal for 1 to deviate.

Case 2

μ is given by:

$$ \mu =\left\{ \begin{array}{r} p\quad\text{if}\quad A_{1}=(1,0) \\ \text{Not relevant}\quad\text{if}\quad A_{1}=(0,1) \\ 1\quad\text{if}\quad A_{1}=(1,1) \\ 0\quad\text{if}\quad A_{1}=(0,0) \end{array} \right\} $$

In this case, the expected net payoffs to senders are:

In this case, 2 has no incentive to deviate. The conditions that assure that 1 will not deviate are: (a) if he is of type H, it must be that q > c; and (b) if he is of type L, it must be that 1 − q > c. Thus, a necessary condition to sustain a (separating) equilibrium is that 1 − q > c.□

Appendix B: Non-informative messages

Here, we show that when the messages are non-informative (i.e., q = 0.5), there is no separating (sequential) equilibrium.

Lemma 6

When q = 0.5 a separating sequential equilibrium does not exist.


There are two types of potential separating equilibria: (1) H sends a message and L does not, and (2) the other way around.

Case 1

H sends a message with probability \(1-\varepsilon _{H}\) and L sends a message with probability \(\varepsilon _{L}\). It is easy to show that the consistent beliefs are:

$$ \mu _{s}=\left\{\begin{array}{*{20}l}{1\quad\text{ if }\;A_{s}=(1,0)} \\{0\quad\text{ if }\;A_{s}=(0,1)} \\{ 0.5\;\text{ otherwise}}\end{array}\right\} $$

Thus, the expected net payoffs are:

When c < 0.5, the only equilibrium of this game is (1,1), and when c > 0.5, the only equilibrium of this game is (0,0). Thus, there is no separating equilibrium that is consistent with these beliefs.

Case 2

L sends a message with probability \(1-\varepsilon _{L}\) and H sends a message with probability \(\varepsilon _{H}\). It is easy to show that the consistent beliefs are:

$$ \mu _{s}=\left\{\begin{array}{*{20}l}{0\quad\text{ if }\;A_{s}=(1,0)} \\{1\quad\text{ if }\;A_{s}=(0,1)} \\{0.5\;\text{ otherwise}}\end{array}\right\} $$

and the only equilibrium is (0,0) irrespective of the cost.□

Appendix C: Endogenous precision: non-strategic equilibrium

Proposition 3

(Non-strategic equilibrium)


We start by demonstrating that sending a non-informative message (q = 0.5) is a dominant strategy for L. Then we show that as a result, it is optimal for H to send an informative message (q > 0.5). To simplify the presentation, assume (without loss of generality) that player 1 is H.

If \(q_{2}\leq q_{1}\) (i.e., the message sent by L is at most as precise as the message sent by H), the probability that L is selected (irrespective of \(q_{2}\)) is \(1-q_{1}\) (see footnote 24).

And, if \(q_{2}>q_{1}\), the probability that L is selected is \(1-q_{2}\), which is lower than \(1-q_{1}\).

Thus, the probability that L is selected is a non-increasing function of \(q_{2}\), while his cost is an increasing function of \(q_{2}\). It follows that \(q_{2}=0.5\) is a dominant strategy for L.

Next, we show that if \(C^{\prime }(0.5)<1\), it is optimal for H to send an informative message (q > 0.5). It is easy to show that \(E(d_{1}|q_{1},q_{2}=0.5)=q_{1}\) and thus:

$$ \frac{\partial E\left[ \pi _{1}(q_{1});q_{2}=0.5\right] }{\partial q_{1}} =1-C^{\prime }(q_{1}) $$

Let \(q^{0}\) denote the optimal precision for H. Specifically, \(q^{0}\) satisfies the condition \(1-C^{\prime }(q^{0})=0\). Since \(C^{\prime \prime }>0\) and \(C^{\prime }(0.5)<1\), it immediately follows that \(q^{0}>0.5\).□
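For the quadratic cost of footnote 17, \(C(q)=c(q-0.5)^{2}\), the first-order condition has a closed form. A minimal sketch (the cost parameter value is a hypothetical choice):

```python
# Optimal precision for H (against q_2 = 0.5) with C(q) = c*(q - 0.5)**2.
# FOC: 1 - C'(q0) = 0 and C'(q) = 2*c*(q - 0.5), so q0 = 0.5 + 1/(2*c).

def q0(c):
    return min(0.5 + 1.0 / (2.0 * c), 1.0)   # precision cannot exceed 1

c = 2.0                                      # hypothetical cost parameter
assert q0(c) == 0.75
assert abs(2 * c * (q0(c) - 0.5) - 1.0) < 1e-12   # FOC holds: C'(q0) = 1
assert q0(c) > 0.5                                # H's message is informative
```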

Appendix D: Endogenous precision: strategic equilibrium

Lemma 7

There exists a \(\underline{q}\), where \(0.5<\underline{q}<1\), that satisfies the condition \((1-\underline{q})-C(\underline{q})=0\). Furthermore, for any \(q>\underline{q}\), \((1-q)-C(q)<0\).


The function (1 − q) − C(q) is decreasing in q, and is positive at q = 0.5 and negative at q = 1.□

Lemma 8

There exists a \(\overline{q}\), where \(\overline{q}>0.5\), that satisfies the condition \(1-C(\overline{q})=\max\limits_{0.5\leq q\leq 1}q-C(q)\). Furthermore, for any \(q<\overline{q}\), \(1-C(q)>\max\limits_{0.5\leq q\leq 1}q-C(q)\).


The function \(1-C(q)-\left[ q^{0}-C(q^{0})\right] \), where \(q^{0}-C(q^{0})=\max\limits_{0.5\leq q\leq 1}q-C(q)\), is decreasing in q and is positive at q = 0.5.□

Lemma 9

\(\overline{q}>\underline{q}\).


The function \(1-C(q)-\left[ q^{0}-C(q^{0})\right] \) is decreasing in q and equals zero at \(\overline{q}\). Next, we show that it is positive at \(\underline{q}\).

If \(\underline{q}<q^{0}\), \(1-C(\underline{q})-\left[ q^{0}-C(q^{0})\right] =\left[ 1-q^{0}\right] +\left[ C(q^{0})-C(\underline{q})\right] \), where both elements are positive.

If \(\underline{q}>q^{0}\), \(1-C(\underline{q})-\left[ q^{0}-C(q^{0})\right] >\left[ 1-\underline{q}\right] -C(\underline{q})+C(q^{0})>\left[ 1-\underline{q}\right] -C(\underline{q})=0\).□
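Lemmas 7–9 can be illustrated numerically for the quadratic cost of footnote 17, \(C(q)=c(q-0.5)^{2}\). A minimal sketch using bisection (the cost parameter is a hypothetical choice):

```python
# Illustrate q_low (Lemma 7) and q_bar (Lemma 8) for C(q) = c*(q - 0.5)**2.

def bisect(f, lo, hi, tol=1e-12):
    """Root of f on [lo, hi], assuming f(lo) and f(hi) have opposite signs."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if f(lo) * f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

c = 2.0                                   # hypothetical cost parameter
C = lambda q: c * (q - 0.5) ** 2
q_opt = 0.5 + 1.0 / (2.0 * c)             # maximizer of q - C(q), from the FOC
q_low = bisect(lambda q: (1 - q) - C(q), 0.5, 1.0)                   # Lemma 7
q_bar = bisect(lambda q: 1 - C(q) - (q_opt - C(q_opt)), 0.5, 1.0)    # Lemma 8

assert 0.5 < q_low < 1.0        # Lemma 7: interior root exists
assert q_bar > q_low            # Lemma 9
```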

We are now ready to prove Proposition 4:


(Proposition 4) To simplify the presentation, assume (without loss of generality) that player 1 is H. Given the beliefs and \(q_{1}=q^{\ast }\), where \(\underline{q}<q^{\ast }<\overline{q}\), L will optimally choose \(q_{2}=0.5\) since: (a) choosing any q such that \(0.5<q<q^{\ast }\) or \(q>q^{\ast }\) involves a cost without any revenues, and (b) choosing \(q_{2}=q^{\ast }\) leads to losses by Lemma 7 (since \(q^{\ast }>\underline{q}\)).

Given the beliefs and \(q_{2}=0.5\), H will optimally choose \(q_{1}=q^{\ast }\) since the highest payoff from any \(q\neq q^{\ast }\) is \(q^{0}-C(q^{0})\), which is smaller than the equilibrium payoff \(1-C(q^{\ast })\) by Lemma 8 (since \(q^{\ast }<\overline{q}\)).

It is trivial to show that beliefs agree with senders’ strategies.□


Anand, B.N., Shachar, R. (Noisy) communication. Quant Market Econ 5, 211–237 (2007). https://doi.org/10.1007/s11129-007-9028-2



Keywords

  • Information
  • Signaling
  • Communication

JEL Classification

  • C72
  • D82
  • D83
  • M31