Persuasion under ambiguity


This paper introduces a receiver who perceives ambiguity into a binary model of Bayesian persuasion. The sender has a well-defined prior, while the receiver considers an interval of priors and maximizes a convex combination of the worst and best expected payoffs (\(\alpha \)-maxmin preferences). We characterize the sender’s optimal signal and find that the receiver’s payoff differences across states given each action (sensitivities) play a fundamental role in the characterization and the comparative statics. If the sender’s preferred action is the least (most) sensitive one, then the sender’s equilibrium payoff, as well as the sender’s preferred degree of receiver ambiguity, is increasing (decreasing) in the receiver’s pessimism. We document a tendency for ambiguity-sensitive receivers to be more difficult to persuade.



Change history

  • 17 August 2020

    In Proposition 4, lines 3, 4 and 8, 9 were scrambled by mistake during the pagination process of the online published article.


Notes

  1.

Ellsberg (1961) was the first to point out that when individuals perceive uncertainty about probabilities (ambiguity), they frequently do not behave as if they were governed by a well-defined prior. The literature also refers to uncertainty about probabilities as “Knightian uncertainty,” in reference to Knight (1921).

  2.

    This updating rule is known as “Full Bayesian updating,” see Pires (2002).

  3.

    See Hurwicz (1951), Jaffray (1989) and Ghirardato et al. (2004). A prominent special case of \(\alpha \)-maxmin preferences is the maxmin model, which is axiomatized in Gilboa and Schmeidler (1989).

  4.

    Another similarity is that the optimal signal under receiver ambiguity collapses the receiver’s prior set such that she is certain (i.e., she perceives no ambiguity) about the state, whenever she chooses the low action.

  5.

    In the common prior case, the optimal signal is characterized by the receiver’s costs of type I and type II errors, i.e., the receiver’s payoff differences across actions given the state. In contrast, in our model, with everything else fixed, the optimal signal is characterized by the costs of type II errors and the receiver’s payoff differences across states given the action.

  6.

    Riedel and Sass (2014) introduce the class of Ellsberg games where players may choose ambiguous strategies (sets of probability distributions over their pure strategies) in addition to classical mixed strategies (single probability distributions over pure strategies). The ambiguous communication device in Beauchene et al. (2019) can be considered as such an ambiguous strategy and their setup as an “Ellsberg communication game.”

  7.

    Sometimes, it is argued that the size of the prior set can be varied to incorporate different ambiguity attitudes. For instance, a “small” prior set would represent more optimistic responses to ambiguity. In this case, maxmin expected utility preferences would not represent extreme ambiguity aversion. As our analysis shows, however, in a persuasion context important differences between the \( \alpha \)-maxmin and maxmin model emerge in the updating stage of beliefs. This provides a justification for our emphasis on the \(\alpha \)-maxmin model.

  8.

    The \(\alpha \)-maxmin model explicitly distinguishes between ambiguity and ambiguity attitude by the ambiguity attitude parameter \(\alpha \). Ghirardato et al. (2004) attempted to axiomatize this model. However, as shown by Eichberger et al. (2011), their axiomatization has some flaws if the state space is finite.

  9.

    We parameterize the receiver’s prior set in this way because it is convenient for our comparative statics analysis. This is discussed in more detail in Sect. 3.3. The assumption that the limits of the prior set are different from 0 and 1 is essentially without loss. We make this assumption because there are sometimes discontinuities at \(\varepsilon =m\), which makes the analysis more tedious (but with nearly identical results). Furthermore, we assume a closed and convex set of priors. As the analysis shows, only the extreme points of the set of priors matter. Hence, the results would be the same for any closed set of priors. However, the underlying axiomatizations of the multiple prior model (Gilboa and Schmeidler 1989; Ghirardato et al. 2004) imply that the set of priors is also convex.

  10.

    For convenience of exposition, in the comparative statics analysis below, we sometimes refer to the parameter \(\varepsilon \) as the receiver’s perceived ambiguity.

  11.

    An \(\alpha =1/2\) receiver is ambiguity neutral in our model in the sense that her behavior under her prior set does not respond to our “midpoint preserving” expansions of the prior set, \([m-\varepsilon ,m+\varepsilon ]\). In fact, the utility \( U_{[m-\varepsilon ,m+\varepsilon ]}(a)\) of such a receiver is constant in \( \varepsilon \) for each action a.
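The ambiguity-neutrality property in note 11 is easy to verify numerically: expected payoff is linear in the prior, so the worst and best cases are attained at the endpoints of the prior interval, and with \(\alpha =1/2\) they average out to the midpoint evaluation. A minimal sketch (the payoff numbers are illustrative, not from the paper):

```python
# Check of note 11: with alpha = 1/2, the alpha-maxmin utility of an action
# is constant under midpoint-preserving expansions of the prior set
# [m - eps, m + eps]. Payoff values u_H, u_L are illustrative assumptions.

def expected_payoff(mu, u_H, u_L):
    """Expected payoff of an action when the prior probability of omega_H is mu."""
    return mu * u_H + (1 - mu) * u_L

def alpha_maxmin(alpha, m, eps, u_H, u_L):
    """alpha * worst-case + (1 - alpha) * best-case expected payoff.

    Expected payoff is linear in the prior, so the extremes over the
    interval [m - eps, m + eps] are attained at its endpoints.
    """
    lo = expected_payoff(m - eps, u_H, u_L)
    hi = expected_payoff(m + eps, u_H, u_L)
    return alpha * min(lo, hi) + (1 - alpha) * max(lo, hi)

m, u_H, u_L = 0.4, 1.0, -2.0   # illustrative parameters
neutral = [alpha_maxmin(0.5, m, eps, u_H, u_L) for eps in (0.0, 0.1, 0.2, 0.3)]
# With alpha = 1/2, every value equals the expected payoff under the midpoint m.
assert all(abs(u - neutral[0]) < 1e-12 for u in neutral)

# With alpha = 0.8 (pessimistic), utility strictly decreases as eps grows.
pess = [alpha_maxmin(0.8, m, eps, u_H, u_L) for eps in (0.0, 0.1, 0.2, 0.3)]
assert all(a > b for a, b in zip(pess, pess[1:]))
```

The contrast between the two assertions mirrors the text: only midpoint-preserving expansions leave an \(\alpha =1/2\) receiver's choices unaffected, while a pessimistic receiver's utility responds to \(\varepsilon \).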

  12.

    This assumption is without loss as long as the receiver is restricted to pure strategies, and then merely implies that we do not need to specify the receiver’s behavior as a part of our equilibrium characterizations. We discuss the implications of allowing the receiver to choose probabilistically in the concluding remarks.

  13.

    This nomenclature suggests an interpretation of state \(\omega _L\) as null hypothesis and of action \(a=1\) as the action consistent with the alternative hypothesis. For example, if the receiver is a patient and the sender is a physician, the null hypothesis \(\omega _L\) could be that the receiver is healthy, and action \(a=1\) could mean that the patient is subjected to treatment.

  14.

    “Comonotonic” stands for “common monotonicity”: two actions a and \(a'\) are said to be comonotonic if there are no states \(\omega , \omega '\) such that \(u(a, \omega )<u(a,\omega ')\) and \(u(a', \omega )>u(a',\omega ')\).

  15.

    It follows from a straightforward calculation that a signal is uninformative if and only if \(\pi _{1L}=\pi _{1H}\).
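The calculation behind note 15 is a one-line application of Bayes' rule: when \(\pi _{1L}=\pi _{1H}\), the likelihood of each signal realization is the same in both states, so the posterior equals the prior. A quick sketch with illustrative numbers:

```python
# Note 15: if pi_1L = pi_1H, the posterior after either signal realization
# equals the prior, so the signal is uninformative. Numbers are illustrative.

def posterior_after_1(prior, pi_1H, pi_1L):
    """P(omega_H | s = 1) by Bayes' rule."""
    return prior * pi_1H / (prior * pi_1H + (1 - prior) * pi_1L)

def posterior_after_0(prior, pi_1H, pi_1L):
    """P(omega_H | s = 0) by Bayes' rule."""
    return prior * (1 - pi_1H) / (prior * (1 - pi_1H) + (1 - prior) * (1 - pi_1L))

prior, p = 0.3, 0.6                      # pi_1L = pi_1H = p
assert abs(posterior_after_1(prior, p, p) - prior) < 1e-12
assert abs(posterior_after_0(prior, p, p) - prior) < 1e-12

# Conversely, any gap pi_1H > pi_1L spreads the posteriors around the prior.
assert posterior_after_1(prior, 0.9, 0.2) > prior > posterior_after_0(prior, 0.9, 0.2)
```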

  16.

    We omit the proof of this claim. The main difference between both cases is that solving for the optimal value of \(\pi _{1L}\) in the case of a common prior requires solving a linear equation, while a quadratic equation must be solved in the case of comonotonic actions. The reader is referred to Appendix A for the explicit solution to this equation.

  17.

    Notice, however, that in the special cases of an ambiguity neutral receiver (\(\alpha =1/2\)) and equal sensitivities (\(\Delta _{0}=\Delta _{1}\)), the characterization under non-comonotonic actions aligns with the common prior case.

  18.

    Our emphasis on sensitivity as the driving force is not artificial. If one differentiates \(G(\mu _{1},\mu _{2})\) with respect to \(\alpha \) without imposing our assumption \(\Delta _{L},\Delta _{H}>0\), it explicitly follows that the derivative shares sign with \(|\Delta _{0}|-|\Delta _{1}|\).

  19.

    On the other hand, our exercise only gives a partial notion of the effect of expanding the upper and lower bounds of the prior set at different rates. An alternative, more exhaustive exercise would trace level curves in terms of the upper and lower bounds, such that the optimal value of \(\pi _{1L}\) is kept constant. The non-linearity of Bayes’ rule and the relatively large number of parameters in our model, however, make this exercise exceedingly complex, to the point that it becomes difficult to reach conclusions beyond the ones we obtain with the approach taken here.

  20.

    The concavity of Bayes’ rule in the prior in this case is well known and can be verified by a simple calculation.
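The "simple calculation" in note 20 can be sketched as follows. Assuming \(\pi _{1H}=1\), as in the proofs in Appendix B, the posterior after \(s=1\) is \(\beta (\mu )=\mu /(\mu +(1-\mu )\pi _{1L})\), and \(\beta ''(\mu )=-2\pi _{1L}(1-\pi _{1L})/(\pi _{1L}+\mu (1-\pi _{1L}))^{3}<0\). A numerical midpoint check of concavity:

```python
# Note 20: with pi_1H = 1, the posterior beta(mu) = mu / (mu + (1 - mu) * pi_1L)
# is concave in the prior mu. We verify the midpoint inequality
# beta((mu1 + mu2) / 2) >= (beta(mu1) + beta(mu2)) / 2 over a grid.

def beta(mu, pi_1L):
    """Posterior on omega_H after s = 1, given pi_1H = 1."""
    return mu / (mu + (1 - mu) * pi_1L)

pi_1L = 0.4                                 # illustrative signal parameter
grid = [i / 50 for i in range(1, 50)]       # interior priors
for mu1 in grid:
    for mu2 in grid:
        midpoint = beta((mu1 + mu2) / 2, pi_1L)
        chord = (beta(mu1, pi_1L) + beta(mu2, pi_1L)) / 2
        assert midpoint >= chord - 1e-12    # concavity: graph lies above chords
```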

  21.

    In passing, notice that in these cases, the optimal signal is again characterized by the ratio of the costs of type I and type II errors, as in the common prior counterpart to our model.

  22.

    The apparent paradox of the sender strictly preferring an ambiguity neutral receiver to perceive no ambiguity is due to the updating of beliefs. An ambiguity neutral receiver does not respond to our “midpoint preserving” expansions of the prior set when acting under her prior set. However, as explained here, once the change in the prior set goes through Bayes’ rule, it becomes “midpoint reducing” and then affects the ambiguity neutral receiver to the sender’s detriment.

  23.

    Naturally, the type of conclusion we obtain depends on the fact that our exercise consists of expanding the upper and lower bounds of the receiver’s prior set uniformly. If we allowed the upper and lower bounds to expand at different rates, however, the conclusion would be of a similar flavor. In this case, given \(\alpha =1/2\) or \(\Delta _{0}=\Delta _{1}\), the upper bound would need to change more quickly than the lower bound for the sender to be indifferent as the receiver’s prior set expands. This also reflects, in a related sense, a sender inclination toward smaller receiver prior sets.

  24.

    This condition implies that we ignore the cases in which uninformative signals are optimal for some parameter values. While the result requires additional qualifications if we allow uninformative signals to be optimal, its qualitative structure remains identical.

  25.

    This effect can be anticipated by inspecting the expression for the sender’s constraint in (1).

  26.

    Notice that several important extensions of the Bayesian persuasion framework emphasize similar binary environments, see, e.g., Bergemann et al. (2018), Felgenhauer and Loerke (2017), Orlov et al. (2020), and Perez-Richet (2014).

  27.

    Case (C), where \(\Delta _{0}\le 0\) and \(\Delta _{1}\ge 0\), is not included because we assumed \(\Delta _{0}>\Delta _{1}\).


References

  1. Alonso, R., & Camara, O. (2016). Persuading voters. American Economic Review, 106, 3590–3605.


  2. Alonso, R., & Camara, O. (2018). On the value of persuasion by experts. Journal of Economic Theory, 174, 103–123.


  3. Beauchene, D., Li, J., & Li, M. (2019). Ambiguous persuasion. Journal of Economic Theory, 179, 312–365.


  4. Bergemann, D., Bonatti, A., & Smolin, A. (2018). The design and price of information. American Economic Review, 108, 1–48.


  5. Blackwell, D. (1951). Comparison of experiments. In Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability (pp. 93–102). Berkeley: University of California Press.


  6. Eichberger, J., Grant, S., Kelsey, D., & Koshevoy, G. A. (2011). The \(\alpha \)-MEU model: A comment. Journal of Economic Theory, 146(4), 1684–1698.


  7. Ellsberg, D. (1961). Risk, ambiguity, and the savage axioms. The Quarterly Journal of Economics, 75, 643–669.


  8. Felgenhauer, M., & Loerke, P. (2017). Bayesian persuasion with private experimentation. International Economic Review, 58, 829–856.


  9. Gentzkow, M., & Kamenica, E. (2016). Competition in persuasion. The Review of Economic Studies, 84, 300–322.


  10. Gentzkow, M., & Kamenica, E. (2017). Bayesian persuasion with multiple senders and rich signal spaces. Games and Economic Behavior, 104, 411–429.


  11. Ghirardato, P., Maccheroni, F., & Marinacci, M. (2004). Differentiating ambiguity and ambiguity attitude. Journal of Economic Theory, 118, 133–173.


  12. Ghirardato, P., & Marinacci, M. (2002). Ambiguity made precise: A comparative foundation. Journal of Economic Theory, 102, 251–289.


  13. Gilboa, I., & Schmeidler, D. (1989). Maxmin expected utility with non-unique prior. Journal of Mathematical Economics, 18(2), 141–153.


  14. Hedlund, J. (2017). Bayesian persuasion by a privately informed sender. Journal of Economic Theory, 167, 229–268.


  15. Hu, J., & Weng, X. (2018). Robust persuasion of a privately informed receiver. Mimeo.

  16. Hurwicz, L. (1951). Optimality criteria for decision making under ignorance. Discussion Paper 370, Cowles Commission.

  17. Jaffray, J. (1989). Linear utility theory for belief functions. Operations Research Letters, 8, 107–112.


  18. Kamenica, E., & Gentzkow, M. (2011). Bayesian persuasion. American Economic Review, 101, 2590–2615.


  19. Kellner, C., & Le Quement, M. (2017). Modes of ambiguous communication. Games and Economic Behavior, 104, 271–292.


  20. Kellner, C., & Le Quement, M. (2018). Endogenous ambiguity in cheap talk. Journal of Economic Theory, 173, 1–17.


  21. Knight, F. (1921). Risk, uncertainty, and profit. Boston: Houghton Mifflin.


  22. Kolotilin, A. (2018). Optimal information disclosure: A linear programming approach. Theoretical Economics, 13, 607–635.


  23. Kolotilin, A., Mylovanov, T., Zapechelnyuk, A., & Li, M. (2017). Persuasion of a privately informed receiver. Econometrica, 85, 1949–1964.


  24. Kosterina, S. (2018). Persuasion with unknown beliefs. Mimeo.

  25. Laclau, M., & Renou, L. (2016). Public persuasion. Mimeo.

  26. Li, F., & Norman, P. (2018). Sequential persuasion. Mimeo.

  27. Orlov, D., Skrzypacz, A., & Zryumov, P. (2020). Persuading the principal to wait. Journal of Political Economy, 128, 2542–2578.

  28. Perez-Richet, E. (2014). Interim Bayesian persuasion: First steps. American Economic Review, 104(5), 469–474.


  29. Pires, C. P. (2002). A rule for updating ambiguous beliefs. Theory and Decision, 53, 137–152.


  30. Riedel, F., & Sass, L. (2014). Ellsberg games. Theory and Decision, 76, 469–509.




Acknowledgements

We thank Ani Guerdjikova, Jürgen Eichberger, and two anonymous referees for very helpful feedback.

Author information



Corresponding author

Correspondence to Jonas Hedlund.



Appendix A. Explicit form of optimal signal

Here we provide explicit formulae for the sender’s optimal signal for cases (A) and (B) in Proposition 1. These are given by the solutions to Eqs. (*) and (**).

$$\begin{aligned} \pi _{1L} =&\frac{(m-\varepsilon )((1-\alpha )\Delta _{0}+\alpha \Delta _{1}-\Delta _{L})}{(1-(m-\varepsilon ))2\Delta _{L}}+\frac{(m+\varepsilon )(\alpha \Delta _{0}+(1-\alpha )\Delta _{1}-\Delta _{L})}{(1-(m+\varepsilon ))2\Delta _{L}} \\&+\sqrt{ {\scriptstyle \left[ \frac{(m-\varepsilon )((1-\alpha )\Delta _{0}+\alpha \Delta _{1}-\Delta _{L})}{(1-(m-\varepsilon ))2\Delta _{L}}+\frac{(m+\varepsilon )(\alpha \Delta _{0}+(1-\alpha )\Delta _{1}-\Delta _{L})}{(1-(m+\varepsilon ))2\Delta _{L}}\right] ^{2}+\frac{(m-\varepsilon )(m+\varepsilon )\Delta _{H} }{(1-(m-\varepsilon ))(1-(m+\varepsilon ))\Delta _{L}}}} \\ \end{aligned}$$
$$\begin{aligned} \pi _{1L} =&\frac{(m-\varepsilon )((1-\alpha )\Delta _{H}-\alpha \Delta _{L})}{(1-(m-\varepsilon ))2\Delta _{L}}+\frac{(m+\varepsilon )(\alpha \Delta _{H}-(1-\alpha )\Delta _{L})}{(1-(m+\varepsilon ))2\Delta _{L}}\\&+\sqrt{{\scriptstyle \left[ \frac{(m-\varepsilon )((1-\alpha )\Delta _{H}-\alpha \Delta _{L})}{(1-(m-\varepsilon ))2\Delta _{L}}+\frac{(m+\varepsilon )(\alpha \Delta _{H}-(1-\alpha )\Delta _{L})}{(1-(m+\varepsilon ))2\Delta _{L}}\right] ^{2}+\frac{(m-\varepsilon )(m+\varepsilon )\Delta _{H}}{(1-(m-\varepsilon ))(1-(m+\varepsilon ))\Delta _{L}}}}. \end{aligned}$$

The expression for \(\pi _{1L}\) for case (B) confirms the fact that the optimal signal is characterized by the ratio \(\Delta _{H}/\Delta _{L}\) in the case of comonotonic actions (as discussed in the main text). This, however, is not the case when actions are not comonotonic, as reflected in the expression for \(\pi _{1L}\) for case (A) above.
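Both expressions above have the form \(\pi _{1L}=b+\sqrt{b^{2}+c}\), i.e., the positive root of the quadratic \(x^{2}-2bx-c=0\) mentioned in footnote 16. A minimal sketch for case (B), with illustrative parameter values (they are not calibrated to any example in the paper):

```python
import math

# Case (B) formula above: pi_1L = b + sqrt(b**2 + c), the positive root of
# x**2 - 2*b*x - c = 0. Parameter values below are illustrative assumptions.

def pi_1L_case_B(m, eps, alpha, d_H, d_L):
    """Return (pi_1L, b, c) for the case-(B) expression, with d_H = Delta_H,
    d_L = Delta_L, and prior set [m - eps, m + eps]."""
    lo, hi = m - eps, m + eps
    b = (lo * ((1 - alpha) * d_H - alpha * d_L) / ((1 - lo) * 2 * d_L)
         + hi * (alpha * d_H - (1 - alpha) * d_L) / ((1 - hi) * 2 * d_L))
    c = lo * hi * d_H / ((1 - lo) * (1 - hi) * d_L)
    return b + math.sqrt(b**2 + c), b, c

x, b, c = pi_1L_case_B(m=0.3, eps=0.1, alpha=0.6, d_H=1.0, d_L=2.0)
assert abs(x**2 - 2 * b * x - c) < 1e-12   # solves the quadratic
assert x > 0                                # the root relevant for a probability
```

Note that \(c>0\) whenever \(0<m-\varepsilon \) and \(m+\varepsilon <1\), so the square root is well defined and the positive root is the only candidate for a probability.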

Appendix B. Proofs

Proof of Proposition 1

Suppose \(G(m-\varepsilon ,m+\varepsilon )<0\). The formulas in the statement follow from the arguments in the main text. Existence of an optimal signal follows from the intermediate value theorem, because \(G(\beta _{1}(\pi ,m-\varepsilon ),\beta _{1}(\pi ,m+\varepsilon ))\) is continuous in \(\pi _{1L}\), and because \(G(\beta _{1}(\pi ,m-\varepsilon ),\beta _{1}(\pi ,m+\varepsilon ))=G(1,1)=\Delta _{H}>0\) for \(\pi _{1H}=1\) and \(\pi _{1L}=0\), and \(G(\beta _{1}(\pi ,m-\varepsilon ),\beta _{1}(\pi ,m+\varepsilon ))=G(m-\varepsilon ,m+\varepsilon )<0\) for \(\pi _{1H}=1\) and \(\pi _{1L}=1\). Uniqueness follows because \(G(\beta _{1}(\pi ,m-\varepsilon ),\beta _{1}(\pi ,m+\varepsilon ))\) is strictly decreasing in \(\pi _{1L}\). \(\square \)
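The intermediate value argument above can be illustrated numerically: with \(\pi _{1H}=1\), the constraint is continuous and strictly decreasing in \(\pi _{1L}\), so a sign change pins down a unique root, which bisection recovers. The sketch below assumes the case-(B) form \(G(\mu _{1},\mu _{2})=(1-\alpha )\mu _{1}(\Delta _{0}+\Delta _{1})+\alpha \mu _{2}(\Delta _{0}+\Delta _{1})-\Delta _{L}\) (see the piecewise formula in the proof of Lemma 2); all parameter values are illustrative.

```python
# Existence/uniqueness illustration for Proposition 1: G, evaluated along the
# posteriors, is continuous and strictly decreasing in pi_1L, so bisection
# between a positive and a negative evaluation finds the unique root.
# Case-(B) form of G and all parameter values are illustrative assumptions.

def beta(pi, mu):
    """Posterior on omega_H after s = 1 when pi_1H = 1."""
    return mu / (mu + (1 - mu) * pi)

def G_of_pi(pi, m=0.3, eps=0.1, alpha=0.5, D0=2.0, D1=-1.0, D_L=0.8):
    mu1, mu2 = beta(pi, m - eps), beta(pi, m + eps)
    return (1 - alpha) * mu1 * (D0 + D1) + alpha * mu2 * (D0 + D1) - D_L

# Sign change: G > 0 near pi_1L = 0 (posteriors near 1) and
# G = G(m - eps, m + eps) < 0 at pi_1L = 1 (posteriors equal priors).
assert G_of_pi(1e-9) > 0 and G_of_pi(1.0) < 0

lo_pi, hi_pi = 0.0, 1.0
for _ in range(60):                       # bisection on the decreasing G
    mid = (lo_pi + hi_pi) / 2
    lo_pi, hi_pi = (mid, hi_pi) if G_of_pi(mid) > 0 else (lo_pi, mid)
root = (lo_pi + hi_pi) / 2
assert 0 < root < 1 and abs(G_of_pi(root)) < 1e-9
```

Monotonicity of \(G\) along \(\pi _{1L}\) here relies on \(\Delta _{0}+\Delta _{1}>0\) (as assumed in the proofs), since \(\beta \) is decreasing in \(\pi _{1L}\).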

Proof of Proposition 2

Continuity of \(\tilde{\pi }_{1L}(\varepsilon ,\alpha )\) in \(\alpha \) follows from the maximum theorem, as \(\tilde{\pi }_{1L}(\varepsilon ,\alpha )=\max \{\pi _{1L}\in [ 0,1]:G(\beta (\pi _{1L},m-\varepsilon ),\beta (\pi _{1L},m+\varepsilon ))\ge 0\}\) and G is continuous.

Suppose \(\tilde{\pi }_{1L}(\varepsilon ,\alpha )<1\). Since \(\beta (\pi _{1L},m-\varepsilon )\) and \(\beta (\pi _{1L},m+\varepsilon )\) are strictly decreasing in \(\pi _{1L}\), and since G is strictly increasing in both its arguments, \(\partial G(\beta (\pi _{1L},m-\varepsilon ),\beta (\pi _{1L},m+\varepsilon ))/\partial \pi _{1L}|_{\pi _{1L}=\tilde{\pi } _{1L}(\varepsilon ,\alpha )}<0\). The implicit function theorem then yields that \(\tilde{\pi }_{1L}(\varepsilon ,\alpha )\) is differentiable in \(\alpha \) with

$$\begin{aligned} \frac{\partial \tilde{\pi }_{1L}(\varepsilon ,\alpha )}{\partial \alpha } =\left. -\frac{\partial G(\beta (\pi _{1L},m-\varepsilon ),\beta (\pi _{1L},m+\varepsilon ))/\partial \alpha }{\partial G(\beta (\pi _{1L},m-\varepsilon ),\beta (\pi _{1L},m+\varepsilon ))/\partial \pi _{1L}} \right| _{\pi _{1L}=\tilde{\pi }_{1L}(\varepsilon ,\alpha )}\text {.} \end{aligned}$$

As the denominator is negative, \(\partial \tilde{\pi }_{1L}(\varepsilon ,\alpha )/\partial \alpha \) shares sign with \(\partial G(\beta (\pi _{1L},m-\varepsilon ),\beta (\pi _{1L},m+\varepsilon ))/\partial \alpha \).

In case (B), we have \(\Delta _{0}\ge 0\ge \Delta _{1}\) and, therefore, \( \Delta _{0}-\Delta _{1}>0\). In case (C), we have \(\Delta _{0}\le 0\le \Delta _{1}\) and, therefore, \(\Delta _{0}-\Delta _{1}<0\). The explicit formula for \(\partial G(\mu _{1},\mu _{2})/\partial \alpha \) in the main text then implies that \(\partial G(\beta (\pi _{1L},m-\varepsilon ),\beta (\pi _{1L},m+\varepsilon ))/\partial \alpha \) shares sign with \(\Delta _{0}-\Delta _{1}\). \(\square \)

Proof of Lemma 2

Suppose \(I=\{\varepsilon \in [ 0,\bar{\varepsilon }):\tilde{\pi } _{1L}(\varepsilon ,\alpha )<1\}\not =\varnothing \). Continuity of \(\tilde{\pi } _{1L}(\varepsilon ,\alpha )\) follows from the maximum theorem, as \(\tilde{\pi }_{1L}(\varepsilon ,\alpha )=\max \{\pi _{1L}\in [ 0,1]:G(\beta (\pi _{1L},m-\varepsilon ),\beta (\pi _{1L},m+\varepsilon ))\ge 0\}\) and G is continuous. Differentiability of \(\tilde{\pi }_{1L}(\varepsilon ,\alpha )\) in \(\varepsilon \) on I follows by an argument similar to the one for the differentiability of \(\tilde{\pi }_{1L}(\varepsilon ,\alpha )\) in \(\alpha \) in the proof of Proposition 2. The remainder of the proof consists of three steps.

Step 1. The set I is a right-open interval.


The result is obvious if \(I=[0,\bar{\varepsilon })\), so suppose \(\tilde{\pi }_{1L}(\tilde{\varepsilon },\alpha )=1\) for some \(\tilde{ \varepsilon }\in [ 0,\bar{\varepsilon })\). Notice that

$$\begin{aligned}&G(m-\varepsilon ,m+\varepsilon )\\&\quad =\left\{ \begin{array}{l} (m-\varepsilon )\left( \alpha \Delta _{1}+(1-\alpha )\Delta _{0}\right) +(m+\varepsilon )\left( \alpha \Delta _{0}+(1-\alpha )\Delta _{1}\right) -\Delta _{L} \\ (1-\alpha )(m-\varepsilon )(\Delta _{0}+\Delta _{1})+\alpha (m+\varepsilon )(\Delta _{0}+\Delta _{1})-\Delta _{L} \\ \alpha (m-\varepsilon )(\Delta _{0}+\Delta _{1})+(1-\alpha )(m+\varepsilon )(\Delta _{0}+\Delta _{1})-\Delta _{L} \end{array} \begin{array}{c} \text {(A)} \\ \,\text {(B)} \\ \text {(C)} \end{array} \right. \end{aligned}$$

is a linear function of \(\varepsilon \). Therefore, either \(G(m-\varepsilon ,m+\varepsilon )\ge 0\) (and \(\tilde{\pi }_{1L}(\varepsilon ,\alpha )=1\)) on some segment [0, x] and \(G(m-\varepsilon ,m+\varepsilon )<0\) (and \(\tilde{ \pi }_{1L}(\varepsilon ,\alpha )<1\)) on \((x,\bar{\varepsilon })\), or \( G(m-\varepsilon ,m+\varepsilon )<0\) on some segment [0, x) and \( G(m-\varepsilon ,m+\varepsilon )\ge 0\) on \([x,\bar{\varepsilon })\). \(\square \)

Step 2. If \(\partial \tilde{\pi }_{1L}(\tilde{\varepsilon },\alpha )/\partial \varepsilon =0\) for some \(\tilde{\varepsilon }\in I\), then \( \partial ^{2}\tilde{\pi }_{1L}(\tilde{\varepsilon },\alpha )/\partial \varepsilon ^{2}<0\).


Suppose \(\partial \tilde{\pi }_{1L}(\tilde{\varepsilon } ,\alpha )/\partial \varepsilon =0\) and \(\tilde{\pi }_{1L}(\tilde{ \varepsilon },\alpha )<1\). The implicit function theorem implies that \(\tilde{ \pi }_{1L}\) is differentiable at \(\tilde{\varepsilon }\) with derivative

$$\begin{aligned} \frac{\partial \tilde{\pi }_{1L}(\tilde{\varepsilon },\alpha )}{\partial \varepsilon }=\left. -\frac{\partial G(\beta (\pi _{1L},m-\tilde{ \varepsilon }),\beta (\pi _{1L},m+\tilde{\varepsilon }))/\partial \varepsilon }{\partial G(\beta (\pi _{1L},m-\tilde{\varepsilon }),\beta (\pi _{1L},m+\tilde{\varepsilon }))/\partial \pi _{1L}}\right| _{\pi _{1L}= \tilde{\pi }_{1L}(\tilde{\varepsilon },\alpha )}\text {.} \end{aligned}$$

Since the denominator is strictly negative, \(\partial \tilde{\pi }_{1L}( \tilde{\varepsilon },\alpha )/\partial \varepsilon \) shares sign with the numerator. It follows that \(\partial G(\beta (\pi _{1L},m-\tilde{ \varepsilon }),\beta (\pi _{1L},m+\tilde{\varepsilon }))/\partial \varepsilon =0\).

With a slight abuse of notation, abbreviate \(\partial G(\beta (\pi _{1L},m-\varepsilon ),\beta (\pi _{1L},m+\varepsilon ))/\partial x\) as \( \partial G/\partial x\). The second (implicit) derivative at \(\varepsilon = \tilde{\varepsilon }\) is given by

$$\begin{aligned} \frac{\partial ^{2}\tilde{\pi }_{1L}(\tilde{\varepsilon },\alpha )}{\partial \varepsilon ^{2}}= & {} \left. -\frac{\frac{\partial ^{2}G}{\partial \varepsilon ^{2}}\left( \frac{\partial G}{\partial \pi _{1L}}\right) ^{2}-2 \frac{\partial ^{2}G}{\partial \varepsilon \partial \pi _{1L}}\frac{\partial G}{\partial \pi _{1L}}\frac{\partial G}{\partial \varepsilon }+\frac{ \partial ^{2}G}{\partial \pi _{1L}^{2}}\left( \frac{\partial G}{\partial \varepsilon }\right) ^{2}}{\left( \frac{\partial G}{\partial \pi _{1L}} \right) ^{3}}\right| _{\pi _{1L}=\tilde{\pi }_{1L}(\tilde{\varepsilon } ,\alpha ),\varepsilon =\tilde{\varepsilon }} \\= & {} \left. -\frac{\partial ^{2}G(\beta (\pi _{1L},m-\tilde{\varepsilon } ),\beta (\pi _{1L},m+\tilde{\varepsilon }))/\partial \varepsilon ^{2}}{ \partial G(\beta (\pi _{1L},m-\tilde{\varepsilon }),\beta (\pi _{1L},m+\tilde{ \varepsilon }))/\partial \pi _{1L}}\right| _{\pi _{1L}=\tilde{\pi }_{1L}( \tilde{\varepsilon },\alpha )}, \end{aligned}$$

which shares sign with \(\partial ^{2}G(\beta (\pi _{1L},m-\tilde{\varepsilon } ),\beta (\pi _{1L},m+\tilde{\varepsilon }))/\partial \varepsilon ^{2}\) . Straightforward differentiation gives

$$\begin{aligned} \frac{\partial ^{2}G}{\partial \varepsilon ^{2}}=\left\{ \begin{array}{c} {-2\pi }_{1L}{(1-\pi }_{1L})\left( \frac{\alpha \Delta _{1}+(1-\alpha )\Delta _{0}}{\left( m-\varepsilon +\pi _{1L}(1-m+\varepsilon )\right) ^{3}}+\frac{\alpha \Delta _{0}+(1-\alpha )\Delta _{1}}{\left( m+\varepsilon +\pi _{1L}(1-m-\varepsilon )\right) ^{3}} \right) \text { (A)} \\ {-2\pi }_{1L}{(1-\pi }_{1L})\left( \frac{(1-\alpha )(\Delta _{0}+\Delta _{1})}{\left( m-\varepsilon +(1-m+\varepsilon )\pi _{1L}\right) ^{3}}+\frac{\alpha (\Delta _{0}+\Delta _{1})}{\left( m+\varepsilon +(1-m-\varepsilon )\pi _{1L}\right) ^{3}}\right) \text { (B)} \\ {-2\pi }_{1L}{(1-\pi }_{1L})\left( \frac{\alpha (\Delta _{0}+\Delta _{1})}{\left( m-\varepsilon +(1-m+\varepsilon )\pi _{1L}\right) ^{3}}+\frac{(1-\alpha )(\Delta _{0}+\Delta _{1})}{\left( m+\varepsilon +(1-m-\varepsilon )\pi _{1L}\right) ^{3}}\right) \text { (C)} \end{array} \right. \text {.} \end{aligned}$$

Since \(\Delta _{0}\ge 0\) and \(\Delta _{1}\ge 0\) in case (A), and since \( \Delta _{0}+\Delta _{1}>0\), we have that \(\partial ^{2}G(\beta (\pi _{1L},m-\varepsilon ),\beta (\pi _{1L},m+\varepsilon ))/\partial \varepsilon ^{2}<0\), which proves the claim. \(\square \)

Step 3. Suppose \(\varepsilon ,\varepsilon ^{\prime }\in I\) with \( \varepsilon <\varepsilon ^{\prime }\). If \(\partial \tilde{\pi } _{1L}(\varepsilon ,\alpha )/\partial \varepsilon \le 0\), then \(\partial \tilde{\pi }_{1L}(\varepsilon ^{\prime },\alpha )/\partial \varepsilon <0\). If \(\partial \tilde{\pi }_{1L}(\varepsilon ^{\prime },\alpha )/\partial \varepsilon \ge 0\), then \(\partial \tilde{\pi } _{1L}(\varepsilon ,\alpha )/\partial \varepsilon >0\).


Suppose \(\partial \tilde{\pi }_{1L}(\varepsilon ,\alpha )/\partial \varepsilon \le 0\) and, to contradiction, that \(\partial \tilde{\pi }_{1L}(\varepsilon ^{\prime },\alpha )/\partial \varepsilon >0\) for some \(\varepsilon ^{\prime }>\varepsilon \). Let \(\tilde{\varepsilon } =\max \{x\in [ \varepsilon ,\varepsilon ^{\prime }]:\partial \tilde{\pi }_{1L}(x,\alpha )/\partial \varepsilon =0\}\), where \(\tilde{\varepsilon }\) is well defined by the hypothesis, and because \(\partial \tilde{\pi }_{1L}(x,\alpha )/\partial \varepsilon \) is differentiable on I and, therefore, continuous in its first argument on I. Step 2 implies \(\partial ^{2}\tilde{\pi }_{1L}(\tilde{ \varepsilon },\alpha )/\partial \varepsilon ^{2}<0\), and, combining with the definition of \(\tilde{\varepsilon }\), we obtain \(\partial \tilde{\pi } _{1L}(x,\alpha )/\partial \varepsilon <0\) on \((\tilde{\varepsilon },\varepsilon ^{\prime }]\), a contradiction. Therefore, \(\partial \tilde{\pi }_{1L}(\varepsilon ^{\prime },\alpha )/\partial \varepsilon \le 0\).

Suppose \(\partial \tilde{\pi }_{1L}(\varepsilon ^{\prime },\alpha )/\partial \varepsilon =0\). Step 2 implies \(\partial ^{2}\tilde{\pi } _{1L}(\varepsilon ^{\prime },\alpha )/\partial \varepsilon ^{2}<0\), and there is then an interval \((z,\varepsilon ^{\prime })\) on which \( \partial \tilde{\pi }_{1L}(x,\alpha )/\partial \varepsilon >0\), which produces the same contradiction as in the previous paragraph. Hence, \(\partial \tilde{\pi } _{1L}(\varepsilon ^{\prime },\alpha )/\partial \varepsilon <0\). \(\square \)

The statement that we must have either case (i), (ii), or (iii), follows by combining Step 1 and Step 3. More precisely, suppose \(\partial \tilde{\pi } _{1L}(\varepsilon ^{*},\alpha )/\partial \varepsilon =0\) for some \(\varepsilon ^{*}\in I\). If \(\varepsilon ^{*}\) is an element of the interior of I, then Step 3 implies that we are in case (iii), and otherwise \(\varepsilon ^{*}=\min I\) and we are in case (ii). If there is no such \(\varepsilon ^{*}\), then the continuity of \(\partial \tilde{\pi }_{1L}(\varepsilon ,\alpha )/\partial \varepsilon \) in \(\varepsilon \) implies that we are either in case (i) or (ii).

Proof of Proposition 3

Suppose \(I=\{\varepsilon \in [ 0,\bar{\varepsilon }):\tilde{\pi } _{1L}(\varepsilon ,\alpha )<1\}\not =\varnothing \). We prove each direction of the statement separately.

Step 1. If either (a) \(\Delta _{0}=\Delta _{1}\), (b) \(\alpha =1/2\), (c) \(\Delta _{0}>\Delta _{1}\) and \(\alpha <1/2\), or (d) \(\Delta _{0}<\Delta _{1}\) and \(\alpha >1/2\), then \(\partial \tilde{\pi }_{1L}(\varepsilon ,\alpha )/\partial \varepsilon \le 0\) for all \(\varepsilon \in I\).


Suppose, to contradiction, \(\partial \tilde{\pi } _{1L}(\varepsilon ,\alpha )/\partial \varepsilon >0\) for some \(\varepsilon \in I\). Lemma 2 then implies \(\tilde{\pi }_{1L}(0,\alpha )<1\) and \(\partial \tilde{\pi }_{1L}(0,\alpha )/\partial \varepsilon >0\). The implicit function theorem implies that \(\partial \tilde{\pi }_{1L}(0,\alpha )/\partial \varepsilon \) shares sign with

$$\begin{aligned}&\left. \frac{\partial G(\beta (\pi ,m-\varepsilon ),\beta (\pi ,m+\varepsilon ))}{\partial \varepsilon }\right| _{\varepsilon =0}\\&\quad =\left\{ \begin{array}{c} \frac{\tilde{\pi }_{1L}(0,\alpha )(2\alpha -1)(\Delta _{0}-\Delta _{1})}{ \left( m+\tilde{\pi }_{1L}(0,\alpha )(1-m)\right) ^{2}}\text { if}\ \Delta _{0}\ge 0\text { and }\Delta _{1}\ge 0\text { (A)} \\ \frac{\tilde{\pi }_{1L}(0,\alpha )(2\alpha -1)(\Delta _{0}+\Delta _{1})}{ \left( m+\tilde{\pi }_{1L}(0,\alpha )(1-m)\right) ^{2}}\text { if }\Delta _{0}\ge 0\text { and }\Delta _{1}\le 0\text { (B)} \\ \frac{\tilde{\pi }_{1L}(0,\alpha )(1-2\alpha )(\Delta _{0}+\Delta _{1})}{ \left( m+\tilde{\pi }_{1L}(0,\alpha )(1-m)\right) ^{2}}\text { if }\Delta _{0}\le 0\text { and }\Delta _{1}\ge 0\text { (C)} \end{array} \right. \end{aligned}$$

Therefore, we must have either \(\Delta _{0}>\Delta _{1}\) and \(\alpha >1/2\) or \(\Delta _{0}<\Delta _{1}\) and \(\alpha <1/2\), which is the negation of the hypothesis in the claim. \(\square \)

Step 2. If \(\partial \tilde{\pi }_{1L}(\varepsilon ,\alpha )/\partial \varepsilon \le 0\) for all \(\varepsilon \in I\), then either (a) \( \Delta _{0}=\Delta _{1}\), (b) \(\alpha =1/2\), (c) \(\Delta _{0}>\Delta _{1}\) and \(\alpha <1/2\), or (d) \(\Delta _{0}<\Delta _{1}\) and \(\alpha >1/2\).


Suppose \(\partial \tilde{\pi }_{1L}(\varepsilon ,\alpha )/\partial \varepsilon \le 0\) for all \(\varepsilon \in I\). If \(\tilde{\pi } _{1L}(0,\alpha )<1\), then \(\partial \tilde{\pi }_{1L}(0,\alpha )/\partial \varepsilon \le 0\), and the formula derived in Step 1 immediately implies the result. Suppose, therefore, that \(\tilde{\pi }_{1L}(0,\alpha )=1\), which implies \(G(m,m)\ge 0\). We have that

$$\begin{aligned} \frac{\partial G(m-\varepsilon ,m+\varepsilon )}{\partial \varepsilon } =\left\{ \begin{array}{l} (2\alpha -1)(\Delta _{0}-\Delta _{1}) \\ (2\alpha -1)(\Delta _{0}+\Delta _{1}) \\ (1-2\alpha )(\Delta _{0}+\Delta _{1}) \end{array} \begin{array}{c} \text {if}\ \Delta _{0}\ge 0\text { and }\Delta _{1}\ge 0 \\ \text {if }\Delta _{0}\ge 0\text { and }\Delta _{1}\le 0 \\ \text {if }\Delta _{0}\le 0\text { and }\Delta _{1}\ge 0 \end{array} \begin{array}{c} \text {(A)} \\ \text {(B)} \\ \text {(C)} \end{array} .\right. \end{aligned}$$

If \(\Delta _{0}>\Delta _{1}\) and \(\alpha >1/2\) we are either in case (A) or (B) and \(\partial G(m-\varepsilon ,m+\varepsilon )/\partial \varepsilon >0\). If \(\Delta _{0}<\Delta _{1}\) and \(\alpha <1/2\), we are either in case (A) or (C) and \(\partial G(m-\varepsilon ,m+\varepsilon )/\partial \varepsilon >0\). But then \(G(m-\varepsilon ,m+\varepsilon )\ge 0\) for all \(\varepsilon \in [ 0,\bar{\varepsilon })\), which contradicts the assumption that \(\tilde{ \pi }_{1L}(\varepsilon ,\alpha )<1\) for some \(\varepsilon \in [ 0,\bar{ \varepsilon })\). Therefore, one of conditions (a)–(d) in the statement of the claim must hold. \(\square \)

Proof of Proposition 4

Let \(G(\mu _{1},\mu _{2},\alpha )\) be the sender’s constraint function G such that \(\alpha \) is explicitly included in the list of arguments. We first prove the following claim.

Claim. Suppose \(\tilde{\pi }_{1L}(\varepsilon ,\alpha )<1\) and \( \partial \tilde{\pi }_{1L}(\varepsilon ,\alpha )/\partial \varepsilon =0\). If \(\Delta _{0}>\Delta _{1}\), then \(\partial ^{2}\tilde{\pi }_{1L}(\varepsilon ,\alpha )/\partial \varepsilon \partial \alpha >0\). If \(\Delta _{0}<\Delta _{1}\), then \(\partial ^{2}\tilde{\pi }_{1L}(\varepsilon ,\alpha )/\partial \varepsilon \partial \alpha <0\).


Suppose \(\tilde{\pi }_{1L}(\varepsilon ,\alpha )<1\), \( \partial \tilde{\pi }_{1L}(\varepsilon ,\alpha )/\partial \varepsilon =0\) and \(\Delta _{0}>\Delta _{1}\). With a slight abuse of notation, abbreviate \( \partial G(\beta (\pi _{1L},m-\varepsilon ),\beta (\pi _{1L},m+\varepsilon ),\alpha )/\partial x\) as \(\partial G/\partial x\). Notice that

$$\begin{aligned} \frac{\partial \tilde{\pi }_{1L}(\varepsilon ,\alpha )}{\partial \varepsilon }&=\left. -\frac{\frac{\partial G(\beta (\pi _{1L},m-\varepsilon ),\beta (\pi _{1L},m+\varepsilon ),\alpha )}{\partial \varepsilon }}{\frac{\partial G(\beta (\pi _{1L},m-\varepsilon ),\beta (\pi _{1L},m+\varepsilon ),\alpha ) }{\partial \pi _{1L}}}\right| _{\pi _{1L}=\tilde{\pi }_{1L}(\varepsilon ,\alpha )}\implies \\ \frac{\partial ^{2}\tilde{\pi }_{1L}(\varepsilon ,\alpha )}{\partial \varepsilon \partial \alpha }&=\left. \frac{\left( \frac{\partial ^{2}G}{\partial \pi _{1L}^{2}}\frac{\partial \tilde{\pi }_{1L}}{\partial \alpha }+\frac{ \partial ^{2}G}{\partial \pi _{1L}\partial \alpha }\right) \frac{\partial G}{ \partial \varepsilon }-\left( \frac{\partial ^{2}G}{\partial \varepsilon \partial \pi _{1L}}\frac{\partial \tilde{\pi }_{1L}}{\partial \alpha }+\frac{\partial ^{2}G}{\partial \varepsilon \partial \alpha }\right) \frac{\partial G}{\partial \pi _{1L}}}{\left( \frac{\partial G}{\partial \pi _{1L}}\right) ^{2}}\right| _{\pi _{1L}=\tilde{\pi }_{1L}(\varepsilon ,\alpha )} \\&=\left. -\frac{\frac{\partial ^{2}G}{\partial \varepsilon \partial \pi _{1L}}\frac{\partial \tilde{\pi }_{1L}}{\partial \alpha }+\frac{\partial ^{2}G }{\partial \varepsilon \partial \alpha }}{\frac{\partial G}{\partial \pi _{1L}}}\right| _{\pi _{1L}=\tilde{\pi }_{1L}(\varepsilon ,\alpha )}, \end{aligned}$$

where the second line uses the fact that \(\partial \tilde{\pi } _{1L}(\varepsilon ,\alpha )/\partial \varepsilon =0\Leftrightarrow \partial G/\partial \varepsilon =0\). Since \(\partial G/\partial \pi _{1L}<0\), the sign of \(\partial ^{2}\tilde{\pi }_{1L}(\varepsilon ,\alpha )/\partial \varepsilon \partial \alpha \) coincides with that of the numerator. Proposition 2 implies that \(\partial \tilde{\pi }_{1L}/\partial \alpha >0\). It remains to compute the other terms.

We compute

$$\begin{aligned}&\frac{\partial G}{\partial \varepsilon }\\&\quad ={\left\{ \begin{array}{ll} \pi _{1L}\left( -\frac{\left( \alpha \Delta _{1}+(1-\alpha )\Delta _{0}\right) }{\left( m-\varepsilon +\pi _{1L}(1-m+\varepsilon )\right) ^{2}}+ \frac{\left( \alpha \Delta _{0}+(1-\alpha )\Delta _{1}\right) }{\left( m+\varepsilon +\pi _{1L}(1-m-\varepsilon )\right) ^{2}}\right) \text { if}\ \Delta _{0}\ge 0\text { and }\Delta _{1}\ge 0\text { (A)} \\ \pi _{1L}\left( \Delta _{0}+\Delta _{1}\right) \left( -\frac{1-\alpha }{\left( m-\varepsilon +\pi _{1L}(1-m+\varepsilon )\right) ^{2}}+\frac{\alpha }{\left( m+\varepsilon +\pi _{1L}(1-m-\varepsilon )\right) ^{2}}\right) \text { if }\Delta _{0}\ge 0 \text { and }\Delta _{1}\le 0\text { (B)}. \end{array}\right. } \end{aligned}$$


$$\begin{aligned}&\frac{\partial G}{\partial \varepsilon }=0\nonumber \\&\quad \Leftrightarrow \,{\left\{ \begin{array}{ll} \frac{\left( \alpha \Delta _{1}+(1-\alpha )\Delta _{0}\right) }{\left( m-\varepsilon +\pi _{1L}(1-m+\varepsilon )\right) ^{2}}=\frac{\left( \alpha \Delta _{0}+(1-\alpha )\Delta _{1}\right) }{\left( m+\varepsilon +\pi _{1L}(1-m-\varepsilon )\right) ^{2}}\text { if}\ \Delta _{0}\ge 0\text { and } \Delta _{1}\ge 0\text { (A)} \\ \frac{1-\alpha }{\left( m-\varepsilon +\pi _{1L}(1-m+\varepsilon )\right) ^{2}}=\frac{\alpha }{\left( m+\varepsilon +\pi _{1L}(1-m-\varepsilon )\right) ^{2}}\text { if }\Delta _{0}\ge 0\text { and }\Delta _{1}\le 0\text { (B)}. \end{array}\right. } \end{aligned}$$
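In case (B), condition (B.1) can be solved for \(\varepsilon \) in closed form, since both denominators are linear in \(\varepsilon \). The sketch below assumes the posterior map \(\beta (\pi ,\mu )=\mu /(\mu +\pi (1-\mu ))\) suggested by the squared denominators above (an assumption for the example), with illustrative parameter values:

```python
import math

# Closed-form solution of the case (B) first-order condition
#   (1 - alpha) / D1**2 == alpha / D2**2,
# where D1 = m - eps + pi*(1 - m + eps) and D2 = m + eps + pi*(1 - m - eps).
# Taking square roots (both sides positive) makes the condition linear in eps.

def critical_eps(pi, m, alpha):
    c = m + pi * (1 - m)          # common value of D1 and D2 at eps = 0
    d = 1 - pi                    # |dD1/deps| = |dD2/deps|
    r = math.sqrt(alpha) - math.sqrt(1 - alpha)
    s = math.sqrt(alpha) + math.sqrt(1 - alpha)
    return c * r / (d * s)

pi, m, alpha = 0.3, 0.4, 0.8      # illustrative values
eps = critical_eps(pi, m, alpha)
D1 = m - eps + pi * (1 - m + eps)
D2 = m + eps + pi * (1 - m - eps)

# The candidate eps satisfies (B.1), and it is positive only when alpha > 1/2,
# consistent with the text.
assert abs((1 - alpha) / D1**2 - alpha / D2**2) < 1e-9
assert eps > 0
```

The positivity of the solution for \(\alpha >1/2\) reflects that the critical point exists only when the pessimistic weight dominates, since \(D_{1}<D_{2}\) forces \(1-\alpha <\alpha \).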

We compute

$$\begin{aligned}&\frac{\partial ^{2}G}{\partial \varepsilon \partial \pi _{1L}}\nonumber \\&\quad ={\left\{ \begin{array}{ll} 2\pi _{1L}\left( \frac{(1-m+\varepsilon )\left( \alpha \Delta _{1}+(1-\alpha )\Delta _{0}\right) }{\left( m-\varepsilon +\pi _{1L}(1-m+\varepsilon )\right) ^{3}}-\frac{(1-m-\varepsilon )\left( \alpha \Delta _{0}+(1-\alpha )\Delta _{1}\right) }{\left( m+\varepsilon +\pi _{1L}(1-m-\varepsilon )\right) ^{3}}\right) \text {(A)} \\ 2\pi _{1L}\left( \frac{(1-\alpha )(1-m+\varepsilon )\left( \Delta _{0}+\Delta _{1}\right) }{\left( m-\varepsilon +\pi _{1L}(1-m+\varepsilon )\right) ^{3}}- \frac{\alpha (1-m-\varepsilon )\left( \Delta _{0}+\Delta _{1}\right) }{ \left( m+\varepsilon +\pi _{1L}(1-m-\varepsilon )\right) ^{3}}\right) \text {(B)} \end{array}\right. }\nonumber \\&={\left\{ \begin{array}{ll} 2\pi _{1L}\frac{\left( \alpha \Delta _{1}+(1-\alpha )\Delta _{0}\right) }{ \left( m-\varepsilon +\pi _{1L}(1-m+\varepsilon )\right) ^{2}}\left( \frac{ (1-m+\varepsilon )}{\left( m-\varepsilon +\pi _{1L}(1-m+\varepsilon )\right) }-\frac{(1-m-\varepsilon )}{\left( m+\varepsilon +\pi _{1L}(1-m-\varepsilon )\right) }\right) \text {(A)} \\ 2\pi _{1L}\frac{(1-\alpha )\left( \Delta _{0}+\Delta _{1}\right) }{\left( m-\varepsilon +\pi _{1L}(1-m+\varepsilon )\right) ^{2}}\left( \frac{ (1-m+\varepsilon )}{\left( m-\varepsilon +\pi _{1L}(1-m+\varepsilon )\right) }-\frac{(1-m-\varepsilon )}{\left( m+\varepsilon +\pi _{1L}(1-m-\varepsilon )\right) }\right) \text {(B)}, \end{array}\right. } \end{aligned}$$

where the first line follows by applying the product rule and (B.1), and the second line follows by applying (B.1) again. Hence, \(\partial ^{2}G/\partial \varepsilon \partial \pi _{1L}>0\).


$$\begin{aligned}&\frac{\partial ^{2}G}{\partial \varepsilon \partial \alpha }\\&\quad ={\left\{ \begin{array}{ll} \pi _{1L}\left( \frac{\Delta _{0}-\Delta _{1}}{\left( m-\varepsilon +\pi _{1L}(1-m+\varepsilon )\right) ^{2}}+\frac{\Delta _{0}-\Delta _{1}}{\left( m+\varepsilon +\pi _{1L}(1-m-\varepsilon )\right) ^{2}}\right) \text { if}\ \Delta _{0}\ge 0\text { and }\Delta _{1}\ge 0\text { (A)} \\ \pi _{1L}\left( \frac{\Delta _{0}+\Delta _{1}}{\left( m-\varepsilon +\pi _{1L}(1-m+\varepsilon )\right) ^{2}}+\frac{\Delta _{0}+\Delta _{1}}{\left( m+\varepsilon +\pi _{1L}(1-m-\varepsilon )\right) ^{2}}\right) \text { if } \Delta _{0}\ge 0\text { and }\Delta _{1}\le 0\text { (B)}. \end{array}\right. } \end{aligned}$$

Since \(\Delta _{0}>\Delta _{1}\), we have \(\partial ^{2}G/\partial \varepsilon \partial \alpha >0\).

Summing up, we have \(\partial ^{2}G/\partial \varepsilon \partial \pi _{1L}>0 \), \(\partial \tilde{\pi }_{1L}/\partial \alpha >0\) and \(\partial ^{2}G/\partial \varepsilon \partial \alpha >0\) and, therefore,

$$\begin{aligned} \frac{\partial ^{2}G}{\partial \varepsilon \partial \pi _{1L}}\frac{\partial \tilde{\pi }_{1L}}{\partial \alpha }+\frac{\partial ^{2}G}{\partial \varepsilon \partial \alpha }\,>0\text {.} \end{aligned}$$

Hence, \(\partial ^{2}\tilde{\pi }_{1L}(\varepsilon ,\alpha )/\partial \varepsilon \partial \alpha >0\). The proof for the case \(\Delta _{1}>\Delta _{0}\) is virtually identical, and is omitted. \(\square \)

The remainder of the proof establishes statement 1 in Proposition 4 (the proof of statement 2 is analogous). Suppose, therefore, that \(\Delta _{0}>\Delta _{1}\) and \(\alpha >1/2\), and that \(\tilde{\pi }_{1L}(\varepsilon ,\alpha )<1\) for some \((\varepsilon ,\alpha )\in [ 0,\bar{\varepsilon })\times (1/2,1]\). Notice that we have \(G(m,m,1/2)<0\). To see this, suppose, toward a contradiction, that \(G(m,m,1/2)\ge 0\). But

$$\begin{aligned}&G(m-\varepsilon ,m+\varepsilon ,\alpha )\\&\quad ={\left\{ \begin{array}{ll} (m-\varepsilon )\left( \alpha \Delta _{1}+(1-\alpha )\Delta _{0}\right) +(m+\varepsilon )\left( \alpha \Delta _{0}+(1-\alpha )\Delta _{1}\right) -\Delta _{L} &{}\text {(A)}\\ (1-\alpha )(m-\varepsilon )(\Delta _{0}+\Delta _{1})+\alpha (m+\varepsilon )(\Delta _{0}+\Delta _{1})-\Delta _{L} &{} \text {(B)} \end{array}\right. } \end{aligned}$$

is increasing in \(\varepsilon \) and \(\alpha \) if \(\Delta _{0}>\Delta _{1}\) and \(\alpha >1/2\). Therefore, \(G(m-\varepsilon ,m+\varepsilon ,\alpha )\ge 0\) for all admissible values of \((\varepsilon ,\alpha )\), which contradicts \( \tilde{\pi }_{1L}(\varepsilon ,\alpha )<1\) for some \((\varepsilon ,\alpha )\).
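The monotonicity used in this contradiction argument can be checked numerically for case (A); the parameter values below are illustrative assumptions satisfying \(\Delta _{0}>\Delta _{1}\) and \(\alpha >1/2\):

```python
# Illustrative check that the case (A) expression for G(m - eps, m + eps, alpha)
# is increasing in both eps and alpha when Delta_0 > Delta_1 and alpha > 1/2.
# All parameter values are assumptions for the example.

def G_A(eps, alpha, d0=0.9, d1=0.2, m=0.4, dL=0.3):
    return ((m - eps) * (alpha * d1 + (1 - alpha) * d0)
            + (m + eps) * (alpha * d0 + (1 - alpha) * d1) - dL)

base = G_A(0.10, 0.60)
assert G_A(0.11, 0.60) > base   # increasing in eps: slope (2a-1)(d0-d1) > 0
assert G_A(0.10, 0.61) > base   # increasing in alpha: slope 2*eps*(d0-d1) > 0
```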

We now prove the first part of statement 1 in Proposition 4.

Step 1. There is some \(\hat{\alpha }\in (1/2,1]\) such that there is an interior ambiguity bliss point \(\varepsilon (\alpha )\) for all \(\alpha < \hat{\alpha }\) and such that \(\tilde{\pi }_{1L}(\varepsilon ,\alpha )\) is increasing in \(\varepsilon \) for any \(\alpha >\hat{\alpha }\). It holds that \( \varepsilon ^{\prime }(\alpha )>0\) on \((1/2,\hat{\alpha })\) and \(\lim _{\alpha \downarrow 1/2}\varepsilon (\alpha )=0\).


First, the implicit function theorem implies that if \(\tilde{ \pi }_{1L}(\varepsilon ,\alpha )<1\) and \(\partial \tilde{\pi } _{1L}(\varepsilon ,\alpha )/\partial \varepsilon =0\), then this equality defines an implicit function \(\varepsilon (\alpha )\) on some neighborhood around \(\alpha \), which satisfies

$$\begin{aligned} \varepsilon ^{\prime }(\alpha )=-\frac{\frac{\partial ^{2}\tilde{\pi } _{1L}(\varepsilon ,\alpha )}{\partial \varepsilon \partial \alpha }}{\frac{ \partial ^{2}\tilde{\pi }_{1L}(\varepsilon ,\alpha )}{\partial \varepsilon ^{2}}}\text {.} \end{aligned}$$
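The implicit-function formula can be illustrated with a hypothetical smooth objective whose bliss point is known in closed form (this toy function is an assumption for the example, not the paper's \(\tilde{\pi }_{1L}\)):

```python
# Toy check of eps'(alpha) = -f_{eps,alpha} / f_{eps,eps} at an interior
# maximizer, using the hypothetical objective f(e, a) = -(e - a**2)**2,
# whose bliss point is e(a) = a**2, so that eps'(a) = 2*a.

def cross(f, e, a, h=1e-4):
    # central finite difference for d2 f / (d eps d alpha)
    return (f(e + h, a + h) - f(e + h, a - h)
            - f(e - h, a + h) + f(e - h, a - h)) / (4 * h * h)

def second_eps(f, e, a, h=1e-4):
    # central finite difference for d2 f / d eps^2
    return (f(e + h, a) - 2 * f(e, a) + f(e - h, a)) / (h * h)

f = lambda e, a: -(e - a**2) ** 2
a = 0.7
e = a**2                                   # bliss point for this alpha
slope = -cross(f, e, a) / second_eps(f, e, a)
assert abs(slope - 2 * a) < 1e-3           # matches eps'(alpha) = 2*alpha
```

The sign logic in the text is visible here as well: the denominator \(f_{\varepsilon \varepsilon }\) is negative at an interior maximizer, so \(\varepsilon ^{\prime }(\alpha )\) has the sign of the cross partial \(f_{\varepsilon \alpha }\).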

The Claim and Step 2 in the proof of Lemma 2 imply \(\varepsilon ^{\prime }(\alpha )>0\). That is, if an interior bliss point \(\varepsilon (\alpha )\) exists, then \(\varepsilon ^{\prime }(\alpha )>0\).

Since \(G(m,m,1/2)<0\), we can confirm that \(\tilde{\pi }_{1L}(0,1/2)<1\), and consequently, by the proof of Proposition 3, \(\partial \tilde{ \pi }_{1L}(0,1/2)/\partial \varepsilon =0\). Combining with the arguments in the previous paragraph, the implicit function theorem implies existence of an interior bliss point \(\varepsilon (\alpha )\) on some interval \((1/2,\hat{ \alpha })\).

The proof of Step 1 concludes by demonstrating that there is some \(\hat{ \alpha }\in (1/2,1]\) such that an interior bliss point exists for all \(\alpha <\hat{\alpha }\) but not for any \(\alpha >\hat{\alpha }\). To this end, suppose \( \alpha _{2}>\alpha _{1}\) and that \(\partial \tilde{\pi }_{1L}(\varepsilon (\alpha _{2}),\alpha _{2})/\partial \varepsilon =0\). Notice that

$$\begin{aligned} G(\beta (\pi _{1L},m-\varepsilon (\alpha _{2})),\beta (\pi _{1L},m+\varepsilon (\alpha _{2})),\alpha )=0 \end{aligned}$$

defines \(\tilde{\pi }_{1L}(\varepsilon (\alpha _{2}),\alpha )\) as an implicit function of \(\alpha \) globally over the interval \((1/2,\alpha _{2}]\). This follows because \(G(m-\varepsilon (\alpha _{2}),m+\varepsilon (\alpha _{2}),\alpha _{2})<0\) implies \(G(m-\varepsilon (\alpha _{2}),m+\varepsilon (\alpha _{2}),\alpha )<0\) for all \(\alpha <\alpha _{2}\). Since \(\partial \tilde{\pi }_{1L}(\varepsilon (\alpha _{2}),\alpha )/\partial \varepsilon =0\) implies \(\partial ^{2}\tilde{\pi }_{1L}(\varepsilon (\alpha _{2}),\alpha )/\partial \varepsilon \partial \alpha >0\), and \(\alpha _{2}>\alpha _{1}\), it must be that \(\partial \tilde{\pi }_{1L}(\varepsilon (\alpha _{2}),\alpha _{1})/\partial \varepsilon <0\). Therefore, \(\tilde{\pi }_{1L}(\varepsilon ,\alpha _{1})\) is not increasing in \(\varepsilon \). Since we know that either \(\tilde{\pi }_{1L}(\varepsilon ,\alpha _{1})\) is increasing in \( \varepsilon \), or there is an interior ambiguity bliss point \(\varepsilon (\alpha _{1})\), we conclude the latter. Since \(\alpha _{2}\) and \(\alpha _{1}\) were arbitrary, whenever \(\varepsilon (\alpha _{2})\) exists, \(\varepsilon (\alpha _{1})\) exists for any \(\alpha _{1}<\alpha _{2}\). This implies that there is some \(\hat{\alpha }\in (1/2,1]\) such that there is an interior bliss point for all \(\alpha <\hat{\alpha }\), but not for any \(\alpha >\hat{\alpha }\). \(\square \)

Step 2. There is some \(\hat{\Delta }_{0}>\Delta _{1}\) such that the value \(\hat{\alpha }\) in the statement of Step 1 satisfies \(\hat{\alpha }=1\) if \(\Delta _{0}<\hat{\Delta }_{0}\) and \(\hat{\alpha }<1\) if \(\Delta _{0}>\hat{ \Delta }_{0}\).


Let \(\tilde{\varepsilon }(\Delta _{0})\) denote the interior ambiguity bliss point as a function of \(\Delta _{0}\), with \(\alpha \) fixed at \(\alpha =1\). The comparative statics with respect to \(\alpha \) in the Claim and Step 1 can be repeated (virtually identically) with respect to \(\Delta _{0}\), to conclude that (1) \(\tilde{\varepsilon }^{\prime }(\Delta _{0})>0\) whenever \(\tilde{\varepsilon }(\Delta _{0})\) exists; (2) if \(\Delta _{0}\) is sufficiently close to \(\Delta _{1}\), then there is an interior bliss point \(\tilde{\varepsilon }(\Delta _{0})\); (3) if \(\tilde{\varepsilon }(\Delta _{0})\) exists, then \(\tilde{\varepsilon }(\Delta _{0}^{\prime })\) exists for any \(\Delta _{0}^{\prime }\in (\Delta _{1},\Delta _{0})\).

To conclude the argument, it suffices to notice that there is some \(\hat{ \Delta }_{0}\) such that if \(\Delta _{0}>\hat{\Delta }_{0}\), then \(\tilde{\pi } _{1L}(\varepsilon ,1)=1\) for some (large) values of \(\varepsilon \). In this case, \(\tilde{\pi }_{1L}(\varepsilon ,1)\) is increasing, i.e., there is no \( \tilde{\varepsilon }(\Delta _{0})\). Therefore, there must be a value \(\hat{ \Delta }_{0}\) such that \(\tilde{\varepsilon }(\Delta _{0})\) exists if \(\Delta _{0}<\hat{\Delta }_{0}\), and \(\tilde{\pi }_{1L}(\varepsilon ,1)\) is increasing if \(\Delta _{0}>\hat{\Delta }_{0}\). Combining with Step 1, \(\hat{\alpha }=1\) if \(\Delta _{0}<\hat{\Delta }_{0}\), and \(\hat{\alpha }<1\) if \(\Delta _{0}>\hat{ \Delta }_{0}\). \(\square \)


Cite this article

Hedlund, J., Kauffeldt, T.F. & Lammert, M. Persuasion under ambiguity. Theory Decis (2020).

Keywords


  • Persuasion
  • Ambiguity
  • Signaling
  • Information transmission