
Optimal monotone signals in Bayesian persuasion mechanisms

Research Article · Economic Theory

Abstract

This paper develops a new approach, based on majorization theory, to the information design problem in Bayesian persuasion mechanisms, i.e., models in which the sender selects the signal structure of the agent(s), who then report their signals to the non-strategic receiver. We consider a class of mechanisms in which the sender's posterior payoff depends on the value of the realized posterior mean of the state, its order in the sequence of possible means, and the marginal distribution of signals. We provide a simple characterization of the mechanisms in which optimal signal structures are monotone partitional. Our approach has two economic implications: it is invariant to monotone transformations of the state, and it allows us to decompose setups with multiple agents into independent single-agent Bayesian persuasion mechanisms. As the main application of our characterization, we show the optimality of monotone partitional signal structures in all selling mechanisms with independent private values and quasi-linear preferences. We also provide a sharp characterization of optimal signal structures for second-price auctions and posted-price sales.


Notes

  1. Focusing on finite signal spaces is justified by several arguments. For example, in Bayesian persuasion games any equilibrium with a finite number of actions is outcome equivalent to an equilibrium with a finite signal space (Kamenica and Gentzkow 2011). In other applications, it is optimal for the sender to select a signal structure with a finite number of signals even if the signal space is arbitrarily large (Bergemann and Pesendorfer 2007; Ivanov 2010). Also, if the signal structure is interpreted as an experiment that reveals information about the state, the signal space can be bounded by the experiment's technological constraints. Finally, a finite signal space can reflect “vocabulary” constraints, that is, the players' ability to distinguish among only a limited number of signals.

  2. As shown below, this representation implies that: (1) the sender’s and the agent’s marginal payoffs from the receiver’s action are linear in the posterior mean; and (2) the receiver’s action is a function of the agent’s signal and the distribution of posterior means.

  3. Monotone partitional signal structures have several natural interpretations. First, they imply that the transformation of the state into a signal is deterministic and monotone: each state is mapped into a single signal, which is non-decreasing in the state. Second, such signal structures can be viewed as a convex bundling of states into signals: if two states are mapped into a single signal, then all states between them are mapped into the same signal as well. Finally, monotone partitional signal structures arise endogenously in economic applications. A classical example is the cheap-talk game of Crawford and Sobel (1982), in which a perfectly informed agent uses only monotone partitional strategies for mapping states into messages sent to the receiver. (A numerical sketch of such a structure appears after these notes.)

  4. Majorization is extensively covered by Marshall et al. (2011).

  5. In particular, we employ the positive-quadrant dependent (PQD) stochastic order and the comonotonicity of distributions.

  6. Formally, Kolotilin (2018) considers a model of Bayesian persuasion with a privately informed receiver and a binary action space. However, he shows that this model is equivalent to the standard Bayesian persuasion game with the uninformed receiver and a continuum of actions.

  7. It is without loss of generality to consider signal structures \(\sigma \) with positive marginal probabilities of signals, \(g_{s}>0\) for all \(s\in \left[ n\right] \). The reason is that we mostly restrict attention to signal structures with a fixed marginal distribution of signals \(g^{o}=\left\{ g_{s}^{o}\right\} _{s=1}^{n}\). In this light, signal structures with zero probabilities of some signals are essentially equivalent to the ones with a smaller number of signals \(m<n\) that are generated with positive probabilities.

  8. That is, \(\Pi _{n}=\{\left\{ \sigma _{s}\left( \theta \right) \right\} _{s=1}^{n}:\sigma _{s}\left( \theta \right) \ge 0\) for all \(s\in \left[ n\right] ,\sum _{s=1}^{n}\sigma _{s}\left( \theta \right) =1\) for all \(\theta \in \Theta ,\left\{ g_{s}\right\} _{s=1}^{n}\in \Delta _{n}\), and \(\omega _{s+1}\ge \omega _{s}\) for all \(s\in \left[ n-1\right] \}\), where \(g_{s}\) and \(\omega _{s}\) are given by (1) and (4), respectively.

  9. I am thankful to the anonymous referee for bringing this point to my attention.

  10. Kamenica and Gentzkow (2011) focus on equilibria in which the receiver’s action is solely determined by his posterior belief. In general, Bayesian persuasion games can contain equilibria in which the action of the receiver also depends on the signal realization. These equilibria, however, are very specific, since the dependence of the receiver’s action on the signal stems from his indifference between actions.

  11. Kolotilin and Zapechelnyuk (2019, Section 5) call this setup a “linear” model; less formally, the literature refers to it as the model in which all payoffs depend on the posterior distribution of the state only through its posterior mean.

  12. It follows directly from the first-order conditions \(\frac{\partial }{\partial a}E_{\mu _{s}}\left[ U^{i}\left( a,\theta \right) \right] =\phi ^{i}\left( a\right) \omega _{s}-\psi ^{i}\left( a\right) =0.\)

  13. In addition, it is technically easier to characterize the set of distributions of posterior means \({\mathcal {G}}_{n}\) by using the convex stochastic order and then restrict attention to the subset \(\left\{ \left\{ g,\omega \right\} \in {\mathcal {G}}_{n}:g=g^{o}\right\} \), which includes distributions of posterior means with a given distribution of signals \(g^{o}\), instead of characterizing the set of sequences of feasible posterior means \(\left\{ \omega :\left\{ g^{o},\omega \right\} \in {\mathcal {G}}_{n}\right\} \) for a given \(g^{o}\).

  14. For example, Theorem 1 in Bergemann and Pesendorfer (2007) relies on the fact that the local downward incentive-compatibility constraints of buyers are binding in any revenue-maximizing mechanism.

  15. Below, we extend the model to allocation rules of the form \(q_{i}\left( {\vec {\omega }}_{{\vec {z}}}\right) \), where \({\vec {\omega }}_{{\vec {z}}}=\left\{ \omega _{z_{i}}^{i}\right\} _{i\in {\mathcal {N}}}\) is the profile of buyers’ posterior means induced by reported signals \(z_{i}\). That is, \(q_{i}\) depends on values of posterior means induced by signals, but not on signals themselves.

  16. First, \(Q_{s}\omega _{s}-T_{s}\left( \omega \right) \ge Q_{z}\omega _{s}-T_{z}\left( \omega \right) \) and \(Q_{z}\omega _{z}-T_{z}\left( \omega \right) \ge Q_{s}\omega _{z}-T_{s}\left( \omega \right) \), where \(z\ne s\), result in \(\left( Q_{s}-Q_{z}\right) \left( \omega _{s}-\omega _{z}\right) \ge 0\) for all \(s,z\in \left[ n\right] ,\omega \in \Omega _{n}\). Since \(\omega \) is increasing, this inequality is equivalent to \(\left( Q_{s}-Q_{z}\right) \left( s-z\right) \ge 0\). Also, the sufficiency of local incentive-compatibility constraints holds by induction.

  17. The bounds on \({\tilde{Q}}_{s}\) follow from the fact that (22) is equivalent to \(U_{s}\left( \omega \right) \ge U_{z}\left( \omega \right) -Q_{s}\left( \omega _{s}-\omega _{z}\right) \) for all \(s,z\in \left[ n\right] ,\omega \in \Omega _{n}\) and the local incentive-compatibility constraints (24).

  18. Formally, \(\sigma ^{o}\) is defined up to the set of cutoff points \(\left\{ \theta _{s}\right\} _{s=1}^{n-1}\) of measure zero.

  19. In addition, \(c\left( x\right) \) must be tangent to \({\bar{c}}\left( x\right) \) at boundary points \(\upsilon _{0}=\upsilon \left( 0\right) \ \) and \(\upsilon _{n}=\upsilon \left( 1\right) .\)

  20. An interval of signals is a set \(\left\{ k,k+1,\ldots ,l-1,l\right\} \) for \(1\le k\le l\le n\).

  21. See Marshall et al. (2011) for an extensive analysis of the majorization order.

  22. In order to see the difference between the convex order and weighted majorization, consider any discrete distribution \(\left\{ g,\omega \right\} \). An increase in the probabilities of extreme values results in a distribution that dominates the initial one in the convex order, but not in weighted majorization. For example, let \(n=3\) and consider the distribution \(\left\{ g,\omega \right\} =\left\{ \left( \frac{1}{3},\frac{1}{3},\frac{1}{3}\right) ,\left( 1,2,3\right) \right\} \). Then, both \(\left\{ g^{1},\omega ^{1}\right\} =\left\{ \left( \frac{1}{3},\frac{1}{3},\frac{1}{3}\right) ,\left( 0,2,4\right) \right\} \) and \(\left\{ g^{2},\omega ^{2}\right\} =\left\{ \left( \frac{2}{5},\frac{1}{5},\frac{2}{5}\right) ,\left( 1,2,3\right) \right\} \) dominate \(\left\{ g,\omega \right\} \) in the convex order, but only \(\left\{ g^{1},\omega ^{1}\right\} \) dominates \(\left\{ g,\omega \right\} \) in weighted majorization, since \(g^{1}=g\) and \(\omega ^{1}\succ _{g}\omega \). (A numerical check of this example appears after these notes.)

  23. In fact, \(\omega ^{o}\succ _{g}\omega \) if and only if the second line segments of the integral functions are parallel. The first and the third line segments are parallel, since \(c^{\prime }\left( x\right) =c^{o\prime }\left( x\right) =0\) for \(x<\omega _{1}^{o}\) and \(c^{\prime }\left( x\right) =c^{o\prime }\left( x\right) =1\) for \(x>\omega _{2}^{o}.\)

  24. The PQD order is also called the concordance order by Joe (1997).

  25. Given a monotone partitional signal structure \(\sigma ^{o}\), a state \(\theta \in (\theta _{s-1},\theta _{s}]\) generates a signal s. Equivalently, the function \({\mathcal {X}}\left( \theta \right) =\sum _{k=1}^{n}k{\mathbf {1}}_{(\theta _{k-1}^{o},\theta _{k}^{o}]}\left( \theta \right) \) is increasing in \(\theta \), which implies that the state \(\theta \) and the signal \(s={\mathcal {X}}\left( \theta \right) \) are comonotonic, i.e., perfectly correlated, random variables. For comonotonic variables s and \(\theta \) with marginal distributions \(G_{s}\) and \(F\left( \theta \right) \), respectively, the joint distribution \(H\left( s,\theta \right) \) is given by (36). A notable reference on comonotonic variables is Puccetti and Scarsini (2010).

  26. Amir (2019) provides an extensive review of this property and its recent applications to economic theory.

  27. In particular, \(\frac{\partial }{\partial \omega _{s}}V_{s}\left( g,\omega _{s}\right) \ge \frac{\partial }{\partial \omega _{s}}V_{k}\left( g,\omega _{s}\right) \ge \frac{\partial }{\partial \omega _{k}}V_{k}\left( g,\omega _{k}\right) \) for all \(\omega _{s}\ge \omega _{k}\) and \(\ s>k\), where the first inequality follows from the supermodularity of \(V_{s}\left( g,x\right) \) in \(\left( x,s\right) \) and the second inequality follows from the convexity of \(V\left( g,x\right) \) in x.

  28. Consider \(V_{1}\left( g,x\right) =-x^{2}\) and \(V_{2}\left( g,x\right) =\ln x\), which are concave in x. However, Condition 1 is satisfied: \(\frac{\partial }{\partial x}V_{2}\left( g,x\right) |_{x=\omega _{s}}=\frac{1}{\omega _{s}}>0>-2\omega _{k}=\frac{\partial }{\partial x}V_{1}\left( g,x\right) |_{x=\omega _{k}}\) for all \(\omega _{s},\omega _{k}>0\).

  29. In contrast to Theorem 1, however, there exist multiple signal structures in \(\Pi _{n}\) that induce the same posterior mean \(E\left[ \upsilon \left( \theta \right) \right] \) for all signals.

  30. That is, if the highest bid is submitted by \(M>1\) bidders, the seller awards the object to each of these bidders with probability 1/M.

  31. In particular, any monotone partitional \(\left\{ g^{*},\omega ^{*}\right\} \in {\mathcal {G}}_{m}^{+}\), where \(m<n\), is suboptimal, since the seller can increase the ex-ante revenue by decomposing the highest posterior mean \(\omega _{m}^{*}\) into two means, \(\omega _{m}^{o}\) and \(\omega _{m+1}^{o}\), where \(\omega _{m}^{o}<\omega _{m}^{*}<\omega _{m+1}^{o}\). If the number of bidders is sufficiently large, the second price is likely to equal the highest posterior mean, i.e., \(\omega _{m+1}^{o}>\omega _{m}^{*}\).

  32. Suppose \(\upsilon \left( \theta \right) =\theta ,c=0,p=0.55\), and \(\theta \) is distributed uniformly on \(\left[ 0,1\right] \). Consider two signal structures. The first one is monotone partitional with three subintervals and cutoffs \(\left\{ \theta _{1},\theta _{2}\right\} =\left\{ 1/3,2/3\right\} \). It generates the sequence of posterior means \(\omega ^{o}=\left\{ 1/6,1/2,5/6\right\} \) with corresponding probabilities \(g=\left\{ 1/3,1/3,1/3\right\} \). The second one is non-partitional, such that \(\left[ \theta _{1},\theta _{2}\right] \) is mapped into signals \(s_{2}\) and \(s_{3}\) with probabilities 3/4 and 1/4, respectively, and \(\left[ \theta _{2},1\right] \) is mapped into \(s_{2}\) and \(s_{3}\) with probabilities 1/4 and 3/4, respectively. This signal structure generates the sequence of posterior means \(\omega =\left\{ 1/6,7/12,3/4\right\} \) with the same distribution of signals. Then \(\omega ^{o}\succ _{g}\omega \); however, \(EV\left( g,\omega \right) =11/30>11/60=EV\left( g,\omega ^{o}\right) \). (A numerical verification appears after these notes.)

  33. Other notable works on delegation are Melumad and Shibano (1991), Alonso and Matouschek (2008), and Kováč and Mylovanov (2009).

  34. See, for example, Ivanov (2010).

  35. Similarly, in the model by Bergemann and Pesendorfer (2007), the seller (the sender) optimizes over signal structures of buyers (agents) and the allocation-transfer mechanism (the receiver’s action).

  36. As noted above, the sender’s posterior payoff function in the model of sales with a fixed price does not satisfy Condition 1, but the optimal signal structure is monotone partitional.

  37. Moreover, \(\left\{ g,\omega ^{t}\right\} \in {\mathcal {G}}_{n},t=1,\ldots ,T\). This follows from the characterization of \({\mathcal {G}}_{n}\) via integral functions (29) and the fact that g-majorization is a special case of the convex stochastic order. Then, \(\left\{ g,\omega \right\} \in {\mathcal {G}}_{n},\left\{ g,\omega ^{o}\right\} \in {\mathcal {G}}_{n}\), and \(\omega ^{o}\succ _{g}\omega ^{t}\succ _{g}\omega \) result in \({\bar{c}}\left( x\right) \ge c^{o}\left( x\right) \ge c^{t}\left( x\right) \ge c\left( x\right) \ge c_{0}\left( x\right) \), where \(c^{o}\left( x\right) ,c^{t}\left( x\right) \), and \(c\left( x\right) \) are the integral functions associated with \(\left\{ g,\omega ^{o}\right\} ,\left\{ g,\omega ^{t}\right\} \), and \(\left\{ g,\omega \right\} \), respectively; and \({\bar{c}}\left( x\right) \) and \(c_{0}\left( x\right) \) are given by (30) and (31).

  38. If \(\alpha _{s}^{T}\) is constant for all \(s\in \left[ n\right] \), we put \(n^{*}=1\).

  39. For example, given any \(g^{-i}\in \Delta ^{-i}\), we can put \(t_{i}^{*}\left( {\vec {s}},\omega ^{i}\right) =T_{s_{i}}^{*,i}\left( g^{-i},\omega ^{i}\right) \).
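
The following three sketches illustrate footnotes 3, 22, and 32 numerically. First, a minimal sketch of a monotone partitional signal structure, assuming a uniform prior on \([0,1]\); the function names are illustrative, not notation from the paper:

```python
import numpy as np

# Monotone partitional signal structure: states in (theta_{s-1}, theta_s]
# are mapped deterministically to signal s, so the state-to-signal map is
# non-decreasing, as described in footnote 3.

def monotone_partition_signal(theta, cutoffs):
    """Signal s in {1, ..., n} generated by state theta."""
    return int(np.searchsorted(cutoffs, theta, side="left")) + 1

def posterior_means_uniform(cutoffs):
    """Posterior means E[theta | s] under a uniform prior: interval midpoints."""
    edges = np.concatenate(([0.0], cutoffs, [1.0]))
    return (edges[:-1] + edges[1:]) / 2

cutoffs = np.array([1/3, 2/3])                  # two cutoffs -> three signals
print(monotone_partition_signal(0.5, cutoffs))  # 2
print(posterior_means_uniform(cutoffs))         # [1/6 1/2 5/6]
```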
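
Second, a numerical check of the example in footnote 22, using the standard characterization of the convex order by equal means plus dominance of the stop-loss transforms \(E[(X-t)_{+}]\) (for discrete distributions it suffices to compare these at the support points); the helper names are ours:

```python
import numpy as np

# Convex-order check for discrete distributions {g, w}: equal means plus
# dominance of E[(X - t)+] at every support point.

def stop_loss(g, w, t):
    return float(np.sum(np.asarray(g) * np.maximum(np.asarray(w) - t, 0.0)))

def dominates_convex(g_hi, w_hi, g_lo, w_lo):
    if not np.isclose(np.dot(g_hi, w_hi), np.dot(g_lo, w_lo)):
        return False  # the convex order requires equal means
    grid = np.union1d(w_hi, w_lo)
    return all(stop_loss(g_hi, w_hi, t) >= stop_loss(g_lo, w_lo, t) - 1e-12
               for t in grid)

g,  w  = [1/3, 1/3, 1/3], [1, 2, 3]
g1, w1 = [1/3, 1/3, 1/3], [0, 2, 4]   # same weights: also g-majorizes w
g2, w2 = [2/5, 1/5, 2/5], [1, 2, 3]   # different weights: convex order only
print(dominates_convex(g1, w1, g, w))  # True
print(dominates_convex(g2, w2, g, w))  # True
```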
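
Third, a verification of the revenue comparison in footnote 32, assuming the buyer purchases at the posted price p whenever the posterior mean weakly exceeds p:

```python
from fractions import Fraction as F

# Posted-price revenue: with zero cost, the seller earns p times the
# probability that the induced posterior mean is at least p.

def revenue(g, w, p):
    return sum(gs * p for gs, ws in zip(g, w) if ws >= p)

p = F(11, 20)                               # p = 0.55
g = [F(1, 3)] * 3
w_partitional = [F(1, 6), F(1, 2), F(5, 6)]
w_mixed       = [F(1, 6), F(7, 12), F(3, 4)]
print(revenue(g, w_partitional, p))         # 11/60
print(revenue(g, w_mixed, p))               # 11/30
```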

References

  • Alonso, R., Camara, O.: Bayesian persuasion with heterogeneous priors. J. Econ. Theory 165, 672–706 (2016)
  • Alonso, R., Matouschek, N.: Optimal delegation. Rev. Econ. Stud. 75, 259–293 (2008)
  • Amir, R.: Supermodularity and complementarity in economic theory. Econ. Theory 67, 487–496 (2019). https://doi.org/10.1007/s00199-019-01196-6
  • Arnold, B., Balakrishnan, N., Nagaraja, H.: A First Course in Order Statistics. SIAM Publishers, Philadelphia (2008)
  • Bergemann, D., Pesendorfer, M.: Information structures in optimal auctions. J. Econ. Theory 137, 580–609 (2007)
  • Brandt, N., Eckwert, B., Várdy, F.: Information and the dispersion of posterior expectations. J. Econ. Theory 154, 604–611 (2014)
  • Cheng, K.-W.: Majorization: its extensions and the preservation theorems. Technical report 121, Stanford University (1977)
  • Crawford, V., Sobel, J.: Strategic information transmission. Econometrica 50, 1431–1451 (1982)
  • Dworczak, P., Martini, G.: The simple economics of optimal persuasion. J. Polit. Econ. 127, 1993–2048 (2019)
  • Ganuza, J.J., Penalva, J.: Signal orderings based on dispersion and the supply of private information in auctions. Econometrica 78, 1007–1030 (2010)
  • Gentzkow, M., Kamenica, E.: A Rothschild–Stiglitz approach to Bayesian persuasion. Am. Econ. Rev. Pap. Proc. 106, 597–601 (2016)
  • Holmström, B.: On incentives and control in organizations. Ph.D. dissertation, Stanford University (1977)
  • Ivanov, M.: Informational control and organizational design. J. Econ. Theory 145, 721–751 (2010)
  • Ivanov, M.: Information revelation in competitive markets. Econ. Theory 52, 337–365 (2013). https://doi.org/10.1007/s00199-011-0629-3
  • Joe, H.: Multivariate Models and Multivariate Dependence Concepts. Monographs on Statistics & Applied Probability. Chapman & Hall, Boca Raton (1997)
  • Johnson, J., Myatt, D.: On the simple economics of advertising, marketing, and product design. Am. Econ. Rev. 96, 756–784 (2006)
  • Kamenica, E., Gentzkow, M.: Bayesian persuasion. Am. Econ. Rev. 101, 2590–2615 (2011)
  • Kolotilin, A.: Optimal information disclosure: a linear programming approach. Theor. Econ. 13, 607–635 (2018)
  • Kolotilin, A., Li, M., Mylovanov, T., Zapechelnyuk, A.: Persuasion of a privately informed receiver. Econometrica 85, 1949–1964 (2017)
  • Kolotilin, A., Zapechelnyuk, A.: Persuasion meets delegation. Working paper (2019)
  • Kováč, E., Mylovanov, T.: Stochastic mechanisms in settings without monetary transfers: regular case. J. Econ. Theory 144, 1373–1395 (2009)
  • Lewis, T., Sappington, D.: Supplying information to facilitate price discrimination. Int. Econ. Rev. 35, 309–327 (1994)
  • Li, H., Shi, X.: Discriminatory information disclosure. Am. Econ. Rev. 107, 3363–3385 (2017)
  • Marshall, A., Olkin, I., Arnold, B.: Inequalities: Theory of Majorization and Its Applications. Springer Series in Statistics. Springer, New York (2011)
  • Melumad, N., Shibano, T.: Communication in settings with no transfers. RAND J. Econ. 22, 173–198 (1991)
  • Mensch, J.: Monotone persuasion. Working paper (2016)
  • Myerson, R.: Optimal auction design. Math. Oper. Res. 6, 58–73 (1981)
  • Puccetti, G., Scarsini, M.: Multivariate comonotonicity. J. Multivar. Anal. 101, 291–304 (2010)
  • Saak, A.: The optimal private information in single unit monopoly. Econ. Lett. 91, 267–272 (2006)
  • Shaked, M., Shanthikumar, J.G.: Stochastic Orders. Springer Series in Statistics. Springer, New York (2007)
  • Tchen, A.: Inequalities for distributions with given marginals. Ann. Probab. 8, 814–827 (1980)


Acknowledgements

I owe special thanks to the Associate Editor and the anonymous referee for their valuable comments. I am also thankful to Ricardo Alonso, Sourav Bhattacharya, Seungjin Han, Sergei Izmalkov, Emir Kamenica, René Kirkegaard, Nicolas Klein, Vijay Krishna, Hao Li, Michael Peters, Jeffrey Racine, Sergei Severinov, Xianwen Shi, Alex Smolin, Joel Sobel, and audiences at University of Toronto, University of Western Ontario, University of British Columbia, University of Pittsburgh, University of Montreal, the World Congress of the Econometric Society 2015, and Midwest Theory Conference 2015 for comments and discussions. All mistakes are mine. This work was supported by the Canadian Social Sciences and Humanities Research Council (SSHRC) Insight Development Grant 430-2015-00242. The paper was previously circulated under the title “Optimal Signals in Bayesian Persuasion Mechanisms with Ranking”.

Author information

Correspondence to Maxim Ivanov.


Appendix

Proof of Lemma 1

Consider a distribution of posterior means \(\left\{ g,\omega \right\} \in {\mathcal {G}}_{n}\), such that \(\omega _{s}=E\left[ \upsilon \left( \theta \right) |s\right] ,s\in \left[ n\right] \), where \(\upsilon \left( \theta \right) \) is right-continuous and strictly increasing (and hence integrable). Let \(\sigma \in \Pi _{n}\) be a signal structure that generates \(\left\{ g,\omega \right\} \) and \(H\left( s,\theta \right) \) be the joint distribution of \(\left( s,\theta \right) \) induced by \(\sigma \) according to (2).

Because \(f\left( \theta \right) >0\) almost everywhere on \(\left[ 0,1\right] \), \(F\left( \theta \right) \) is strictly increasing and differentiable in \(\theta \) for \(\theta \in \left( 0,1\right) \). As a result, \(F^{-1}\left( x\right) \) exists, is uniquely determined, and is strictly increasing in x on \(\left( 0,1\right) \). Consider the monotone partitional \(\sigma ^{o}\in \Pi _{n}\) with the cutoffs \(\left\{ \theta _{s}^{o}\right\} _{s=1}^{n-1}\), such that \(\theta _{s}^{o}=F^{-1}\left( G_{s}\right) \), where \(G_{s}=\sum \nolimits _{i=1}^{s}g_{i}\) is the cdf of signals generated by \(\sigma \). Because \(g>0\) and \(F\left( \theta \right) \) is strictly increasing, the sequence \(\left\{ \theta _{s}^{o}\right\} _{s=1}^{n-1}\) is unique, strictly increasing, and satisfies \(\theta _{s}^{o}\in \left( 0,1\right) ,s\in \left[ n-1\right] \). By construction, we have

$$\begin{aligned} G_{s}^{o}=F\left( \theta _{s}^{o}\right) =G_{s}\text { for all }s\in \left[ n\right] , \end{aligned}$$
(48)

or, equivalently, the marginal distribution of signals generated by \(\sigma ^{o}\) is g. Hence, \(H\left( s,\theta \right) \in {\mathcal {M}}(G,F)\) and \(H^{o}\left( s,\theta \right) \in {\mathcal {M}}(G,F)\), where \({\mathcal {M}}(G,F)\) is the class of all bivariate distributions with fixed marginal cdfs \(G_{s}\) and \(F\left( \theta \right) \). Also, there is only one monotone partitional \(\left\{ g,\omega ^{o}\right\} \in {\mathcal {G}}_{n}^{+}\), since any change in \(\left\{ \theta _{s}^{o}\right\} _{s=1}^{n-1}\) results in a violation of (48) due to the strict monotonicity of \(F\left( \theta \right) \).
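
To make the construction \(\theta _{s}^{o}=F^{-1}\left( G_{s}\right) \) concrete, here is a small numerical sketch (our code, not the paper's); for illustration we take F to be the Beta(2,2) cdf, \(F(x)=3x^{2}-2x^{3}\), and invert it by bisection:

```python
import numpy as np

# Lemma 1's construction: given the marginal signal probabilities g of an
# arbitrary signal structure, the matching monotone partition has cutoffs
# theta_s^o = F^{-1}(G_s), where G_s is the signal cdf.

def F(x):
    return 3 * x**2 - 2 * x**3          # Beta(2,2) cdf, for illustration only

def F_inv(p, lo=0.0, hi=1.0, tol=1e-12):
    while hi - lo > tol:                # bisection: F is strictly increasing
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if F(mid) < p else (lo, mid)
    return (lo + hi) / 2

g = [0.2, 0.5, 0.3]                     # marginal distribution of signals
G = np.cumsum(g)[:-1]                   # interior cdf values G_1, G_2
cutoffs = [F_inv(p) for p in G]         # theta_1^o < theta_2^o
print(cutoffs)
edges = [0.0, *cutoffs, 1.0]
print(np.diff([F(e) for e in edges]))   # recovers g: [0.2, 0.5, 0.3]
```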

Second, since the step function \(\varkappa \left( \theta \right) =\sum _{k=1}^{n}k{\mathbf {1}}_{(\theta ^{o}_{k-1},\theta ^{o}_{k}]}\left( \theta \right) \) is increasing in \(\theta \) on \(\left[ 0,1\right] \), the joint distribution \(H^{o}\left( s,\theta \right) \) generated by \(\sigma ^{o}\) is given by

$$\begin{aligned} H^{o}\left( s,\theta \right) =\Pr \left\{ s^{\prime }\le s,\theta ^{\prime }\le \theta \right\} =\Pr \left\{ \varkappa \left( \theta ^{\prime }\right) \le s,\theta ^{\prime }\le \theta \right\} =\min \left\{ G_{s},F\left( \theta \right) \right\} . \end{aligned}$$

As shown by Joe (1997), if \(H\left( s,\theta \right) \in {\mathcal {M}}(G,F)\), then

$$\begin{aligned} H^{o}\left( s,\theta \right) =\min \left\{ G_{s},F\left( \theta \right) \right\} \ge H\left( s,\theta \right) \text { for all }s\in \left[ n\right] \text { and}\ \theta \in \left[ 0,1\right] . \end{aligned}$$
(49)

Finally, Tchen (1980) shows that if (49) holds for \(H^{o}\left( s,\theta \right) \in {\mathcal {M}}(G,F)\) and \(H\left( s,\theta \right) \in {\mathcal {M}}(G,F)\), and \(\upsilon \left( \theta \right) \) is right-continuous, increasing, and integrable, then

$$\begin{aligned} E\left[ E^{o}\left[ \upsilon \left( \theta \right) |s\right] |s\le k\right] \le E\left[ E\left[ \upsilon \left( \theta \right) |s\right] |s\le k\right] \text { for all }k\in \left[ n\right] , \end{aligned}$$
(50)

which implies (33). Then, \(E\left[ \omega _{s}^{o}\right] =E\left[ \omega _{s}\right] =E\left[ \upsilon \left( \theta \right) \right] \) and (48) yield \(\omega ^{o}\succ _{g}\omega \). \(\square \)

Proof of Theorem 1

Consider a distribution of posterior means \(\left\{ g,\omega \right\} \in {\mathcal {G}}_{n}\), which provides the ex-ante payoff \(EV\left( g,\omega \right) \) according to (13), where \(V_{s}\left( g,\omega _{s}\right) \) satisfies Condition 1 for g. We show below that \(EV\left( g,\omega \right) \le EV\left( g,\omega ^{o}\right) \), where \(\left\{ g,\omega ^{o}\right\} \in {\mathcal {G}}_{n}^{+}\) is generated by the monotone partitional \(\sigma ^{o} \in \Pi _{n}\) with cutoffs \(\left\{ \theta _{s}\right\} _{s=1}^{n-1}=\left\{ F^{-1}\left( G_{s}\right) \right\} _{s=1}^{n-1}\).

By Lemma 1, it follows that \(\omega ^{o}\) is unique and \(\omega ^{o} \succ _{g}\omega \). Then, Theorem 2.4 in Cheng (1977) implies that there is a finite collection of weakly increasing sequences \(\left\{ \omega ^{t}\right\} _{t=1}^{T}\), such that \(\omega ^{o}=\omega ^{T+1}\succ _{g}\omega ^{T}\succ _{g}\cdots \succ _{g}\omega ^{1}\succ _{g}\omega ^{0}=\omega \), and all pairs \(\omega ^{t},\omega ^{t+1},t=0,\ldots ,T\) differ in only two elements. Hence, it is sufficient to consider \(\omega ^{t}\) and \(\omega ^{t+1}\), which differ in \(\left\{ \omega _{k}^{t},\omega _{s}^{t}\right\} \) and \(\left\{ \omega _{k}^{t+1},\omega _{s}^{t+1}\right\} \), where \(k<s\). Since \(\omega ^{t+1} \succ _{g}\omega ^{t}\), this means

$$\begin{aligned} \omega _{k}^{t+1}&<\omega _{k}^{t}\le \omega _{s}^{t}<\omega _{s}^{t+1}\text {, and}\\ g_{k}\omega _{k}^{t}+g_{s}\omega _{s}^{t}&=g_{k}\omega _{k}^{t+1}+g_{s} \omega _{s}^{t+1}, \end{aligned}$$

where the last equality can be written as

$$\begin{aligned} g_{k}\left( \omega _{k}^{t+1}-\omega _{k}^{t}\right) =-g_{s}\left( \omega _{s}^{t+1}-\omega _{s}^{t}\right) . \end{aligned}$$
(51)

Also, because \(\omega ^{o}\succ _{g}\omega ^{t},t=1,\ldots ,T\), and \(\omega ^{o}\in \Omega _{n}^{+}\), we have \(\upsilon \left( 0\right) <\omega _{1}^{o}\le \omega _{1}^{t}\) and \(\omega _{n}^{t}\le \omega _{n}^{o}\le \upsilon \left( 1\right) \). Since \(\omega ^{t}\) is also weakly increasing, it follows that \(\omega ^{t}\in \Omega _{n}\), that is, \(\left\{ g,\omega ^{t}\right\} \in \Delta _{n}\times \Omega _{n}\) (see footnote 37). Therefore, \(EV\left( g,\omega ^{t}\right) \) is defined for all \(t=1,\ldots ,T\).

Suppose first that \(\omega ^{t}\in \Omega _{n}^{+}\) and \(\omega ^{t+1}\in \Omega _{n}^{+}\), i.e., \(\omega ^{t}\) and \(\omega ^{t+1}\) are strictly increasing. Since \(\Omega _{n}^{+}\) is convex, \(\omega ^{\theta }=\left( 1-\theta \right) \omega ^{t}+\theta \omega ^{t+1}\in \Omega _{n}^{+}\) for any \(\theta \in \left( 0,1\right) \). Also, because \(\Omega _{n}^{+}\) is open and \(EV\left( g,\omega \right) \) is differentiable in \(\omega \) on \(\Omega _{n}^{+}\) for all \(g\in \Delta _{n}\), the derivative \(\frac{\partial }{\partial \omega _{i}}EV\left( g,\omega ^{\theta }\right) \) exists for all \(i\in \left[ n\right] \) and \(\theta \in \left( 0,1\right) \) and is given by

$$\begin{aligned} \frac{\partial EV\left( g,\omega ^{\theta }\right) }{\partial \omega _{i}} =g_{i}\frac{\partial V_{i}\left( g,\omega _{i}^{\theta }\right) }{\partial \omega _{i}}. \end{aligned}$$
(52)

Then, Lagrange’s mean value theorem implies

$$\begin{aligned} \Delta EV= & {} EV\left( g,\omega ^{t+1}\right) -EV\left( g,\omega ^{t}\right) =\left( \omega _{k}^{t+1}-\omega _{k}^{t}\right) \frac{\partial EV\left( g,\omega ^{\theta }\right) }{\partial \omega _{k}}\\&+\left( \omega _{s} ^{t+1} -\omega _{s}^{t}\right) \frac{\partial EV\left( g,\omega ^{\theta }\right) }{\partial \omega _{s}}, \end{aligned}$$

for some \(\theta \in \left( 0,1\right) \). This gives

$$\begin{aligned} \Delta EV&=g_{k}\left( \omega _{k}^{t+1}-\omega _{k}^{t}\right) \frac{\partial V_{k}\left( g,\omega _{k}^{\theta }\right) }{\partial \omega _{k}}+g_{s}\left( \omega _{s}^{t+1}-\omega _{s}^{t}\right) \frac{\partial V_{s}\left( g,\omega _{s}^{\theta }\right) }{\partial \omega _{s}}\nonumber \\&=g_{s}\left( \omega _{s}^{t+1}-\omega _{s}^{t}\right) \left( \frac{\partial V_{s}\left( g,\omega _{s}^{\theta }\right) }{\partial \omega _{s}}-\frac{\partial V_{k}\left( g,\omega _{k}^{\theta }\right) }{\partial \omega _{k}}\right) \ge 0, \end{aligned}$$
(53)

where the first and second equalities follow from (52) and (51), respectively, and the inequality follows from Condition 1, \(s>k\), and \(\omega _{s}^{\theta }>\omega _{k}^{\theta }\).
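
As a toy numerical illustration of this step (our example, with \(V_{s}\left( g,x\right) =\alpha _{s}x\) and \(\alpha _{s}\) increasing in s, so that Condition 1 holds):

```python
# A g-preserving two-element spread, as in (51), raises EV when Condition 1
# holds; here V_s(g, x) = alpha_s * x with alpha increasing in s.
g     = [0.5, 0.5]
alpha = [1.0, 2.0]

def EV(omega):
    return sum(gi * ai * wi for gi, ai, wi in zip(g, alpha, omega))

w_t  = [0.4, 0.6]
d    = 0.1                                      # size of the transfer
w_t1 = [w_t[0] - d, w_t[1] + d * g[0] / g[1]]   # satisfies (51)
print(EV(w_t), EV(w_t1))                        # 0.8 <= 0.85, as in (53)
```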

If \(\omega ^{t}\in \Omega _{n}\) and/or \(\omega ^{t+1}\in \Omega _{n}\) are not strictly increasing, consider \(\omega ^{t,\varepsilon }=\left( 1-\varepsilon \right) \omega ^{t}+\varepsilon \omega ^{o}\) and \(\omega ^{t+1,\varepsilon }=\left( 1-\varepsilon \right) \omega ^{t+1}+\varepsilon \omega ^{o}\), where \(\varepsilon \in \left( 0,1\right) \). Because \(\omega ^{o}\in \Omega _{n}^{+}\), we have \(\omega ^{t,\varepsilon }\in \Omega _{n}^{+}\) and \(\omega ^{t+1,\varepsilon }\in \Omega _{n}^{+}\) for any \(\varepsilon \in \left( 0,1\right) \). Then, \(\omega _{k}^{t+1}<\omega _{k}^{t}\le \omega _{s}^{t}<\omega _{s}^{t+1}\) and \(\omega _{k}^{o}<\omega _{s}^{o}\) result in

$$\begin{aligned} \left( 1-\varepsilon \right) \omega _{k}^{t+1}+\varepsilon \omega _{k} ^{o}<\left( 1-\varepsilon \right) \omega _{k}^{t}+\varepsilon \omega _{k} ^{o}<\left( 1-\varepsilon \right) \omega _{s}^{t}+\varepsilon \omega _{s} ^{o}<\left( 1-\varepsilon \right) \omega _{s}^{t+1}+\varepsilon \omega _{s}^{o} \end{aligned}$$

for all \(\varepsilon \in \left( 0,1\right) \). Also, \(\omega _{i}^{t}=\omega _{i}^{t+1},i\notin \left\{ k,s\right\} \) implies

$$\begin{aligned} \omega _{i}^{t,\varepsilon }=\left( 1-\varepsilon \right) \omega _{i} ^{t}+\varepsilon \omega _{i}^{o}=\left( 1-\varepsilon \right) \omega _{i} ^{t+1}+\varepsilon \omega _{i}^{o}=\omega _{i}^{t+1,\varepsilon },i\notin \left\{ k,s\right\} . \end{aligned}$$

Thus, \(\omega ^{t,\varepsilon }\) and \(\omega ^{t+1,\varepsilon }\) differ only in \(\left\{ \omega _{k}^{t,\varepsilon },\omega _{s}^{t,\varepsilon }\right\} \) and \(\left\{ \omega _{k}^{t+1,\varepsilon },\omega _{s}^{t+1,\varepsilon }\right\} \), and \(\omega ^{t+1,\varepsilon }\succ _{g}\omega ^{t,\varepsilon }\). As a result, (53) holds for \(\omega ^{t,\varepsilon }\) and \(\omega ^{t+1,\varepsilon }\):

$$\begin{aligned} EV\left( g,\omega ^{t+1,\varepsilon }\right) -EV\left( g,\omega ^{t,\varepsilon }\right) \ge 0\text { for all }\varepsilon \in \left( 0,1\right) . \end{aligned}$$
(54)

Finally, (54) and the continuity of \(EV\left( g,\omega \right) \) in \(\omega \) on \(\Omega _{n}\) for all \(g\in \Delta _{n}\) imply

$$\begin{aligned} EV\left( g,\omega ^{t+1}\right) -EV\left( g,\omega ^{t}\right) =\lim _{\varepsilon \downarrow 0}\left( EV\left( g,\omega ^{t+1,\varepsilon }\right) -EV\left( g,\omega ^{t,\varepsilon }\right) \right) \ge 0\text {.} \end{aligned}$$

\(\square \)

Proof of Corollary 1

If \(\frac{\partial }{\partial x}V_{s}\left( g,x\right) |_{x=\omega _{s}}\le \frac{\partial }{\partial x}V_{k}\left( g,x\right) |_{x=\omega _{k}}\) for all \(s>k\) and \(\omega _{s}\ge \omega _{k}\), then the result follows along the same lines as the proof of Theorem 1 by taking \(\omega _{s}^{o}=E\left[ \upsilon \left( \theta \right) \right] ,s\in \left[ n\right] \), and noting that \(\omega \succ _{g}\omega ^{o}\). Finally, \(\left\{ g,\omega ^{o}\right\} \in {\mathcal {G}}_{n}\), since it can be generated by \(\sigma ^{o}\in \Pi _{n}\), such that \(\sigma _{s}^{o}\left( \theta \right) =g_{s},s\in \left[ n\right] \). \(\square \)

Proof of Theorem 2

Consider a Bayesian persuasion mechanism with the ex-ante payoff to the sender of the form

$$\begin{aligned} EV\left( g,\omega \right) =\sum \limits _{s=1}^{n}g_{s}\left( \alpha _{s}\left( g\right) \omega _{s}+\beta _{s}\left( g\right) \right) , \end{aligned}$$

and suppose that \(\alpha _{s}\) is not weakly increasing in s. Given any signal structure \(\sigma \in \Pi _{n}\), we construct a redundant monotone partitional \(\sigma ^{o}\in \Pi _{n}\) that generates the same marginal distribution of signals, i.e., \(g^{o}=g\), and provides a weakly higher ex-ante payoff to the sender, \(EV\left( g,\omega ^{o}\right) \ge EV\left( g,\omega \right) \). The new signal structure \(\sigma ^{o}\) is derived from \(\sigma \) via a finite iterated sequence \(\left\{ \sigma ^{t}\right\} _{t=1}^{T}\), where \(T\le n\), such that each \(\sigma ^{t}\) improves the sender's ex-ante payoff relative to \(\sigma ^{t-1}\).

We start the proof by noting first that \(EV\left( g,\omega \right) \) represents the sender’s ex-ante payoff in the hypothetical Bayesian persuasion mechanism such that the receiver’s action rule is \(a=\left\{ \alpha _{s}\left( g\right) ,\beta _{s}\left( g\right) \right\} \); the sender’s payoff function is \(U\left( a,\theta \right) =\alpha _{s}\left( g\right) \upsilon \left( \theta \right) +\beta _{s}\left( g\right) \); and the agent’s payoff function is \(U^{M}\left( a,\theta \right) =0\) for all \(\left( a,\theta \right) \).

Next, consider the following finite iterative sequence of hypothetical mechanisms. Each iteration \(t=1,\ldots ,T\) consists of two hypothetical mechanisms: one with the initial signal structure \(\sigma \) and a modified action rule \(\alpha _{s}^{t}\left( g\right) \), and the other with the initial \(\alpha _{s}\left( g\right) \) and a modified \(\sigma ^{t}\). Along this sequence, \(\beta _{s}\left( \cdot \right) \) and the marginal distribution of signals \(g^{t}\) remain unchanged, i.e., \(g^{t}=g\) for all t. Because of that, without loss of generality we put \(\beta _{s}\left( g\right) =0\) for all \(s\in \left[ n\right] \). We then call \(\left\{ \alpha _{s}\left( g\right) \right\} _{s=1}^{n}\) the action rule and denote it \(\alpha _{s}\) for notational simplicity.

Put \(\sigma ^{0}=\sigma \) and \(\omega ^{0}=\omega \). Also, denote by \(\alpha _{s}^{0}=\alpha _{s}\) the initial action rule and by \(\alpha _{s}^{t},t>0\), the auxiliary “ironing” action rules defined below. Denote by \({\mathcal {S}}_{\alpha }=\left\{ s:\alpha _{s}=\alpha \right\} \subset \left[ n\right] \) the subset of signals that induce an action \(\alpha \) (initial or ironing, depending on the mechanism). Thus, the probability of inducing an action \(\alpha \) is \(g_{\alpha }=\sum \nolimits _{s\in {\mathcal {S}}_{\alpha }}g_{s}\). In the t-mechanism with a modified action rule \(\alpha ^{t}\), where \(t=0,\ldots ,n-1\), let \(k_{t}\in \left[ n\right] \backslash \left\{ 1\right\} \) be such that \(\alpha _{k_{t}}^{t}<\alpha _{k_{t}-1}^{t}\). Consider a \(\left( t+1\right) \)-mechanism with the signal structure \(\sigma ^{t+1}\), which is derived from \(\sigma ^{t}\) and \(\alpha ^{t}\) as follows:

$$\begin{aligned} \sigma _{s}^{t+1}\left( \theta \right) =\left\{ \begin{array} [c]{ll} \sigma _{s}^{t}\left( \theta \right) &{} \quad \text { if }s\notin {\mathcal {S}} _{\alpha _{k_{t}-1}^{t}}\cup {\mathcal {S}}_{\alpha _{k_{t}}^{t}}\text {, and}\\ \frac{g_{s}}{g_{\alpha _{k_{t}-1}^{t}}+g_{\alpha _{k_{t}}^{t}}}\sum \limits _{j\in {\mathcal {S}}_{\alpha _{k_{t}-1}^{t}}\cup {\mathcal {S}}_{\alpha _{k_{t} }^{t}}}\sigma _{j}^{t}\left( \theta \right) &{} \quad \text { if }s\in {\mathcal {S}} _{\alpha _{k_{t}-1}^{t}}\cup {\mathcal {S}}_{\alpha _{k_{t}}^{t}}. \end{array} \right. \end{aligned}$$

That is, \(\sigma ^{t+1}=\left\{ \sigma _{s}^{t+1}\left( \theta \right) \right\} _{s=1}^{n}\) is identical to \(\sigma ^{t}\) for signals \(s\notin {\mathcal {S}}_{\alpha _{k_{t}-1}^{t}}\cup {\mathcal {S}}_{\alpha _{k_{t}}^{t}}\), but randomizes among signals \(s\in {\mathcal {S}}_{\alpha _{k_{t}-1}^{t}}\cup {\mathcal {S}}_{\alpha _{k_{t}}^{t}}\) with probabilities \(g_{s}^{t+1}=g_{s}^{t}=g_{s}\). Thus, \(\sigma ^{t+1}\) generates a sequence of n posterior means \(\left\{ \omega _{s}^{t+1}\right\} _{s=1}^{n}\), such that \(\omega _{s}^{t+1}=\omega _{s}^{t}\) if \(s\notin {\mathcal {S}}_{\alpha _{k_{t}-1}^{t}}\cup {\mathcal {S}}_{\alpha _{k_{t}}^{t}}\). If \(s\in {\mathcal {S}}_{\alpha _{k_{t}-1}^{t}}\cup {\mathcal {S}}_{\alpha _{k_{t}}^{t}}\), then

$$\begin{aligned} \omega _{s}^{t+1}&=E\left[ \omega _{s}|s\in {\mathcal {S}}_{\alpha _{k_{t} -1}^{t}}\cup {\mathcal {S}}_{\alpha _{k_{t}}^{t}}\right] =E\left[ \omega _{s} ^{t}|s\in {\mathcal {S}}_{\alpha _{k_{t}-1}^{t}}\cup {\mathcal {S}}_{\alpha _{k_{t}} ^{t}}\right] \\&=\frac{1}{\sum \limits _{j\in {\mathcal {S}}_{\alpha _{k_{t}-1}^{t}} \cup {\mathcal {S}}_{\alpha _{k_{t}}^{t}}}g_{j}}\sum \limits _{j\in {\mathcal {S}} _{\alpha _{k_{t}-1}^{t}}\cup {\mathcal {S}}_{\alpha _{k_{t}}^{t}}}g_{j}\omega _{j}=\frac{g_{\alpha _{k_{t}-1}^{t}}}{g_{\alpha _{k_{t}-1}^{t}}+g_{\alpha _{k_{t}}^{t}}}\omega _{k_{t}-1}^{t}+\frac{g_{\alpha _{k_{t}}^{t}}}{g_{\alpha _{k_{t}-1}^{t}}+g_{\alpha _{k_{t}}^{t}}}\omega _{k_{t}}^{t}. \end{aligned}$$

By construction, \(\left\{ g,\omega ^{t+1}\right\} \in {\mathcal {G}}_{n}^{+}\). Next, consider the \(\left( t+1\right) \)-mechanism with a modified “ironing” action rule \(\left\{ \alpha _{s}^{t+1}\right\} _{s=1}^{n}\ \) defined as follows. First, \(\alpha _{s} ^{t+1}=\alpha _{s}^{t}\ \) if \(s\notin {\mathcal {S}}_{\alpha _{k_{t}-1}^{t}} \cup {\mathcal {S}}_{\alpha _{k_{t}}^{t}}\). If \(s\in {\mathcal {S}}_{\alpha _{k_{t} -1}^{t}}\cup {\mathcal {S}}_{\alpha _{k_{t}}^{t}}\), then

$$\begin{aligned} \alpha _{s}^{t+1}&=E\left[ \alpha _{s}|s\in {\mathcal {S}}_{\alpha _{k_{t} -1}^{t}}\cup {\mathcal {S}}_{\alpha _{k_{t}}^{t}}\right] =E\left[ \alpha _{s} ^{t}|s\in {\mathcal {S}}_{\alpha _{k_{t}-1}^{t}}\cup {\mathcal {S}}_{\alpha _{k_{t}} ^{t}}\right] \\&=\frac{1}{\sum \limits _{j\in {\mathcal {S}}_{\alpha _{k_{t}-1}^{t}} \cup {\mathcal {S}}_{\alpha _{k_{t}}^{t}}}g_{j}}\sum \limits _{j\in {\mathcal {S}} _{\alpha _{k_{t}-1}^{t}}\cup {\mathcal {S}}_{\alpha _{k_{t}}^{t}}}g_{j}\alpha _{j}=\frac{g_{\alpha _{k_{t}-1}^{t}}}{g_{\alpha _{k_{t}-1}^{t}}+g_{\alpha _{k_{t}}^{t}}}\alpha _{k_{t}-1}^{t}+\frac{g_{\alpha _{k_{t}}^{t}}}{g_{\alpha _{k_{t}-1}^{t}}+g_{\alpha _{k_{t}}^{t}}}\alpha _{k_{t}}^{t}. \end{aligned}$$

Denote \(EV\left( g,\omega ^{t}\right) \) the sender’s ex-ante payoff in the t-mechanism with the signal structure \(\sigma ^{t}\) and the initial action rule \(\left\{ \alpha _{s}\right\} _{s=1}^{n}\). Then, the marginal ex-ante payoff to the sender in \(\left( t+1\right) \)-mechanism, \(\Delta EV^{t}=EV\left( g,\omega ^{t+1}\right) -EV\left( g,\omega ^{t}\right) \), is determined as

$$\begin{aligned} \Delta EV^{t}= & {} \sum \limits _{s\in {\mathcal {S}}_{\alpha _{k_{t}-1}^{t}}\cup {\mathcal {S}}_{\alpha _{k_{t}}^{t}}}g_{s}\alpha _{s}\left( \omega _{s}^{t+1}-\omega _{s}^{t}\right) =\sum \limits _{s\in {\mathcal {S}}_{\alpha _{k_{t}-1}^{t}}\cup {\mathcal {S}}_{\alpha _{k_{t}}^{t}}}g_{s}\alpha _{s}E\left[ \omega _{s}|s\in {\mathcal {S}}_{\alpha _{k_{t}-1}^{t}}\cup {\mathcal {S}}_{\alpha _{k_{t}}^{t}}\right] \\&-\left( \sum \limits _{s\in {\mathcal {S}}_{\alpha _{k_{t}-1}^{t}}}g_{s}\alpha _{s}E\left[ \omega _{s}|s\in {\mathcal {S}}_{\alpha _{k_{t}-1}^{t}}\right] +\sum \limits _{s\in {\mathcal {S}}_{\alpha _{k_{t}}^{t}}}g_{s}\alpha _{s}E\left[ \omega _{s}|s\in {\mathcal {S}}_{\alpha _{k_{t}}^{t}}\right] \right) \\= & {} \left( \alpha _{k_{t}-1}^{t}g_{\alpha _{k_{t}-1}^{t}}+\alpha _{k_{t}}^{t}g_{\alpha _{k_{t}}^{t}}\right) E\left[ \omega _{s}|s\in {\mathcal {S}}_{\alpha _{k_{t}-1}^{t}}\cup {\mathcal {S}}_{\alpha _{k_{t}}^{t}}\right] \\&-\left( \alpha _{k_{t}-1}^{t}g_{\alpha _{k_{t}-1}^{t}}\omega _{k_{t}-1}^{t}+\alpha _{k_{t}}^{t}g_{\alpha _{k_{t}}^{t}}\omega _{k_{t}}^{t}\right) \\= & {} \sum \limits _{l\in \left\{ k_{t}-1,k_{t}\right\} }g_{\alpha _{l}^{t}}V^{t}\left( E\left[ \omega _{l}^{t}|l\in \left\{ k_{t}-1,k_{t}\right\} \right] ,l\right) -\sum \limits _{l\in \left\{ k_{t}-1,k_{t}\right\} }g_{\alpha _{l}^{t}}V^{t}\left( \omega _{l}^{t},l\right) , \end{aligned}$$

where \(V^{t}\left( x,l\right) =\alpha _{l}^{t}x\). By construction, \(\omega ^{t}\succ _{g}\omega ^{t+1}\). Also, since \(\alpha _{k_{t}}^{t}<\alpha _{k_{t}-1}^{t}\), Condition 1 is reversed for \(k=k_{t}-1,s=k_{t}\), and \(\omega _{k_{t}}\ge \omega _{k_{t}-1}\). Therefore, \(\Delta EV^{t}\ge 0\) by Corollary 1.
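
The iteration above can be condensed into a single pool-adjacent-violators pass. The following sketch (our code, with illustrative names) pools adjacent signals whenever the action rule decreases, replacing both \(\alpha \) and \(\omega \) with their g-weighted averages:

```python
def iron(g, alpha, omega):
    """Pool adjacent signals while alpha decreases; each pooled block keeps
    its total probability and the g-weighted averages of alpha and omega."""
    out = []
    for block in zip(g, alpha, omega):
        out.append(block)
        # Merge backwards while the ironed action rule still decreases.
        while len(out) > 1 and out[-1][1] < out[-2][1]:
            (g2, a2, w2), (g1, a1, w1) = out.pop(), out.pop()
            gp = g1 + g2
            out.append((gp, (g1*a1 + g2*a2) / gp, (g1*w1 + g2*w2) / gp))
    return out

print(iron([1/3, 1/3, 1/3], [2.0, 1.0, 3.0], [0.2, 0.5, 0.8]))
# approximately [(0.67, 1.5, 0.35), (0.33, 3.0, 0.8)]
```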

Because n is finite, after \(T\le n-1\) iterations of the original mechanism we obtain the signal structure \(\sigma ^{T}\) and the weakly increasing ironing action rule \(\left\{ \alpha _{s}^{T}\right\} _{s=1}^{n}\), such that \(EV\left( g,\omega ^{T}\right) \ge EV\left( g,\omega \right) \), where \(EV\left( g,\omega ^{T}\right) \) is the sender's ex-ante payoff in the mechanism with the signal structure \(\sigma ^{T}\) and the initial action rule \(\left\{ \alpha _{s}\right\} _{s=1}^{n}\).

Let \(\left\{ \alpha _{s_{k}}^{T}\right\} _{k=1}^{n^{*}}\) be a strictly increasing subsequence of \(\left\{ \alpha _{s}^{T}\right\} _{s=1}^{n}\), which includes all values of \(\left\{ \alpha _{s}^{T}\right\} _{s=1}^{n}\) (see footnote 38). Since \(\omega _{s}^{T}=\omega _{s_{k}}^{T}\) for all \(s\in {\mathcal {S}}_{\alpha _{s_{k}}^{T}}\), \(\left\{ \omega _{s_{k}}^{T}\right\} _{k=1}^{n^{*}}\) is a strictly increasing subsequence of \(\left\{ \omega _{s}^{T}\right\} _{s=1}^{n}\) which includes all values of \(\left\{ \omega _{s}^{T}\right\} _{s=1}^{n}\). Hence, the probability of inducing an ironing action \(\alpha _{s_{k}}^{T}\) or, equivalently, generating a mean \(\omega _{s_{k}}^{T}\) is \(g_{k}^{*}=g_{\alpha _{s_{k}}^{T}}=\sum \limits _{s\in {\mathcal {S}}_{\alpha _{s_{k}}^{T}}}g_{s},k\in \left[ {n^{*}}\right] \). Then, the sender's ex-ante payoff in the T-mechanism is given by

$$\begin{aligned} EV\left( g,\omega ^{T}\right) =\sum \limits _{s=1}^{n}g_{s}\alpha _{s}\omega _{s}^{T}=\sum \limits _{k=1}^{n^{*}}g_{k}^{*}\sum \limits _{s\in {\mathcal {S}}_{\alpha _{s_{k}}^{T}}}\frac{g_{s}}{g_{k}^{*}}\alpha _{s} \omega _{s_{k}}^{T}=\sum \limits _{k=1}^{n^{*}}g_{k}^{*}\alpha _{s_{k}} ^{T}\omega _{s_{k}}^{T}. \end{aligned}$$

Now, consider a redundant monotone partitional \(\sigma ^{*}\in \Pi _{n}\) defined as follows. First, partition \(\Theta \) into \(n^{*}\) intervals \(\left\{ \Theta _{k}\right\} _{k=1}^{n^{*}}\), such that \(g_{k}^{*}=\Pr \left\{ \theta \in \Theta _{k}\right\} \). Then, for each \(k\in \left[ n^{*}\right] \), \(\sigma ^{*}\) generates \(s\in {\mathcal {S}}_{\alpha _{s_{k}}^{T}}\) with probability \(\frac{g_{s}}{g_{k}^{*}}\). Thus, each \(s\in {\mathcal {S}}_{\alpha _{s_{k}}^{T}}\) induces \(\omega _{s}^{*}=\omega _{s_{k}}^{*}\) with probability \(g_{s}\). Then, the sender's ex-ante payoff in the mechanism with the signal structure \(\sigma ^{*}\) and the initial action rule \(\alpha _{s}\) is

$$\begin{aligned} EV\left( g,\omega ^{*}\right)&=\sum \limits _{s=1}^{n}g_{s}\alpha _{s}\omega _{s}^{*}=\sum \limits _{k=1}^{n^{*}}g_{k}^{*}\sum \limits _{s\in {\mathcal {S}}_{\alpha _{s_{k}}^{T}}}\frac{g_{s}}{g_{k}^{*}} \alpha _{s}\omega _{s_{k}}^{*}\\&=\sum \limits _{k=1}^{n^{*}}g_{k}^{*}\omega _{s_{k}}^{*} \sum \limits _{s\in {\mathcal {S}}_{\alpha _{s_{k}}^{T}}}\frac{g_{s}}{g_{k}^{*} }\alpha _{s}=\sum \limits _{k=1}^{n^{*}}g_{k}^{*}\alpha _{s_{k}}^{T} \omega _{s_{k}}^{*}. \end{aligned}$$

Since \(\left\{ g_{k}^{*},\omega _{k}^{*}\right\} _{k=1}^{n^{*}}\) is monotone partitional, we have \(\left\{ \omega _{s_{k}}^{*}\right\} _{k=1}^{n^{*}}\succ _{g^{*}}\left\{ \omega _{s_{k}}^{T}\right\} _{k=1}^{n^{*}}\) by Lemma 1. Also, since \(\left\{ \alpha _{s_{k}} ^{T}\right\} _{k=1}^{n^{*}}\) is strictly increasing, then Condition 1 holds. Therefore, Theorem 1 implies that \(EV\left( g,\omega ^{*}\right) \ge EV\left( g,\omega ^{T}\right) \). \(\square \)

Proof of Theorem 3

Consider a mechanism \({\mathcal {M}}\) with an interim incentive-compatible and individually rational receiver \({\mathcal {R}} =\left\{ {\vec {q}}\left( {\vec {\omega }}_{{\vec {s}}}\right) ,{\vec {t}}\left( {\vec {\omega }}_{{\vec {s}}}\right) \right\} =\left\{ q_{i}\left( {\vec {\omega }}_{{\vec {s}}}\right) ,t_{i}\left( {\vec {\omega }}_{{\vec {s}}}\right) \right\} _{i\in {\mathcal {N}}}\) and a profile of distributions \(\left\{ {\vec {g}} ,\vec {\omega }^{*}\right\} \in {\mathcal {G}}_{\left[ {\vec {n}}\right] } =\times _{i\in {\mathcal {N}}}{\mathcal {G}}_{n_{i}}\). Then, the interim payoff of buyer i upon observing signal \(s_{i}\) and reporting \(z_{i}\) in the mechanism \({\mathcal {M}}\) is

$$\begin{aligned} U_{s_{i},z_{i}}^{i}\left( g^{-i},{\vec {\omega }}^{*}\right) =Q_{z_{i}} ^{i}\left( g^{-i},{\vec {\omega }}^{*}\right) \omega _{s_{i}}^{*i}-T_{z_{i}}^{i}\left( g^{-i},{\vec {\omega }}^{*}\right) . \end{aligned}$$

By Bergemann and Pesendorfer (2007), the interim payoff of buyer i with a signal \(s_{i}\) in a mechanism \({\mathcal {M}}\) with an incentive-compatible and individually rational receiver \({\mathcal {R}}\) and a profile of distributions \(\left\{ {\vec {g}},{\vec {\omega }}\right\} \in {\mathcal {G}}_{\left[ {\vec {n}}\right] }\) can be expressed as

$$\begin{aligned} U_{s_{i}}^{i}\left( g^{-i},{\vec {\omega }}\right)&=Q_{s_{i}}^{i}\left( g^{-i},{\vec {\omega }}\right) \omega _{s_{i}}^{i}-T_{s_{i}}^{i}\left( g^{-i},{\vec {\omega }}\right) \nonumber \\&=U_{s_{i}-1}^{i}\left( g^{-i},\omega _{i}\right) +{\tilde{Q}}_{s_{i}} ^{i}\left( g^{-i},{\vec {\omega }}\right) \left( \omega _{s_{i}}-\omega _{s_{i}-1}\right) , \end{aligned}$$
(55)

where

$$\begin{aligned} {\tilde{Q}}_{s_{i}}^{i}\left( g^{-i},{\vec {\omega }}\right) \in \left[ Q_{s_{i}-1}^{i}\left( g^{-i},{\vec {\omega }}\right) ,Q_{s_{i}}^{i}\left( g^{-i},{\vec {\omega }}\right) \right] . \end{aligned}$$
(56)

Also, the expected payment to the seller from buyer i with a signal \(s_{i}\) in a mechanism \({\mathcal {M}}\) with a profile of distributions \(\left\{ {\vec {g}},{\vec {\omega }}^{*}\right\} \) is

$$\begin{aligned} T_{s_{i}}^{i}\left( g^{-i},{\vec {\omega }}^{*}\right)&=T_{s_{i}-1} ^{i}\left( g^{-i},{\vec {\omega }}^{*}\right) +Q_{s_{i}}^{i}\left( g^{-i},{\vec {\omega }}^{*}\right) \omega _{s_{i}}^{*i}\nonumber \\&\quad -Q_{s_{i}-1}^{i}\left( g^{-i},{\vec {\omega }}^{*}\right) \omega _{s_{i} -1}^{*i}-{\tilde{Q}}_{s_{i}}^{i}\left( g^{-i},{\vec {\omega }}^{*}\right) \left( \omega _{s_{i}}^{*i}-\omega _{s_{i}-1}^{*i}\right) . \end{aligned}$$
(57)

Given an initial mechanism \({\mathcal {M}}\) with a profile of sequences \({\vec {\omega }}^{*}\in \Omega _{{\vec {n}}}\), consider a reduced receiver \({\mathcal {R}}^{*}=\left\{ {\vec {q}}^{*}\left( {\vec {s}}\right) ,{\vec {t}}^{*}\left( {\vec {s}},\omega ^{i}\right) \right\} \) such that

$$\begin{aligned} {\vec {q}}^{*}\left( {\vec {s}}\right) ={\vec {q}}\left( {\vec {\omega }}_{{\vec {s}} }^{*}\right) \text { for all }{\vec {s}}\in \left[ {\vec {n}}\right] \text {.} \end{aligned}$$
(58)

That is, \({\vec {q}}^{*}\left( {\vec {s}}\right) \) replicates the allocation rule \({\vec {q}}\left( {\vec {\omega }}_{{\vec {s}}}\right) \) for the profile of sequences \({\vec {\omega }}^{*}\). This implies

$$\begin{aligned} Q_{s_{i}}^{*,i}\left( g^{-i}\right) & =E_{s_{-i}}\left[ q_{i}^{*}\left( {\vec {s}}\right) \right] =E_{s_{-i}}\left[ q_{i}\left( {\vec {\omega }}_{{\vec {s}}}^{*}\right) \right] \\ & =Q_{s_{i}}^{i}\left( g^{-i},{\vec {\omega }}^{*}\right) \text { for all }s_{i}\in \left[ n_{i}\right] \text { and }g^{-i}\in \Delta ^{-i}. \end{aligned}$$

For an arbitrary \(\left\{ {\vec {g}},{\vec {\omega }}\right\} \in {\mathcal {G}}_{\left[ {\vec {n}}\right] }\), consider a payment rule \({\vec {t}}^{*}\left( {\vec {s}},\omega ^{i}\right) \), such that the expected payment \(T_{s_{i}}^{*,i}\left( g^{-i},\omega ^{i}\right) =E_{s_{-i}}\left[ t_{i}^{*}\left( {\vec {s}},\omega ^{i}\right) \right] \) takes the form (see footnote 39)

$$\begin{aligned} T_{s_{i}}^{*,i}\left( g^{-i},\omega ^{i}\right)&=T_{s_{i}-1}^{*,i}\left( g^{-i},\omega ^{i}\right) +Q_{s_{i}}^{*,i}\left( g^{-i}\right) \omega _{s_{i}}^{i}\nonumber \\&\quad -Q_{s_{i}-1}^{*,i}\left( g^{-i}\right) \omega _{s_{i}-1}^{i}-{\tilde{Q}}_{s_{i}}^{*,i}\left( g^{-i}\right) \left( \omega _{s_{i}}^{i} -\omega _{s_{i}-1}^{i}\right) , \end{aligned}$$
(59)

where

$$\begin{aligned} {\tilde{Q}}_{s_{i}}^{*,i}\left( g^{-i}\right) ={\tilde{Q}}_{s_{i}}^{i}\left( g^{-i},{\vec {\omega }}^{*}\right) . \end{aligned}$$
(60)

Then, (59) is equivalent to

$$\begin{aligned}&T_{s_{i}}^{*,i}\left( g^{-i},\omega ^{i}\right) =\left( {\tilde{Q}} _{s_{i}+1}^{*,i}\left( g^{-i}\right) -Q_{s_{i}}^{*,i}\left( g^{-i}\right) \right) \omega _{s_{i}}\\&\quad +\sum \limits _{k_{i}=1}^{s_{i}}\left( {\tilde{Q}}_{k_{i}+1}^{*,i}\left( g^{-i}\right) -{\tilde{Q}}_{k_{i}}^{*,i}\left( g^{-i}\right) \right) \omega _{k_{i}}. \end{aligned}$$

This results in the buyer’s interim payoff in a mechanism \({\mathcal {M}}^{*}=\left\{ {\vec {g}},{\vec {\omega }},{\mathcal {R}}^{*}\right\} \) with a reduced receiver \({\mathcal {R}}^{*}\) and a profile of distributions \(\left\{ {\vec {g}},{\vec {\omega }}\right\} \in {\mathcal {G}}_{\left[ {\vec {n}}\right] }\):

$$\begin{aligned} U_{s_{i}}^{*,i}\left( g^{-i},\omega ^{i}\right)&=Q_{s_{i}}^{*,i}\left( g^{-i}\right) \omega _{s_{i}}^{i}-T_{s_{i}}^{*,i}\left( g^{-i},\omega ^{i}\right) \nonumber \\&=U_{s_{i}-1}^{*,i}\left( g^{-i},\omega ^{i}\right) +{\tilde{Q}}_{s_{i} }^{*,i}\left( g^{-i}\right) \left( \omega _{s_{i}}^{i}-\omega _{s_{i} -1}^{i}\right) , \end{aligned}$$
(61)

where \({\tilde{Q}}_{s_{i}}^{*,i}\left( g^{-i}\right) \in \left[ Q_{s_{i} -1}^{*,i}\left( g^{-i}\right) ,Q_{s_{i}}^{*,i}\left( g^{-i}\right) \right] \) by (56) and (60). Since (61) is a special case of (55) under the condition (60), the reduced receiver \({\mathcal {R}}^{*}\) is interim incentive compatible and individually rational. Moreover, (57) and (59) imply

$$\begin{aligned} T_{s_{i}}^{*,i}\left( g^{-i},\omega ^{*,i}\right) =T_{s_{i}}^{i}\left( g^{-i},{\vec {\omega }}^{*}\right) \text { for all }i\in {\mathcal {N}}\text {,} \end{aligned}$$

that is, the ex-ante payments of a single buyer in the mechanisms \({\mathcal {M}}\) and \({\mathcal {M}}^{*}\) with the profile of distributions \(\left\{ {\vec {g}},{\vec {\omega }}^{*}\right\} \) are equal:

$$\begin{aligned} EV_{i}^{{\mathcal {M}}^{*}}=\sum \limits _{s_{i}=1}^{n_{i}}g_{s_{i}}^{i} T_{s_{i}}^{*,i}\left( g^{-i},\omega ^{*,i}\right) =\sum \limits _{s_{i} =1}^{n_{i}}g_{s_{i}}^{i}T_{s_{i}}^{i}\left( g^{-i},{\vec {\omega }}^{*}\right) =EV_{i}^{{\mathcal {M}}}. \end{aligned}$$
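
For intuition about the telescoped payments in (55), (57), and (59), here is a small single-buyer sketch (our code), taking \({\tilde{Q}}_{s}=Q_{s-1}\), one admissible choice within the bounds (56), and normalizing the lowest signal's interim payoff to zero:

```python
# Payments implementing (55) with Q-tilde_s = Q_{s-1}: the buyer's interim
# payoff U_s telescopes, and T_s = Q_s * omega_s - U_s.
def payments(Q, omega):
    T, U = [], 0.0                      # normalization: U = 0 at the bottom
    Q_prev, w_prev = 0.0, 0.0
    for Qs, ws in zip(Q, omega):
        U += Q_prev * (ws - w_prev)     # U_s = U_{s-1} + Q_{s-1}(w_s - w_{s-1})
        T.append(Qs * ws - U)
        Q_prev, w_prev = Qs, ws
    return T

# Increasing interim allocation probabilities and posterior means.
print(payments([0.2, 0.5, 0.9], [0.1, 0.4, 0.8]))  # [0.02, 0.14, 0.46]
```

With this choice, the local downward incentive constraints bind, so reporting the adjacent lower signal leaves the buyer exactly indifferent.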

By (26) and (27), the ex-ante payment \(EV_{i}\) of buyer i in a mechanism with the profile of distributions \(\left\{ {\vec {g}},{\vec {\omega }}\right\} \in {\mathcal {G}}_{\left[ {\vec {n}}\right] }\) and the reduced receiver \({\mathcal {R}}^{*}\) can be expressed as

$$\begin{aligned} EV_{i}=\sum \limits _{s_{i}=1}^{n_{i}}g_{s_{i}}^{i}V_{s_{i}}^{i}\left( g^{i},\omega _{s_{i}}^{i}\right) =\sum \limits _{s_{i}=1}^{n_{i}}g_{s_{i}} ^{i}\alpha _{s_{i}}^{i}\left( {\vec {g}}\right) \omega _{s_{i}}^{i}. \end{aligned}$$
(62)

Here, \(V_{s_{i}}^{i}\left( {\vec {g}},\omega _{s_{i}}^{i}\right) \) is the posterior payoff function:

$$\begin{aligned} V_{s_{i}}^{i}\left( {\vec {g}},\omega _{s_{i}}\right) =\alpha _{s_{i}}^{i}\left( {\vec {g}}\right) \omega _{s_{i}}^{i}, \end{aligned}$$

such that

$$\begin{aligned} \alpha _{s_{i}}^{i}\left( {\vec {g}}\right) ={\tilde{Q}}_{s_{i}+1}^{*,i}\left( g^{-i}\right) -Q_{s_{i}}^{*,i}\left( g^{-i}\right) +\Delta \tilde{Q}_{s_{i}}^{*,i}\left( g^{-i}\right) \left( 1+\frac{1}{\lambda _{s_{i} }\left( g^{i}\right) }\right) , \end{aligned}$$

where \(\Delta {\tilde{Q}}_{s_{i}}^{*,i}\left( g^{-i}\right) =\tilde{Q}_{s_{i}+1}^{*,i}\left( g^{-i}\right) -{\tilde{Q}}_{s_{i}}^{*,i}\left( g^{-i}\right) ,G_{s_{i}}^{i}=\sum \limits _{k_{i}=1}^{s_{i}} g_{k_{i}}^{i}\), and \(\lambda _{s_{i}}\left( g^{i}\right) =\frac{g_{s_{i}} ^{i}}{1-G_{s_{i}}^{i}}\).

Since \(V_{s_{i}}^{i}\left( {\vec {g}},\omega _{s_{i}}\right) \) takes the form (38) for all \(i\in {\mathcal {N}}\), and ex-ante payments \(EV_{i}\) of form (62) are independent across \(i\in {\mathcal {N}}\) for all profiles of distributions \(\left\{ {\vec {g}},\vec {\omega }\right\} \in {\mathcal {G}}_{\left[ {\vec {n}}\right] }\) with a fixed \({\vec {g}}\), Theorem 2 implies that there exists a profile of redundant monotone partitional distributions \(\left\{ {\vec {g}},\vec {\omega }^{o}\right\} \in {\mathcal {G}}_{\left[ {\vec {n}}\right] }\), such that the ex-ante payment of each buyer \(i\in {\mathcal {N}}\) in the mechanism \({\mathcal {M}}^{o}=\left\{ {\vec {g}},{\vec {\omega }}^{o},{\mathcal {R}}^{*}\right\} \) is higher than that in the mechanism \({\mathcal {M}}^{*}=\left\{ {\vec {g}},{\vec {\omega }}^{*},{\mathcal {R}}^{*}\right\} \):

$$\begin{aligned} EV_{i}^{{\mathcal {M}}^{o}}=\sum \limits _{s_{i}=1}^{n_{i}}g_{s_{i}}^{i} \alpha _{s_{i}}^{i}\left( {\vec {g}}\right) \omega _{s_{i}}^{o,i}\ge \sum \limits _{s_{i}=1}^{n_{i}}g_{s_{i}}^{i}\alpha _{s_{i}}^{i}\left( {\vec {g}}\right) \omega _{s_{i}}^{*,i}=EV_{i}^{{\mathcal {M}}^{*}}\text { for all }i\in {\mathcal {N}}\text {.} \end{aligned}$$

We complete the proof by showing the equivalence of distributions over interim allocation profiles in all three mechanisms, \({\mathcal {D}}^{{\mathcal {M}} }={\mathcal {D}}^{{\mathcal {M}}^{*}}={\mathcal {D}}^{{\mathcal {M}}^{o}}\). First, note that the distributions of signals \({\vec {g}}\) in mechanisms \({\mathcal {M}}\), \({\mathcal {M}}^{*}\), and \({\mathcal {M}}^{o}\) are identical. In addition, mechanism \({\mathcal {M}}^{*}\) replicates the interim allocation of mechanism \({\mathcal {M}}\) for the distribution \(\left\{ {\vec {g}},{\vec {\omega }}^{*}\right\} \) by (58). Finally, the distributions of interim allocations in mechanisms \({\mathcal {M}}^{*}\) and \({\mathcal {M}}^{o}\) are identical, since these mechanisms include the same receiver \({\mathcal {R}} ^{*}\), which allocates the product on the basis of signals only. Together, these arguments imply \({\mathcal {D}}^{{\mathcal {M}}^{o}}={\mathcal {D}} ^{{\mathcal {M}}^{*}}={\mathcal {D}}^{{\mathcal {M}}}\). Formally, the probability \(g_{q}^{{\mathcal {M}}}\) of inducing an allocation profile \(q\in {\mathcal {Q}} ^{{\mathcal {M}}}\) in the mechanism \({\mathcal {M}}\) is given by

$$\begin{aligned} g_{q}^{{\mathcal {M}}}=\sum \limits _{{\vec {s}}\in \left[ n\right] :{\vec {q}}\left( {\vec {\omega }}_{{\vec {s}}}^{*}\right) =q}\prod \limits _{i\in {\mathcal {N}} }g_{s_{i}}^{i}=\sum \limits _{{\vec {s}}\in \left[ n\right] :{\vec {q}}^{*}\left( {\vec {s}}\right) =q}\prod \limits _{i\in {\mathcal {N}}}g_{s_{i}}^{i}=g_{q} ^{{\mathcal {M}}^{*}}=g_{q}^{{\mathcal {M}}^{o}}. \end{aligned}$$

\(\square \)

Proof of Theorem 4

We start the proof by showing the optimality of monotone partitional signal structures. First, it follows from (42) and (44) that \(V_{s}^{1}\left( \omega _{s},g\right) \) and \(V_{s}^{2}\left( \omega _{s},g\right) \) are of the form (38). Thus, given the maximum number of signals \(n<\infty \), Theorem 2 implies that there are optimal redundant monotone partitional distributions \(\left\{ {\bar{g}}^{r},{\bar{\omega }}^{r}\right\} \in {\mathcal {G}}_{n},r=1,2\). Denote by \({\bar{\sigma }}^{r}\in \Pi _{n}\) the signal structure that generates \(\left\{ {\bar{g}}^{r},{\bar{\omega }}^{r}\right\} \).

Second, the allocation and payments in the auction depend only on the values of posterior means rather than on the signals that induce them. Thus, if different signals in a distribution \(\left\{ {\bar{g}}^{r},{\bar{\omega }}^{r}\right\} \in {\mathcal {G}}_{n}\) induce identical posterior means, then \({\mathcal {V}}_{r:N}\left( {\bar{g}}^{r},{\bar{\omega }}^{r}\right) ={\mathcal {V}}_{r:N}\left( g^{r},\omega ^{r}\right) \), where \(\left\{ g^{r},\omega ^{r}\right\} \in {\mathcal {G}}_{m_{r}}^{+}\) is derived from \(\left\{ {\bar{g}}^{r},{\bar{\omega }}^{r}\right\} \) by collapsing all signals that induce identical posterior means and relabeling signals in such a way that the new signal space is \(\left[ m_{r}\right] \). This implies that \(\left\{ g^{r},\omega ^{r}\right\} \) is also optimal. (A sketch of this collapsing step follows.)
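
A minimal sketch of the collapsing step (our code, assuming \(\omega \) is weakly increasing so that equal means are adjacent):

```python
def collapse(g, omega, tol=1e-12):
    """Merge adjacent signals that induce the same posterior mean, adding
    their probabilities; allocation and payments are unaffected."""
    out = []
    for gs, ws in zip(g, omega):
        if out and abs(out[-1][1] - ws) < tol:
            out[-1][0] += gs           # same mean: absorb the probability
        else:
            out.append([gs, ws])
    return [b[0] for b in out], [b[1] for b in out]

print(collapse([0.2, 0.3, 0.5], [0.1, 0.1, 0.7]))  # ([0.5, 0.5], [0.1, 0.7])
```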

  (i)

    Consider any \(n>1\) and \(\left\{ g,\omega \right\} \in {\mathcal {G}}_{n}\). Since \(\upsilon \left( 0\right) \ge 0\), \(\upsilon \left( \theta \right) \) is strictly increasing, and \(g_{s}>0,s\in \left[ n\right] \), we have \(\omega _{s}>0\) for all \(s\in \left[ n\right] \). From (43) and \(g_{s}=G_{s}-G_{s-1}\), we get

    $$\begin{aligned} g_{s}\alpha _{1,s}\left( g,N\right)&=G_{s}^{N}-G_{s-1}^{N}=\left( G_{s}-G_{s-1}\right) \left( \sum \limits _{j=0}^{N-1}G_{s}^{N-1-j}G_{s-1} ^{j}\right) \\&=g_{s}\sum \limits _{j=0}^{N-1}G_{s}^{N-1-j}G_{s-1}^{j}\text {, and}\\ \alpha _{1,s}\left( g,N\right)&=\sum \limits _{j=0}^{N-1}G_{s}^{N-1-j} G_{s-1}^{j}, \end{aligned}$$

    which is strictly increasing in s. Thus, given a fixed number of signals m, there is an optimal monotone partitional \(\left\{ g^{*},\omega ^{*}\right\} \in {\mathcal {G}}_{m}^{+}\) by Theorem 1.

    In order to show that any \(m<n\) is suboptimal, consider the monotone partitional \(\left\{ g^{o},\omega ^{o}\right\} \in {\mathcal {G}}_{m+1}^{+}\), which is derived from \(\left\{ g^{*},\omega ^{*}\right\} \in {\mathcal {G}}_{m}^{+}\) by splitting the interval (partition element) \(\Theta _{1}^{*}=[0, \theta_1^{*}]\) into two subintervals of positive length, \(\Theta _{1}^{o}\) and \(\Theta _{2}^{o}\), and relabeling the signals in \(\left\{ g^{o},\omega ^{o}\right\} \) so that \(\omega _{s}^{o}=\omega _{s-1}^{*}\) for \(s=3,\ldots ,m+1\). This transformation decomposes the lowest posterior mean \(\omega _{1}^{*}\) into two means, \(\omega _{1}^{o}\) and \(\omega _{2}^{o}\), where \(\omega _{1}^{o}<\omega _{1}^{*}<\omega _{2}^{o}<\omega _{3}^{o} =\omega _{2}^{*}\) by construction. Also, \(\omega _{1}^{o}\) and \(\omega _{2}^{o}\) are induced with probabilities \(g_{1}^{o}=\Pr \left\{ \theta \in \Theta _{1}^{o}\right\} >0\) and \(g_{2}^{o}=\Pr \left\{ \theta \in \Theta _{2}^{o}\right\} >0\), where \(g_{1}^{o}+g_{2}^{o}=g_{1}^{*}\) and \(g_{1} ^{o}\omega _{1}^{o}+g_{2}^{o}\omega _{2}^{o}=g_{1}^{*}\omega _{1}^{*}\). Then, the difference \(\Delta {\mathcal {V}}_{1}={\mathcal {V}}_{1:N}\left( g^{o},\omega ^{o}\right) -{\mathcal {V}}_{1:N}\left( g^{*},\omega ^{*}\right) \) between the expected valuations of the winning bidder under \(\left\{ g^{o},\omega ^{o}\right\} \) and \(\left\{ g^{*},\omega ^{*}\right\} \) is equal to

    $$\begin{aligned} \Delta {\mathcal {V}}_{1}&=\sum \limits _{s=1}^{m+1}g_{s}^{o}\alpha _{1,s}\left( g^{o}\right) \omega _{s}^{o}-\sum \limits _{s=1}^{m}g_{s}^{*}\alpha _{1,s}\left( g^{*}\right) \omega _{s}^{*}\\&=g_{1}^{o}\alpha _{1,1}\left( g^{o}\right) \omega _{1}^{o}+g_{2}^{o} \alpha _{1,2}\left( g^{o}\right) \omega _{2}^{o}-g_{1}^{*}\alpha _{1,1}\left( g^{*}\right) \omega _{1}^{*}\\&=\left( G_{1}^{o}\right) ^{N}\omega _{1}^{o}+\left( \left( G_{2} ^{o}\right) ^{N}-\left( G_{1}^{o}\right) ^{N}\right) \omega _{2} ^{o}-\left( G_{1}^{*}\right) ^{N}\omega _{1}^{*}. \end{aligned}$$

    Because \(G_{1}^{o}=g_{1}^{o},G_{2}^{o}=G_{1}^{*}=g_{1}^{*}=g_{1} ^{o}+g_{2}^{o}\), and \(G_{0}^{o}=G_{0}^{*}=0\), we get

    $$\begin{aligned} \Delta {\mathcal {V}}_{1}&=\left( g_{1}^{o}\right) ^{N}\omega _{1} ^{o}+\left( \left( g_{1}^{o}+g_{2}^{o}\right) ^{N}-\left( g_{1} ^{o}\right) ^{N}\right) \omega _{2}^{o}-\left( g_{1}^{*}\right) ^{N-1}g_{1}^{*}\omega _{1}^{*}\\&=\left( g_{1}^{o}\right) ^{N}\omega _{1}^{o}+\left( \left( g_{1} ^{o}+g_{2}^{o}\right) ^{N}-\left( g_{1}^{o}\right) ^{N}\right) \omega _{2}^{o}-\left( g_{1}^{o}+g_{2}^{o}\right) ^{N-1}\left( g_{1}^{o}\omega _{1}^{o}+g_{2}^{o}\omega _{2}^{o}\right) . \end{aligned}$$

    Consider the function

    $$\begin{aligned} \varphi \left( x,y\right) =c^{N}y+\left( \left( c+d\right) ^{N} -c^{N}\right) x-\left( c+d\right) ^{N-1}\left( cy+dx\right) , \end{aligned}$$

    where \(c>0,d>0\), and \(N>1\). Then, we have

    $$\begin{aligned} \varphi \left( y,y\right)&=c^{N}y+\left( \left( c+d\right) ^{N} -c^{N}\right) y-\left( c+d\right) ^{N-1}\left( cy+dy\right) \\&=c^{N}y+\left( c+d\right) ^{N}y-c^{N}y-\left( c+d\right) ^{N}y\equiv 0, \end{aligned}$$

    and

    $$\begin{aligned} \frac{\partial \varphi \left( x,y\right) }{\partial x}=\left( c+d\right) ^{N}-c^{N}-d\left( c+d\right) ^{N-1}=\left( c+d\right) ^{N-1}c-c^{N}. \end{aligned}$$

    Thus, since \(\left( c+d\right) ^{N-1}>c^{N-1}\) for \(d>0\) and \(N>1\), we have \(\frac{\partial \varphi \left( x,y\right) }{\partial x}>\left( c+0\right) ^{N-1}c-c^{N}=0\) for \(c>0,d>0\), and \(N>1\). This property and \(\varphi \left( y,y\right) \equiv 0\) imply \(\varphi \left( x,y\right) >0\) for \(x>y,c>0,d>0\), and \(N>1\). Finally, note that

    $$\begin{aligned} \Delta {\mathcal {V}}_{1}=\varphi \left( \omega _{2}^{o},\omega _{1}^{o}\right) \text { for }c=g_{1}^{o}\text { and }d=g_{2}^{o}. \end{aligned}$$

    Because \(g_{1}^{o}>0,g_{2}^{o}>0,N>1\), and \(\omega _{2}^{o}>\omega _{1}^{o}\), we have \(\varphi \left( \omega _{2}^{o},\omega _{1}^{o}\right) >0\); that is, \({\mathcal {V}}_{1:N}\left( g^{o},\omega ^{o}\right) >{\mathcal {V}}_{1:N}\left( g^{*},\omega ^{*}\right) \) for all \(N>1\). (A numerical sketch of this splitting argument, under assumed parameter values, is given after this list.)

  (ii)

    Consider any \(\left\{ g,\omega \right\} \in {\mathcal {G}}_{n}\), where \(n>1\). For \(N=2\), it follows from (45) that

    $$\begin{aligned} \alpha _{2,s}\left( g\right) =\frac{2G_{s}-2G_{s}^{2}-2G_{s-1}+2G_{s-1} ^{2}+G_{s}^{2}-G_{s-1}^{2}}{g_{s}}=2-G_{s}-G_{s-1}, \end{aligned}$$

    which is decreasing in s, i.e., Condition 1 is reversed. Corollary 1 then yields \({\mathcal {V}}_{2:N}\left( g,\omega \right) \le {\mathcal {V}}_{2:N}\left( g,\omega ^{0}\right) \), where \(\left\{ g,\omega ^{0}\right\} \in {\mathcal {G}}_{n}\) is such that each signal \(s\in \left[ n\right] \) induces the same posterior mean \(\omega _{s}^{0}=E\left[ \upsilon \left( \theta \right) \right] \) with probability \(g_{s}\). Finally, \(\left\{ g,\omega ^{0}\right\} \) is ex-ante payoff equivalent to the degenerate \(\left\{ g_{1}^{0},\omega _{1}^{0}\right\} =\left\{ 1,E\left[ \upsilon \left( \theta \right) \right] \right\} \), which is induced by the uninformative signal structure \(\sigma _{1}\left( \theta \right) =1,\theta \in \Theta \). (The monotonicity of \(\alpha _{2,s}\) is verified numerically in the second sketch after this list.)

  (iii)

    For a given maximum number of signals \(n<\infty \), there is a solution to (46) in the set of monotone partitional signal structures of size \(m\le n\) by the first part of the proof. In order to show that any \(m<n\) is suboptimal if N is large enough, we follow an argument similar to that in part (i). In particular, given any \(m<n\) and a monotone partitional \(\left\{ g^{*},\omega ^{*}\right\} \in {\mathcal {G}}_{m}^{+}\), consider the monotone partitional \(\left\{ g^{o},\omega ^{o}\right\} \in {\mathcal {G}}_{m+1}^{+}\), which is derived from \(\left\{ g^{*},\omega ^{*}\right\} \in {\mathcal {G}}_{m}^{+}\) by decomposing the highest posterior mean \(\omega _{m}^{*}\) into two means, \(\omega _{m}^{o}\) and \(\omega _{m+1}^{o}\), where \(\omega _{m-1}^{o}=\omega _{m-1}^{*}<\omega _{m}^{o}<\omega _{m}^{*} <\omega _{m+1}^{o}\) by construction. Posterior means \(\omega _{m}^{o}\) and \(\omega _{m+1}^{o}\) are induced with probabilities \(g_{m}^{o}\) and \(g_{m+1}^{o} \), respectively, such that \(g_{m}^{o}+g_{m+1}^{o}=g_{m}^{*}\) and \(g_{m} ^{o}\omega _{m}^{o}+g_{m+1}^{o}\omega _{m+1}^{o}=g_{m}^{*}\omega _{m}^{*}\). Then, the difference \(\Delta {\mathcal {V}}_{2}\left( N\right) ={\mathcal {V}} _{2:N}\left( g^{o},\omega ^{o}\right) -{\mathcal {V}}_{2:N}\left( g^{*},\omega ^{*}\right) \) in the seller’s ex-ante revenues under \(\left\{ g^{o},\omega ^{o}\right\} \) and \(\left\{ g^{*},\omega ^{*}\right\} \) is equal to

    $$\begin{aligned} \Delta {\mathcal {V}}_{2}\left( N\right)&=\sum \limits _{s=1}^{m+1}g_{s} ^{o}\alpha _{2,s}\left( g^{o},N\right) \omega _{s}^{o}-\sum \limits _{s=1} ^{m}g_{s}^{*}\alpha _{2,s}\left( g^{*},N\right) \omega _{s}^{*}\\&=g_{m}^{o}\alpha _{2,m}\left( g^{o},N\right) \omega _{m}^{o}+g_{m+1} ^{o}\alpha _{2,m+1}\left( g^{o},N\right) \omega _{m+1}^{o}-g_{m}^{*} \alpha _{2,m}\left( g^{*},N\right) \omega _{m}^{*}. \end{aligned}$$

    From (45), we have

    $$\begin{aligned}&g_{m}^{o}\alpha _{2,m}\left( g^{o},N\right) \\&\quad =N\left( \left( G_{m} ^{o}\right) ^{N-1}\left( 1-G_{m}^{o}\right) -\left( G_{m-1}^{o}\right) ^{N-1}\left( 1-G_{m-1}^{o}\right) \right) +\left( G_{m}^{o}\right) ^{N}-\left( G_{m-1}^{o}\right) ^{N}\\&\quad =N\left( \left( 1-g_{m+1}^{o}\right) ^{N-1}g_{m+1}^{o}-\left( 1-g_{m}^{o}-g_{m+1}^{o}\right) ^{N-1}\left( g_{m}^{o}+g_{m+1}^{o}\right) \right) \\&\qquad +\left( 1-g_{m+1}^{o}\right) ^{N}-\left( 1-g_{m}^{o}-g_{m+1}^{o}\right) ^{N}, \\&g_{m+1}^{o}\alpha _{2,m+1}\left( g^{o},N\right) =N\left( \left( G_{m+1}^{o}\right) ^{N-1}\left( 1-G_{m+1}^{o}\right) -\left( G_{m} ^{o}\right) ^{N-1}\left( 1-G_{m}^{o}\right) \right) \\&\qquad +\left( G_{m+1} ^{o}\right) ^{N}-\left( G_{m}^{o}\right) ^{N}\\&\quad =-N\left( 1-g_{m+1}^{o}\right) ^{N-1}g_{m+1}^{o}+1-\left( 1-g_{m+1} ^{o}\right) ^{N}\text {, and} \\&g_{m}^{*}\alpha _{2,m}\left( g^{*},N\right) =N\left( \left( G_{m}^{*}\right) ^{N-1}\left( 1-G_{m}^{*}\right) \right. \\&\qquad \left. -\left( G_{m-1}^{*}\right) ^{N-1}\left( 1-G_{m-1}^{*}\right) \right) +\left( G_{m}^{*}\right) ^{N}-\left( G_{m-1}^{*}\right) ^{N}\\&\quad =-N\left( 1-g_{m}^{*}\right) ^{N-1}g_{m}^{*}+1-\left( 1-g_{m}^{*}\right) ^{N}, \end{aligned}$$

    where \(G_{m}^{o}=1-g_{m+1}^{o},G_{m-1}^{*}=G_{m-1}^{o}=1-g_{m}^{*}\), and \(G_{m}^{*}=G_{m+1}^{o}=1\).

    Since \(g_{m}^{o}>0,g_{m+1}^{o}>0\), and \(g_{m}^{o}+g_{m+1}^{o}=g_{m}^{*} \le 1\), it follows that

    $$\begin{aligned} \lim _{N\rightarrow \infty }g_{m}^{o}\alpha _{2,m}\left( g^{o},N\right) & =0\text {, }\lim _{N\rightarrow \infty }g_{m+1}^{o}\alpha _{2,m+1}\left( g^{o},N\right) =1, \\ &\quad \text { and }\lim _{N\rightarrow \infty }g_{m}^{*} \alpha _{2,m}\left( g^{*},N\right) =1. \end{aligned}$$

    This results in \(\lim _{N\rightarrow \infty }\Delta {\mathcal {V}}_{2}\left( N\right) =\omega _{m+1}^{o}-\omega _{m}^{*}>0\). Hence, \(\Delta {\mathcal {V}}_{2}\left( N\right) >0\) for all sufficiently large N, so any \(m<n\) is suboptimal when N is large enough; the third sketch below illustrates this limit numerically. \(\square \)
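The following numerical sketch checks both steps of part (i) under illustrative parameter values of our own choosing (the distribution \(g\), the split \((g_{1}^{o},g_{2}^{o})\), and the posterior means are assumptions, not objects from the paper): that \(\alpha _{1,s}\left( g,N\right) \) is strictly increasing in \(s\), and that splitting the lowest cell yields \(\Delta {\mathcal {V}}_{1}=\varphi \left( \omega _{2}^{o},\omega _{1}^{o}\right) >0\).

```python
# Illustrative check of part (i); all numbers below are assumptions.
def alpha_1(g, N):
    """alpha_{1,s}(g, N) = sum_{j=0}^{N-1} G_s^(N-1-j) G_{s-1}^j, s = 1..n."""
    G = [0.0]
    for gs in g:
        G.append(G[-1] + gs)
    return [sum(G[s] ** (N - 1 - j) * G[s - 1] ** j for j in range(N))
            for s in range(1, len(g) + 1)]

def phi(x, y, c, d, N):
    """phi(x, y) = c^N y + ((c+d)^N - c^N) x - (c+d)^(N-1) (c y + d x)."""
    return c**N * y + ((c + d)**N - c**N) * x - (c + d)**(N - 1) * (c*y + d*x)

a = alpha_1([0.2, 0.3, 0.5], N=4)
assert all(lo < hi for lo, hi in zip(a, a[1:]))      # increasing in s

# Split g_1* = 0.4 with mean w_1* = 0.3 into two cells, preserving the
# probability and the probability-weighted mean.
c, d, y = 0.15, 0.25, 0.2                            # g_1^o, g_2^o, w_1^o
x = (0.4 * 0.3 - c * y) / d                          # w_2^o = 0.36 > w_1^* > w_1^o
for N in (2, 3, 5, 10):
    assert phi(x, y, c, d, N) > 0                    # Delta V_1 > 0 for all N > 1
```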
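The second sketch verifies, for an assumed distribution \(g\), the reversed-Condition-1 property used in part (ii): for \(N=2\) the weights \(\alpha _{2,s}\left( g\right) =2-G_{s}-G_{s-1}\) are strictly decreasing in \(s\).

```python
# For N = 2, alpha_{2,s}(g) = 2 - G_s - G_{s-1}; g below is an assumption.
def alpha_2_N2(g):
    G = [0.0]
    for gs in g:
        G.append(G[-1] + gs)
    return [2 - G[s] - G[s - 1] for s in range(1, len(g) + 1)]

a = alpha_2_N2([0.2, 0.3, 0.5])
assert all(hi > lo for hi, lo in zip(a, a[1:]))      # strictly decreasing in s
print(a)                                             # [1.8, 1.3, 0.5]
```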
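Finally, the third sketch illustrates the limit in part (iii) with assumed numbers: the top cell \(g_{m}^{*}=0.5\) with mean \(\omega _{m}^{*}=0.8\) (and \(G_{m-1}=0.5\)) is split into cells of mass 0.3 and 0.2 preserving the weighted mean, and \(\Delta {\mathcal {V}}_{2}\left( N\right) \) is tracked as \(N\) grows. Consistent with part (ii), the difference is negative for small \(N\) and turns positive for large \(N\), approaching \(\omega _{m+1}^{o}-\omega _{m}^{*}=0.15\).

```python
# Illustrative check of part (iii); the split below is an assumption.
def w_alpha2(G_lo, G_hi, N):
    """g_s * alpha_{2,s}(g, N) expressed via the cdf values G_{s-1}, G_s."""
    return (N * (G_hi**(N - 1) * (1 - G_hi) - G_lo**(N - 1) * (1 - G_lo))
            + G_hi**N - G_lo**N)

wm_o = 0.7                                           # w_m^o < w_m^* = 0.8
wm1_o = (0.5 * 0.8 - 0.3 * wm_o) / 0.2               # = 0.95 > w_m^*
for N in (2, 5, 20, 100):
    dV2 = (w_alpha2(0.5, 0.8, N) * wm_o              # cell (G_{m-1}, G_m^o)
           + w_alpha2(0.8, 1.0, N) * wm1_o           # cell (G_m^o, 1)
           - w_alpha2(0.5, 1.0, N) * 0.8)            # original cell (G_{m-1}, 1)
    print(N, round(dV2, 4))   # negative at N = 2, 5; tends to 0.95 - 0.8 = 0.15
```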
