1 Introduction

In many real-life strategic interactions, agents rely on information revealed by an exogenous entity to make decisions. The latter acts as an informed principal whose goal is to shape the agents’ beliefs so as to achieve a desired outcome. In this context, deciding what information to reveal amounts to an information structure design problem. When information is incomplete, the information structure determines “which agent gets to know what” about the current state of the environment (i.e., the parameters determining payoff functions). There has been a recent surge of interest in the study of how an informed principal may steer agents’ collective behavior towards a favorable outcome. The study of these problems has been largely driven by their application in various domains such as auctions and online advertisement [1,2,3], voting [4, 5], traffic routing [6, 7], recommendation systems [8], security [9,10,11], and marketing [12, 13].

Persuasion is the task faced by an informed principal, whom we call the sender, who tries to influence the behavior of the self-interested agent(s) (i.e., the receivers) taking part in a strategic interaction. The sender faces the algorithmic problem of determining the optimal information structure to achieve her objectives. A solution to this problem is described through the notion of a signaling scheme, which is a mapping from the sender’s observations to the space of probability distributions over the set of available signals. A foundational model describing the persuasion problem is the Bayesian persuasion framework (BP) introduced by Kamenica and Gentzkow in [14]. That model describes a setting with a sender and a single receiver. There is a set of parameters influencing the payoff functions of the sender and of the receiver. These parameters are collectively called the state of nature, and model exogenous stochasticity in the environment. The sender and the receiver share a common prior over the possible states of nature. However, only the sender gets to observe the realized state of nature, which is drawn according to the shared prior probability distribution. This creates a fundamental asymmetry in the information available to the two agents. The sender can exploit this additional knowledge to steer the receiver’s actions towards a favorable outcome. Specifically, the action selected by the receiver is the best action available under her current posterior distribution, which is updated in a classical Bayesian fashion after observing the sender’s signal. Therefore, the prior distribution together with the sender’s signaling scheme determine the receiver’s equilibrium behavior. We observe that the BP framework assumes the sender’s commitment power, which is a natural assumption in many settings (see, e.g., the arguments by [14, 15]).
One argument to that effect is that reputation and credibility may be a key factor for the long-term utility of the sender [16].

In many practical scenarios, the sender may need to persuade multiple receivers, revealing information to each one of them. In the multi-receiver setting, it is useful to make a distinction between private and public signaling schemes. In the former setting, the sender may reveal different information to each receiver through private communication channels. In the latter, which is the focus of this paper, the sender has to reveal the same information to all receivers. Public persuasion is well suited for settings where private communication channels are either too costly or impractical. This is the case in scenarios with a large population of receivers, such as elections, and scenarios where receivers may share private information, which are frequent in practice.

In our paper, we adopt and generalize the multi-agent persuasion model introduced by Arieli and Babichenko in [18], which rules out the possibility of inter-agent externalities. Specifically, each receiver’s utility depends only on her action and on the realized state of nature, but not on the actions of other receivers. This assumption allows one to focus on the key problem of coordinating the receivers’ behavior, without the additional complexity arising from externalities, which have been shown to make the problem largely intractable [6, 19]. Previous works on Arieli and Babichenko’s model either address the private persuasion setting [18, 20, 21] or make some structural assumptions which render them special cases of our model [22]. To the best of our knowledge, this is the first work that generalizes the model by Arieli and Babichenko to settings with arbitrary spaces of states of nature, arbitrary receivers’ action spaces, and arbitrary sender’s utility functions. The generality of our setting raises a number of technical difficulties relative to previous works on the same model. Our solution to these challenges is a first step towards actionable persuasion models that can be applied to real-world multi-receiver problems without structural restrictions.

1.1 Context: Persuasion with Multiple Receivers

Dughmi and Xu [23] analyze for the first time Bayesian persuasion from a computational perspective, focusing on the single receiver case. In [18], Arieli and Babichenko introduce the model of persuasion with multiple receivers and without inter-agent externalities, with a focus on private Bayesian persuasion. In particular, they study the setting with a binary action space for the receivers and a binary space of states of nature. They provide a characterization of the optimal signaling scheme in the case of supermodular, anonymous submodular, and supermajority sender’s utility functions. In [20], Babichenko and Barman extend the work by Arieli and Babichenko providing a tight \((1-1/e)\)-approximate signaling scheme for monotone submodular sender’s utilities and showing that an optimal private scheme for anonymous utility functions can be found efficiently. In [21], Dughmi and Xu generalize the previous model to settings with an arbitrary number of states of nature.

When considering the problem of designing public persuasive signaling schemes, some previous works study scenarios with inter-agent externalities by making some structural assumptions on the nature of the strategic interaction. For instance, Bhaskar et al. [6] and Rubinstein [19] study public signaling problems in which two receivers play a zero-sum game. In particular, Bhaskar et al. rule out an additive PTAS assuming hardness of the planted clique problem. The setting studied by Bhaskar et al. [6] and Rubinstein [19] is fundamentally different from ours. In their setting, the game can be compactly represented in normal form, and the complexity of the problem lies in handling externalities among players. In our setting, on the other hand, a normal-form representation is not compact, since its size is exponential in the (arbitrary) number of receivers.

Moreover, Rubinstein proves that the problem of computing an \(\epsilon \)-optimal signaling scheme requires at least quasi-polynomial time assuming the Exponential Time Hypothesis (ETH). This result is tight due to the quasi-polynomial approximation scheme proposed by Cheng et al. [5].

A number of previous works focus on the public signaling problem in the no inter-agent externalities framework of Arieli and Babichenko. In particular, Dughmi and Xu [21] rule out the existence of a PTAS even when receivers have binary action spaces and objectives are linear, unless \(\textsc {P}=\textsc {NP}\). For this reason, most of the following works focus on the computation of bi- or tri-criteria approximations in which the persuasion constraints can be violated by a small amount. In [5], Cheng et al. describe a polynomial-time tri-criteria approximation algorithm for k-voting scenarios. The work of [5] on k-voting is related to the voting problem that we study in this paper. However, while we relax the problem allowing approximately optimal and approximately persuasive signaling schemes, [5] considers also a third type of relaxation. In particular, they consider a relaxed sender’s utility function in which less than k votes are sufficient to win the election. This third relaxation is necessary to provide a PTAS to the problem, while we show that without relaxing the utility function the problem requires at least quasi-polynomial time. In [22], Xu studies public persuasion with binary action spaces and an arbitrary number of states of nature, showing that no bi-criteria FPTAS is possible, unless \(\textsc {P}=\textsc {NP}\). Furthermore, the author proposes a bi-criteria PTAS for monotone submodular sender’s utility functions and shows that, when the number of states of nature is fixed and a non-degeneracy assumption holds, an optimal signaling scheme can be computed in polynomial time.

1.2 Our Results and Techniques

We provide a tight characterization of the complexity of computing bi-criteria approximations of optimal public signaling schemes in arbitrary persuasion problems with n receivers and no inter-agent externalities.

Impossibility result  Previous works studying the same model (i.e., one with no inter-agent externalities and public signaling schemes) exploit specific structures of the sender’s utility functions to provide optimal or approximate polynomial-time algorithms. We show that the complexity of the approximation problem shifts from poly-time to quasi-poly-time when the utility function of the sender can be arbitrary. Indeed, we show that the positive results by Xu [22], which assume noise stability of the sender’s utility function, cannot be extended to the case of arbitrary sender’s utility functions. Specifically, we argue that no polynomial-time bi-criteria approximation algorithm is possible in general settings. This is shown by reasoning over simple k-voting instances, which are per se an interesting application scenario of Bayesian persuasion (see, e.g., the work by Castiglioni et al. [24]), and are sufficient to extend the result to the general case of arbitrary sender’s utility functions. In proving this result, we follow a different approach from that used in [22]. Specifically, we cannot hope for a ‘standard’ \(\textsf{NP}\)-hardness result because there exist quasi-polynomial-time bi-criteria approximation algorithms (Theorem 2). Therefore, by assuming ETH, we show that it is unlikely that there exists a bi-criteria polynomial-time approximation algorithm even in instances with simple utility functions and a fixed space of actions. Let n be the size of the input instance. Our main impossibility result reads as follows.

Theorem 1

Assuming ETH, there exists a constant \(\epsilon ^*>0\) such that, for any \(0<\epsilon \le \epsilon ^*\), finding a signaling scheme that is \(\epsilon \)-persuasive and \(\alpha \)-approximate requires time \(n^{{\tilde{\Omega }}(\log n)}\) for any multiplicative or additive factor \(\alpha \in (0,1)\), even with binary action spaces.

The proof of this result requires an intermediate step that is of independent interest and of general applicability. Specifically, we study a slight variation of the Maximum Feasible Subsystem of Linear Inequalities problem (\(\epsilon \)-MFS) [5], where, given a linear system \(A\,\textbf{x}\ge 0\), \(A\in [-1,1]^{n_{\text {row}}\times n_{\text {col}}}\), we look for the vector \(\textbf{x}\in \Delta _{n_{\text {col}}}\) almost (i.e., except for an additive factor \(\epsilon \)) satisfying the highest number of inequalities (Definition 4). This is a constrained variant of the Max FLS problem previously studied by Amaldi and Kann in [25], and it is commonly used in scheduling [26], signaling, and mechanism design [5]. In Sect. 5, we prove that solving \(\epsilon \)-MFS requires at least a quasi-polynomial number of steps assuming ETH. The proof is based on a reduction from two-prover games [27, 28]. Then, equipped with the result on \(\epsilon \)-MFS, we focus on a simple public persuasion problem where the receivers are voters, and they have a binary action space since they must choose one between two candidates. In Sect. 6, we prove a hardness result (Theorem 8) for this setting which directly implies Theorem 1. We show that the \(\epsilon \)-MFS problem is deeply connected to the problem of computing ‘good’ posteriors, as finding an optimal \(\textbf{x}\) in \(\epsilon \)-MFS maps to the problem of finding an \(\epsilon \)-persuasive posterior, which is equivalent to determining an \(\epsilon \)-persuasive signaling scheme.
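To make the \(\epsilon \)-MFS objective concrete, the following Python sketch (the function name is ours) evaluates a candidate vector \(\textbf{x}\in \Delta _{n_{\text {col}}}\): it counts how many inequalities of \(A\,\textbf{x}\ge 0\) hold up to the additive slack \(\epsilon \). It is only an evaluator of the quantity being maximized, not a solver for \(\epsilon \)-MFS.

```python
import numpy as np

def eps_mfs_value(A, x, eps):
    """Count the rows i of the system A x >= 0 that x satisfies
    up to an additive slack eps, i.e. (A x)_i >= -eps."""
    assert np.isclose(x.sum(), 1.0) and np.all(x >= 0)  # x in the simplex
    return int(np.sum(A @ x >= -eps))

# A tiny system with entries in [-1, 1]: the third inequality is
# unsatisfiable on the simplex, while the first two hold at x = (1/2, 1/2).
A = np.array([[ 1.0, -1.0],
              [-0.4,  0.5],
              [-1.0, -1.0]])
x = np.array([0.5, 0.5])
print(eps_mfs_value(A, x, eps=0.0))  # 2
print(eps_mfs_value(A, x, eps=1.0))  # 3
```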

Positive result  In order to design an approximation algorithm (in the multiplicative sense), we resort to the assumption of \(\alpha \)-approximable utility functions for the sender, as previously defined by Xu in [22]. A sender’s utility function is \(\alpha \)-approximable if it is possible to compute, in polynomial time, a tie-breaking rule for the receivers that guarantees the sender an \(\alpha \)-approximation of the optimal objective value. The \(\alpha \)-approximability condition is a natural minimal requirement since, otherwise, even the problem of evaluating the sender’s objective function for a given posterior over the states of nature would not be tractable. When the sender’s utility function is \(\alpha \)-approximable, there is no hope for a better approximation than an \(\alpha \)-approximate signaling scheme. The following theorem, presented in Sect. 7, shows that it is possible to compute, in quasi-polynomial time, a bi-criteria approximation with a factor arbitrarily close to \(\alpha \).

Theorem 2

Let the sender’s utility function f be \(\alpha \)-approximable. Then, for any \(\delta >0\) and \(\epsilon >0\), there exists a \(\text {poly}\left( n^{\log (n/\delta )\,/\,\epsilon ^2}\right) \) algorithm that outputs an \(\alpha \,(1-\delta )\)-approximate \(\epsilon \)-persuasive public signaling scheme.

Therefore, our approximation algorithm guarantees the best possible factor on the objective value, and an arbitrarily small loss in persuasiveness. For 1-approximable functions, Theorem 2 yields a bi-criteria QPTAS. In the setting of Xu [22] (i.e., binary action spaces and state-independent sender’s utility function), our result directly yields a QPTAS for any monotone sender’s utility function. In order to prove the result, we show that any posterior can be represented as a convex combination of k-uniform posteriors with only a small loss in the objective value. By restricting our attention to the set of k-uniform posteriors, which has a quasi-polynomial size, the problem can be solved via a linear program of quasi-polynomial size.
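A k-uniform posterior is the empirical distribution of a multiset of k states of nature, i.e., a distribution whose entries are integer multiples of 1/k. The following minimal Python sketch (names are ours) enumerates them, illustrating that over d states there are only \(\binom{k+d-1}{d-1}\) such posteriors:

```python
from itertools import combinations_with_replacement
from fractions import Fraction

def k_uniform_posteriors(d, k):
    """All k-uniform posteriors over d states of nature: distributions
    whose entries are integer multiples of 1/k (empirical distributions
    of a multiset of k states)."""
    posteriors = set()
    for multiset in combinations_with_replacement(range(d), k):
        p = [Fraction(0)] * d
        for theta in multiset:
            p[theta] += Fraction(1, k)
        posteriors.add(tuple(p))
    return sorted(posteriors)

# With d = 2 states and k = 2, the k-uniform posteriors are
# (0, 1), (1/2, 1/2), (1, 0): there are C(k + d - 1, d - 1) = 3 of them.
print(len(k_uniform_posteriors(2, 2)))  # 3
```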

2 Preliminaries

This section describes the instantiation of the Bayesian persuasion framework which is the focus of this work (Sect. 2.1), public signaling problems (Sect. 2.2), and the notion of bi-criteria approximation that we adopt (Sect. 2.3). For a comprehensive overview of the Bayesian persuasion framework, we refer the reader to [15, 29], and [30].

2.1 Basic Model

Our model is a generalization of the framework introduced by Arieli and Babichenko in [18], that is, multi-agent persuasion with no inter-agent externalities. We adopt the perspective of a sender facing a finite set of receivers \(\mathcal {R}:=[{\bar{n}}]\). Each receiver \(r \in \mathcal {R}\) has a finite set of \(\varrho ^r\) actions \(\mathcal {A}^r:=\{a_i\}_{i=1}^{\varrho ^r}\). Each receiver’s payoff depends only on the action she takes and on a (random) state of nature \(\theta \), drawn from a finite set \(\Theta :=\{\theta _i\}_{i=1}^d\) of cardinality d. In particular, receiver r’s utility is given by the function \(u^r: \mathcal {A}^r \times \Theta \rightarrow [0,1]\). The utility of each receiver does not depend on other receivers’ actions because of the no inter-agent externalities assumption. We denote by \(u_\theta ^r(a^r)\in [0,1]\) the utility observed by receiver r when the state of nature is \(\theta \) and she plays \(a^r\). Let \(\mathcal {A}:=\times _{r\in \mathcal {R}} \mathcal {A}^r\) be the set of joint receivers’ actions. An action profile (i.e., a tuple specifying an action for each receiver) is denoted by \(\textbf{a}=(a^r)_{r=1}^{{\bar{n}}}\in \mathcal {A}\). The sender’s utility, when the state of nature is \(\theta \), is given by the function \(f_\theta : \mathcal {A}\rightarrow [0,1]\). We write \(f_\theta (\textbf{a})\) to denote the sender’s payoff when the receivers behave according to action profile \(\textbf{a}\) and the state of nature is \(\theta \). As is customary in Bayesian persuasion, we assume \(f_\theta \) can be represented succinctly, that is, without explicitly describing the function through its (exponentially many) input–output pairs. As an example, the reader can refer to Eq. 3, where it is possible to compute the sender’s payoff by reasoning on the structure of the action profile at hand.

The state of nature \(\theta \) is drawn from a common prior distribution \(\mu \in \text {int}(\Delta _\Theta )\), which is explicitly known to the sender and the receivers. Moreover, the sender can publicly commit to a policy \(\phi \) (i.e., a signaling scheme, see Sect. 2.2) which maps states of nature to signals for the receivers. A generic signal for receiver r is denoted by \(s^r\), while the set of available signals to each receiver r is denoted by \(\mathcal {S}^r\). The interaction between the sender and the receivers goes as follows:

  1. 1.

    The sender commits to a publicly known signaling scheme \(\phi \);

  2. 2.

    The sender observes the realized state of nature \(\theta \sim \mu \);

  3. 3.

    The sender draws a signal \(s^r\) for each receiver according to the signaling scheme \(\phi _\theta \), and communicates to each receiver r the signal \(s^r\);

  4. 4.

    Each receiver r observes \(s^r\) and updates her prior beliefs over \(\Theta \) following Bayes rule. Then, each receiver r selects an action \(a^r\in \mathcal {A}^r\) maximizing her expected reward.

Let \(\textbf{a}=(a^1,\ldots ,a^{{\bar{n}}})\in \mathcal {A}\) be the tuple of the receivers’ choices, then each receiver r gets payoff \(u^r_{\theta }(a^r)\), and the sender observes payoff \(f_\theta (\textbf{a})\). This work focuses on the specific setting in which \(\phi \) is a public signaling scheme. We give more details on the structure of public signaling schemes in the following section.

2.2 Public Signaling Schemes

A signal profile is a tuple \(\textbf{s}=(s^r)_{r=1}^{{\bar{n}}}\in \mathcal {S}\) specifying a signal for each receiver, where \(\mathcal {S}:=\times _{r\in \mathcal {R}} \mathcal {S}^r\). A public signaling scheme is a function \(\phi :\Theta \rightarrow \Delta _\mathcal {S}\) mapping states of nature to probability distributions over signal profiles, with the constraint that each receiver has to receive the same signal, that is, for any \(\theta \) and \(\textbf{s}\sim \phi _\theta \), it holds \(s^r=s^{r'}\) for each pair of receivers \(r,r'\). With a slight abuse of notation, we write \(s\in \mathcal {S}\) to denote the public signal received by all receivers. The probability with which the sender selects s after observing \(\theta \) is denoted by \(\phi _{\theta }(s)\). Thus, it holds \(\sum _{s\in \mathcal {S}} \phi _\theta (s)=1\) for each \(\theta \in \Theta \).

After observing \(s\in \mathcal {S}\), receiver r performs a Bayesian update and infers a posterior belief \(\textbf{p}\in \Delta _\Theta \) over the states of nature. Specifically, the realized state of nature is \(\theta \) with probability

$$\begin{aligned} p_\theta :=\frac{\mu _\theta \, \phi _\theta (s)}{\sum _{\theta '\in \Theta }\mu _{\theta '}\,\phi _{\theta '}(s)}. \end{aligned}$$

Since the prior is common knowledge and all receivers observe the same s, they all perform the same Bayesian update and have the same posterior belief regarding the realized state of nature. After computing \(\textbf{p}\), since the problem is without inter-agent externalities, each receiver solves a disjoint single-agent decision problem to find the action maximizing her expected utility.
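The Bayesian update above can be sketched in a few lines of Python (the function name is ours):

```python
import numpy as np

def posterior(mu, phi_s):
    """Posterior over states after observing public signal s.
    mu[t]    : prior probability of state t
    phi_s[t] : probability phi_t(s) of sending s in state t."""
    joint = mu * phi_s          # mu_t * phi_t(s), for each state t
    return joint / joint.sum()  # normalize by sum_t' mu_t' * phi_t'(s)

mu = np.array([1/3, 1/3, 1/3])
phi_s = np.array([1.0, 0.0, 1.0])  # a signal sent only in the first and third state
print(posterior(mu, phi_s).tolist())  # [0.5, 0.0, 0.5]
```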

A signaling scheme is direct when signals can be mapped to actions of the receivers, and interpreted as action recommendations. Each receiver is sent the same signal \(\textbf{s}\in \mathcal {A}\) specifying a (possibly different) action for each receiver, that is, the set of possible signals is \(\mathcal {S}=\mathcal {A}\). Moreover, a signaling scheme is persuasive if following the recommendations is an equilibrium of the underlying Bayesian game [31, 32]. A direct signaling scheme is persuasive if, for every receiver r, the sender’s action recommendation belongs to \(\mathop {\mathrm {arg\,max}}\limits _{a\in \mathcal {A}^r} \sum _{\theta \in \Theta } p_\theta \, u^r_{\theta }(a)\). A simple revelation-principle style argument shows that there always exists an optimal public signaling scheme which is both direct and persuasive [17, 22]. A signal in a direct signaling scheme can be equivalently expressed as an action profile \(\textbf{a}\in \mathcal {A}\). It is easy to see that there is an exponential number of such signals. We write \(\phi _\theta (\textbf{a})\) to denote the probability with which the sender selects \(\textbf{s}=\textbf{a}\) when the realized state of nature is \(\theta \). The problem of determining an optimal public signaling scheme which is direct and persuasive can be formulated with the following (exponentially sized) linear program (LP):

$$\begin{aligned} \max&\sum _{\theta \in \Theta ,\textbf{a}\in \mathcal {A}}\,\mu _\theta \,\phi _\theta (\textbf{a}) \,f_\theta (\textbf{a}) \end{aligned}$$
(1a)
$$\begin{aligned} \text {s.t. }&\sum _{\theta \in \Theta }\mu _\theta \,\phi _\theta (\textbf{a}) \,\Big (u^r_\theta (a^{r})-u^r_\theta (a')\Big )\ge 0&\forall r\in \mathcal {R},\forall \textbf{a}\in \mathcal {A},a'\in \mathcal {A}^r\nonumber \\&\sum _{\textbf{a}\in \mathcal {A}}\phi _\theta (\textbf{a})=1&\forall \theta \in \Theta \nonumber \\&\phi _\theta (\textbf{a})\ge 0&\forall \theta \in \Theta ,\forall \textbf{a}\in \mathcal {A}\end{aligned}$$
(1b)

where \(\textbf{a}=(a^r)_{r=1}^{{\bar{n}}}\in \mathcal {A}\). The sender’s goal is computing the signaling scheme maximizing her expected utility (Objective Function (1a)). Constraints (1b) force the public signaling scheme to be persuasive.
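As an illustration, LP (1) can be instantiated and solved with an off-the-shelf solver. The sketch below (Python with SciPy; the instance and all names are ours) encodes a classic single-receiver example: a judge choosing between convicting and acquitting, states guilty/innocent with prior (0.3, 0.7), and a sender who is paid only upon conviction. Note this is only a toy instantiation: in general the LP has exponentially many variables and constraints.

```python
import numpy as np
from scipy.optimize import linprog

# Variables x = [phi_g(convict), phi_g(acquit), phi_i(convict), phi_i(acquit)].
mu = np.array([0.3, 0.7])      # prior: guilty, innocent
# Receiver utility u[theta][action]; action 0 = convict, 1 = acquit.
u = np.array([[1.0, 0.0],      # guilty: convicting is right
              [0.0, 1.0]])     # innocent: acquitting is right
# The sender gets 1 whenever the receiver convicts, in every state;
# linprog minimizes, so objective (1a) is negated.
c = [-mu[0], 0.0, -mu[1], 0.0]

# Persuasiveness constraints (1b), rewritten in <= 0 form.
A_ub = [[-mu[0]*(u[0, 0]-u[0, 1]), 0.0, -mu[1]*(u[1, 0]-u[1, 1]), 0.0],
        [0.0, -mu[0]*(u[0, 1]-u[0, 0]), 0.0, -mu[1]*(u[1, 1]-u[1, 0])]]
b_ub = [0.0, 0.0]
A_eq = [[1, 1, 0, 0], [0, 0, 1, 1]]  # each phi_theta is a distribution
b_eq = [1, 1]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
print(round(-res.fun, 3))  # optimal sender utility: 0.6
```

The optimum 0.6 matches the well-known analysis of this instance: the sender recommends conviction always in the guilty state and with probability 3/7 in the innocent state.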

2.3 Bi-criteria Approximation

Let \(\epsilon \in [0,1]\). We say that a public signaling scheme is \(\epsilon \)-persuasive if the following holds for any \(r\in \mathcal {R}\), \(\textbf{a}\in \mathcal {A}\), and \(a'\in \mathcal {A}^r\):

$$\begin{aligned} \sum _{\theta \in \Theta }\,\mu _\theta \,\phi _\theta (\textbf{a}) \,\Big (u^r_\theta (a^{r})-u^r_\theta (a')\Big )\ge -\epsilon . \end{aligned}$$
(2)

Throughout the paper, we focus on the computation of approximately optimal signaling schemes. Let Opt be the optimal value of LP (1), i.e., the best expected utility that the sender can reach under public persuasion constraints. Since \(f_\theta \) is a non-negative function for each state of nature \(\theta \), we have that \(\textsc {Opt}\ge 0\). When a signaling scheme yields an expected sender utility of at least \(\alpha \,\textsc {Opt}\), with \(\alpha \in (0,1]\), we say that the signaling scheme is \(\alpha \)-approximate (that is, approximate in the multiplicative sense). When a signaling scheme yields an expected sender utility of at least \(\textsc {Opt}-\alpha \), with \(\alpha \in [0,1)\), we say that the scheme is \(\alpha \)-optimal (that is, approximate in the additive sense).

Finally, we consider approximations which relax both the optimality and the persuasiveness constraints. When a signaling scheme is both \(\epsilon \)-persuasive and \(\alpha \)-approximate (or \(\alpha \)-optimal), we say it is a bi-criteria approximation. We say that one such signaling scheme is \((\alpha ,\epsilon )\)-persuasive.
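Constraint (2) is straightforward to verify for a given direct scheme. The following Python sketch (the data layout and names are ours) checks \(\epsilon \)-persuasiveness by enumerating every recommendation profile and every unilateral deviation:

```python
import numpy as np

def is_eps_persuasive(mu, phi, u, eps):
    """Check constraint (2) for a direct public signaling scheme.
    mu  : prior over the d states of nature
    phi : dict mapping an action profile a (tuple) -> length-d array phi_theta(a)
    u   : list with u[r][theta, action], the receivers' utilities
    Returns True iff no deviation gains more than eps in expectation."""
    for a, probs in phi.items():
        for r, a_r in enumerate(a):
            for a_dev in range(u[r].shape[1]):
                gain = sum(mu[t] * probs[t] * (u[r][t, a_r] - u[r][t, a_dev])
                           for t in range(len(mu)))
                if gain < -eps:
                    return False
    return True

# A single receiver with two actions and two states, prior (0.3, 0.7).
mu = np.array([0.3, 0.7])
u = [np.array([[1.0, 0.0], [0.0, 1.0]])]
phi = {(0,): np.array([1.0, 3/7]),   # recommend action 0
       (1,): np.array([0.0, 4/7])}   # recommend action 1
print(is_eps_persuasive(mu, phi, u, eps=0.0))  # True
```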

3 An Application: Persuasion in Voting Problems

In order to clarify the framework we just described, we present a simple example of a possible application of public Bayesian persuasion with no inter-agent externalities. This example is going to be useful in the remainder of the paper.

In an election with the k-voting rule, candidates are elected if they receive at least \(k\in [{\bar{n}}]\) votes (see Brandt et al. [33] for further details). In this setting, a sender (e.g., a politician or a lobbyist) may send signals to the voters on the basis of her private information which is hidden to them. After observing the sender’s signal, each voter (i.e., one of the receivers) chooses one among the set of candidates.

In the following, we will employ instances of k-voting in which receivers have to choose between two candidates. Thus, they have a binary action space with actions \(a_0\) and \(a_1\) corresponding to choosing the first or the second candidate, respectively. Each receiver r has utility \(u_\theta ^r(a)\in [0,1]\) for each \(a\in \{a_0,a_1\}\), where \(\theta \in \Theta \). The sender’s preferred candidate is the one corresponding to action \(a_0\). Therefore, her objective is maximizing the probability that \(a_0\) receives at least k votes. Formally, the sender’s utility function is such that \(f_\theta =f\) for each \(\theta \), and

$$\begin{aligned} f(\textbf{a}):={\left\{ \begin{array}{ll} \begin{array}{ll} 1 &{} \text { if } |\{r\in \mathcal {R}: a^r=a_0\}|\ge k\\ 0 &{} \text { otherwise} \end{array} \end{array}\right. }\text { for each } \textbf{a}\in \mathcal {A}. \end{aligned}$$
(3)

Moreover, let \(W:\Delta _\Theta \rightarrow \mathbb {N}_0^+\) be a function returning, for a given posterior distribution \(\textbf{p}\in \Delta _\Theta \), the number of receivers such that \(\sum _{\theta }p_\theta \,(u_\theta ^r(a_0)-u_\theta ^r(a_1))\ge 0\), i.e., the number of voters that will vote for \(a_0\) with a persuasive signaling scheme. Analogously, \(W_\epsilon (\textbf{p})\) is the number of receivers for which \(\sum _{\theta }p_\theta \,(u_\theta ^r(a_0)-u_\theta ^r(a_1))\ge -\epsilon \), i.e., the number of voters that will vote for \(a_0\) with an \(\epsilon \)-persuasive signaling scheme. In the above voting setting, we refer to the problem of finding an \(\epsilon \)-persuasive signaling scheme which is also \(\alpha \)-approximate (or \(\alpha \)-optimal) as \((\alpha ,\epsilon )\)-k-voting. To further clarify this election scenario, we provide the following simple example, adapted from Castiglioni et al. [24].
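The sender’s utility (3) and the counting functions W and \(W_\epsilon \) admit a direct implementation. The sketch below is a minimal illustration in Python, under our own encoding: action \(a_0\) is coded as 0 and utilities are indexed as u[r][theta, action].

```python
import numpy as np

def f_kvoting(a, k):
    """Sender utility (3): 1 iff at least k receivers vote a0 (coded as 0)."""
    return 1.0 if sum(1 for a_r in a if a_r == 0) >= k else 0.0

def W_eps(p, u, eps=0.0):
    """Number of voters r with sum_theta p_theta (u^r(a0) - u^r(a1)) >= -eps;
    eps = 0 recovers W(p)."""
    diffs = u[:, :, 0] - u[:, :, 1]        # shape (receivers, states)
    return int(np.sum(diffs @ p >= -eps))  # per-voter expected advantage of a0

# Two voters, two states; voter 0 leans towards a0, voter 1 towards a1.
u = np.array([[[1.0, 0.0], [0.0, 1.0]],
              [[0.0, 1.0], [1.0, 0.0]]])
p = np.array([0.6, 0.4])
print(W_eps(p, u))             # 1
print(W_eps(p, u, eps=0.25))   # 2
print(f_kvoting((0, 0), k=2))  # 1.0
```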

Example 1

There are three voters \(\mathcal {R}=\{1,2,3\}\) who must select one of two candidates \(\{a_0,a_1\}\). The sender (e.g., a politician or a lobbyist) observes the realized state of nature, drawn from the uniform probability distribution (1/3, 1/3, 1/3) over \(\Theta =\{A,B,C\}\), and exploits this information to support the election of \(a_0\). The state of nature describes the position of \(a_0\) on a matter of particular interest to the voters. Moreover, all the voters have a slightly negative opinion of candidate \(a_1\), independently of the state of nature, while the opinion on candidate \(a_0\) can be better or worse than the opinion on \(a_1\) depending on the state of nature. Table 1 describes the utility of the three voters.

Table 1 Voters’ payoffs from voting different candidates
Table 2 Optimal signaling scheme

We consider a k-voting rule with \(k=2\). Without any form of signaling, all the voters would vote for \(a_1\) because it provides an expected utility of \(-1/4\), against \(-1/3\), and the sender would get a utility of 0. If the sender discloses all the information regarding the state of nature (i.e., with a fully informative signal), the sender would still get a utility of 0, since two out of three receivers would pick \(a_1\) in each of the possible states. However, the sender can design a public signaling scheme guaranteeing herself a utility of 1 for each state of nature. Table 2 describes one such scheme with arbitrary signals. Suppose the observed state is A, and that the signal sent by the sender is “not B”. Then, the posterior distribution over the states of nature is (1/2, 0, 1/2). Therefore, receiver 1 and receiver 3 would vote for \(a_0\) since their expected utility would be 0 against \(-1/4\). Similarly, for any other signal, two receivers vote for \(a_0\). Then, the sender’s expected payoff is 1. We can recover an equivalent direct signaling scheme by sending a tuple with a candidate suggestion for each voter. For example, “not A” would become \((a_1,a_0,a_0)\), and each voter would observe the recommendations given to the others.

4 Technical Toolkit

In this section, we summarize some key results previously studied in the literature that we will extensively use in the remainder of the paper. In particular, we describe some of the results on two-prover games by Babichenko et al. [34] and Deligkas et al. [28] (Sect. 4.1), and we describe a useful theorem on error-correcting codes due to Gilbert [35] (Sect. 4.2).

4.1 Two-Prover Games

A two-prover game \(\mathcal {G}\) is a co-operative game played by two players (Merlin \(_1\) and Merlin \(_2\), respectively), and an adjudicator (verifier) called Arthur. At the beginning of the game, Arthur draws a pair of questions \((x,y)\in \mathcal {X}\times \mathcal {Y}\) according to a probability distribution \(\mathcal {D}\) over the joint set of questions (i.e., \(\mathcal {D}\in \Delta _{\mathcal {X}\times \mathcal {Y}}\)). Merlin \(_1\) (resp., Merlin \(_2\)) observes x (resp., y) and chooses an answer \(\xi _1\) (resp., \(\xi _2\)) from her finite set of answers \(\Xi _1\) (resp., \(\Xi _2\)). Then, Arthur declares the Merlins to have won with a probability equal to the value of a verification function \(\mathcal {V}(x,y,\xi _1,\xi _2)\). A strategy for Merlin \(_1\) is a function \(\eta _1:\mathcal {X}\rightarrow \Xi _1\) mapping each possible question to an answer. Analogously, \(\eta _2: \mathcal {Y}\rightarrow \Xi _2\) is a strategy of Merlin \(_2\). Before the beginning of the game, Merlin \(_1\) and Merlin \(_2\) can agree on their pair of (possibly mixed) strategies \((\eta _1,\eta _2)\), but no communication is allowed during the game. The payoff of a game \(\mathcal {G}\) to the Merlins under \((\eta _1,\eta _2)\) is defined as: \( u(\mathcal {G},\eta _1,\eta _2):=\mathbb {E}_{{(x,y)\sim \mathcal {D}}} [\mathcal {V}(x,y,\eta _1(x),\eta _2(y))] \). The value of a two-prover game \(\mathcal {G}\), denoted by \(\omega (\mathcal {G})\), is the maximum expected payoff to the Merlins when they play optimally: \(\omega (\mathcal {G}):=\max _{\eta _1}\max _{\eta _2} u(\mathcal {G},\eta _1,\eta _2)\). The size of the game is \(|\mathcal {G}|=|\mathcal {X}\times \mathcal {Y}\times \Xi _1\times \Xi _2|\).
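Since the Merlins cooperate, \(\omega (\mathcal {G})\) is attained at a deterministic strategy pair, so for tiny games it can be computed by brute force. The Python sketch below (the representation is ours) does exactly that, and as a sanity check recovers the classical value 3/4 of the CHSH game, in which Arthur accepts iff \(\xi _1\oplus \xi _2 = x\wedge y\):

```python
import itertools
import numpy as np

def game_value(D, V, n_answers1, n_answers2):
    """Brute-force omega(G) by enumerating deterministic strategy pairs
    (mixed strategies cannot beat the best deterministic pair).
    D[x, y]            : probability of question pair (x, y)
    V[x, y, xi1, xi2]  : acceptance probability of Arthur."""
    nx, ny = D.shape
    best = 0.0
    for eta1 in itertools.product(range(n_answers1), repeat=nx):
        for eta2 in itertools.product(range(n_answers2), repeat=ny):
            payoff = sum(D[x, y] * V[x, y, eta1[x], eta2[y]]
                         for x in range(nx) for y in range(ny))
            best = max(best, payoff)
    return best

# CHSH as a free game: uniform questions, accept iff xi1 XOR xi2 == x AND y.
D = np.full((2, 2), 0.25)
V = np.zeros((2, 2, 2, 2))
for x, y, a, b in itertools.product(range(2), repeat=4):
    V[x, y, a, b] = 1.0 if (a ^ b) == (x & y) else 0.0
print(game_value(D, V, 2, 2))  # 0.75
```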

A two-prover game is called a free game if \(\mathcal {D}\) is a uniform probability distribution over \(\mathcal {X}\times \mathcal {Y}\). This implies that there is no correlation between the questions sent to Merlin \(_1\) and Merlin \(_2\). It is possible to build a family of free games mapping to 3SAT formulas arising from Dinur’s PCP theorem. We say that the size n of a formula \(\varphi \) is the number of variables plus the number of clauses in the formula. Moreover, SAT(\(\varphi \))\(\in [0,1]\) is the maximum fraction of clauses that can be satisfied in \(\varphi \). With this notation, Dinur’s PCP theorem reads as follows:

Theorem 3

[Dinur’s PCP Theorem [36]] Given any 3SAT instance \(\varphi \) of size n, and a constant \(\rho \in (0,1/8)\), we can produce in polynomial time a 3SAT instance \(\varphi '\) such that:

  1. 1.

    The size of \(\varphi '\) is \(n\,\cdot \,\text {polylog}(n)\);

  2. 2.

    Each clause of \(\varphi '\) contains exactly 3 variables, and every variable is contained in at most \(d=O(1)\) clauses;

  3. 3.

    If \(\text {SAT}(\varphi )=1\), then \(\text {SAT}(\varphi ')=1\);

  4. 4.

    If \(\text {SAT}(\varphi )<1\), then \(\text {SAT}(\varphi ')<1-\rho \).

A 3SAT formula can be seen as a bipartite graph in which the left vertices are the variables, the right vertices are the clauses, and there is an edge between a variable and a clause whenever that variable appears in that clause. Such a bipartite graph has constant degree, since each clause contains exactly 3 variables and each variable is contained in at most d clauses. A useful result on bipartite graphs is the following.

Lemma 1

(Lemma 1 of Deligkas et al. [28]) Let (V, E) be a bipartite graph with \(|V|=n\), where U and W are the two disjoint and independent sets such that \(V=U\cup W\) (i.e., U and W are the two sides of the graph), and where each vertex has degree at most \(\nu \).

Suppose that U and W both have a constant fraction of the vertices, i.e., \(|U|=c\,\cdot \,n\) and \(|W|=(1-c)\,\cdot \,n\) for some constant \(c\in (0,1)\).

Then, we can efficiently find a partition \(\{S_i\}_{i=1}^{\sqrt{n}}\) of U, and a partition \(\{T_j\}_{j=1}^{\sqrt{n}}\) of W, such that each set has a size of at most \(2\sqrt{n}\), and for all i and j we have \(|(S_i\times T_j) \cap E|\le 2\,\nu ^2\).
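The guarantee of Lemma 1 is easy to verify for given partitions; the following sketch checks both the size bound and the edge-count bound on a toy matching (the partitioning construction itself is in Deligkas et al. [28] and is not reproduced here).

```python
from math import isqrt

def check_partition(n, S_parts, T_parts, E, nu):
    """Verify the guarantee of Lemma 1 on given partitions (a sketch):
    each part has at most 2*sqrt(n) vertices, and each pair (S_i, T_j)
    spans at most 2*nu^2 edges of the bipartite graph (V, E)."""
    bound = 2 * isqrt(n)
    if any(len(p) > bound for p in S_parts + T_parts):
        return False
    return all(
        sum(1 for (u, w) in E if u in S and w in T) <= 2 * nu ** 2
        for S in S_parts for T in T_parts
    )

# 4 vertices, degree <= 1: a perfect matching split into sqrt(4) = 2 parts per side.
E = [(0, 2), (1, 3)]
print(check_partition(4, [{0}, {1}], [{2}, {3}], E, nu=1))  # True
```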

Lemma 1 can be used to build the following free game.

Definition 1

(Definition 2 of Deligkas et al. [28]) Given a 3SAT formula \(\varphi \) of size n, we define a free game \(\mathcal {F}_\varphi \) as follows:

  1.

    Arthur applies Theorem 3 to obtain formula \(\varphi '\) of size \(n\cdot \text {polylog}(n)\);

  2.

    Let \(m=\sqrt{n\cdot \text {polylog}(n)}\). Arthur applies Lemma 1 to partition the variables of \(\varphi '\) in sets \(\{S_i\}_{i=1}^{m}\), and the clauses in sets \(\{T_j\}_{j=1}^{m}\);

  3.

    Arthur draws an index i uniformly at random from [m], and draws independently an index j uniformly at random from [m]. Then, he sends \(S_i\) to Merlin \(_1\) and \(T_j\) to Merlin \(_2\);

  4.

    Merlin \(_1\) responds by choosing a truth assignment for each variable in \(S_i\), and Merlin \(_2\) responds by choosing a truth assignment to every variable that is involved with a clause in \(T_j\);

  5.

    Arthur awards the Merlins payoff 1 if and only if the following conditions are both satisfied:

    • Merlin \(_2\)’s assignment satisfies all clauses in \(T_j\);

    • the two Merlins’ assignments are compatible, i.e., for each variable v appearing in \(S_i\) and each clause in \(T_j\) that contains v, Merlin \(_1\)’s assignment to v agrees with Merlin \(_2\)’s assignment to v;

Arthur awards payoff 0 otherwise.

When computing the Merlins’ rewards, the second condition is always satisfied when \(S_i\) and \(T_j\) share no variables. Moreover, when Merlin \(_1\)’s and Merlin \(_2\)’s assignments are not compatible, we say that they are in conflict. The following lemma shows that, if \(\varphi \) is unsatisfiable, then the value of the corresponding free game \(\mathcal {F}_\varphi \) is bounded away from 1.
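For concreteness, Arthur's verification rule (step 5 of Definition 1 together with the conflict convention above) can be sketched as follows; the clause encoding and the variable names in the example are illustrative assumptions.

```python
def arthur_payoff(S_i, T_j, xi1, xi2):
    """Arthur's verification for the free game of Definition 1 (a sketch).
    S_i: set of variable names sent to Merlin_1.
    T_j: list of clauses, each a list of (variable, negated) literals.
    xi1: dict variable -> bool, Merlin_1's assignment to S_i.
    xi2: dict variable -> bool, Merlin_2's assignment to the variables
         of the clauses in T_j.
    Payoff 1 iff xi2 satisfies every clause of T_j and the two
    assignments agree on every shared variable (no conflict)."""
    # Condition 1: every clause in T_j is satisfied by xi2.
    for clause in T_j:
        if not any(xi2[v] != neg for (v, neg) in clause):
            return 0
    # Condition 2: compatibility on variables of S_i appearing in T_j.
    shared = S_i & set(v for clause in T_j for (v, _) in clause)
    if any(xi1[v] != xi2[v] for v in shared):
        return 0
    return 1

# (x1 or not x2) and (x2 or x3), with S_i = {x2}.
T = [[("x1", False), ("x2", True)], [("x2", False), ("x3", False)]]
print(arthur_payoff({"x2"}, T, {"x2": True},
                    {"x1": True, "x2": True, "x3": False}))  # 1
```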

Lemma 2

(Lemma 2 by Deligkas et al. [28]) Given a 3SAT formula \(\varphi \), the following holds:

  • If \(\varphi \) is satisfiable, then \(\omega (\mathcal {F}_\varphi )=1\);

  • If \(\varphi \) is unsatisfiable, then \(\omega (\mathcal {F}_\varphi )\le 1-\rho /(2\nu )\).

We define \(\textsc {FreeGame}_\delta \) as a specific problem within the class of promise problems (see, e.g., Even et al. [37], Goldreich [38]).

Definition 2

A \(\textsc {FreeGame}_\delta \) problem is defined as:

  • INPUT: a free game \(\mathcal {F}_\varphi \) and a constant \(\delta \in (0,1)\).

  • OUTPUT: Yes-instances: \(\omega (\mathcal {F}_\varphi )=1\); No-instances: \(\omega (\mathcal {F}_\varphi )\le 1-\delta \).

Finally, we will need to assume the Exponential Time Hypothesis (ETH), which conjectures that any deterministic algorithm solving 3SAT requires \(2^{\Omega (n)}\) time, where n is the number of variables.

Theorem 4

[Theorem 2 by Deligkas et al. [28]] Assuming ETH, there exists a constant \(\delta =\rho /(2\nu )\) such that \(\textsc {FreeGame}_\delta \) requires time \(n^{{\tilde{\Omega }}(\log n)}\).

4.2 Error-Correcting Codes

A message of length \(k\in \mathbb {N}_+\) is encoded as a block of length \(n\in \mathbb {N}_+\), with \(n\ge k\). A code is a mapping \(e:\{0,1\}^k \rightarrow \{0,1\}^n\). Moreover, let \(\text {dist}(e(x),e(y))\) be the relative Hamming distance between e(x) and e(y), which is defined as the Hamming distance divided by n. The rate of a code is defined as \(R:=k/n\). Finally, the relative distance \(\text {dist}(e)\) of a code e is the maximum value \(d^\textsc {rel}\) such that \(\text {dist}(e(x),e(y))\ge d^\textsc {rel}\) for each pair of distinct \(x,y\in \{0,1\}^k\).
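These quantities can be illustrated with a short sketch; the repetition code used in the example is an assumption for illustration, and the exhaustive pairwise comparison is feasible only for tiny k.

```python
from itertools import product

def relative_distance(e, k):
    """Relative distance of a code e: {0,1}^k -> {0,1}^n, i.e. the
    minimum relative Hamming distance over all pairs of distinct
    messages (exhaustive, so only for tiny k)."""
    words = [e(m) for m in product([0, 1], repeat=k)]
    n = len(words[0])
    return min(
        sum(a != b for a, b in zip(u, w)) / n
        for i, u in enumerate(words) for w in words[i + 1:]
    )

# 3-fold repetition code: k = 1, n = 3, rate R = 1/3.
rep = lambda m: m * 3
print(relative_distance(rep, 1))  # 1.0
```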

In the following, we will need an infinite sequence of codes \(\mathcal {E}:=\{e_k:\{0,1\}^k\rightarrow \{0,1\}^n\}_{k\in \mathbb {N}_{+}}\) containing one code \(e_k\) for each possible message length k. The following result, due to Gilbert [35], can be used to construct an infinite sequence of codes with constant rate and constant relative distance.

Theorem 5

(Gilbert-Varshamov Bound) For every \(k \in \mathbb {N}_+\), \(0 \le d^\textsc {rel} < \frac{1}{2}\) and \(n \ge k\,/\,(1-\mathcal {H}_2(d^\textsc {rel}))\), there exists a code \(e:\{0,1\}^k\rightarrow \{0,1\}^n\) with \(\text {dist}(e)\ge d^\textsc {rel}\), where

$$\begin{aligned} \mathcal {H}_2(d^\textsc {rel}):=d^\textsc {rel} \log _2\left( \frac{1}{d^\textsc {rel}}\right) + (1-d^\textsc {rel})\log _2\left( \frac{1}{1-d^\textsc {rel}}\right) . \end{aligned}$$

Moreover, such a code can be computed in time \(2^{O(n)}\).
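The \(2^{O(n)}\) bound of Theorem 5 can be illustrated by the classical greedy argument: scan all \(2^n\) blocks and keep those far from everything kept so far. The following is a sketch of this greedy, Gilbert-style procedure for toy parameters, not the exact construction referenced by the paper.

```python
from itertools import product

def greedy_code(k, n, d):
    """Greedy construction in the spirit of the Gilbert bound (a sketch):
    scan all 2^n blocks, keep those at Hamming distance >= d from every
    block kept so far, and succeed if at least 2^k survive.  Exponential
    in n, matching the 2^{O(n)} bound of Theorem 5."""
    def ham(u, w):
        return sum(a != b for a, b in zip(u, w))
    code = []
    for w in product([0, 1], repeat=n):
        if all(ham(w, c) >= d for c in code):
            code.append(w)
        if len(code) == 2 ** k:
            break
    assert len(code) == 2 ** k, "parameters out of range for the greedy scan"
    # Encode message i (as an integer) as the i-th surviving block.
    return {i: code[i] for i in range(2 ** k)}

e = greedy_code(k=2, n=5, d=2)
print(len(e))  # 4 codewords, pairwise Hamming distance >= 2
```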

5 Maximum \(\epsilon \)-Feasible Subsystem of Linear Inequalities

First, we prove the following auxiliary result, which follows from Lemma 2 and will be useful in the remainder of the paper. Omitted proofs can be found in the Appendix.

Lemma 3

Given a 3SAT formula \(\varphi \),

if \(\varphi \) is unsatisfiable, then for each (possibly randomized) Merlin \(_2\)’s strategy \(\eta _2\) there exists a set \(S_i\) such that each Merlin \(_1\)’s assignment to the variables in \(S_i\) is in conflict with Merlin \(_2\)’s assignment with probability at least \(\rho /(2\nu )\).

Now, we introduce the maximum \(\epsilon \) -feasible subsystem of linear inequalities problem. Given a system of linear inequalities \(A\,{\textbf{x}}\ge 0\) with \(A\in [-1,1]^{n_{\text {row}}\times n_{\text {col}}}\) and \(\textbf{x}\in \Delta _{n_{\text {col}}}\), we study the problem of finding the largest subsystem of inequalities each of which is violated by at most \(\epsilon \). As we will show in Sect. 6, this problem presents some deep analogies with the problem of determining good posteriors in Bayesian persuasion problems.

Definition 3

(MFS) Given a matrix \(A\in [-1,1]^{n_{\text {row}}\times n_{\text {col}}}\), the problem of finding the maximum feasible subsystem of linear inequalities (MFS) reads as follows:

$$\begin{aligned} \max _{\textbf{x}^*\in \Delta _{n_{\text {col}}}}&\sum _{i\in [n_{\text {row}}]}I[w_i^*\ge 0] \,\,\text { s.t. } \textbf{w}^*= A\,\textbf{x}^*. \end{aligned}$$

We are interested in the problem of finding a vector \(\textbf{x}\) which yields at least the same number of feasible inequalities as MFS, under a relaxation of the constraints with respect to Definition 3.

Definition 4

(\(\epsilon \)-MFS) Given a matrix \(A\in [-1,1]^{n_{\text {row}}\times n_{\text {col}}}\), let

$$\begin{aligned} k^*:=\max _{\textbf{x}^*\in \Delta _{n_{\text {col}}}}&\sum _{i\in [n_{\text {row}}]}I[w_i^*\ge 0] \,\,\text { s.t. } \textbf{w}^*= A\,\textbf{x}^*. \end{aligned}$$

Then, the problem of finding the maximum \(\epsilon \) -feasible subsystem of linear inequalities ( \(\epsilon \)-MFS) amounts to finding a probability vector \(\textbf{x}\in \Delta _{n_{\text {col}}}\) such that, by letting \(\textbf{w}= A\,\textbf{x}\), it holds:

$$\begin{aligned} \sum _{i\in [n_{\text {row}}]}I[w_i\ge -\epsilon ]\ge k^*. \end{aligned}$$

This problem was previously studied by Cheng et al. [5]. In particular, they design a PTAS for the \(\epsilon \)-MFS problem guaranteeing the satisfaction of at least \(k^*-\epsilon \cdot n_{\text {row}}\) inequalities. This yields a bi-criteria PTAS for the MFS problem.
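Checking whether a candidate \(\textbf{x}\) attains a given number of \(\epsilon \)-feasible rows is straightforward; a minimal sketch, with a hypothetical 3-by-2 system, is:

```python
def count_eps_feasible(A, x, eps):
    """Number of rows i with w_i >= -eps where w = A x: the quantity
    that an eps-MFS solution must push to at least k*."""
    assert abs(sum(x) - 1.0) < 1e-9 and all(xi >= 0 for xi in x)
    return sum(
        1 for row in A
        if sum(a * xi for a, xi in zip(row, x)) >= -eps
    )

# Hypothetical system: the first two rows conflict, the third is hopeless.
A = [[1.0, -1.0],
     [-1.0, 1.0],
     [-1.0, -1.0]]
x = [0.5, 0.5]           # w = (0, 0, -1)
print(count_eps_feasible(A, x, eps=0.1))  # 2
```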

5.1 Upper-bound on \(\epsilon \)-MFS

First, we show that \(\epsilon \)-MFS can be exactly solved in \(n^{O(\log n)}\) steps for every fixed \(\epsilon >0\). We introduce the following auxiliary definition.

Definition 5

(k-uniform distribution) A probability distribution \(\textbf{x}\in \Delta _X\) is k-uniform if and only if it is the average of a multiset of k basis vectors in an |X|-dimensional space.

Equivalently, each entry \(x_i\) of a k-uniform distribution has to be a multiple of 1/k. Then, the following result holds.
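Enumerating k-uniform distributions amounts to enumerating multisets of k basis vectors; a minimal sketch:

```python
from itertools import combinations_with_replacement
from collections import Counter

def k_uniform_distributions(n, k):
    """All k-uniform distributions over n outcomes: averages of
    multisets of k basis vectors, i.e. vectors whose entries are
    multiples of 1/k and sum to 1."""
    for multiset in combinations_with_replacement(range(n), k):
        counts = Counter(multiset)
        yield tuple(counts[i] / k for i in range(n))

dists = list(k_uniform_distributions(n=3, k=2))
print(len(dists))  # C(3+2-1, 2) = 6 such distributions
```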

Proposition 6

\(\epsilon \)-MFS can be solved in \(n^{O(\log n)}\) steps.

Proof

Denote by \(\textbf{x}^*\) the optimal solution of \(\epsilon \)-MFS.

Let \({\tilde{\textbf{x}}}\in \Delta _{n_{\text {col}}}\) be the empirical distribution of k i.i.d. samples drawn from probability distribution \(\textbf{x}^*\).

Moreover, let \(\textbf{w}^*:=A\,\textbf{x}^*\) and \({\tilde{\textbf{w}}}:=A\,{\tilde{\textbf{x}}}\).

By Hoeffding’s inequality we have

$$\begin{aligned} \text {Pr}(w_i^*-{\tilde{w}}_i\ge \epsilon )\le e^{-2k\epsilon ^2} \end{aligned}$$

for each \(i\in [n_{\text {row}}]\).

Then, by the union bound, we get

$$\begin{aligned} \text {Pr}(\exists i \text { s.t. } w_i^*-{\tilde{w}}_i\ge \epsilon )\le n_{\text {row}}\cdot e^{-2k\epsilon ^2}. \end{aligned}$$

Finally, we can write

$$\begin{aligned} \text {Pr}(w_i^*-{\tilde{w}}_i\le \epsilon ~\forall i\in [n_{\text {row}}])\ge 1-n_{\text {row}}\cdot e^{-2k\epsilon ^2}. \end{aligned}$$

Thus, setting \(k= \log n_{\text {row}}/\epsilon ^2\) ensures the existence of a vector \({\tilde{\textbf{x}}}\) guaranteeing that, if \(w_i^*\ge 0\), then \({\tilde{w}}_i\ge -\epsilon \).

Since \({\tilde{\textbf{x}}}\) is k-uniform by construction, we can find it by enumerating over all the \(O((n_{\text {col}})^k)\) k-uniform probability vectors where \(k=\log n_{\text {row}}/\epsilon ^2\). Trivially, this task can be performed in \(n^{\log n_{\text {row}}/\epsilon ^2}\) steps and, therefore, in \(n^{O(\log n)}\) steps. \(\square \)
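The enumeration in the proof can be turned into the following brute-force sketch (illustrative only: the number of k-uniform vectors makes it practical just for toy instances).

```python
from itertools import combinations_with_replacement
from collections import Counter
from math import ceil, log

def eps_mfs(A, eps):
    """Quasi-polynomial search in the spirit of Proposition 6 (a sketch):
    enumerate all k-uniform distributions with k = ceil(log(n_row)/eps^2)
    and return the one maximizing the number of rows with w_i >= -eps.
    By the sampling argument above, some k-uniform vector loses at most
    eps per row against the optimum."""
    n_row, n_col = len(A), len(A[0])
    k = max(1, ceil(log(n_row) / eps ** 2))
    best_x, best_count = None, -1
    for multiset in combinations_with_replacement(range(n_col), k):
        counts = Counter(multiset)
        x = [counts[j] / k for j in range(n_col)]
        w = [sum(a * xj for a, xj in zip(row, x)) for row in A]
        feasible = sum(1 for wi in w if wi >= -eps)
        if feasible > best_count:
            best_x, best_count = x, feasible
    return best_x, best_count

A = [[1.0, -1.0], [-1.0, 1.0]]
x, count = eps_mfs(A, eps=0.5)
print(count)  # 2: both rows become (-0.5)-feasible, e.g. near x = (0.5, 0.5)
```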

5.2 Lower-bound on \(\epsilon \)-MFS

Now we show that \(\epsilon \)-MFS requires at least \(n^{{\tilde{\Omega }}(\log n)}\) steps. In doing so, we match the upper bound stated by Proposition 6 up to polylogarithmic factors of \(\log n\) in the exponent.

Theorem 7

Assuming ETH, there exists a constant \(\epsilon >0\) such that solving \(\epsilon \)-MFS requires time \(n^{{\tilde{\Omega }}(\log n)}\).

Proof

Overview. We provide a polynomial-time reduction from \(\textsc {FreeGame}_\delta \) (Definition 2) to \(\epsilon \)-MFS, where \(\epsilon =\delta \,/\,26=\rho \,/\, (52 \nu )\) (see Sect. 4.1 for the definition of parameters \(\delta , \rho , \nu \)). We show that, given a free game instance \(\mathcal {F}_\varphi \), it is possible to build a matrix A such that, for a certain value \(k^*\), the following holds:

  (i)

    If \(\omega (\mathcal {F}_\varphi )=1\), then there exists a vector \(\textbf{x}\) such that

    $$\begin{aligned} \sum _{i\in [n_{\text {row}}]} I[w_i\ge 0]=k^*, \end{aligned}$$
    (4)

    where \(\textbf{w}=A\,\textbf{x}\);

  (ii)

    If \(\omega (\mathcal {F}_\varphi )\le 1-\delta \), then all vectors \(\textbf{x}\) are such that

    $$\begin{aligned} \sum _{i\in [n_{\text {row}}]} I[w_i\ge -\epsilon ]<k^*, \end{aligned}$$
    (5)

    with \(\textbf{w}= A\,\textbf{x}\).

Construction. In the free game \(\mathcal {F}_\varphi \), Arthur sends a set of variables \(S_i\) to Merlin \(_1\) and a set of clauses \(T_j\) to Merlin \(_2\), where \(i,j\in [m]\), \(m=\sqrt{n\,\text {polylog}(n)}\). Then, Merlin \(_1\)’s (resp., Merlin \(_2\)’s) answer is denoted by \(\xi _1\in \Xi _1\) (resp., \(\xi _2\in \Xi _2\)). The system of linear inequalities used in the reduction has a vector of variables \(\textbf{x}\) structured as follows.

  1.

    Variables corresponding to Merlin \(_2\)’s answers. There is a variable \(x_{T_j,\xi _2}\) for each \(j\in [m]\) and, due to Lemma 1 and assuming \(|T_j|=2m\), it holds \(\xi _2\in \Xi _2=\{0,1\}^{6m}\) (if \(|T_j|< 2m\), we extend \(\xi _2\) with a sufficient number of extra bits).

  2.

    Variables corresponding to Merlin \(_1\)’s answers. We need to introduce some further machinery to augment the dimensionality of \(\Xi _1\) through a viable mapping. Let \(e:\{0,1\}^{2m}\rightarrow \{0,1\}^{8m}\) be the code defined in Theorem 5 with rate 1/4 and relative distance \(\text {dist}(e)\ge 1/5\). We can safely assume that \(|S_i| = 2\,m\) and \(\xi _1\in \Xi _1=\{0,1\}^{2\,m}\) (if \(|S_i|< 2\,m\), we extend \(\xi _1\) with a sufficient number of extra bits). Then, \(e(\xi _1)\) is the 8m-dimensional encoding of answer \(\xi _1\) via code e. Let \(e(\xi _1)_j\) be the j-th bit of vector \(e(\xi _1)\). We have a variable \(x_{i,\ell }\) for each index \(i\in [8m]\) and \(\ell :=(\ell _j)_{j\in [m]}\in \{0,1\}^{m}\). These \(x_{i,\ell }\) variables can be interpreted as follows. Suppose we have an encoding of an answer for each possible set \(S_j\). There are m such encodings, each of them having 8m bits. Then, it holds \(x_{i,\ell }>0\) if and only if, for each \(j\in [m]\), the i-th bit of the encoding corresponding to \(S_j\) is \(\ell _j\).

There is a total of \(m\cdot 2^m\cdot (2^{5m}+8)\) variables. Matrix A has a number of columns equal to the number of variables. We denote with \(A_{\cdot ,(T_j,\xi _2)}\) the entry in row \((\cdot )\) and column corresponding to variable \(x_{T_j,\xi _2}\). Analogously, \(A_{\cdot ,(i,\ell )}\) is the entry in row \((\cdot )\) and column corresponding to variable \(x_{i,\ell }\). Rows are grouped in four types, denoted by \(\{\texttt{t}_i\}_{i=1}^4\). We write \(A_{\texttt{t}_i,\cdot }\) when referring to an entry of any row of type \(\texttt{t}_i\). Further arguments may be added as a subscript to identify specific entries of A. Rows are structured as follows.

  1.

    Rows of type \(\texttt{t}_1\): there are q rows of type \(\texttt{t}_1\) such that \(A_{\texttt{t}_1,(T_j,\xi _2)}=1\) for each \(j\in [m],\xi _2\in \Xi _2\), and \(-1\) otherwise (the value of q is specified later in the proof).

  2.

    Rows of type \(\texttt{t}_2\): there are q rows for each subset \(\mathcal {T}\subseteq \{T_j\}_{j\in [m]}\) with cardinality m/2 (i.e., there is a total of \(q\cdot \left( {\begin{array}{c}m\\ m/2\end{array}}\right) \) rows of type \(\texttt{t}_2\)). Then, the following holds for each \(\mathcal {T}\):

    $$\begin{aligned} \begin{array}{ll} A_{(\texttt{t}_2,\mathcal {T}),(T_j,\xi _2)}=&{}{\left\{ \begin{array}{ll} -1 &{} \text { if } T_j\in \mathcal {T},\xi _2\in \Xi _2\\ 1 &{} \text { if } T_j\notin \mathcal {T},\xi _2\in \Xi _2\end{array}\right. } \text { and }\\ A_{(\texttt{t}_2,\mathcal {T}),(i,\ell )}=&{}0 \text { for each } i\in [8m], \ell \in \{0,1\}^{m}. \end{array} \end{aligned}$$
  3.

    Rows of type \(\texttt{t}_3\): there are q rows of type \(\texttt{t}_3\) for each subset of 4m indices \(\mathcal {I}\) drawn from [8m], for a total of \(q\cdot \left( {\begin{array}{c}8m\\ 4m\end{array}}\right) \) rows. For each subset of indices \(\mathcal {I}\) we have:

    $$\begin{aligned} \begin{array}{ll} A_{(\texttt{t}_3,\mathcal {I}),(T_j,\xi _2)}= &{} 0 \quad \text { for each } T_j,\xi _2\text { and } \\ A_{(\texttt{t}_3,\mathcal {I}),(i,\ell )}= &{} {\left\{ \begin{array}{ll} -1 &{} \text { if } i\in \mathcal {I},\ell \in \{0,1\}^m\\ 1 &{} \text { if } i\notin \mathcal {I},\ell \in \{0,1\}^m \end{array}\right. }. \end{array} \end{aligned}$$
  4.

    Rows of type \(\texttt{t}_4\): there is a row of type \(\texttt{t}_4\) for each \(S_i\) and \(\xi _1\). Each of these rows is such that:

    $$\begin{aligned} \begin{array}{ll} A_{(\texttt{t}_4,S_i,\xi _1),(T_j,\xi _2)} = &{} {\left\{ \begin{array}{ll} -1/2 &{} \text {if } \mathcal {V}(S_i,T_j,\xi _1,\xi _2)=1\\ -1 &{} \text {otherwise} \end{array}\right. } \quad \text { and } \\ A_{(\texttt{t}_4,S_i,\xi _1),(j,\ell )} = &{} {\left\{ \begin{array}{ll} 1/2 &{} \text {if } e(\xi _1)_j=\ell _i \\ -1 &{} \text {otherwise} \end{array}\right. }. \end{array} \end{aligned}$$

Finally, we set \(k^*= \left( 1+\left( {\begin{array}{c}m\\ m/2\end{array}}\right) +\left( {\begin{array}{c}8\,m\\ 4\,m\end{array}}\right) \right) q + m\) and \(q\gg m\) (for example, \(q=2^{10m}\)). We say that row i satisfies the \(\epsilon \)-MFS condition for a certain \(\textbf{x}\) if \(w_i\ge -\epsilon \), where \(\textbf{w}=A\,\textbf{x}\) (in the following, we will also consider \(w_i\ge 0\) as an alternative condition). We require at least \(k^*\) rows to satisfy the \(\epsilon \)-MFS condition. Then, all rows of types \(\texttt{t}_1\), \(\texttt{t}_2\), \(\texttt{t}_3\) and at least m rows of type \(\texttt{t}_4\) must be such that \(w_i\) satisfies the \(\epsilon \)-MFS condition.
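As a sanity check on the above counts, one can tabulate the number of variables and rows of A for small m (a sketch; the actual reduction sizes are exponential in m, and q here is a toy value rather than \(2^{10m}\)).

```python
from math import comb

def reduction_sizes(m, q):
    """Variable and row counts of the matrix A built in the proof of
    Theorem 7 (a sanity-check sketch)."""
    n_vars = m * 2 ** (6 * m) + 8 * m * 2 ** m      # = m * 2^m * (2^{5m} + 8)
    rows_t1 = q
    rows_t2 = q * comb(m, m // 2)
    rows_t3 = q * comb(8 * m, 4 * m)
    rows_t4 = m * 2 ** (2 * m)                      # one per pair (S_i, xi_1)
    k_star = (1 + comb(m, m // 2) + comb(8 * m, 4 * m)) * q + m
    return n_vars, rows_t1 + rows_t2 + rows_t3 + rows_t4, k_star

n_vars, n_rows, k_star = reduction_sizes(m=2, q=4)
print(n_vars == 2 * 2 ** 2 * (2 ** 10 + 8))  # True: matches m * 2^m * (2^{5m} + 8)
```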

Completeness. Given a satisfying assignment \(\zeta \) of \(\varphi '\), we build vector \(\textbf{x}\) as follows. Let \(\zeta _{T_j}\) be the partial assignment obtained by restricting \(\zeta \) to the variables in the clauses of \(T_j\) (if \(|T_j|<2m\) we pad \(\zeta _{T_j}\) with bits 0 until \(\zeta _{T_j}\) has length 6m). Then, we set \(x_{T_j,\zeta _{T_j}}=1/(2m)\). Moreover, for each \(i\in [8m]\) and \(\ell ^i=(e(\zeta _{S_1})_i,\ldots ,e(\zeta _{S_m})_i)\), we set \(x_{i,\ell ^i}=1/(16m)\). We show that \(\textbf{x}\) is such that there are at least \(k^*\) rows i with \(w_i\ge 0\) (Condition (4)). First, each row i of type \(\texttt{t}_1\) is such that \(w_i=0\) since \(\sum _{T_j,\xi _2} x_{T_j,\xi _2}=\sum _{i,\ell } x_{i,\ell }=1/2\). For each \(T_j\), \(\sum _{\xi _2}x_{T_j,\xi _2}=1/(2m)\). Then, for each subset \(\mathcal {T}\) of \(\{T_j\}_{j\in [m]}\), we have \(\sum _{\xi _2,T_j\in \mathcal {T}} x_{T_j,\xi _2}=1/4\). This implies that each row i of type \(\texttt{t}_2\) is such that \(w_i=0\). A similar argument holds for rows of type \(\texttt{t}_3\). Finally, we show that for each \(S_i\) there is at least one row i of type \(\texttt{t}_4\) such that \(w_i\ge 0\). Take the row corresponding to \((S_i,\zeta _{S_i})\). For each \(x_{b,\ell }>0\) where \(b\in [8m]\) and \(\ell \in \{0,1\}^m\), it holds \(e(\zeta _{S_i})_b=\ell _i\). Then, there are 8m columns played with probability \(1/(16m)\) with value 1/2, i.e., \(\sum _{b,\ell } A_{(\texttt{t}_4,S_i,\zeta _{S_i}),(b,\ell )}\,x_{b,\ell }=1/4\). Moreover, for each \((T_j,\zeta _{T_j})\), it holds \(\mathcal {V}(S_i,T_j,\zeta _{S_i},\zeta _{T_j})=1\). Then, \(\sum _{T_j,\xi _2}A_{(\texttt{t}_4,S_i,\zeta _{S_i}),(T_j,\zeta _{T_j})}\,x_{T_j,\xi _2}=-1/4\). This concludes the proof of completeness.

Soundness. We show that, if \(\omega (\mathcal {F}_\varphi )\le 1-\delta \), there is no probability distribution \(\textbf{x}\) such that

$$\begin{aligned} \sum _{i\in [n_{\text {row}}]} I[w_i\ge -\epsilon ]\ge k^*, \end{aligned}$$
(6)

with \(\textbf{w}=A\,\textbf{x}\). Assume, by contradiction, that one such vector \(\textbf{x}\) exists. For the sake of clarity, we summarize the structure of the proof:

  (i)

    We show that the probability assigned by \(\textbf{x}\) to columns with index \((T_j,\xi _2)\) has to be close to 1/2, and the same has to hold for columns of type \((i,\ell )\).

  (ii)

    We show that \(\textbf{x}\) has to assign probability almost uniformly among the \(T_j\)s and the indices i of the encoding of \(\Xi _1\) (Lemmas 5 and 6 below, respectively). Intuitively, this resembles the fact that, in \(\mathcal {F}_\varphi \), Arthur draws questions \(T_j\) according to a uniform probability distribution.

  (iii)

    For each \(S_i\), there is at most one row \((\texttt{t}_4,S_i,\xi _1)\) such that \(w_{(\texttt{t}_4,S_i,\xi _1)}\ge -\epsilon \) (Lemma 7). Together with the hypothesis that at least m rows of type \(\texttt{t}_4\) satisfy the \(\epsilon \)-MFS condition, this implies that there exists exactly one such row for each \(S_i\).

  (iv)

    Finally, we show that the above construction leads to a contradiction with Lemma 3 for a suitable free game.

Before providing the details of the four above steps, we introduce the following result, due to Babichenko et al. [34].

Lemma 4

(Lemma 2 of Babichenko et al. [34]) Let \({\textbf{v}}\in \Delta ^n\) be a probability vector, and \({\textbf{u}}\) be the n-dimensional uniform probability vector.

If \(||{\textbf{v}}-{\textbf{u}}||>c\), then there exists a subset of indices \(\mathcal {I}\subseteq [n]\) such that \(|\mathcal {I}|=n/2\) and \(\sum _{i\in \mathcal {I}}{\textbf{v}}_i>\frac{1}{2}+\frac{c}{4}\).

Then, we proceed with the following steps (the proofs of the auxiliary results can be found in Appendix A.2):

  (i)

    Equation 6 requires all rows i of type \(\texttt{t}_1\), \(\texttt{t}_2\), \(\texttt{t}_3\) to be such that \(w_i\ge -\epsilon \). This implies that, for rows of type \(\texttt{t}_1\), it holds

    $$\begin{aligned} \sum _{T_j,\xi _2} x_{T_j,\xi _2}\ge \frac{1}{2}\,\big (1-\epsilon \big ). \end{aligned}$$
    (7)

    If, by contradiction, this inequality did not hold, each row i of type \(\texttt{t}_1\) would be such that \(w_i<1/2-\epsilon /2-(1/2+\epsilon /2)=-\epsilon \), thus violating Eq. 6. Moreover, Eq. 6 implies that at least one row \((\texttt{t}_4,S_i,\xi _1)\) has \(w_{(\texttt{t}_4,S_i,\xi _1)}\ge -\epsilon \). Therefore, it holds \(\sum _{i,\ell }x_{i,\ell }\ge 1/2-\epsilon \). Indeed, if, by contradiction, this condition did not hold, all rows of type \(\texttt{t}_4\) would have \(w_i<1/2\,(1/2-\epsilon )-1/2\,(1/2+\epsilon )=-\epsilon \).

  (ii)

    Let \(\textbf{v}_1\in \Delta _m\) be the probability vector defined as

    $$\begin{aligned} v_{1,j}:=\frac{\sum _{\xi _2} x_{T_j,\xi _2}}{\sum _{j,\xi _2}x_{T_j,\xi _2}}, \end{aligned}$$

    and \({\tilde{\textbf{v}}}\) be a uniform probability vector of suitable dimension. The following result shows that a bounded \(\ell _1\) distance between \(\textbf{v}_1\) and \({\tilde{\textbf{v}}}\) is a necessary condition for Eq. 6 to be satisfied.

Lemma 5

If \(||\textbf{v}_1-{\tilde{\textbf{v}}}||_1>16\epsilon \), there exists a row i of type \(\texttt{t}_2\) such that \(w_i<-\epsilon \).

Let \(\textbf{v}_2\in \Delta _{[8m]}\) be the probability vector defined as

$$\begin{aligned}v_{2,i}:=\frac{\sum _{\ell }x_{i,\ell }}{\sum _{i,\ell }x_{i,\ell }},\end{aligned}$$

and \({\tilde{\textbf{v}}}\) be a suitable uniform probability vector. Moreover, the following holds.

Lemma 6

If \(||\textbf{v}_2-{\tilde{\textbf{v}}}||_1>16\epsilon \), there exists a row i of type \(\texttt{t}_3\) such that \(w_i<-\epsilon \).

In order to satisfy Eq. 6, all rows i of type \(\texttt{t}_2\) and \(\texttt{t}_3\) have to be such that \(w_i\ge -\epsilon \). Then, by Lemmas 5 and 6, it holds that \(||{\textbf{v}}_{1}-{\tilde{\textbf{v}}}||_{1}\le 16\,\epsilon \) and \(||{\textbf{v}}_{2}-{\tilde{\textbf{v}}}||_{1}\le 16\,\epsilon \).

  (iii)

    We show that, for each \(S_i\), there exists at most one row \((\texttt{t}_4, S_i,\xi _1)\) for which \(w_{(\texttt{t}_4,S_i,\xi _1)}\ge -\epsilon \).

Lemma 7

For each \(S_i\), \(i\in [m]\), there exists at most one row \((\texttt{t}_4, S_i,\xi _1)\) such that  \(w_{(\texttt{t}_4,S_i,\xi _1)}\ge -\epsilon \).

Then, there are at least m rows \((\texttt{t}_4,S_i,\xi _1)\) such that \(w_{(\texttt{t}_4,S_i,\xi _1)}\ge -\epsilon \) and, by Lemma 7, we get that there exists exactly one such row for each \(S_i\), \(i\in [m]\). Therefore, for each \(S_i\), there exists \(\xi _1^i\in \Xi _1\) such that

$$\begin{aligned} \sum _{(T_j,\xi _2):\mathcal {V}(S_i,T_j,\xi _1^i,\xi _2)=1}x_{(T_j,\xi _2)}\ge \frac{1}{2}-4\,\epsilon . \end{aligned}$$

Notice that, if this condition did not hold, by Step (i) we would obtain

$$\begin{aligned} w_{\texttt{t}_4,S_i,\xi _1^i}<-\frac{1}{2}\left( \frac{1}{2}-4\,\epsilon \right) -\frac{7}{2}\epsilon +\frac{1}{2}\left( \frac{1}{2}+\frac{\epsilon }{2}\right) =-\epsilon , \end{aligned}$$

which would go against the satisfiability of Eq. 6.

  (iv)

    Finally, let \(\mathcal {F}_\varphi ^*\) be a free game in which Arthur (i.e., the verifier) chooses question \(T_j\) with probability \(v_{1,j}\) as defined in Step (ii), and Merlin \(_2\) (i.e., the second prover) answers \(\xi _2\) with probability \(x_{T_j,\xi _2}/v_{1,j}\). In this setting (i.e., \(\mathcal {F}_\varphi ^*\)), given question \(S_i\) to Merlin \(_1\), the two provers will provide compatible answers with probability

    $$\begin{aligned} \mathbb {P}\left( \mathcal {V}^*(S_i,T_j,\xi _1^i,\xi _2)=1\mid S_i\right) =\frac{1/2-4\,\epsilon }{\sum _{j,\xi _2}x_{T_j,\xi _2}}\ge \frac{1/2-4\,\epsilon }{1/2+\epsilon }\ge 1-10\,\epsilon , \end{aligned}$$

    where the first inequality holds by Eq. 7 at Step (i). In a canonical (as in Definition 1) free game \(\mathcal {F}_\varphi \), Arthur picks questions according to a uniform probability distribution. Therefore, the main difference between \(\mathcal {F}_\varphi \) and \(\mathcal {F}_\varphi ^*\) is that, in the latter, Arthur draws questions for Merlin \(_2\) from \(\textbf{v}_1\), which may not be a uniform probability distribution. However, we know that the differences between \(\textbf{v}_1\) and a uniform probability vector must be limited. Specifically, by Lemma 5, we have \(||\textbf{v}_1-{\tilde{\textbf{v}}}||_1\le 16\,\epsilon \). Then, if Merlin \(_1\) and Merlin \(_2\) applied in \(\mathcal {F}_\varphi \) the strategies we described for \(\mathcal {F}_\varphi ^*\), their answers would be compatible with probability at least \(\mathbb {P}(\mathcal {V}(S_i,T_j,\xi _1^i,\xi _2)=1\mid S_i)\ge 1-26\,\epsilon \), for each \(S_i\). Finally, by picking \(\epsilon =\rho /(52\,\nu )\), we reach a contradiction with Lemma 3.

This concludes the proof. \(\square \)

6 Hardness of \((\alpha ,\epsilon )\)-Persuasion

We show that a public signaling scheme approximating the value of the optimal one cannot be computed in polynomial time even if we allow it to be \(\epsilon \)-persuasive (see Eq. 2). Specifically, assuming ETH, computing an \((\alpha ,\epsilon )\)-persuasive signaling scheme requires at least \(n^{{\tilde{\Omega }}(\log n)}\) steps, where the dimension of the instance is \(n=O({\bar{n}} \, d)\). We prove this result for the specific case of the k-voting problem introduced in Sect. 3. Besides its practical applicability, this problem is particularly instructive in highlighting the strong connection between the problem of finding suitable posteriors and the \(\epsilon \)-MFS problem, as discussed in the following lemma. An analogous observation was also made by Cheng et al. [5].

Lemma 8

Given a k-voting instance, the problem of finding a posterior \(\textbf{p}\in \Delta _\Theta \) such that \(W_\epsilon (\textbf{p})\ge k\) is equivalent to finding an \(\epsilon \)-feasible subsystem of k linear inequalities over the simplex when \(A\in [-1,1]^{{\bar{n}}\times d}\) is such that:

$$\begin{aligned} A_{r,\theta }=u^r_\theta (a_0)-u_\theta ^r(a_1) \quad \text { for each } r\in \mathcal {R},\theta \in \Theta . \end{aligned}$$
(8)

Proof

By setting \(\textbf{x}=\textbf{p}\), it directly follows that \(\sum _{i\in [{\bar{n}}]}I[A_i\,\textbf{x}\ge -\epsilon ]\ge k\) if and only if \(W_\epsilon (\textbf{p})\ge k\). \(\square \)
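The equivalence of Lemma 8 can be sketched directly: build the matrix A of Eq. 8 from the receivers' utility differences and count the rows that clear the \(-\epsilon \) threshold under a posterior \(\textbf{p}\). The two-receiver payoff table below is a hypothetical example, not an instance from the paper.

```python
def winning_receivers(u, p, eps):
    """Count receivers r with sum_theta p[theta] * (u[r][theta][0] -
    u[r][theta][1]) >= -eps, i.e. W_eps(p) in the notation of Lemma 8.
    Here u[r][theta] = (payoff of a0, payoff of a1) is a hypothetical
    payoff table; the matrix A of Eq. 8 is exactly the difference."""
    A = [[u_r[th][0] - u_r[th][1] for th in range(len(p))] for u_r in u]
    return sum(
        1 for row in A
        if sum(a * p_th for a, p_th in zip(row, p)) >= -eps
    )

# Two receivers, two states; receiver 0 prefers a0 in state 0 only.
u = [[(1.0, 0.0), (0.0, 1.0)],   # A row: ( 1, -1)
     [(0.0, 1.0), (0.0, 1.0)]]   # A row: (-1, -1)
print(winning_receivers(u, p=[0.6, 0.4], eps=0.0))  # 1
```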

The above lemma shows that deciding if there exists a posterior \(\textbf{p}\) such that \(W(\textbf{p})\ge k\) or if all the posteriors have \(W_\epsilon (\textbf{p})<k\) (i.e., deciding if the utility of the sender can be greater than zero) is as hard as solving the \(\epsilon \)-MFS problem. More precisely, if an \(\epsilon \)-MFS instance does not admit any solution, then there does not exist a posterior guaranteeing a strictly positive winning probability for the sender’s preferred candidate. On the other hand, if an \(\epsilon \)-MFS instance admits a solution, there exists a signaling scheme where at least one of the induced posteriors guarantees strictly positive winning probability to the sender’s preferred candidate. However, the above connection between the \(\epsilon \)-MFS problem and the k-voting problem is not sufficient to prove the inapproximability of the k-voting problem, as the probability whereby this posterior is reached may be arbitrarily small.

Luckily enough, the next theorem shows that it is possible to strengthen the inapproximability result by constructing an instance in which, when 3SAT is satisfiable, there exists a signaling scheme such that all the induced posteriors satisfy \(W(\textbf{p})\ge k\) (i.e., the sender’s preferred candidate wins with a probability of 1). The main idea is to suitably extend the construction of Theorem 7 with an additional set of states \(\{\theta _{\textbf{d}}\}_{\textbf{d}\in \{0,1\}^{7m}}\), where we can see each vector \(\textbf{d}\) as the concatenation of a subvector \(\textbf{d}_S\in \{0,1\}^m\) and a subvector \(\textbf{d}_T\in \{0,1\}^{6m}\). Moreover, we need to extend the set of receivers. In particular, we replace each receiver relative to a set \(S_i\) and an answer \(\xi _1\) with a set including a receiver for each \(\textbf{d}\). The new receivers’ payoffs are defined as follows. Let \(\oplus \) be the XOR operator. The payoff of the receiver relative to \(S_i\), \(\xi _1\), and \(\textbf{d}\) in a state \(\theta _{(T_j,\xi _2 \oplus d_T)}\) is equivalent to the payoff of the original receiver in the state \(\theta _{(T_j,\xi _2)}\), while a similar procedure is used for the payoffs in the states \(\theta _{(i,\ell )}\). Then, the signaling scheme employs a signal \(s_{\textbf{d}}\) for each \(\textbf{d}\). Each signal \(s_{\textbf{d}}\) defines which of the \(2^{7m}\) games we are playing. All these games are equivalent since we are simply changing the meaning of the states: for example, a state \(\theta _{(T_j,\xi _2 \oplus d_T)}\) is equivalent to the original state \(\theta _{(T_j,\xi _2)}\). Using this construction, all the signals induce a posterior in which at least k voters vote for \(c_0\), while in the original game only one signal induces a posterior satisfying this condition.
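The XOR relabeling at the heart of this construction can be sketched as follows; the state labels and tuple encodings are illustrative assumptions.

```python
def relabel_xor(theta, d_T):
    """State relabeling used in the extended construction (a sketch):
    in the game indexed by d, the state theta_{(T_j, xi_2)} plays the
    role of theta_{(T_j, xi_2 XOR d_T)} of the original game."""
    T_j, xi2 = theta
    return (T_j, tuple(b ^ db for b, db in zip(xi2, d_T)))

# XOR-ing twice with the same d_T recovers the original state label.
print(relabel_xor(("T1", (1, 0, 1)), (1, 1, 0)))  # ('T1', (0, 1, 1))
```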

Theorem 8

Given a k-voting instance and assuming ETH, there exists a constant \(\epsilon ^*>0\) such that, for any \(0<\epsilon \le \epsilon ^*\), finding an \((\alpha ,\epsilon )\)-persuasive signaling scheme requires \(n^{{\tilde{\Omega }}(\log n)}\) steps for any multiplicative or additive factor \(\alpha \in (0,1)\).

Proof

Overview. By following the proof of Theorem 7, we can provide a polynomial-time reduction from \(\textsc {FreeGame}_\delta \) to the problem of finding an \(\epsilon \)-persuasive signaling scheme in k-voting, with \(\epsilon =\delta /780=\rho /(1560\,\nu )\). Specifically, if \(\omega (\mathcal {F}_\varphi )=1\), there exists a signaling scheme guaranteeing the sender an expected value of 1. Otherwise, if \(\omega (\mathcal {F}_\varphi )\le 1-\delta \), then all posteriors are such that \(W_\epsilon (\textbf{p})<k\) (i.e., the sender cannot obtain more than 0).

Construction. The k-voting instance has the following possible states of nature.

  1.

    \(\theta _{(T_j,\xi _2)}\) for each set of clauses \(T_j\), \(j\in [m]\), and answer \(\xi _2\in \Xi _2=\{0,1\}^{6m}\). Let \(e:\{0,1\}^{2m}\rightarrow \{0,1\}^{8m}\) be an encoding function with \(R=1/4\) and \(\text {dist}(e)\ge 1/5\) (as in the proof of Theorem 7). We have a state \(\theta _{(i,\ell )}\) for each \(i\in [8\,m]\), and \(\ell =(\ell _1,\ldots ,\ell _m)\in \{0,1\}^m\).

  2.

    There is a state \(\theta _{\textbf{d}}\) for each \(\textbf{d}\in \{0,1\}^{7\,m}\). It is useful to see vector \(\textbf{d}\) as the union of the subvector \(\textbf{d}_S\in \{0,1\}^m\) and the subvector \(\textbf{d}_T\in \{0,1\}^{6m}\).

The common prior \(\mu \) is such that:

$$\begin{aligned} \mu _{\theta _{(T_j,\xi _2)}}&=\frac{1}{m \,2^{2+6m}} \quad \text {for each }\theta _{(T_j,\xi _2)}, \\ \mu _{\theta _{(i,\ell )}}&=\frac{1}{m\,2^{5+m}} \quad \text {for each }\theta _{(i,\ell )}, \\ \mu _{\theta _{\textbf{d}}}&=\frac{1}{2^{1+7m}} \quad \text {for each }\theta _{\textbf{d}}. \end{aligned}$$
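As a quick sanity check, the prior above indeed sums to 1: mass 1/4 on the \(\theta _{(T_j,\xi _2)}\) states, 1/4 on the \(\theta _{(i,\ell )}\) states, and 1/2 on the \(\theta _{\textbf{d}}\) states. A one-function sketch:

```python
def prior_mass(m):
    """Total mass of the common prior mu defined above: there are
    m*2^{6m} states theta_{(T_j,xi_2)}, 8m*2^m states theta_{(i,l)},
    and 2^{7m} states theta_d."""
    mass_T = m * 2 ** (6 * m) / (m * 2 ** (2 + 6 * m))   # = 1/4
    mass_l = 8 * m * 2 ** m / (m * 2 ** (5 + m))         # = 1/4
    mass_d = 2 ** (7 * m) / 2 ** (1 + 7 * m)             # = 1/2
    return mass_T + mass_l + mass_d

print(prior_mass(m=3))  # 1.0
```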

To simplify the notation, in the remainder of the proof, let \(u_\theta ^r:=u_\theta ^r(a_0)-u_\theta ^r(a_1)\).

The k-voting instance comprises the following receivers.

  1.

    Receivers of type \(\texttt{t}_1\): there are q (the value of q is specified later in the proof) receivers of type \(\texttt{t}_1\), which are such that \(u^{\texttt{t}_1}_{\theta _{(T_j,\xi _2)}}=1\) for each \((T_j,\xi _2)\), and \(-1/3\) otherwise.

  2.

    Receivers of type \(\texttt{t}_2\): there are q receivers of type \(\texttt{t}_2\) such that \(u^{\texttt{t}_2}_{\theta _{(i,\ell )}}=1\) for each \((i,\ell )\), and \(-1/3\) otherwise.

  3.

    Receivers of type \(\texttt{t}_3\): there are q receivers of type \(\texttt{t}_3\) for each subset \(\mathcal {T}\subseteq \{T_j\}_{j\in [m]}\) of cardinality m/2. Each receiver corresponding to the subset \(\mathcal {T}\) is such that:

    $$\begin{aligned} u^{(\texttt{t}_3,\mathcal {T})}_{\theta _{(T_j,\xi _2)}}= {\left\{ \begin{array}{ll} -1 &{} \text { if } T_j\in \mathcal {T},\ \xi _2\in \Xi _2,\\ 1 &{} \text { if } T_j \notin \mathcal {T},\ \xi _2\in \Xi _2,\end{array}\right. } \quad \text { and } u_{\theta }^{(\texttt{t}_3,\mathcal {T})}=0 \text { for every other } \theta . \end{aligned}$$
  4.

    Receivers of type \(\texttt{t}_4\): we have q receivers of type \(\texttt{t}_4\) for each subset \(\mathcal {I}\) of 4m indices selected from [8m]. Each receiver corresponding to subset \(\mathcal {I}\) is such that:

    $$\begin{aligned} u_{\theta _{(i,\ell )}}^{(\texttt{t}_4,\mathcal {I})} = {\left\{ \begin{array}{ll} -1 &{} \text { if }i\in \mathcal {I},\ \ell \in \{0,1\}^m,\\ 1 &{} \text { if } i\notin \mathcal {I},\ \ell \in \{0,1\}^m, \end{array}\right. } \quad \text { and } u_{\theta }^{(\texttt{t}_4,\mathcal {I})}=0 \text { for every other } \theta . \end{aligned}$$
  5.

    Receivers of type \(\texttt{t}_5\): there is a receiver of type \(\texttt{t}_5\) for each \(S_i\), \(\xi _1\in \Xi _1\) and \(\textbf{d}\in \{0,1\}^{7m}\). Then, for each receiver of type \(\texttt{t}_5\) the following holds:

    $$\begin{aligned} u_{\theta }^{(\texttt{t}_5,S_i,\xi _1,\textbf{d})} = {\left\{ \begin{array}{ll} -1/2 &{} \text { if }\theta =\theta _{(T_j,\xi _2)} \text { and }\mathcal {V}(S_i,T_j,\xi _1,\xi _2\oplus \textbf{d}_{T})=1,\\ -1/2 &{} \text { if }\theta = \theta _{(i',\ell )}\text { and } e(\xi _1)_{i'}=[\ell \oplus \textbf{d}_S]_{i},\\ 1/2 &{} \text { if } \theta =\theta _{\textbf{d}},\\ -1 &{} \text {otherwise.} \end{array}\right. } \end{aligned}$$

Finally, we set \(k=\left( 2+\left( {\begin{array}{c}m\\ m/2\end{array}}\right) +\left( {\begin{array}{c}8\,m\\ 4\,m\end{array}}\right) \right) q + m\). By setting \(q\gg m\) (e.g., \(q=2^{10m}\)), candidate \(a_0\) can get at least k votes only if all receivers of type \(\texttt{t}_1\), \(\texttt{t}_2\), \(\texttt{t}_3\), \(\texttt{t}_4\) vote for her.

Completeness. Given a satisfying assignment \(\zeta \) to the variables in \(\varphi \), let \([\zeta ]_{T_j}\in \{0,1\}^{6m}\) be the vector specifying the assignment of the variables of each clause in \(T_j\), and \([\zeta ]_{S_i}\in \{0,1\}^{2m}\) be the vector specifying the assignment of each variable belonging to \(S_i\). The sender has a signal for each \(\textbf{d}\in \{0,1\}^{7m}\). The set of signals is denoted by \(\mathcal {S}\), where \(|\mathcal {S}|=2^{7m}\), and a signal is denoted by \(s_{\textbf{d}}\in \mathcal {S}\). We define a signaling scheme \(\phi \) as follows. First, we set \(\phi _{\theta _{\textbf{d}}}(s_{\textbf{d}})=1\) for each \(\theta _{\textbf{d}}\). If \(|T_j|<2m\) for some \(j\in [m]\), we pad \([\zeta ]_{T_j}\) with 0 bits until \(|[\zeta ]_{T_j}|=6m\). Then, for each \(T_j\), we set \(\phi _{\theta _{(T_j,[\zeta ]_{T_j}\oplus \textbf{d}_T)}}(s_{\textbf{d}})=1/2^m\). Finally, for each \(i\in [8m]\), we set \(\phi _{\theta _{(i,\ell \oplus \textbf{d}_S)}}(s_{\textbf{d}})=1/2^{6m}\), where \(\ell =(e([\zeta ]_{S_1})_i,\ldots , e([\zeta ]_{S_m})_i)\). First, we prove that the signaling scheme is well-formed. For each state \(\theta _{(T_j,\xi _2)}\), it holds that

$$\begin{aligned} \sum _{s_{\textbf{d}}\in \mathcal {S}}\phi _{\theta _{(T_j,\xi _2)}}(s_{\textbf{d}})=\frac{1}{2^m} \left| \left\{ \textbf{d}: [\zeta ]_{T_j}\oplus \textbf{d}_T=\xi _2\right\} \right| =1, \end{aligned}$$

and, for each \(\theta _{(i,\ell )}\), the following holds:

$$\begin{aligned} \sum _{s_{\textbf{d}}\in \mathcal {S}}\phi _{\theta _{(i,\ell )}}(s_{\textbf{d}})=\frac{1}{2^{6m}} \left| \left\{ \textbf{d}: (e([\zeta ]_{S_1})_i,\ldots , e([\zeta ]_{S_m})_i)\oplus \textbf{d}_S=\ell \right\} \right| =1. \end{aligned}$$
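Both counting arguments rest on the same fact: fixing the bits of \(\textbf{d}_T\) (resp. \(\textbf{d}_S\)) leaves the remaining bits of \(\textbf{d}\) free. A brute-force check on a toy instance (our own, with m = 1):

```python
from itertools import product

# Toy check (ours, m = 1): for any fixed xi_2, exactly 2^m vectors
# d = (d_S, d_T) in {0,1}^{7m} satisfy zeta_T XOR d_T = xi_2, since d_T is
# pinned while the m bits of d_S stay free. Hence phi_{theta_{(T_j, xi_2)}}
# puts total mass 2^m * (1/2^m) = 1 on the signal set.
m = 1
zeta_T = (1, 0, 1, 1, 0, 0)   # arbitrary 6m-bit block [zeta]_{T_j}
xi_2 = (0, 1, 1, 0, 0, 1)     # arbitrary answer vector
count = sum(
    1
    for d in product((0, 1), repeat=7 * m)
    if tuple(z ^ t for z, t in zip(zeta_T, d[m:])) == xi_2
)
assert count == 2**m
```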

Now, we show that at least k voters choose \(a_0\). Let \(\textbf{p}\in \Delta _\Theta \) be the posterior induced by a signal \(s_{\textbf{d}}\). All receivers of type \(\texttt{t}_1\) choose \(a_0\), since it holds:

$$\begin{aligned} \sum _{(T_j,\xi _2)}p_{\theta _{(T_j,\xi _2)}}&= \frac{\sum _{(T_j,\xi _2)}\mu _{\theta _{(T_j,\xi _2)}}\phi _{\theta _{(T_j,\xi _2)}}(s_{\textbf{d}})}{\sum _{\theta \in \Theta }\mu _\theta \phi _\theta (s_{\textbf{d}})}\\&\quad =\frac{1}{2^{2+7m}}\left( \frac{1}{2^{1+7m}}+\frac{1}{2^{2+7m}}+\frac{1}{2^{2+7m}}\right) ^{-1}\\&\quad =\frac{1}{4}. \end{aligned}$$

Analogously, all receivers of type \(\texttt{t}_2\) select \(a_0\). Furthermore, for each \(T_j\), it holds \(\sum _{\xi _2} p_{\theta _{(T_j,\xi _2)}}=1/(4m)\). Then, for each subset \(\mathcal {T}\subseteq \{T_j\}_{j\in [m]}\) of cardinality m/2, it holds \(\sum _{T_j\in \mathcal {T},\xi _2}p_{\theta _{(T_j,\xi _2)}}=(m/2)\cdot 1/(4m)=1/8\). Therefore, each receiver of type \(\texttt{t}_3\) chooses \(a_0\). An analogous argument holds for receivers of type \(\texttt{t}_4\). Finally, we show that, for each \(S_i\), the receiver \((\texttt{t}_5,S_i,[\zeta ]_{S_i},\textbf{d})\) chooses \(a_0\). In particular, receiver \((\texttt{t}_5,S_i,[\zeta ]_{S_i}, \textbf{d})\) has the following expected utility:

$$\begin{aligned} \frac{1}{2}p_{\theta _{\textbf{d}}}-\frac{1}{2} \sum _{(T_j,\xi _2)}p_{\theta _{(T_j,\xi _2)}}-\frac{1}{2}\sum _{(i',\ell )} p_{\theta _{(i',\ell )}}=0 \end{aligned}$$

since, for each \(p_{\theta _{(T_j,\xi _2)}}>0\), it holds that \(\xi _2\oplus \textbf{d}_T= [\zeta ]_{T_j}\oplus \textbf{d}_T \oplus \textbf{d}_T=[\zeta ]_{T_j}\), and thus \(\mathcal {V}(S_i,T_j,[\zeta ]_{S_i}, \xi _2\oplus \textbf{d}_T)=\mathcal {V}(S_i,T_j,[\zeta ]_{S_i}, [\zeta ]_{T_j}) =1\) for each \(T_j\). Moreover, for each \(p_{\theta _{(i',\ell )}}>0\), it holds \([\ell \oplus \textbf{d}_S]_{i}=e([\zeta ]_{S_i})_{i'} \oplus d_{S,i} \oplus d_{S,i}=e([\zeta ]_{S_i})_{i'}\). This concludes the proof of completeness.

Soundness. We prove that, if \(\omega (\mathcal {F}_\varphi )\le 1-\delta \), there is no posterior in which \(a_0\) is chosen by at least k receivers, thus implying that the sender's utility is equal to 0. Suppose, towards a contradiction, that there exists a posterior \(\textbf{p}\) such that at least k receivers select \(a_0\). Let \(\gamma :=\sum _{(T_j,\xi _2)} p_{\theta _{(T_j,\xi _2)}} + \sum _{(i,\ell )} p_{\theta _{(i,\ell )}}\). Since all voters of types \(\texttt{t}_1\) and \(\texttt{t}_2\) vote for \(a_0\), it holds that \(\sum _{(T_j,\xi _2)} p_{\theta _{(T_j,\xi _2)}} \ge \frac{1}{4}-\epsilon \) and \(\sum _{(i,\ell )} p_{\theta _{(i,\ell )}} \ge \frac{1}{4}-\epsilon \). Moreover, since at least one receiver \((\texttt{t}_5,S_i,\xi _1,\textbf{d})\) must play \(a_0\), there exist \(\textbf{d}\in \{0,1\}^{7m}\) and a state \(\theta _{\textbf{d}}\) with \(p_{\theta _{\textbf{d}}} \ge \frac{1}{2}-\epsilon \). This implies that \(\frac{1}{2}-2\epsilon \le \gamma \le \frac{1}{2}+\epsilon \).

Consider the reduction to \(\epsilon '\)-MFS, with \(\epsilon '=\rho /52 \nu \) (Theorem 7). Let \(x_{(T_j,\xi _2)}=p_{\theta _{(T_j,\xi _2\oplus \textbf{d}_T)}}/\gamma \), \(x_{(i,\ell )}= p_{\theta _{(i,\ell \oplus \textbf{d}_{S})}}/{\gamma }\), and \(\epsilon =\epsilon '/30\). All rows of type \(\texttt{t}_1\) of \(\epsilon '\)-MFS are such that

$$\begin{aligned} w_{\texttt{t}_1}=\frac{1}{\gamma }\left( \sum _{(T_j,\xi _2)}p_{\theta _{(T_j,\xi _2)}}-\sum _{(i,\ell )} p_{\theta _{(i,\ell )}}\right) \ge -\frac{3\epsilon }{\gamma }\ge -9\epsilon \ge - \epsilon '. \end{aligned}$$

All voters of type \(\texttt{t}_3\) choose \(a_0\). Then, for all \(\mathcal {T}\subseteq \{T_j\}_{j \in [m]}\) of cardinality m/2, it holds:

$$\begin{aligned} \sum _{(T_j,\xi _2): T_j \in \mathcal {T}} p_{\theta _{(T_j,\xi _2)}}- \sum _{(T_j,\xi _2): T_j \notin \mathcal {T}} p_{\theta _{(T_j,\xi _2)}} \ge -\epsilon . \end{aligned}$$

Then, all rows of type \(\texttt{t}_2\) of \(\epsilon '\)-MFS are such that:

$$\begin{aligned} w_{(\texttt{t}_2,\mathcal {T})}= \frac{1}{\gamma }\left( \sum _{(T_j,\xi _2): T_j \in \mathcal {T}} p_{\theta _{(T_j,\xi _2)}}- \sum _{(T_j,\xi _2): T_j \notin \mathcal {T}} p_{\theta _{(T_j,\xi _2)}}\right) \ge -\frac{\epsilon }{\gamma }\ge -3\epsilon \ge -\epsilon '. \end{aligned}$$

A similar argument proves that all rows of type \(\texttt{t}_3\) of the instance of \(\epsilon '\)-MFS have \(w_{(\texttt{t}_3,\mathcal {I})}\ge -\epsilon '\).

To conclude the proof, we prove that, for each voter \((\texttt{t}_5,S_i,\xi _1,\textbf{d})\) that votes for \(a_0\), the corresponding row \((\texttt{t}_4,S_i,\xi _1)\) of the instance of \(\epsilon '\)-MFS is such that \(w_{(\texttt{t}_4,S_i,\xi _1)}\ge -\epsilon '\). Let \(\gamma ':=\sum _{(T_j,\xi _2):\mathcal {V}(S_i,T_j,\xi _1,\xi _2) =1} x_{(T_j,\xi _2)} \) and \(\gamma '':=\sum _{(i',\ell ):e(\xi _1)_{i'}=\ell _i} x_{(i',\ell )}\). First, we show that \(\gamma '\ge 1/4-7\epsilon \). If this did not hold, we would have

$$\begin{aligned} \sum _{\theta }p_\theta u_\theta ^{(\texttt{t}_5,S_i,\xi _1,\textbf{d})}<-\frac{1}{2}\left( \frac{1}{4}-\epsilon \right) -\frac{1}{2}\left( \frac{1}{4}-7\epsilon \right) -6\epsilon +\frac{1}{2}\left( \frac{1}{2}+2\epsilon \right) =-\epsilon , \end{aligned}$$

contradicting the fact that the receiver selects \(a_0\) under \(\epsilon \)-persuasiveness.

Similarly, it holds \(\gamma '' \ge 1/4-7\epsilon \). Hence

$$\begin{aligned} \begin{aligned} w_{(\texttt{t}_4,S_i,\xi _1)}&= - \frac{1}{2} \gamma ' + \frac{1}{2} \gamma '' - (1 -\gamma '-\gamma '')\\&\quad = \frac{1}{2\gamma }\left( \sum _{(T_j,\xi _2):\mathcal {V}(S_i,T_j,\xi _1,\xi _2)=1}p_{\theta _{(T_j,\xi _2\oplus \textbf{d}_T)}} +3\sum _{(i',\ell ):e(\xi _1)_{i'}=\ell _i} p_{\theta _{(i',\ell \oplus \textbf{d}_S)}}\right) -1\\&\quad \ge \frac{2(1/4-7\epsilon )}{1/2+\epsilon }-1 \ge -30\,\epsilon =-\epsilon '. \end{aligned} \end{aligned}$$

Thus, there exists a probability vector \(\textbf{x}\) for the instance of \(\epsilon '\)-MFS in which at least k rows satisfy the \(\epsilon '\)-MFS condition (Eq. 5), which is in contradiction with \(\omega (\mathcal {F}_\varphi )\le 1-\delta \). This concludes the proof. \(\square \)
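As a numeric sanity check of the final inequality chain above (ours, not part of the proof): \(2(1/4-7\epsilon )/(1/2+\epsilon )-1\ge -30\epsilon \) holds for all small \(\epsilon \), since the difference equals \(30\epsilon ^2/(1/2+\epsilon )\ge 0\):

```python
import numpy as np

# Check (ours): 2*(1/4 - 7*eps)/(1/2 + eps) - 1 >= -30*eps on a grid of
# eps values, as used in the bound on w_{(t_4, S_i, xi_1)}.
eps = np.linspace(1e-6, 1 / 30, 1000)
lhs = 2 * (0.25 - 7 * eps) / (0.5 + eps) - 1
assert np.all(lhs + 30 * eps >= 0)
```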

Theorem 8 shows that, assuming the ETH, computing an \((\alpha ,\epsilon )\)-persuasive signaling scheme requires at least a quasi-polynomial number of steps in the specific scenario of a k-voting instance. Therefore, the same holds in the general setting of arbitrary public persuasion problems with binary action spaces, which is precisely the claim of Theorem 1.

7 A Quasi-Polynomial Time Algorithm

In this section, we prove that our hardness result (Theorem 8) is tight by devising a bi-criteria approximation algorithm. Our result extends those by Cheng et al. [5] and Xu [22], which deal with signaling problems with binary action spaces and sender's utility functions independent of the state of nature. This is arguably a restrictive assumption, as even the original Bayesian persuasion framework by Kamenica and Gentzkow [14] features state-dependent sender's utility functions. Our results generalize those by Cheng et al. [5] to the case of state-dependent sender's utility functions and arbitrary discrete action spaces.

In order to prove our result, we need some further machinery. Let \(\mathcal {Z}^r:=2^{\mathcal {A}^r}\) be the power set of \(\mathcal {A}^r\). Then, \(\mathcal {Z}:=\times _{r\in \mathcal {R}}\mathcal {Z}^r\) is the set of tuples specifying a subset of \(\mathcal {A}^r\) for each receiver r. For a given probability distribution over the states of nature, we are interested in determining the set of best responses of each receiver r, i.e., the subset of \(\mathcal {A}^r\) maximizing her expected utility. Formally, we have the following.

Definition 6

(BR-set) Given a probability distribution over states of nature \(\textbf{p}\in \Delta _\Theta \), the best-response set (BR-set) \(\mathcal {M}_{\textbf{p}}:=(Z^1,\ldots ,Z^n)\in \mathcal {Z}\) is such that

$$\begin{aligned} Z^r = \mathop {\mathrm {arg\,max}}\limits _{a\in \mathcal {A}^r}\sum _{\theta \in \Theta }p_\theta \,u_\theta ^r(a)\qquad \text { for each } r\in \mathcal {R}. \end{aligned}$$

Similarly, we define a notion of \(\epsilon \)-BR-set which comprises \(\epsilon \)-approximate best responses to a given distribution over the states of nature.

Definition 7

(\(\epsilon \)-BR-set) Given a probability distribution over states of nature \(\textbf{p}\in \Delta _\Theta \), the \(\epsilon \)-best-response set (\(\epsilon \)-BR-set) \(\mathcal {M}_{\textbf{p},\epsilon }:=(Z^1,\ldots ,Z^n)\in \mathcal {Z}\) is such that, for each \(r\in \mathcal {R}\), action a belongs to \(Z^r\) if and only if

$$\begin{aligned} \sum _{\theta \in \Theta }p_\theta \,u_\theta ^r(a)\ge \sum _{\theta \in \Theta }p_\theta \,u_\theta ^r(a')-\epsilon \qquad \text { for each }a'\in \mathcal {A}^r. \end{aligned}$$
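Definitions 6 and 7 translate directly into code. A minimal sketch (ours; the name `eps_br_set` is our own), where a receiver's utilities are stored as a matrix with one row per action in \(\mathcal {A}^r\):

```python
import numpy as np

# Sketch (ours) of Definitions 6-7: given a posterior p over d states and a
# receiver's utility matrix U (one row per action in A^r), return the set of
# actions whose expected utility is within eps of the best one.
def eps_br_set(p, U, eps=0.0):
    expected = U @ p                           # expected utility per action
    return set(np.flatnonzero(expected >= expected.max() - eps))

p = np.array([0.5, 0.5])
U = np.array([[1.0, 0.0],    # action 0: u = 1 in state 0, u = 0 in state 1
              [0.0, 0.9],    # action 1
              [0.3, 0.3]])   # action 2
assert eps_br_set(p, U) == {0}              # BR-set (Definition 6)
assert eps_br_set(p, U, eps=0.1) == {0, 1}  # eps-BR-set (Definition 7)
```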

We introduce a suitable notion of approximability of the sender’s objective function. Our notion of \(\alpha \)-approximable function is a generalization of the one proposed by Xu [22, Definition 4.5] to the setting of arbitrary action spaces and state-dependent sender’s utility functions.

Definition 8

(\(\alpha \)-Approximability) Let \(f:=\{f_\theta \}_{\theta \in \Theta }\) be a set of functions \(f_\theta :\mathcal {A}\rightarrow [0,1]\).

We say that f is \(\alpha \)-approximable if there exists a function \(g:\Delta _\Theta \times \mathcal {Z}\rightarrow \mathcal {A}\), computable in polynomial time, such that, for all \(\textbf{p}\in \Delta _\Theta \) and \(Z\in \mathcal {Z}\), the action profile \(\textbf{a}=g(\textbf{p},Z)\) satisfies \(\textbf{a}\in Z\) and

$$\begin{aligned} \sum _{\theta \in \Theta }p_\theta \,f_\theta (\textbf{a})\ge \alpha \max _{\textbf{a}^*\in Z}\sum _{\theta \in \Theta } p_\theta \,f_\theta (\textbf{a}^*). \end{aligned}$$

The voting function f defined in Sect. 3 is 1-approximable, while, for example, a non-monotone submodular function over a binary action space is 1/2-approximable. The \(\alpha \)-approximability assumption is a natural requirement since, otherwise, even evaluating the sender's objective value would be an intractable problem. When f is \(\alpha \)-approximable, it is possible to approximate the optimal receivers' action profile when the receivers are constrained to select action profiles in Z.
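For instance, the 1-approximability of the voting function admits a simple \(g\) (our own sketch, with \(a_0\) encoded as action 0): since the sender's value depends only on how many receivers play \(a_0\), selecting \(a_0\) from \(Z^r\) whenever it is available is exactly optimal within Z:

```python
# Sketch (ours) of a 1-approximable g for the k-voting objective. Each Z_r is
# the set of allowed actions of receiver r, with 0 standing for a_0.
def g_voting(Z, k):
    profile = [0 if 0 in Zr else min(Zr) for Zr in Z]  # pick a_0 when allowed
    value = int(profile.count(0) >= k)                 # f = 1 iff >= k votes
    return profile, value

profile, value = g_voting([{0, 1}, {1}, {0}], k=2)
assert profile == [0, 1, 0] and value == 1
```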

We now provide an algorithm which computes in quasi-polynomial time, for any \(\alpha \)-approximable f, a bi-criteria approximation of the optimal solution with an approximation on the objective value arbitrarily close to \(\alpha \). When f is 1-approximable, our result yields a bi-criteria QPTAS for the problem. The key idea is showing that an optimal signaling scheme can be approximated by a convex combination of suitable k-uniform posteriors. As in previous works [5, 22], the key part of the proof is a decomposition lemma showing that every posterior can be decomposed into a convex combination of k-uniform posteriors with a small loss in utility. However, the assumption of state-dependent sender's utility functions makes previous approaches ineffective in our setting. In particular, we observe that previous decomposition lemmas are based on a direct application of Hoeffding's inequality and the union bound. In our case, such a direct derivation is not possible, and we need to introduce some intermediate technical results (Lemmas 9–12). In particular, we need to develop a new probabilistic analysis of the decomposition lemma. Let \(\varrho :=\max _{r\in \mathcal {R}}\varrho _r\), \({\bar{n}}:=|\mathcal {R}|\), and \(d:=|\Theta |\). The proof of our main positive result, as stated in Theorem 2, goes as follows.

Proof of Theorem 2

We show that there exists a \(\text {poly}\left( d^{\log ({\bar{n}}\varrho /\delta )\,/\,\epsilon ^2}\right) \)-time algorithm that computes the given approximation. Let \(k=32\log (4 {\bar{n}}\varrho /\delta )\,/\,\epsilon ^2\) and let \(\mathcal {K}\subset \Delta _\Theta \) be the set of k-uniform distributions over \(\Theta \) (Def. 5). We prove that every posterior \(\textbf{p}^*\in \Delta _\Theta \) can be decomposed as a convex combination of k-uniform posteriors without lowering the sender's expected utility too much. Formally, each posterior \(\textbf{p}^*\in \Delta _\Theta \) can be written as \(\textbf{p}^*=\sum _{\textbf{p}\in \mathcal {K}}\gamma _{\textbf{p}}\,\textbf{p}\), with \(\gamma \in \Delta _\mathcal {K}\) such that

$$\begin{aligned} \sum _{\textbf{p}\in \mathcal {K}}\gamma _{\textbf{p}}\sum _{\theta \in \Theta } p_\theta \, f_\theta (g(\textbf{p},\mathcal {M}_\epsilon (\textbf{p})))\ge \alpha \,(1-\delta ) \max _{\textbf{a}^*\in \mathcal {M}(\textbf{p}^*)}\sum _{\theta \in \Theta }p^*_\theta \,f_\theta (\textbf{a}^*). \end{aligned}$$

Let \({\tilde{\gamma }}\in \mathcal {K}\) be the empirical distribution of k i.i.d. samples from \(\textbf{p}^*\), where each \(\theta \) has probability \(p^*_\theta \) of being sampled. Therefore, the vector \({\tilde{\gamma }}\) is a random variable supported on k-uniform posteriors with expectation \(\textbf{p}^*\). Moreover, let \(\gamma \in \Delta _{\mathcal {K}}\) be the probability distribution such that, for each \(\textbf{p}\in \mathcal {K}\), \(\gamma _{\textbf{p}}:=\Pr ({\tilde{\gamma }}=\textbf{p})\). For each \(\gamma \in \Delta _\mathcal {K}\) and \(\textbf{p}\in \mathcal {K}\), we denote by \(\gamma _{\textbf{p}}^{(\theta ,i)}\) the conditional probability of having observed posterior \(\textbf{p}\), given that the posterior must assign probability i/k to state \(\theta \). Formally, for each \(\textbf{p}\in \mathcal {K}\), if \(p_\theta =i/k\), we have

$$\begin{aligned} \gamma _{\textbf{p}}^{(\theta ,i)}=\frac{\gamma _{\textbf{p}}}{\displaystyle \sum _{\textbf{p}':p'_\theta =i/k}\gamma _{\textbf{p}'}}, \end{aligned}$$

and \(\gamma _{\textbf{p}}^{(\theta ,i)}=0\) otherwise. The random variable \({\tilde{\gamma }}^{(\theta ,i)}\in \mathcal {K}\) is such that, for each \(\textbf{p}\in \mathcal {K}\), \(\Pr ({\tilde{\gamma }}^{(\theta ,i)}=\textbf{p})=\gamma ^{(\theta ,i)}_{\textbf{p}}\). Finally, let \(\mathcal {P}\subseteq \mathcal {K}\) be the set of posteriors such that

$$\begin{aligned} \mathcal {P}:=\left\{ \textbf{p}\in \mathcal {K}: \left| \sum _{\theta }p_\theta u^r_\theta (a)-\sum _{\theta }p^*_\theta u_\theta ^r(a)\right| \le \frac{\epsilon }{2} \ \forall r \in \mathcal {R}, a\in \mathcal {A}^r \right\} . \end{aligned}$$
(9)
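The sampling step that defines \(\gamma \) can be sketched as follows (our own illustration): drawing the empirical distribution of k i.i.d. samples from \(\textbf{p}^*\) yields a k-uniform posterior whose expectation is \(\textbf{p}^*\):

```python
import numpy as np

# Sketch (ours): gamma is the law of the empirical distribution of k i.i.d.
# draws from p*. Each realization is a k-uniform posterior (all entries are
# multiples of 1/k), and its expectation is p* itself.
rng = np.random.default_rng(0)

def sample_k_uniform(p_star, k):
    return rng.multinomial(k, p_star) / k

p_star = np.array([0.2, 0.3, 0.5])
k = 64
samples = np.stack([sample_k_uniform(p_star, k) for _ in range(2000)])
assert np.all((samples * k) % 1 == 0)                    # k-uniform
assert np.allclose(samples.mean(axis=0), p_star, atol=0.02)
```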

Now, we prove the following intermediate results (the proofs of the auxiliary results are provided in Appendix A.3). The following lemma shows that, given a posterior \(\textbf{p}^*\) and a state \(\theta \), if we take k i.i.d. samples from \(\textbf{p}^*\) and we consider only the induced posteriors \(\textbf{p}\) in which \(p_\theta \) is close to \(p^*_\theta \), then the probability that the utility of all the receivers in \(\textbf{p}\) is close to their utility in \(\textbf{p}^*\) is close to 1.

Lemma 9

Given \(\textbf{p}^*\in \Delta _\Theta \), for each \(\theta \in \Theta \), and for each \(i\in [k]\) such that

$$\begin{aligned} \left| \frac{i}{k}-p^*_\theta \right| \le \frac{\epsilon }{4}, \end{aligned}$$

it holds:

$$\begin{aligned} \sum _{\textbf{p}\in \mathcal {P}:p_\theta =i/k}\gamma _{\textbf{p}}\ge \left( 1-\frac{\delta }{2}\right) \sum _{\textbf{p}\in \mathcal {K}:p_\theta =i/k}\gamma _{\textbf{p}}, \end{aligned}$$

where \(\gamma \) is the distribution of k i.i.d. samples from \(\textbf{p}^*\).

Then, we show that the condition in the previous lemma is satisfied with high probability. In particular, we show that given a posterior \(\textbf{p}^*\) and a state \(\theta \), if we take k i.i.d. samples from \(\textbf{p}^*\), then with probability close to 1 the induced posterior \(\textbf{p}\) is such that \(p_\theta \) is close to \(p^*_\theta \). Formally, we prove the following lemma.

Lemma 10

Given \(\textbf{p}^*\in \Delta _\Theta \), for each \(\theta \in \Theta \), it holds:

$$\begin{aligned} \sum _{i: |i/k-p^*_\theta |\ge \epsilon /4}\; \sum _{\textbf{p}\in \mathcal {K}:p_\theta =i/k} \gamma _{\textbf{p}} \le \frac{\delta }{2} \,p^*_\theta , \end{aligned}$$

where \(\gamma \) is the distribution of k i.i.d. samples from \(\textbf{p}^*\).

The following result combines Lemmas 9 and 10. In particular, if we consider the distribution of k i.i.d. samples from a posterior \(\textbf{p}^*\), we have that, for each \(\theta \), the probability that in state \(\theta \) the utility of all the receivers is close to their utility in \(\textbf{p}^*\) is close to 1. Equivalently, the induced posterior belongs to \(\mathcal {P}\) as defined in (9).

Lemma 11

Given a \(\textbf{p}^*\in \Delta _\Theta \), for each \(\theta \in \Theta \), it holds:

$$\begin{aligned} \sum _{\textbf{p}\in \mathcal {P}} \gamma _p \,p_\theta \ge (1-\delta ) \,p^*_\theta , \end{aligned}$$

where \( \gamma \) is the distribution of k i.i.d. samples from \(\textbf{p}^*\).

Now, we need to prove that all the posteriors in \(\mathcal {P}\) guarantee the sender at least the same expected utility as \(\textbf{p}^*\). Formally, we prove that the \(\epsilon \)-BR-set of each \(\textbf{p}\in \mathcal {P}\) contains the BR-set of \(\textbf{p}^*\), as shown via the following lemma.

Lemma 12

Given \(\textbf{p}^*\in \Delta _{\Theta }\), for each \(\textbf{p}\in \mathcal {P}\), it holds: \(\mathcal {M}(\textbf{p}^*) \subseteq \mathcal {M}_\epsilon (\textbf{p})\).

Finally, we prove that we can represent each posterior \(\textbf{p}^*\) as a convex combination of k-uniform posteriors with a small loss in the sender’s expected utility. For \(\textbf{p}\in \mathcal {K}\) and \(Z \in \mathcal {Z}\), let \(g^*: \Delta _\Theta \times \mathcal {Z}\rightarrow [0,1]\) be a function such that

$$\begin{aligned} g^*(\textbf{p},Z):=\max _{\textbf{a}\in Z} \sum _{\theta } p_\theta f_\theta (\textbf{a}). \end{aligned}$$

Given \(\textbf{p}^*\in \Delta _\Theta \), we are interested in bounding the loss in the sender's expected utility when \(\textbf{p}^*\) is approximated by a convex combination \(\gamma \) of k-uniform posteriors, the sender exploits an \(\alpha \)-approximation of f, and the receivers play \(\epsilon \)-approximate best responses. Formally, we have the following.

Lemma 13

Given a \(\textbf{p}^*\in \Delta _\Theta \), it holds:

$$\begin{aligned} \sum _{\textbf{p}\in \mathcal {K}} \gamma _{\textbf{p}} \sum _{\theta } p_\theta \, f_\theta (g(\textbf{p},\mathcal {M}_\epsilon (\textbf{p}))) \ge \alpha (1-\delta )\, g^*(\textbf{p}^*,\mathcal {M}(\textbf{p}^*)), \end{aligned}$$

where \( \gamma \) is the distribution of k i.i.d. samples from \(\textbf{p}^*\).

Therefore, we can safely restrict to posteriors in \(\mathcal {K}\). Since there are \(|\mathcal {K}|=\text {poly}\left( d^{\log ({\bar{n}}\varrho /\delta )\,/\, \epsilon ^2}\right) \) posteriors, the following linear program (LP 10) has \(O(|\mathcal {K}|)\) variables and constraints, and finds an \(\alpha \,(1-\delta )\)-approximation of the optimal signaling scheme:

$$\begin{aligned}&\max _{\gamma \in \Delta _\mathcal {K}} \sum _{\textbf{p}\in \mathcal {K}} \gamma _{\textbf{p}} \sum _{\theta \in \Theta } p_\theta \,f_\theta (g(\textbf{p},\mathcal {M}_\epsilon ( \textbf{p}))) \end{aligned}$$
(10a)
$$\begin{aligned}&\text {s.t.}\quad \sum _{\textbf{p} \in \mathcal {K}} \gamma _{\textbf{p}} \,p_\theta =\mu _\theta \qquad \forall \theta \in \Theta \end{aligned}$$
(10b)
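On a toy instance, LP (10) can be written down directly (our own sketch, using `scipy.optimize.linprog`; the per-posterior values `v` stand in for \(\sum _{\theta } p_\theta f_\theta (g(\textbf{p},\mathcal {M}_\epsilon (\textbf{p})))\), which would be precomputed for each \(\textbf{p}\in \mathcal {K}\)):

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance (ours): 2 states, 3 candidate posteriors (columns of P),
# prior mu, and precomputed sender value v[j] for posterior j.
P = np.array([[1.0, 0.5, 0.0],
              [0.0, 0.5, 1.0]])
mu = np.array([0.4, 0.6])
v = np.array([1.0, 0.2, 0.5])

# Objective (10a): maximize v @ gamma; Constraint (10b): P @ gamma = mu.
# gamma being a distribution is implied here, since the columns of P and mu
# each sum to 1. linprog minimizes, hence the sign flip on v.
res = linprog(-v, A_eq=P, b_eq=mu, bounds=[(0, None)] * 3)
gamma = res.x
assert res.success and np.allclose(P @ gamma, mu)
assert np.isclose(v @ gamma, 0.7)   # optimum: gamma = (0.4, 0, 0.6)
```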

Given the distribution on the k-uniform posteriors \(\gamma \), we can construct a direct signaling scheme \(\phi \) by setting:

$$\begin{aligned} \phi _\theta (\textbf{a})=\frac{1}{\mu _\theta }\sum _{\textbf{p}\in \mathcal {K}:\textbf{a}=g(\textbf{p},\mathcal {M}_\epsilon (\textbf{p}))} \gamma _{\textbf{p}}\, p_\theta , \text { for each } \theta \in \Theta \text { and } \textbf{a}\in \mathcal {A}, \end{aligned}$$

which, by Constraint (10b), is a well-defined probability distribution over \(\mathcal {A}\) for each \(\theta \).

This shows that such a signaling scheme \(\phi \) is \(\alpha (1-\delta )\)-approximate and \(\epsilon \)-persuasive, which are precisely our desiderata, thus concluding the proof. \(\square \)