Ex-post implementation with social preferences

The current literature on mechanism design in models with social preferences discusses social-preference-robust mechanisms, i.e., mechanisms that are implementable in any environment with social preferences. The literature also discusses payoff-information-robust mechanisms, i.e., mechanisms that are implementable for any belief and higher-order beliefs of the agents about the payoff types of the other agents. In the present paper, I address the question of whether deterministic mechanisms that are robust in both of these dimensions exist. I consider environments where each agent holds private information about his personal payoff and about the existence and extent of his social preferences. In such environments, a mechanism is robust in both dimensions only if it is ex-post implementable, i.e., only if incentive compatibility holds for every realization of payoff signals and for every realization of social preferences. I show that ex-post implementation of deterministic mechanisms is impossible in such environments; i.e., deterministic mechanisms that are both social-preference-robust and payoff-information-robust do not exist.


Introduction
Models of mechanism design usually consider selfish agents, that is, agents whose utilities consist of their own personal payoffs. However, it is well established that in many economic environments subjects often have social preferences. In these environments, agents' utilities depend not only on their own personal payoff but also on the payoffs of other agents in the society. In this paper, I study the problem of ex-post implementation of deterministic mechanisms in a simple model of social preferences. I consider environments where each agent holds private information about his personal payoff from allocations and about the extent of his social preferences. In the first part of the paper, I investigate the implementation of decision rules that depend only on information about the agents' personal payoffs. I find that the possibility of implementing such decision rules in environments with social preferences depends heavily on the solution concept that is used for the implementation. I first consider Bayesian implementation and reestablish the result of Bierbrauer and Netzer (2016) that for each decision rule that is implementable in the environment where agents are selfish, there exists a mechanism that implements it in a Bayes-Nash equilibrium in every environment with social preferences as well as in the environment where agents are selfish. I then consider ex-post implementation and show that the ex-post implementation of non-trivial decision rules is impossible in environments with social preferences.
In the second part of the paper, I consider the ex-post implementation of decision rules that depend both on information about the agents' personal payoffs and on information about the agents' social preferences. I present an impossibility result on ex-post implementation in environments where there exists an agent whose utility depends on the payoff of a selfish agent. This result indicates that the difficulty of robust implementation extends beyond decision rules that depend only on the agents' payoff signals.
This paper relates to the existing literature on implementation in models with social preferences and in particular the papers of Bierbrauer and Netzer (2016), Bartling and Netzer (2016), and Bierbrauer et al. (2017). The focus of these papers is on the implementation of decision rules that depend only on agents' payoff types. They revolve around the notion of social-preference-robust mechanisms, i.e., mechanisms that are implementable in any setup with social preferences, including the setup where agents are selfish. Such mechanisms ensure the implementability of a decision rule even if there is no common knowledge about the existence and extent of agents' social preferences. Bartling and Netzer (2016) and Bierbrauer et al. (2017) conduct experiments that show that social-preference-robust mechanisms perform significantly better than mechanisms that are suited only to the setup where agents are selfish. These findings indicate that this notion of robustness is indeed important. Another important dimension of robustness is robustness to the distributions of other agents' payoff signals. A mechanism is payoff-information-robust if it ensures the implementability of the decision rule for any belief and higher-order beliefs of the agents about the payoff types of the other agents (see Bergemann and Morris 2005). The question arises of whether it is possible to construct a mechanism that is robust both in the dimension of the agents' payoff information and in the dimension of social preferences. Such a mechanism would require that the incentive-compatibility constraints of each agent hold for every realization of payoff signals and for every realization of social preferences. The first result on the impossibility of ex-post implementation implies that it is impossible to construct mechanisms that are robust in both of these dimensions.
The construction of the social-preference-robust mechanisms in Bierbrauer and Netzer (2016), Bartling and Netzer (2016), and Bierbrauer et al. (2017) is based on two properties. The first is that under the mechanism one agent's actions cannot affect the payoff of another agent. This property is referred to in the literature as externality-freeness. The second property is that the mechanism is incentive compatible in an environment where agents are selfish. Externality-freeness and incentive compatibility imply the social-preference-robustness of a mechanism in every model of social preferences in which agents behave selfishly whenever they cannot affect other agents' payoffs, for example, in models of inequity aversion (e.g., Fehr and Schmidt 1999) and models of intention-based preferences (e.g., Rabin 1993). In the second part of the proof of the impossibility theorem, I show that under mild assumptions on the economic environment, which are satisfied in most of the standard settings of mechanism design, externality-freeness and ex-post incentive compatibility cannot coexist. This result is general and does not depend on the specific model of social preferences. The implication of this result is that in any model of social preferences it would be impossible to construct a mechanism that is social-preference-robust and payoff-information-robust by constructing a mechanism that is both externality-free and ex-post incentive compatible.
One economic environment in which the assumptions of this paper do not hold appears in Bierbrauer et al. (2017). They consider a bilateral trade environment where both the buyer and the seller have two types and present mechanisms that are social-preference-robust and payoff-information-robust by constructing mechanisms that are externality-free and ex-post incentive compatible. They conduct a laboratory experiment that compares participants' behavior under a mechanism that is both social-preference-robust and payoff-information-robust and under a mechanism that is only payoff-information-robust but not social-preference-robust. They find that the first mechanism performs significantly better than the second. The fact that the mechanisms they compare are both payoff-information-robust and differ only in their social-preference-robustness makes it possible to attribute the difference in the participants' behavior under the two mechanisms to the existence of social preferences. This paper relates to their work by implying that such an experiment cannot be replicated in most mechanism design environments, where externality-freeness and ex-post incentive compatibility cannot coexist, and that in order to conduct such an experiment one needs to carefully design the economic environment. Bierbrauer and Netzer (2016) consider social-preference-robust mechanisms that are Bayesian implementable and present an extensive possibility result that is based on the properties of externality-freeness and incentive compatibility. The reason that externality-freeness and incentive compatibility can be achieved together under Bayesian implementation but not under ex-post implementation is the following. Externality-freeness means that an agent's report does not affect the payoffs of other agents. This implies that the other agents' transfers should eliminate the effect of this agent's report on their valuations.
In addition, under the requirement of incentive compatibility, these agents' transfers must also incentivize each of them to report truthfully. Under ex-post implementation, these requirements for an agent's transfer must be satisfied for every realization of signals. I show that this cannot be done without contradiction. However, under Bayesian implementation, these requirements need only be met in expectation, and it is then possible to construct transfer schemes that satisfy them.
The rest of the paper is organized as follows. In Sect. 2, I present the model. In Sect. 3, I discuss the implementation of decision rules that depend only on agents' payoff signals. I characterize the set of Bayes-Nash implementable decision rules and construct a transfer scheme that implements a decision rule that belongs to this set in every setup with social preferences as well as in the independent private values setup. I present an impossibility result on ex-post implementation. In Sect. 4, I present an impossibility result on the ex-post implementation of decision rules that depend both on agents' payoff signals and their social preferences. I also discuss the difference between the social preferences model of this paper and the classical interdependent values model. Section 5 concludes.

The model
I consider a model with two agents, i ∈ I = {1, 2}, and two social alternatives, A = {a, b}. Each agent i ∈ I receives a signal θ_i ∈ Θ_i, where Θ_i is a convex subset of a finite-dimensional Euclidean space. If alternative k ∈ A is chosen, if the signal realization is θ_i, and if agent i obtains a transfer t_i, then agent i's payoff is given by Π_i = v_i(k, θ_i) + t_i. I assume that v_i(k, θ_i) is a convex function of θ_i for every i ∈ I. The utility of agent i depends in a linear manner on his personal payoff and on the payoff of agent j, i.e., u_i = Π_i + α_i · Π_j. The signals θ_i and α_i are the private information of agent i. I denote θ = (θ_1, θ_2) and α = (α_1, α_2).
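The payoff and utility structure of the model can be written as a small numeric sketch (all valuations, signals, and transfers below are made-up illustrative numbers; only the accounting identities come from the model):

```python
# Toy numerical illustration of the model's payoffs and utilities.
# Only the identities Pi_i = v_i(k, theta_i) + t_i and
# u_i = Pi_i + alpha_i * Pi_j come from the model; the valuation
# function v and all numbers are illustrative.

def payoff(v, k, theta, t):
    """Pi_i = v_i(k, theta_i) + t_i."""
    return v(k, theta) + t

def utility(pi_own, alpha, pi_other):
    """u_i = Pi_i + alpha_i * Pi_j (linear social preferences)."""
    return pi_own + alpha * pi_other

# Illustrative valuations: alternative 'a' is worth theta, 'b' worth 0.
v = lambda k, theta: theta if k == 'a' else 0.0

pi1 = payoff(v, 'a', 0.7, -0.2)   # agent 1: theta_1 = 0.7, transfer -0.2
pi2 = payoff(v, 'a', 0.4, 0.2)    # agent 2: theta_2 = 0.4, transfer 0.2
u1 = utility(pi1, 0.5, pi2)       # agent 1 with alpha_1 = 0.5
u2 = utility(pi2, 0.0, pi1)       # a selfish agent 2 (alpha_2 = 0)
```

A selfish agent (α_i = 0) is the special case in which utility coincides with personal payoff.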

Decisions that depend only on payoff signals
In this section, I discuss the implementation of decision rules that depend only on information about the personal payoffs of the agents. I consider situations where the designer wants to implement such a decision rule irrespective of whether agents are selfish or have social preferences. A first class of such situations consists of agency problems in institutions with a hierarchical organizational structure. For example, consider a conglomerate's central administration that needs to choose an alternative from a set of possible alternatives. The central administration wants to choose the alternative that maximizes the conglomerate's profit, i.e., that maximizes the sum of the profits of the conglomerate's corporations. The effect of each alternative on a corporation's profit is the private information of the corporation's manager. In many environments, however, managers' utilities may depend not only on the profits of their corporations but also on the profits of other corporations in the conglomerate. Such dependency may occur, for example, when a manager is a shareholder in the conglomerate and, therefore, profits from its success; when a manager is rewarded according to the relative success of her corporation with respect to the other corporations in the conglomerate; when a manager is connected in some way (say, through family, friendship, or business ties) to other managers in the conglomerate; or when a manager is invested in some other corporation of the conglomerate.
A second class of situations involves utilitarian designers who are called upon to choose a social alternative. Consider a society some of whose members may have antisocial preferences, such as envy, spite, and so on. In such a society, utilitarian theory suggests that the agents' preferences be "laundered," i.e., that the antisocial aspects of these preferences be removed before the preferences are incorporated into the social utility (see, for example, Harsanyi 1977; Goodin 1986; Blanchet and Fleurbaey 2006). Harsanyi, one of the greatest advocates of utilitarian theory, suggests that:

Some preferences ... must be altogether excluded from our social-utility function. In particular we must exclude all clearly antisocial preferences such as sadism, envy, resentment and malice. ... Utilitarian ethics makes all of us members of the same moral community. A person displaying ill will toward others does remain a member of this community, but not with his whole personality. That part of his personality that harbors these hostile antisocial feelings must be excluded from membership, and has no claim to a hearing when it comes to defining our concept of social utility (Harsanyi 1977, p. 647).

Laundering preferences means that when the designer is called to choose the social alternative, she should consider only information about agents' personal payoffs and disregard information about agents' social preferences. That is, her optimal decision rule depends only on agents' payoff signals.
The question of whether it is possible to Bayesian implement decision rules that depend only on agents' payoff signals in the presence of social preferences is analyzed in Bierbrauer and Netzer (2016) and Bartling and Netzer (2016). They show that any decision rule that is Bayesian implementable in the environment where agents are selfish is also Bayesian implementable in any environment with social preferences. Moreover, there exists a mechanism that implements the decision rule in a Bayes-Nash equilibrium in every environment with social preferences as well as in the environment where agents are selfish. Such a mechanism is called a social-preference-robust mechanism. The construction of this mechanism is achieved by constructing a transfer scheme that eliminates the effect of agent i's report on the expected payoff of agent j. At the same time, this transfer scheme incentivizes agent i to report truthfully when he is interested in maximizing his own personal payoff. Therefore, this transfer scheme incentivizes truth telling in every setup. I now show this result formally.
Proposition 1 Consider a profile (Θ, (v_i)_{i∈I}). Let (q(θ), t_1(θ), t_2(θ)) be Bayesian implementable in the environment where agents are selfish; then there exists a social choice function (q(θ), t′_1(θ), t′_2(θ)) that is Bayesian implementable in any environment with social preferences and in the environment where agents are selfish.
Proof Given a transfer scheme (t_i(θ))_{i∈I} that implements q(θ) in the environment where agents are selfish, define (t′_i(θ))_{i∈I} by

t′_i(θ̂_i, θ̂_j) = t_i(θ̂_i, θ̂_j) − E_{θ_i}[ v_i(q(θ_i, θ̂_j), θ_i) + t_i(θ_i, θ̂_j) ].

Consider j, l ∈ {1, 2} with j ≠ l. Agent j's expected utility as a function of her report θ̂_j is

E_{θ_l}[ v_j(q(θ̂_j, θ_l), θ_j) + t′_j(θ̂_j, θ_l) ] + α_j · E_{θ_l}[ v_l(q(θ̂_j, θ_l), θ_l) + t′_l(θ_l, θ̂_j) ],

and by the definition of t′_l the second expectation equals zero for every report θ̂_j; i.e., from agent j's perspective, the report θ̂_j does not affect the expected payoff of agent l. It is therefore sufficient to show that (t′_i(θ))_{i∈I} Bayesian implements q(θ) in the environment where agents are selfish. This follows from the fact that t′_i(θ) equals t_i(θ) plus additive terms that do not depend on θ̂_i and from the fact that (t_i(θ))_{i∈I} Bayesian implements q(θ) in the environment where agents are selfish. ◻
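The logic of this construction can be checked numerically in a tiny discrete example (a sketch under illustrative assumptions: a two-type grid, a uniform prior, and a Vickrey-style base mechanism; none of these specifics come from the paper):

```python
# A numerical check (a sketch) of the Proposition 1 transfer
# modification. Two payoff types per agent, uniform prior, and a
# Vickrey-style base mechanism, all illustrative.

types = [0.2, 0.8]            # possible payoff signals of each agent
prob = 1.0 / len(types)       # uniform prior

def q(t1, t2):                # decision rule: 'a' means agent 1 wins
    return 'a' if t1 >= t2 else 'b'

def v1(k, th): return th if k == 'a' else 0.0
def v2(k, th): return th if k == 'b' else 0.0

def t1(th1, th2): return -th2 if q(th1, th2) == 'a' else 0.0
def t2(th1, th2): return -th1 if q(th1, th2) == 'b' else 0.0

# Modified transfer of agent 2: subtract agent 2's expected gross payoff
# as a function of agent 1's report. The subtracted term does not depend
# on agent 2's own report, so agent 2's incentives are unchanged.
def t2_mod(th1_hat, th2_hat):
    expected = sum(prob * (v2(q(th1_hat, s), s) + t2(th1_hat, s))
                   for s in types)
    return t2(th1_hat, th2_hat) - expected

# From agent 1's perspective, agent 2's expected payoff is independent
# of agent 1's report (here it equals zero for every report).
for th1_hat in types:
    exp_pi2 = sum(prob * (v2(q(th1_hat, s), s) + t2_mod(th1_hat, s))
                  for s in types)
    assert abs(exp_pi2) < 1e-12
```

An analogous modification of t_1 removes agent 2's influence on agent 1's expected payoff; together, the two modified transfers mirror the construction in the proof.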

Remark
The construction of the social-preference-robust mechanism is based on two properties. The first is that under this mechanism agent i's action does not affect the expected payoff that he assigns to agent j. This property is referred to in the literature as externality-freeness. The second property is that the mechanism is incentive compatible. Externality-freeness and incentive compatibility imply the social-preference-robustness of the mechanism not only in the particular model of this paper but in every model of social preferences in which agents behave selfishly whenever they cannot affect other agents' payoffs.

The impossibility of ex-post implementation
Another renowned and important dimension of robustness is robustness to the payoff information of others. A mechanism is payoff-information-robust if it ensures the implementability of the decision rule for any belief and higher-order beliefs of the agents about the payoff types of the other agents (see Bergemann and Morris 2005). Wilson (1987) suggests that mechanisms should be free from assumptions of common knowledge. The question then arises whether it is possible to implement decision rules in environments where there is no common knowledge of the distribution of other agents' payoff signals nor of the presence and extent of social preferences. Robustness in both of these dimensions is captured by the notion of ex-post implementation, which requires that the strategy of each agent i be optimal with respect to the strategies of the other agents for every possible realization of payoff signals and social preferences. In the following theorem, I show that it is impossible to ex-post implement a decision rule that depends only on agents' payoff signals. This result implies that it is impossible to construct a mechanism that is robust in both dimensions.
Theorem 2 Consider a profile (Θ, (v_h)_{h∈I}) and a decision rule q(θ). If there exist two signals θ_i and θ′_i of agent i and two signals θ_j and θ′_j of agent j such that q(θ_i, θ_j) ≠ q(θ′_i, θ_j) and q(θ_i, θ′_j) ≠ q(θ′_i, θ′_j), then q(θ) is not ex-post implementable.

Theorem 2 implies the impossibility of ex-post implementation of non-trivial deterministic decision rules in the standard settings of the mechanism design literature, such as auctions and public goods environments. In these environments, the assumption of Theorem 2 is satisfied whenever one agent is pivotal for two different types of the other agent. For example, consider a single-unit auction with two agents. For each agent the type set is an interval [θ̲, θ̄]. Any deterministic decision rule q(θ_i, θ_j) with the property that there exist two types of agent j, θ̃_j and θ′_j, for which q(·, θ̃_j) and q(·, θ′_j) are non-trivial functions of agent i's type is not ex-post implementable. The argument behind Theorem 2 is the following. Ex-post implementation implies that for any two signals θ_i and θ′_i the payoff of agent j must remain equal on a subset of measure one of the interval [θ_i, θ′_i] for any fixed (α_j, θ_j). Therefore, if the decision rule assigns different alternatives to θ_i and θ′_i, and if agent j's valuation is different for each alternative, it is left for agent j's transfer function t_j to eliminate this gap in agent j's payoff. However, t_j also plays a role in incentivizing agent j to report truthfully. These two roles of t_j lead to a contradiction and hence make ex-post implementation impossible.
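The pivotality condition behind the theorem's assumption can be illustrated with a short sketch (the discretized type grid and the particular threshold rule below are illustrative assumptions, not the paper's):

```python
# Sketch of when the assumption of Theorem 2 holds in a single-unit
# auction with a deterministic threshold rule.

grid = [0.1, 0.3, 0.5, 0.7, 0.9]     # discretized type space

def q(ti, tj):
    # agent i receives the item iff his type exceeds agent j's type
    return 'a' if ti > tj else 'b'

# Agent i is pivotal at t_j if varying his own report can flip the
# chosen alternative.
pivotal_js = [tj for tj in grid
              if len({q(ti, tj) for ti in grid}) == 2]

# Agent i is pivotal for (at least) two different types of agent j,
# so the assumption of Theorem 2 is satisfied for this rule.
assert len(pivotal_js) >= 2
```

Only at the top of the grid (where agent i can never win) does pivotality fail, which is why essentially every non-trivial threshold rule satisfies the assumption.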
Lemma 3 Let q(θ) be ex-post implementable and consider some (α_j, θ_j). For every two signals θ_i and θ′_i and every α_i, the payoff of agent j is the same under θ_i and under θ′_i on a subset of measure one of the interval [θ_i, θ′_i]; that is, agent i's report of his payoff signal cannot affect agent j's payoff.

Proof of Lemma 3
Consider some (α_j, θ_j). The payoff of agent j given (α_j, θ_j), as a function of agent i's report (θ̂_i, α̂_i), is

Π_j(α_j, θ_j, θ̂_i, α̂_i) = v_j(q(θ_j, θ̂_i), θ_j) + t_j(α_j, θ_j, θ̂_i, α̂_i).

The transfer of agent i given (α_j, θ_j), as a function of agent i's report, is t_i(θ̂_i, α̂_i, α_j, θ_j). Agent i's utility function given (α_j, θ_j) is therefore

v_i(q(θ̂_i, θ_j), θ_i) + α_i · Π_j(α_j, θ_j, θ̂_i, α̂_i) + t_i(θ̂_i, α̂_i, α_j, θ_j).

Now assume that agent i reports α_i truthfully. Ex-post implementability implies that he must report θ_i truthfully. The problem is therefore to incentivize agent i to report θ_i truthfully when his utility function is the expression above with α̂_i = α_i. Defining t̃_i(θ̂_i, θ_j) = α_i · Π_j(α_j, θ_j, θ̂_i, α_i) + t_i(θ̂_i, α_i, α_j, θ_j), this problem is equivalent to the problem of incentivizing agent i to report θ_i truthfully in the environment where agents are selfish, given that his utility is v_i(q(θ̂_i, θ_j), θ_i) + t̃_i(θ̂_i, θ_j). Since Θ_i is a convex subset of a finite-dimensional Euclidean space and since v_i(k, θ_i) is a convex function of θ_i, revenue equivalence holds (see Krishna and Maenner 2001); i.e., the transfer to agent i given θ_j in any transfer scheme that implements q(θ) is unique up to a constant. Revenue equivalence thus means that t̃_i(θ̂_i, θ_j) equals the sum of a function that depends on θ̂_i, which I denote by g_i(θ̂_i, θ_j), g_i: Θ_i × Θ_j → ℝ, and a constant, which I denote by c_i(α, θ_j), c_i: D × Θ_j → ℝ, where D denotes the set of profiles of social-preference signals. Hence a truthful report of θ_i implies that for every θ_i ∈ [θ_i, θ′_i] and every α_i,

α_i · Π_j(α_j, θ_j, θ_i, α_i) + t_i(θ_i, α_i, α_j, θ_j) = g_i(θ_i, θ_j) + c_i(α, θ_j). (1)

On the other hand, assume that agent i reports θ_i truthfully. Ex-post implementability implies that he must report α_i truthfully; i.e., a report α̂_i ≠ α_i cannot yield agent i a higher utility.

(Regarding the auction example above: ex-post implementability in the independent private values setting implies that both q(·, θ̃_j) and q(·, θ′_j) have thresholds (not necessarily the same) such that agent i receives the item if and only if his reported type exceeds the threshold. Therefore, we can restrict attention to non-trivial functions q(·, θ̃_j) and q(·, θ′_j) with this threshold property; the threshold property implies that the assumption of Theorem 2 holds.)

Moreover, a report θ̂_i ≠ θ_i with θ̂_i ∈ [θ_i, θ′_i] cannot yield agent i a higher utility. Subtracting v_i(q(θ_i, θ_j), θ_i) from both sides of this incentive inequality yields a condition that must hold for every θ̂_i ∈ [θ_i, θ′_i]; this condition is Eq. (2). Combining Eqs. (1) and (2) yields that for every θ_i ∈ [θ_i, θ′_i] and every α_i, agent j's payoff Π_j(α_j, θ_j, θ_i, α_i) does not depend on θ_i, which is the statement of the lemma.

I now complete the proof by showing that the requirements that Lemma 3 imposes on agent j's transfer function contradict the requirements that incentive compatibility imposes on agent j's transfer function.
Proof of Theorem 2 Assume that α_j = 0. According to the assumption of the theorem, there exist signals θ_i and θ′_i to which the decision rule assigns different alternatives given θ_j. In addition, Lemma 3 implies that we can find a signal θ_i such that agent j's transfer must offset the gap v_j(q(θ_i, θ_j), θ_j) − v_j(q(θ′_i, θ_j), θ_j) in agent j's valuation. However, since α_j = 0, for agent j to report truthfully the function t_j must assign the same transfer to signals of agent j that lead to the same alternative for a given report of agent i. (This stems from the following result. Let u(θ, θ̂) = θ·q(θ̂) + t(θ̂). If for every θ ∈ [θ̲, θ̄] we have θ ∈ argmax_{θ̂∈[θ̲,θ̄]} u(θ, θ̂), then for every θ ∈ [θ̲, θ̄], t(θ) + θ·q(θ) = t(θ̲) + θ̲·q(θ̲) + ∫_{θ̲}^{θ} q(s) ds.) These two requirements on t_j cannot be satisfied simultaneously, a contradiction. ◻

Remark Theorem 2 concerns decision rules that depend only on agents' payoff signals. However, throughout the analysis I have allowed agents' transfers to depend also on the information about the agents' social preferences. In that sense, the theorem shows that the implementation of non-constant decision rules is not robust to social preferences. The literature on mechanism design with social preferences speaks of mechanisms that are robust to social preferences; in such mechanisms, not only the decision rule but also the agents' transfers do not depend on information about social preferences. Therefore, Theorem 2 establishes a stronger result, one that implies the nonexistence of social-preference-robust mechanisms.
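The payoff-equivalence result invoked in the proof (for a selfish agent with utility θ·q + t) can be checked numerically for a simple threshold rule; the threshold p, the transfer, and the discretization are illustrative assumptions:

```python
# Numerical sanity check (a sketch) of the payoff-equivalence formula
# for a selfish agent with utility u(theta, r) = theta*q(r) + t(r):
# U(theta) = U(lo) + integral_{lo}^{theta} q(s) ds under truth telling.

p = 0.4      # threshold: the agent "wins" iff the report exceeds p
lo = 0.0     # lowest type

def q(r):
    return 1.0 if r > p else 0.0

def t(r):
    return -p if r > p else 0.0   # pay the threshold price upon winning

def U(theta):
    # equilibrium (truthful) payoff of type theta
    return theta * q(theta) + t(theta)

def integral_q(theta, n=100000):
    # midpoint-rule integral of q over [lo, theta]
    h = (theta - lo) / n
    return sum(q(lo + (k + 0.5) * h) for k in range(n)) * h

for theta in [0.1, 0.5, 0.9]:
    assert abs(U(theta) - (U(lo) + integral_q(theta))) < 1e-3
```

With this transfer, truth telling is optimal for every type, and the computed payoffs match the envelope formula up to discretization error; any other incentive-compatible transfer for the same rule could differ only by a constant.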

Remark
The proof of Theorem 2 is based on two claims. The first claim, which appears in Lemma 3, is that ex-post implementation implies that the property of externality-freeness, i.e., the property that agent i cannot affect the payoff of agent j, must hold for every realization of agent j's payoff signals. The second claim is that externality-freeness and ex-post incentive compatibility in the case where agent j is selfish cannot coexist. While the first claim depends on the specific model of social preferences, the second claim does not. This means that in any model of mechanism design with social preferences it is impossible to construct a mechanism that is social-preference-robust and payoff-information-robust by constructing a mechanism that is both externality-free for every realization of signals and ex-post incentive compatible in the environment where agents are selfish. Moreover, in any model of mechanism design with social preferences, to prove that mechanisms that are social-preference-robust and payoff-information-robust do not exist, it suffices to show that ex-post implementability implies that externality-freeness must hold for every realization of payoff signals.
Remark Bierbrauer et al. (2017) consider a bilateral trade problem in an environment where both the buyer and the seller have two types. They present non-trivial mechanisms that are social-preference-robust and payoff-information-robust by constructing a mechanism that is both externality-free for every signal realization and ex-post incentive compatible in the environment where agents are selfish. The construction of such a mechanism is possible because the decision rules they consider do not satisfy the assumption of Theorem 2. That is, there is no agent i who is pivotal between the two alternatives a and b for two different types of agent j.

Remark
The ex-post implementation of a decision rule q(θ) under the assumption that an agent's social preferences are privately known implies that q(θ) is implementable in a model where the profile of agents' social-preference signals is commonly known. Under that assumption, the model of the paper corresponds to a model with interdependent separable valuations. Jehiel et al. (2006) show an impossibility result of ex-post implementation in models with interdependent valuations. However, their result does not imply the impossibility of ex-post implementation in the model of this paper, for the following reasons. First, the result of Jehiel et al. (2006) depends on the assumption that the payoff type of each agent is multi-dimensional, while I allow the agents' payoff types to be uni-dimensional. When agents' types are uni-dimensional, it is possible to implement non-trivial decision rules in models with interdependent valuations. Second, when the agents' social-preference signals are commonly known, the model of the paper corresponds to a model with interdependent separable valuations, and the result of Jehiel et al. (2006) does not apply to such models. Indeed, non-trivial ex-post implementation is possible in models with interdependent separable valuations. I further discuss the differences between the social preferences model and the interdependent valuation model in Sect. 4.2.

Decisions that depend on social preferences
In the previous section, I discussed the notion of social-preference robustness. This notion is suited to situations where the designer does not want to condition her decision on information about the agents' social preferences. In this section, I consider the possibility of ex-post implementation of decision rules that depend both on information about the agents' payoffs and on information about the extent of the agents' social preferences. I present an impossibility result on ex-post implementation in environments where there is at least one agent whose utility depends on the payoff of a selfish agent. This result shows that, at least in this important class of environments, the possibility of conditioning decision rules on information about social preferences does not create enough freedom to enable ex-post implementation. I consider the 2 × 2 model that I presented in Sect. 2, except that now agent 2 is selfish; i.e., u_1 = Π_1 + α_1 · Π_2 and u_2 = Π_2. The impossibility theorem, Proposition 6, and its proof are relegated to the Appendix. In the following, I illustrate the theorem and its proof by considering an example of an allocation problem of a single good.
Example 4 Consider a principal who is looking to allocate a single indivisible good between two agents. Each of the agents has a value for the good in [0, 1], and the decision rule is q(θ_1, α_1, θ_2) = a if θ_1 > (1 + α_1)·θ_2 and q(θ_1, α_1, θ_2) = b otherwise, where q = a is the allocation where agent 1 gets the item and q = b is the allocation where agent 2 gets the item. The impossibility result implies that this decision rule is not ex-post implementable.
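As a numeric sketch of the example's allocation rule (assuming the utilitarian threshold form θ_1 > (1 + α_1)·θ_2; this form is an assumption inferred from the surrounding discussion, since total utility u_1 + u_2 equals θ_1 under allocation a and (1 + α_1)·θ_2 under allocation b):

```python
# Sketch of the utilitarian allocation in Example 4, assuming the rule
# allocates the good to agent 1 iff theta_1 > (1 + alpha_1) * theta_2.

def total_utility(alloc, th1, th2, alpha1):
    if alloc == 'a':                 # agent 1 gets the good
        pi1, pi2 = th1, 0.0
    else:                            # agent 2 gets the good
        pi1, pi2 = 0.0, th2
    u1 = pi1 + alpha1 * pi2          # agent 1 has social preferences
    u2 = pi2                         # agent 2 is selfish
    return u1 + u2

def q(th1, alpha1, th2):
    return 'a' if th1 > (1 + alpha1) * th2 else 'b'

# The threshold rule picks the utility-maximizing allocation:
for th1 in [0.1, 0.5, 0.9]:
    for th2 in [0.2, 0.6]:
        for alpha1 in [0.0, 0.5, 1.0]:
            best = max('ab', key=lambda k: total_utility(k, th1, th2, alpha1))
            if abs(th1 - (1 + alpha1) * th2) > 1e-9:   # away from ties
                assert q(th1, alpha1, th2) == best
```

The check simply confirms that, away from ties, the threshold comparison and the explicit utility maximization coincide.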
The impossibility of ex-post implementation follows from the fact that agent 2's transfer appears in the incentive-compatibility conditions of both agents, and no transfer function can satisfy the IC constraints of both of them. The argument is the following. In the above example there exist two signals of agent 1, θ′_1 and θ′′_1, for which agent 1's incentive constraints determine the difference t_2(θ′_1, α_1, a) − t_2(θ′′_1, α_1, a), where t_2(θ′_1, α_1, a) is agent 2's transfer for alternative a conditional on the report (θ′_1, α_1) and t_2(θ′′_1, α_1, a) is agent 2's transfer for alternative a conditional on the report (θ′′_1, α_1). In an identical way, agent 1's incentive constraints determine the difference t_2(θ′_1, α_1, b) − t_2(θ′′_1, α_1, b), where t_2(θ′_1, α_1, b) and t_2(θ′′_1, α_1, b) are agent 2's transfers for alternative b conditional on the reports (θ′_1, α_1) and (θ′′_1, α_1), respectively. Assume that agent 1's type is θ′_1. Incentive compatibility implies that agent 2 does not want to report θ′′_2; this yields an inequality on agent 2's transfers that must hold for every α_1 in its range. Now assume that agent 1's type is θ′′_1. Incentive compatibility implies that agent 2 does not want to report θ′_2; this yields a second inequality that must hold for every α_1 in its range. In addition, we can find a value of α_1 for which these inequalities pin down differences in agent 2's transfers that are incompatible with the differences determined by agent 1's incentive constraints, a contradiction.

Social preferences vs. interdependent values
In this paper, I presented impossibility theorems regarding ex-post implementation in a model with social preferences. Jehiel et al. (2006) present an impossibility result on ex-post implementation in a model with interdependent values. Although the social preferences model resembles the model in Jehiel et al. (2006), it differs from their model in the following important respect. In the social preferences model, an agent's utility depends on the other agent's signals and transfers, while in the interdependent values model an agent's utility depends only on the other agent's signals. In the interdependent values model, agent i's report affects his utility through the decision rule and his personal transfer, while in the social preferences model agent i's report affects his utility through the decision rule, his personal transfer, and the personal transfer of agent j. That is, in the social preferences model mechanisms affect agents' incentives in a more complex way than in the interdependent values model. On the one hand, since an agent's utility is affected by the other agent's transfer, mechanisms provide more tools to achieve implementation. On the other hand, since each agent's transfer also affects the incentives of the other agent, mechanisms also impose further restrictions on achieving implementation.
To illustrate the difference between the models, consider the interdependent values model where agent i's utility function is v_i(q, θ_i) + α_i·v_j(q, θ_j) + t_i, where q ∈ A, whereas in the social preferences model agent i's utility is v_i(q, θ_i) + α_i·v_j(q, θ_j) + z_i, where z_i = α_i·t_j + t_i. The difference between the models is that t_i depends only on agent i's reported signal and not on her actual signal, while z_i depends both on agent i's reported signal and on her actual signal. (Note that while the effect of the agent's personal transfer on his utility is independent of the realization of signals, the effect of the other agent's transfer on his utility depends on the realization of signals.) To further illustrate the difference between the models, I analyze two examples that show that the impossibility of ex-post implementation in one model does not imply the impossibility of ex-post implementation in the other model. In the first example, I present a decision rule that is not ex-post implementable in the social preferences model but is ex-post implementable in the interdependent values model. In the second example, I present decision rules that are ex-post implementable in the social preferences model but are not ex-post implementable in the interdependent values model.

Example 4 (continued)
Consider the setup of Example 4 (for which it has been shown that the optimal decision rule is not ex-post implementable in the social preferences model) in the interdependent values model. The optimal decision rule is ex-post implementable in the interdependent values model by applying the following transfer scheme:

t_1(θ̂_1, α̂_1, θ_2) = −θ_2 if θ̂_1 > (1 + α̂_1)·θ_2, and t_1(θ̂_1, α̂_1, θ_2) = 0 otherwise.

Under these transfer functions, any type (θ_1, α_1) of agent 1 finds truthful reporting optimal irrespective of the other agent's type; therefore, the decision rule is ex-post implementable. (Ex-post implementation is possible here because the assumption of Theorem 2 does not hold.)

Another way to compare the two models is to adapt the utilities in the social preferences model to the standard quasi-linear form by separating the term that depends on agent i's private signal from the term that depends solely on her report. To this end, define V_i(q, θ, t_j) = v_i(q, θ_i) + α_i·(v_j(q, θ_j) + t_j), so that agent i's utility is V_i(q, θ, t_j) + t_i; the mechanism then affects V_i through both q and t_j, while in the interdependent values model the term in agent i's utility that depends on her private signal is her valuation, which is affected by the mechanism only through q.
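A quick numeric check of the transfer scheme for agent 1 (a sketch: the pivot-style reading t_1 = −θ_2 upon allocation a under the threshold θ̂_1 > (1 + α̂_1)·θ_2 is an assumed reconstruction, not necessarily the paper's verbatim formula):

```python
# Check that, under the assumed pivot-style transfer, truth telling is
# optimal for agent 1 in the interdependent values model: reporting
# truthfully selects the better of "win at price theta_2" and
# "lose and enjoy alpha_1 * theta_2".

def u1(r_th1, r_a1, th1, a1, th2):
    # allocation 'a' iff the reported theta_1 exceeds
    # (1 + reported alpha_1) * theta_2; transfer -theta_2 upon 'a'
    win = r_th1 > (1 + r_a1) * th2
    if win:
        return th1 - th2          # v_1 = th1, t_1 = -th2, v_2(a) = 0
    return a1 * th2               # v_1 = 0, alpha_1 * v_2(b) = a1 * th2

grid = [0.0, 0.25, 0.5, 0.75, 1.0]
for th1 in grid:
    for a1 in grid:
        for th2 in grid:
            truthful = u1(th1, a1, th1, a1, th2)
            best = max(u1(r1, ra, th1, a1, th2)
                       for r1 in grid for ra in grid)
            assert truthful >= best - 1e-12
```

Any report only determines whether agent 1 wins, so his attainable utilities are θ_1 − θ_2 (win) or α_1·θ_2 (lose), and the truthful report picks the larger of the two.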
I now analyze the interdependent values model and show that it is impossible to ex-post implement non-constant decision rules in this model. Consider an arbitrary type (θ̃_j, α̃_j) of agent j, j ≠ i. Ex-post implementability implies that for every (θ_i, α_i), (θ′_i, α′_i) ∈ [0, 1]² such that q(θ_i, θ̃_j) = q(θ′_i, θ̃_j) we have t_i(θ_i, α_i, θ̃_j, α̃_j) = t_i(θ′_i, α′_i, θ̃_j, α̃_j). That is, agent i's transfer function depends only on the chosen alternative; hence, we denote t_i(θ_i, α_i, θ̃_j, α̃_j) := t_i(q(θ_i, θ_j), θ̃_j, α̃_j). Consider a non-constant decision rule q(θ). Look at a type (θ̃_j, α̃_j) of agent j for which agent i is pivotal. This means that there exist two signals θ′_i and θ′′_i such that q(θ′_i, θ̃_j) = a and q(θ′′_i, θ̃_j) = b. Now, ex-post implementability implies that for every α_i ∈ [0, 1] the signal θ*_i at the boundary between the two alternatives must be indifferent between them; that is, for every α_i ∈ [0, 1],

v_i(a, θ*_i) + α_i·v_j(a, θ̃_j) + t_i(a, θ̃_j, α̃_j) = v_i(b, θ*_i) + α_i·v_j(b, θ̃_j) + t_i(b, θ̃_j, α̃_j),

and hence we get that for every α_i ∈ [0, 1],

α_i·(v_j(a, θ̃_j) − v_j(b, θ̃_j)) = v_i(b, θ*_i) − v_i(a, θ*_i) + t_i(b, θ̃_j, α̃_j) − t_i(a, θ̃_j, α̃_j).

Since the left-hand side of the equation varies with α_i and the right-hand side of the equation is constant, we reach a contradiction.

Conclusion
I have considered the possibility of ex-post implementation in a model with social preferences where each agent holds private information about his personal payoff from allocations and about the extent of his social preferences. I presented an impossibility result on the ex-post implementation of decision rules that depend only on information about agents' payoffs. This result implies that it is impossible to construct mechanisms that are social-preference-robust and payoff-information-robust. The impossibility result also shows that in any model with social preferences it would be impossible to construct a mechanism that is social-preference-robust and payoff-information-robust by constructing a mechanism that is both externality-free and incentive compatible.