1 Introduction

Mechanism design theory studies institutions with privately informed agents. Using the tools of game theory, it proposes rules of interaction such that the participants’ strategic behavior complies with the designer’s objective. In a leading example, the designer’s purpose is to implement the socially efficient outcome, that is, to find the allocation that maximizes total welfare. The major challenge to efficient implementation is the fact that information about individual preferences is private.Footnote 1 In a setting with quasi-linear utilities, d’Aspremont and Gérard-Varet (1979) construct an ingenious mechanism that aligns the agents’ individual incentives with total welfare maximization. In a Bayes–Nash equilibrium, the agents reveal their types to the principal and thus efficiency can be achieved. The AGV mechanism has become an essential building block of mechanism design theory (Athey and Segal 2013).

Since the AGV mechanism is tailored to the concept of Bayes–Nash equilibrium, its success in inducing truth-telling, and therefore efficiency, in practice depends on (1) whether the participants’ behavioral response to the mechanism coincides with the Bayes–Nash prediction and, if it does not, (2) whether efficiency still obtains under the possible deviations. While the first question has not been addressed directly in the literature, experimental results in (simpler) complete information games suggest that the answer may be negative. As to the second question, little is known about the loss of efficiency when the participants do not play equilibrium. This paper tries to fill this gap by studying how the mechanism performs in a behavioral framework where, contrary to the requirement of Bayes–Nash equilibrium, the agents conduct only a limited number of iterations of reasoning. The choice of the behavioral setting follows a large body of evidence from experimental games. Recent surveys by Crawford et al. (2009) and Camerer and Ho (2015) show that non-equilibrium models with finite depth of reasoning, such as the Level-k model (Lk; Nagel 1995; Stahl and Wilson 1994; Costa-Gomes et al. 2001; Costa-Gomes and Crawford 2006) and the cognitive hierarchy model (CH; Camerer et al. 2004), systematically outperform equilibrium in predicting human behavior. Along with closely fitting the lab data, these models are able to predict some frequently observed field phenomena, such as the winner’s curse in common-value auctions (see Crawford and Iriberri 2007). We choose the Lk model for its tractability, but most of our results also hold in the CH model.Footnote 2

Lk is a model of reasoning prior to a game, where the agent maximizes his payoff against a non-equilibrium belief about other agents’ strategies. The belief is constructed in the following iterative process. An agent of level \(k=1\) (“L1” agent) believes that his opponents (“L0”) behave non-strategically. In incomplete information games, such as the AGV mechanism, L0s can be modeled in two distinct ways: either they truthfully reveal their type (“truthful L0”) or draw their actions (type reports) from a random distribution (“random L0”). An L2 agent best replies to the profile of L1 strategies, L3 best replies to L2, and so on. In general, an Lk strategy is a best reply to the profile of \(L(k-1)\) strategies, suggesting the interpretation that agents try to “outguess” their opponents.Footnote 3 To illustrate, consider a seminal game in this literature,Footnote 4 where players pick a number between 0 and 100 and the one whose number is closest to some fraction, say one half, of the average wins the game. In this guessing game, if L0s randomize uniformly between 0 and 100, L1s will choose \(50/2=25\), L2s will choose \(25/2\), etc. As k increases, the best response of Lk approaches 0, the only Nash equilibrium of the game.
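To make the iteration concrete, the following minimal sketch (our illustration, not part of the original analysis) computes the Lk choices in this guessing game under the stated assumptions: L0 randomizes uniformly on [0, 100], the target fraction is one half, and each Lk best replies to the mean choice of the \(L(k-1)\) population.

```python
# Level-k choices in the guessing game: a minimal sketch assuming L0 is
# uniform on [0, 100] and the target is one half of the average. Each Lk
# (approximately) best replies to the mean of the L(k-1) choices.
p = 0.5        # target fraction of the average
choice = 50.0  # mean of the uniform L0 distribution on [0, 100]
for k in range(1, 6):
    choice = p * choice  # Lk best reply to a population of L(k-1) players
    print(f"L{k} chooses {choice}")  # 25.0, 12.5, 6.25, ... -> 0 (Nash)
```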

This paper applies the Lk model to the AGV mechanism with one-dimensional types. We look at the case where the principal knows the type distribution and expects equilibrium behavior on the part of the agents. Such a principal is unaware that he operates in an Lk environment. In this setting we conduct a positive exercise and find conditions under which the mechanism remains robust to Lk. Throughout the paper we assume independent private valuations and utilities that are strictly concave with respect to the allocation.Footnote 5 First, we observe that in the truthful-L0 specification of the Lk model the mechanism never produces a loss in efficiency. In that specification, the L1 best reply is given by the equilibrium condition of AGV, which implies truth-telling. By induction, this result extends to any higher level k; therefore, the mechanism chooses the efficient allocation irrespective of the levels prevailing in the population.

Further, in the random-L0 specification of Lk, we show that if the distribution of random actions (L0) coincides with the distribution of payoff types, then the participants at any level larger than zero report truthfully to the mechanism. Next, we analyze the more interesting case where the type distribution used by the planner to assign transfers differs from L1s’ expectation of the opponents’ actions. In this case, the externality payment generally fails to align the agent’s incentives with total expected welfare maximization. As a result, the AGV mechanism does not induce truth-telling and produces a sub-optimal allocation. Denoting the distribution of random L0 strategies by \(\Phi \) and the distribution of types by F, we study how the relation between \(\Phi \) and F affects the Lk strategies in the mechanism.

We focus on the case where \(\Phi \) dominates F (in the sense of first-order stochastic dominance) or vice versa. This corresponds to scenarios where players believe that a salient strategy is to systematically under- or over-report one’s type. The main result characterizes the deviations from equilibrium behavior for the case where the efficient choice rule is linear in agents’ types (the environment we call neutral). If L0 agents are expected to under-report their types, then all types of an L1 agent will over-report to the mechanism, and vice versa. L1 agents thus display a compensatory bias in reports. The distortion carries over to higher levels, but the expected absolute value of the distortion decreases as the level k goes up; in the case of quadratic utilities, the rate of decrease is exponential. Interestingly, the direction of the bias (i.e., whether the agents over-report or under-report their types) alternates at each iteration from k to \({k+1}\). This result has two interesting implications for the outcome of the mechanism. First, if the pool of agents is a mixture of two subsequent levels (e.g., L2 and L3), the distortion of efficiency is lower than in a group where only one of these levels is present. Second, as k goes up, the outcome approaches efficiency.

The results extend partially to the non-neutral case where types are complements or substitutes with respect to the efficient choice of allocation. Non-neutrality means that the marginal effect of one agent’s type on the efficient allocation is not invariant in the other agent’s type. In particular, when the other agent’s type is high, the marginal effect is stronger in the case of complements and weaker in the case of substitutes. In either of these environments reports have two countervailing effects on the choice of allocation. The first, direct effect of the compensating bias pushes the allocation in the direction of marginal payoff increase. The second, indirect effect changes the choice rule’s sensitivity to the opponent’s report. The compensating bias therefore remains a best reply in type ranges where the direct effect dominates. We demonstrate by means of an example that the dominance of the indirect effect changes the prediction.

While the main interest of this paper is positive, we conduct a separate normative analysis of the AGV mechanism. This part is concerned with a principal who is aware of the Lk environment and seeks the appropriate AGV-type mechanism for efficient implementation. In particular, we change the transfer rule to reflect the actual expected externality (under the level-k strategy profile) and thus to elicit the information correctly.Footnote 6 The Lk environment is characterized by three components: the type distribution F, the random actions distribution \(\Phi \), and the agents’ levels k. When all three components are known, the efficient Lk mechanism differs from the original AGV in its transfer to L1 agents only. By correcting the incentives at level 1, the principal restores truth-telling at all levels and achieves efficiency. When the information on \(F, \Phi \) or k is missing, the principal can expand the mechanism to elicit the agents’ knowledge. One way to do this is to add a betting round where the agents guess each other’s reports. Ex post, the principal rewards correct guesses. Betting is a powerful tool for the elicitation of correlated informationFootnote 7 and turns out to be instrumental in the Lk environment. We show how betting can be used to elicit the levels k and other information necessary to construct the efficient mechanism.

This paper is among the first studies of mechanisms in an Lk environment. Crawford (2015) looks at the double auction mechanism and revisits Myerson and Satterthwaite’s (1983) impossibility result in the Lk framework. He finds, in particular, that the revelation principle does not hold in this framework, since the choice of mechanism influences the correctness of Lk beliefs. Similar to his paper, the normative part of our analysis exploits the predictably incorrect beliefs of Lk agents. De Clippel et al. (2014) provide a characterization of implementable choice functions in a general setup with finite depth of reasoning. They consider the expected externality mechanism as an example and show that it achieves efficient implementation under the assumption that L0s report truthfully. In contrast, the present paper allows L0 behavior to be random and arbitrarily far from truth-telling.

The rest of this paper is organized as follows. Section 2 presents the key assumptions and the Lk model in incomplete information games, and in the AGV mechanism in particular. Section 3 describes the properties of Lk strategies in the AGV mechanism: the equivalence of the Lk and equilibrium models in the AGV mechanism, the biases due to first-order stochastic dominance, and convergence in the neutral environment. Section 4 shows how the AGV mechanism can be adjusted to the Lk environment, and Sect. 5 concludes.

2 The model

Preferences The preference environment is characterized by the following assumptions:

  • A1  Utilities are linear in money.

  • A2  Values are private.

  • A3  Values are independent draws from a commonly known distribution F with density f.

Assumptions A1 and A2 imply that the utility function of a given agent \(i\in I=\left\{ 1,2,\ldots ,n\right\} \), \(n\ge 2\), can be represented as:

$$\begin{aligned} v_{i}\left( x,\theta _{i}\right) +T_{i}, \end{aligned}$$
(1)

where \(v_{i}\left( x,\theta _{i}\right) \) is the utility derived from allocation \(x\in X\subseteq \mathcal {R}\), \(\theta _{i}\) is the privately known preference parameter that we refer to as the agent’s type, and \(T_{i}\) is the monetary transfer to agent i. Agent types \(\theta _{i}\) are drawn independently from \(\Theta \), a compact subset of \(\mathcal {R}\), according to the distribution F. We assume that \(v_{i}\left( x,\theta _{i}\right) \) is strictly concave in x and continuously differentiable with respect to both arguments on the entire domain. Some of our results require that the preferences satisfy a single-crossing (Spence–Mirrlees) condition. The condition postulates that the cross-derivative of \(v_{i}\left( x,\theta _{i}\right) \) with respect to allocation x and type \(\theta _{i}\) has constant sign over the function’s domain:

  • A4. \(v_{i}\left( x,\theta _{i}\right) \) satisfies the Spence–Mirrlees condition, i.e., either A4.1 or A4.2 holds:

    • A4.1  \(\frac{\partial ^{2}v_{i}}{\partial x\partial \theta _{i}}\left( x,\theta _{i}\right) >0\)   for all i and all \(\left( x,\theta _{i}\right) \in X\times \Theta \),

    • A4.2  \(\frac{\partial ^{2}v_{i}}{\partial x\partial \theta _{i}}\left( x,\theta _{i}\right) <0\)   for all i and all \(\left( x,\theta _{i}\right) \in X\times \Theta \).

A1–A4 are the basic assumptions of mechanism design. A further standard assumption is that agents play a Bayes–Nash equilibrium: the profile of strategies is a fixed point of the best-reply correspondence. In this paper, we consider a framework with a finite number of best-reply iterations that do not generally start at equilibrium. This framework is described by the following model (Nagel 1995; Crawford and Iriberri 2007).

Level-k Consider a game of incomplete information where the payoffs are given by \(u_{i}\left( s;\theta _{i}\right) \), for each agent \(i\in I\) of type \(\theta _{i}\) and strategy profile \(s=\left( s_{1},s_{2},\ldots ,s_{n}\right) \), where \(s_{i}(\theta _{i})\), or simply \(s_{i}\), maps the type into an action. We look at agents who engage in iterations of best reply. The Lk strategy \(s_{i}^{(k)}\left( \theta _{i}\right) \) is recursively defined as the function of the agent’s type \(\theta _{i}\) that maximizes his expected payoff against the level-\(\left( k-1\right) \) profile \(s_{-i}^{(k-1)}\left( \theta _{-i}\right) \). The agent believes with certainty that his opponents make exactly \(k-1\) iterations of best reply.Footnote 8 As the starting point of the recursion, the model features nonstrategic L0 agents whose actions \(s_{i}^{(0)}\) are drawn from a given distribution \(\Phi \). By analogy, we say that \(s_{i}^{(0)}\left( \theta _{i}\right) \equiv s_{i}^{(0)}\) is an unobserved random mapping such that the induced cumulative distribution of actions is \(\Phi \) with density \(\varphi \).

Definition

For \(k\ge 1\) the optimal strategy \(s_{i}^{(k)}\) maximizes the expected payoff of agent i against \(s_{-i}^{(k-1)}\):Footnote 9

$$\begin{aligned} s_{i}^{(k)}\left( \theta _{i}\right) =\underset{s_{i}}{\arg \max }\,\mathbb {E}\left[ u_{i}\left( s_{i},s_{-i}^{(k-1)}\left( \theta _{-i}\right) ;\theta _{i}\right) \right] , \end{aligned}$$
(2)

where \(\theta _{-i}\) is the residual profile of types. The expectation is taken over the residual types and the mappings \(s_{-i}^{(0)}\). The following simple observation establishes the relation between the Lk and equilibrium strategy profiles.Footnote 10

Observation

If \(s^{(k)}\left( \theta \right) =s^{(k+1)}\left( \theta \right) \) for some \(k\ge 1\) and all \(\theta \in \Theta \), then \(s^{(k)}\left( \theta \right) \) constitutes a Bayes–Nash equilibrium.

Choice rules and mechanisms  For a quasi-linear utility representation (1), we define a choice rule \(x^{*}\left( \theta \right) \) as efficient if it maximizes the total welfare for every profile of agents’ types \(\theta =\left( \theta _{1},\theta _{2},\ldots \theta _{n}\right) \):

$$\begin{aligned} x^{*}\left( \theta \right) \in \underset{x\in X}{\arg \max }\sum \limits _{i}v_{i}\left( x;\theta _{i}\right) \end{aligned}$$
(3)

We look at a direct mechanism, where the agents report their types to the principal: i’s report \(s_{i}\) is a member of \(\Theta \).Footnote 11 A mechanism implements the choice rule \(x^{*}\left( \cdot \right) \) if the profile of truth-telling reports is an equilibrium. The expected externality mechanism introduced in d’Aspremont and Gérard-Varet (AGV, 1979) is an example of such a mechanism. AGV chooses the efficient allocation \(x^{*}\left( \cdot \right) \) and assigns the following monetary transfers to the participants:

$$\begin{aligned} T_{i}\left( s\right) =t_{i}\left( s_{i}\right) -\frac{1}{n-1}\sum \limits _{l\ne i}t_{l}\left( s_{l}\right) , \end{aligned}$$
(4)

where

$$\begin{aligned} t_{i}\left( s_{i}\right) =\mathbb {E}\sum \limits _{j\ne i}v_{j}\left( x^{*}\left( s_{i},\theta _{-i}\right) ;\theta _{j}\right) . \end{aligned}$$
(5)

The transfer \(t_{i}\left( s_{i}\right) \) is constructed such that agent i internalizes the expected effect of his report on the others’ welfare, assuming they tell the truth. This guarantees that agent i’s incentives are aligned with total welfare maximization; therefore, truth-telling is a Bayes–Nash equilibrium. Note that this immediately implies that in the truthful-L0 specification of the Lk model efficient implementation obtains for any k.

The second part of the transfer, \(\frac{1}{n-1}\sum \nolimits _{l\ne i}t_{l}\left( s_{l}\right) \), guarantees that the mechanism satisfies ex post budget balance. In particular, in the level-k model the transfers add up to zero after any profile of reports s.Footnote 12 Note that this part of the transfer does not depend on i’s own report \(s_{i}\); therefore, it can be omitted from the analysis of incentives.

Level-k in the Mechanism In the expected externality mechanism, an Lk agent, \(k\ge 1\), maximizes his expected gain:

$$\begin{aligned} \mathbb {E}\,\left[ v_{i}\left( x^{*}\left( s_{i},s_{-i}^{(k-1)}\left( \theta _{-i}\right) \right) ;\theta _{i}\right) +t_{i}\left( s_{i}\right) \right] \end{aligned}$$
(6)

Given the incentive transfer (5), the optimal Lk strategy in the mechanism is defined by the following:Footnote 13

$$\begin{aligned} s_{i}^{(k)}\left( \theta _{i}\right) =\underset{s_{i}\in \Theta }{\arg \max }\,\mathbb {E}\left[ v_{i}\left( x^{*}\left( s_{i},s_{-i}^{(k-1)}\left( \theta _{-i}\right) \right) ;\theta _{i}\right) +\sum \limits _{j\ne i}v_{j}\left( x^{*}\left( s_{i},\theta _{-i}\right) ;\theta _{j}\right) \right] \end{aligned}$$
(7)

Recall that a strategy profile satisfying \(s^{(k)}\left( \theta \right) =s^{(k-1)}\left( \theta \right) \) for all k and \(\theta \) is a Bayes–Nash equilibrium. The following section demonstrates an example where this is not the case and studies the differences between Lk and equilibrium behavior in the AGV mechanism.

3 Unadjusted mechanism

This section takes the AGV mechanism as given and studies its outcomes in the Level-k environment. We establish the conditions under which the mechanism still yields efficient outcomes, and look at the misreporting of preferences that may arise in certain stochastic environments. We start with a simple example to illustrate some of our main findings.

Example

Consider a setting with n agents and a quadratic utility representation \(v_{i}\left( x,\theta _{i}\right) =\theta _{i}x-\frac{x^{2}}{2}\), \(i\in I\). In this setup, agent i has a bliss point at \(\theta _{i}\) and incurs a quadratic loss if the allocation departs from it. It is easy to verify that the socially efficient allocation is the average of the individual bliss points: \(x^{*}\left( \theta \right) =\frac{\sum _{i}\theta _{i}}{n}\). We prove the following simple lemma (see “Appendix”).

Lemma 1

In the quadratic case, the optimal Lk strategy, \(k\ge 1\), for agent i is given by the following:

$$\begin{aligned} s_{i}^{(k)}\left( \theta _{i}\right) =\theta _{i}+\Delta \times \left( -\frac{n-1}{n}\right) ^{k}, \end{aligned}$$
(8)

where \(\Delta =\int sd\Phi (s)-\int \theta dF(\theta )\) denotes the difference between the average random move of an L0 agent and the average type. (With this sign convention, (8) matches the compensating direction of the L1 report: if L0s under-report on average, \(\Delta <0\) and all L1 types over-report.)

The Lk strategy (8) has several interesting properties. First, the size of the distortion diminishes as the level of rationality k increases. As k goes to infinity, the optimal strategies converge to truth-telling. This holds for any pair of distributions F and \(\Phi \). Second, if the distributions have equal means, \(\int \theta dF(\theta )=\int sd\Phi (s)\), then truth-telling obtains at every level of rationality, starting from \(k=1\). Third, the absolute size of the discrepancy \(\left| \Delta \right| \times \left( \frac{n-1}{n}\right) ^{k}\) between the true type \(\theta _{i}\) and the Lk report \(s_{i}^{(k)}\left( \theta _{i}\right) \) increases in the number of agents.
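The decay and alternation can be checked numerically. The sketch below is our illustration under hypothetical parameter values (n = 3, a mean type of 0.5, and a mean L0 move of 0.3): it iterates the first-order condition of problem (7) for the quadratic case, which reduces to a simple recursion in the mean report, and compares the resulting bias with the closed form (8).

```python
# Numerical check of Eq. (8) in the quadratic example -- an illustrative
# sketch, not the authors' code. For v_i = theta_i*x - x**2/2 and
# x*(s) = mean(s), the first-order condition of problem (7) reduces to
#   s_k(theta) = theta + ((n - 1)/n) * (mu_F - m_{k-1}),
# where mu_F is the mean type and m_{k-1} the mean level-(k-1) report.
n = 3              # number of agents (hypothetical)
mu_F = 0.5         # mean of the type distribution F (hypothetical)
m0 = 0.3           # mean of the random L0 distribution Phi (hypothetical)
Delta = m0 - mu_F  # average L0 move minus average type, as in Lemma 1

m = m0
for k in range(1, 8):
    bias = (n - 1) / n * (mu_F - m)      # report bias s_k(theta) - theta
    assert abs(bias - Delta * (-(n - 1) / n) ** k) < 1e-12  # matches Eq. (8)
    m = mu_F + bias                      # mean report at level k
    print(k, f"{bias:+.6f}")             # sign alternates, size shrinks
```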

Next we study these properties in a more general setup. We maintain, however, that the efficient rule is linear in (a function of) types. Formally, we make the following assumption of neutrality:

  • A5. \(\frac{\partial ^{2}x^{*}}{\partial \theta _{i}\partial \theta _{j}}\left( \cdot \right) \equiv 0 \quad \) for all \(i,\,j\in I\).

Level 1 is central to the entire analysis, since any distortion of truth-telling that emerges at L1 propagates to higher levels. The analysis of the L1 optimal strategy:

$$\begin{aligned} s_{i}^{(1)}\left( \theta _{i}\right) =\underset{s_{i}\in \Theta }{\arg \max }\left\{ \mathbb {E}_{s_{-i}^{(0)}}v_{i}\left( x^{*}\left( s_{i},s_{-i}^{(0)}\right) ;\theta _{i}\right) +\sum \limits _{j\ne i}\mathbb {E}_{\theta _{-i}}v_{j}\left( x^{*}\left( s_{i},\theta _{-i}\right) ;\theta _{j}\right) \right\} \end{aligned}$$
(9)

yields the following proposition.

Proposition 1

Under assumptions A1–A3, truth-telling is optimal at all levels of rationality if the distribution of random actions \(\Phi \) and the distribution of types F coincide.

Proposition 1 establishes the equivalence between the equilibrium and Lk predictions of the AGV mechanism’s outcome. It shows that as long as the subjective distribution of random actions coincides with the (objective) distribution of types, it is irrelevant whether the agents stop at a finite level of reasoning or engage in equilibrium thinking. Proposition 1 trivially extends to the cognitive hierarchy (CH) model, since both the Lk and CH models define level 1 equivalently. Overall, the AGV mechanism achieves efficient implementation in four models of reasoning: Lk and CH with truth-telling L0s, and Lk and CH with random L0s and \(F\equiv \Phi \). Observe that the equivalence result relies on neither the linearity of the social choice rule nor the Spence–Mirrlees condition.

If the distributions F and \(\Phi \) do not coincide, Lk agents do not report truthfully in general. To study the report biases, we concentrate on the case where F and \(\Phi \) can be ordered with respect to the first-order stochastic dominance relation, denoted \(\succ _{FOSD}\). This corresponds to scenarios where players believe that a salient strategy is to systematically under- or over-report one’s type. We have the following result.

Proposition 2

Under assumptions A1–A5, L1 agents distort their type reports upwards if \(F\succ _{FOSD}\Phi \), and downwards if \(\Phi \succ _{FOSD}F\). If either \(F\succ _{FOSD}\Phi \) or \(\Phi \succ _{FOSD}F\), then \(\lim _{k\rightarrow \infty }\mathbb {E}_{\theta _{i}}\left| s_{i}^{(k)}(\theta _{i})-\theta _{i}\right| =0\) and \(\mathrm {sgn}\left( s_{i}^{(k)}(\theta _{i})-\theta _{i}\right) =-\mathrm {sgn}\left( s_{i}^{(k-1)}(\theta _{i})-\theta _{i}\right) \) for all i.

The proof of the proposition is given in the “Appendix”. We start with the observation that any n-agent problem can be reduced to a two-agent problem, since stochastic dominance is preserved under monotone transformations and summation of random variables. Then, in the two-agent framework, we analyze the first-order condition of the payoff-maximization problem (9) to obtain the result.

The first part of Proposition 2 states that L1 agents systematically (that is, for every realization of type) misreport their types if one distribution dominates the other in the sense of first-order stochastic dominance. For example, if an L1 agent expects L0 agents’ reports to dominate the type distribution, then L1 will report a lower type than he actually has (and vice versa), even if this induces a less preferred allocation. The reason is that in the AGV mechanism an agent’s report affects both (1) the expected externality, which is calculated based on the true distribution F, and (2) the agent’s own expected value from the allocation, which depends on his own belief \(\Phi \) about the other agents’ reports. If an agent believes the others over-report (\(\Phi \) dominates), he concludes that the allocation is on average higher than it would be under truthful reporting by the others. Given that the utility function is strictly concave, this reduces his perceived marginal value of the allocation; therefore, he under-reports. If higher types prefer lower alternatives (a negative cross-derivative, as in A4.2), then the L0s’ over-reporting makes the chosen alternative lower and L1 over-reports to compensate. In either case, an L1 agent compensates for the opponents’ random behavior by misreporting his type in the opposite direction.

The second part of the proposition states that the expected deviation of reported from true types decreases in absolute value as the level of rationality increases. The sign of the expected deviation alternates at every transition from k to \(k+1\). Thus the optimal level-k strategies follow a pattern similar to the example at the beginning of this section. If level-2 agents overstate their types in the game, then level-3 agents will understate theirs. Note that this is good news for the AGV mechanism: if the group of agents is a mix of, say, level-2 and level-3 agents, then the expected chosen alternative is closer to efficiency.
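A back-of-the-envelope computation in the quadratic case makes the cancellation visible (our sketch; the parameter values repeat the block above, and the even 50/50 mix of levels is an illustrative assumption):

```python
# Mixed-level groups in the quadratic case: with x*(s) = mean(s), the
# allocation error equals the average report bias, and Eq. (8) gives biases
# of opposite sign at adjacent levels, so they partially cancel.
n, Delta = 3, -0.2                       # same illustrative values as above

def bias(k):
    return Delta * (-(n - 1) / n) ** k   # per-agent report bias at level k

err_L2 = abs(bias(2))                    # group of L2 agents only
err_L3 = abs(bias(3))                    # group of L3 agents only
err_mix = abs((bias(2) + bias(3)) / 2)   # an even mix of L2 and L3
print(err_L2, err_L3, err_mix)           # err_mix < err_L3 < err_L2
```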

3.1 Non-neutrality

The assumption of neutrality implies that the marginal effect of an agent’s type on the efficient allocation is invariant in the other agents’ types. However, there are examples of preferences where this assumption is violated. Consider the case with two agents whose preferences are given by \(v_{1}=\theta _{1}x\) for Agent 1 and \(v_{2}=-\frac{x^{2}}{2\theta _{2}}\) (\(\theta _{2}>0\)) for Agent 2. The optimal allocation is \(x^{*}=\theta _{1}\theta _{2}\). Agent 1’s utility in the mechanism (excluding the budget-balancing part)Footnote 14 equals \(v_{1}+t_{1}=\mathbb {E}\left[ \theta _{1}x^{*}\left( {\hat{\theta }}_{1},s_{2}^{(0)}\right) -\frac{\left( x^{*}\left( {\hat{\theta }}_{1},\theta _{2}\right) \right) ^{2}}{2\theta _{2}}\right] =\theta _{1}{\hat{\theta }}_{1}\mathbb {E}s_{2}^{(0)}-\frac{({\hat{\theta }}_{1})^{2}}{2}\mathbb {E}\theta _{2}\). Suppose \(\Phi \) dominates F such that \(\mathbb {E}s_{2}^{(0)}=1\) and \(\mathbb {E}\theta _{2}=0\); then \(v_{1}+t_{1}=\theta _{1}{\hat{\theta }}_{1}\). Thus Agent 1 will over-report if \(\theta _{1}>0\) and under-report if \(\theta _{1}<0\), which is not the prediction of Proposition 2. Contrary to the neutral environment, where \(\Phi \succ F\) would imply under-reporting by all types of an L1 agent (Proposition 2), this example features types that are complements with respect to the optimal allocation: \(\frac{\partial ^{2}x^{*}}{\partial \theta _{1}\partial \theta _{2}}=1>0\). In such environments, the result of Proposition 2 holds only for a subset of types, as we demonstrate below.
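A brute-force check of this example confirms the boundary behavior (our sketch; the report grid on [-1, 1] and the type values of plus or minus 0.4 are illustrative assumptions):

```python
# L1 best reply of Agent 1 in the non-neutral example, assuming
# E[s2_0] = 1 and E[theta_2] = 0 as in the text, and reports in [-1, 1].
# The expected payoff reduces to theta_1 * report, so the optimum sits at
# a boundary: over-report for theta_1 > 0, under-report for theta_1 < 0.
def l1_payoff(report, theta_1, e_s2_0=1.0, e_theta_2=0.0):
    # E[theta_1 * x*(report, s2_0)] + E[v_2(x*(report, theta_2); theta_2)]
    return theta_1 * report * e_s2_0 - 0.5 * report**2 * e_theta_2

grid = [i / 1000 for i in range(-1000, 1001)]
for theta_1 in (-0.4, 0.4):
    best = max(grid, key=lambda r: l1_payoff(r, theta_1))
    print(theta_1, best)  # -1.0 for theta_1 = -0.4, +1.0 for theta_1 = 0.4
```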

Agents’ types are complements Footnote 15 with respect to the efficient rule if \(\frac{\partial ^{2}x^{*}}{\partial \theta _{i}\partial \theta _{j}}>0\) for all \(i\ne j\). Agents’ types are substitutes Footnote 16 with respect to the efficient rule if \(\frac{\partial ^{2}x^{*}}{\partial \theta _{i}\partial \theta _{j}}<0\) for all \(i\ne j\). When types are substitutes, a higher type of agent i lowers the marginal effect of the opponent’s type. If types are complements, the interaction is the opposite: the marginal effect of j’s type increases with the type of agent i.

In this part of the analysis, we distinguish between positive (A4.1) and negative (A4.2) single crossing. Recall that, in the positive case, higher types receive a higher marginal utility from the allocation; in the negative case, the marginal utility diminishes with type. We separate the environments into four groups according to two criteria: first, whether single crossing holds as positive or negative, and, second, whether the chosen alternative’s increment due to an increase in one agent’s report increases or decreases with the other agent’s report (types are complements or substitutes). In the following propositions, we additionally assume the monotone likelihood ratio property (MLRP): the ratio of densities \(\frac{f(t)}{\varphi (t)}\) decreases in t if \(\Phi \succ _{FOSD}F\), and increases in t if \(F\succ _{FOSD}\Phi \).

Proposition 3

  (a) Under A1–A4.1, MLRP and the complements environment, \(\exists t_{i}^{*}\) such that for all types \(\theta _{i}<t_{i}^{*}\) of L1 agent i, he distorts his report downwards if \(\Phi \succ F\) and upwards if \(F\succ \Phi \).

  (b) Under A1–A4.1, MLRP and the substitutes environment, \(\exists t_{i}^{*}\) such that for all types \(\theta _{i}>t_{i}^{*}\) of L1 agent i, he distorts his report downwards if \(\Phi \succ F\) and upwards if \(F\succ \Phi \).

Proposition 4

  (a) Under A1–A4.2, MLRP and the complements environment, \(\exists t_{i}^{*}\) such that for all types \(\theta _{i}>t_{i}^{*}\) of L1 agent i, he distorts his report downwards if \(\Phi \succ F\) and upwards if \(F\succ \Phi \).

  (b) Under A1–A4.2, MLRP and the substitutes environment, \(\exists t_{i}^{*}\) such that for all types \(\theta _{i}<t_{i}^{*}\) of L1 agent i, he distorts his report downwards if \(\Phi \succ F\) and upwards if \(F\succ \Phi \).

Propositions 3 and 4 make four distinct claims. Consider the first claim, for example: if high types tend to have high valuations (A4.1, positive single crossing) and the efficient social choice rule is more sensitive to i’s type when j’s type is high (i.e., types are complements), then low-valuation agents will tend to misreport their types so as to compensate for the bias in the other agent’s report. This claim is the same as Proposition 2, except that it excludes the range of valuations above a threshold. In the neutral case, if the distributions are ordered by first-order stochastic dominance, an L1 displays compensating behavior: L1 systematically under- or over-reports, regardless of whether his true type is high or low. In a non-neutral case this is different. When types are complements or substitutes, L1’s own misreporting changes how sensitive the mechanism is to L0’s misreporting, and this effect can be strong in the extreme ranges of L1’s type. Therefore L1’s strategy of compensating report bias has a further, indirect effect on the allocation choice. For this reason, both Propositions 3 and 4 include only the type ranges that correspond to a low enough sensitivity of the social choice rule to the other agent’s report. Types in the low-sensitivity regions display the compensating behavior, similar to our benchmark result in Proposition 2.

Intuitively, the exclusion of some types in Propositions 3 and 4 can be understood as follows. Consider the more intuitive case of positive single crossing (A4.1). Suppose the L1 agent’s type is high, so that he prefers a high level of the public good, and types are complements. Then compensatory under-reporting makes the choice rule less responsive to the opponent’s over-reporting and thus may lead to an allocation that is too low for his preferences. On the other hand, compensatory over-reporting makes the choice rule more responsive to the opponent’s under-reporting and thus, again, may lead to a choice of allocation that is too low. Suppose now that the agent’s type is low, so that he prefers a low level of the public good, and types are substitutes, as in the example given at the beginning of this section. In the example, the choice rule does not respond to the opponent’s under-reporting, so if the agent over-reports his type he increases the probability that the project is undertaken, which is against his private interest. Therefore, the reaction of the choice rule to the opponent’s report determines whether the compensating bias is a profitable strategy.

4 Adjusting the mechanism


Our analysis so far has assumed that the principal is unaware of the Lk environment. In other words, the principal implements the allocation and transfers as if the agents were infinitely rational. But what if the principal knows that the agents conduct only a finite number of best-reply iterations? How can he adjust the mechanism and achieve efficiency in this case? This section discusses this question. The answer depends critically on the principal’s information about the setting. If the characteristics of the stochastic setting (the type distribution F, the distribution of random actions \(\Phi \), and the Lk identity of every agent) are known, then the principal can achieve efficiency by adjusting the incentive transfer. However, if some of that information is missing, the principal has to expand the mechanism.

4.1 Known environment \((F,\Phi ,k)\)

When \(F,\Phi \), and \(k_{i}\) for all \(i\in I\) are known, the principal’s response to the Lk environment is to adjust the incentive transfers accordingly. Knowing that L1 agents expect their opponents to behave non-strategically according to the distribution \(\Phi \), the principal assigns the following transfer to any L1 agent:

$$\begin{aligned} t_{i}^{(1)}\left( s_{i}\right) =\mathbb {E}_{s_{-i}^{(0)}}\sum \limits _{j\ne i}v_{j}\left( x^{*}\left( s_{i},s_{-i}^{(0)}\right) ;s_{j}^{(0)}\right) \end{aligned}$$
(10)

The expectation in (10) is taken over the L0 strategies \(s_{-i}^{(0)}\), as opposed to the type distribution used in the original AGV mechanism.

The incentive transfer to all higher-level agents (\(k\ge 2\)) remains unchanged relative to the original AGV mechanism:

$$\begin{aligned} t_{i}^{(>1)}\left( s_{i}\right) =\mathbb {E}_{\theta _{-i}}\sum \limits _{j\ne i}v_{j}\left( x^{*}\left( s_{i},\theta _{-i}\right) ;\theta _{j}\right) \end{aligned}$$
(11)

Let AGVk \((F,\Phi )\) refer to the AGV mechanism with the transfers in Eqs. (10) and (11).

Lemma 2

Any Lk player \((k\ge 1)\) is truthful in AGVk \((F,\Phi )\).

Proof

Facing transfer (10), any L1 agent reports his type truthfully, since \(s_{i}=\theta _{i}\) solves the utility maximization problem:

$$\begin{aligned} \underset{s_{i}\in \Theta }{\max }\,\mathbb {E}_{s_{-i}^{(0)}}\left[ v_{i}\left( x^{*}\left( s_{i},s_{-i}^{(0)}\right) ;\theta _{i}\right) +\sum \limits _{j\ne i}v_{j}\left( x^{*}\left( s_{i},s_{-i}^{(0)}\right) ;s_{j}^{(0)}\right) \right] . \end{aligned}$$
(12)

Provided that L1s receive transfers that make them reveal their types, L2s hold a belief over the reports that coincides with F, the distribution of types. As in the Bayes–Nash equilibrium of the standard AGV mechanism, an L2 agent best replies to the incentives by reporting his type truthfully. By induction, truthfulness extends to all subsequent levels that face the standard AGV transfer (11). The induction relies on the fact that \(L(k+1)\) agents believe that Lk agents best reply to \(L(k-1)\), who in turn best reply to \(L(k-2)\), and so on down to L1. \(\square \)

Therefore, in the case where the stochastic Lk environment is known, the principal can implement the efficient allocation by changing the transfer to L1 agents only. As before, ex post budget balance is achieved through an additional term that is independent of agent i’s own report \(s_{i}\): \(T_{i}\left( s\right) =t_{i}\left( s_{i}\right) -\frac{1}{n-1}\sum \nolimits _{j\ne i}t_{j}\left( s_{j}\right) \).
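A Monte Carlo sketch of Lemma 2 in the quadratic example of Sect. 3 (our illustration; the parameter values, the uniform \(\Phi \) on [-1, 0], and the search grid are hypothetical). Under the adjusted transfer (10), the L1 agent's perceived payoff is total welfare evaluated at \((s_{i},s_{-i}^{(0)})\), so the grid-search optimum sits at the true type even though \(\Phi \) is far from F:

```python
import numpy as np

# L1's problem (12) in the quadratic example under the adjusted transfer (10):
# the perceived payoff is total welfare at (s_i, s_{-i}^0), so truth-telling
# should be (approximately) optimal for any Phi.
rng = np.random.default_rng(0)
n, theta_i = 3, 0.8
s0 = rng.uniform(-1.0, 0.0, size=(100_000, n - 1))  # L0 draws; Phi far from F

def l1_payoff(s_i):
    x = (s_i + s0.sum(axis=1)) / n                   # x*(s_i, s_{-i}^0)
    own = theta_i * x - x**2 / 2                     # v_i(x; theta_i)
    transfer = (s0 * x[:, None] - x[:, None]**2 / 2).sum(axis=1)  # Eq. (10)
    return (own + transfer).mean()

grid = np.linspace(-1.0, 1.0, 401)
print(grid[np.argmax([l1_payoff(s) for s in grid])])  # ~ 0.8 = theta_i
```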

4.2 Unknown environment

The construction of transfers (10) and (11) relies on the principal’s knowledge of the distributions \(\Phi \) and F, respectively. The assignment of transfers to agents relies on the knowledge of the levels \(k_{i}\) for \(i\in I\). If any part of this information is not available to the principal, he has to elicit it from the agents. Unfortunately, there is little hope of getting the information “for free”. Suppose that the principal knew he was facing an L1 agent i and asked him to report \(\Phi \). The agent would benefit from misrepresenting \(\Phi \), as it determines his incentive transfer (10). For example, in the quadratic utility case (Sect. 3) the agent gains in \(\Phi \)-expected externality if \(\Phi \) is such that the other agents’ preferences are very similar to his own preference report \({\hat{\theta }}_{i}\). In the extreme case, the agent reports a degenerate distribution \(\Phi \) with a mass point at \({\hat{\theta }}_{i}\). Asking an L2 agent to report \(\Phi \) would not result in truthful elicitation either. Contrary to the L1 case, misreporting \(\Phi \) does not affect L2’s incentive transfer, but it does affect his expectation of the resulting allocation choice. Since an L2 believes that the others are L1s, he also believes that their type reports can be manipulated by falsely reporting \(\Phi \). Furthermore, since L2 believes that he pays a fraction \(\frac{1}{n-1}\) of the L1s’ total incentive transfers as part of the budget-balancing scheme, his report of \(\Phi \) also affects his monetary gain in the mechanism. These considerations illustrate the need for a proper elicitation mechanism.

Let \(P_{i}\) denote agent i’s true belief about agent \((i+1)\)’s movesFootnote 18 and \(\hat{P}_{i}\) denote the reported belief. We assume, for simplicity, that beliefs are differentiable. Observe that \(P_{i}=\Phi \) if \(k_{i}=1\); however, if \(k_{i}\ge 2\), then \(P_{i}=F\) under the assumption of truth-telling Lk. Neither F, \(\Phi \), nor the levels k are known to the principal.

Consider the following two-stage AGVk (TS-AGV k) mechanism:

  • Stage 1  Agent i reports \(\hat{P}_{i}\).Footnote 19

  • Stage 2  First-stage reports pin down the transfer schedule, and agent i reports his type \(\hat{\theta }_{i}\).

The principal implements the efficient allocation (3) and pays the transfer:Footnote 20

$$\begin{aligned} t_{i}+b_{i}-\frac{1}{n-1}\sum \limits _{l\ne i}t_{l}-b_{i+1}, \end{aligned}$$
(13)

where \(t_{i}=t_{i}\left( s_{i}\right) =\mathbb {E}\sum \nolimits _{j\ne i}v_{j}\left( x^{*}\left( s_{i},s_{-i}\right) ;s_{j}\right) \) and the expectation over \(s_{-i}\) is taken with respect to \(\hat{P}_{i}\) (the incentive part); \(b_{i}=b_{i}\left( \hat{p}_{i}\left( s_{i+1}\right) \right) =\lambda \ln \hat{p}_{i}\left( s_{i+1}\right) \) (the proper scoring, or betting, part), where \(\lambda \) is a scalar and \(\hat{p}_{i}\left( s_{i+1}\right) =\frac{\partial }{\partial s_{i+1}}\hat{P}_{i}\left( s_{i+1}\right) \).Footnote 21 Note that, compared to the standard AGV mechanism, the budget-balancing part in TS-AGVk includes an extra term \(-b_{i+1}\) to balance the betting rewards.
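In code, the pieces of (13) fit together as follows (a schematic sketch; `t`, `p_hat`, `s`, and the circular indexing are our placeholder conventions, with agent i betting on agent i+1 modulo n):

```python
import math

# Transfer (13) for agent i in TS-AGVk, assembled from its parts -- a sketch;
# t[l] is the incentive part under the reported belief P_hat_l, and p_hat[l]
# is the reported density of agent l, callable at the next agent's report.
def transfer_i(i, t, p_hat, s, lam, n):
    nxt = (i + 1) % n                        # agent i bets on agent i+1
    b_i = lam * math.log(p_hat[i](s[nxt]))   # betting (log scoring) reward
    b_next = lam * math.log(p_hat[nxt](s[(nxt + 1) % n]))  # i+1's reward
    balance = sum(t[l] for l in range(n) if l != i) / (n - 1)
    return t[i] + b_i - balance - b_next     # Eq. (13)
```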

Lemma 3

For any \(\varepsilon >0\) there exists \(\lambda >0\) in the TS-AGVk mechanism with \(n>2\), such that truth-telling is \(\varepsilon \)-optimal for an Lk agent i, given that \(I\setminus \{i\}\) tell the truth.Footnote 22

Under the assumption that all other agents tell the truth, the lemma states that no Lk agent can gain more than \(\varepsilon \) by lying to the principal if the betting transfer is appropriately scaled. The proof is given in the “Appendix”. It relies on the observation that the expected betting transfer \(\mathbb {E}b_{i}\left( s_{i+1};\hat{p}_{i}\right) =\lambda \int _{\Theta }\ln \hat{p}_{i}\left( s_{i+1}\right) dP_{i}\left( s_{i+1}\right) \) is maximized at \(\hat{p}_{i}\equiv p_{i}\) (Good 1952). However, since the report \(\hat{p}_{i}\) also affects i’s incentive transfer \(t_{i}\left( \cdot \right) \), the loss in the betting reward has to be scaled up sufficiently to nullify any gain that i may achieve from changing the allocation and \(t_{i}\left( \cdot \right) \) by misreporting \(p_{i}\) and \(\theta _{i}\).
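The key step behind the logarithmic scoring rule can be spelled out explicitly (a standard argument, added here for completeness): for any reported density \(\hat{p}_{i}\),

$$\begin{aligned} \mathbb {E}\,b_{i}\left( s_{i+1};p_{i}\right) -\mathbb {E}\,b_{i}\left( s_{i+1};\hat{p}_{i}\right) =\lambda \int _{\Theta }\ln \frac{p_{i}\left( s_{i+1}\right) }{\hat{p}_{i}\left( s_{i+1}\right) }\,dP_{i}\left( s_{i+1}\right) \ge 0, \end{aligned}$$

where the integral is the Kullback–Leibler divergence between \(p_{i}\) and \(\hat{p}_{i}\); it is non-negative by Jensen’s inequality and vanishes only if \(\hat{p}_{i}=p_{i}\) almost everywhere. Scaling \(\lambda \) up therefore makes any misreport of the belief costly relative to the bounded gain from distorting the allocation and \(t_{i}\left( \cdot \right) \).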

Remark

TS-AGVk does not rely on the knowledge that the underlying model is Lk. Specifically, the transfers are constructed to induce truth-telling as the best response of an agent with arbitrary beliefs, not necessarily an Lk agent. In contrast, the mechanisms introduced below are tailored to the particular setting of Lk and are therefore less robust to a change of environment.Footnote 23

If F and \(\Phi \) are known but the levels k are unknown, then the first stage of the mechanism above can be simplified. Here, we use the fact that in the Lk model, agent i’s level \(k_{i}\) can be inferred from his belief about another agent’s level \(k_{j}\), \(j\ne i\). At the first stage of TS-AGVk \((F,\Phi )\), the principal asks each agent to guess the level of another participant. To fix ideas, let agent 1 report on \(k_{2}\), agent 2 report on \(k_{3}\), and so on until agent n, who reports on \(k_{1}\). In the Lk model, agent i’s report \(\hat{k}_{i+1}^{i}\) about agent \(\left( i+1\right) \)’s level is truthful if it is just below the agent’s own level: \(\hat{k}_{i+1}^{i}=k_{i}-1\). The true belief need not be correct (i.e., \(\hat{k}_{i+1}^{i}\) may or may not equal \(k_{i+1}\)); moreover, at least one agent’s belief must be incorrect.

The structure of transfers in TS-AGVk \((F,\Phi )\) is given by (13), where the incentive part \(t_{i}\) is given by (10) if \(\hat{k}_{i+1}^{i}=0\), and by (11) if \(\hat{k}_{i+1}^{i}\ge 1\); the betting transfer \(b_{i}=b_{i}\left( \hat{k}_{i+1}^{i}\right) \) is 0 if \(\hat{k}_{i+1}^{i}=k_{i+1}\), and \(-\lambda \) otherwise.

Lemma 4

There exists \(\lambda >0\) in TS-AGVk \((F,\Phi )\) with \(n>2\), such that truth-telling is Lk-optimal for agent \(i\in I\), given that \(I\setminus \{i\}\) tell the truth.

Unlike the TS-AGVk mechanism, TS-AGVk \((F,\Phi )\) with an appropriately chosen “punishment level” \(\lambda \) induces exact truth-telling. This is achieved because the reported levels k take only discrete values \((0,1,2,\ldots )\).

If F and the levels k are known but \(\Phi \) is unknown, then we can exploit the fact that \(\Phi \) is common knowledge among the agents. The principal can use a shoot-the-liar protocol, asking the agents to report \(\Phi \) and punishing them if there is no unanimity. In this mechanism, reporting \(\Phi \) truthfully is a best reply to the residual profile of truthful reports. However, truth-telling is not the unique solution. Establishing uniqueness could involve using “nuisance” strategies, as in Maskin (1985), or additional stages, as in Moore and Repullo (1988).

5 Conclusion

The idea of relaxing the pervasive common knowledge assumption, often referred to as the Wilson doctrine, has motivated much recent research in mechanism design. Significant progress has been made in studying implementation in frameworks approaching the universal type space, where higher-order beliefs are virtually unrestricted.Footnote 24 Kets (2012) extends the notion of type space further to allow for finite depths of reasoning, as in the level-k model. The next natural step for mechanism design is to accommodate this extended notion of type space and search for mechanisms that are robust with respect to changes not only in the structure of beliefs, but also in the depth of reasoning (as mentioned in the discussion, learning to play the mechanism is a related issue). As a first step, this paper studies one of the most influential existing mechanisms, that of d’Aspremont and Gérard-Varet (1979), in the Lk environment.

The AGV mechanism implements the efficient choice rule in Bayes–Nash equilibrium. It is conceptually similar to the Vickrey–Clarke–Groves (VCG) mechanism, which taxes each agent by the amount of the negative externality his preference report exerts on the welfare of the other agents. The VCG mechanism implements the efficient social choice rule in dominant strategies and is hence independent of beliefs.Footnote 25 On the downside, the VCG mechanism fails to satisfy the overall budget constraint. The expected externality mechanism has the advantage of being exactly budget balanced, but this comes at the cost of achieving Bayesian, as opposed to dominant-strategy, implementation. In the light of the Lk model, this is not entirely innocuous.

Using the setup of the Lk model, we start by conducting a positive analysis of the mechanism in the behavioral environment. We show that if there is a systematic difference between the perceived distribution of random L0 actions and that of the true types, then the agents distort their reports at the first level and, by extension, also at higher levels of rationality. We thereby observe compensating behavior of finite-level agents in the AGV mechanism, that is, distorting one’s report in the direction opposite to the opponents’ anticipated bias. This is due to the fact that the AGV mechanism rewards the expected externality, where the expectation is measured with respect to the true types. A simple implication of this result is that the AGV mechanism could use the distribution of random actions, as opposed to types, to achieve truth-telling among Lk agents. Accordingly, we adjust the AGV mechanism by changing the transfer for L1 agents in the case where the principal has sufficient information. Otherwise, we introduce a betting scheme to elicit the agents’ knowledge of the environment, which the principal uses at a subsequent stage to induce truth-telling.

Altogether, our results suggest that the AGV mechanism is fairly robust to the iterative-thinking environment. First, in the truthful-L0 specification there is no distortion of truth-telling or efficiency. Second, if there is a distortion of truth-telling, its sign alternates and its absolute value decreases with k. Therefore, in mixed groups of agents with various levels k the biases partially cancel out and the mechanism’s outcome is close to efficiency. This also implies that, starting from L2, best replies in the cognitive hierarchy model are located within a smaller neighborhood of truth-telling. Third, the mechanism can be adjusted to the Lk framework in a way that maintains its key properties.