1 Introduction

Implementation theory studies which goals a social designer can achieve when agents possess more information than the designer and try strategically to benefit from it. So far, the literature has mostly focused on one-shot implementation, while the existing work on repeated implementation has assumed that the same agents are alive in all periods. However, the current debates about environmental protection, sustainable development and pension reform often involve arguments about intergenerational equity. To study whether intergenerational equity can be achieved, we need a setup that explicitly allows for different generations of agents. Put differently, once we introduce different generations, novel normative issues arise,Footnote 1 and we may want to know whether these norms can be achieved when agents behave strategically. Our goal, therefore, is to study what can and cannot be implemented in a setup with different generations of agents.

We now outline the model. In every period there are n agents alive, and these agents belong to different generations, that is, they have different ages. Every T periods the oldest agent dies and a new agent is born. Thus, each agent lives for exactly nT periods. Agents’ preferences are additively separable over time, and an agent’s current-period preferences depend on his type. The designer’s objective is captured by a social choice function (SCF), which specifies a socially desirable alternative for every period as a function of the types of the agents who are alive in that period. Because the designer never observes the agents’ types, she needs to design a sequence of mechanisms, called a regime, that allows her to elicit this information and, at the same time, to select the desired alternative in every period on the equilibrium path. If such a regime exists, we say that the SCF is repeatedly implementable. Our goal is to characterize the SCFs that are repeatedly implementable. We assume complete information among the agents and use subgame perfect equilibrium (SPE) as our solution concept.

We model generations similarly to the literature on repeated games played by overlapping generations (OLG) of players. For example, Cremer (1986) studies the possibilities of cooperation among players in such games when \(T=1\), while Salant (1991), Kandori (1992) and Smith (1992) derive folk theorems by allowing T to be large. The only difference between those papers and the current model is that they assume that when an agent dies, he is replaced by his exact copy.

The OLG setup can be applied in many contexts. As mentioned, we can study whether intergenerational equity is achievable in the context of sustainable development or pension reform. Cremer (1986) and Kandori (1992) argue that OLG games provide a natural theory of organizations. For example, different workers join and leave a firm at different times, which helps to sustain cooperation among them. From the perspective of implementation theory, we can ask what goals a manager can achieve if she is less informed than the workers. The US Senate is another example of an institution that neatly fits the OLG framework: senators serve six-year terms, and every two years one third of the seats is up for election. Alternatively, we can think of n different institutions (firms, countries) that are run by individuals (managers, heads of state), where, more often than not, the individuals of different institutions are replaced at different times.

We study repeated implementation of SCFs under two opposite assumptions about the agents’ types: when the agents’ types do not change during their lifetime and when new types are drawn every period. Under the former assumption, we obtain the following results. We first show that any SCF that is repeatedly implementable must satisfy a mild condition, which we call OLG-unanimity. It states that if all agents, no matter what their types are, agree on the best alternative, and if by lying they can obtain this alternative in every period, then the SCF must be a constant function that always selects this alternative. Intuitively, it is impossible to incentivize the agents to reveal their true types because they already obtain their highest possible lifetime payoff.

Next, we construct a regime and show that any SCF that satisfies OLG-unanimity is repeatedly implementable if there are at least three agents, \(n\ge 3\), and they live long enough, namely, \(T\ge 3\). Because the agents’ types do not change during their lifetime, a lie about the types need not be challenged immediately; it can also be challenged by later generations. Moreover, the OLG-unanimity condition implies that if an SCF is not a constant function,Footnote 2 then there exists an agent who does not expect his highest possible lifetime payoff. This agent can always be incentivized to challenge the lie of the other agents. This allows us to obtain very permissive results in terms of the SCFs that are implementable. The regime that is used to prove the sufficiency result borrows elements of the canonical mechanism used for one-shot implementation in SPE (Moore and Repullo 1988; Abreu and Sen 1990; Vartiainen 2007). We elaborate on the connection to one-shot implementation in SPE in Sect. 3.1.

The results are different when the agents’ types are drawn anew every period. We now show that if an SCF is not generation-efficient in the range, then it is not repeatedly implementable when T is large. Generation-efficiency in the range extends the concept of efficiency in the range, introduced by Lee and Sabourian (2011), to the OLG setup. It requires that there exist no SCF whose range is contained in the range of the SCF being implemented but which gives a higher expected payoff to every agent. The intuition for the result is as follows. If an SCF is not generation-efficient in the range, then by lying the agents can obtain higher payoffs. Also, because types now change every period, any lie must be challenged and tested immediately. This, however, imposes a limit on the size of the reward that the challenging agent can receive (otherwise he would also challenge when the others are not lying). Finally, for large enough T, the gains from sticking to the lie exceed this reward. To show this result, we build on the aforementioned papers that prove folk theorems for OLG games and, specifically, on Smith (1992).

Repeated implementation has been studied by Kalai and Ledyard (1998), Chambers (2004), Lee and Sabourian (2011), Mezzetti and Renou (2017) and Azacis and Vida (2019).Footnote 3 However, in all these papers, the same set of agents is alive in all periods. Thus, these papers cannot consider the implementation of SCFs that capture intergenerational social choice considerations. Further, Kalai and Ledyard (1998) and Chambers (2004) assume that the state of the world is drawn only once and is kept fixed for all periods. Hence, their setup corresponds to the case in our model in which the type of an agent is kept fixed throughout his lifetime. Like us, they obtain very permissive results. On the other hand, Lee and Sabourian (2011), Mezzetti and Renou (2017) and Azacis and Vida (2019) assume, in a model with discounting, that a new state is drawn in each period. While the last two papers study what can be implemented for a given discount factor, Lee and Sabourian (2011) show that only SCFs that are efficient in the range can be repeatedly implementable when the discount factor is sufficiently high. Thus, we provide a counterpart to their result in the OLG setup.

The rest of the paper is organized as follows. Section 2 formally describes the model. Section 3 deals with the case when the agents’ types do not change during their lifetime, while Sect. 4 deals with the case when new types are drawn every period. Appendix A contains all the proofs.

2 The model

We consider a setup with overlapping generations of agents. Each generation consists of a single agent. Each agent lives for exactly nT periods, where n and T are two positive integers. Let \(\mathbb {Z}\) be the set of integers equal to or greater than \(-n+1\). The agent of generation \(z\in \mathbb {Z}\), or agent z for short, is born at the beginning of period zT and dies at the end of period \((z+n)T-1\). Hence, there are exactly n agents alive in any period.
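The demographic bookkeeping can be summarized in a short sketch (the code and its helper names are ours, not part of the model): since agent z is alive exactly in periods \(zT,\ldots ,(z+n)T-1\), the set of agents alive in period t is \(\{\lfloor t/T\rfloor -n+1,\ldots ,\lfloor t/T\rfloor \}\), and one agent is replaced every T periods.

```python
# A minimal sketch of the OLG demographics (helper names are ours).
# Agent z lives in periods zT, ..., (z+n)T - 1, so the agents alive in
# period t are floor(t/T) - n + 1, ..., floor(t/T).

def alive(t: int, n: int, T: int) -> list[int]:
    z = t // T                       # the youngest agent alive in period t
    return list(range(z - n + 1, z + 1))

n, T = 3, 4
# Exactly n agents are alive in every period, and the oldest one is
# replaced by a newborn every T periods.
assert all(len(alive(t, n, T)) == n for t in range(5 * T))
print(alive(0, n, T))   # [-2, -1, 0]: agents born before period 0 are alive
print(alive(T, n, T))   # [-1, 0, 1]: agent -2 has died, agent 1 is born
```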

Let A and \(\Theta\) be, respectively, a set of feasible alternatives and a finite set of possible agent types. Both sets remain the same over time. We assume that types are drawn independently and identically across generations according to a probability distribution p such that \(p(\theta )>0\) for each \(\theta \in \Theta\). When types are assumed to be persistent, every agent’s type is drawn only once and remains the same throughout his life. When types are assumed to be non-persistent, a new type is drawn each period for every agent who is alive, independently and identically also across time, according to p. The payoff of agent z in period \(t=zT,\ldots ,(z+n)T-1\) is \(u(a_t,\theta ^t_z)\) if the agent’s type is \(\theta ^t_z\in \Theta\) and alternative \(a_t\in A\) is implemented in that period,Footnote 4 and his lifetime payoff is simply

$$\begin{aligned} \sum _{t=zT}^{(z+n)T-1}u(a_t,\theta ^t_z). \end{aligned}$$

We make the following assumptions about u. First, we assume that the agents have strict preferences over the alternatives for every realization of types:

Assumption A1

\(u(a,\theta )\ne u(b,\theta )\) for all \(a,b\in A\) such that \(a\ne b\), and for all \(\theta \in \Theta\).

Second, we assume that a change in the state leads to a change in the ordinal preferences:

Assumption A2

For every \(\theta ,\phi \in \Theta\) such that \(\theta \ne \phi\), there exists a pair \(a,b\in A\) such that \(u(a,\theta )>u(b,\theta )\) and \(u(a,\phi )<u(b,\phi )\).

Later it will be convenient to write the pair \((a,b)\) as \((a(\theta ,\phi ),b(\theta ,\phi ))\) with the convention that \(a(\theta ,\phi )\) is more desirable in state \(\theta\) and \(b(\theta ,\phi )\) is more desirable in state \(\phi\). Finally, we assume that the payoffs are bounded: \(\inf _{(a,\theta )\in A\times \Theta }u(a,\theta )\) and \(\sup _{(a,\theta )\in A\times \Theta }u(a,\theta )\) are finite.

We consider the implementation of socially desirable alternatives from period 0 onwards. In period t, a social choice function f assigns an alternative in A as a function of the types of all agents who are alive in that period, that is, \(f(\theta ^t_{z-n+1},\ldots ,\theta ^t_z)\in A\), where \(z=\lfloor t/T\rfloor\) is the integer part of t/T. Note that the first argument of f denotes the type of the oldest agent who is alive in that period, the second argument denotes the type of the second-oldest agent who is alive, and so on. Also note that f is assumed to be time independent. That is, \(f(\theta ^t_{z-n+1},\ldots ,\theta ^t_z)=f(\theta ^{\tau }_{k-n+1},\ldots ,\theta ^{\tau }_k)\) if \((\theta ^t_{z-n+1},\ldots ,\theta ^t_z)=(\theta ^{\tau }_{k-n+1},\ldots ,\theta ^{\tau }_k)\) for any z and k and any t and \(\tau\). Therefore, to describe f, it is enough to specify how f depends on the types of agents \(1,\ldots ,n\).

A mechanism consists of messages that the agents can announce and an outcome function that selects a feasible alternative as a function of these messages. Let the message space of every agent in every period be M. This is without loss of generality, since we can always choose M to be sufficiently large. Hence, we can associate every mechanism with its outcome function. We restrict attention to deterministic mechanisms in which the agents announce their messages simultaneously. Let G be the set of all feasible mechanisms or, equivalently, outcome functions, with a typical element g. Thus, given \(g\in G\) and \(m\in M^n\), alternative \(a=g(m)\in A\) is implemented.

To avoid overcomplicating the notation, we next define histories and other relevant concepts for the case when types are persistent, which is the first case we analyse. When we study the case with non-persistent types, we will clarify the necessary changes in the notation.

Since we now assume that an agent’s type remains the same during his lifetime, we write \(\theta _z\) instead of \(\theta ^t_z\). A history of types at the start of period \(t\ge 0\) is \(\zeta _t=(\theta _{-n+1},\ldots ,\theta _{0},\ldots ,\theta _{\lfloor t/T\rfloor })\), while a history of messages at the start of period \(t>0\) is \(\mu _t=(m^0,\ldots ,m^{t-1})\), where \(m^t=(m^t_{\lfloor t/T\rfloor -n+1},\ldots ,m^t_{\lfloor t/T\rfloor })\) is a profile of period t messages with \(m^t_z\) being the message of agent z. A history is \(h_t=(\zeta _t,\mu _t)\) for \(t>0\) and \(h_0=\zeta _0\). Let \(H_t\) denote the space of all period t histories. We assume that the agents who are alive in period t can distinguish between any two period t histories. That is, we are in a complete and perfect information environment. On the other hand, the social designer cannot distinguish between two period t histories \(h_t=(\zeta _t,\mu _t)\) and \(h'_t=(\zeta '_t,\mu '_t)\) if \(\mu _t=\mu '_t\), and \(\zeta _t\) and \(\zeta '_t\) share the same first \(n-1\) elements \(\theta _{-n+1},\ldots ,\theta _{-1}\). That is, we assume that the types of those agents who were born before period 0 are known to the designer.

A regime, r, describes which mechanism is selected after each possible history: \(g_t=r(h_t)\in G\), subject to the restriction that \(r(h_t)=r(h'_t)\) if the designer cannot distinguish between histories \(h_t\) and \(h'_t\). Note that we restrict attention to deterministic regimes. Because of that, and also because the mechanisms are deterministic, we can omit from the description of history \(h_t\) which mechanisms and alternatives have been selected in periods \(0,\ldots ,t-1\). We assume that the designer commits to a regime at the start of period 0 and that the agents know which regime the designer employs.

A pure strategy of agent z, \(s_z\), maps histories into messages: \(s_z(h_t)\in M\) for all \(t=zT,\ldots ,(z+n)T-1\) and \(h_t\in H_t\).Footnote 5 Let \(S_z\) be the space of agent z’s strategies. Let s be a profile of strategies, one strategy for each \(z\in \mathbb {Z}\). Also, let \(s(h_t)=(s_{\lfloor t/T\rfloor -n+1}(h_t),\ldots ,s_{\lfloor t/T\rfloor }(h_t))\). Given \(h_t\) and s, let \(q(h_t|h_t,s)=1\) and for any \(\tau >t\), let

$$\begin{aligned} q(h_{\tau }|h_{t},s)=\begin{cases} q(h_{\tau -1}|h_{t},s) &\text {if } \tau /T\not \in \mathbb {Z},\ \zeta _{\tau }=\zeta _{\tau -1},\ \mu _{\tau }=\left( \mu _{\tau -1},s\left( h_{\tau -1}\right) \right) ,\\ q(h_{\tau -1}|h_{t},s)\, p\left( \theta \right) &\text {if } \tau /T\in \mathbb {Z},\ \zeta _{\tau }=\left( \zeta _{\tau -1},\theta \right) ,\ \mu _{\tau }=\left( \mu _{\tau -1},s\left( h_{\tau -1}\right) \right) ,\\ 0 &\text {otherwise.} \end{cases} \end{aligned}$$

For any \(z\in \mathbb {Z}\), any \(t=zT,\ldots ,(z+n)T-1\), and any \(h_t\), the (expected) payoff of agent z for the rest of his life is

$$\begin{aligned} v_z(s|h_t,r)=\sum _{\tau =t}^{(z+n)T-1}\sum _{h_{\tau }\in H_{\tau }}q(h_{\tau }|h_t,s)u(g_{\tau }(s(h_{\tau })),\theta _z), \end{aligned}$$

where \(\theta _z\) is the \((z+n)\)-th element of \(\zeta _t\) (since the first element of \(\zeta _t\) is \(\theta _{-n+1}\)) and \(g_{\tau }=r(h_{\tau })\).

A strategy profile s is a subgame perfect equilibrium (SPE) of r if for all \(z\in \mathbb {Z}\), all \(t=zT,\ldots ,(z+n)T-1\), all \(h_t\in H_t\), and all \(s'_z\in S_z\), it is true that \(v_z(s|h_t,r)\ge v_z((s'_z,s_{-z})|h_t,r)\). A regime r repeatedly implements f in SPE if the set of SPE is non-empty and for each SPE s, we have that \(g_t(s(h_{t}))=f(\theta _{\lfloor t/T\rfloor -n+1},\ldots ,\theta _{\lfloor t/T\rfloor })\) for all t and \(h_t\) such that \(q(h_t|h_0,s)>0\), where \((\theta _{\lfloor t/T\rfloor -n+1},\ldots ,\theta _{\lfloor t/T\rfloor })\) are the last n elements of \(\zeta _t\). f is repeatedly implementable in SPE if there exists an r that repeatedly implements it in SPE.

3 Persistent types

We start with an example that illustrates the results for the case when agents’ types do not change during their lifetime.

Example 1

Let \(n=3\), \(T\ge 3\), \(A=\{a,b,c,d\}\), \(\Theta =\{\theta ,\theta '\}\) with \(p(\theta )=\frac{1}{2}\). The per-period payoffs are given in the following table:

|   | \(u\left( \cdot ,\theta \right)\) | \(u\left( \cdot ,\theta ^{\prime }\right)\) |
|---|---|---|
| a | 0 | 0 |
| b | 1 | 2 |
| c | 2 | 1 |
| d | 3 | 3 |

The results of this section will show the following:

  • Any f that never selects the best alternative d is repeatedly implementable.

  • If f selects d for some but not all type profiles, then it might not be implementable. For example, let \(f(\theta _1,\theta _2,\theta _3)=d\) when \((\theta _1,\theta _2,\theta _3)=(\theta ,\theta ',\theta )\) or \((\theta _1,\theta _2,\theta _3)=(\theta ',\theta ,\theta ')\), and \(f(\theta _1,\theta _2,\theta _3)=a\) otherwise.Footnote 6 If the agents, irrespective of their true types, report as if the types of consecutive agents alternate between \(\theta\) and \(\theta '\), they secure d in every period. No agent has an incentive to deviate, and f is not implementable.

  • Even if f selects d only for some type profiles, it might still be implementable. For example, let \(f(\theta _1,\theta _2,\theta _3)=d\) when \((\theta _1,\theta _2,\theta _3)=(\theta ,\theta ',\theta )\) and \(f(\theta _1,\theta _2,\theta _3)=a\) otherwise. This f is implementable because the agents cannot secure d in every period, as the sketch after this list confirms.
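Whether the agents can secure d forever is a reachability question: an infinite sequence of reported types in which every window of n consecutive generations maps to d exists if and only if a certain directed graph on \((n-1)\)-windows has a cycle (this is the pigeonhole observation made after Condition C1 below). A minimal sketch, with our own encoding and helper names, checks the two f's from the last two bullets:

```python
from itertools import product

# A minimal sketch (encoding and names are ours): can the agents secure a
# given alternative in *every* period by misreporting their types?  Such an
# infinite report sequence exists iff the directed graph whose nodes are
# (n-1)-tuples of types, with an edge (x1,...,x_{n-1}) -> (x2,...,x_n)
# whenever f(x1,...,x_n) equals the target, contains a cycle.

n = 3
THETA = ("A", "B")   # stand-ins for theta and theta'

def can_secure(f, target):
    nodes = set(product(THETA, repeat=n - 1))
    succ = {v: [v[1:] + (t,) for t in THETA if f(v + (t,)) == target]
            for v in nodes}
    # A cycle exists iff some nodes survive repeated deletion of nodes
    # that have no surviving successor.
    changed = True
    while changed:
        changed = False
        for v in list(nodes):
            if v in nodes and not any(w in nodes for w in succ[v]):
                nodes.discard(v)
                changed = True
    return bool(nodes)

# Second bullet: d on (A,B,A) and (B,A,B); alternating reports secure d.
f2 = lambda w: "d" if w in {("A", "B", "A"), ("B", "A", "B")} else "a"
# Third bullet: d only on (A,B,A); d cannot be secured forever.
f3 = lambda w: "d" if w == ("A", "B", "A") else "a"

print(can_secure(f2, "d"))   # True  -> this f is not implementable
print(can_secure(f3, "d"))   # False -> this f is implementable
```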

The example suggests that implementation of an SCF fails only if every agent can obtain his maximal lifetime payoff by misrepresenting his type. We now proceed to show this formally. Let \(f(\Theta )=\{a\in A|\exists (\theta _1,\ldots ,\theta _n)\in \Theta ^n \text{ s.t. } f(\theta _1,\ldots ,\theta _n)=a\}\) denote the range of f. Let \(\overline{a}(\theta )=\arg \max _{a\in A}u(a,\theta )\) and \(\underline{a}(\theta )=\arg \min _{a\in A}u(a,\theta )\), which are assumed to exist.Footnote 7 Let \(\mathbb {Z}_+\) be the set of non-negative integers.

Condition C1

(OLG-unanimity) If there exist \(a\in A\) and an infinite sequence of types \(\theta _0,\theta _1,\ldots\) such that

  1. \(\overline{a}(\theta )=a\) for all \(\theta \in \Theta\), and

  2. \(f(\theta _z,\ldots ,\theta _{z+n-1})=a\) for all \(z\in \mathbb {Z}_{+}\),

then \(f(\theta '_1,\ldots ,\theta '_{n})=a\) for all \((\theta '_1,\ldots ,\theta '_{n})\in \Theta ^n\), that is, f is constant.

Condition C1 is a mild unanimity condition, which we call OLG-unanimity. The first premise of the condition says that all types agree on the best alternative, while the second premise ensures that a can be selected by f in every period. In particular, there exists an infinite sequence of types such that if one evaluates f at any n adjacent types in this sequence, the selected alternative is a. Further, the statement of the second premise can be strengthened by noting that this infinite sequence of types consists of repetitions of the same finite sequence. This follows from the fact that \(\Theta\) is finite and f only depends on the types of n consecutive generations. Hence, there must exist k and z in \(\mathbb {Z}_{+}\) with \(k<z\) such that \((\theta _k,\ldots ,\theta _{k+n-1})=(\theta _z,\ldots ,\theta _{z+n-1})\). But then we can construct another sequence that consists of repetitions of \(\theta _k,\ldots ,\theta _{z-1}\).

We claim that if the above two premises hold, then there exists an equilibrium in which a is selected in every period irrespective of the realized types. Hence, if f is repeatedly implementable, then it must be a constant function that selects a for all possible realizations of types. The intuition is simple: a is the best alternative for all types, and the agents can ensure that it is selected in every period by pretending to have types as defined by the sequence \(\theta _0,\theta _1,\ldots\). Clearly, no agent has an incentive to deviate. Therefore, there exists an equilibrium in which a is implemented in every period. Thus, f must be constant. Formally,Footnote 8

Proposition 1

If f is repeatedly implementable in SPE, then it must satisfy OLG-unanimity.

OLG-unanimity is satisfied by most (interesting) SCFs. For example, an SCF that selects an efficient alternative in every period (even in the static sense) trivially satisfies it. Next we establish that OLG-unanimity is not only necessary but also sufficient if there are at least three agents alive at any moment and they live long enough.

Theorem 1

Suppose \(n\ge 3\) and \(T\ge 3\). If f satisfies OLG-unanimity, then it is repeatedly implementable in SPE.

To prove the theorem, we construct a regime and show that it implements f in SPE whenever \(n\ge 3\), \(T\ge 3\), and f satisfies OLG-unanimity. Although the description of the regime is rather involved, it shares similarities with the canonical mechanism that is used for one-shot implementation in SPE.Footnote 9 In a nutshell, in period zT, agents are asked to report the types of the three youngest agents. That is, on the one hand, period zT messages are used to elicit the type of the newborn agent z, but on the other hand, they also allow the agents to confess if they have misreported the types of agents \(z-2\) and \(z-1\) in periods \((z-2)T\) and \((z-1)T\). Further, agent \(z-1\) serves the role of “whistle-blower” in period zT. If he blows the whistle, then periods \(zT+1\) and \(zT+2\) are used to determine whether agent \(z-1\) or other agents are lying. Therefore, we need that \(n\ge 3\) and \(T\ge 3\).Footnote 10

3.1 Discussion

Comparison with one-shot subgame perfect implementation. Moore and Repullo (1988), Abreu and Sen (1990) and Vartiainen (2007) have studied what can be implemented in SPE using dynamic mechanisms, but unlike repeated implementation, an alternative is selected only once at the very end. Despite this difference, the above results can be related to one-shot implementation in SPE. Therefore, we now briefly sketch the problem of one-shot implementation in SPE, which we adapt to make it comparable to our model.

Thus, let the set of agents be \(N=\{1,\ldots ,n\}\), let the set of states be \(\Theta ^n\) with a typical element \(\vec {\theta }=(\theta _1,\ldots ,\theta _n)\), let the set of alternatives be \(\mathcal {A}=A^n\) with a typical element \(\vec {a}=(a_1,\ldots ,a_n)\), and let the payoff function of agent \(z\in N\) be \(u(\vec {a},\theta _z)=\sum _{t=1}^{z}u(a_t,\theta _z)\). The objective of the designer is to implement alternative \(f(\vec {\theta })\in \mathcal {A}\) when the state is \(\vec {\theta }\). To do that, the designer employs a multi-stage mechanism.

Abreu and Sen (1990) show in their Theorem 1 that if f is one-shot implementable in SPE, then it satisfies the following condition with respect to some \(B\subseteq \mathcal {A}\).Footnote 11

Condition C2

For all \((\vec {\theta },\vec {\phi })\in \Theta ^n\times \Theta ^n\) such that \(f(\vec {\theta })\ne f(\vec {\phi })\), there exist a sequence of agents \(z(1),\ldots ,z(K)\in N\) and a sequence of alternatives \(\vec {a}_1,\ldots ,\vec {a}_{K+1}\in B\) with \(\vec {a}_1=f(\vec {\theta })\) such that

  1. \(u(\vec {a}_k,\theta _{z(k)})\ge u(\vec {a}_{k+1},\theta _{z(k)})\) for \(k=1,\ldots ,K\),

  2. \(u(\vec {a}_{K},\phi _{z(K)})<u(\vec {a}_{K+1},\phi _{z(K)})\),

  3. \(\vec {a}_k\ne \arg \max _{\vec {a}\in B}u(\vec {a},\phi _{z(k)})\) for \(k=1,\ldots ,K\).

We can interpret our results in terms of Condition C2. First, one can set \(K=3\) in Condition C2Footnote 12 and choose as z(3) an agent whose type differs in states \(\vec {\theta }\) and \(\vec {\phi }\). Assumption A2 guarantees that this agent has a preference reversal between some \(\vec {a}_3\) and \(\vec {a}_4\), as required by C2.1 and C2.2. Second, we can choose as z(1) any agent for whom \(\vec {a}_1=f(\vec {\theta })\) does not yield his highest possible payoff in state \(\vec {\phi }\), since choosing an agent for whom it does would violate C2.3. In the OLG setup, the existence of such an agent is guaranteed by Condition C1. That is, if f is not a constant function, then Condition C1 is satisfied only if its premises hold vacuously, which then implies that there always exists an agent who does not expect his maximal lifetime payoff. Finally, say, if \(z(1)=n-1\), then we set \(z(2)=n\). In the OLG setup, agent \(n-1\) dies before agent n. Therefore, it is always possible to find \(\vec {a}_2\) such that \(u(f(\vec {\theta }),\theta _{n-1})\ge u(\vec {a}_{2},\theta _{n-1})\) but \(\vec {a}_2\) is not the worst outcome for agent n in state \(\vec {\theta }\). Because of the latter, it is also always possible to find \(\vec {a}_3\) and \(\vec {a}_4\) that satisfy C2.1 and C2.2. For example, if \(f(\vec {\theta })=(a_1,\ldots ,a_{n-1},a_{n})\), then we can set \(\vec {a}_{2}=(a_1,\ldots ,a_{n-1},\overline{a}(\theta _{n}))\), \(\vec {a}_{3}=(a(\theta _{z(3)},\phi _{z(3)}),a_1,\ldots ,a_{n-1})\), and \(\vec {a}_{4}=(b(\theta _{z(3)},\phi _{z(3)}),a_1,\ldots ,a_{n-1})\).Footnote 13

Relaxing Assumption A1. If we allow for weak preferences, the OLG-unanimity condition remains necessary in its current form as long as the best alternative \(\overline{a}(\theta )\) is unique for all \(\theta\). The proof of Theorem 1 also remains valid if \(\overline{a}(\theta )\) is unique for all \(\theta\) and, additionally, there exist two alternatives \(c,d\in A\) such that no type is indifferent between them.

Relaxing Assumption A2. If we allow T to be sufficiently large, we can replace Assumption A2 with the following one:

Assumption A3

For every \(\theta ,\phi \in \Theta\), there do not exist \(\alpha >0\) and \(\beta\) such that \(u(a,\theta )=\alpha u(a,\phi )+\beta\) for all \(a\in A\).

Assumption A3 says that for every pair \(\theta ,\phi \in \Theta\), we can always find two lotteries over the alternatives in A (where the probabilities are rational numbers) such that type \(\theta\) prefers one lottery over the other, while the opposite holds for type \(\phi\). With each lottery we can associate a (finite) sequence of alternatives in which the frequency of each alternative equals the probability with which this alternative is chosen in the corresponding lottery. Then, we can replace the alternatives \(a(\theta ,\phi )\) and \(b(\theta ,\phi )\) that we use in the proof of Theorem 1 with the constructed sequences.
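As a concrete illustration (the payoffs here are hypothetical and serve only to illustrate the construction), let \(A=\{a,b,c\}\) with \(u(a,\theta )=0\), \(u(b,\theta )=1\), \(u(c,\theta )=3\) and \(u(a,\phi )=0\), \(u(b,\phi )=2\), \(u(c,\phi )=3\). Both types rank the alternatives identically, so Assumption A2 fails, but no \(\alpha >0\) and \(\beta\) satisfy \(u(a,\theta )=\alpha u(a,\phi )+\beta\) for all \(a\in A\), so Assumption A3 holds. Comparing the lottery \(\frac{1}{2}a+\frac{1}{2}c\) with b for sure,

$$\begin{aligned} \tfrac{1}{2}u(a,\theta )+\tfrac{1}{2}u(c,\theta )=\tfrac{3}{2}>1=u(b,\theta ),\qquad \tfrac{1}{2}u(a,\phi )+\tfrac{1}{2}u(c,\phi )=\tfrac{3}{2}<2=u(b,\phi ), \end{aligned}$$

so the two-period sequence (a, c) can play the role of \(a(\theta ,\phi )\) and the sequence (b, b) the role of \(b(\theta ,\phi )\).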

(Re)starting implementation. It is assumed that the designer knows the types of agents \(-n+1,\ldots ,-1\). The following example shows that it might not be possible to implement an SCF (from period 0) if the designer does not know their types.

Example 2

Let \(n=3\), \(T\ge 3\), \(A=\{a,b,c\}\), \(\Theta =\{\theta ,\theta ',\theta ''\}\) with \(p(\theta )=p(\theta ')=\frac{1}{3}\). The per-period payoffs are given in the following table:

|   | \(u\left( \cdot ,\theta \right)\) | \(u\left( \cdot ,\theta ^{\prime }\right)\) | \(u\left( \cdot ,\theta ^{\prime \prime }\right)\) |
|---|---|---|---|
| a | 0 | 1 | 2 |
| b | 1 | 0 | 1 |
| c | 2 | 2 | 0 |

Let \(f(\theta _1,\theta _2,\theta _3)=c\) if \(\theta _1=\theta\) and \(f(\theta _1,\theta _2,\theta _3)=a\) otherwise. f satisfies OLG-unanimity because there is no alternative that is best for all types. Therefore, f is implementable if the designer knows the types of agents \(-2\) and \(-1\).

Suppose now that the designer does not know their types. Also, suppose the types of agents \(-2\), \(-1\), and 0 are \(\theta '\), \(\theta\), and \(\theta\), respectively. If f is implemented, then alternative a is selected during periods \(0,\ldots ,T-1\) and alternative c during periods \(T,\ldots ,3T-1\). However, if agents \(-2\), \(-1\), and 0 behave as if they all were of type \(\theta\), then they can obtain alternative c during periods \(0,\ldots ,3T-1\). None of these agents can be incentivized to deviate from this deception since they are getting their best alternative for the rest of their lives. Agent 1 might have incentives to challenge the lie, but by the time he is born, agent \(-2\) has already passed away. Thus, we conclude that f cannot be implemented from period 0 if the designer does not know the types of agents \(-2\) and \(-1\).

However, implementation is still possible starting from period \((n-1)T\), that is, once agents \(-n+1,\ldots ,-1\) have passed away. To see this, fix arbitrary types for these agents. We also slightly modify the regime in the proof of Theorem 1. As mentioned before, the regime allows agents in period zT to confess that they have previously lied about the types of agents \(z-2\) and \(z-1\). The regime also allows agent \(z-1\) to challenge the types of agents \(z-2,z-1,z\) that are reported by the other agents in period zT. Because the types of agents \(-2\) and \(-1\) are arbitrarily fixed, we must preclude the agents from confessing about, or challenging, these types. With this modification, it is still true that in any equilibrium, the types of agents \(0,1,\ldots\) are reported truthfully and, hence, f is implemented from period \((n-1)T\) onwards.

The regime also has the property that once a deviation from the equilibrium play has occurred, a predetermined, infinite sequence of alternatives is implemented. This can be costly from the designer’s perspective. However, similarly to the previous paragraph, we can restart the implementation once the agents who were alive at the time of the deviation have passed away. Thus, the regime can be made robust to mistakes by the agents.

Less than perfect information. We can relax the assumption that the agents who are alive in period t can distinguish between any two period t histories. It is enough if the type of agent z is common knowledge only among the agents who are alive in periods \(zT,(z+1)T,(z+2)T\), as these are the periods when his type is elicited by the regime in the proof of Theorem 1. Further, it is enough if only period zT and \(zT+1\) messages are observable for all \(z\in \mathbb {Z}_+\), and only by the agents who are alive in period zT. With such less than perfect information, Theorem 1 remains true if, instead of SPE, we use extended subgame perfect equilibrium as the solution concept (for the definition, see page 877 in Kreps and Wilson 1982).Footnote 14

4 Non-persistent types

Now we consider the other extreme, in which a new type is drawn for every agent in every period of his lifetime. Specifically, we assume that the types are drawn independently and identically both across periods and across agents according to p. Also, we maintain Assumption A1, while Assumption A2 is not needed for the results of this section.

Lee and Sabourian (2011) consider a setup with infinitely-lived agents and payoff discounting and show in their Theorem 1 that when new types are drawn every period, any SCF that is not weakly efficient in the range cannot be repeatedly implemented if the agents are sufficiently patient. We establish a similar negative result in the OLG setup when agents are sufficiently long-lived.

To show this result, we modify some of the notation and introduce some new notation. Since past realizations of types now reveal nothing about the current period’s types, it is convenient to denote the oldest agent in the current period as agent 1, the second-oldest agent as agent 2, and so on, up to the youngest agent, who is denoted agent n. Let \(N=\{1,\ldots ,n\}\). If f is implemented, then the ex ante (that is, before the agent knows his type) per-period payoff of agent \(i\in N\) isFootnote 15

$$\begin{aligned} u_i(f)=\sum _{(\theta _1,\ldots ,\theta _n)\in \Theta ^n}\prod _{j=1}^{n}p(\theta _j)u(f(\theta _1,\ldots ,\theta _n),\theta _i). \end{aligned}$$

Note that every agent’s ex ante lifetime payoff is \(T\cdot \sum _{i=1}^{n}u_i(f)\).

Let \(\delta :\Theta ^n\rightarrow \Theta ^n\) be a deterministic deception where \(\delta (\theta _1,\ldots ,\theta _n)=(\theta '_1,\ldots ,\theta '_n)\) says that the agents act as if their types were \((\theta '_1,\ldots ,\theta '_n)\), although their true types are \((\theta _1,\ldots ,\theta _n)\). Let the space of all deterministic deceptions be D. Also, let \(f\circ \delta\) stand for the composite function of f and \(\delta\): \(f\circ \delta (\theta _1,\ldots ,\theta _n)=f(\delta (\theta _1,\ldots ,\theta _n))\) for all \((\theta _1,\ldots ,\theta _n)\in \Theta ^n\). Finally, let \(\tilde{\delta }\) be a random deception that selects a deterministic deception \(\delta\) with probability \(\tilde{\delta }(\delta )\) and let \(\Delta\) be the space of all random deceptions. The payoff of agent \(i\in N\) from a deception \(\tilde{\delta }\) is \(\sum _{\delta \in D}\tilde{\delta }(\delta )u_i(f\circ \delta )\), which we denote as \(u_i(f\circ \tilde{\delta })\).

Definition 1

A social choice function f is weakly efficient in the range if there does not exist \(\tilde{\delta }\in \Delta\) such that \(u_i(f\circ \tilde{\delta })>u_i(f)\) for all \(i\in N\).

This is essentially the definition that Lee and Sabourian (2011) use, except that they convexify the set of payoffs without public randomization. We are, however, going to use a different definition of efficiency. In the setup of Lee and Sabourian (2011), there is no difference between the ex ante per-period payoffs and the average expected lifetime payoffs. This is not the case in the OLG setup. For example, the current second-oldest agent will become the oldest agent once the current oldest agent dies. The right notion of efficiency should take into account the lifetime payoffs. Therefore, we will use the following stronger notion of efficiency (see more discussion after Theorem 2):

Definition 2

A social choice function f is weakly generation-efficient in the range if there does not exist \(\tilde{\delta }\in \Delta\) such that \(\sum _{j=1}^{i}u_j(f\circ \tilde{\delta })>\sum _{j=1}^{i}u_j(f)\) for all \(i\in N\).

To simplify the analysis, we also make the following assumption about SCFs:

Assumption A4

There exist types \(\theta _1\), \((\theta _2,\ldots ,\theta _n)\), and \((\theta '_2,\ldots ,\theta '_n)\) such that \(f(\theta _1,\theta _2,\ldots ,\theta _n)\ne f(\theta _1,\theta '_2,\ldots ,\theta '_n)\).

This assumption rules out SCFs that only vary with the type of the oldest agent. Unlike the setup of Lee and Sabourian (2011), agents live for a finite number of periods in our setup and the oldest agent might not have incentives to go along with a deception in his final periods. However, when f satisfies Assumption A4, such incentives can be provided. For example, if \(u(f(\theta _1,\theta '_2,\ldots ,\theta '_n), \theta _1)>u(f(\theta _1,\theta _2,\ldots ,\theta _n), \theta _1)\), then in state \((\theta _1,\theta _2,\ldots ,\theta _n)\), the agents can behave as if the state was \((\theta _1,\theta '_2,\ldots ,\theta '_n)\).

Theorem 2

Suppose f satisfies Assumption A4 and \(n\ge 2\). If f is not weakly generation-efficient in the range, then there exists \(T^*\) such that for all \(T\ge T^*\), f is not repeatedly implementable in SPE.

Example 1 continued

Let \(f(\theta _1,\theta _2,\theta _3)=b\) when \((\theta _1,\theta _2,\theta _3)=(\theta ',\theta ',\theta ')\) and \(f(\theta _1,\theta _2,\theta _3)=c\) otherwise. This f satisfies Assumption A4 because \(f(\theta ',\theta ',\theta ')=b\ne f(\theta ',\theta ,\theta )=c\). Also, f is not weakly (generation-)efficient in the range. To see this, let \(\delta (\theta ',\theta ',\theta )=\delta (\theta ',\theta ,\theta ')=\delta (\theta ,\theta ',\theta ')=(\theta ',\theta ',\theta ')\) and \(\delta (\theta _1,\theta _2,\theta _3)=(\theta _1,\theta _2,\theta _3)\) for all other configurations of types. With this deception, the alternative that is preferred by the majority of the agents is selected. Then, \(u_i(f\circ \delta )=\frac{14}{8}>u_i(f)=\frac{13}{8}\) for all \(i=1,2,3\); the sketch below verifies these numbers. The theorem says that f is not repeatedly implementable if T is large enough. On the other hand, f satisfies Maskin monotonicityFootnote 16 and, hence, is one-shot implementable in Nash equilibrium.
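A minimal sketch (the encoding is ours) that recomputes the ex ante per-period payoffs \(u_i(f)\) and \(u_i(f\circ \delta )\) from their definition, averaging over the eight equally likely type profiles:

```python
from itertools import product

# Payoffs from Example 1; "t" and "t2" stand for theta and theta'.
u = {("a", "t"): 0, ("a", "t2"): 0, ("b", "t"): 1, ("b", "t2"): 2,
     ("c", "t"): 2, ("c", "t2"): 1, ("d", "t"): 3, ("d", "t2"): 3}

def f(profile):                       # b on (t',t',t'), c otherwise
    return "b" if profile == ("t2", "t2", "t2") else "c"

def delta(profile):                   # the majority deception from the text
    return ("t2",) * 3 if profile.count("t2") >= 2 else profile

def ex_ante(scf, i, n=3):
    """u_i(scf): average payoff of agent i over all type profiles."""
    profiles = list(product(("t", "t2"), repeat=n))
    return sum(u[(scf(p), p[i])] for p in profiles) / len(profiles)

for i in range(3):                    # prints 1.625 (=13/8) and 1.75 (=14/8)
    print(ex_ante(f, i), ex_ante(lambda p: f(delta(p)), i))
```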

To prove the theorem, we borrow ideas from the literature on folk theorems for OLG games and, in particular, from Smith (1992). If f is not weakly generation-efficient in the range, then there exists a deception \(\delta \in \Delta\) such that \(\sum _{j=1}^{i}u_j(f\circ \delta )>\sum _{j=1}^{i}u_j(f)\) for all \(i\in N\). Because of Assumption A4, there also exists a deception \(\gamma\) such that \(u(f(\gamma (\theta _1,\theta _2,\ldots ,\theta _n)), \theta _1)\ge u(f(\theta _1,\theta _2,\ldots ,\theta _n), \theta _1)\) for all \((\theta _1,\theta _2,\ldots ,\theta _n)\), with a strict inequality for at least one \((\theta _1,\theta _2,\ldots ,\theta _n)\). Call the T periods between the births of two consecutive agents an overlap. We construct a strategy profile such that on the equilibrium path, the agents deceive according to \(\delta\) for the first Q periods of each overlap and then deceive according to \(\gamma\) for the remaining \(T-Q\) periods of the overlap. If a deviation from the path ever occurs, then the agents simply stop deceiving from that point on.

We prove in three steps that no agent has an incentive to deviate from the equilibrium path for sufficiently large T. First, the oldest agent has no incentive to deviate during the last \(T-Q\) periods of his life while he is being rewarded through the deception \(\gamma\). Second, any gain that this agent can obtain from deviating in period Q can be outweighed by the reward in the last \(T-Q\) periods when \(T-Q\) is sufficiently large; a stylized version of this comparison is given below. Thus, agent 1 does not want to deviate in period Q. The same is also true for the first \(Q-1\) periods of the overlap because the agents stop deceiving after any deviation and \(u_1(f\circ \delta )>u_1(f)\). Third, we show that no agent \(i=2,\ldots ,n\) wants to deviate in any period of the overlap either when Q is sufficiently large, because \(\sum _{j=1}^{i}u_j(f\circ \delta )>\sum _{j=1}^{i}u_j(f)\) holds.
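A stylized version of the comparison in the second step (our shorthand; the proof in Appendix A accounts for the exact continuation payoffs): let \(\bar{u}\) and \(\underline{u}\) denote the supremum and infimum of the per-period payoffs, and let \(g_{\gamma }>0\) denote agent 1's ex ante per-period gain from \(\gamma\) relative to truth-telling. A deviation by agent 1 in period Q of the overlap yields at most \(\bar{u}-\underline{u}\) immediately and forfeits the reward phase, so it is unprofitable whenever

$$\begin{aligned} \bar{u}-\underline{u}\le (T-Q)\, g_{\gamma }, \end{aligned}$$

which, for a given Q, holds once T is sufficiently large.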

4.1 Discussion

Implementation in Nash equilibrium. Theorem 2 clearly remains valid if we replace SPE with Nash equilibrium as the solution concept. Note that Lee and Sabourian (2011) only prove their result for Nash equilibrium.

Weak efficiency in the range. If there exists \(\delta \in \Delta\) such that \(u_i(f\circ \delta )>u_i(f)\) for all \(i\in N\), then it is also true that \(\sum _{j=1}^{i}u_j(f\circ \delta )>\sum _{j=1}^{i}u_j(f)\) for all \(i\in N\). That is, whenever f is not weakly efficient in the range, it is also not weakly generation-efficient in the range. Therefore, the impossibility result for SCFs that are not weakly generation-efficient in the range covers, in particular, all SCFs that are not weakly efficient in the range. In other words, the result we prove is stronger than the one we would obtain if we only assumed that f is not weakly efficient in the range.

Relaxing Assumption A4. Although Assumption A4 can be relaxed, we claim that we cannot dispense with it completely. Thus, suppose that f only varies with the type of agent 1. Therefore, we now write \(f(\theta _1)\) instead of \(f(\theta _1,\ldots ,\theta _n)\). Also, suppose that f satisfies the following condition:

Condition C3

If \(f(\theta _1)\ne f(\theta '_1)\) for some \(\theta _1,\theta '_1\in \Theta\), then there exists \(a\in A\) such that \(u(f(\theta _1),\theta _1)>u(a,\theta _1)\) and \(u(f(\theta _1),\theta '_1)<u(a,\theta '_1)\).

It is simply the definition of Maskin monotonicity, given that only agent 1 can serve as a whistle-blower.

We now argue that whenever f satisfies Condition C3, one can design a regime such that no deception that results in an undesirable outcome can be supported in equilibrium, even if f is not weakly generation-efficient in the range and T is large.Footnote 17 Thus, suppose that the agents deceive according to some \(\delta\) (where now \(\delta :\Theta ^n\rightarrow \Theta\)) in period \((z+n)T-1\), which is the last period of agent z's life. Because f satisfies Condition C3, we can incentivize agent z to deviate, so the agents cannot deceive in that period. Suppose now that the agents deceive according to some \(\delta\) in period \((z+n)T-2\). The payoff of agent z from the deception is \(u(f(\delta (\vec {\theta })),\theta _1)+u_1(f)\), where \(\vec {\theta }=(\theta _1,\ldots ,\theta _n)\). If he deviates from the deception, his payoff is \(u(a,\theta _1)+v\), where v is his expected payoff in period \((z+n)T-1\). We can always pick \(v=u_1(f)\) and, by Condition C3, find a such that \(u(a,\delta (\vec {\theta }))<u(f(\delta (\vec {\theta })),\delta (\vec {\theta }))\) and \(u(a,\theta _1)>u(f(\delta (\vec {\theta })),\theta _1)\), so that the deviation is profitable. Thus, the agents cannot deceive in period \((z+n)T-2\) either. Working backwards, the same argument applies to periods \((z+n-1)T,\ldots ,(z+n)T-3\). And since z was arbitrary, it follows that the agents will never deceive.

Example 3

Let \(n\ge 2\), \(T\ge 1\), \(A=\{a,b,c,d\}\), \(\Theta =\{\theta ,\theta '\}\) with \(p(\theta )=\frac{1}{2}\). The per-period payoffs are given in the following table:

|   | \(u\left( \cdot ,\theta \right)\) | \(u\left( \cdot ,\theta ' \right)\) |
|---|---|---|
| a | 4 | 6 |
| b | 6 | 4 |
| c | 0 | 8 |
| d | 8 | 0 |

f only depends on the type of agent 1, as follows: \(f(\theta )=a\) and \(f(\theta ')=b\). Then, \(u_1(f)=4\) and \(u_i(f)=5\) for \(i=2,\ldots ,n\). Let \(\delta (\vec {\theta })=\theta '\) if \(\theta _1=\theta\) and \(\delta (\vec {\theta })=\theta\) if \(\theta _1=\theta '\). Then, \(u_1(f\circ \delta )=6\) and \(u_i(f\circ \delta )=5\) for \(i=2,\ldots ,n\). Hence, f is not weakly generation-efficient in the range. Condition C3 is satisfied: 1) \(u(f(\theta ),\theta )>u(c,\theta )\) and \(u(f(\theta ),\theta ')<u(c,\theta ')\); 2) \(u(f(\theta '),\theta ')>u(d,\theta ')\) and \(u(f(\theta '),\theta )<u(d,\theta )\). The discussion in the previous paragraph says that no deception (not just \(\delta\)) can be maintained in equilibrium.Footnote 18 The sketch below verifies these payoff computations.
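A minimal sketch (encoding ours) of the computations; since f depends only on agent 1's type, each average is over at most two independent type draws:

```python
# Payoffs from Example 3; "t" and "t2" stand for theta and theta'.
u = {("a", "t"): 4, ("a", "t2"): 6, ("b", "t"): 6, ("b", "t2"): 4,
     ("c", "t"): 0, ("c", "t2"): 8, ("d", "t"): 8, ("d", "t2"): 0}
THETA = ("t", "t2")
f = {"t": "a", "t2": "b"}             # f(theta) = a, f(theta') = b
swap = {"t": "t2", "t2": "t"}         # the deception delta from the text

# Agent 1's payoff depends only on his own type.
u1_f    = sum(u[(f[t], t)] for t in THETA) / 2         # 4.0
u1_fdel = sum(u[(f[swap[t]], t)] for t in THETA) / 2   # 6.0
# For i >= 2, agent i's type is independent of agent 1's type.
ui_f    = sum(u[(f[t1], ti)] for t1 in THETA for ti in THETA) / 4        # 5.0
ui_fdel = sum(u[(f[swap[t1]], ti)] for t1 in THETA for ti in THETA) / 4  # 5.0
print(u1_f, u1_fdel, ui_f, ui_fdel)   # f is not weakly generation-efficient
```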