# Nonmyopia and incentives in the piecewise linearized MDP procedures with variable step-sizes

DOI: 10.1186/2193-2409-1-5

Cite this article as: Sato, K. Economic Structures (2012) 1: 5. doi:10.1186/2193-2409-1-5

## Abstract

The paper formulates a piecewise linearized version of the procedure developed by Sato (1983) and analyzes its properties. In so doing, Fujigaki’s (1981) private goods economy is extended to involve a public good, and the intertemporal game of Champsaur and Laroque is piecewise localized by dividing the time interval of their game and by using variable step-sizes. The resulting piecewise linearized procedure, called the *λ MDP Procedure*, possesses desirable properties similar to those shared by continuous-time procedures. Under the nonmyopia assumption, each player’s best reply strategy at any discrete date is to reveal his/her anticipated marginal rate of substitution for the public good at the end of the current time interval of the *λ* MDP Procedure.

**JEL Classification:** H41.

### Keywords

incentives; nonmyopia; piecewise linearized procedures; public goods; variable step-sizes

## 1 Introduction

This paper formulates a piecewise linearized version of the procedure developed by Sato ([1983]) and then analyzes its properties. In so doing, Fujigaki’s ([1981]) private good economy is extended to involve a public good. Also, the intertemporal game of Champsaur and Laroque ([1981], [1982]) is piecewise localized by dividing the time interval of their game and by using variable step-sizes for revising the amount of the public good. The resulting procedure possesses desirable features similar to those of a continuous one, i.e., efficiency and incentive compatibility. I employ the idea of modeling agents as having an intermediate time horizon, which differs from previous results on incentives in either continuous or discrete planning procedures.

The MDP Procedure received a lot of attention in the 1970s and 1980s, especially concerning the problem of incentives in planning procedures with public goods, but there has been very little work on it over the last fifteen years. This paper is a follow-up to the literature on the use of such processes as mechanisms for aggregating the decentralized information needed for guiding and financing a public good. The procedures studied here are assessed against five normative conditions:

- (i)
Feasibility

- (ii)
Monotonicity

- (iii)
Pareto Efficiency

- (iv)
Local Strategy Proofness

- (v)
Neutrality.

The procedure to be presented aims at bridging the gap between local and intertemporal games. Our process differs from that of Champsaur, Drèze, and Henry ([1977]) in that the step-sizes for revising the public good are variable at each iteration along the solution paths. Our procedure is also different from that of Green and Schoumaker ([1980]), which requires global information, viz., a part of each player’s indifference curve, to be revealed. Only local information, i.e., the marginal rates of substitution (MRSs) of the players, is required to determine the trajectories of our piecewise intertemporal process. It is verified that the best reply strategy for each player at each discrete date is to reveal his/her anticipated true MRS for the public good at the end of the current time interval, which maximizes each player’s payoff in the piecewise intertemporal incentive game. Thus, our procedure can achieve ‘piecewise intertemporal strategy proofness.’

The remainder of the paper is organized as follows. The next section outlines the general framework. Section 3 reviews the MDP Procedure and introduces the renamed LSP MDP and Generalized MDP Procedures. Section 4 presents a piecewise linearized version of the Generalized MDP Procedures with variable step-sizes and then examines its properties. Section 4 also explores players’ strategic manipulability in the piecewise intertemporal incentive game associated with each time interval of the procedure and presents our theorems. Discussions of myopia and discrete procedures are given in Section 5. The last section provides some final remarks. Proofs of the theorems are given in the Appendix.

## 2 The model

The simplest model incorporating the essential features of the problem proposed in this paper involves two goods, one public good and one private good, whose quantities are represented by *x* and *y*, respectively. Let ${y}_{i}$ denote the amount of the private good allocated to the *i* th consumer. The economy is supposed to possess *N* individuals. Each consumer $i\in \mathbf{N}=\{1,\dots ,N\}$ is characterized by his/her initial endowment of the private good ${\omega}_{i}$ and his/her utility function ${u}_{i}:{\mathbf{R}}_{+}^{2}\to \mathbf{R}$.

The production sector is represented by the transformation function $G:{\mathbf{R}}_{+}\to {\mathbf{R}}_{+}$, where $y=G(x)$ signifies the minimal quantity of the private good needed to produce the public good quantity *x*. It is assumed, as usual, that there is no production of the private good.

The following assumptions and definitions are used throughout this paper.

**Assumption 1** For any $i\in \mathbf{N}$, ${u}_{i}(\cdot ,\cdot )$ is strictly quasi-concave and at least twice continuously differentiable.

**Assumption 2** For any $i\in \mathbf{N}$, ${u}_{ix}(x,{y}_{i})\equiv \partial {u}_{i}(x,{y}_{i})/\partial x\ge 0$, ${u}_{iy}(x,{y}_{i})\equiv \partial {u}_{i}(x,{y}_{i})/\partial {y}_{i}>0$, and ${u}_{i}(0,0)=0$ for any $(x,{y}_{i})$.

**Assumption 3**$G(x)$ is convex and twice continuously differentiable.

The planning center asks each consumer *i* to report his/her marginal rate of substitution between the public good and the private good used as a numéraire.

**Definition 1** An allocation $z=(x,{y}_{1},\dots ,{y}_{N})\in \mathbf{Z}$ is feasible if and only if ${\sum}_{i\in \mathbf{N}}{y}_{i}+G(x)={\sum}_{i\in \mathbf{N}}{\omega}_{i}$.

**Definition 2** An allocation *z* is individually rational if and only if ${u}_{i}(x,{y}_{i})\ge {u}_{i}(0,{\omega}_{i})$ for any $i\in \mathbf{N}$.

**Definition 3** A Pareto optimum for this economy is an allocation ${z}^{\ast}\in \mathbf{Z}$ such that there exists no feasible allocation *z* with ${u}_{i}(z)\ge {u}_{i}({z}^{\ast})$ for all $i\in \mathbf{N}$ and ${u}_{i}(z)>{u}_{i}({z}^{\ast})$ for at least one $i\in \mathbf{N}$.

These assumptions and definitions altogether give us conditions for Pareto optimality in our economy.

**Lemma 1** *Under Assumptions* 1-3, *a necessary and sufficient condition for an allocation to be Pareto optimal is* ${\sum}_{i\in \mathbf{N}}{\pi}_{i}=\gamma$, *where* ${\pi}_{i}\equiv {u}_{ix}/{u}_{iy}$ *is consumer i’s MRS and* $\gamma \equiv {G}^{\prime}(x)$ *is the marginal rate of transformation (MRT)*.

Furthermore, conventional mathematical notation is used throughout in the same manner as in Sato ([2011]). Hereafter all variables are assumed to be functions of time *t*; however, the argument *t* is often omitted unless confusion could arise. The analyses in the following sections bypass the possibility of a boundary problem at $x(t)=0$. This is an innocuous assumption in the single public good case, because *x* is always increasing. The results below can be extended to a model with many public goods.

## 3 The class of MDP Procedures

### 3.1 A brief review of the MDP Procedure and its properties

The MDP Procedure is the best-known member belonging to the family of the quantity-guided procedures, in which the planning center asks individual agents their MRSs between the public good and the private numéraire. Then the center revises an allocation according to the discrepancy between the reported MRSs and the MRT. The relevant information exchanged between the center and the periphery is in the form of quantity. Let $\psi (t)=({\psi}_{1}(t),\dots ,{\psi}_{N}(t))\in {\mathbf{R}}_{+}^{N}$ be a vector of MRSs announced at any iteration $t\in [0,\mathrm{\infty})$ of the procedure. Needless to say, ${\psi}_{i}$ is not necessarily equal to ${\pi}_{i}$, thus, the incentive problem matters.

*The MDP Procedure* reads: for any $t\in [0,\mathrm{\infty})$ and any $i\in \mathbf{N}$,

$\dot{x}(t)=X(\psi (t))={\sum}_{j\in \mathbf{N}}{\psi}_{j}(t)-\gamma (t)$,

${\dot{y}}_{i}(t)=-{\psi}_{i}(t)X(\psi (t))+{\delta}_{i}\{{\sum}_{j\in \mathbf{N}}{\psi}_{j}(t)-\gamma (t)\}X(\psi (t))$.

Here ${\delta}_{i}>0$, $\mathrm{\forall}i\in \mathbf{N}$, with ${\sum}_{i\in \mathbf{N}}{\delta}_{i}=1$, is a distributional coefficient determined by the planner prior to the beginning of the operation of the procedure. Its role is to share among individuals the ‘social surplus,’ $\{{\sum}_{j\in \mathbf{N}}{\psi}_{j}(t)-\gamma (t)\}X(\psi (t))$, which is always positive except at the equilibrium.
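To fix ideas, the revision rule just described can be simulated under truthful reporting. All functional forms and numbers below are illustrative assumptions, not taken from the paper: quasi-linear utilities ${u}_{i}={a}_{i}\mathrm{ln}(1+x)+{y}_{i}$ (so ${\pi}_{i}={a}_{i}/(1+x)$), a linear technology $G(x)=x$ (so $\gamma =1$), and a small Euler step approximating continuous time.

```python
# Illustrative simulation of the MDP dynamics under truthful reporting.
# Assumed functional forms (not from the paper): quasi-linear utilities
# u_i = a_i*ln(1+x) + y_i, so pi_i = a_i/(1+x); linear cost G(x) = x, so
# the MRT is gamma = 1. Time is discretized by a small Euler step h.
a = [0.5, 0.8, 1.2]              # hypothetical taste parameters
delta = [1/3, 1/3, 1/3]          # surplus shares, summing to one
x, y = 0.0, [10.0, 10.0, 10.0]   # initial public good and endowments
h = 0.01

for _ in range(20000):
    psi = [ai / (1 + x) for ai in a]   # truthful MRS reports
    gamma = 1.0                        # MRT of the linear technology
    X = sum(psi) - gamma               # public good decision X(psi)
    # each i pays psi_i*X in private good and receives a surplus share
    y = [yi + h * (-pi * X + di * (sum(psi) - gamma) * X)
         for yi, pi, di in zip(y, psi, delta)]
    x += h * X

print(x, sum(ai / (1 + x) for ai in a))  # sum of MRSs approaches gamma = 1
```

Feasibility holds along the path: the private good collected, ${\sum}_{i}{y}_{i}(0)-{\sum}_{i}{y}_{i}(t)$, exactly covers the cost $G(x(t))=x(t)$, and under truth-telling each utility is nondecreasing since ${\dot{U}}_{i}={\delta}_{i}X{(\psi )}^{2}\ge 0$.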

*Remark 1*${\delta}_{i}>0$ was posited by Drèze and de la Vallée Poussin ([1971]), and followed by Roberts ([1979a, 1979b]), whereas ${\delta}_{i}\ge 0$ was assumed by Champsaur ([1976]) who advocated a notion of neutrality to be explained below.

The *local incentive game* associated with each iteration of the process is formally defined as the normal form game $(\mathbf{N},\mathbf{\Psi},\mathbf{U})$: **N** is the set of players, $\mathbf{\Psi}={\times}_{j\in \mathbf{N}}{\mathbf{\Psi}}_{j}\subset {\mathbf{R}}_{+}^{N}$ is the Cartesian product of the ${\mathbf{\Psi}}_{j}$, where ${\mathbf{\Psi}}_{i}$ is the set of player *i*’s strategies, and $\mathbf{U}=({U}_{1},\dots ,{U}_{N})$ is the *N*-tuple of payoff functions. The time derivative of consumer *i*’s utility, normalized by the marginal utility of the private good, is such that

${U}_{i}(\psi (t))={\pi}_{i}(t)X(\psi (t))+{\dot{y}}_{i}(t)$,

which is the payoff that each individual obtains in the local incentive game along the procedure.

The behavioral hypothesis underlying the above equations is the following *myopia assumption*: each player determines his/her strategy ${\psi}_{i}\in {\mathbf{\Psi}}_{i}$ in order to maximize his/her instantaneous utility increment ${U}_{i}(\psi (t))$.

### 3.2 Normative conditions for the family of the MDP Procedures

The conditions that I presented in the Introduction are now in order. Let ${\psi}_{-i}=({\psi}_{1},\dots ,{\psi}_{i-1},{\psi}_{i+1},\dots ,{\psi}_{N})\in {\mathbf{\Psi}}_{-i}={\times}_{j\in \mathbf{N}-\{i\}}{\mathbf{\Psi}}_{j}\subset {\mathbf{R}}_{+}^{N-1}$.

**Condition F** Feasibility: ${\sum}_{i\in \mathbf{N}}{\dot{y}}_{i}(t)+\gamma (t)X(\psi (t))=0$.

**Condition M** Monotonicity: ${U}_{i}(\psi (t))\ge 0$, $\mathrm{\forall}i\in \mathbf{N}$.

**Condition PE** Pareto Efficiency: $X(\psi (t))=0$ if and only if ${\sum}_{j\in \mathbf{N}}{\psi}_{j}(t)=\gamma (t)$.

**Condition LSP** Local Strategy Proofness: ${U}_{i}({\pi}_{i},{\psi}_{-i})\ge {U}_{i}({\psi}_{i},{\psi}_{-i})$, $\mathrm{\forall}{\psi}_{i}\in {\mathbf{\Psi}}_{i}$, $\mathrm{\forall}{\psi}_{-i}\in {\mathbf{\Psi}}_{-i}$, $\mathrm{\forall}i\in \mathbf{N}$.

**Condition N** Neutrality: $\mathrm{\forall}{z}^{\ast}\in {\mathbf{P}}_{0}$ and for any initial ${z}_{0}\in \mathbf{Z}$, $\mathrm{\exists}\delta \in \mathrm{\Delta}$ such that ${lim}_{t\to \mathrm{\infty}}{u}_{i}(z(t,\delta ))={u}_{i}({z}^{\ast})$, $\mathrm{\forall}i\in \mathbf{N}$,

where ${\mathbf{P}}_{0}$ is the set of individually rational Pareto optima (IRPO), Δ is the set of $\delta =({\delta}_{1},\dots ,{\delta}_{N})$, and $z(\cdot )$ is a solution of the procedure.

It was Champsaur ([1976]) who advocated the notion of neutrality for the MDP Procedure, and Cornet ([1977b]) generalized it by omitting two restrictive assumptions imposed by Champsaur, i.e., (i) uniqueness of the solution and (ii) concavity of the utility functions. Neutrality depends on the distributional coefficient vector *δ*. Remember that the role of *δ* is to attain any IRPO by redistributing the social surplus generated during the operation of the procedure: varying *δ* varies the trajectories so that every IRPO can be reached. In other words, the planning center can guide an allocation via the choice of *δ*; however, it cannot predetermine the final allocation to be achieved. This is a very important property for noncooperative games, since equity considerations among players matter.^{1}

*Remark 2* All conditions except *PE* must be fulfilled for any $t\in [0,\mathrm{\infty})$. *PE* is based on the announced values ${\psi}_{i}$, $\mathrm{\forall}i\in \mathbf{N}$, which implies that the Pareto optimum reached is not necessarily the one achieved under truthful revelation of preferences for the public good. Condition *LSP* signifies that truth-telling is a dominant strategy. Condition *N* means that for every efficient point ${z}^{\ast}\in \mathbf{Z}$ and for any initial point ${z}_{0}\in \mathbf{Z}$, there exist *δ* and $z(t,\delta )$, a trajectory starting from ${z}_{0}$, such that ${z}^{\ast}=z(\mathrm{\infty},\delta )$.

The MDP Procedure enjoys feasibility, monotonicity, stability, neutrality, and incentive properties pertaining to minimax and Nash strategies, as was proved by Drèze and de la Vallée Poussin ([1971]) and Roberts ([1979a, 1979b]). The MDP Procedure as an algorithm evolves in the allocation space and stops when Samuelson’s conditions are met, so that the public good quantity is optimal and, simultaneously, the private good is allocated in a Pareto optimal way, i.e., $(x,{y}_{1},\dots ,{y}_{N})$ is Pareto optimal. Malinvaud ([1971, 1972]) designed price-guided and price-quantity-guided planning procedures. Drèze ([1972]) constructed a tâtonnement process under uncertainty.

### 3.3 The process renamed the LSP MDP Procedure

In our context, as a planner’s most important task is to achieve an optimal allocation of the public good, he or she has to collect the relevant information from the periphery so as to meet the conditions presented above. Fortunately, the necessary information is available if the procedure is locally strategy proof. It was already shown by Fujigaki and Sato ([1982]), however, that the locally strategy proof MDP Procedure cannot preserve neutrality, since ${\delta}_{i}$, $\mathrm{\forall}i\in \mathbf{N}$, must be fixed at $1/N$ to accomplish LSP while keeping the other conditions fulfilled. Note that ${\delta}_{i}=1/N\ne 0$, since *N* is finite.

Fujigaki and Sato ([1981]) designed the *LSP MDP Procedure*, which reads:

*Remark 3* We termed our procedure the ‘Generalized MDP Procedure’ in our paper (1981). Certainly, the public good decision function was generalized to include that of the MDP Procedure, whereas the distributional vector was fixed at the above specific value. Thus, in order to be more precise, let me hereafter call the above procedure the ‘LSP MDP Procedure.’ The genuine *Generalized* MDP Procedure is presented below.

The LSP MDP Procedure possesses the following properties:

- (i)
The Procedure monotonically converges to an individually rational optimum, even if agents do not report their true valuations, i.e., their MRSs for the public good.

- (ii)
Revealing his/her true MRS is always a dominant strategy for each myopically behaving agent.

- (iii)
The Procedure generates trajectories in the feasible allocation space similar to those of the MDP Procedure with a uniform distribution of the instantaneous surplus generated at each iteration, which leaves the planning authority no influence on the final plan. Hence, the Procedure is nonneutral.

*Remark 4* Property (ii) is an important one that cannot be enjoyed by the original MDP Process except when there are only two agents with equal surplus shares, i.e., ${\delta}_{i}=1/2$, $\mathrm{\forall}i=1,2$. The result on nonneutrality in (iii) can be modified by designing the Generalized MDP Procedure below. See Roberts ([1979a, 1979b]) for these properties.

The following theorems are stated without proofs, which were given in Fujigaki and Sato ([1981]).

**Theorem 1***The LSP MDP Procedure fulfills Conditions F*, *M*, *PE*, *and LSP*. *However*, *it cannot satisfy Condition N*.

**Theorem 2***For the LSP MDP Procedure and for any*${z}_{0}\in \mathbf{Z}$, *there exists a unique solution*$z(\cdot ):[0,\mathrm{\infty})\to \mathbf{Z}$, *which is such that*${lim}_{t\to \mathrm{\infty}}z(t)$*exists and is a Pareto optimum*.

### 3.4 The Generalized MDP Procedures

In the local incentive game the planner can obtain the true information of individuals, since the LSP MDP Procedure induces them to reveal it. Its operation does not even require truthfulness of each player to be a Nash equilibrium strategy; it needs only aggregate correct revelation to be a Nash equilibrium, as was verified in Sato ([1983]). It is easily seen from the above discussion that the LSP MDP Procedure is not neutral at all, which means that local strategy proofness impedes the attainment of neutrality. Hence, Sato ([1983]) proposed another version of neutrality and Condition *Aggregate Correct Revelation (ACR)*, which is much weaker than *LSP*.

Let $\pi =({\pi}_{1},\dots ,{\pi}_{N})$ be a vector of MRSs for the public good and **Π** be its set. The condition can be stated in our context as follows:

**Condition ACR** Aggregate Correct Revelation: ${\sum}_{i\in \mathbf{N}}{\varphi}_{i}(\pi )={\sum}_{i\in \mathbf{N}}{\pi}_{i}$, $\mathrm{\forall}\pi \in \mathbf{\Pi}$, where ${\varphi}_{i}$ is player *i*’s Nash equilibrium strategy.

*Remark 5* Condition *ACR* means that the sum of the Nash equilibrium strategies, ${\varphi}_{i}$, $\mathrm{\forall}i\in \mathbf{N}$, always coincides with the aggregate value of the correct MRSs. Clearly, *ACR* only claims truthfulness in the aggregate.

I also need the following two conditions. Let $\rho :{\mathbf{R}}_{+}^{N}\to {\mathbf{R}}_{+}^{N}$ be a permutation function and ${T}_{i}(\psi )$ be a transfer in private good to agent *i*.

**Condition TA** Transfer Anonymity: ${T}_{i}(\rho (\psi ))={T}_{i}(\psi )$, $\mathrm{\forall}\rho$, $\mathrm{\forall}i\in \mathbf{N}$.

*Remark 6* Condition *TA* says that agent *i*’s transfer in private good is invariant under permutation of its arguments, i.e., the order of strategies does not affect the value of ${T}_{i}(\psi )$, $\mathrm{\forall}i\in \mathbf{N}$. Sato ([1983]) proved that ${T}_{i}(\psi )={T}_{i}({\sum}_{j\in N}{\psi}_{j}-\gamma )$, which is an example of transfer rules.

**Condition TN** Transfer Neutrality: $\mathrm{\forall}{z}^{\ast}\in {\mathbf{P}}_{0}$, $\mathrm{\exists}T\in \mathbf{\Omega}$ such that ${lim}_{t\to \mathrm{\infty}}{u}_{i}(z(t,T))={u}_{i}({z}^{\ast})$, $\mathrm{\forall}i\in \mathbf{N}$,

where $T=({T}_{1},\dots ,{T}_{N})$ is a vector of transfer functions and **Ω** is its set.

Now, I enumerate the properties of the Generalized MDP Procedures just renamed above. Proofs are already given in Sato ([1983]), so they are omitted here.

**Theorem 3** *The Generalized MDP Procedures fulfill Conditions ACR*, *F*, *M*, *PE*, *TA*, *and TN*. *Conversely*, *any planning process satisfying these conditions is characterized to*:

**Theorem 4***Revealing preferences truthfully in any Generalized MDP Procedure is a minimax strategy for any*$i\in \mathbf{N}$. *It is the only minimax strategy for any*$i\in \mathbf{N}$, *when*$x>0$.

**Theorem 5**${\varphi}_{i}={\pi}_{i}$*holds for any*$i\in \mathbf{N}$*at the equilibrium of the Generalized MDP Procedures*.

**Theorem 6***Under Assumptions* 1-3, *for every individually rational Pareto optimum*${z}^{\ast}$, *there exists**δ**and a trajectory*$z(\cdot ):[0,\mathrm{\infty})\to \mathbf{Z}$*of the differential equation defining the Generalized MDP Procedures such that*, $\mathrm{\forall}i\in \mathbf{N}$, ${u}_{i}({z}^{\ast})={lim}_{t\to \mathrm{\infty}}{u}_{i}(x(t),{y}_{i}(t))$.

Keeping the same nonlinear public good decision function as derived from Condition *LSP*, Sato ([1983]) could state the above characterization theorem. In the sequel, the Generalized MDP Procedure with ${T}_{i}({\sum}_{j\in N}{\psi}_{j}-\gamma )={\delta}_{i}({\sum}_{j\in N}{\psi}_{j}-\gamma )X(\psi )$ is employed. Via a pertinent choice of ${T}_{i}(\cdot )$, we can generate the family of the Generalized MDP Procedures, which includes the MDP and the LSP MDP Procedures as special members.
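As a quick sanity check of the transfer rule just specified, the following sketch verifies numerically that, in an assumed special case with the MDP decision $X(\psi )={\sum}_{j}{\psi}_{j}-\gamma$ and private good revisions ${\dot{y}}_{i}=-{\psi}_{i}X+{T}_{i}$, the balance ${\sum}_{i}{\dot{y}}_{i}+\gamma X=0$ holds for arbitrary reports whenever the shares ${\delta}_{i}$ sum to one. All numbers are randomly generated for illustration.

```python
import random

# Numerical sanity check of the transfer rule T_i = delta_i*(sum(psi)-gamma)*X.
# Assumed special case: the MDP decision X(psi) = sum(psi) - gamma and private
# good revisions ydot_i = -psi_i*X + T_i. Then sum(ydot) + gamma*X = 0 holds
# for arbitrary reports psi whenever the shares delta_i sum to one.
random.seed(0)
for _ in range(1000):
    n = random.randint(2, 6)
    psi = [random.uniform(0.0, 2.0) for _ in range(n)]   # arbitrary reports
    w = [random.uniform(0.1, 1.0) for _ in range(n)]
    delta = [wi / sum(w) for wi in w]                    # shares, sum to one
    gamma = random.uniform(0.5, 1.5)                     # arbitrary MRT
    X = sum(psi) - gamma
    ydot = [-p * X + d * (sum(psi) - gamma) * X for p, d in zip(psi, delta)]
    assert abs(sum(ydot) + gamma * X) < 1e-9
print("feasibility identity holds for all sampled reports")
```

The identity mirrors Condition *F*: whatever the reports, the transfers redistribute exactly the surplus, so the private good collected covers the cost of the public good revision.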

*Remark 7* Green and Laffont ([1979]), Laffont ([1979]), and Champsaur and Rochet ([1983]) systematically studied the family of planning procedures that are asymptotically efficient and locally strategy proof. We now know that the class of *LSP* procedures is large: it includes the Bowen Procedure, the Generalized Wicksell Procedure, and the LSP MDP Procedure as special members, as classified by Rochet ([1982]) and Sato ([2011]). Sato ([2010]) presented a discrete version of the procedure developed by Green and Laffont, which was the first LSP procedure with pivotal agents.

The next section provides a positive result on neutrality, in contrast to Champsaur and Laroque ([1981, 1982]) and Laroque and Rochet ([1983]), who concluded that the intertemporal MDP Procedures with and without public goods are nonneutral.

## 4 The piecewise linearized MDP Procedure

### 4.1 A description of the piecewise linearized MDP Procedure

In the Procedure below, the planner provides an optimal quantity of a public good by revising its quantity at discrete times $t\in \{{\tau}_{1},\dots ,{\tau}_{s},{\tau}_{s+1},\dots ,D\}=\mathbf{T}$, the set of discrete dates. The length of the time horizon *D*, which can take the value ∞, is predetermined by the planner. In order to decide in what direction an allocation should be changed, the planner proposes a tentative feasible allocation, $z(0)=(\chi (0),{\omega}_{1},\dots ,{\omega}_{N})$, with a tentative step-size of the public good, $\chi (0)$, at the initial time 0; each agent is then asked to report his/her true MRS, ${\pi}_{i}(z(0))$, $\mathrm{\forall}i\in \mathbf{N}$, as local privately held information. At each discrete date ${\tau}_{s}$ the planner can easily calculate the sum of the announced MRSs in order to change the allocation at the next date ${\tau}_{s+1}$. The planner is supposed to know the exact value of the MRT. Assume also that the agents have rational expectations over the time interval, although these are bounded; they not only have complete knowledge of the planning rules of the procedure defined below, but can also at least predict the allocation to be attained at the beginning of the next interval. Champsaur and Laroque ([1982], p.326) wrote that ‘[s]uch a situation of limited intertemporal consistency is similar to the discrete procedures.’ Champsaur and Laroque ([1981, 1982]) took into consideration the effects of the agents’ strategies upon the final allocation. Agents in the private good economy of Fujigaki ([1981]) are assumed to maximize their utility anticipated at the end of each time interval. So I extend his model to involve a public good in order to examine nonmyopic behaviors on the part of strategic players, as in Champsaur and Laroque ([1981]).

The planning horizon is divided into *D* intervals $[{\tau}_{s},{\tau}_{s+1})$. Applying our procedure repeatedly to each interval, the allocation at any point of each interval is given, for any ${\tau}_{s}\in \mathbf{T}$ and any $t\in [{\tau}_{s},{\tau}_{s+1})$, by

$x(t)=x({\tau}_{s})+(t-{\tau}_{s}){X}^{\alpha}(t)$, ${y}_{i}(t)={y}_{i}({\tau}_{s})+(t-{\tau}_{s}){Y}_{i}^{\alpha}(t)$, $\mathrm{\forall}i\in \mathbf{N}$,

where ${X}^{\alpha}(t)$ and ${Y}_{i}^{\alpha}(t)$ are the average speeds of adjustments over the interval $[{\tau}_{s},{\tau}_{s+1})$, which are defined by the Generalized MDP Procedure with ${T}_{i}$ specified above.
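A minimal sketch of this piecewise-linear update follows, assuming (as a simplification) that the average speeds are the MDP speeds evaluated once at the reports made at ${\tau}_{s}$ and held fixed over the interval; all functional forms and numbers are hypothetical.

```python
# Piecewise-linear allocation path on one interval [tau_s, tau_s1), as a sketch.
# Assumed: the average speeds X_avg, Y_avg are computed from the reports made
# at tau_s (here with the simple MDP form) and held fixed over the interval.
def lambda_mdp_interval(x_s, y_s, psi, delta, gamma, tau_s, tau_s1):
    X_avg = sum(psi) - gamma                      # public good speed (assumed)
    Y_avg = [-p * X_avg + d * (sum(psi) - gamma) * X_avg
             for p, d in zip(psi, delta)]         # private good speeds
    def allocation(t):
        assert tau_s <= t < tau_s1
        dt = t - tau_s                            # time elapsed in the interval
        return (x_s + dt * X_avg,
                [yi + dt * Yi for yi, Yi in zip(y_s, Y_avg)])
    return allocation

# hypothetical numbers: two agents, reports 0.7 and 0.6, MRT gamma = 1
z = lambda_mdp_interval(1.0, [5.0, 5.0], [0.7, 0.6], [0.5, 0.5], 1.0, 0.0, 1.0)
x_mid, y_mid = z(0.5)
print(x_mid, y_mid)  # allocation halfway through the interval
```

Within each interval the path is linear; the variable step-sizes enter through the interval lengths ${\tau}_{s+1}-{\tau}_{s}$ and the speeds recomputed at each revision date.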

Note that the planner has to observe the step-size, $\chi (t)$, but not each ${\upsilon}_{i}(t)$, since the former determines the latter. Let us call this piecewise linearized procedure the *λ MDP Procedure*; it serves as the rule of a piecewise intertemporal incentive game.

### 4.2 Normative conditions for the *λ* MDP Procedure

The following new conditions are defined for our *λ* MDP Procedure.

**Condition PIF** Piecewise Intertemporal Feasibility:

**Condition PIM** Piecewise Intertemporal Monotonicity:

**Condition PISP** Piecewise Intertemporal Strategy Proofness:

Condition PISP may also be called *Stepwise Strategy Proofness.*

### 4.3 The *λ* MDP Procedure as a piecewise intertemporal incentive game form

To examine the incentive properties of the procedure, the assumption of truthful revelation of preferences is dropped. Each player’s announcement, ${\psi}_{i}$, is not necessarily equal to his/her true MRS, ${\pi}_{i}$; thus ${\pi}_{i}$ is replaced with ${\psi}_{i}$ in the dynamic system of the *λ* MDP Procedure. The nonmyopia assumption is introduced for our procedure, since a discrete time framework makes myopia a weaker behavioral hypothesis. The procedure and the game are repeated for each interval in our framework.

What I associate with the above process, instead of the intertemporal game used by Champsaur and Laroque ([1981]), is, so to speak, a ‘bounded’ or ‘piecewise’ intertemporal game, since the time interval in their model is divided. A piecewise intertemporal game played at the discrete dates of each time interval of the procedure is formally defined as the normal form game $(\mathbf{N},\mathbf{\Psi},\mathbf{V})$. **N** is the set of players, $\mathbf{\Psi}={\times}_{i\in \mathbf{N}}{\mathbf{\Psi}}_{i}\subset {\mathbf{R}}_{+}^{N}$ is the Cartesian product of the ${\mathbf{\Psi}}_{i}$, where ${\mathbf{\Psi}}_{i}$ is the set of player *i*’s strategies, and $\mathbf{V}=({V}_{1}({\tau}_{s+1}),\dots ,{V}_{N}({\tau}_{s+1}))$ is the *N*-tuple of payoff functions at the end of the current time interval $[{\tau}_{s},{\tau}_{s+1})$, such that ${V}_{i}({\tau}_{s+1})={u}_{i}(x({\tau}_{s+1}),{y}_{i}({\tau}_{s+1}))$, $\mathrm{\forall}i\in \mathbf{N}$.

Let us give a definition here.

**Definition 4** *The best reply strategy* for each individual *i* in the piecewise intertemporal game $(\mathbf{N},\mathbf{\Psi},\mathbf{V})$ is the strategy ${\psi}_{i}^{\ast}({\tau}_{s})\in {\mathbf{\Psi}}_{i}$ such that, for any ${\tau}_{s}\in \mathbf{T}$,

${V}_{i}({\psi}_{i}^{\ast}({\tau}_{s}),{\psi}_{-i}({\tau}_{s}))\ge {V}_{i}({\psi}_{i}({\tau}_{s}),{\psi}_{-i}({\tau}_{s}))$, $\mathrm{\forall}{\psi}_{i}({\tau}_{s})\in {\mathbf{\Psi}}_{i}$.

*Remark 8* Condition PISP is satisfied if truth-telling coincides with the best reply strategy in the piecewise intertemporal game. The behavioral hypothesis underlying this definition is the nonmyopia assumption, i.e., each player determines his/her best reply strategy at the beginning of each interval $[{\tau}_{s},{\tau}_{s+1})$ in order to maximize his/her payoff, ${V}_{i}({\tau}_{s+1})$, at the beginning of the next interval $[{\tau}_{s+1},{\tau}_{s+2})$.

**Nonmyopia Assumption** Every player is assumed to behave nonmyopically: viz., when determining his/her strategy in a piecewise intertemporal game, each player maximizes not the time derivative of his/her utility function but the utility increment based on the allocation that he/she foresees obtaining at the end of the current time interval.

*Remark 9* This behavioral hypothesis may be justified by considering that the future development of an allocation cannot be predicted exactly. Hence, every player has to make a piecewise decision under uncertainty. Players are only assumed to forecast at least what will happen at the next discrete date.

Now I examine the properties of the *λ* MDP Procedure just defined. This paper confines itself to PISP, instead of LSP or Strong Local Individual Incentive Compatibility (SLIIC).

Suppose that the *λ* MDP Procedure is not at an equilibrium at ${\tau}_{s+1}$; then the following theorems hold. Proofs are postponed to the Appendix.

**Theorem 7***For each*$i\in \mathbf{N}$*and for any*${\tau}_{s+1}\in \mathbf{T}$, ${U}_{i}^{-}({\tau}_{s+1})\ge 0$.

**Theorem 8** *For each* $i\in \mathbf{N}$ *and for any* ${\tau}_{s+1}\in \mathbf{T}$:

Therefore, the average speed of each individual’s utility increment is positive over the interval $[{\tau}_{s},{\tau}_{s+1})$ for any revision date ${\tau}_{s}\in \mathbf{T}$.

The next theorem states that the utility is monotonically nondecreasing over the interval $[{\tau}_{s},{\tau}_{s+1})$ for any ${\tau}_{s}\in \mathbf{T}$.

**Theorem 9***For each*$i\in \mathbf{N}$*and for any*$t\in [{\tau}_{s},{\tau}_{s+1})$, ${U}_{i}(t)>0$.

**Theorem 10** *For each* $i\in \mathbf{N}$, ${\psi}_{i}^{\ast}({\tau}_{s})={\pi}_{i}({\tau}_{s+1})$ *is player i’s best reply strategy at date* ${\tau}_{s}$, *which maximizes* ${V}_{i}({\tau}_{s+1})$ *in the piecewise intertemporal incentive game associated with the λ MDP Procedure*.

That is to say, truthful revelation for the public good is the best reply strategy in the piecewise intertemporal game, and it is the only best reply strategy when $x>0$.

*Remark 10* Theorem 10 means that the best reply strategy at ${\tau}_{s}$ for each player is to reveal his/her true MRS for the public good to be provided at date ${\tau}_{s+1}$, i.e., ${\pi}_{i}({\tau}_{s+1})$, not ${\pi}_{i}({\tau}_{s})$. For each time interval $[{\tau}_{s},{\tau}_{s+1})$, the *λ MDP* Procedure is piecewise intertemporally strategy proof in the sense that each player’s MRS announced at date ${\tau}_{s}$ coincides with the true one corresponding to the allocation anticipated by that player at the end of the current interval $[{\tau}_{s},{\tau}_{s+1})$. The crucial point is that each player’s best reply strategy, ${\psi}_{i}^{\ast}({\tau}_{s})$, is not ${\pi}_{i}({\tau}_{s})$ but ${\pi}_{i}({\tau}_{s+1})$. This result comes from the difference between the myopia and nonmyopia assumptions, i.e., the length of the players’ time horizon matters.
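The logic can be illustrated numerically in an assumed special case (not the paper’s general nonlinear decision function): two agents with quasi-linear utilities ${u}_{i}={a}_{i}\mathrm{ln}(1+x)+{y}_{i}$, the MDP rule with ${\delta}_{i}=1/2$, and a single linearized interval of length *h*. A grid search over player 1’s report shows that the payoff-maximizing report equals the MRS at the anticipated end-of-interval allocation, not the current one.

```python
import math

# Illustration of the best-reply logic in an assumed special case: two agents,
# quasi-linear u_i = a_i*ln(1+x) + y_i, MDP decision rule with delta_i = 1/2,
# one linearized interval of length h. All numbers are hypothetical.
a1, a2 = 1.0, 0.8
x0, y0, gamma, h = 0.5, 10.0, 1.0, 0.2
psi2 = a2 / (1 + x0)                      # player 2's (fixed) report

def payoff1(psi1):
    X = psi1 + psi2 - gamma               # public good speed from the reports
    x1 = x0 + h * X                       # end-of-interval public good
    y1 = y0 + h * (-psi1 * X + 0.5 * X * X)
    return a1 * math.log(1 + x1) + y1     # player 1's payoff V_1(tau_{s+1})

best = max((i / 10000 for i in range(20000)), key=payoff1)  # grid search

pi_now = a1 / (1 + x0)                    # current MRS, pi_1(tau_s)
x_end = x0 + h * (best + psi2 - gamma)
pi_end = a1 / (1 + x_end)                 # anticipated MRS, pi_1(tau_{s+1})
print(best, pi_now, pi_end)               # best matches pi_end, not pi_now
```

Solving the first-order condition gives ${\psi}_{1}^{\ast}={a}_{1}/(1+{x}_{1}({\psi}_{1}^{\ast}))$, a fixed point at the anticipated end-of-interval MRS, which the grid search recovers.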

*Remark 11* The myopia assumption is common in local games associated with both continuous and discrete planning procedures such as the MDP and the CDH (Champsaur-Drèze-Henry) Procedures. See Henry ([1979]) and Schoumaker ([1977, 1979]) for details on this point. Also, nontâtonnement procedures are of concern in real economic life; hence, in view of obvious practical relevance, the discrete process should ideally be constructed in a nontâtonnement setting. I have, however, confined myself to developing a piecewise linearized process as an approximation. Under the nonmyopia assumption, sincere revelation of preference for the public good at any discrete date of the *λ MDP* Process is the best reply strategy for each player.

Hence, I am now in a position to present the theorem.

**Theorem 11** *Under Assumptions* 1-3, *the λ MDP Procedure satisfies PIF*, *PIM*, *TN*, *PE*, *and PISP*.

*Remark 12* Our *λ MDP* Procedure can keep neutrality, in contrast to Champsaur and Laroque’s ([1981]) result on the nonneutrality of procedures with intertemporal strategic behaviors of agents. This possibility stems from Sato ([1983]), who proposed aggregate correct revelation as a condition replaceable with local strategy proofness and constructed a planning procedure that simultaneously satisfies three desiderata: efficiency, neutrality, and aggregate correct revelation.

Let ${\mathbf{T}}_{1}=\{{\tau}_{1}\}$ be the set of dates for revising the allocation by the center. When ${\tau}_{1}$ tends to infinity, the MRSs revealed by the players at date 0 converge to those corresponding to a Pareto optimal allocation, $z({\tau}_{1})$, achieved via the procedure. Theorem 11, therefore, yields another theorem whose proof is obvious and thus omitted here.

**Theorem 12** *When* ${\tau}_{1}$ *tends to infinity*, *any trajectory of the λ MDP Process converges towards a Pareto optimal allocation*. *Furthermore*, *it is intertemporally strategy proof in the sense of Champsaur and Laroque*.

## 5 Literature on myopia and discreteness

### 5.1 A discussion on discreteness

Here I present some comments on discrete procedures. A proper discrete procedure could be constructed via the decreasing pitch proposed by Champsaur, Drèze, and Henry ([1977]), but I have attempted a different approach, whose discussion can be extended to a piecewise linearized procedure. The above dynamic system can be generalized to involve many public goods, the amounts of which can be simultaneously adjusted at each iteration. This differs from Champsaur, Drèze, and Henry ([1977]), in which the quantity of only one public good can be revised at each discrete date.

Incidentally, little is known about the speed of convergence of the procedures, particularly when they are formulated in discrete versions, which are the only realistic ones from the standpoint of actual planning practice. The continuous version implies that the players’ responses are transmitted continuously to the planning center, with no computation cost or adjustment lag.^{2} The technical advantages of the differential approach, however, are well known: as Malinvaud ([1970-1971], p.192) rightly pointed out, a continuous formulation removes the difficult question of choosing an adjustment speed. Hence, the continuous version is justified mainly by convenience. Moreover, a continuous formulation may be considered an approximation to a discrete representation.^{3}

Casual observation suggests that discrete procedures are more realistic than continuous ones, and that revisions of resource allocation are essentially made in discrete time. But most planning procedures discussed in the literature are formulated in continuous time because of the difficulties involved in using the discrete version. As indicated by Malinvaud ([1967]) and others, this dilemma is a traditional technical difficulty: if one selects a pitch large enough to get rapid convergence, one runs the risk of no convergence at all; on the other hand, if one chooses a pitch small enough to guarantee exact convergence, convergence may be very slow.

Discrete versions of the MDP Procedure have been presented by several authors, and there are different strains of related literature. The first strain, taken by Champsaur, Drèze, and Henry ([1977]), is characterized by a decreasing adjustment pitch (or step-size): the pitch is kept constant as long as it allows progress in efficiency and is halved as soon as such progress becomes impossible. The dilemma associated with discrete procedures mentioned above is thereby overcome.^{4}
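The halving rule can be sketched as follows; the aggregate MRS function and all numbers are illustrative assumptions, not taken from Champsaur, Drèze, and Henry.

```python
# A minimal sketch of the decreasing-pitch idea: keep the step-size while the
# revision still improves efficiency, and halve it as soon as the revision
# would overshoot the Samuelson level. Assumed (not from the paper):
# aggregate MRS a/(1+x) with a = 2.5, and MRT gamma = 1.
a, gamma = 2.5, 1.0
x, pitch = 0.0, 1.0

for _ in range(200):
    excess = a / (1 + x) - gamma              # sum of MRSs minus MRT
    if abs(excess) < 1e-12:
        break                                 # Samuelson condition met
    step = pitch if excess > 0 else -pitch
    x_new = x + step
    if x_new < 0 or (a / (1 + x_new) - gamma) * excess < 0:
        pitch /= 2                            # overshoot: halve the pitch
    else:
        x = x_new                             # progress: keep the pitch
print(x)  # approaches the level where a/(1+x) = gamma, i.e., x = 1.5
```

With these illustrative numbers the rule reaches the optimal level after a single halving; in general the pitch keeps halving near the optimum, trading speed against precision exactly as the dilemma describes.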

Discussions of incentives in discrete-time MDP Procedures are given in Henry ([1979]) and Schoumaker ([1976, 1977, 1979]). They analyzed players’ strategic behaviors in discrete MDP Processes by dropping the assumption of truthful revelation. They showed that their procedures still converge to a Pareto optimum even under strategic preference revelation *à la* Nash.

Approaching the same issue from another angle, Green and Schoumaker ([1980]) presented a discrete MDP Process with a flexible step-size at each iteration and studied its incentive properties in a game-theoretical framework. Their analysis dispensed with the ‘strategic indifference’ assumption imposed by Henry ([1979]) and Schoumaker ([1979]), i.e., that players choose truth-telling if the resulting outcome would be indifferent. Their discrete-time procedure, however, requires reporting global information on consumers’ preferences: consumers’ marginal willingness-to-pay functions are constrained to be compatible with, and part of, their utility functions. Essentially, a Nash equilibrium concept is employed. Although their ideas are interesting, the informational burden in their model is much greater than in other approaches.

Mas-Colell ([1980]) proposed a voluntary financing process, which is a global analog of the MDP Procedure.^{5} He obtained characterizations of Pareto optimal and core states in terms of valuation functions; the incentive problem was not considered. Chander ([1985]) presented a discrete version of the MDP Procedure and insisted that his system is the most informationally efficient allocation mechanism, though without any consideration of its incentive properties. Otsuki ([1978]) employed the feasible direction method in the theory of discrete planning and applied it to the MDP and Heal Procedures by devising implementable algorithms. Again, the problem of incentives was not treated in his paper.

Roberts ([1987]) tackled another difficult issue which is not yet fully settled: he attempted to relax both the assumption of myopia and that of complete information in the simplest version of the iterative planning framework due to Champsaur, Drèze, and Henry ([1977]). In his procedure the agents are initially imperfectly informed but gradually learn about each other so as to predict the future behavior of others. He discussed the Bayesian incentive compatibility of his procedure, and gave a numerical example of a condominium as a public good, whose entrance is redecorated by its members using the iterative process.^{6}

Allard et al. ([1980]) proposed definitions of temporary and intertemporal Pareto optimality. In their paper individuals are represented by Roy-consistent expectation functions induced by their learning processes. To explain their concepts of expectation functions, they referred to a pure exchange MDP Process, in which the planner asks agents to evaluate present goods and to send him/her their demands. To value present goods, agents must forecast future quantities. Thus, Allard et al. ([1980]) assumed that the consumers are endowed with expectation functions.

As Coughlin and Howe ([1989]) criticized, none of the above discrete procedures satisfies a criterion of intertemporal Pareto optimality. According to them, only the process devised by Green and Schoumaker ([1980]) suggested a possible avenue toward that criterion. I have therefore presented a different version of Green and Schoumaker’s ([1980]) discrete process, with variable step-sizes and only local informational requirements.

### 5.2 A digression and justification of myopia

In the literature on the problem of incentives in planning procedures, the hypothesis of myopic strategic behavior has prevailed. Many papers imposed this behavioral hypothesis, i.e., myopia, on which the foregoing discussions crucially depended, and it spawned numerous desirable results in connection with the family of MDP Procedures.

The aim of this paper has been to examine the consequences of dropping the assumption that individuals choose their strategies to maximize the instantaneous change in their utility at each iteration along the procedure. Instead of myopic behavior, I have assumed that agents select their announcements concerning their marginal rates of substitution to maximize the utility increment obtained at the end of each time interval.

It has also been verified that the *λ* MDP Procedure can always preserve neutrality, in contrast to Champsaur and Laroque ([1981, 1982]) and Laroque and Rochet ([1983]), who analyzed the properties of the MDP Procedure under the nonmyopic assumption. They treated the case where each individual attempts to forecast the influence of his/her announcements to the planning center over a predetermined time horizon, and optimizes his/her responses accordingly. It is proved that, if the time horizon is long enough, any noncooperative equilibrium of the intertemporal game attains an approximately Pareto optimal allocation. But at such an equilibrium, the influence of the center on the final allocation is negligible, which entails nonneutrality of the procedure. Their attempt was to bridge the gap between the local instantaneous game and the global game, as was pointed out by Hammond ([1979]). Our aim has been, however, to bridge the gap between the local game and the intertemporal game by constructing a compromise between continuous and discrete procedures, i.e., the piecewise linearized procedure. By letting the length of the discrete periods shrink to zero (noting that *χ*, and hence ${\upsilon}_{i}$, $\mathrm{\forall}i\in N$, would also shrink to zero), we would approach the continuous path.

Incidentally, how can we justify the myopia assumption, which is a crucial underpinning of many fruitful results in the theory of incentives, especially in planning procedures for optimally allocating public goods? Indeed, in reality people seem to behave myopically rather than farsightedly. Matthews ([1982], p.638) wrote that “myopia may be regarded as a tractable approximation, a result of ‘bounded rationality’.”

Laffont ([1985], pp.19-20) justified myopia as follows: the participants in a planning procedure always believe either that the current step is the last one, or that they will not enter into the complexities of strategic behavior over a longer time horizon. In the MDP Procedure, correct revelation of preferences is a maximin strategy in the global game, as was pointed out by Drèze. Since the procedure is monotone in utilities, the worst that could happen is the termination of the procedure. In other words, the global game reduces to the local game, in which the maximin strategy consists of correctly revealing preferences. Conversely, choosing a myopic strategy amounts to adopting a maximin approach to the global game. It would be logical, however, to adopt a maximin strategy in the local game, too.

Finally, let me introduce two justifications of myopia due to Moulin ([1984], pp.131-132). The first is to consider an isolated player who finds him/herself so small that his/her choice of strategies influences the others’ choices only negligibly. The second, which complements the first, is complete ignorance: no player knows his/her opponents’ utility functions, and each player knows that he/she is unable to predict in what direction the change will occur.

The method of Truchon ([1984]) is to examine a nonmyopic incentive game in which each agent’s payoff is the utility of the final allocation. Differently from others, Truchon introduced a ‘threshold’ into his model to analyze agents’ strategic behavior. T. Sato ([1983]) also investigated how the MDP Procedure works when players with individual expectation functions nonmyopically play a sequential game, by letting them forecast what allocation would be proposed over the period if they took a certain path of strategies.

## 6 Final remarks

The present paper has formulated a piecewise linearized version of the Generalized MDP Procedure and analyzed its properties. In doing so, I have extended Fujigaki’s private goods economy to involve a public good, and have localized the intertemporal game *à la* Champsaur and Laroque ([1981, 1982]) by dividing the time interval and applying the Generalized MDP Procedure on each interval. In the piecewise intertemporal game associated with any interval generated by our procedure, each player’s payoff is the utility increment at the initial point of the next interval. Variable step-sizes are used to formalize the piecewise linearized procedure, which shares similar desirable properties with continuous procedures. This process involves partitioning the planning horizon into a specific sequence of time intervals. I have called this process the *λ* MDP Procedure and shown that it can simultaneously achieve efficiency and piecewise intertemporal strategy proofness. That is, it converges to a Pareto optimum; and the best reply strategy of each player at each date ${\tau}_{s}$ is to declare his/her anticipated MRS *at the end of the current time interval*$[{\tau}_{s},{\tau}_{s+1})$: i.e., ${\psi}_{i}^{\ast}({\tau}_{s})={\pi}_{i}({\tau}_{s+1})$. The *λ* MDP Procedure can also preserve transfer neutrality.

Recognizing the difficulties concerning the possibility of manipulation of private information by individuals, the literature has verified that this incentive problem can be dealt with by planning procedures requiring a continuous revelation of information, provided that agents behave myopically. If, however, individuals are farsighted, the traditional impossibility results occur, i.e., incentive compatibility is incompatible with efficiency, as pointed out by Champsaur, Laroque and Rochet. This paper has studied an intermediate situation where agents are asked to declare their anticipated MRS only at discrete dates, at which the direction and speed of adjustment are changed. Consequently, the associated dynamic process, named the *λMDP**Procedure*, becomes piecewise linear. Individuals are assumed to take the interval between two discrete dates as their time horizon; their behavior is hence intermediate between myopia and farsightedness. The idea of an intermediate time horizon for agents’ manipulations of information is more natural and more realistic than either myopia or farsightedness.

## Appendix

*Proof of Theorem 7* Our *λ* MDP Procedure gives

since ${X}^{\alpha}({\tau}_{s})$ and ${\sum}_{j\in N}{\psi}_{j}({\tau}_{s})-\gamma ({\tau}_{s})$ are sign-preserving. □

*Proof of Theorem 8* Because of the strict concavity of utility functions, it follows that for any ${\alpha}_{0},{\alpha}_{1}\in {\mathbf{R}}_{+}^{2}$ and for any real number $\beta \in [0,1]$
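The display equation appears to have been lost in extraction; assuming it states the standard strict-concavity inequality along the segment (strict for $\beta$ in the open interval and ${\alpha}_{0}\neq{\alpha}_{1}$), a reconstruction would read:

```latex
u_{i}\bigl(\beta\alpha_{0}+(1-\beta)\alpha_{1}\bigr)
  > \beta\,u_{i}(\alpha_{0}) + (1-\beta)\,u_{i}(\alpha_{1}),
\qquad \forall \beta\in(0,1),\ \alpha_{0}\neq\alpha_{1}.
```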

Since an allocation path is a line segment, if we set ${\alpha}_{0}=\alpha ({\tau}_{s})$ and ${\alpha}_{1}=\alpha ({\tau}_{s+1})$, then we can associate via the choice of *β* any allocation given by the *λ* MDP Procedure over the interval $[{\tau}_{s},{\tau}_{s+1})$.

As *t* tends to ${\tau}_{s+1}$, the L.H.S. of Eq. (3) approaches the average speed of utility change over the interval $[{\tau}_{s},{\tau}_{s+1})$, while the R.H.S. signifies the infinitesimal speed of utility increment. By Theorem 7

Letting *t* go to ${\tau}_{s}$ in Eq. (3), we get

These equations give us the statement of the theorem. □

*Proof of Theorem 9* It follows from Theorem 8 that

and therefore ${u}_{i}({\tau}_{s+1})\ge {u}_{i}({\tau}_{s})$.

Continuity of utility functions, and thus the intermediate value theorem, assures the existence of a certain $t\in [{\tau}_{s},{\tau}_{s+1})$ such that ${u}_{i}({\tau}_{s+1})>{u}_{i}(t)>{u}_{i}({\tau}_{s})$.

Approaching *t* from below yields

Approaching *t* from above, we get

It cannot hold for any *t* that ${u}_{i}({\tau}_{s})\ge {u}_{i}(t)$ and ${u}_{i}(t)\ge {u}_{i}({\tau}_{s+1})$ over the interval $[{\tau}_{s},{\tau}_{s+1})$. In fact, if there exists $t\in [{\tau}_{s},{\tau}_{s+1})$ such that ${u}_{i}(t)\ge {u}_{i}({\tau}_{s+1})$, then, for any $\tilde{t}\in [t,{\tau}_{s+1})$

must hold. This clearly contradicts ${U}_{i}^{-}({\tau}_{s+1})>0$, hence the desired conclusion is obtained. □

*Proof of Theorem 10* Without a truthful revelation for the public good, in contrast to the proof of Theorem 9, we observe

At any equilibrium of the *λ* MDP Procedure, the third term in the brackets vanishes, and ${u}_{iy}>0$ by assumption, so that we conclude that ${\psi}_{i}({\tau}_{s})={\pi}_{i}({\tau}_{s+1})={\psi}_{i}^{\ast}({\tau}_{s})$ holds for any $i\in N$ and for any ${\tau}_{s}\in \mathbf{T}$. □

*Proof of Theorem 11* Condition PIF is easily checked to be satisfied, since it has been already used to formulate the procedure. To sum up, the features of the process are as follows: its solution $z[t,z(0)]$ defining the procedure is the function which associates a program $z(t)$ as well as step-sizes $\chi (t)$ and ${\upsilon}_{i}(t)$, $\mathrm{\forall}i\in \mathbf{N}$, with every iteration *t*. If an initial program is feasible, then every succeeding one is also feasible. It can be demonstrated under Assumptions 1-3 that the process is stable and always converges monotonically from any initial point to an individually rational Pareto optimum. The proofs of other conditions immediately follow from the proofs of theorems supra and the definitions of the Generalized MDP Procedure. □

For the concepts of neutrality associated with planning procedures, see Cornet ([1977a, 1977b]), Cornet and Lasry ([1976]), Rochet ([1982]), Sato ([1983, 2011]). See also d’Aspremont and Drèze ([1979]) for a version of neutrality which is valid for the generic context.

The essence of the discrete version of the MDP Procedure (the CDH Procedure) is captured in Henry and Zylberberg ([1977]). See, in addition, Tulkens ([1978]), Laffont ([1982]), Mukherji ([1990]) and Salanié ([1998]) for lucid summaries of the MDP Procedure. Because of its feasibility, it can be seen as a ‘nontâtonnement process,’ and one can therefore truncate it at any time. As for contributions to the MDP literature, see Von Dem Hagen ([1991]), where a differential game approach is taken. De Trenquale ([1992]) defined a dynamic mechanism, different from the MDP Procedure, that implements in local dominant strategies a Pareto efficient and individually rational allocation in a general two-agent model. Chander ([1993]) verified the incompatibility between the core convergence property and local strategy proofness. Sato ([2007]) designed the Hedonic MDP Procedure for optimizing the gaseous attributes which compose the global atmosphere in a new theoretical context.

See Henry and Zylberberg ([1978]) for a graphical illustration of how the method of a decreasing pitch works until a Pareto optimum is attained. Although they treated the case of increasing returns to scale, the structure is isomorphic to the model with public goods. Crémer ([1983, 1990]) took another approach to increasing returns to scale and offered useful ideas that can be applied to public goods. See Heal ([1986]) for a comprehensive account of planning theory and the dilemma of choosing a step-size in discrete procedures. See also Henry and Zylberberg ([1977]) for the Heal Procedure.

## Acknowledgements

This is one of a series of papers dedicated to the XXXXth Anniversary of the MDP Procedure. It was at the Brussels Meeting of the Econometric Society in September 1969 that Drèze and de la Vallée Poussin together, and Malinvaud independently, presented their papers on planning tâtonnement processes for guiding and financing the optimal provision of public goods. A preliminary version of this paper was presented at the Far Eastern Meeting of the Econometric Society held at the International Conference Center Kobe, Japan, July 21, 2001. The revised version was presented at the autumn meeting of the Japanese Economic Association held at Hitotsubashi University, October 8, 2001. Some major revisions were made thereafter.

## Copyright information

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.