Journal of Economic Structures (2012) 1:5
DOI: 10.1186/2193-2409-1-5

Nonmyopia and incentives in the piecewise linearized MDP procedures with variable step-sizes

Author affiliations:

    • Heat Island Institute International
    • Tokyo Institute of Technology

Cite this article as:
Sato, K. Economic Structures (2012) 1:5. doi:10.1186/2193-2409-1-5

Abstract

The paper formulates a piecewise linearized version of the procedure developed by Sato (1983) and analyzes its properties. In so doing, Fujigaki's (1981) private good economy is extended to include a public good, and the intertemporal game of Champsaur and Laroque is piecewise localized by dividing the time interval of their game and by using variable step-sizes. The resulting piecewise linearized procedure, called the λMDP Procedure, possesses desirable properties similar to those of continuous-time procedures. Under the nonmyopia assumption, each player's best reply strategy at any discrete date is to reveal his/her anticipated marginal rate of substitution for the public good at the end of the current time interval of the λMDP Procedure.

JEL Classification: H41.

Keywords

incentives; nonmyopia; piecewise linearized procedures; public goods; variable step-sizes

1 Introduction

This paper formulates a piecewise linearized version of the procedure developed by Sato ([1983]) and then analyzes its properties. In so doing, Fujigaki's ([1981]) private good economy is extended to include a public good. The intertemporal game of Champsaur and Laroque ([1981], [1982]) is also piecewise localized by dividing the time interval of their game and by using variable step-sizes for revising the amount of the public good. The resulting procedure can possess desirable features shared by continuous-time procedures, i.e., efficiency and incentive compatibility. I model agents as having an intermediate time horizon, which differs from previous results on incentives in either continuous or discrete planning procedures.

The MDP Procedure received much attention in the 1970s and 1980s, especially concerning the problem of incentives in planning procedures with public goods, but there has been very little work on it over the last fifteen years. This paper is a follow-up to the literature on the use of such processes as mechanisms for aggregating the decentralized information needed for guiding and financing a public good.

Initiated by three great pioneers - Malinvaud ([1970-1971]), and Drèze and de la Vallée Poussin ([1971]) - this field of research has made remarkable progress in the last three decades. The analysis of incentives in planning tâtonnement procedures began in the late sixties and was mathematically refined by the characterization theorems of Champsaur and Rochet ([1983]), which generalized the previous results of Fujigaki and Sato ([1981, 1982]) as well as Laffont and Maskin ([1983]). The work of Champsaur and Rochet brought incentive theory in the planning context to its acme, culminating in their generic theorems. Most of these procedures can be characterized by the following axioms, whose formal definitions are given in Section 3:
  (i) Feasibility
  (ii) Monotonicity
  (iii) Pareto Efficiency
  (iv) Local Strategy Proofness
  (v) Neutrality.

The procedure presented here aims to bridge the gap between local and intertemporal games. Our process differs from that of Champsaur, Drèze, and Henry ([1977]) in that the step-sizes for revising the public good are variable at each iteration along the solution paths. Our procedure also differs from that of Green and Schoumaker ([1980]), where global information, viz., a part of each player's indifference curve, must be revealed. Only local information, i.e., the marginal rates of substitution (MRSs) of the players, is required to determine the trajectories of our piecewise intertemporal process. It is verified that the best reply strategy for each player at each discrete date is to reveal his/her anticipated true MRS for the public good at the end of the current time interval, which maximizes each player's payoff in the piecewise intertemporal incentive game. Thus, our procedure achieves 'piecewise intertemporal strategy proofness.'

The remainder of the paper is organized as follows. The next section outlines the general framework. Section 3 reviews the MDP Procedure and renames the LSP MDP and the Generalized MDP Procedures. Section 4 presents a piecewise linearized version of the Generalized MDP Procedures with variable step-sizes and then examines its properties. Section 4 also explores players' strategic manipulability in the piecewise intertemporal incentive game associated with each time interval of the procedure and presents our theorems. Discussions of myopia and discrete procedures are given in Section 5. The last section provides some final remarks. Proofs of the theorems are given in the Appendix.

2 The model

The simplest model incorporating the essential features of the problem proposed in this paper involves two goods, one public good and one private good, whose quantities are represented by x and y, respectively. $y_i$ denotes the amount of the private good allocated to the i-th consumer. The economy is supposed to possess N individuals. Each consumer $i \in N = \{1, \dots, N\}$ is characterized by his/her initial endowment of the private good $\omega_i$ and his/her utility function $u_i : \mathbb{R}_+^2 \to \mathbb{R}$.

The production sector is represented by the transformation function $G : \mathbb{R}_+ \to \mathbb{R}_+$, where $y = G(x)$ gives the minimal quantity of the private good needed to produce the public good x. It is assumed as usual that there is no production of the private good.

The following assumptions and definitions are used throughout this paper.

Assumption 1 For any $i \in N$, $u_i(\cdot,\cdot)$ is strictly quasi-concave and at least twice continuously differentiable.

Assumption 2 For any $i \in N$, $u_{ix}(x, y_i) \equiv \partial u_i(x, y_i)/\partial x \geq 0$, $u_{iy}(x, y_i) \equiv \partial u_i(x, y_i)/\partial y_i > 0$, and $u_i(0,0) = 0$ for any $(x, y_i)$.

Assumption 3 $G(x)$ is convex and twice continuously differentiable.

Let $\gamma(x) = dG(x)/dx$ denote the marginal rate of transformation, which is assumed to be known to the planning center. The center asks each individual i to report his/her marginal rate of substitution between the public good and the private good used as a numéraire:
$$\pi_i(x, y_i) = u_{ix}(x, y_i)/u_{iy}(x, y_i).$$
Definition 1 An allocation z is feasible if and only if
$$z \in Z = \Big\{(x, y_1, \dots, y_N) \in \mathbb{R}_+^{N+1} \,\Big|\, \sum_{i \in N} y_i + G(x) = \sum_{i \in N} \omega_i\Big\}.$$
Definition 2 An allocation z is individually rational if and only if
$$u_i(x, y_i) \geq u_i(0, \omega_i), \quad \forall i \in N.$$
Definition 3 A Pareto optimum for this economy is a feasible allocation $z^* \in Z$ such that there exists no feasible allocation z with
$$u_i(x, y_i) \geq u_i(x^*, y_i^*), \quad \forall i \in N, \qquad u_j(x, y_j) > u_j(x^*, y_j^*) \quad \text{for some } j \in N.$$

These assumptions and definitions altogether give us conditions for Pareto optimality in our economy.

Lemma 1 Under Assumptions 1-3, the necessary and sufficient conditions for an allocation to be Pareto optimal are
$$\sum_{i \in N} \pi_i \leq \gamma \quad \text{and} \quad \Big(\sum_{i \in N} \pi_i - \gamma\Big)x = 0.$$

Furthermore, conventional mathematical notation is used throughout in the same manner as in Sato ([2011]). Hereafter all variables are assumed to be functions of time t; however, the argument t is often omitted unless confusion could arise. The analyses in the following sections bypass the possibility of a boundary problem at $x(t) = 0$. This is an innocuous assumption in the single public good case, because x is always increasing. The results below can be applied to the model with many public goods.

3 The class of MDP Procedures

3.1 A brief review of the MDP Procedure and its properties

Let us describe a generic model of our planning procedures for a public good and a private good as:
$$dx/dt \equiv X(t), \qquad dy_i/dt \equiv Y_i(t), \quad i \in N.$$

The MDP Procedure is the best-known member belonging to the family of the quantity-guided procedures, in which the planning center asks individual agents their MRSs between the public good and the private numéraire. Then the center revises the allocation according to the discrepancy between the reported MRSs and the MRT. The relevant information exchanged between the center and the periphery is in the form of quantities. Let $\psi(t) = (\psi_1(t), \dots, \psi_N(t)) \in \mathbb{R}_+^N$ be a vector of MRSs announced at any iteration $t \in [0, \infty)$ of the procedure. Needless to say, $\psi_i$ is not necessarily equal to $\pi_i$; thus, the incentive problem matters.

The MDP Procedure reads:
$$X(\psi(t)) = \sum_{j \in N} \psi_j(t) - \gamma(t),$$
$$Y_i(\psi(t)) = -\psi_i(t)X(\psi(t)) + \delta_i\Big\{\sum_{j \in N} \psi_j(t) - \gamma(t)\Big\}X(\psi(t)), \quad i \in N.$$

The distributional coefficients $\delta_i > 0$, $i \in N$, with $\sum_{i \in N} \delta_i = 1$, are determined by the planner prior to the beginning of the operation of the procedure. Their role is to share among individuals the 'social surplus,' $\{\sum_{j \in N} \psi_j(t) - \gamma(t)\}X(\psi(t))$, which is always positive except at the equilibrium.
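To illustrate these dynamics, the following sketch simulates the MDP Procedure under truthful reporting in a small quasi-linear economy. The utility functions $u_i = a_i \ln(1+x) + y_i$ (so that $\pi_i = a_i/(1+x)$), the linear technology $G(x) = x$, and all numbers are assumed for the example, not taken from the paper.

```python
a = [0.8, 0.7, 0.5]          # taste parameters in u_i = a_i*ln(1+x) + y_i (assumed)
delta = [0.5, 0.3, 0.2]      # distributional coefficients, sum to 1 (assumed)
omega = [10.0, 10.0, 10.0]   # private good endowments (assumed)
gamma = 1.0                  # constant MRT of the linear technology G(x) = x

x, y = 0.0, list(omega)
dt = 0.01
for _ in range(2000):        # Euler integration of the MDP dynamics
    pi = [ai / (1.0 + x) for ai in a]       # truthful MRS reports
    X = sum(pi) - gamma                     # public good revision speed
    Y = [-pi[i] * X + delta[i] * X * X for i in range(3)]
    x += X * dt
    y = [y[i] + Y[i] * dt for i in range(3)]

# The Samuelson condition sum_i pi_i = gamma pins down x* = sum(a) - 1 = 1 here,
# and feasibility sum_i y_i + G(x) = sum_i omega_i is conserved step by step.
print(x)
print(sum(y) + x - sum(omega))
```

With truthful reports, $du_i/dt = u_{iy}\delta_i X^2 \geq 0$, so each utility is nondecreasing along the path and the process stops exactly when the Samuelson condition $\sum_i \pi_i = \gamma$ holds.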

Remark 1 $\delta_i > 0$ was posited by Drèze and de la Vallée Poussin ([1971]) and followed by Roberts ([1979a, 1979b]), whereas $\delta_i \geq 0$ was assumed by Champsaur ([1976]), who advocated the notion of neutrality to be explained below.

A local incentive game associated with each iteration of the process is formally defined as the normal form game $(N, \Psi, U)$: N is the set of players; $\Psi = \times_{j \in N} \Psi_j$ is the Cartesian product of the strategy sets $\Psi_j \subset \mathbb{R}_+$; and $U = (U_1, \dots, U_N)$ is the N-tuple of payoff functions. The time derivative of consumer i's utility is such that
$$du_i/dt \equiv U_i(\psi(t)) = u_{ix}X(\psi(t)) + u_{iy}Y_i(\psi(t)) = u_{iy}\{\pi_i X(\psi(t)) + Y_i(\psi(t))\},$$

which is the payoff that each individual obtains in the local incentive game along the procedure.

The behavioral hypothesis underlying the above equations is the following myopia assumption: each player determines his/her strategy $\psi_i \in \Psi_i$ so as to maximize his/her instantaneous utility increment $U_i(\psi(t))$.

3.2 Normative conditions for the family of the MDP Procedures

The conditions listed in the Introduction are now formally stated. Let $\psi_{-i} = (\psi_1, \dots, \psi_{i-1}, \psi_{i+1}, \dots, \psi_N) \in \Psi_{-i} = \times_{j \in N \setminus \{i\}} \Psi_j$.

Condition F Feasibility:
$$\gamma(t)X(\psi(t)) + \sum_{j \in N} Y_j(\psi(t)) = 0, \quad \forall \psi \in \Psi, \forall t \in [0, \infty).$$
Condition M Monotonicity:
$$U_i(\psi(t)) = u_{iy}\{\pi_i(t)X(\psi(t)) + Y_i(\psi(t))\} \geq 0, \quad \forall \psi \in \Psi, \forall i \in N, \forall t \in [0, \infty).$$
Condition PE Pareto Efficiency:
$$X(\psi(t)) = 0 \iff \sum_{i \in N} \psi_i(t) = \gamma(t), \quad \forall \psi \in \Psi.$$
Condition LSP Local Strategy Proofness:
$$\pi_i(t)X(\pi_i(t), \psi_{-i}(t)) + Y_i(\pi_i(t), \psi_{-i}(t)) \geq \pi_i(t)X(\psi(t)) + Y_i(\psi(t)), \quad \forall \psi \in \Psi, \forall i \in N, \forall t \in [0, \infty).$$
Condition N Neutrality:
$$\forall z^* \in P_0, \ \exists \delta \in \Delta \ \text{and a solution } z(\cdot) \subset Z \ \text{with } z^* = \lim_{t \to \infty} z(t),$$

where $P_0$ is the set of individually rational Pareto optima (IRPO), Δ is the set of $\delta = (\delta_1, \dots, \delta_N)$, and $z(\cdot)$ is a solution of the procedure.

It was Champsaur ([1976]) who advocated the notion of neutrality for the MDP Procedure, and Cornet ([1977b]) generalized it by omitting two restrictive assumptions imposed by Champsaur, i.e., (i) uniqueness of the solution and (ii) concavity of the utility functions. Neutrality depends on the distributional coefficient vector δ. Remember that the role of δ is to attain any IRPO by redistributing the social surplus generated during the operation of the procedure: varying δ varies the trajectories so that every IRPO can be reached. In other words, the planning center can guide the allocation via the choice of δ; however, it cannot predetermine the final allocation to be achieved. This is a very important property for noncooperative games, since equity considerations among players matter.1

Remark 2 All conditions except PE must be fulfilled for any $t \in [0, \infty)$. PE is based on the announced values $\psi_i$, $i \in N$, which implies that the Pareto optimum reached is not necessarily the one achieved under truthful revelation of preferences for the public good. Condition LSP signifies that truth-telling is a dominant strategy. Condition N means that for every efficient point $z^* \in Z$ and any initial point $z_0 \in Z$, there exist δ and $z(t, \delta)$, a trajectory starting from $z_0$, such that $z^* = z(\infty, \delta)$.

The MDP Procedure enjoys feasibility, monotonicity, stability, neutrality, and incentive properties pertaining to minimax and Nash strategies, as was proved by Drèze and de la Vallée Poussin ([1971]) and Roberts ([1979a, 1979b]). As an algorithm, the MDP Procedure evolves in the allocation space and stops when the Samuelson conditions are met, so that the public good quantity is optimal and the private good is simultaneously allocated in a Pareto optimal way, i.e., $(x^*, y_1^*, \dots, y_N^*)$ is Pareto optimal. Malinvaud ([1971, 1972]) designed price-guided and price-quantity-guided planning procedures. Drèze ([1972]) constructed a tâtonnement process under uncertainty.

3.3 The process renamed the LSP MDP Procedure

In our context, since the planner's most important task is to achieve an optimal allocation of the public good, he or she has to collect the relevant information from the periphery so as to meet the conditions presented above. Fortunately, the necessary information is available if the procedure is locally strategy proof. It was already shown by Fujigaki and Sato ([1982]), however, that the locally strategy proof MDP Procedure cannot preserve neutrality, since $\delta_i$, $i \in N$, must be fixed at $1/N$ to accomplish LSP while keeping the other conditions fulfilled. Note that $\delta_i = 1/N \neq 0$, since N is greater than two.

Fujigaki and Sato ([1981]) presented the LSP MDP Procedure, which reads:
$$X(\psi(t)) = \Big(\sum_{j \in N} \psi_j(t) - \gamma(t)\Big)\Big|\sum_{j \in N} \psi_j(t) - \gamma(t)\Big|^{N-2},$$
$$Y_i(\psi(t)) = -\psi_i(t)X(\psi(t)) + \frac{1}{N}\Big(\sum_{j \in N} \psi_j(t) - \gamma(t)\Big)X(\psi(t)), \quad i \in N.$$
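The dominance property behind this decision function can be illustrated numerically. The sketch below (all numbers are assumed for the example) grid-searches player i's instantaneous payoff $(\pi_i - \psi_i)X + \frac{1}{N}(\sum_j \psi_j - \gamma)X$ and finds that it peaks at the truthful report, whatever the others announce.

```python
N = 3
gamma = 1.0
pi_i = 0.5     # player i's true MRS (assumed)
others = 0.7   # sum of the other players' announcements (arbitrary)

def payoff(psi_i):
    """Instantaneous payoff of i under the LSP MDP rules (u_iy normalized to 1)."""
    S = psi_i + others - gamma
    X = S * abs(S) ** (N - 2)                  # LSP public good decision function
    return (pi_i - psi_i) * X + (1.0 / N) * S * X

grid = [k / 1000.0 for k in range(2001)]       # candidate reports in [0, 2]
best = max(grid, key=payoff)
print(best)    # -> 0.5, the truthful report
```

Changing `others` leaves the maximizer at `pi_i`, which is what makes truth-telling a dominant, not merely Nash, strategy in the local game.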

Remark 3 We termed this procedure the 'Generalized MDP Procedure' in our paper ([1981]). Certainly, the public good decision function was generalized to include that of the MDP Procedure; however, the distributional vector was fixed at the above specific value. To be more precise, I hereafter call the above procedure the 'LSP MDP Procedure.' The genuine Generalized MDP Procedure is presented below.

The LSP MDP Procedure for optimally providing the public good has the following properties:

  (i) The Procedure monotonically converges to an individually rational optimum, even if agents do not report their true valuations, i.e., their MRSs for the public good.

  (ii) Revealing his/her true MRS is always a dominant strategy for each myopically behaving agent.

  (iii) The Procedure generates trajectories in the feasible allocation space similar to those of the MDP Procedure with uniform distribution of the instantaneous surplus generated at each iteration, which leaves the planning authority no influence on the final plan. Hence, the Procedure is nonneutral.

Remark 4 Property (ii) is an important one that the original MDP Process cannot enjoy except when there are only two agents with equal surplus shares, i.e., $\delta_i = 1/2$, $i = 1, 2$. The result on nonneutrality in (iii) can be remedied by designing the Generalized MDP Procedure below. See Roberts ([1979a, 1979b]) for these properties.

The following theorems are stated without proofs, which were given in Fujigaki and Sato ([1981]).

Theorem 1 The LSP MDP Procedure fulfills Conditions F, M, PE, and LSP. However, it cannot satisfy Condition N.

Theorem 2 For the LSP MDP Procedure and for any $z_0 \in Z$, there exists a unique solution $z(\cdot) : [0, \infty) \to Z$ such that $\lim_{t \to \infty} z(t)$ exists and is a Pareto optimum.

3.4 The Generalized MDP Procedures

In the local incentive game the planner can obtain the true information of individuals, since the LSP MDP Procedure induces them to reveal it. Its operation does not even require truthfulness of each player to be a Nash equilibrium strategy; it needs only aggregate correct revelation at a Nash equilibrium, as was verified in Sato ([1983]). It is easily seen from the above discussion that the LSP MDP Procedure is not neutral at all, which means that local strategy proofness impedes the attainment of neutrality. Hence, Sato ([1983]) proposed another version of neutrality together with Condition Aggregate Correct Revelation (ACR), which is much weaker than LSP.

In order to present Condition ACR, I need some notation. $\phi_i$ is a Nash equilibrium strategy given by Roberts ([1979a, 1979b]) as
$$\phi_i = \pi_i - \frac{1 - 2\delta_i}{N - 1}\Big(\sum_{j \in N} \pi_j - \gamma\Big), \quad i \in N.$$
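As an illustrative check (all numbers assumed), the sketch below verifies that the profile $\phi$ given by this formula is a Nash equilibrium of the local MDP game with payoffs $(\pi_i - \psi_i)X + \delta_i X^2$, $X = \sum_j \psi_j - \gamma$: each $\phi_i$ is a best reply to $\phi_{-i}$.

```python
N = 3
gamma = 1.0
pi = [0.6, 0.5, 0.4]        # true MRSs (assumed)
delta = [0.5, 0.3, 0.2]     # surplus shares, sum to 1 (assumed)

S = sum(pi) - gamma
phi = [pi[i] - (1 - 2 * delta[i]) / (N - 1) * S for i in range(N)]  # Roberts

def payoff(i, psi_i):
    """i's local MDP payoff when deviating to psi_i against phi_{-i}."""
    X = psi_i + sum(phi) - phi[i] - gamma   # replace i's report in the sum
    return (pi[i] - psi_i) * X + delta[i] * X * X

grid = [k / 1000.0 for k in range(1001)]
for i in range(N):
    best = max(grid, key=lambda s: payoff(i, s))
    assert abs(best - phi[i]) < 1e-9        # phi_i is a best reply to phi_{-i}
print([round(p, 3) for p in phi])           # -> [0.6, 0.4, 0.25]
```

Here $\sum_i \phi_i = 1.25 < 1.5 = \sum_i \pi_i$, so the MDP Procedure itself fails aggregate truthfulness; that is exactly what Condition ACR demands of a procedure.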

Let $\pi = (\pi_1, \dots, \pi_N)$ be the vector of MRSs for the public good and Π be its set. The condition can be stated in our context as follows:

Condition ACR Aggregate Correct Revelation:
$$\sum_{i \in N} \phi_i(\pi(t)) = \sum_{i \in N} \pi_i(t), \quad \forall \pi \in \Pi, \forall t \in [0, \infty).$$

Remark 5 Condition ACR means that the sum of the Nash equilibrium strategies $\phi_i$, $i \in N$, always coincides with the aggregate value of the correct MRSs. Clearly, ACR only requires truthfulness in the aggregate.

I also need the following two conditions. Let $\rho : \mathbb{R}_+^N \to \mathbb{R}_+^N$ be a permutation function and $T_i(\psi)$ be a transfer of the private good to agent i.

Condition TA Transfer Anonymity:
$$T_i(\psi) = T_i(\rho(\psi)), \quad \forall \psi \in \Psi, \forall i \in N.$$

Remark 6 Condition TA says that agent i's transfer of the private good is invariant under permutation of its arguments, i.e., the order of strategies does not affect the value of $T_i(\psi)$, $i \in N$. Sato ([1983]) proved that $T_i(\psi) = T_i(\sum_{j \in N} \psi_j - \gamma)$, which is an example of a transfer rule.

Condition TN Transfer Neutrality:
$$\forall z^* \in P_0, \ \exists T \in \Omega \ \text{and a solution } z(\cdot) \subset Z \ \text{with } z^* = \lim_{t \to \infty} z(t),$$

where $T = (T_1, \dots, T_N)$ is a vector of transfer functions and Ω is its set.

Now I enumerate the properties of the Generalized MDP Procedures just renamed supra. Proofs are given in Sato ([1983]) and are omitted here.

Theorem 3 The Generalized MDP Procedures fulfill Conditions ACR, F, M, PE, TA, and TN. Conversely, any planning process satisfying these conditions is characterized by:
$$X(\psi(t)) = \Big(\sum_{j \in N} \psi_j(t) - \gamma(t)\Big)\Big|\sum_{j \in N} \psi_j(t) - \gamma(t)\Big|^{N-2},$$
$$Y_i(\psi(t)) = -\psi_i(t)X(\psi(t)) + T_i\Big(\sum_{j \in N} \psi_j(t) - \gamma(t)\Big), \quad i \in N.$$
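As a small sanity check on the characterized form (parameters assumed), the following sketch draws random announcement profiles and confirms Condition F for the transfer rule $T_i(S) = \delta_i S X$: the budget identity $\gamma X + \sum_j Y_j = 0$ holds for arbitrary announcements.

```python
import random

random.seed(0)
N, gamma = 4, 1.0
delta = [0.4, 0.3, 0.2, 0.1]   # surplus shares, sum to 1 (assumed)

for _ in range(100):
    psi = [random.uniform(0.0, 1.0) for _ in range(N)]   # arbitrary reports
    S = sum(psi) - gamma
    X = S * abs(S) ** (N - 2)                # characterized decision function
    Y = [-psi[i] * X + delta[i] * S * X for i in range(N)]
    balance = gamma * X + sum(Y)             # Condition F: should vanish
    assert abs(balance) < 1e-10
print("Condition F holds on all sampled profiles")
```

The identity is structural: $\sum_j Y_j = -(S + \gamma)X + SX = -\gamma X$, so feasibility does not depend on which profile is announced, only on $\sum_i \delta_i = 1$.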

Theorem 4 Revealing preferences truthfully in any Generalized MDP Procedure is a minimax strategy for any $i \in N$. It is the only minimax strategy for any $i \in N$ when $x > 0$.

Theorem 5 $\phi_i = \pi_i$ holds for any $i \in N$ at the equilibrium of the Generalized MDP Procedures.

Theorem 6 Under Assumptions 1-3, for every individually rational Pareto optimum $z^*$, there exist δ and a trajectory $z(\cdot) : [0, \infty) \to Z$ of the differential equation defining the Generalized MDP Procedures such that, $\forall i \in N$, $u_i(z^*) = \lim_{t \to \infty} u_i(x(t), y_i(t))$.

Keeping the same nonlinear public good decision function as derived from Condition LSP, Sato ([1983]) could state the above characterization theorem. In the sequel, the Generalized MDP Procedure with $T_i(\sum_{j \in N} \psi_j - \gamma) = \delta_i(\sum_{j \in N} \psi_j - \gamma)X(\psi)$ is employed. Via a pertinent choice of $T_i(\cdot)$ we obtain the family of the Generalized MDP Procedures, which includes the MDP and the LSP MDP Procedures as special members.

Remark 7 Green and Laffont ([1979]), Laffont ([1979]), and Champsaur and Rochet ([1983]) gave a systematic study of the family of planning procedures that are asymptotically efficient and locally strategy proof. We now know that the class of LSP procedures is large enough to include the Bowen Procedure, the Generalized Wicksell Procedure, and the LSP MDP Procedure as special members, as classified by Rochet ([1982]) and Sato ([2011]). Sato ([2010]) presented a discrete version of the procedure developed by Green and Laffont, which was the first LSP procedure with pivotal agents.

The next section provides a positive result on neutrality, in contrast to Champsaur and Laroque ([1981, 1982]) and Laroque and Rochet ([1983]), who concluded nonneutrality of the intertemporal MDP Procedures with and without public goods.

4 The piecewise linearized MDP Procedure

4.1 A description of the piecewise linearized MDP Procedure

In the Procedure below, the planner provides an optimal quantity of the public good by revising its quantity at discrete dates $t \in \{\tau_1, \dots, \tau_s, \tau_{s+1}, \dots, D\} \equiv T$, the set of discrete dates. The length of the time horizon D, which can take the value ∞, is predetermined by the planner. In order to decide in what direction the allocation should be changed, the planner proposes a tentative feasible allocation $z(0) = (\chi(0), \omega_1, \dots, \omega_N)$ with a tentative step-size of the public good, $\chi(0)$, at the initial time 0, at which each agent is asked to report his/her true MRS, $\pi_i(z(0))$, $i \in N$, as locally and privately held information. At each discrete date $\tau_s$ the planner can easily calculate the sum of the announced MRSs to change the allocation at the next date $\tau_{s+1}$. It is supposed that the planner can obtain the exact value of the MRT.
Assume also that the agents have rational, though bounded, expectations over the time interval: they not only have complete knowledge of the planning rules of the procedure defined below, but can also at least predict the allocation to be attained at the beginning of the next interval. Champsaur and Laroque ([1982], p.326) wrote that ‘[s]uch a situation of limited intertemporal consistency is similar to the discrete procedures.’ Champsaur and Laroque ([1981, 1982]) took into consideration the effects of the agents’ strategies upon the final allocation. Agents in the private good economy of Fujigaki ([1981]) are assumed to maximize their utility anticipated at the end of each time interval. So I extend his model to involve a public good in order to examine nonmyopic behavior on the part of strategic players, as in Champsaur and Laroque ([1981]).

To formulate our planning rules, let us divide the time horizon $[0, D]$ equally into D intervals $[\tau_s, \tau_{s+1})$. Since the procedure is applied repeatedly, interval by interval, an allocation at any point of each interval is given, for any $\tau_s \in T$ and any $t \in [\tau_s, \tau_{s+1})$, by
$$
\begin{cases}
x(t) = \int_0^{\tau_s} X^{\alpha}(t)\,dt + (t - \tau_s)X^{\alpha}(t), \\
y_i(t) = \int_0^{\tau_s} Y_i^{\alpha}(t)\,dt + (t - \tau_s)Y_i^{\alpha}(t), \quad \forall i \in N,
\end{cases}
$$

where $X^{\alpha}(t)$ and $Y_i^{\alpha}(t)$ are the average speeds of adjustment over the interval $[\tau_s, \tau_{s+1})$, defined by the Generalized MDP Procedure with $T_i$ specified above.

Hence, the trajectories are piecewise linear, and the variable step-sizes for each $t \in [\tau_s, \tau_{s+1})$ in our procedure are:
$$
\begin{cases}
\chi(t) = x(t) - x(\tau_s) = (t - \tau_s)X^{\alpha}(t), \\
\upsilon_i(t) = y_i(t) - y_i(\tau_s) = (t - \tau_s)Y_i^{\alpha}(t), \quad \forall i \in N.
\end{cases}
$$
For any $\tau_s \in T$ and any $t \in [\tau_s, \tau_{s+1})$, our piecewise linearized procedure can be defined as:
$$
\begin{cases}
x(t) = x(\tau_s) + \chi(t), \\
y_i(t) = y_i(\tau_s) + \upsilon_i(t), \quad \forall i \in N.
\end{cases}
$$

Note that the planner has to observe only the step-size $\chi(t)$, not each $\upsilon_i(t)$, since the former determines the latter. Let us call this piecewise linearized procedure the λMDP Procedure; it serves as the rule of a piecewise intertemporal incentive game.
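To make these mechanics concrete, the following sketch simulates one piecewise linear trajectory. It is only an illustration under stated assumptions: a hypothetical three-agent quasilinear economy, truthful reports, and an MDP-type adjustment rule $X^{\alpha} = \sum_{j} \psi_j - \gamma$, $Y_i^{\alpha} = -\psi_i X^{\alpha} + \delta_i (X^{\alpha})^2$ with $\sum_i \delta_i = 1$ (the form that reappears in the proof of Theorem 7), not the paper's full specification.

```python
import numpy as np

# Toy quasilinear economy (all values hypothetical):
# u_i(x, y_i) = a_i*x - x**2/2 + y_i, so agent i's true MRS is
# pi_i(x) = a_i - x; gamma is a constant MRT.
a = np.array([4.0, 3.0, 2.0])        # illustrative taste parameters
delta = np.array([0.5, 0.3, 0.2])    # distributional weights, summing to 1
gamma = 6.0

def adjustment(psi):
    """Average speeds over one interval under the MDP-type rule."""
    X = psi.sum() - gamma              # X^alpha: speed of the public good
    Y = -psi * X + delta * X ** 2      # Y_i^alpha: private good speeds
    return X, Y

h = 0.1                                # interval length tau_{s+1} - tau_s
x, y = 0.0, np.zeros(3)                # tentative initial allocation z(0)
for s in range(200):                   # revision dates tau_1, tau_2, ...
    psi = a - x                        # truthful reports at tau_s
    X, Y = adjustment(psi)
    # Feasibility of the speeds: sum_i Y_i^alpha + gamma * X^alpha = 0.
    assert abs(Y.sum() + gamma * X) < 1e-9
    x += h * X                         # x(t) = x(tau_s) + chi(t)
    y += h * Y                         # y_i(t) = y_i(tau_s) + upsilon_i(t)

# The trajectory approaches the Samuelson level where sum_i pi_i(x) = gamma,
# i.e., 9 - 3x = 6, so x* = 1.
print(round(x, 6))                     # -> 1.0
```

The planner here only ever uses the reported MRS profile at each revision date, in line with the local informational requirement of the procedure.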

4.2 Normative conditions for the λ MDP Procedure

The following new conditions are defined for our λ MDP Procedure.

Condition PIF Piecewise Intertemporal Feasibility:
$$
\sum_{i \in N} Y_i^{\alpha}(\psi(t)) + \gamma(t)X^{\alpha}(t) = 0, \quad \forall \psi \in \Psi, \forall \tau_s \in T, \forall t \in [\tau_s, \tau_{s+1}).
$$
Condition PIM Piecewise Intertemporal Monotonicity:
$$
\dot{U}_i(\psi(t)) = u_{iy}\bigl\{\pi_i(t)X^{\alpha}(\psi(t)) + Y_i^{\alpha}(\psi(t))\bigr\} \geq 0, \quad \forall i \in N, \forall \psi \in \Psi, \forall \tau_s \in T, \forall t \in [\tau_s, \tau_{s+1}).
$$
Condition PISP Piecewise Intertemporal Strategy Proofness:
$$
\pi_i(t)X^{\alpha}(\pi_i(t), \psi_{-i}(t)) + Y_i^{\alpha}(\pi_i(t), \psi_{-i}(t)) \geq \pi_i(t)X^{\alpha}(\psi(t)) + Y_i^{\alpha}(\psi(t)), \quad \forall i \in N, \forall \psi \in \Psi, \forall \psi_{-i} \in \Psi_{-i}, \forall \tau_s \in T, \forall t \in [\tau_s, \tau_{s+1}).
$$

Condition PISP may also be called Stepwise Strategy Proofness.
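As a quick consistency check, Condition PIF holds identically under the MDP-type specification that reappears in the proof of Theorem 7, namely $X^{\alpha} = \sum_{j \in N}\psi_j - \gamma$ and $Y_i^{\alpha} = -\psi_i X^{\alpha} + \delta_i (X^{\alpha})^2$ with $\sum_{i \in N}\delta_i = 1$ (a standard transfer rule assumed here for illustration):

```latex
\sum_{i\in N} Y_i^{\alpha} + \gamma X^{\alpha}
  = -\Bigl(\sum_{i\in N}\psi_i\Bigr)X^{\alpha}
    + \Bigl(\sum_{i\in N}\delta_i\Bigr)(X^{\alpha})^{2}
    + \gamma X^{\alpha}
  = X^{\alpha}\Bigl(X^{\alpha} - \sum_{i\in N}\psi_i + \gamma\Bigr)
  = 0 .
```

Whatever the announced profile $\psi$, the value of the private good collected from the agents exactly covers the cost $\gamma X^{\alpha}$ of adjusting the public good.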

4.3 The λ MDP Procedure as a piecewise intertemporal incentive game form

To examine the incentive properties of the procedure, the assumption of truthful revelation of preferences is dropped. Each player’s announcement, $\psi_i$, is not necessarily equal to his/her true MRS, $\pi_i$. Thus, $\pi_i$ is replaced with $\psi_i$ in the dynamic system of the λMDP Procedure. The nonmyopia assumption is introduced for our procedure, since a discrete-time framework is a weaker representation of myopia. The procedure and the game are repeated for each interval in our framework.

What I associate with the above process, instead of the intertemporal game used by Champsaur and Laroque ([1981]), is, so to speak, a ‘bounded’ or ‘piecewise’ intertemporal game, since I divide the time interval in the model. A piecewise intertemporal game played at the discrete dates of each time interval of the procedure is formally defined as the normal form game $(N, \Psi, V)$: N is the set of players; $\Psi = \times_{i \in N} \Psi_i \subset \mathbb{R}_+$ is the Cartesian product of the sets $\Psi_i$ of player i’s strategies; and $V = (V_1(\tau_{s+1}), \ldots, V_n(\tau_{s+1}))$ is the n-tuple of payoff functions at the end of the current time interval $[\tau_s, \tau_{s+1})$, where $V_i(\tau_{s+1}) = u_i(x(\tau_{s+1}), y_i(\tau_{s+1}))$, $\forall i \in N$.

The maximization problem for any player is as follows: for any $\tau_{s+1} \in T$ and $t \in [\tau_s, \tau_{s+1})$,
$$
\max V_i(\tau_{s+1}) \quad \text{s.t.} \quad x(t) = x(\tau_s) + \chi(t) \ \text{and} \ y_i(t) = y_i(\tau_s) + \upsilon_i(t).
$$

Let us give a definition here.

Definition 4 The best reply strategy for each individual i in the piecewise intertemporal game $(N, \Psi, V)$ is the strategy $\psi_i^*(\tau_s) \in \Psi_i$ such that for any $\tau_s \in T$:
$$
V_i(\psi_i^*(\tau_s), \psi_{-i}(\tau_s)) \geq V_i(\psi_i(\tau_s), \psi_{-i}(\tau_s)), \quad \forall \psi_i \in \Psi_i, \forall \psi_{-i} \in \Psi_{-i}.
$$

Remark 8 Condition PISP is satisfied if truth-telling coincides with the best reply strategy in the piecewise intertemporal game. The behavioral hypothesis underlying the above inequality is the nonmyopia assumption: each player determines his/her best reply strategy at the beginning of each interval $[\tau_s, \tau_{s+1})$ in order to maximize his/her payoff, $V_i(\tau_{s+1})$, at the beginning of the next interval $[\tau_{s+1}, \tau_{s+2})$.

Nonmyopia Assumption Every player is assumed to behave nonmyopically: viz., when determining his/her strategy in a piecewise intertemporal game, each player maximizes not the time derivative of his/her utility function but the utility increment based on the allocation that he/she foresees obtaining at the end of the current time interval.

Remark 9 This behavioral hypothesis may be justified by considering that the future development of an allocation cannot be predicted exactly. Hence, every player has to make a piecewise decision under uncertainty. Players are, rather, assumed to forecast at least what will happen at the next discrete date.

Now I examine the properties of the λMDP Procedure just defined. This paper confines itself to PISP, instead of LSP or Strong Local Individual Incentive Compatibility (SLIIC).

Suppose the λMDP Procedure is not at an equilibrium at $\tau_{s+1}$; then the following theorems hold. Proofs are postponed to the Appendix.

The following notation is used for each $i \in N$ and for any $\tau_{s+1} \in T$:
$$
U_i^+(\tau_{s+1}) = \lim_{t \to \tau_{s+1}^+} \frac{u_i(t) - u_i(\tau_s)}{t - \tau_s}
$$
and
$$
U_i^-(\tau_{s+1}) = \lim_{t \to \tau_{s+1}^-} \frac{u_i(\tau_{s+1}) - u_i(t)}{\tau_{s+1} - t}.
$$

Theorem 7 For each $i \in N$ and for any $\tau_{s+1} \in T$, $\dot{U}_i(\tau_{s+1}) \geq 0$.

Theorem 8 For each $i \in N$ and for any $\tau_{s+1} \in T$,
$$
U_i^+(\tau_s) > \frac{u_i(t_1) - u_i(t_0)}{\tau_{s+1} - \tau_s} > U_i^-(\tau_{s+1}) > 0.
$$

Therefore, the average speed of each individual’s utility increment is positive over the interval $[\tau_s, \tau_{s+1})$ for any revision date $\tau_s \in T$.

The next theorem states that utility is monotonically nondecreasing over the interval $[\tau_s, \tau_{s+1})$ for any $\tau_s \in T$.

Theorem 9 For each $i \in N$ and for any $t \in [\tau_s, \tau_{s+1})$, $\dot{U}_i(t) > 0$.

Theorem 10 For each $i \in N$, $\psi_i(\tau_s) = \pi_i(\tau_{s+1})$ is player i’s best reply strategy at date $\tau_s$, which maximizes $V_i(\tau_{s+1})$ in the piecewise intertemporal incentive game associated with the λMDP Procedure.

That is to say, truthful revelation for the public good is the best reply strategy in the piecewise intertemporal game, and it is the only best reply strategy when $x > 0$.

Remark 10 Theorem 10 means that the best reply strategy at $\tau_s$ for each player is to reveal his/her true MRS for the public good to be provided at date $\tau_{s+1}$, i.e., $\pi_i(\tau_{s+1})$, not $\pi_i(\tau_s)$. For each time interval $[\tau_s, \tau_{s+1})$, the λMDP Procedure is piecewise intertemporally strategy proof in the sense that each player’s MRS announced at date $\tau_s$ coincides with the true one corresponding to the allocation anticipated by that player at the end of the current interval $[\tau_s, \tau_{s+1})$. The crucial point is that each player’s best reply strategy, $\psi_i(\tau_s)$, is not $\pi_i(\tau_s)$ but $\pi_i(\tau_{s+1})$. This result comes from the difference between the myopia and nonmyopia assumptions, i.e., the length of the players’ time horizon matters.

Remark 11 The myopia assumption is common in local games associated with both continuous and discrete planning procedures such as the MDP and the CDH (Champsaur-Drèze-Henry) Procedures. See Henry ([1979]) and Schoumaker ([1977, 1979]) for details on this point. Also, nontâtonnement procedures are of concern in real economic life. Hence, in view of obvious practical relevance, the discrete process would ideally be constructed in a nontâtonnement setting. However, I have confined myself to developing a piecewise linearized process as an approximation. Under the nonmyopia assumption, sincere revelation of preference for the public good at any discrete date of the λMDP Process is the best reply strategy for each player.

Hence, I am now in a position to present the theorem.

Theorem 11 Under Assumptions 1-3, the λMDP Procedure satisfies PIF, PIM, TN, PE, and PISP.

Remark 12 Our λMDP Procedure can keep neutrality, in contrast with Champsaur and Laroque’s ([1981]) result on the nonneutrality of procedures with intertemporal strategic behaviors of agents. This possibility stems from Sato ([1983]), who proposed aggregate correct revelation as a condition replaceable with local strategy proofness and constructed a planning procedure that simultaneously satisfies three desiderata: efficiency, neutrality, and aggregate correct revelation.

Let $T_1 = \{\tau_1\}$ be the set of dates for revising the allocation by the center. When $\tau_1$ tends to infinity, the MRSs revealed by the players at date 0 converge to those corresponding to a Pareto optimal allocation, $z(\tau_1)$, achieved via the procedure. Theorem 11 therefore yields another theorem, whose proof is obvious and thus omitted here.

Theorem 12 When $\tau_1$ tends to infinity, any trajectory of the λMDP Process converges towards a Pareto optimal allocation. Furthermore, it is intertemporally strategy proof in the sense of Champsaur and Laroque.

5 Literature on myopia and discreteness

5.1 A discussion on discreteness

Here I present some comments on discrete procedures. A proper discrete procedure could be constructed via the decreasing pitch proposed by Champsaur, Drèze and Henry ([1977]), but I have attempted a different approach, whose discussions can be extended to a piecewise linearized procedure. The above dynamic system can be generalized to involve many public goods, the amounts of which can be adjusted simultaneously at each iteration. This differs from Champsaur, Drèze, and Henry ([1977]), in which the quantity of only one public good can be revised at each discrete date.

Incidentally, little is known about the speed of convergence of the procedures, particularly when they are formulated in discrete versions, which are the only realistic ones from the standpoint of actual planning practice. The continuous version implies that players’ responses are transmitted continuously to the planning center, with no computation cost or adjustment lag.2 The technical advantages of the differential approach for simplicity of presentation are, however, well known. As Malinvaud ([1970-1971], p.192) rightly pointed out, a continuous formulation removes the difficult question of choosing an adjustment speed. Hence, the continuous version is justified mainly by convenience. Moreover, a continuous formulation might be considered an approximation to a discrete representation.3

Casual observation suggests that discrete procedures are more realistic than continuous ones, and that revisions of resource allocation are essentially made in discrete time. But most planning procedures discussed in the literature are formulated in continuous time because of the difficulties involved in using the discrete version. As indicated by Malinvaud ([1967]) and others, this dilemma reflects a traditional technical difficulty: if one selects a pitch large enough to obtain rapid convergence, one runs the risk of no convergence; on the other hand, if one chooses a pitch small enough to ensure exact convergence, convergence may be considerably delayed.

Discrete versions of the MDP Procedure have been presented by several authors, in different strands of related literature. The first strand, taken by Champsaur, Drèze, and Henry ([1977]), is characterized by a decreasing adjustment pitch (or step-size) as a parameter: the pitch is kept constant as long as it allows progress in efficiency, and halved as soon as progress becomes impossible. The above-mentioned dilemma associated with discrete procedures is thereby overcome.4

Discussions of incentives in discrete-time MDP Procedures are given in Henry ([1979]) and Schoumaker ([1976, 1977, 1979]). They analyzed players’ strategic behaviors in discrete MDP Processes by dropping the assumption of truthful revelation. Their result is that these procedures still converge to a Pareto optimum even under strategic preference revelation à la Nash.

Approaching the same issue from another angle, Green and Schoumaker ([1980]) presented a discrete MDP Process with a flexible step-size at each iteration, and studied its incentive properties in a game-theoretical framework. Their analysis dispensed with the ‘strategic indifference’ assumption imposed by Henry ([1979]) and Schoumaker ([1979]), i.e., that players choose truth-telling if the resulting outcome would be indifferent. Their discrete-time procedure, however, requires reporting global information on consumers’ preferences. More precisely, consumers’ marginal willingness-to-pay functions are constrained to be compatible with, and part of, their utility functions. Essentially, a Nash equilibrium concept is employed. Although their ideas are interesting, the informational burden in their model is much greater than in other approaches.

Mas-Colell ([1980]) proposed a voluntary financing process, which is a global analog of the MDP Procedure.5 He obtained characterizations of Pareto optimal and core states in terms of valuation functions; the incentive problem was not considered. Chander ([1985]) presented a discrete version of the MDP Procedure and claimed that his system is the most informationally efficient allocation mechanism, though without any consideration of its incentive property. Otsuki ([1978]) employed the feasible direction method in the theory of discrete planning and applied it to the MDP and Heal Procedures by devising implementable algorithms. Again, the problem of incentives was not treated in his paper.

Roberts ([1987]) tackled another difficult issue which is not yet fully settled: he attempted to relax both the myopia and complete-information assumptions in the simplest version of the iterative planning framework due to Champsaur, Drèze, and Henry ([1977]). In his procedure the agents are initially imperfectly informed but gradually learn about each other in order to predict the future behaviors of others. He discussed the Bayesian incentive compatibility of his procedure, and gave a numerical example of a condominium as a public good, whose entrance is redecorated by the members using the iterative process.6

Allard et al. ([1980]) proposed definitions of temporary and intertemporal Pareto optimality. In their paper individuals are represented by Roy-consistent expectation functions induced by their learning processes. To explain their concept of expectation functions, they referred to a pure exchange MDP Process, in which the planner asks agents to evaluate present goods and to send him/her their demands. To value present goods, agents must forecast future quantities. Thus, Allard et al. ([1980]) assumed that consumers are endowed with expectation functions.

As Coughlin and Howe ([1989]) pointed out, none of the above discrete procedures satisfies a criterion of intertemporal Pareto optimality. According to them, only the process devised by Green and Schoumaker ([1980]) suggested a possible avenue to that criterion. I have thus presented a different version of Green and Schoumaker’s ([1980]) discrete process, with variable step-sizes and only a local informational requirement.

5.2 A digression and justification of myopia

In the literature on the problem of incentives in planning procedures, myopic strategic behavior has prevailed. Many papers imposed this behavioral hypothesis, i.e., myopia, on which the foregoing discussions crucially depended, spawning numerous desirable results in connection with the family of MDP Procedures.

The aim of this paper has been to examine the consequences of dropping the assumption that individuals choose their strategies to maximize an instantaneous change in utility function at each iteration along the procedure. Instead of the myopic behavior, I have assumed that the agents select their announcements concerning their marginal rates of substitution to maximize their utility increment to be obtained at the end of each time interval.

Also verified is that the λMDP Procedure can always keep neutrality, in contrast to Champsaur and Laroque ([1981, 1982]) and Laroque and Rochet ([1983]), who analyzed the properties of the MDP Procedure under the nonmyopia assumption. They treated the case where each individual attempts to forecast the influence of his/her announcements to the planning center over a predetermined time horizon, and optimizes his/her responses accordingly. It is proved that, if the time horizon is long enough, any noncooperative equilibrium of the intertemporal game attains an approximately Pareto optimal allocation. But at such an equilibrium, the influence of the center on the final allocation is negligible, which entails nonneutrality of the procedure. Their attempt was to bridge the gap between the local instantaneous game and the global game, as pointed out by Hammond ([1979]). Our aim has been, however, to bridge the gap between the local game and the intertemporal game, by constructing a compromise between continuous and discrete procedures, i.e., the piecewise linearized procedure. By letting the length of the discrete periods shrink to zero (noting that $\chi$, and hence $\upsilon_i$, $\forall i \in N$, would also shrink to zero), we would approach the continuous path.
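The limiting observation just made can be illustrated with a small numerical sketch. It assumes a hypothetical quasilinear economy in which the sum of the true MRSs is $9 - 3x$ and $\gamma = 6$, so that the continuous MDP path solves $dx/dt = 3 - 3x$ with $x(0) = 0$, i.e., $x(t) = 1 - e^{-3t}$; the piecewise linear trajectory is then its Euler approximation, and the gap shrinks with the interval length:

```python
import math

def piecewise_x(h, t_end=1.0):
    """lambda-MDP public good level at t_end with interval length h."""
    x = 0.0
    for _ in range(round(t_end / h)):
        # x(tau_{s+1}) = x(tau_s) + h * X^alpha, with X^alpha = 3 - 3x here.
        x += h * (3.0 - 3.0 * x)
    return x

exact = 1.0 - math.exp(-3.0)          # continuous-time MDP path at t = 1
errors = [abs(piecewise_x(h) - exact) for h in (0.1, 0.01, 0.001)]
assert errors[0] > errors[1] > errors[2]   # error shrinks with the interval
print([round(e, 5) for e in errors])
```

The numbers themselves carry no economic content; the point is only that halting the revision of speeds at ever-shorter intervals recovers the continuous procedure.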

Incidentally, how can we justify the myopia assumption, which is a crucial underpinning of many fruitful results in the theory of incentives, especially in planning procedures for optimally allocating public goods? Indeed, in reality people seem to behave myopically rather than farsightedly. Matthews ([1982], p.638) wrote that “myopia may be regarded as a tractable approximation, a result of ‘bounded rationality’.”

Laffont ([1985], pp.19-20) justified myopia as follows: the participants in a planning procedure always believe that it is the last step of the procedure or that they will not enter the complexities of strategic behavior for a longer time horizon. In the MDP Procedure, a correct revelation of preferences is a maximin strategy in the global game, as was pointed out by Drèze. As the procedure is monotone in utility functions, the worst that could happen is the termination of the procedure. In other words, the global game reduces to the local game, in which the maximin strategy consists of correctly revealing preferences. Conversely, choosing a myopic strategy reduces to adopting a maximin approach to the global game. It would be logical, however, to adopt a maximin strategy in the local game, too.

Finally, let me introduce two justifications of myopia by Moulin ([1984], pp.131-132). The first one is to consider an isolated player who finds him/herself so small that his/her proper choice of strategies influences the others’ choice in a negligible way. The other, which completes the first, is complete ignorance where no player knows his/her opponents’ utility functions; a player knows that he/she is unable to predict in what direction the change will occur.

The method of Truchon ([1984]) is to examine a nonmyopic incentive game, in which each agent’s payoff is the utility at the final allocation. Unlike the others, Truchon introduced a ‘threshold’ into his model to analyze agents’ strategic behavior. T. Sato ([1983]) also investigated how the MDP Procedure works when players with individual expectation functions nonmyopically play a sequential game, by letting them forecast which allocations would be proposed over the period when they follow a certain path of strategies.

6 Final remarks

The present paper has formulated a piecewise linearized version of the Generalized MDP Procedure and analyzed its properties. In doing so, I have extended Fujigaki’s private good economy to involve a public good, and have localized the intertemporal game à la Champsaur and Laroque ([1981, 1982]) by dividing the time interval and applying the Generalized MDP Procedure to each interval. In the piecewise intertemporal game associated with any interval generated by our procedure, each player’s payoff is the utility increment at the initial point of the next interval. Variable step-sizes are used to formalize the piecewise linearized procedure, which shares desirable properties with continuous procedures. This process involves partitioning the planning horizon into a specific sequence of time intervals. I have called this process the λMDP Procedure and shown that it can simultaneously achieve efficiency and piecewise intertemporal strategy proofness. That is, it converges to a Pareto optimum, and the best reply strategy of each player at each date $\tau_s$ is to declare his/her anticipated MRS at the end of the current time interval $[\tau_s, \tau_{s+1})$: i.e., $\psi_i(\tau_s) = \pi_i(\tau_{s+1})$. The λMDP Procedure can also preserve transfer neutrality.

Recognizing the difficulties arising from individuals’ ability to manipulate private information, the literature has verified that this incentive problem can be dealt with by planning procedures requiring a continuous revelation of information, provided that agents behave myopically. In contrast, if individuals are farsighted, the traditional impossibility results occur, i.e., incentive compatibility is incompatible with efficiency, as pointed out by Champsaur, Laroque, and Rochet. This paper has studied an intermediate situation where agents are only asked to declare their anticipated MRSs at discrete dates, at which the direction and speed of adjustment are changed. Consequently, the associated dynamic process, named the λMDP Procedure, is piecewise linear. Individuals are assumed to take the interval between two discrete dates as their time horizon. Their behavior is hence intermediate between myopia and farsightedness. The idea of an intermediate time horizon for agents’ manipulations of information is more natural and more realistic than either myopia or farsightedness.

Appendix

Proof of Theorem 7 Our λ MDP Procedure gives
$$u_i(\tau_{s+1}) = u_i\bigl(\alpha_i(\tau_s) + (\tau_{s+1} - \tau_s)A_i\bigr),$$
where $\alpha_i(\tau_s) = (x(\tau_s), y_i(\tau_s))$ and $A_i \equiv d\alpha_i/dt = (\dot{x}(\tau_s), \dot{y}_i(\tau_s))$. Let $\pi_i(\tau_{s+1}) = u_{ix}(\tau_{s+1})/u_{iy}(\tau_{s+1})$. Under truth-telling, we have
$$\dot{u}_i(\tau_{s+1}) = u_{ix}(\tau_{s+1})\dot{x}(\tau_s) + u_{iy}(\tau_{s+1})\dot{y}_i(\tau_s) = u_{iy}(\tau_{s+1})\{\pi_i(\tau_{s+1})\dot{x}(\tau_s) + \dot{y}_i(\tau_s)\} = u_{iy}(\tau_{s+1})\,\delta_i\Bigl\{\sum_{j \in N}\psi_j(\tau_s) - \gamma(\tau_s)\Bigr\}\dot{x}(\tau_s) \geq 0,$$

since $\dot{x}(\tau_s)$ preserves the sign of $\sum_{j \in N}\psi_j(\tau_s) - \gamma(\tau_s)$, so their product is nonnegative. □
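The sign argument can be checked with entirely hypothetical numbers: taking $\dot{x}$ to be the excess $\sum_j \psi_j - \gamma$ itself (the simplest sign-preserving rule), the increment $u_{iy}\,\delta_i\{\sum_j \psi_j - \gamma\}\dot{x}$ is nonnegative whether the sum of reported MRSs is below, at, or above marginal cost.

```python
# Numerical sign check for the Theorem 7 increment (hypothetical values).
gamma = 1.0        # marginal cost of the public good
delta_i = 0.25     # agent i's surplus share
u_iy = 2.0         # positive marginal utility of the private good
increments = []
for psi_sum in (0.4, 1.0, 1.7):      # deficit, equilibrium, excess
    excess = psi_sum - gamma         # sum_j psi_j(tau_s) - gamma(tau_s)
    xdot = excess                    # sign-preserving adjustment of x
    increments.append(u_iy * delta_i * excess * xdot)
assert all(v >= 0.0 for v in increments)   # nonnegative in every case
```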

Proof of Theorem 8 Because of the strict concavity of utility functions, it follows that for any $\alpha_0, \alpha_1 \in \mathbb{R}^2_+$ and for any real number $\beta \in [0, 1]$,
$$u_i\{(1-\beta)\alpha_0 + \beta\alpha_1\} \geq (1-\beta)u_i(\alpha_0) + \beta u_i(\alpha_1).$$
(1)

Since an allocation path is a line segment, if we set $\alpha_0 = \alpha(\tau_s)$ and $\alpha_1 = \alpha(\tau_{s+1})$, then any allocation given by the λMDP Procedure over the interval $[\tau_s, \tau_{s+1})$ can be obtained via the choice of $\beta$.

Denote $\beta = \beta(t) = (t - \tau_s)/(\tau_{s+1} - \tau_s)$ for each $t \in [\tau_s, \tau_{s+1})$. Thus, we have
$$(1-\beta)\alpha_0 + \beta\alpha_1 = \alpha_0 + \beta(\tau_{s+1} - \tau_s)A_i = \alpha(t).$$
In light of Eq. (1), with strict inequality for $t \in (\tau_s, \tau_{s+1})$,
$$u_i(\alpha(t)) > (1-\beta(t))u_i(\alpha_0) + \beta(t)u_i(\alpha_1).$$
(2)
Combining (2) with
$$u_i(\alpha(t)) = u_i(\tau_s) + \int_{\tau_s}^{t} \dot{u}_i(\tau)\,d\tau$$
and
$$u_i(\alpha_1) = u_i(\tau_s) + \int_{\tau_s}^{\tau_{s+1}} \dot{u}_i(\tau)\,d\tau$$
yields the expression
$$u_i(\tau_s) + \int_{\tau_s}^{t} \dot{u}_i(\tau)\,d\tau > (1-\beta(t))\,u_i(\tau_s) + \beta(t)\Bigl\{u_i(\tau_s) + \int_{\tau_s}^{\tau_{s+1}} \dot{u}_i(\tau)\,d\tau\Bigr\}.$$
Consequently, we have for any $t \in (\tau_s, \tau_{s+1})$
$$\frac{1-\beta(t)}{\beta(t)} \int_{\tau_s}^{t} \dot{u}_i(\tau)\,d\tau > \int_{t}^{\tau_{s+1}} \dot{u}_i(\tau)\,d\tau.$$
Since $(1-\beta(t))/\beta(t) = (\tau_{s+1} - t)/(t - \tau_s)$, this inequality gives, for any $t \in (\tau_s, \tau_{s+1})$,
$$\frac{1}{t - \tau_s} \int_{\tau_s}^{t} \dot{u}_i(\tau)\,d\tau > \frac{1}{\tau_{s+1} - t} \int_{t}^{\tau_{s+1}} \dot{u}_i(\tau)\,d\tau.$$
(3)
As $t$ tends to $\tau_{s+1}$, the L.H.S. of Eq. (3) approaches the average speed of utility change over the interval $[\tau_s, \tau_{s+1})$, and the R.H.S. approaches the instantaneous speed of utility change at $\tau_{s+1}$. By Theorem 7,
$$\frac{u_i(\tau_{s+1}) - u_i(\tau_s)}{\tau_{s+1} - \tau_s} > \dot{u}_i(\tau_{s+1}) > 0.$$
When $t$ tends to $\tau_s$ in Eq. (3), we get
$$\dot{u}_i^+(\tau_s) > \frac{1}{\tau_{s+1} - \tau_s} \int_{\tau_s}^{\tau_{s+1}} \dot{u}_i(\tau)\,d\tau.$$

These inequalities give us the statement of the theorem. □
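The average-versus-instantaneous speed comparison in the proof can be checked numerically for a concrete strictly concave utility. Here $u(t) = \log(1+t)$ stands in for $u_i(\alpha(t))$ along a hypothetical linear allocation path over $[0, 1]$; none of these specifics come from the paper.

```python
import math

# u_i along a linear allocation path, strictly concave in t (hypothetical)
u = lambda t: math.log(1.0 + t)      # utility level u_i(alpha(t))
du = lambda t: 1.0 / (1.0 + t)       # instantaneous speed of utility change

tau_s, tau_s1 = 0.0, 1.0
avg_speed = (u(tau_s1) - u(tau_s)) / (tau_s1 - tau_s)  # average speed

# Theorem 8's two conclusions for this example:
assert avg_speed > du(tau_s1) > 0.0  # average exceeds the terminal speed
assert du(tau_s) > avg_speed         # initial speed exceeds the average
```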

Proof of Theorem 9 Theorem 8 implies that
$$\frac{u_i(\tau_{s+1}) - u_i(\tau_s)}{\tau_{s+1} - \tau_s} > 0,$$

and therefore $u_i(\tau_{s+1}) > u_i(\tau_s)$.

Continuity of utility functions, and thus the intermediate value theorem, assures the existence of some $t \in (\tau_s, \tau_{s+1})$ such that $u_i(\tau_{s+1}) > u_i(t) > u_i(\tau_s)$.

Denote $I_1 = [\tau_s, t]$ and $I_2 = [t, \tau_{s+1}]$. Applying the argument of Theorem 8 to each interval yields
$$\frac{1}{\zeta_1 - \tau_s}\int_{\tau_s}^{\zeta_1} \dot{u}_i(\tau)\,d\tau \geq \frac{1}{t - \zeta_1}\int_{\zeta_1}^{t} \dot{u}_i(\tau)\,d\tau, \quad \zeta_1 \in I_1,$$
$$\frac{1}{\zeta_2 - t}\int_{t}^{\zeta_2} \dot{u}_i(\tau)\,d\tau \geq \frac{1}{\tau_{s+1} - \zeta_2}\int_{\zeta_2}^{\tau_{s+1}} \dot{u}_i(\tau)\,d\tau, \quad \zeta_2 \in I_2,$$
where $\zeta_1$ and $\zeta_2$ are any real numbers in the respective intervals. From the above inequalities, we obtain
$$\frac{1}{\zeta_1 - \tau_s}\int_{\tau_s}^{\zeta_1} \dot{u}_i(\tau)\,d\tau \geq \frac{u_i(t) - u_i(\zeta_1)}{t - \zeta_1}.$$
Letting $\zeta_1$ approach $t$ from below yields
$$\frac{1}{t - \tau_s}\int_{\tau_s}^{t} \dot{u}_i(\tau)\,d\tau \geq \dot{u}_i(t).$$
If a similar manipulation is applied by letting $\zeta_2$ tend to $t$ from above, we get
$$\dot{u}_i^+(t) \geq \frac{1}{\tau_{s+1} - t}\int_{t}^{\tau_{s+1}} \dot{u}_i(\tau)\,d\tau.$$
Since utility functions are of class $C^2$, the above two inequalities give
$$\frac{1}{\tau_{s+1} - t}\int_{t}^{\tau_{s+1}} \dot{u}_i(\tau)\,d\tau \leq \dot{u}_i(t) \leq \frac{1}{t - \tau_s}\int_{\tau_s}^{t} \dot{u}_i(\tau)\,d\tau.$$
Since the L.H.S. equals $(u_i(\tau_{s+1}) - u_i(t))/(\tau_{s+1} - t)$ and $u_i(\tau_{s+1}) > u_i(t)$ by assumption, it is concluded that $\dot{u}_i(t) > 0$. Consequently, we have shown that $u_i(\tau_s) < u_i(t) < u_i(\tau_{s+1})$ reduces to $\dot{u}_i(t) > 0$ for every $i \in N$ and every $t \in [\tau_s, \tau_{s+1})$. It is easily seen that there exists no $t$ such that $u_i(\tau_s) \geq u_i(t)$ and $u_i(t) \geq u_i(\tau_{s+1})$ over the interval $[\tau_s, \tau_{s+1})$. In fact, if there exists $t \in [\tau_s, \tau_{s+1})$ such that $u_i(t) \geq u_i(\tau_{s+1})$, then, for any $\tilde{t} \in [t, \tau_{s+1})$,
$$u_i(\tilde{t}\,) \geq u_i(\tau_{s+1})$$

must hold. This clearly contradicts $\dot{u}_i(\tau_{s+1}) > 0$; hence the desired conclusion is obtained. □

Proof of Theorem 10 Without a truthful revelation of preferences for the public good, which differs from the proof of Theorem 9, we observe
$$\dot{u}_i(\tau_{s+1}) = u_{iy}(\tau_{s+1})\Bigl[\pi_i(\tau_{s+1}) - \psi_i(\tau_s) + \delta_i\Bigl\{\sum_{j \in N} \psi_j(\tau_s) - \gamma(\tau_s)\Bigr\}\Bigr]\dot{x}(\tau_s) = 0.$$

At any equilibrium of the λMDP Procedure, the third term in the brackets vanishes, and $u_{iy} > 0$ by assumption, so that we conclude that $\psi_i(\tau_s) = \pi_i(\tau_{s+1})$ holds for any $i \in N$ and for any $\tau_s \in T$. □

Proof of Theorem 11 Condition PIF is easily checked to be satisfied, since it has already been used to formulate the procedure. To sum up, the features of the process are as follows: its solution $z[t, z(0)]$ defining the procedure is the function which associates a program $z(t)$, as well as step-sizes $\chi(t)$ and $\upsilon_i(t)$, $i \in N$, with every iteration $t$. If an initial program is feasible, then every succeeding one is also feasible. It can be demonstrated under Assumptions 1-3 that the process is stable and always converges monotonically from any initial point to an individually rational Pareto optimum. The proofs of the other conditions follow immediately from the theorems above and the definitions of the Generalized MDP Procedure. □

Footnotes
1

For the concepts of neutrality associated with planning procedures, see Cornet ([1977a, 1977b]), Cornet and Lasry ([1976]), Rochet ([1982]), Sato ([1983, 2011]). See also d’Aspremont and Drèze ([1979]) for a version of neutrality which is valid for the generic context.

 
2

See Laffont and Saint-Pierre ([1979]) for an exception with an information processing cost.

 
3

The essence of the discrete version of the MDP Procedure (CDH Procedure) can be captured in Henry and Zylberberg ([1977]). See, in addition, Tulkens ([1978]), Laffont ([1982]), Mukherji ([1990]) and Salanié ([1998]) for lucid summaries of the MDP Procedure. It can be seen as a ‘nontâtonnement process’: because of its feasibility, one can truncate it at any time. As for a contribution to the MDP literature, see Von Dem Hagen ([1991]), where a differential game approach is taken. De Trenquale ([1992]) defined a dynamic mechanism, different from the MDP Procedure, that implements Pareto-efficient and individually rational allocations in local dominant strategies in a general two-agent model. Chander ([1993]) verified the incompatibility between the core convergence property and local strategy proofness. Sato ([2007]) designed the Hedonic MDP Procedure for optimizing the gaseous attributes that compose the global atmosphere in a new theoretical context.

 
4

See Henry and Zylberberg ([1978]) for a graphical illustration of how the method of a decreasing pitch works until a Pareto optimum is attained. Although they treated the case of increasing returns to scale, the structure is isomorphic to the model with public goods. Crémer ([1983, 1990]) took another approach to increasing returns to scale and offered useful ideas that can be applied to public goods. See Heal ([1986]) for a comprehensive account of planning theory and the dilemma of choosing a step-size in discrete procedures. See also Henry and Zylberberg ([1977]) for the Heal Procedure.

 
5

For another global analog, see also Dubins’ mechanism, a speed transform of the MDP Procedure, explained in Green and Laffont ([1979]).

 
6

See Spagat ([1995]) for an incisive critique of iterative planning theory and his re-examination of the standard procedures in a Bayesian learning real-time model.

 

Acknowledgements

This is one of a series of papers dedicated to the XXXXth Anniversary of the MDP Procedure. It was at the Brussels Meeting of the Econometric Society in September 1969 that Drèze and de la Vallée Poussin together, and Malinvaud independently, presented their papers on planning tâtonnement processes for guiding and financing the optimal provision of public goods. A preliminary version of this paper was presented at the Far Eastern Meeting of the Econometric Society held at the International Conference Center Kobe, Japan, July 21, 2001. The revised version was presented at the autumn meeting of the Japanese Economic Association held at Hitotsubashi University, October 8, 2001. Some major revisions were made thereafter.

Copyright information

© Sato; licensee Springer. 2012

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.