The New Palgrave Dictionary of Economics

Living Edition
| Editors: Palgrave Macmillan

Growth Models, Multisector

  • W. A. Brock
  • W. D. Dechert
Living reference work entry


Multisector growth models have been increasingly used since the 1980s. The duality between growth models and dynamic general equilibrium models renders the multisector growth model ideal for the analysis of efficient intertemporal resource allocation. This includes renewable and non-renewable natural resources, produced resources such as capital, and land and labour resources. Growth models have been widely used in business cycle theory and in asset pricing theory. They have also been applied to the optimal management of dynamic ecological systems that have an economic component as a part of a complex systems model.


Keywords

Asset pricing model; Bequest motive; Business cycles; Central limit theorem; Computation; Concavity; Convergence; Decentralization; Dynamic macroeconomic theory; Equity premium puzzle; Equivalence theorem; General equilibrium; Indirect utility function; Infinite horizons; Law of large numbers; Multisector growth models; New Keynesian macroeconomics; Optimal growth models; Optimal planning models; Overtaking ordering; Rational expectations equilibrium; Real business cycles; Recursive intertemporal general equilibrium models; Representative agent; Separating hyperplane theorem; Single-sector growth models; Turnpike theorems

JEL Classifications


Multisector growth models are basic building blocks not only for optimal planning models (Majumdar 1987; McKenzie 1986) but also for recursive general equilibrium models (McKenzie 2002; Stokey and Lucas 1989), and for econometrically tractable models for business cycle research (Cooley 1995) and general macroeconomics (Sargent 1987). Majumdar (1987) has already covered some basic theory, some efficiency and decentralization analysis, as well as some optimization concepts. We attempt to fill in the space between Majumdar (1987) and the current research frontier as well as to outline applications not treated by Majumdar.

Before we begin, we wish to stress that the style of this article is to point the reader towards surveys of the subject in order to economize on references to the many researchers who have contributed to this rather large area, and to paint, in broad strokes, the overall structure of this research area, especially its impact on empirical work, in order to illuminate directions where the research frontier might go.

Dynamic macroeconomic theory has made much use of the stochastic one-sector growth model (Cooley 1995; Altug and Labadie 1994; Sargent 1987; Stokey and Lucas 1989), for two primary reasons. First, it is a classical result that optimal growth models can be viewed as general equilibrium models by use of the separating hyperplane theorem in an appropriate space to construct the support prices. See Becker and Boyd (1997) for this general result, which they call the ‘equivalence theorem’. It is closely related to the use of decentralization prices in Majumdar (1987) and the general treatment of decentralization in Majumdar (1992).

The basic idea of the ‘equivalence theorem’ of Becker and Boyd is as follows. Consider an infinite horizon intertemporal general equilibrium model with a representative, infinitely lived consumer who takes intertemporal prices as given. It is a classical result that the rational expectations equilibrium of such a model coincides with the optimal solution of a planning problem in which the planner has the same preferences as the representative consumer. Technical issues arise from the infinite horizon, such as the necessity and sufficiency of transversality conditions at infinity (that is, the present discounted expected value of any stocks ‘left over’ at infinity should be zero, much as in a finite horizon problem with no bequest motive). But the general ideas behind this type of result are much the same as in the well-known finite-dimensional cases. See Becker and Boyd (1997) for the details.
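For concreteness, a standard statement of such a transversality condition at infinity is as follows (the notation, with capital stocks \( k_t \) and support prices \( p_t \), is ours, not from the sources cited):

```latex
% Discounted expected value of stocks "left over" at infinity must vanish:
\lim_{T \to \infty} \, \mathbb{E}\!\left[ \beta^{T} \, p_{T} \cdot k_{T} \right] = 0
```

In concave problems a condition of this form, together with the Euler equations, is typically sufficient for optimality; establishing its necessity is the delicate part.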

Second, infinite horizon stochastic multisector models are also basic to constructing econometrically tractable models for analysing data. Here, especially, is where stochastic versions of the turnpike theorem (explained below) are used: for example, to justify appeals to laws of large numbers and central limit theorems in econometric time-series applications.

A key property of the one-sector model that promotes its use in real business cycle applications as well as intertemporal general equilibrium asset pricing applications is the stochastic analog of the turnpike theorem. This theorem states that optimal capital stock and optimal consumption converge in a stochastic sense to a unique stochastic limit under standard assumptions of concavity of the payoff function (for example, the planner’s preferences) and of the production function and modest assumptions on the structure of the stochastic shocks. It is much more difficult to obtain such results for general multisector stochastic models (Arkin and Evstigneev 1987; Marimon 1989) and even for deterministic versions of those models (McKenzie 1986, 2002).
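The deterministic special case of the one-sector Brock–Mirman (1972) model makes the turnpike property concrete: with logarithmic utility, Cobb–Douglas output f(k) = A k^α and full depreciation, the optimal policy is known in closed form, k_{t+1} = αβA k_t^α, so every positive initial stock converges to the same steady state. A minimal sketch (the parameter values are our own choices):

```python
# Deterministic special case of the Brock-Mirman (1972) one-sector model:
# log utility, output f(k) = A * k**alpha, full depreciation, discount beta.
# The known closed-form optimal policy is k' = alpha * beta * A * k**alpha.
alpha, beta, A = 0.3, 0.95, 1.0

def policy(k):
    return alpha * beta * A * k ** alpha

def simulate(k0, T=200):
    k = k0
    for _ in range(T):
        k = policy(k)
    return k

# Analytic steady state: k* solves k = alpha * beta * A * k**alpha.
k_star = (alpha * beta * A) ** (1.0 / (1.0 - alpha))
limits = [simulate(k0) for k0 in (0.05, 0.5, 5.0)]
print(k_star, limits)  # all three simulated limits agree with k_star
```

The stochastic version replaces A by a random shock and the convergence is to a stationary distribution rather than a point; the deterministic case above is the simplest illustration of the limit behaviour the turnpike theorem asserts.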

However, if the discount rate on the future is small enough, results are available in the literature locating useful sufficient conditions on payoffs and technology under which stochastic convergence occurs (Marimon 1989) and deterministic convergence occurs (McKenzie 1986, 2002). Results for stochastic multisector growth models in both discrete time and continuous time settings are also contained in the papers in Dechert (2001).

The basic idea behind these ‘turnpike’ results is first to observe that, if the discount rate on the future is zero, the dynamic optimization problem will attempt to maximize a long-run ‘static’ objective, since failing to do so incurs infinite ‘value loss’. Making this intuition mathematically precise requires introducing a partial ordering called the ‘overtaking ordering’ and making assumptions on the objective function and the dynamics so that avoidance of infinite value loss implies convergence of the optimal quantities to a unique long-run limit (see Arkin and Evstigneev 1987, and the papers in Dechert 2001, for stochastic cases, and McKenzie 1986, 2002, for deterministic cases).

Once one has results well in hand for the case of zero discounting on the future, intuition suggests that there should be a notion of ‘continuity’ that would enable one to prove that, if the discount rate is close enough to zero, convergence would still hold. Unfortunately, turning such intuition into precise mathematics turns out to be rather difficult (see McKenzie 1986, 2002, for deterministic literature and Arkin and Evstigneev 1987, the papers in Dechert 2001, and Marimon 1989, for the stochastic case).

Below we sketch the arguments used to prove turnpike theorems in the deterministic case, to give the reader a brief idea of how the mathematics works. Let the preferences of a planner be given by
$$ \max_{\left\{ x_t \right\}} \sum_{t=1}^{\infty} \beta^t \left[ u\left(x_t, x_{t-1}\right) - u\left(x_{\beta}^{*}, x_{\beta}^{*}\right) \right] \tag{1} $$
where \( u:{\mathbb{R}}^{2n}\to \mathbb{R} \) is a twice continuously differentiable function (typically an indirect utility or payoff function), β is a discount factor, 0 < β ≤ 1, and \( {x}_{\beta}^{*} \) is an optimal steady state, which solves the first-order necessary conditions of the optimization in Eq. (1):
$$ D_1 u\left(x_t, x_{t-1}\right) + \beta D_2 u\left(x_{t+1}, x_t\right) = 0, \qquad t \ge 1 \tag{2} $$
Here \( D_i \) denotes the partial derivative with respect to the ith argument of u, and \( x_0 \) is given. We assume that u is jointly concave in its arguments and use Eq. (2), evaluated at the optimal steady state \( {x}_{\beta}^{*} \), to rewrite the sum in Eq. (1). To simplify the notation, let \( {u}_{\beta}^{*}=u\left({x}_{\beta}^{*},{x}_{\beta}^{*}\right) \) and \( {D}_i{u}_{\beta}^{*}={D}_iu\left({x}_{\beta}^{*},{x}_{\beta}^{*}\right) \). Also, define
$$ d_t = -\left[ u\left(x_t, x_{t-1}\right) - u_{\beta}^{*} - \left(D_1 u_{\beta}^{*}\right)\left(x_t - x_{\beta}^{*}\right) - \left(D_2 u_{\beta}^{*}\right)\left(x_{t-1} - x_{\beta}^{*}\right) \right] $$
which is nonnegative by the concavity of u. With this notation,
$$ \sum_{t=1}^{T} \beta^t \left[ u\left(x_t, x_{t-1}\right) - u_{\beta}^{*} \right] = \beta^T \left(D_1 u_{\beta}^{*}\right)\left(x_T - x_{\beta}^{*}\right) + \beta \left(D_2 u_{\beta}^{*}\right)\left(x_0 - x_{\beta}^{*}\right) - \sum_{t=1}^{T} \beta^t d_t \tag{3} $$
Equation (3) immediately suggests that a good strategy for constructing candidate optimal programs is to choose a program \( \left\{ x_t \right\} \) that solves
$$ \min \sum_{t=1}^{\infty} \beta^t d_t . \tag{4} $$

This strategy works for all β ∈ (0, 1]. Following McKenzie (1986, and his references to David Gale) for β = 1, classify a program \( \left\{ x_t \right\} \) as good (bad) if the series \( \sum_{t=1}^{\infty} d_t \) converges (diverges), and note that every program is either good or bad. Solving Eq. (4) over good programs yields a top candidate for an optimum. By defining an appropriate partial ordering of programs that is a total ordering on the set of good programs, this top candidate turns out to be optimal. Since \( d_t \to 0 \) along any good program, the sequence \( \left\{ x_t \right\} \) converges to the unique x* that maximizes u(x, x) under the assumption that u is strictly concave. We call this analytical strategy the ‘value loss’ strategy.
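The value-loss identity used above can be checked numerically; it follows from summation by parts plus the steady-state Euler equation, and it holds along any path, optimal or not. The scalar, strictly concave example below is our own choice, not from the text:

```python
# Numerical check of the value-loss (summation-by-parts) identity for
# u(x, y) = -(x - 0.5*y)**2 - (x - 1)**2, a strictly concave example of
# our own choosing, with discount factor beta.
beta = 0.9

def u(x, y):
    return -(x - 0.5 * y) ** 2 - (x - 1.0) ** 2

def D1(x, y):  # partial derivative in the first argument
    return -2.0 * (x - 0.5 * y) - 2.0 * (x - 1.0)

def D2(x, y):  # partial derivative in the second argument
    return (x - 0.5 * y)

# Optimal steady state from the Euler equation D1 + beta*D2 = 0 at (x*, x*):
x_star = 2.0 / (3.0 - 0.5 * beta)
u_star, D1s, D2s = u(x_star, x_star), D1(x_star, x_star), D2(x_star, x_star)

def d(xt, xtm1):  # value loss d_t, nonnegative by concavity of u
    return -(u(xt, xtm1) - u_star - D1s * (xt - x_star) - D2s * (xtm1 - x_star))

# The identity holds for ANY path {x_t}, optimal or not:
xs = [0.2, 1.3, 0.7, 0.9, 0.6, 1.1]          # x_0, ..., x_T
T = len(xs) - 1
lhs = sum(beta ** t * (u(xs[t], xs[t - 1]) - u_star) for t in range(1, T + 1))
rhs = (beta ** T * D1s * (xs[T] - x_star)
       + beta * D2s * (xs[0] - x_star)
       - sum(beta ** t * d(xs[t], xs[t - 1]) for t in range(1, T + 1)))
print(lhs, rhs)  # the two sides agree to machine precision
```

The check makes plain why minimizing discounted value loss is equivalent to maximizing the discounted payoff sum: the two differ only by the boundary terms in the initial stock (given) and the terminal stock (which vanishes as T grows when β < 1 and stocks are bounded).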

There are basically two other analytical strategies for the case where β is less than, but close to, 1. It is beyond the scope of this article to discuss them here; see McKenzie (1986, 2002) for the details.

All three of these analytical strategies can be generalized to stochastic cases in which the indirect utility u contains stochastic shocks, provided Markovian-type conditions are imposed on the shock process; \( \left\{ x_t \right\} \) is replaced by a sequence of random variables \( \left\{ X_t \right\} \); and \( {x}_{\beta}^{*} \) is replaced by a certain stationary ergodic stochastic process, \( {X}_{\beta}^{*} \), that plays the role of the optimal stochastic steady state. This is not simple, but we hope our outline of one of the analytical strategies makes that one, at least, intuitively plausible (see, for example, Arkin and Evstigneev 1987; Marimon 1989; and the papers by Brock and Majumdar 1978, Brock and Mirman 1972, and Brock and Magill 1979, reprinted in Dechert 2001).

Our sketch of the above results has been deliberately brief since excellent survey treatments are readily available in the literature that we have cited. We wish to discuss here applications of multisector models to the following areas of economics: (a) a general vision of how the economy works; (b) asset pricing; (c) coupled ecological/economic dynamical systems.

General Vision

It is no exaggeration to say that classical general equilibrium theory is analytically organized around the existence of equilibrium, the relationship between the core and equilibria, the two welfare theorems, and the ‘anything goes’ theorem of Sonnenschein, Mantel and Debreu (SMD), as the subject is expounded in McKenzie (2002). The SMD result forces users to place restrictions on the consumers and producers that populate general equilibrium models before the theory can be used for empirical work. In intertemporal economics the most popular way of doing this is to restrict oneself to recursive intertemporal general equilibrium models, and that restriction (via the ‘equivalence theorem’) places us in the domain of multisector growth models (Becker and Boyd 1997).

Black (1995), stimulated by general equilibrium theory, sketches in broad strokes a vision of an economy operating close enough to a complete set of markets that the device of generating equilibria by maximizing a weighted sum of utilities can be applied (McKenzie 2002). Analytically, this device puts us in the domain of a large multisector model viewed as a general equilibrium via a generalization of the ‘equivalence theorem’ in Becker and Boyd (1997). As McKenzie (2002) shows, turnpike theory can be extended to recursive intertemporal general equilibrium models with heterogeneous consumers provided markets are complete. Black (1995) proposes adding various elements to received intertemporal recursive general equilibrium models (that is, multisector growth models), not only to fill gaps in the literature up to the mid-1990s but also to make the models match up better to data.

The book by Altug et al. (2003) might be viewed as a realization of Black’s vision. It shows the power of variations on single-sector and multisector growth models as building blocks for closed- and open-economy macro models. We give some specific examples below, chosen because cutting-edge work is currently being done in these areas and because the subject is moving fast in those directions.

Asset Pricing

Use of the ‘equivalence theorem’ rapidly led to the development of recursive, econometrically tractable intertemporal general equilibrium asset pricing models based upon multisector stochastic optimal growth models (Becker and Boyd 1997, and the papers in Dechert 2001). The confrontation with data has not been all positive: three main directions in which these models failed when confronted with data came to be known as the equity premium family of puzzles. But Weitzman (2004, p. 1) notes that in the standard calibration ‘the subjective distribution of the future growth rate has its mean and variance calibrated to average past values’, and shows ‘that using the Bayesian posterior estimates of these parameters can go a very long way toward eliminating simultaneously all three puzzles’. A major point of Weitzman is that, once the agents living in the model must take into account the estimation uncertainty in key parameters, in addition to the shocks inherent in the model, the puzzles tend to vanish.

Akdeniz and Dechert (2007) show that a single-sector stochastic asset pricing model with production and heterogeneous firms can go a long way toward removing the puzzles without introducing Weitzman’s Bayesian modification of the underlying basic model. Work like that of Akdeniz and Dechert is now possible because of advances in computational technology. Jog and Schaller (1994) show that a modification of the basic model for liquidity-constrained firms can account for patterns of mean reversion observed in returns data across size classes of firms.
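To see why the equity premium is a puzzle in this class of models, a standard back-of-envelope calculation (our illustration, in the spirit of the literature discussed above, not from this entry) prices a consumption claim in a Lucas-tree economy with CRRA utility and i.i.d. lognormal consumption growth. In that case the log equity premium equals γσ² exactly, which is tiny at conventional parameter values:

```python
import math

# Lucas-tree economy, CRRA coefficient gamma, discount factor beta,
# i.i.d. lognormal consumption growth: ln g ~ N(mu, sigma**2).
# Closed-form log returns imply a log equity premium of gamma * sigma**2.
beta, gamma = 0.99, 2.0
mu, sigma = 0.018, 0.036       # rough US annual consumption-growth moments

ln_Rf = -math.log(beta) + gamma * mu - 0.5 * gamma ** 2 * sigma ** 2
ln_Re = (-math.log(beta) + gamma * mu
         + 0.5 * sigma ** 2 - 0.5 * (1.0 - gamma) ** 2 * sigma ** 2)
premium = ln_Re - ln_Rf
print(premium, gamma * sigma ** 2)  # ~0.0026, far below the ~0.06 in the data
```

Matching the observed premium of roughly six percentage points with this formula requires an implausibly large γ, which is the heart of the puzzle that the Weitzman, Akdeniz–Dechert and Jog–Schaller modifications address in different ways.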


We have already mentioned the real business cycle literature (Cooley 1995; Altug et al. 2003) as a macroeconomic application of multisector growth models and their decentralization analysis. A major recent development in macroeconomics is to replace the representative consumer and competitive firms in such models with a representative agent facing a set of differentiated products, each produced by a differentiated-products monopolist who faces a stochastic process determining the periods in which it is allowed to change prices. This modelling device yields an analytically tractable theory of price setting which can be grafted onto the existing analytical apparatus of recursive multisector models, producing a model in which the ‘real side’ and the ‘monetary side’ of macroeconomics can be unified. Various devices are used to produce a demand for money balances in the model, including real balance services in the indirect utility function and cash-in-advance constraints. This modelling strategy has produced a new generation of very fruitful ‘New Keynesian’ macro models, which have allowed treatment of key issues of monetary policy as well as a better fit to data, especially data resulting from interactions between the real side and the monetary side of an economy. See Altug et al. (2003) and, especially, Woodford’s treatise (2003) for this genre.
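The stochastic repricing device described above is usually called Calvo pricing: each period a firm may reset its price with probability 1 − θ, so the expected spell between price changes is 1/(1 − θ). A minimal Monte Carlo check (the value of θ is our own choice):

```python
import random

# Calvo price-setting: each period a firm keeps its old price with
# probability theta, resets with probability 1 - theta. Spell lengths
# between resets are geometric with mean 1 / (1 - theta).
random.seed(0)
theta = 0.75                   # probability of being stuck with the old price

def spell_length():
    t = 1
    while random.random() < theta:   # price NOT reset this period
        t += 1
    return t

n = 100_000
avg = sum(spell_length() for _ in range(n)) / n
print(avg, 1.0 / (1.0 - theta))      # both close to 4 periods
```

The tractability claim in the text rests on exactly this geometric structure: because the reset opportunity is independent of the firm's state, aggregation across firms yields a simple recursive law of motion for the price level.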

The real world has distortions such as taxes and inflation, and other government activities such as the production of public goods, which require modifications of the basic structure of intertemporal recursive general equilibrium theory. Fortunately, the analytical core can be quite readily modified to include these elements (Turnovsky 1995).

Coupled Ecological/Economic Systems

Much of the literature on multisector optimal growth theory assumes a convex technology and a concave payoff (that is, concave utility), so that the indirect utility \( u\left({x}_t,{x}_{t-1},{S}_t\right) \) is jointly concave in (x_t, x_{t−1}) for each value of the stochastic shock S_t. We believe much future activity will involve generalizations to models of coupled ecological and economic dynamic systems where such concavity does not hold. Some analytical work in this area has already appeared (Becker and Boyd 1997; Majumdar 1992), and as computational technology progresses we expect to see more developments that combine analytics and computation.
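To illustrate what can change when concavity fails, the sketch below runs value-function iteration on a one-sector model with an S-shaped (convex-concave) production function, in the spirit of the poverty-trap literature on non-concave growth. All functional forms and parameter values are our own choices. The optimal policy exhibits a threshold: low initial stocks are optimally run down toward zero, while high initial stocks converge to a high steady state, behaviour that the turnpike theorems for concave models rule out.

```python
# Optimal growth with a NON-concave technology (a sketch, our own example):
# f is convex for small k, concave for large k; utility is strictly concave.
beta = 0.9

def f(k):                      # S-shaped production function
    return 4.0 * k * k / (1.0 + k * k)

def u(c):                      # strictly concave utility, u(0) = 0
    return c ** 0.5

N = 121
grid = [3.0 * i / (N - 1) for i in range(N)]
V = [0.0] * N
policy = [0] * N

for _ in range(300):           # Bellman iteration to approximate convergence
    newV = [0.0] * N
    for i, k in enumerate(grid):
        y = f(k)
        best, best_j = -1e18, 0
        for j, kp in enumerate(grid):
            c = y - kp
            if c < 0.0:
                break          # grid is increasing: no feasible kp beyond this
            val = u(c) + beta * V[j]
            if val > best:
                best, best_j = val, j
        newV[i], policy[i] = best, best_j
    V = newV

def limit_stock(k0, T=200):
    i = min(range(N), key=lambda j: abs(grid[j] - k0))
    for _ in range(T):
        i = policy[i]
    return grid[i]

low, high = limit_stock(0.1), limit_stock(2.0)
print(low, high)               # low start collapses; high start does not
```

The loss of joint concavity means the value-loss argument sketched earlier no longer delivers a unique long-run limit, which is exactly why a combination of analytics and computation is needed in this area.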

Bibliography


  1. Akdeniz, L., and W. Dechert. 2007. The equity premium in Brock’s asset pricing model. Journal of Economic Dynamics and Control 31: 2263–2292.
  2. Altug, S., J. Chadha, and C. Nolan. 2003. Dynamic macroeconomic analysis: Theory and policy in general equilibrium. Cambridge: Cambridge University Press.
  3. Altug, S., and P. Labadie. 1994. Dynamic choice and asset markets. New York: Academic Press.
  4. Arkin, V., and I. Evstigneev. 1987. Stochastic models of control and economic dynamics. New York: Academic Press.
  5. Becker, R., and R. Boyd. 1997. Capital theory, equilibrium analysis and recursive utility. Oxford: Blackwell.
  6. Black, F. 1995. Exploring general equilibrium. Cambridge, MA: MIT Press.
  7. Brock, W.A., and M.J.P. Magill. 1979. Dynamics under uncertainty. Econometrica 47: 843–868.
  8. Brock, W.A., and M. Majumdar. 1978. Global asymptotic stability results for multisector models of optimal growth under uncertainty when future utilities are discounted. Journal of Economic Theory 18: 225–243.
  9. Brock, W.A., and L. Mirman. 1972. Optimal economic growth and uncertainty: The discounted case. Journal of Economic Theory 4: 479–513.
  10. Cooley, T., ed. 1995. Frontiers of business cycle research. Princeton: Princeton University Press.
  11. Dechert, W., ed. 2001. Growth theory, nonlinear dynamics, and economic modelling: Scientific essays of William Allen Brock. Cheltenham: Edward Elgar.
  12. Jog, V., and H. Schaller. 1994. Finance constraints and asset pricing: Evidence on mean reversion. Journal of Empirical Finance 1: 193–209.
  13. Majumdar, M. 1987. Multisector growth models. In The new Palgrave: A dictionary of economics, ed. J. Eatwell, M. Milgate, and P. Newman. London: Macmillan.
  14. Majumdar, M., ed. 1992. Decentralization in infinite horizon economies. Boulder: Westview Press.
  15. Marimon, R. 1989. Stochastic turnpike property and stationary equilibrium. Journal of Economic Theory 47: 282–306.
  16. McKenzie, L. 1986. Optimal economic growth, turnpike theorems, and comparative dynamics. In Handbook of mathematical economics, vol. 3, ed. K. Arrow and M. Intriligator. Amsterdam: North-Holland.
  17. McKenzie, L. 2002. Classical general equilibrium theory. Cambridge, MA: MIT Press.
  18. Sargent, T. 1987. Dynamic macroeconomic theory. Cambridge, MA: Harvard University Press.
  19. Stokey, N., and R. Lucas. 1989. Recursive methods in economic dynamics. Cambridge, MA: Harvard University Press.
  20. Turnovsky, S. 1995. Methods of macroeconomic dynamics. Cambridge, MA: MIT Press.
  21. Weitzman, M. 2004. The Bayesian equity premium. Working paper, Department of Economics, Harvard University.
  22. Woodford, M. 2003. Interest and prices. Princeton: Princeton University Press.

Copyright information

© The Author(s) 2008
