
Revisiting the Great Moderation: Policy or Luck?


Abstract

We investigate the relative roles of monetary policy and shocks in causing the Great Moderation, using indirect inference where a DSGE model is tested for its ability to mimic a VAR describing the data. A New Keynesian model with a Taylor Rule and one with the Optimal Timeless Rule are both tested. The latter easily dominates, whether calibrated or estimated, implying that the Fed’s policy in the 1970s was neither inadequate nor a cause of indeterminacy; it was both optimal and essentially unchanged during the 1980s. By implication it was largely the reduced shocks that caused the Great Moderation—among them monetary policy shocks the Fed injected into inflation.


Notes

  1. Rules of the Taylor type are generally found to fit the data well, either as a stand-alone equation in regression analysis, or as part of a full model in DSGE analysis. Giannoni and Woodford (2005) is a recent example of the former, whereas Smets and Wouters (2007) and Ireland (2007) are examples of the latter. However, besides the usual difficulties encountered in applied work (e.g., Castelnuovo 2003 and Carare and Tchaidze 2005), these estimates face an identification problem pointed out by Minford et al. (2002) and Cochrane (2011).

    Lack of identification occurs when an equation could be confused with a linear combination of other equations in the model. In the case of the Taylor Rule, DSGE models give rise to the same correlations between interest rate and inflation as the Taylor Rule, even if the Fed is doing something quite different, such as targeting the money supply. For example, Minford (2008) shows this in a DSGE model with Fischer wage contracts. For details see Minford and Ou (2013).

  2. With the use of a flat prior, Bayesian ranking is equivalent to assessing the likelihood ratio of the two models. This ranking is similar to the ranking we obtain under indirect inference, except that the criterion is the likelihood of the data rather than the likelihood of the data representation. However, we must emphasise that our first concern is testing against the data for rejection; only for models that are not rejected are we concerned with rank.

  3. Our Supporting Annex shows the full derivation.

  4. See also Clarida et al. (1999) and McCallum and Nelson (2004). This is based on defining social welfare loss as ‘the loss in units of consumption as a percentage of steady-state output’ as in Rotemberg and Woodford (1998)—also Nistico (2007); it is conditional on assuming a particular utility function and zero-inflation steady state—more details can be found in our Annex.

  5. Svensson and Woodford (2004) also comment that such a rule may produce indeterminacy; however this does not occur in the model here.

  6. Note that all equation errors are allowed to follow an AR(1) process when the models are tested against the data, so omitted variables are allowed for. We also transform the Taylor Rules to quarterly versions so that the frequency of the interest rate and inflation is consistent with the other variables in the model. All constant terms are dropped, as demeaned, detrended data will be used; see the section ‘Data and Results’ below.

  7. Some equations may involve calculation of expectations. The method we use here is the robust instrumental variables estimation suggested by McCallum (1976) and Wickens (1982): we set the lagged endogenous data as instruments and calculate the fitted values from a VAR(1)—this also being the auxiliary model chosen in what follows.
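    A minimal sketch of this step, under illustrative assumptions (the data and variable ordering are placeholders, not the authors' dataset): expectational terms are proxied by fitted values from a VAR(1) in the lagged endogenous data, in the spirit of McCallum (1976) and Wickens (1982).

```python
import numpy as np

def var1_expectations(data):
    """Proxy E_t[x_{t+1}] by the fitted values of a VAR(1) estimated by OLS.

    data : (T, k) array of endogenous variables (e.g. output gap, inflation,
           interest rate); returns a (T, k) array of one-step-ahead forecasts.
    """
    X = np.hstack([np.ones((data.shape[0] - 1, 1)), data[:-1]])  # regressors [1, x_{t-1}]
    Y = data[1:]                                                 # x_t
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)                    # OLS coefficients
    Xt = np.hstack([np.ones((data.shape[0], 1)), data])          # [1, x_t]
    return Xt @ B                                                 # fitted c + A x_t

# Example with placeholder data: the fitted column for inflation would stand in
# for the expectational term when extracting residuals equation by equation.
rng = np.random.default_rng(0)
toy = rng.standard_normal((200, 3)).cumsum(axis=0) * 0.1
expected = var1_expectations(toy)
```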

  8. The bootstraps in our tests are all drawn as time vectors so contemporaneous correlations between the innovations are preserved.
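    The sketch below (illustrative only) shows what drawing the bootstraps as time vectors means in practice: entire period-t rows of the innovation matrix are resampled together, which preserves the contemporaneous correlations between the innovations.

```python
import numpy as np

def bootstrap_time_vectors(innovations, rng):
    """Resample whole rows (time vectors) of a (T, k) innovation matrix."""
    T = innovations.shape[0]
    idx = rng.integers(0, T, size=T)      # time indices drawn with replacement
    return innovations[idx, :]            # each draw keeps the full period-t vector

rng = np.random.default_rng(1)
innov = np.random.default_rng(0).standard_normal((150, 3))  # placeholder innovations
sample = bootstrap_time_vectors(innov, rng)                 # one bootstrap sample
```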

  9. Specifically, they found that the bias due to bootstrapping was just over 2 % at the 95 % confidence level and 0.6 % at the 99 % level. They suggested possible further refinements in the bootstrapping procedure which could increase the accuracy further; however, we do not feel it necessary to pursue these here.

  10. We use a Simulated Annealing algorithm due to Ingber (1996). This mimics the cooling of steel, in which the metal is cooled slowly with a degree of reheating at randomly chosen moments in the cooling process, ensuring that defects are minimised globally. Similarly, the algorithm searches in the chosen range and, besides accepting points that improve the objective, it also accepts points that do not; this helps to stop the algorithm being caught in local minima. We find this algorithm improves substantially here on a standard optimisation algorithm; Chib et al. (2010) report that in their experience the SA algorithm deals well with distributions that may be highly irregular in shape, and much better than the Newton–Raphson method.

    Our estimation builds on our standard testing method: we take a set of model parameters (excluding error processes), extract the resulting residuals from the data using the LIML method, find their implied autoregressive coefficients (AR(1) here) and then bootstrap the implied innovations with this full set of parameters to find the implied Wald value. This is then minimised by the SA algorithm; the loop is sketched below.
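    The outline below is a sketch under stated assumptions, not the authors' code. `extract_residuals`, `fit_ar1`, `simulate_model` and `var1_coeffs` are hypothetical helpers standing in for the model-specific steps described above (LIML residual extraction, AR(1) fitting, model simulation, and estimation of the auxiliary VAR(1) descriptors), and `scipy.optimize.dual_annealing` is used here as a generic stand-in for the Ingber-style SA search.

```python
import numpy as np
from scipy.optimize import dual_annealing  # generic simulated-annealing optimiser

def wald_objective(theta, data, n_boot=1000):
    """Indirect-inference Wald criterion for candidate structural parameters theta."""
    resid = extract_residuals(theta, data)      # hypothetical: residuals implied by theta (LIML)
    rho, innov = fit_ar1(resid)                 # hypothetical: AR(1) coefficients and innovations
    a_data = var1_coeffs(data)                  # hypothetical: auxiliary VAR(1) descriptors, actual data
    sims = []
    for _ in range(n_boot):                     # bootstrap innovations as time vectors
        idx = np.random.randint(0, len(innov), size=len(innov))
        sim = simulate_model(theta, rho, innov[idx, :])   # hypothetical: simulate with full parameter set
        sims.append(var1_coeffs(sim))           # descriptors on the simulated sample
    sims = np.array(sims)
    diff = a_data - sims.mean(axis=0)
    cov = np.cov(sims, rowvar=False)
    return diff @ np.linalg.inv(cov) @ diff     # Wald value (squared Mahalanobis distance)

# Structural parameters are searched within +/-50% of their calibrated values (note 17):
# bounds = list(zip(0.5 * calibrated, 1.5 * calibrated))
# result = dual_annealing(wald_objective, bounds, args=(data,))
```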

  11. http://research.stlouisfed.org/fred2/.

  12. Note that by defining the output gap as the HP-filtered log output, we have effectively assumed that the HP trend approximates the flexible-price output, in line with the bulk of other empirical work. To estimate the flexible-price output from the full DSGE model that underlies our three-equation representation, we would need to specify that model in detail, estimate the structural shocks within it and fit the model to the unfiltered data, in order to estimate the output that would have resulted from these shocks under flexible prices. This is a substantial undertaking well beyond the scope of this paper, though something worth pursuing in future work.

    Le et al. (2011) test the Smets and Wouters (2007) US model by the same methods as we use here. This has a Taylor Rule that responds to flexible-price output. It is also close to the timeless optimum since, besides inflation, it responds mainly not to the level of the output gap but to its rate of change and also has strong persistence so that these responses cumulate strongly. Le et al. find that the best empirical representation of the output gap treats the output trend as a linear or HP trend instead of the flexible-price output—this Taylor Rule is used in the best-fitting ‘weighted’ models for both the full sample and the sample from 1984. Thus while in principle the output trend should be the flexible-price output solution, it may be that in practice these models capture this rather badly so that it performs less well than the linear or HP trends.

    We have also purposely converted the annualised Fed rates from FRED to quarterly rates, so that the frequencies of all time series are kept consistent on a quarterly basis; these data transformations are sketched below. The quarterly interest rate in steady state is given by \( {i}_{ss}=\frac{1}{\beta }-1 \).
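    A minimal sketch of these transformations, under illustrative assumptions (λ = 1600 for quarterly data, compound conversion of the annualised rate; the exact conversion used by the authors is not spelt out here):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.filters.hp_filter import hpfilter

def output_gap(real_gdp: pd.Series) -> pd.Series:
    """Output gap proxied by the HP-filtered cycle of log output (lambda = 1600)."""
    cycle, _trend = hpfilter(np.log(real_gdp), lamb=1600)
    return cycle

def quarterly_rate(annualised_pct: pd.Series) -> pd.Series:
    """Convert an annualised percentage rate (e.g. the Fed funds rate from FRED)
    to a quarterly decimal rate; compound conversion is shown as one possibility."""
    return (1.0 + annualised_pct / 100.0) ** 0.25 - 1.0

beta = 0.99                    # illustrative discount factor
i_ss = 1.0 / beta - 1.0        # steady-state quarterly rate, i_ss = 1/beta - 1
```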

  13. The Qu-Perron test suggests 1984Q3 as the most likely break date within the range. We show in the Supporting Annex that our tests are robust to this later choice of switch date.

  14. We show the plots and unit root test results of these in the Annex.

    T-value normalization of the Wald percentiles is calculated using the method of Wilson and Hilferty (1931) for transforming a chi-squared distribution into the standard normal distribution. The formula used here is:

    $$ Z=\left\{\left[{\left(2{M}^{2}\right)}^{1/2}-{(2n)}^{1/2}\right]/\left[{\left(2{M}_{95}^{2}\right)}^{1/2}-{(2n)}^{1/2}\right]\right\}\times 1.645 $$

    where \( {M}^{2} \) is the square of the Mahalanobis distance calculated from the Wald statistic with the real data, \( {M}_{95}^{2} \) is its corresponding 95 % critical value on the simulated (chi-squared) distribution, n is the degrees of freedom of the variate, and Z is the normalized t value; it is obtained by taking the square root and letting n tend to infinity in the Wilson and Hilferty (1931) transformation.
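    A small sketch of this normalization; values of Z below 1.645 then correspond to non-rejection at the 95 % level:

```python
import math

def normalized_t(m_sq, m_sq_95, n):
    """Wilson-Hilferty style normalization of the Wald statistic.

    m_sq    : squared Mahalanobis distance computed from the actual data
    m_sq_95 : its 95% critical value on the simulated distribution
    n       : degrees of freedom of the variate
    """
    num = math.sqrt(2.0 * m_sq) - math.sqrt(2.0 * n)
    den = math.sqrt(2.0 * m_sq_95) - math.sqrt(2.0 * n)
    return num / den * 1.645

print(normalized_t(30.0, 36.0, 25))  # example values: below 1.645, so not rejected
```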

  15. We could use the approach suggested in Minford et al. (2011), in which the monetary authority embraces a terminal condition designed to eliminate imploding (as well as exploding) sunspots. In this case the model is forced to a determinate solution even when the Taylor Principle does not hold. However, in our sample here we find that the model only fails to be rejected with inflation response parameters well in excess of unity—see below—while, as we see from Table 4, it is consistently rejected for parameters that get close to unity. So parameter values below unity, where the Taylor Principle does not apply, seem unlikely to fit the facts, and we have therefore not pursued them here using this terminal condition approach.

  16. In a recent paper Ireland (2007) estimates a model in which there is a non-standard ‘Taylor Rule’ that is held constant across both post-war episodes. His policy rule always satisfies the Taylor Principle because, unusually, it is the change in the interest rate that is set in response to inflation and the output gap, so that the long-run response to inflation is infinite. He distinguishes the policy actions of the Fed between the two subperiods not by any change in the rule’s coefficients but by a time-varying inflation target, which he treats under the assumptions of ‘opportunism’ largely as a function of the shocks to the economy. Ireland’s model implies that the cause of the Great Moderation is the fall in shock variances. However, since these also cause a fall in the variance of the inflation target, which in turn lowers the variance of inflation, part of this fall in shock variance can be attributed to monetary policy.

    It turns out that Ireland’s model is hardly distinguishable from our Optimal Timeless Rule model. His rule changes the interest rate until the Optimal Timeless Rule is satisfied, in effect forcing it on the economy. Since the Ireland rule is so similar to the Optimal Timeless Rule, it is not surprising that its empirical performance is also similar; thus the Wald percentiles for it are virtually the same.

    Ireland’s rule can in principle be distinguished from the Optimal Timeless Rule via his restriction on the rule’s error. However, we cannot apply this restriction within our framework, so that Ireland’s rule in its unrestricted form differs materially from the Optimal Timeless Rule only in the interpretation of the error. From a welfare viewpoint it makes little difference whether the cause of the policy error is excessive target variation or excessively variable mistakes in policy setting; the former can be seen as a type of policy mistake. Thus both versions of the rule imply that what changed in the rule between the two subperiods was the policy error. In effect, we can treat Ireland’s rule as essentially the same in our model context as the Optimal Timeless Rule, and while he calls it a Taylor Rule, it is quite distinct from such rules as defined here.

  17. We fix the time discount factor β and the steady-state consumption-output ratio \( \frac{C}{Y} \) as calibrated; other parameters are allowed to vary within ±50% of the calibrated values—which are set as initial values here—unless stated otherwise.

  18. It could be argued that deep parameters such as the elasticity of intertemporal substitution and Calvo price-change probabilities should remain fixed across the two periods. However, with such radically different environments these parameters could have differed; for example Le et al. (2011) find evidence that the degree of nominal rigidity varied across periods and interpret this as a response to changing variability. Here therefore we allow the data to determine the extent of change.

    The results for the best testable weak Taylor Rule version are as shown in Table 4.

References

  • Benati L, Surico P (2009) VAR analysis and the great moderation. Am Econ Rev 99(4):1636–1652

  • Bernanke BS, Mihov I (1998) Measuring monetary policy. Q J Econ 113(3):869–902

  • Boivin J, Giannoni MP (2006) Has monetary policy become more effective? Rev Econ Stat 88(3):445–462

  • Calvo GA (1983) Staggered prices in a utility-maximizing framework. J Monet Econ 12:383–398

  • Canova F (2005) Methods for applied macroeconomic research. Princeton University Press, Princeton

  • Carare A, Tchaidze R (2005) The use and abuse of Taylor Rules: how precisely can we estimate them? IMF Working Paper, No. 05/148, July

  • Castelnuovo E (2003) Taylor Rules, omitted variables, and interest rate smoothing in the US. Econ Lett 81:55–59

  • Chib S, Kang KH, Ramamurthy S (2010) Term structure of interest rates in a DSGE model with regime changes. Olin Business School working paper, Washington University, http://apps.olin.wustl.edu/MEGConference/Files/pdf/2010/59.pdf, especially appendix D, p 46

  • Christiano LJ, Eichenbaum M, Evans CL (2005) Nominal rigidities and the dynamic effects of a shock to monetary policy. J Polit Econ 113(1):1–45

  • Clarida R, Gali J, Gertler ML (1999) The science of monetary policy: a New Keynesian perspective. J Econ Lit 37(4):1661–1707

  • Clarida R, Gali J, Gertler ML (2000) Monetary policy rules and macroeconomic stability: evidence and some theory. Q J Econ 115(1):147–180

  • Cochrane JH (2011) Determinacy and identification with Taylor Rules. J Polit Econ 119(3):565–615

  • Fernandez-Villaverde J, Guerron-Quintana P, Rubio-Ramirez JF (2009) The new macroeconometrics: a Bayesian approach. In: O’Hagan A, West M (eds) Handbook of applied Bayesian analysis. Oxford University Press, Oxford

  • Fernandez-Villaverde J, Guerron-Quintana P, Rubio-Ramirez JF (2010) Reading the recent monetary history of the U.S., 1959–2007. Federal Reserve Bank of Philadelphia working paper, No. 10–15

  • Gambetti L, Pappa E, Canova F (2008) The structural dynamics of U.S. output and inflation: what explains the changes? J Money Credit Bank 40(2–3):369–388

  • Giannoni MP, Woodford M (2005) Optimal interest rate targeting rules. In: Bernanke BS, Woodford M (eds) Inflation targeting. University of Chicago Press, Chicago

  • Gourieroux C, Monfort A (1996) Simulation based econometric methods, CORE Lectures Series. Oxford University Press, Oxford

  • Gourieroux C, Monfort A, Renault E (1993) Indirect inference. J Appl Econ 8:S85–S118

  • Gregory AW, Smith GW (1991) Calibration as testing: inference in simulated macroeconomic models. J Bus Econ Stat 9(3):297–303

  • Gregory AW, Smith GW (1993) Calibration in macroeconomics. In: Maddala GS (ed) Handbook of statistics, vol 11. Elsevier, pp 703–719

  • Ingber L (1996) Adaptive simulated annealing (ASA): lessons learned. Control Cybern 25:33

  • Ireland PN (2007) Changes in the Federal Reserve’s inflation target: causes and consequences. J Money Credit Bank 39(8):1851–1882

  • Le VPM, Meenagh D, Minford APL, Wickens MR (2011) How much nominal rigidity is there in the US economy? Testing a new Keynesian DSGE model using indirect inference. J Econ Dyn Control 35(12):2078–2104

  • Le VPM, Meenagh D, Minford APL, Wickens MR (2012) Testing DSGE models by indirect inference and other methods: some Monte Carlo experiments. Cardiff University Working Paper Series, E2012/15, June

  • Lubik TA, Schorfheide F (2004) Testing for indeterminacy: an application to U.S. monetary policy. Am Econ Rev 94(1):190–217

  • McCallum BT (1976) Rational expectations and the natural rate hypothesis: some consistent estimates. Econometrica 44(1):43–52

  • McCallum BT, Nelson E (2004) Timeless perspective vs. discretionary monetary policy. Federal Reserve Bank of St. Louis Review 86(2):43

  • Minford P (2008) Commentary on ‘Economic projections and rules of thumb for monetary policy’. Federal Reserve Bank of St. Louis Review 90(4):331

  • Minford P, Ou Z (2013) Taylor rule or optimal timeless policy? Reconsidering the Fed’s behaviour since 1982. Econ Model 32:113–123

  • Minford P, Perugini F, Srinivasan N (2002) Are interest rate regressions evidence for a Taylor rule? Econ Lett 76(1):145–150

  • Minford P, Perugini F, Srinivasan N (2011) Determinacy in New Keynesian models: a role for money after all? Int Financ 14(2):211–229

  • Nistico S (2007) The welfare loss from unstable inflation. Econ Lett 96(1):51–57

  • Primiceri GE (2005) Time varying structural vector autoregressions and monetary policy. Rev Econ Stud 72:821–852

  • Qu Z, Perron P (2007) Estimating and testing structural changes in multivariate regressions. Econometrica 75(2):459–502

  • Rotemberg JJ, Woodford M (1997) An optimization-based econometric framework for the evaluation of monetary policy. NBER Macroecon Annu 12:297–346

  • Rotemberg JJ, Woodford M (1998) An optimization-based econometric framework for the evaluation of monetary policy. NBER Technical Working Paper, No. 233

  • Rudebusch GD (2002) Term structure evidence on interest rate smoothing and monetary policy inertia. J Monet Econ 49(6):1161–1187

  • Sack B, Wieland V (2000) Interest-rate smoothing and optimal monetary policy: a review of recent empirical evidence. J Econ Bus 52:205–228

  • Sims CA, Zha T (2006) Were there regime switches in U.S. monetary policy? Am Econ Rev 96(1):54–81

  • Smets F, Wouters R (2007) Shocks and frictions in US business cycles: a Bayesian DSGE approach. Am Econ Rev 97(3):586–606

  • Smith A (1993) Estimating nonlinear time-series models using simulated vector autoregressions. J Appl Econ 8:S63–S84

  • Stock JH, Watson MW (2002) Has the business cycle changed and why? NBER Working Paper 9127, National Bureau of Economic Research

  • Svensson LEO, Woodford M (2004) Implementing optimal policy through inflation-forecast targeting. In: Bernanke BS, Woodford M (eds) Inflation targeting. University of Chicago Press, Chicago

  • Taylor JB (1993) Discretion versus policy rules in practice. Carnegie-Rochester Conf Ser Public Policy 39:195–214

  • Wickens MR (1982) The efficient estimation of econometric models with rational expectations. Rev Econ Stud 49(1):55–67

  • Wilson EB, Hilferty MM (1931) The distribution of chi-square. Proc Natl Acad Sci 17(12):684–688

  • Woodford M (1999) Optimal monetary policy inertia. Manch Sch 67:1–35

  • Woodford M (2003a) Optimal interest rate smoothing. Rev Econ Stud 70(4):861–886

  • Woodford M (2003b) Interest and prices: foundations of a theory of monetary policy. Princeton University Press, Princeton


Acknowledgments

We are grateful to Michael Arghyrou, Michael Hatcher, Vo Phuong Mai Le, David Meenagh, Edward Nelson, Ricardo Reis, Peter Smith, Herman van Dijk and Kenneth West for helpful comments. We also thank Zhongjun Qu and Pierre Perron for sharing their code for testing structural breaks. A supporting annex to this paper is available at www.patrickminford.net/wp/E2012_9_annex.pdf.

Author information

Corresponding author

Correspondence to Zhirong Ou.



Cite this article

Minford, P., Ou, Z. & Wickens, M. Revisiting the Great Moderation: Policy or Luck?. Open Econ Rev 26, 197–223 (2015). https://doi.org/10.1007/s11079-014-9319-7

