The Econometrics of DSGE Models

In this paper, I review the literature on the formulation and estimation of dynamic stochastic general equilibrium (DSGE) models with a special emphasis on Bayesian methods. First, I discuss the evolution of DSGE models over the last couple of decades. Second, I explain why the profession has decided to estimate these models using Bayesian methods. Third, I briefly introduce some of the techniques required to compute and estimate these models. Fourth, I illustrate the techniques under consideration by estimating a benchmark DSGE model with real and nominal rigidities. I conclude by offering some pointers for future research.


Introduction
This article elaborates on a basic thesis: the formal estimation of dynamic stochastic general equilibrium (DSGE) models has become one of the cornerstones of modern macroeconomics.
The combination of rich structural models, novel solution algorithms, and powerful simulation techniques has allowed researchers to transform the quantitative implementation of equilibrium models from a disparate collection of ad hoc procedures to a systematic discipline where progress is fast and prospects entrancing. This captivating area of research, which for lack of a better name I call the New Macroeconometrics, is changing the way we think about models and about economic policy advice.
In the next pages, I will lay out my case in detail. I will start by framing the appearance of DSGE models in the context of the evolution of contemporary macroeconomics and of how economists have reacted to incorporate both theoretical insights and empirical challenges.
Then, I will explain why the New Macroeconometrics mainly follows a Bayesian approach. I will introduce some of the new techniques in the literature. I will illustrate these points with a benchmark application and I will conclude with a discussion of where I see the research at the frontier of macroeconometrics. Because of space limitations, I will not survey the field in exhaustive detail or provide a complete description of the tools involved (indeed, I will offer the biased recommendation of many of my own papers). Instead, I will offer an entry point to the topic that, like the proverbial Wittgenstein's ladder, can eventually be discarded without undue apprehension once the reader has mastered the ideas considered here. The interested economist can also find alternative material in An and Schorfheide (2006), who focus more than I do on Bayesian techniques and less on pure macroeconomics, in Fernández-Villaverde, Guerrón-Quintana, and Rubio-Ramírez (2008), where the work is related to general issues in Bayesian statistics, and in the recent textbooks on macroeconometrics by Canova (2007) and DeJong and Dave (2007).

The Main Thesis
Dynamic equilibrium theory made a quantum leap between the early 1970s and the late 1990s.
In the comparatively brief space of 30 years, macroeconomists went from writing prototype models of rational expectations (think of Lucas, 1972) to handling complex constructions like the economy in Christiano, Eichenbaum, and Evans (2005). It was similar to jumping from the Wright brothers to an Airbus A380 in one generation.
A particular keystone for that development was, of course, Kydland and Prescott's 1982 paper "Time to Build and Aggregate Fluctuations." For the first time, macroeconomists had a small and coherent dynamic model of the economy, built from first principles with optimizing agents, rational expectations, and market clearing, that could generate data that resembled observed variables to a remarkable degree. Yes, there were many dimensions along which the model failed, from the volatility of hours to the persistence of output. But the amazing feature was how well the model did despite having so little of what was traditionally thought of as the necessary ingredients of business cycle theories: money, nominal rigidities, or non-market clearing.
Except for a small but dedicated group of followers at Minnesota, Rochester, and other bastions of heresy, the initial reaction to Kydland and Prescott's assertions varied from amused incredulity to straightforward dismissal. The critics were either appalled by the whole idea that technological shocks could account for a substantial fraction of output volatility or infuriated by what they considered the superfluity of technical fireworks. After all, could we not have done the same in a model with two periods? What was so important about computing the whole equilibrium path of the economy?
It turns out that while the first objection regarding the plausibility of technological shocks is alive and haunting us (even today the most sophisticated DSGE models still require a notable role for technological shocks, which can be seen as a good or a bad thing depending on your perspective), the second complaint has aged rapidly. As Max Planck remarked somewhere, a new methodology does not triumph by convincing its opponents, but rather because critics die and a new generation grows up that is familiar with it. Few occasions demonstrate the insight of Planck's witticism better than the spread of DSGE models. The new cohorts of graduate students quickly became acquainted with the new tools employed by Kydland and Prescott, such as recursive methods and computation, if only because of the comparative advantage that the mastery of technical material offers to young, ambitious minds. And naturally, in the process, younger researchers began to appreciate the flexibility offered by the tools. Once you know how to write down a value function in a model with complete markets and fully flexible prices, introducing rigidities or other market imperfections is only one step ahead: one more state variable here or there and you have a job market paper.
Obviously, I did not mention rigidities as a random example of contraptions that we include in our models, but to direct our attention to how surprisingly popular such additions to the main model turned out to be. Most macroeconomists, myself included, have always had a soft spot for nominal or real rigidities. A cynic will claim it is just because they are most convenient. After all, they dispense with the necessity for reflection, since there is hardly any observation of the aggregate behavior of the economy that cannot be blamed on one rigidity or another.3 But just because a theory is inordinately serviceable, or warrants the more serious accusation that it encourages mental laziness, is certainly not proof that the theory is not true.
At least since David Hume, economists have believed that they have identified a monetary transmission mechanism from increases in money to short-run fluctuations caused by some form or another of price stickiness. It takes much courage, and more aplomb, to dismiss two and a half centuries of a tradition linking Hume to Woodford and going through Marshall, Keynes, and Friedman. Even those with less of a Burkean mind than mine should feel reluctant to proceed in such a perilous manner. Moreover, after one finishes reading Friedman and Schwartz's (1971) A Monetary History of the United States or slogging through the mountain of Vector Autoregressions (VARs) estimated over 25 years, it must be admitted that those who see money as an important factor in business cycle fluctuations have an impressive empirical case to rely on. Here is not the place to evaluate all these claims (although in the interest of disclosure, I must admit that I am myself less than totally convinced of the importance of money outside the case of large inflations). Suffice it to say that the previous arguments of intellectual tradition and data were a motivation compelling enough for the large number of economists who jumped into the possibility of combining the beauty of DSGE models with the importance of money documented by empirical studies.
Researchers quickly found that we basically require three elements for that purpose. First, we need monopolistic competition. Without market power, any firm that does not immediately adjust its prices will lose all its sales. While monopolistic competition can be incorporated in different ways, the favorite route is to embody the Dixit-Stiglitz framework into a general equilibrium environment, as so beautifully done by Blanchard and Kiyotaki (1987).
While not totally satisfactory (for example, the basic Dixit-Stiglitz setup implies counterfactually constant mark-ups), the framework has proven to be easy to handle and surprisingly flexible.3 Second, we need some role to justify the existence of money. Money in the utility function or a cash-in-advance constraint can accomplish that goal in a not particularly elegant but rather effective way.4 Third, we need a monetary authority inducing nominal shocks to the economy. A monetary policy rule, such as a money growth process or a Taylor rule, usually stands in nicely for such an authority. There were, in addition, two extra elements that improve the fit of the model. First, to delay and extend the response of the economy to shocks, macroeconomists postulated factors such as habit persistence in consumption, adjustment costs of investment, or a changing utilization rate of capital. Second, many extra shocks were added: to investment, to preferences, to monetary and fiscal policy, etc.5 The stochastic neoclassical growth model of Kydland and Prescott showed a remarkable ability to absorb all these mechanisms. After a transitional period of amalgamation during the 1990s, by 2003 the model augmented with nominal and real rigidities was sufficiently mature to be put in a textbook by Mike Woodford and to become the basis for applied work. For the first time, DSGE models were flexible enough to fit the data sufficiently well to be competitive with VARs in terms of forecasting power (see Edge, Kiley, and Laforte, 2008, for the enchantingly good forecast record of a state-of-the-art DSGE model) and rich enough to become laboratories where realistic economic policies could be evaluated. The rest of the story is simple: DSGE models quickly became the standard tool for the quantitative analysis of policies, and every self-respecting central bank felt that it needed to estimate its own DSGE model.6 However, as surprising as the quick acceptance of DSGE models outside academic circles was, even more unexpected was the fact that the models were not only formally estimated, leaving behind the rather unsatisfactory calibration approach, but estimated from a Bayesian perspective.

3 A more sophisticated critic will even point out that the presence of rigidities at the micro level may wash out at the aggregate level, as in the wonderful example of Caplin and Spulber (1987).

4 Wallace (2001) has listed many reasons to suspect that these mechanisms may miss important channels through which money matters. After all, they are reduced forms of an underlying model and, as such, they may not be invariant to policy changes. Unfortunately, the profession has not developed a well-founded model of money that can be taken to the data and applied to policy analysis. Despite some recent promising progress (Lagos and Wright, 2005), money in the utility function or cash-in-advance will be with us for many years to come.

5 Also, researchers learned that it was easy to incorporate home production (Benhabib et al., 1991), an open-economy sector (Mendoza, 1991, Backus, Kehoe, and Kydland, 1992, and Correia, Neves, and Rebelo, 1995), or a financial sector (Bernanke, Gertler, and Gilchrist, 1999), among other extensions that I cannot discuss here.
6 Examples include the Federal Reserve Board (Erceg, Guerrieri, and Gust, 2006), the European Central Bank (Christoffel, Coenen, and Warne, 2007), the Bank of Canada (Murchison and Rennison, 2006), the Bank of England (Harrison et al., 2005), the Bank of Sweden (Adolfson et al., 2005), the Bank of Finland (Kilponen and Ripatti, 2006, and Kortelainen, 2002), and the Bank of Spain (Andrés, Burriel, and Estrada, 2006).

The Bayesian Approach
I took my first course in Bayesian econometrics from John Geweke at the University of Minnesota in the fall of 1996. I remember how, during one of the lectures in that course, Geweke forecasted that in a few years, we would see a considerable proportion of papers in applied macro being written from a Bayesian perspective. I was rather skeptical about the prediction and dismissed Geweke's claim as an overly optimistic assessment by a committed Bayesian. Fortunately, Geweke was right and I was wrong. The last decade has indeed experienced an explosion of research using Bayesian methods; so much so that, during a recent talk, when I was presenting an estimation that for several reasons I had done using maximum likelihood, I was assailed by repeated instances of the question "why didn't you use Bayes?", a predicament rather unimaginable even a decade ago.
How did such a remarkable change come about? It would be tempting to re-enumerate, as has been done innumerable times before, the long list of theoretical advantages of Bayesian statistics and state that it was only a matter of time before economists would accept the obvious superiority of the Bayes choice. In fact, I will momentarily punish the reader with yet one more review of some of those advantages, just to be sure that we are all on board.
But the simpler truth is that, suddenly, doing Bayesian econometrics was easier than doing maximum likelihood.7 The reason is that maximizing a complicated, high-dimensional function like the likelihood of a DSGE model is actually much harder than integrating it, which is what we do in a Bayesian exercise. First, the likelihood of DSGE models is, as I have just mentioned, a high-dimensional object, with a dozen or so parameters in the simplest cases and close to a hundred in some of the richest models in the literature. Any search in a high-dimensional function is fraught with peril. More pointedly, likelihoods of DSGE models are full of local maxima and minima and of nearly flat surfaces. This is due both to the sparsity of the data (quarterly data do not give us the luxury of the many observations that micro panels provide) and to the flexibility of DSGE models in generating similar behavior with relatively different combinations of parameter values (every time you see a sensitivity analysis claiming that the results of the paper are robust to changes in parameter values, think about flat likelihoods).

7 This revival of Bayesian tools is by no means limited to econometrics. Bayesian methods have become extremely popular in many fields, such as genetics, cognitive science, weather analysis, and computer science. The forthcoming Handbook of Applied Bayesian Analysis edited by O'Hagan and West is a good survey of Bayesian statistics across many different disciplines.
Consequently, even sophisticated maximization algorithms like simulated annealing or the simplex method run into serious difficulties when maximizing the likelihoods of dynamic models. Moreover, the standard errors of the estimates are notoriously difficult to compute and their asymptotic distribution a poor approximation to the small sample one.
In comparison, Markov chain Monte Carlo (McMc) methods have a much easier time exploring the likelihood (more precisely, the likelihood times the prior) of DSGE models and offer a thorough view of our object of interest. That is why we may want to use McMc methods even when dealing with classical problems. Chernozhukov and Hong (2003) is a path-breaking paper that brought that possibility to the attention of the profession. Even more relevantly, McMc can be transported from application to application with a relatively small degree of fine-tuning, an attractive property since the comparative advantage of most economists is not in numerical analysis (nor, one suspects, their absolute advantage).
I promised before, though, that before entering into a more detailed description of techniques like McMc, I would inflict upon the reader yet another enumeration of the advantages of Bayesian thinking. But fortunately, this will be, given the circumstances of this paper, a rather short introduction. A whole textbook treatment of Bayesian statistics can be found in several excellent books on the market, among which I will recommend Robert (2001) and Bernardo and Smith (2000).
I start with a point that Chris Sims repeatedly makes in his talks: Bayesian inference is a way of thinking, not a "basket" of methods. Classical statistics searches for procedures that work well ex ante, i.e., procedures that applied to a repeated number of samples will deliver the right answer in a prespecified percentage of cases. This prescription is not, however, a constructive recipe. It tells us a property of the procedure we want to build and not how to build it. Consequently, we can come up with a large list of procedures that achieve the same objective without a clear metric to pick among them. The best possible illustration is the large number of tests that can be defined to evaluate the null hypothesis of cointegration of two random variables, each with its strengths and weaknesses. Furthermore, the procedures may be quite different in their philosophy and interpretation. In comparison, Bayesian inference is summarized in one simple idea: Bayes' theorem. Instead of spending our time proving yet one more asymptotic distribution of a novel estimator, we can go directly to the data, apply Bayes' theorem, and learn from it. As simple as that.
Let me outline the elements that appear in the theorem:

1. Some data $y^T \equiv \{y_t\}_{t=1}^T \in \mathbb{R}^{N \cdot T}$. For simplicity, I will use an index $t$ that is more natural in a time series context like the one I will use below, but minimal work would adapt the notation to cross-sections or panels. From the Bayesian perspective, data are always given and, in most contexts, it does not make much sense to think about them as the realization of some data-generating process (except, perhaps, when exploring some asymptotic properties of Bayesian methods, as in Phillips and Ploberger, 1996).

2. A likelihood function $p(y^T \mid \theta, i): \mathbb{R}^{N \cdot T} \times \Theta_i \rightarrow \mathbb{R}^{+}$ that tells us the probability that model $i$ assigns to each observation given some parameter values $\theta \in \Theta_i$. This likelihood function is nothing more than the restrictions that our model imposes on the data, whether coming from statistical considerations or from equilibrium conditions.

3. A prior distribution $\pi(\theta \mid i): \Theta_i \rightarrow \mathbb{R}^{+}$ that captures pre-sample beliefs about the right value of the parameters (yes, "right" is an awfully ambiguous word; I will come back later to what I mean by it). Pre-sample beliefs often restrict the support of the parameters; for example, it is common to bound the discount factor in an intertemporal choice problem to ensure that total utility is well defined.
Bayes' theorem tells us that the posterior distribution of the parameters is given by:

$$\pi\left(\theta \mid y^T, i\right) = \frac{p\left(y^T \mid \theta, i\right) \pi\left(\theta \mid i\right)}{\int p\left(y^T \mid \theta, i\right) \pi\left(\theta \mid i\right) d\theta}.$$

This result, which follows from a basic application of the laws of probability, tells us how we should update our beliefs about parameter values: we combine our prior beliefs, $\pi(\theta \mid i)$, with the sample information embodied in the likelihood, $p(y^T \mid \theta, i)$, and we obtain a new set of beliefs, $\pi(\theta \mid y^T, i)$. In fact, Bayes' theorem is an optimal information processing rule as defined by Zellner (1988): it uses efficiently all of the available information in the data, both in small and large samples, without adding any extraneous information.
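To make the mechanics concrete, here is a minimal numerical sketch (not from the original argument, with an invented prior and simulated data) of Bayes' theorem at work: we evaluate the prior and the likelihood on a grid of parameter values and normalize their product to obtain the posterior.

```python
import numpy as np

# Hypothetical example: posterior for the mean of Gaussian data on a grid.
# Prior: theta ~ N(0, 1). Likelihood: y_t ~ N(theta, 1). Data are simulated.
rng = np.random.default_rng(0)
y = rng.normal(0.5, 1.0, size=50)           # invented data with "true" mean 0.5

theta = np.linspace(-2.0, 2.0, 1001)        # grid over the parameter space
dtheta = theta[1] - theta[0]
log_prior = -0.5 * theta**2                 # log kernel of the N(0,1) prior
log_lik = np.array([-0.5 * np.sum((y - th)**2) for th in theta])

log_post = log_prior + log_lik              # Bayes' theorem in logs
post = np.exp(log_post - log_post.max())    # exponentiate stably
post /= post.sum() * dtheta                 # normalize to a density on the grid

print("posterior mean:", np.sum(theta * post) * dtheta)
```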
Armed with Bayes' theorem, a researcher does not need many more tools. For any possible model, one just writes down the likelihood, elicits the prior, and obtains the posterior. Once we have the posterior distribution of the parameters, we can perform inference, like point estimation or model comparison, given a loss function that measures how much we lose when we select an incorrect parameter value or model. For sure, these tasks can be onerous in terms of implementation but, conceptually, they are straightforward. Consequently, issues such as nonstationarity do not require specific methods as needed in classical inference (see the eye-opening helicopter tour of Sims and Uhlig, 1991). If we suspect non-stationarities, we may want to change our priors to reflect that belief, but the likelihood function will still be the same and Bayes' theorem is applicable without the disconcerting discontinuities of classical procedures around the unit root.
But while coherence is certainly an attractive property, at least from an esthetic consideration, it is not enough by itself. A much more relevant point is that coherence is a consequence of the fact that Bayes' theorem can be derived from a set of axioms that decision theorists have proposed to characterize rational behavior. It is not an accident that the main solution concepts in games with incomplete information are Bayesian Nash equilibria and sequential equilibria and that Bayes' theorem plays a critical role in the construction of these solution concepts. It is ironic that we constantly see papers where the researcher specifies that the rational agents in the model follow Bayes' theorem and, then, she proceeds to estimate the model using classical procedures, undaunted by the implied logical contradiction.
Closely related to this point is the fact that the Bayesian approach satisfies by construction the Likelihood Principle (Berger and Wolpert, 1988), which states that all of the information existing in a sample is contained in the likelihood function. Once one learns how Birnbaum (1962) derived the Likelihood Principle from more primitive and widely accepted statistical postulates, it becomes difficult to dismiss. A compelling example of how unnatural it is to think in frequentist terms is to teach introductory statistics. Nearly all students will interpret confidence intervals at first as a probability interval. Only the repeated insistence of the instructor will make a disappointingly small minority of students understand the difference between the two and provide the right interpretation. The rest of the students, of course, will simply memorize the answer for the test in the same way they would memorize a sentence in Aramaic if such a worthless accomplishment were useful for getting a passing grade. Neither policy makers nor undergraduate students are silly (they are ignorant, but that is a very different sin); they just think in ways that are more natural to humans. Frequentist statements are beautiful but inconsequential.
Second, pre-sample information is often amazingly rich and considerably useful, and not taking advantage of it is an unforgivable omission. For instance, microeconometric evidence can guide our building of priors. If we have a substantial set of studies that estimate the discount factor of individuals and they find a range of values between 0.9 and 0.99, any sensible prior should take this information into consideration. A classic example is the elasticity of labor supply: critics of equilibrium business cycle models long complained that the high aggregate elasticities those models need are at odds with the low elasticities estimated in the micro data. However, the evidence that fed their attacks was gathered mainly for prime age white males in the United States (or a similarly restrictive group). But representative agent models are not about prime age white males: the representative agent is instead a stand-in for everyone in the economy. It has a bit of a prime age male and a bit of an old woman, a bit of a young minority worker and a bit of a part-timer. If much of the response of labor to changes in wages comes through the labor supply of women and young workers, it is perfectly possible to have a high aggregate elasticity of labor supply and a low labor supply elasticity of prime age males.
To illustrate this point, Rogerson and Wallenius (2007) construct an overlapping generations economy where the micro and macro elasticities are virtually unrelated. But we should not push the previous example to an exaggerated degree: it is a word of caution, not a licence to concoct wild priors. If the researcher wants to depart in her prior from the micro estimates, she must have at least some plausible explanation of why she is doing so (see Browning, Hansen, and Heckman, 1999, for a thorough discussion of the mapping between micro and macro estimates).
An alternative source of pre-sample information is the estimates of macro parameters from different countries. One of the main differences between economists and other social scientists is that we have a default belief that individuals are basically the same across countries and that differences in behavior can be accounted for by differences in relative prices. Therefore, if we have estimates from Germany that the discount factor in a DSGE model is around 0.98, it is perfectly reasonable to believe that the discount factor in Spain, if we estimate the same model, should be around 0.98. Admittedly, differences in demographics or financial markets may show up as slightly different discount factors but, again, the German experience is most informative. Pre-sample information is particularly convenient when we deal with emerging economies, when the data are extremely limited, or when we face a change in policy regime.9

Third, the Bayesian approach delivers not just point estimates but the whole posterior distribution of policy-relevant objects, such as the effect of an increase in public consumption. Such an object, with its whole assessment of risks, is a much more relevant tool for policy analysis. Classical procedures have a much more difficult time jumping from point estimates to whole distributions of policy-relevant objects.

9 We can push the arguments to the limit. Strictly speaking, we can perform Bayesian inference without any data: our posterior is just equal to the prior! We often face this situation. Imagine that we were back in 1917 and we had just heard about the Russian revolution. Since communism had never been tried, as economists we would need to endorse or reject the new economic system exclusively based on our priors about how well central planning could work. Waiting 70 years to see how well the whole experiment would work is not a reasonable course of action.
Finally, Bayesian econometrics deals in a natural way with misspecified models (Monfort, 1996). As the old saying goes, all models are false, but some are useful. Bayesians are not in the business of searching for the truth but only of coming up with a good description of the data. Hence, estimation moves away from being a process of discovery of some "true" value of a parameter to being, in Rissanen's (1986) powerful words, a selection device in the parameter space that maximizes our ability to use the model as a language in which to express the regular features of the data. Coming back to our previous discussion about "right" parameters, Rissanen is telling us to pick those parameter values that allow us to tell powerful economic stories and to exert control over outcomes of interest. These parameter values, which I will call "pseudotrue," may be, for example, the ones that minimize the Kullback-Leibler distance between the data generating process and the model (Fernández-Villaverde and Rubio-Ramírez, 2004, offer a detailed explanation of why we care about these "pseudotrue" parameter values).
Also, by thinking about models and parameters in this way, we come to the discussion of partially identified models initiated by Manski (1999) from a different perspective. Bayesians emphasize more the "normality" of a lack of identification than the problems caused by it.
Bayesians can still perform all of their work without further complications or the need for new theorems even with a flat posterior (and we can always achieve identification through non-flat priors, although such an accomplishment is slightly boring). For example, I can still perfectly evaluate the welfare consequences of one action if the posterior of my parameter values is flat in some or all of the parameter space. The answer I get may have a large degree of uncertainty, but there is nothing conceptually different about the inference process. This does not imply, of course, that identification is not a concern. I only mean that identification is a somehow different preoccupation for a Bayesian. I would not be fully honest, however, if I did not discuss, if only briefly, the disadvantages of Bayesian inference. The main one, in my opinion, is that many non-parametric and semiparametric approaches sound more natural when set up in a classical framework. Think about the case of the Generalized Method of Moments (GMM). The first time you hear about it in class, your brain (or at least mine!) goes "ah!, this makes perfect sense." And it does so because GMM (and all its related cousins in the literature on empirical likelihood, Owen, 2001, and, in economics, Kitamura and Stutzer, 1997) is a clear and intuitive procedure that has a transparent and direct link with first order conditions and equilibrium equations. Also, methods of moments are a good way to estimate models with multiple equilibria, since all of those equilibria need to satisfy certain first order conditions that we can exploit to come up with a set of moments.11 Even if you can cook up many things in a Bayesian framework that look a lot like GMM or empirical likelihood (see, for example, Kim, 1998, Schennach, 2005, or Ragusa, 2006, among several others), I have never been particularly satisfied with any of them and none has passed the "ah!" test that GMM overcomes with such an excellent grade.
Similarly, you can implement a non-parametric Bayesian analysis (see the textbook by Ghosh and Ramamoorthi, 2003, and, in economics, Chamberlain and Imbens, 2003). However, the methods are not as well developed as we would like, and the shining building of Bayesian statistics gets dirty with some awful discoveries such as the potentially bad asymptotic properties of Bayesian estimators (first pointed out by Freedman, 1963) or the breakdown of the likelihood principle (Robins and Ritov, 1997). Given that the literature is rapidly evolving, Bayesian methods may end up catching up with and even overcoming classical procedures for nonparametric and semiparametric problems, but this has not happened yet. In the meantime, the advantage in this sub-area seems to be in the frequentist camp.

The Tools
No matter how sound were the DSGE models presented by the literature or how compelling the arguments for Bayesian inference, the whole research program would not have taken off without the appearance of the right set of tools that made the practical implementation of the estimation of DSGE models feasible on a standard desktop computer. Otherwise, we would probably still be calibrating our models, which would be, in addition, much smaller and simpler. I will classify those tools in three sets. First, better and improved solution methods.
Second, methods to evaluate the likelihood of the model. Third, methods to explore the likelihood of the model.

11 A simple way to generate multiplicity of equilibria in a DSGE model that can be very relevant empirically is to have increasing returns to scale, as in Benhabib and Farmer (1992). For a macro perspective on the estimation of models with multiplicity of equilibria, see Jovanovic (1989) or Cooper (2002).

Solution Methods
DSGE models do not have, except for a few exceptions, a "paper and pencil" solution. Hence, we are forced to resort to numerical approximations to characterize the equilibrium dynamics of the model. Numerical analysis is not part of the standard curriculum at either the undergraduate or the graduate level. Consequently, the profession had a tough time accepting that analytic results are limited (despite the fact that the same limitation of closed-form results appears in most other sciences, where the transition to numerical approximations happened more thoroughly and with less soul searching). To make things worse, few economists were confident in dealing with the solution of stochastic difference functional equations, which are the core of the solution of a DSGE model. The first approaches were based on fitting the models to be solved into the framework described in standard optimal control textbooks. For example, Kydland and Prescott (1982) mapped their economy into a linear-quadratic problem of the sort the control literature had long studied.

Let me use the example of linearization, since it is the solution method that I will use below. Judd and Guu (1993) showed that linearization is not an ad hoc procedure but the first order term of a mainstream tool in scientific computation, perturbation. The idea of perturbation methods is to substitute the original problem, which is difficult to solve, with a simpler one that we know how to handle and to use the solution of the simpler model to approximate the solution of the problem we are interested in. In the case of DSGE models, we find an approximated solution by computing a Taylor expansion of the policy function describing the dynamics of the variables of the model around the deterministic steady state. Linearization, therefore, is just the first term of this Taylor expansion. But once we understand this, it is straightforward to get higher order expansions that are both analytically informative and more accurate (as in Schmitt-Grohé and Uribe, 2004). Similarly, we can apply all of the accumulated knowledge of perturbation methods in terms of theorems or in improving the performance of the method.

Evaluating the Likelihood Function
In our previous description of Bayes' theorem, the likelihood function of the model played a key role, since it was the object that we multiplied by our prior to obtain a posterior. The challenge is how to obtain the likelihood of a DSGE model for which we do not even have an analytic solution. The most general and powerful route is to employ the tools of state space representations and filtering theory.
Once we have the solution of the DSGE model in terms of its (approximated) policy functions, we can write the laws of motion of the variables in a state space representation that consists of a transition equation:

$$S_t = f\left(S_{t-1}, W_t; \gamma\right),$$

where $S_t$ is the vector of states that describes the situation of the model at any given moment in time, $W_t$ is a vector of innovations, and $\gamma$ is a vector with the structural parameters that describe technology, preferences, and information processes; and a measurement equation:

$$Y_t = g\left(S_t, V_t; \gamma\right),$$

where $Y_t$ are the observables and $V_t$ a set of shocks to the observables (like, but not necessarily, measurement errors).
While the transition equation is unique up to an equivalence class, the measurement equation depends on what we assume we can observe, a selection that may imply many degrees of freedom (and non-trivial consequences for inference; see the experiments in Guerrón-Quintana, 2008). The state space representation lends itself to many convenient computations. To begin with, the transition equation delivers the conditional density $p(S_t \mid S_{t-1}; \gamma)$ and the measurement equation the conditional density $p(Y_t \mid S_t; \gamma)$, and hence we can compute $p(Y_t \mid S_{t-1}; \gamma)$ (here I am omitting the technical details regarding the existence of these objects).
All of these conditional densities appear in the likelihood function in a slightly disguised way. If we want to evaluate the likelihood function of the observables $y^T$ at parameter values $\gamma$, $p(y^T; \gamma)$, we can start by taking advantage of the Markov structure of our state space representation to write:

$$p\left(y^T; \gamma\right) = \prod_{t=1}^{T} p\left(y_t \mid y^{t-1}; \gamma\right) = \prod_{t=1}^{T} \int p\left(y_t \mid S_t; \gamma\right) p\left(S_t \mid y^{t-1}; \gamma\right) dS_t.$$

Hence, knowledge of $\left\{p\left(S_t \mid y^{t-1}; \gamma\right)\right\}_{t=1}^{T}$ and $p\left(S_1; \gamma\right)$ allows the evaluation of the likelihood of the model.
Filtering theory is the branch of mathematics that is preoccupied precisely with finding the sequence of conditional distributions of states given observations, $\left\{p\left(S_t \mid y^{t-1}; \gamma\right)\right\}_{t=1}^{T}$. For this task, it relies on two fundamental tools, the Chapman-Kolmogorov equation:

$$p\left(S_{t+1} \mid y^t; \gamma\right) = \int p\left(S_{t+1} \mid S_t; \gamma\right) p\left(S_t \mid y^t; \gamma\right) dS_t,$$

and Bayes' theorem (yes, again):

$$p\left(S_t \mid y^t; \gamma\right) = \frac{p\left(y_t \mid S_t; \gamma\right) p\left(S_t \mid y^{t-1}; \gamma\right)}{p\left(y_t \mid y^{t-1}; \gamma\right)},$$

where

$$p\left(y_t \mid y^{t-1}; \gamma\right) = \int p\left(y_t \mid S_t; \gamma\right) p\left(S_t \mid y^{t-1}; \gamma\right) dS_t$$

is the conditional likelihood.
The Chapman-Kolmogorov equation, despite its intimidating name, tells us only that the distribution of states tomorrow given observations until today, $p(S_{t+1} \mid y^t; \gamma)$, is equal to the distribution of states today, $p(S_t \mid y^t; \gamma)$, times the transition probabilities $p(S_{t+1} \mid S_t; \gamma)$, integrated over all possible states. Therefore, the Chapman-Kolmogorov equation just gives us a forecasting rule for the evolution of states. Bayes' theorem updates the distribution of states $p(S_t \mid y^{t-1}; \gamma)$ when a new observation arrives given its probability $p(y_t \mid S_t; \gamma)$. By a recursive application of forecasting and updating, we can generate the complete sequence $\left\{p\left(S_t \mid y^{t-1}; \gamma\right)\right\}_{t=1}^{T}$ we are looking for.
While the Chapman-Kolmogorov equation and Bayes' theorem are mathematically rather straightforward objects, their practical implementation is cumbersome because they involve the computation of numerous integrals. Even when the number of states is moderate, the computational cost of these integrals makes an exact (or up to floating point accuracy) evaluation of the integrals unfeasible.
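A small worked example may help fix ideas. In a model with only two discrete states, the integrals above collapse to sums and the forecasting-updating recursion can be computed exactly. The following sketch, a hypothetical two-state chain with invented numbers rather than a DSGE model, implements the Chapman-Kolmogorov forecast and the Bayes update and accumulates the log-likelihood:

```python
import numpy as np

# Hypothetical two-state example: S_t in {0, 1} with transition matrix P,
# and y_t | S_t ~ N(mu[S_t], 1). All numbers are invented for illustration.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])                  # P[i, j] = Prob(S_t+1 = j | S_t = i)
mu = np.array([-1.0, 1.0])                  # state-dependent observation means

def normal_pdf(y, m):
    return np.exp(-0.5 * (y - m) ** 2) / np.sqrt(2 * np.pi)

def filter_loglik(y_obs, p0=np.array([0.5, 0.5])):
    """Forecast (Chapman-Kolmogorov) and update (Bayes), period by period."""
    loglik, p_filter = 0.0, p0
    for y in y_obs:
        p_pred = p_filter @ P               # p(S_t | y^{t-1}): forecasting step
        dens = normal_pdf(y, mu)            # p(y_t | S_t) for each state
        cond_lik = p_pred @ dens            # p(y_t | y^{t-1})
        loglik += np.log(cond_lik)
        p_filter = p_pred * dens / cond_lik # p(S_t | y^t): Bayes updating step
    return loglik

print(filter_loglik(np.array([-0.8, 1.2, 0.9, -1.1])))
```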

The Kalman Filter
To fix this computational problem, we have two routes. First, if the transition and measurement equations are linear and the shocks are normally distributed, we can take advantage of the observation that all of the relevant conditional distributions are Gaussian (this follows from the fact that linear combinations of normally distributed random variables are themselves normal). Therefore, we only need to keep track of the mean and variance of these conditional normals. The tracking of the moments is done through the Riccati equations of the Kalman filter (for more details, see any standard textbook, such as Harvey, 1989, or Stengel, 1994).
To do so, we start by writing the first order linear approximation to the solution of the model in the state space representation we introduced above:

$$s_t = A s_{t-1} + B w_t, \qquad (1)$$

$$y_t = C s_t + D v_t, \qquad (2)$$

where we use lower case letters to denote realizations of the random variables and where the vector of innovations to the model, $\varepsilon_t$, stacks $W_t$ and $V_t$ into $w_t$ and $v_t$.

Let us define the linear projections $s_{t|t-1} = E\left(s_t \mid Y^{t-1}\right)$ and $s_{t|t} = E\left(s_t \mid Y^t\right)$, where $Y^t = \{y_1, y_2, \ldots, y_t\}$ and the subindex tracks the conditioning set (i.e., $t|t-1$ means a draw at moment $t$ conditional on information until $t-1$). Also, we have the matrices of variances-covariances $P_{t|t-1} = E\left(s_t - s_{t|t-1}\right)\left(s_t - s_{t|t-1}\right)'$ and $P_{t|t} = E\left(s_t - s_{t|t}\right)\left(s_t - s_{t|t}\right)'$. Given these linear projections and the Gaussian structure of our state space representation, the one-step-ahead forecast error, $\nu_t = y_t - C s_{t|t-1}$, is white noise.

We forecast the evolution of states:

$$s_{t|t-1} = A s_{t-1|t-1}. \qquad (3)$$

The possible presence of correlation in the innovations does not change the nature of the filter (Stengel, 1994), so it is still the case that

$$s_{t|t} = s_{t|t-1} + K \nu_t, \qquad (4)$$

where $K$ is the Kalman gain at time $t$. Define the variance of the forecast as $V_y = C P_{t|t-1} C' + D D'$. Since $\nu_t$ is white noise, the conditional loglikelihood of the period observation $y_t$ is just:

$$\log p\left(y_t \mid y^{t-1}; \gamma\right) = -\frac{N}{2} \log 2\pi - \frac{1}{2} \log \left|V_y\right| - \frac{1}{2} \nu_t' V_y^{-1} \nu_t.$$

The last step is to update our estimates of the states. Define the residuals $\xi_{t|t-1} = s_t - s_{t|t-1}$ and $\xi_{t|t} = s_t - s_{t|t}$. Subtracting equation (3) from equation (1):

$$s_t - s_{t|t-1} = A\left(s_{t-1} - s_{t-1|t-1}\right) + B w_t.$$

Now subtract equation (4) from equation (1):

$$s_t - s_{t|t} = \left(s_t - s_{t|t-1}\right) - K\left(C s_t + D v_t - C s_{t|t-1}\right).$$

Taking variances, $P_{t|t-1}$ can be written as

$$P_{t|t-1} = A P_{t-1|t-1} A' + B B',$$

and for $P_{t|t}$ we have:

$$P_{t|t} = \left(I - K C\right) P_{t|t-1} \left(I - K C\right)' + K D D' K'.$$

The optimal gain $K^{opt}$ minimizes $P_{t|t}$, with first order condition

$$\frac{\partial \operatorname{Tr} P_{t|t}}{\partial K} = 0$$

and solution

$$K^{opt} = P_{t|t-1} C' V_y^{-1}.$$

Consequently, the updating equations are:

$$s_{t|t} = s_{t|t-1} + K^{opt} \nu_t, \qquad P_{t|t} = \left(I - K^{opt} C\right) P_{t|t-1},$$

and we close the iterations. We only need to apply the equations from $t = 1$ until $T$ and we can compute the loglikelihood function. The whole process takes only a fraction of a second on a modern laptop computer.

The Particle Filter
Unfortunately, if the model is non-linear or the shocks are not normally distributed, the relevant conditional distributions are no longer Gaussian and the Kalman filter cannot be applied. We can, however, approximate the sequence $\left\{p\left(S_t \mid y^{t-1}; \gamma\right)\right\}_{t=1}^{T}$ by simulation. Assume we have $N$ draws $\left\{s^i_{t|t-1}\right\}_{i=1}^N$ from each $p\left(S_t \mid y^{t-1}; \gamma\right)$. Then, by a trivial application of a law of large numbers:

$$p\left(y^T; \gamma\right) \simeq \prod_{t=1}^{T} \frac{1}{N} \sum_{i=1}^{N} p\left(y_t \mid s^i_{t|t-1}; \gamma\right). \qquad (9)$$

The problem is then how to draw from $\left\{p\left(S_t \mid y^{t-1}; \gamma\right)\right\}_{t=1}^{T}$. But, following Rubin (1988), we can apply sequential sampling:

Proposition 1. Let $\left\{s^i_{t|t-1}\right\}_{i=1}^N$ be a draw from $p\left(S_t \mid y^{t-1}; \gamma\right)$. Let $\left\{\widetilde{s}^i_t\right\}_{i=1}^N$ be a draw with replacement from $\left\{s^i_{t|t-1}\right\}_{i=1}^N$, where the resampling probability is given by

$$\omega^i_t = \frac{p\left(y_t \mid s^i_{t|t-1}; \gamma\right)}{\sum_{j=1}^{N} p\left(y_t \mid s^j_{t|t-1}; \gamma\right)}.$$

Then $\left\{\widetilde{s}^i_t\right\}_{i=1}^N$ is a draw from $p\left(S_t \mid y^t; \gamma\right)$.

Proposition 1 recursively uses a draw $\left\{s^i_{t|t-1}\right\}_{i=1}^N$ from $p\left(S_t \mid y^{t-1}; \gamma\right)$ to deliver a draw $\left\{\widetilde{s}^i_t\right\}_{i=1}^N$ from $p\left(S_t \mid y^t; \gamma\right)$. But this is nothing more than the update of our estimate of $S_t$ to add the information on $y_t$ that Bayes' theorem is asking for.
The reader may be surprised by the need to resample to obtain a new conditional distribution. However, without resampling, all of the sequences would become arbitrarily far away from the true sequence of states, and the sequence that is closer to the true states would dominate all of the remaining ones in weight. Hence, the simulation degenerates after a few steps and we cannot effectively evaluate the likelihood function, no matter how large $N$ is.
Once we have $\left\{\widetilde{s}^i_t\right\}_{i=1}^N$, we draw $N$ vectors of exogenous shocks to the model (for example, the productivity or the preference shocks) from their corresponding distributions and apply the law of motion for states to generate $\left\{s^i_{t+1|t}\right\}_{i=1}^N$. This step, known as the forecast step, puts us back at the beginning of Proposition 1, but with the difference that we have moved forward one period in our conditioning, from $t|t-1$ to $t+1|t$, implementing in that way the Chapman-Kolmogorov equation.
The following pseudo-code summarizes the description of the algorithm:

Step 0, Initialization: Set $t \leftarrow 1$. Sample $N$ values $\left\{s^i_{0|0}\right\}_{i=1}^N$ from $p\left(S_0; \gamma\right)$.

Step 1, Prediction: Sample $N$ values $\left\{s^i_{t|t-1}\right\}_{i=1}^N$ using $\left\{s^i_{t-1|t-1}\right\}_{i=1}^N$, the law of motion for states, and the distribution of shocks $\varepsilon_t$.

Step 2, Filtering: Assign to each draw $s^i_{t|t-1}$ the weight $\omega^i_t$ in Proposition 1.

Step 3, Sampling: Sample $N$ times with replacement from $\left\{s^i_{t|t-1}\right\}_{i=1}^N$ with probabilities $\left\{\omega^i_t\right\}_{i=1}^N$. Call each draw $s^i_{t|t}$. If $t < T$, set $t \leftarrow t+1$ and go to step 1. Otherwise stop.
With the simulation in hand, we just substitute $\left\{s^i_{t|t-1}\right\}_{i=1}^N$ into formula (9) and get an estimate of the likelihood of the model given $\gamma$. Del Moral and Jacod (2002) and Künsch (2005) show weak conditions for the consistency of this estimator and for a central limit theorem to apply.
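For concreteness, the following sketch implements the bootstrap particle filter described in the pseudo-code for a hypothetical scalar state space model with invented parameter values; a real application would replace the law of motion and the measurement density with those of the solved DSGE model:

```python
import numpy as np

# A minimal bootstrap particle filter following the pseudo-code above, for a
# hypothetical scalar model s_t = a*s_{t-1} + w_t, y_t = s_t + v_t with
# w_t ~ N(0, sigma_w^2), v_t ~ N(0, sigma_v^2). Parameter values are invented.
def particle_loglik(y, a=0.9, sigma_w=0.3, sigma_v=0.2, N=5000, seed=0):
    rng = np.random.default_rng(seed)
    s = rng.normal(0.0, sigma_w / np.sqrt(1 - a**2), size=N)  # draws from p(S_0)
    loglik = 0.0
    for y_t in y:
        # Step 1, prediction: propagate with the law of motion and the shocks.
        s = a * s + sigma_w * rng.normal(size=N)
        # Step 2, filtering: weights proportional to p(y_t | s^i_{t|t-1}).
        incr = np.exp(-0.5 * ((y_t - s) / sigma_v) ** 2) \
               / (sigma_v * np.sqrt(2 * np.pi))
        loglik += np.log(incr.mean())        # period contribution to (9), in logs
        # Step 3, sampling: resample with replacement to avoid degeneracy.
        w = incr / incr.sum()
        s = rng.choice(s, size=N, p=w)
    return loglik

y = np.array([0.1, -0.2, 0.05, 0.3, 0.2])    # stand-in observations
print(particle_loglik(y))
```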

Exploring the Likelihood Function
Once we have an evaluation of the likelihood function from filtering theory, we need to explore it, either by maximization or by description. As I explained before when I motivated the Bayesian choice, maximization is particularly challenging and the results are often not very robust. Consequently, I will not get into a discussion of how we can attempt to solve this complicated optimization. The Bayesian alternative is, of course, to find the posterior. Since the posterior of a DSGE model cannot be characterized analytically, we draw from it with a Markov chain Monte Carlo algorithm such as Metropolis-Hastings: from the current parameter values we generate a proposal, which we accept with probability one if it increases the posterior and with some probability less than 1 if it does not. In such a way, we always go toward the higher regions of the posterior but we also travel, with some probability, towards the lower regions. This procedure avoids getting trapped in local maxima. A simple pseudo-code for a plain vanilla Metropolis-Hastings algorithm is as follows:

Step 0, Initialization: Set $i \leftarrow 0$ and an initial $\theta_i$. Solve the model for $\theta_i$ and build the state space representation. Evaluate $\pi(\theta_i)$ and $p(y^T \mid \theta_i)$. Set $i \leftarrow i+1$.

Step 1, Proposal draw: Get a proposal draw $\theta^*_i$ from a proposal density $q\left(\theta_{i-1}, \theta^*_i\right)$.
Step 2, Solving the model: Solve the model for $\theta^*_i$ and build the new state space representation.
Step 3, Evaluating the proposal: Evaluate $\pi(\theta^*_i)$ and $p(y^T \mid \theta^*_i)$ with (9).
Step 4, Accept/reject: Draw $\chi_i \sim U(0,1)$. If $\chi_i \leq \frac{p\left(y^T \mid \theta^*_i\right) \pi\left(\theta^*_i\right) q\left(\theta^*_i, \theta_{i-1}\right)}{p\left(y^T \mid \theta_{i-1}\right) \pi\left(\theta_{i-1}\right) q\left(\theta_{i-1}, \theta^*_i\right)}$, set $\theta_i = \theta^*_i$; otherwise, set $\theta_i = \theta_{i-1}$.

Step 5, Iteration: If $i < M$, set $i \leftarrow i+1$ and go to step 1. Otherwise stop.
This algorithm requires us to specify a proposal density $q(\cdot, \cdot)$. The standard practice (and the easiest) is to choose a random walk proposal, $\theta^*_i = \theta_{i-1} + \eta_i$, $\eta_i \sim \mathcal{N}(0, \Sigma)$, where $\Sigma$ is a scaling matrix that the researcher selects to obtain the appropriate acceptance ratio of proposals (Roberts, Gelman, and Gilks, 1997, provide the user with guidelines for the optimal acceptance ratios that maximize the rate of convergence of the empirical distribution toward the ergodic distribution). Of course, we can always follow more sophisticated versions of the algorithm, but for most researchers, the time and effort involved in refinements will not compensate for the improvements in efficiency.
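The following sketch implements the random walk Metropolis-Hastings algorithm just described. The posterior kernel is a placeholder (in an actual estimation it would solve the model and evaluate the prior times the filtered likelihood), and the scaling matrix is chosen arbitrarily:

```python
import numpy as np

# Random walk Metropolis-Hastings. log_post is a hypothetical stand-in for
# the log posterior kernel log pi(theta) + log p(y^T | theta) of a real model.
def log_post(theta):
    return -0.5 * np.sum((theta - 1.0) ** 2)  # placeholder posterior kernel

def rw_metropolis(theta0, Sigma_chol, M=10000, seed=0):
    rng = np.random.default_rng(seed)
    d = theta0.size
    draws = np.empty((M, d))
    theta, lp = theta0, log_post(theta0)
    accepted = 0
    for i in range(M):
        prop = theta + Sigma_chol @ rng.normal(size=d)   # proposal draw
        lp_prop = log_post(prop)
        # Symmetric proposal: accept with probability min(1, posterior ratio).
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
            accepted += 1
        draws[i] = theta                                 # store current state
    return draws, accepted / M

draws, acc = rw_metropolis(np.zeros(2), 0.5 * np.eye(2))
print("acceptance ratio:", acc, "posterior mean:", draws[2000:].mean(axis=0))
```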
If we are using the particle filter, we need to keep the random numbers of the simulation constant across iterations of the Metropolis-Hastings algorithm. As emphasized by McFadden (1989) and Pakes and Pollard (1989), fixing the random numbers across iterations is required to achieve stochastic equicontinuity. Thanks to it, the pointwise convergence of the likelihood (9) to the exact likelihood we stated above becomes uniform convergence. Although not strictly necessary in a Bayesian context, uniform continuity minimizes the numerical instabilities created by the "chatter" of random numbers across iterations.
Once we have run the algorithm for a sufficient number of iterations (see Mengersen, Robert, and Guihenneuc-Jouyaux, 1999, for a review of convergence tests), we can perform inference: we have an empirical approximation of the posterior of the model, and finding means, standard deviations, and other objects of interest is a trivial task. In the interest of space, I omit a discussion of how to select a good initial value $\theta_0$. The values we would have for a standard calibration exercise are, in general, a good default choice.
The reader who is not familiar with the Metropolis-Hastings algorithm may feel that the previous discussion introduced many concepts. Yes, but none of them is particularly deep once one has thought about them a bit more carefully. Most important, once you get the gist of it, McMc methods are surprisingly easy to code, much more so, in fact, than even simple optimization algorithms, and they can easily be recycled for future estimations. This is why I said in section 3 that nowadays doing Bayesian econometrics is easier than doing classical inference.

An Application
The previous pages would look dry and abstract without an application that illustrates how we do things in practice. Hence, I present a simple estimation exercise that I borrow from a recent paper I coauthored with Juan Rubio-Ramírez (Fernández-Villaverde and Rubio-Ramírez, 2008). Since in that paper we were trying to explore how stable the parameters of DSGE models were when we let them vary over time, we took care in estimating a model that could be easily accepted by as many readers as possible as embodying the consensus of modern business cycle research: a New Keynesian economy built around a neoclassical core. Keynes (1936) complained in the General Theory that David Hume had a foot and a half in the classical world. Modern DSGE models fully share this nature.17 In addition, the model introduces a number of real and nominal rigidities that generate the higher degree of persistence we see in the data and allow for a non-trivial role of monetary policy, which, as we discussed in section 2, perhaps we also find in the data.
However, we need to remember the many shortcomings of the model. We may as well begin with its core, the neoclassical growth model. Growth theorists have accumulated many objections to the basic growth model: it does not have an endogenous mechanism for long-run growth, the existence of a balanced growth path violates some observations in the data, the model does not account for the large cross-country differences in income per capita, and so forth. Our model will suffer from all of those objections.
The second line of criticism regards the nominal rigidities, which are added in an ad hoc way through Calvo pricing. Beyond the lack of microfoundations, Calvo pricing misses many features of the microeconomic evidence on price setting. I will return to the problems of Calvo pricing in section 6; in the model, both prices and wages will be subject to rigidities that limit how often they can be changed.

17 That is why many argue, with some plausibility, that New Keynesian models are not that Keynesian after all (see Farmer, 2007). Given the importance they give to a neoclassical core, the key role of money, and the preference they generate for low and stable inflation, we could just as well call them neomonetarist models. However, after seeing the recent uproar at the University of Chicago regarding the new Milton Friedman Institute for Research in Economics, it is clear that the New Keynesian brand still sells better in many quarters.

The Households
The first type of agents in our model will be the households. We want to have a continuum of them because, in that way, we can generate a whole distribution of wages in the economy, with each household charging its own differentiated wage. At the same time, we do not want to have too much heterogeneity, because this will make computing the model a daunting task.
The trick to combining different wages with limited heterogeneity is to assume a utility function separable in consumption and labor, plus complete markets. Complete markets give us the basic risk-sharing result that, in equilibrium, marginal utilities are equated. If utility is separable in consumption, then perfect risk-sharing implies that all households consume the same amount of the final good and hold the same level of capital, collapsing the distribution of agents along that dimension. Finally, the requirement that we have a balanced growth path implies that we want to consider utility functions of the form:

$$E_0 \sum_{t=0}^{\infty} \beta^t d_t \left\{ \log\left(c_{jt} - h c_{jt-1}\right) + \upsilon \log \frac{mo_{jt}}{p_t} - \varphi_t \psi \frac{l_{jt}^{1+\vartheta}}{1+\vartheta} \right\},$$

where $j$ is the index of the household, $E_0$ is the conditional expectation operator, $c_{jt}$ is consumption, $mo_{jt}/p_t$ are real money balances, $p_t$ is the aggregate price level, and $l_{jt}$ is hours worked. In addition, we have the discount factor, $\beta$, a degree of habit persistence, $h$, which will help to induce inertia in the responses of consumption to shocks, and the Frisch labor supply elasticity, $1/\vartheta$. The period utility function is shifted by two shocks. First, a shock to intertemporal preferences, $d_t$, that works as a demand shock, inducing agents to consume more or less in the current period. Second, a shock to labor supply, $\varphi_t$, to capture the movements in the observed wedge in the first order condition relating consumption and labor (Hall, 1997). For simplicity, we postulate that both shocks follow an autoregressive process of order 1 in logs:

$$\log d_t = \rho_d \log d_{t-1} + \sigma_d \varepsilon_{d,t}, \qquad \varepsilon_{d,t} \sim \mathcal{N}(0,1),$$

$$\log \varphi_t = \rho_\varphi \log \varphi_{t-1} + \sigma_\varphi \varepsilon_{\varphi,t}, \qquad \varepsilon_{\varphi,t} \sim \mathcal{N}(0,1).$$

The standard deviations of the shocks, $\sigma_d$ and $\sigma_\varphi$, are constant over time, but we could easily accommodate time-varying volatility to capture the decline in the aggregate volatility of the Western economies over the last decades that has been named the "Great Moderation" by Stock and Watson (2003).
Households trade on the whole set of Arrow-Debreu securities, contingent on idiosyncratic and aggregate events. My notation $a_{jt+1}$ indicates the amount of those securities that pay one unit of consumption in event $\omega_{j,t+1,t}$ purchased by household $j$ at time $t$ at (real) price $q_{jt+1,t}$. To save on notation, we drop the explicit dependence on the event. Households also hold an amount $b_{jt}$ of government bonds that pay a nominal gross interest rate of $R_t$ and invest $x_{jt}$. Then, the $j$th household's budget constraint is:

$$c_{jt} + x_{jt} + \frac{mo_{jt}}{p_t} + \frac{b_{jt+1}}{p_t} + \int q_{jt+1,t}\, a_{jt+1}\, d\omega_{j,t+1,t} = w_{jt} l_{jt} + \left(r_t u_{jt} - \mu_t^{-1} \Phi\left[u_{jt}\right]\right) k_{jt-1} + \frac{mo_{jt-1}}{p_t} + R_{t-1} \frac{b_{jt}}{p_t} + a_{jt} + T_t + F_t,$$

where $w_{jt}$ is the real wage paid per unit of labor, $r_t$ the real rental price of capital, $u_{jt} > 0$ the utilization rate of capital, $\mu_t^{-1} \Phi\left[u_{jt}\right]$ the physical cost of utilization rate $u_{jt}$ in resource terms (where $\Phi[u] = \Phi_1 (u-1) + \frac{\Phi_2}{2} (u-1)^2$ and $\Phi_1, \Phi_2 \geq 0$), $\mu_t$ an investment-specific technological shock that shifts the relative price of capital, $T_t$ a lump-sum transfer from the government, and $F_t$ the household's share of the profits of the firms in the economy. This budget constraint is slightly different from a conventional one because households are monopolistic suppliers of their own type of work $j$. Therefore, the household fixes $w_{jt}$ (subject to some rigidities to be specified below) and supplies the amount of labor $l_{jt}$ demanded at that wage. We can think of the household either as an independent businessman who can set his own rate or as a union that negotiates a particular wage rate. This assumption is relatively inconsequential. At the cost of some additional algebra, we could also let firms set wages and let households supply the desired labor at such wages.
The law of motion for capital is given by:

$$k_{jt} = (1-\delta) k_{jt-1} + \mu_t \left(1 - S\left[\frac{x_{jt}}{x_{jt-1}}\right]\right) x_{jt},$$

where $\delta$ is the depreciation rate. We have a quadratic adjustment cost function

$$S\left[\frac{x_{jt}}{x_{jt-1}}\right] = \frac{\kappa}{2} \left(\frac{x_{jt}}{x_{jt-1}} - \Lambda_x\right)^2,$$

where $\kappa \geq 0$ and $\Lambda_x$ is the long-run growth of investment. The specification of the adjustment cost function captures the idea that the costs are with respect to moving away from the path of investment growth that we would have in the balanced growth path. In front of investment, we have an investment-specific technological shock $\mu_t$ that also follows an autoregressive process:

$$\mu_t = \mu_{t-1} \exp\left(\Lambda_\mu + z_{\mu,t}\right), \quad \text{where } z_{\mu,t} = \sigma_\mu \varepsilon_{\mu,t} \text{ and } \varepsilon_{\mu,t} \sim \mathcal{N}(0,1).$$

The investment-specific technological shock accounts for the fall in the relative price of capital observed in the U.S. economy since the Second World War, and it plays a crucial role in accounting for long-run growth and in generating business cycle fluctuations (see the rather compelling evidence in Greenwood, Hercowitz, and Krusell, 1997 and 2000). The process for investment-specific technological change generates the first unit root in the model and it will be one source of growth in the economy. The Lagrangian function that summarizes the problem of the household combines the period utility function with the budget constraint, with associated multiplier $\lambda_t$, and the law of motion for capital, with associated multiplier $Q_t$; the household chooses $c_{jt}$, $b_{jt}$, $u_{jt}$, $k_{jt}$, $x_{jt}$, $w_{jt}$, $l_{jt}$, and $a_{jt+1}$ (maximization with respect to money holdings comes from the budget constraint). Since I argued before that with complete markets and separable utility, marginal utilities will be equated in all states of nature and all periods, I do not need to index the multipliers by $j$.
The first order conditions with respect to $c_{jt}$, $b_{jt}$, $u_{jt}$, $k_{jt}$, and $x_{jt}$ are:

$$d_t \frac{1}{c_{jt} - h c_{jt-1}} - h \beta E_t \left[ d_{t+1} \frac{1}{c_{jt+1} - h c_{jt}} \right] = \lambda_t,$$

$$\lambda_t = \beta E_t \left[ \lambda_{t+1} \frac{R_t}{\Pi_{t+1}} \right],$$

$$r_t = \mu_t^{-1} \Phi'\left[u_{jt}\right],$$

$$Q_t = \beta E_t \left[ \lambda_{t+1} \left(r_{t+1} u_{jt+1} - \mu_{t+1}^{-1} \Phi\left[u_{jt+1}\right]\right) + Q_{t+1} (1-\delta) \right],$$

and

$$\lambda_t = Q_t \mu_t \left(1 - S\left[\frac{x_{jt}}{x_{jt-1}}\right] - S'\left[\frac{x_{jt}}{x_{jt-1}}\right] \frac{x_{jt}}{x_{jt-1}}\right) + \beta E_t \left[ Q_{t+1} \mu_{t+1} S'\left[\frac{x_{jt+1}}{x_{jt}}\right] \left(\frac{x_{jt+1}}{x_{jt}}\right)^2 \right],$$

where $\Pi_{t+1} = p_{t+1}/p_t$ is the gross inflation rate. I do not include the first order conditions with respect to the Arrow-Debreu securities, since we do not need them to solve for the equilibrium of the economy. Nevertheless, those first order conditions will be useful below to price the securities. In particular, from the second equation, we can see that $\beta \lambda_{t+1} / \lambda_t$ is the pricing kernel of the economy.
If we define the (marginal) Tobin's Q as $q_t = Q_t / \lambda_t$ (the value of installed capital in terms of its replacement cost), we find:

$$q_t = \beta E_t \left[ \frac{\lambda_{t+1}}{\lambda_t} \left( r_{t+1} u_{jt+1} - \mu_{t+1}^{-1} \Phi\left[u_{jt+1}\right] + q_{t+1} (1-\delta) \right) \right].$$

This equation tells us that the relative price of capital is equal to the (expected) discounted return we will get from it in the next period. Moreover, absent adjustment costs, the first order condition for investment implies that $q_t = \mu_t^{-1}$, i.e., the marginal Tobin's Q is equal to the replacement cost of capital (the relative price of capital), which falls over time as $\mu_t$ increases. Furthermore, if $\mu_t = 1$ (as we have in the basic real business cycle model), the relative price of capital is trivially equal to 1.
The necessary conditions with respect to labor and wages are more involved. There is a labor "packer" that aggregates the differentiated labor supplied by each household into a homogeneous labor unit that intermediate good producers hire in a competitive market. The aggregation is done through the following production function:

$$l_t^d = \left( \int_0^1 l_{jt}^{\frac{\eta - 1}{\eta}} \, dj \right)^{\frac{\eta}{\eta - 1}}, \qquad (10)$$

where $\eta$ is the elasticity of substitution among different types of labor and $l_t^d$ is the aggregate labor demand.
The labor "packer" maximizes pro…ts subject to the production function (10), taking as given all di¤erentiated labor wages w jt and the wage w t for l d t . Consequently, its maximization problem is: After some algebra, we get the input demand functions associated with this problem: which shows that the elasticity of substitution also controls the elasticity of demand for j th type of labor with respect to wages. Then, by using the zero pro…t condition for the labor "packer": Now we can specify the wage-setting mechanism. There are several mechanisms for introducing wage rigidities but one that is particularly clever and simple is a time-dependent rule called Calvo pricing. In each period, a fraction 1 w of households can reoptimize their wages and set a nominal value p t w jt . All other households can only partially index their wages by past in ‡ation with an indexation parameter w 2 [0; 1]. Therefore, the real wage of a household that has not been able to reoptimize it for periods is: The probability 1 w is the reduced-form representation of a more microfounded origin of wage rigidities (quadratic adjustment costs as in the original Calvo paper, 1983, contract costs, Caplin and Leahy, 1991 and 1997, or information limitations, Mankiw andReis, 2002, or Sims, 2002), which we do not include in the model to keep the number of equations within reasonable bounds. In section 6, I will discuss the problems of Calvo pricing in detail. Su¢ ce it to say here that, despite many potential problems, Calvo pricing is so simple that it still constitutes the natural benchmark for price and wage rigidities. This is due to the memoryless structure of the mechanism: we do not need to keep track of when wages reoptimized the last time, since w is time independent.
Relying on the separability of the utility function and the presence of complete markets, the only part of the Lagrangian that is affected by the wage and labor supply decisions of the household is:

$$\max_{w_{jt}} \; E_t \sum_{\tau=0}^{\infty} \left(\beta \theta_w\right)^{\tau} \left\{ -d_{t+\tau} \varphi_{t+\tau} \psi \frac{l_{jt+\tau}^{1+\vartheta}}{1+\vartheta} + \lambda_{t+\tau} \left( \prod_{s=1}^{\tau} \frac{\Pi_{t+s-1}^{\chi_w}}{\Pi_{t+s}} \right) w_{jt} l_{jt+\tau} \right\} \qquad (12)$$

subject to

$$l_{jt+\tau} = \left( \prod_{s=1}^{\tau} \frac{\Pi_{t+s-1}^{\chi_w}}{\Pi_{t+s}} \, \frac{w_{jt}}{w_{t+\tau}} \right)^{-\eta} l_{t+\tau}^d.$$

Note how we have modified the discount factor to include the probability $\theta_w$ that the household has to keep the wage for one more period. Once the household can reoptimize, the continuation of the decision problem is independent of our choice of wage today and, hence, we do not need to include it in this section of the Lagrangian in equation (12). We also assume that $\left(\beta \theta_w\right)^{\tau} \lambda_{t+\tau}$ goes to zero as $\tau$ grows for the previous sum to be well defined.
Also, because of complete markets, all of the households reoptimizing wages in the current period will pick the same wage, and we can drop the index $j$ from $w_{jt}$. The first order condition of this problem is then:

$$E_t \sum_{\tau=0}^{\infty} \left(\beta \theta_w\right)^{\tau} l_{t+\tau} \left\{ \frac{\eta - 1}{\eta} \lambda_{t+\tau} \left( \prod_{s=1}^{\tau} \frac{\Pi_{t+s-1}^{\chi_w}}{\Pi_{t+s}} \right) w_t^* - d_{t+\tau} \varphi_{t+\tau} \psi \, l_{t+\tau}^{\vartheta} \right\} = 0,$$

where $w_t^*$ is the new optimal wage. This expression involves infinite sums that are difficult to handle computationally. It is much simpler to rewrite the first order condition recursively in terms of an auxiliary variable $f_t$ that collects the infinite sums and that satisfies a one-period-ahead law of motion linking $f_t$ to $E_t f_{t+1}$ (the derivation is mechanical; see Fernández-Villaverde and Rubio-Ramírez, 2008, for the explicit expressions). Now, if we put the previous equations together and drop the $j$ indexes (which are redundant), we have the first order conditions, the budget constraint, and the law of motion for $f_t$, written in terms of $\pi_t^{w*} = w_t^* / w_t$. Finally, the real wage evolves as a geometric average of the past real wage and the new optimal wage:

$$w_t^{1-\eta} = \theta_w \left( \frac{\Pi_{t-1}^{\chi_w}}{\Pi_t} \right)^{1-\eta} w_{t-1}^{1-\eta} + \left(1 - \theta_w\right) \left(w_t^*\right)^{1-\eta}.$$

The Final Good Producer
There is one final good producer that aggregates intermediate goods $y_{it}$ with the production function:

$$y_t^d = \left( \int_0^1 y_{it}^{\frac{\varepsilon - 1}{\varepsilon}} \, di \right)^{\frac{\varepsilon}{\varepsilon - 1}}, \qquad (13)$$

where $\varepsilon$ is the elasticity of substitution. Similarly to the labor "packer," the final good producer is perfectly competitive and maximizes profits subject to the production function (13), taking as given all intermediate goods prices $p_{it}$ and the final good price $p_t$. Thus, the input demand functions associated with this problem are:

$$y_{it} = \left( \frac{p_{it}}{p_t} \right)^{-\varepsilon} y_t^d \quad \forall i,$$

where $y_t^d$ is aggregate demand, and the price level is:

$$p_t = \left( \int_0^1 p_{it}^{1-\varepsilon} \, di \right)^{\frac{1}{1-\varepsilon}}.$$

Intermediate Good Producers
As mentioned above, there is a continuum of intermediate goods producers, each of which has access to a production function
\[ y_{it} = A_t k_{it-1}^{\alpha} \left(l_{it}^d\right)^{1-\alpha} - \phi z_t, \]
where $k_{it-1}$ is the capital rented by the firm, $l_{it}^d$ is the amount of the labor input rented from the labor "packer," and where $A_t$, a neutral technology level, evolves as:
\[ A_t = A_{t-1} e^{\gamma_A + z_{A,t}}, \quad \text{where } z_{A,t} = \sigma_A \varepsilon_{A,t} \text{ and } \varepsilon_{A,t} \sim \mathcal{N}(0,1). \]
This process incorporates a second unit root in the model. The fixed cost of production $\phi$ is indexed by the variable $z_t = A_t^{\frac{1}{1-\alpha}} \Upsilon_t^{\frac{\alpha}{1-\alpha}}$ to make it grow with the economy (think, for example, of the fixed cost of paying some fees for keeping the factory open: it is natural to think that the fees will increase with income). Otherwise, the fixed cost would become asymptotically irrelevant. In a balanced growth path, $z_t$ is precisely the growth factor in the economy that we want to scale by. The role of the fixed cost is to roughly eliminate profits in equilibrium and to allow us to dispense with the entry and exit of intermediate good producers.
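Since the two technology processes drive everything that trends in the model, a small simulation sketch may be useful (my own illustration; the drifts and volatilities are made-up numbers, and I write U_t for the investment-specific process $\Upsilon_t$ in the code):

    import numpy as np

    # Simulate the two unit-root technology processes and the composite
    # trend z_t = A_t^(1/(1-alpha)) * U_t^(alpha/(1-alpha)).
    rng = np.random.default_rng(2)
    T, alpha = 200, 0.33
    gamma_A, sigma_A = 0.0028, 0.0070     # neutral technology drift, volatility
    gamma_U, sigma_U = 0.0042, 0.0050     # investment-specific counterparts
    logA = np.cumsum(gamma_A + sigma_A * rng.standard_normal(T))
    logU = np.cumsum(gamma_U + sigma_U * rng.standard_normal(T))
    logz = (logA + alpha * logU) / (1 - alpha)
    # Average growth of z_t combines both drifts:
    print(np.diff(logz).mean(), (gamma_A + alpha * gamma_U) / (1 - alpha))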
Since $z_t = A_t^{\frac{1}{1-\alpha}} \Upsilon_t^{\frac{\alpha}{1-\alpha}}$, we can combine the processes for $A_t$ and $\Upsilon_t$ to get:
\[ z_t = z_{t-1} e^{\gamma_z + z_{z,t}}, \quad \text{where } \gamma_z = \frac{\gamma_A + \alpha\gamma_\Upsilon}{1-\alpha} \text{ and } z_{z,t} = \frac{z_{A,t} + \alpha z_{\Upsilon,t}}{1-\alpha}. \]
Many of the variables in the economy, like $c_t$, will be cointegrated in equilibrium with $z_t$. The problem of intermediate goods producers can be chopped into two parts. First, given input prices $w_t$ and $r_t$, they rent $l_{it}^d$ and $k_{it-1}$ to minimize real cost:
\[ \min_{l_{it}^d, \, k_{it-1}} \; w_t l_{it}^d + r_t k_{it-1} \]
subject to their supply curve:
\[ y_{it} = A_t k_{it-1}^{\alpha} \left(l_{it}^d\right)^{1-\alpha} - \phi z_t. \]
The solution of this problem implies that all intermediate good firms equate their capital-labor ratio to the ratio of input prices times a constant:
\[ \frac{k_{it-1}}{l_{it}^d} = \frac{\alpha}{1-\alpha} \frac{w_t}{r_t}, \]
and that the marginal cost $mc_t$ is:
\[ mc_t = \left(\frac{1}{1-\alpha}\right)^{1-\alpha} \left(\frac{1}{\alpha}\right)^{\alpha} \frac{w_t^{1-\alpha} r_t^{\alpha}}{A_t}. \]
A useful observation is that neither of these expressions depends on $i$, since $A_t$ and input prices are common for all firms.
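The closed forms above are easy to verify numerically. The following sketch (my own check, with arbitrary parameter values) minimizes cost subject to the technology, ignoring the fixed cost, and compares the answer with the expressions for the capital-labor ratio and the marginal cost:

    import numpy as np
    from scipy.optimize import minimize

    alpha, A, w, r, y = 0.33, 1.0, 1.0, 0.04, 1.0

    def cost(x):
        k, l = np.exp(x)                  # log parametrization keeps inputs positive
        return w * l + r * k

    constraint = {"type": "eq",
                  "fun": lambda x: A * np.exp(alpha * x[0] + (1 - alpha) * x[1]) - y}
    sol = minimize(cost, x0=[0.0, 0.0], constraints=[constraint])
    k, l = np.exp(sol.x)
    print(k / l, alpha / (1 - alpha) * w / r)       # capital-labor ratio matches
    mc = (1 / (1 - alpha)) ** (1 - alpha) * (1 / alpha) ** alpha \
         * w ** (1 - alpha) * r ** alpha / A
    print(cost(sol.x) / y, mc)    # with constant returns, unit cost = marginal cost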
The second part of the problem is to set a price for the intermediate good. In a similar vein to the household, the intermediate good producer is subject to Calvo pricing, where now the probability of reoptimizing prices is $1-\theta_p$ and the indexation parameter is $\chi \in [0,1]$. Therefore, the problem of the firms is:
\[ \max_{p_{it}} \; \mathbb{E}_t \sum_{\tau=0}^{\infty} (\beta\theta_p)^{\tau} \frac{\lambda_{t+\tau}}{\lambda_t} \left( \prod_{s=1}^{\tau} \Pi_{t+s-1}^{\chi} \, \frac{p_{it}}{p_{t+\tau}} - mc_{t+\tau} \right) y_{it+\tau}, \]
subject to the input demand functions derived above, where future profits are valued using the pricing kernel $\beta^{\tau} \lambda_{t+\tau} / \lambda_t$.
The first order condition of this problem, after some algebra and noticing that all reoptimizing firms choose the same price $p_{it} = p_t^*$, tells us that the optimal price is equal to a weighted sum of future expected mark-ups.
We can express this condition recursively in terms of two auxiliary variables, $g_t^1$ and $g_t^2$, as:
\[ \varepsilon g_t^1 = (\varepsilon - 1) g_t^2, \]
where:
\[ g_t^1 = \lambda_t mc_t y_t^d + \beta\theta_p \, \mathbb{E}_t \left(\frac{\Pi_t^{\chi}}{\Pi_{t+1}}\right)^{-\varepsilon} g_{t+1}^1 \]
and:
\[ g_t^2 = \lambda_t \Pi_t^{*} y_t^d + \beta\theta_p \, \mathbb{E}_t \left(\frac{\Pi_t^{\chi}}{\Pi_{t+1}}\right)^{1-\varepsilon} \frac{\Pi_t^{*}}{\Pi_{t+1}^{*}} g_{t+1}^2, \]
with $\Pi_t^{*} = p_t^{*}/p_t$. Given Calvo's pricing, the price index evolves as:
\[ 1 = \theta_p \left(\frac{\Pi_{t-1}^{\chi}}{\Pi_t}\right)^{1-\varepsilon} + (1-\theta_p)\left(\Pi_t^{*}\right)^{1-\varepsilon}. \]

The Government Problem
The last agent in the model is the government. To simplify things, I forget about fiscal policy and I assume that the government follows a simple Taylor rule:
\[ \frac{R_t}{R} = \left(\frac{R_{t-1}}{R}\right)^{\gamma_R} \left( \left(\frac{\Pi_t}{\Pi}\right)^{\gamma_\Pi} \left(\frac{y_t^d / y_{t-1}^d}{\Lambda_z}\right)^{\gamma_y} \right)^{1-\gamma_R} e^{m_t} \]
that sets the short-term nominal interest rate as a function of past interest rates, inflation, and the "growth gap": the ratio between the growth of aggregate demand, $y_t^d / y_{t-1}^d$, and the average growth of the economy, $\Lambda_z$. Introducing this growth gap avoids the need to specify a measure of the output gap (always somehow arbitrary) and, more important, fits the evidence better (Orphanides, 2002). The term $m_t$ is a random shock to monetary policy such that $m_t = \sigma_m \varepsilon_{m,t}$, with $\varepsilon_{m,t} \sim \mathcal{N}(0,1)$. When we aggregate across firms and households, an inefficiency factor created by price dispersion appears in the aggregate production relationship and an inefficiency factor created by wage dispersion appears in aggregate labor; by Calvo's pricing, both dispersion factors evolve recursively. A definition of equilibrium in this economy is standard, and it is characterized by the first order conditions of the household, the first order conditions of the firms, the recursive definitions of $g_t^1$ and $g_t^2$, the Taylor rule of the government, and market clearing. Since the model has two unit roots, one in the investment-specific technological change and one in the neutral technological change, we need to rescale all the variables to avoid solving the model with non-stationary variables (a solution that is feasible, but most cumbersome).
The scaling will be given by the variable $z_t$ in such a way that, for any arbitrary variable $x_t$, we will have $\tilde{x}_t = x_t / z_t$. Partial exceptions are the variables $\tilde{r}_t = r_t \Upsilon_t$, $\tilde{q}_t = q_t \Upsilon_t$, and $\tilde{k}_t = k_t / (z_t \Upsilon_t)$. Once the model has been rescaled, we can find the steady state and solve the model by loglinearizing around the steady state.
Loglinearization is both a fast and efficient method for solving large-scale DSGE models.
I have documented elsewhere (Aruoba, Fernández-Villaverde, and Rubio-Ramírez, 2006) that it is a nice compromise between speed and accuracy in many applications of interest.
Furthermore, it can easily be extended to include higher order terms (Judd, 1998). Once I have solved the model, I use the Kalman filter to evaluate the likelihood of the model, given some parameter values. The whole process takes less than 1 second per evaluation of the likelihood.
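To make the last step concrete, here is a minimal sketch of a Kalman filter likelihood evaluation (my own code, not the paper's; the matrices T, R, Q, Z, H stand for whatever the loglinearized solution delivers for a given parameter vector, stacked in the state-space form s_t = T s_{t-1} + R eta_t, y_t = Z s_t + e_t):

    import numpy as np

    def kalman_loglik(y, T, R, Q, Z, H):
        # Transition:  s_t = T s_{t-1} + R eta_t,  eta_t ~ N(0, Q)
        # Measurement: y_t = Z s_t + e_t,          e_t  ~ N(0, H)
        n = T.shape[0]
        s, P = np.zeros(n), np.eye(n)            # initial state mean and covariance
        loglik = 0.0
        for obs in y:                            # y: sequence of observation vectors
            s = T @ s                            # predict the state
            P = T @ P @ T.T + R @ Q @ R.T
            v = obs - Z @ s                      # one-step-ahead forecast error
            F = Z @ P @ Z.T + H                  # forecast error covariance
            Finv = np.linalg.inv(F)
            loglik += -0.5 * (len(obs) * np.log(2.0 * np.pi)
                              + np.linalg.slogdet(F)[1] + v @ Finv @ v)
            K = P @ Z.T @ Finv                   # Kalman gain
            s = s + K @ v                        # update with the observation
            P = P - K @ Z @ P
        return loglik

In a Bayesian estimation, a function like this sits inside the Markov chain Monte Carlo loop: each proposed parameter vector is mapped into (T, R, Q, Z, H), the likelihood is evaluated as above, and the result is combined with the prior.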

Empirical Results
I estimate the DSGE model using five time series for the U.S. economy: 1) the relative price of investment with respect to the price of consumption, 2) real output per capita growth, 3) ... What do we learn from our estimates? First, the discount factor is very high, 0.998. This is quite usual in DSGE models, since the likelihood wants to match a low interest rate. (These results are also reported in Fernández-Villaverde, Guerrón-Quintana, and Rubio-Ramírez, 2008.)
Since we have long-run growth in the model, the log utility function generates a relatively high interest rate without the help of any discounting. Second, we have a very high degree of habit persistence, around 0.97. This is necessary to match the slow response of the economy to shocks, as documented by innumerable VAR exercises. Third, the Frisch elasticity of labor supply is 0.85 (1/1.17). This is a nice surprise, since it is a relatively low number, which makes it quite close to the estimates of the micro literature (in fact, some micro estimates are higher than 0.85!). Since one of the criticisms of DSGE models has traditionally been that they assume a large Frisch elasticity, our model does not suffer from this problem. Fourth, investment is subject to high adjustment costs, 9.51.

Areas of Future Research
In the next few pages, I will outline some of the potential areas of future research for the formulation and estimation of DSGE models. I do not attempt to map out all existing problems: beyond being rather foolish, such an attempt would take me dozens of pages just to briefly describe the open questions I am aware of. I will just talk about three questions I have been thinking about lately: better pricing mechanisms, asset pricing, and more robust inference.

Better Pricing Mechanisms
In our application, we assumed a simple Calvo pricing mechanism. Unfortunately, the jury is still out regarding how bad a simplification it is to assume that the probability of changing prices (or wages) is fixed and exogenous. Dotsey, King, and Wolman (1999), in an important paper, argue that state-dependent pricing (firms decide when to change prices given some costs and their states) is not only a more natural setup for thinking about rigidities but also an environment that may provide very different answers than the basic Calvo pricing.
More recently, Bils, Klenow, and Malin (2008) have presented compelling evidence that state-dependent pricing is also a better description of the data. Bils, Klenow, and Malin's paper is a remarkably nice contribution because the mapping between micro evidence on price and wage changes and nominal rigidity in the aggregate is particularly subtle. An interesting characteristic of our Calvo pricing mechanism is that all wages change in every period, some because of reoptimization and some because of indexation. Therefore, strictly speaking, the average duration of wages in this model is one period. In the normal setup, we equate a period with one quarter, which implies that any lower frequency of price changes in the data means that the model displays "excess" price volatility. A researcher must then set up a smart "mousetrap." Bils, Klenow, and Malin find their trap in the reset price inflation that they build from micro CPI data. This reset inflation clearly indicates that Calvo pricing cannot capture many of the features of the micro data and the estimated persistence of shocks. The bad news is, of course, that handling a state-dependent pricing model is rather challenging (we have to track a non-trivial distribution of prices), which limits our ability to estimate it. Being able to write, solve, and estimate DSGE models with better pricing mechanisms is, therefore, a first order of business.

Asset Pricing
So far, assets and asset pricing have only made a collateral appearance in our exposition.
This is a defect common to much of macroeconomics, where quantities (consumption, investment, hours worked) play a much bigger role than prices. However, if we take seriously the implications of DSGE models for quantities, it is inconsistent not to do the same for prices, in particular asset prices. You cannot believe the result while denying the mechanism: it is through asset prices that the market signals the need to increase or decrease current consumption and, in conjunction with wages, the level of hours worked. Furthermore, one of the key questions of modern macroeconomics, the welfare cost of aggregate fluctuations, is, in a precise sense, an exercise in asset pricing. Roughly speaking, a high market price for risk denotes a high welfare cost of aggregate fluctuations and a low market price for risk, a low welfare cost.
The problem with asset prices is, of course, that DSGE models do a terrible job of matching them: we cannot account for the risk-free interest rate (Weil, 1989), the equity premium (Mehra and Prescott, 1985), the excess volatility puzzle, the value premium, the slope of the yield curve, or any other of a long and ever-growing list of related observations (Campbell, 2003).
The origin of our concerns is that the stochastic discount factor (SDF) implied by the model does not covary with observed returns in the correct way (Hansen and Jagannathan, 1991). For ease of exposition, let me set $h = 0$ (the role of habits will become clearer in a moment) and use the equilibrium condition that individual consumption is equal to aggregate consumption. Then, the SDF $m_{t+1}$ is:
\[ m_{t+1} = \beta e^{-\gamma_z - z_{z,t+1}} \frac{\tilde{c}_t}{\tilde{c}_{t+1}}. \]
Since (detrended) consumption is rather smooth in the data, $\tilde{c}_t / \tilde{c}_{t+1} \approx 1$, and the variance of $z_{z,t+1}$ is low, we have that $\mathbb{E}_t m_{t+1} \approx \beta e^{-\gamma_z}$ and $\sigma_t(m_{t+1})$ is small.
To get a sense of the importance of the first result, we can plug in some reasonable values for the parameters. The annual long-run per capita growth of the U.S. economy between 1865 and 2007 has been around 1.9 percent. I then set $\gamma_z = 0.019$. For the discount factor, I pick $\beta = 0.999$, which is even higher than our point estimate in section 5 but which makes my point even stronger. Thus, the gross risk-free real interest rate $R_t$ is equal to:
\[ R_t \approx \frac{1}{\mathbb{E}_t m_{t+1}} \approx \frac{e^{\gamma_z}}{\beta} \approx 1.02. \]
However, in the data, we find that the risk-free real interest rate has been around 1 percent (Campbell, 2003). This is, in a nutshell, the risk-free interest rate puzzle: even in a context where agents practically do not discount the future and where the elasticity of intertemporal substitution (EIS) is 1, we generate an interest rate that is too high. By lowering $\beta$ or the EIS to more reasonable numbers, we only make the puzzle stronger. The extension of the previous formula for the general constant relative risk aversion utility function is:
\[ R_t \approx \frac{e^{\frac{\gamma_z}{\psi}}}{\beta}, \]
where $\psi$ is the EIS. Even by lowering the EIS to 0.5, we would have that $\frac{1}{\beta} e^{\frac{\gamma_z}{\psi}}$ would be around 1.04, which closes the door to any hope of ever matching the risk-free interest rate.
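The back-of-the-envelope numbers are easy to reproduce (my own check of the arithmetic above):

    import numpy as np

    beta, gamma_z = 0.999, 0.019
    for eis in (1.0, 0.5):
        R = np.exp(gamma_z / eis) / beta          # gross risk-free rate
        print(eis, round(R, 4))                   # ~1.020 for EIS = 1, ~1.040 for EIS = 0.5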
The second result, that $m_{t+1}$ fluctuates very little, implies that the market price for risk is also low. But this observation runs in completely the opposite direction of the equity premium puzzle, where, given historical stock returns, we require a large market price for risk.
How can we fix the behavior of the SDF in a DSGE model? My previous argument relied on three basic components. First, that consumption fluctuates little. Second, that the marginal utility of consumption fluctuates little when consumption changes by a small amount. And third, that the pricing of assets is done with the SDF. The first component seems robust.
Notwithstanding recent skepticism (Barro, 2006), I find little evidence of the large fluctuations in consumption that we would need to make the model work. Even during the Great Depression, the yearly fluctuations were smaller than the total drop in consumption over the whole episode (the number that Barro uses), and they were accompanied by an increase in leisure. Exploring the third component, perhaps with incomplete markets or with bounded rationality, is to venture into a wild territory beyond my current fancy. My summary dismissals of the first and last components force me to conclude that marginal utility must, somehow, fluctuate substantially when consumption moves just a little bit.
The standard constant relative risk aversion utility functions just cannot deliver these large fluctuations in marginal utility in general equilibrium. As we raise risk aversion, consumers respond by making their consumption decisions smoother. Indeed, for sufficiently large levels of risk aversion, consumption is so smooth that the market price for risk actually falls (Rouwenhorst, 1995).
Something we can do is to introduce habits, as I did in the model that I estimated before.
Then, the SDF becomes:
\[ m_{t+1} = \beta \frac{c_t - h c_{t-1}}{c_{t+1} - h c_t}, \]
and for a sufficiently high level of $h$, we can obtain large fluctuations of the SDF. The intuition is that, as $h \to 1$, we care about the ratio of the first differences in consumption and not the ratio of levels, and this ratio of first differences can be quite large. Habits are plausible (after a few trips in business class, coming back to coach is always a tremendous shock) and there may be some good biological reasons why nature has given us a utility function with habits (Becker and Rayo, 2007). At the same time, we do not know much about the right way to introduce habits in the utility function (the simple form postulated above is rather arbitrary and rejected by the data, as shown by Chen and Ludvigson, 2008), and habits generate interest rates that are too volatile.
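A small simulation makes the point (my own sketch, with a made-up consumption growth process; the SDF formula is the habit expression above, and I ignore the extra terms that internal habits would add to marginal utility):

    import numpy as np

    rng = np.random.default_rng(3)
    beta, T = 0.999, 100_000
    g = 0.005 + 0.005 * rng.standard_normal(T)    # smooth log consumption growth
    c = np.exp(np.cumsum(g))
    for h in (0.0, 0.9, 0.97):
        m = beta * (c[1:-1] - h * c[:-2]) / (c[2:] - h * c[1:-1])
        print(h, round(float(np.std(m)), 3))      # SDF volatility rises sharply with h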
Consequently, a second avenue is the exploration of "exotic preferences." Standard expected utility functions, like the one used in this paper, face many theoretical limitations.
Without being exhaustive: standard expected utility functions do not capture a preference for the timing of the resolution of uncertainty, they do not reflect attitudes toward ambiguity, and they cannot accommodate loss aversion. Moreover, the standard model assumes that economic agents do not fear misspecification: they are sure that the model in their heads is the same as the true description of the world. These limitations are potentially of empirical importance, as they may be behind our inability to account for many patterns in the data, in particular the puzzling behavior of the prices of many assets and the risk premia.

More Robust Inference
The relative disadvantage of Bayesian methods when dealing with semiparametrics that we discussed in section 3 is unsatisfactory. DSGE models are complex structures. To make the models useful, researchers add many mechanisms that affect the dynamics of the economy: sticky prices, sticky wages, adjustment costs, etc. In addition, DSGE models require many parametric assumptions: the utility function, the production function, the adjustment costs, the distribution of shocks, etc.
Some of those parametric choices are based on restrictions that the data impose on the theory. For example, the observation that the labor income share has been relatively constant since the 1950s suggests that a Cobb-Douglas production function may not be a bad approximation to reality (although this assumption itself is problematic: see the evidence in Young, 2005, among others). Similarly, the observation that the average labor supplied by adults in the U.S. economy has been relatively constant over the last several decades requires a utility function with a marginal rate of substitution between leisure and consumption that is linear in consumption.
Unfortunately, many other parametric assumptions do not have much of an empirical foundation. Instead, researchers choose parametric forms for those functions based only on convenience. For example, in the prototypical DSGE model that we presented in the previous section, the investment adjustment cost function $S(\cdot)$ plays an important role in the dynamics of the economy. However, we do not know much about this function. Even the mild restrictions that we imposed are not necessarily true in the data. For example, there is much evidence of non-convex adjustment costs at the plant level (Cooper and Haltiwanger, 2006) and of nonlinear aggregate dynamics (Caballero and Engel, 1999). Similarly, we assume a Gaussian structure for the shocks driving the dynamics of the economy. However, there is much evidence (Geweke, 1993, and Fernández-Villaverde and Rubio-Ramírez, 2007) that shocks to the economy are better described by distributions with fat tails. The situation is worrisome. Functional form misspecification may contaminate the whole inference exercise. Moreover, Heckman and Singer (1984) show that the estimates of dynamic models are inconsistent if auxiliary assumptions (in their case, the modeling of individual heterogeneity in duration models) are misspecified. These concerns raise the question of how we can conduct inference that is more robust to auxiliary assumptions, especially within a Bayesian framework.
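As a trivial illustration of the last point (my own sketch, not evidence from the papers cited): innovations drawn from a Student-t with few degrees of freedom display the excess kurtosis that a Gaussian specification rules out by construction.

    import numpy as np
    from scipy.stats import kurtosis

    rng = np.random.default_rng(4)
    gauss = rng.standard_normal(1_000_000)
    fat = rng.standard_t(5, size=1_000_000)       # Student-t with 5 degrees of freedom
    print(kurtosis(gauss), kurtosis(fat))         # ~0 versus ~6 (excess kurtosis)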
Researchers need to develop new techniques that allow for the estimation of DSGE models within a Bayesian framework where we can mix tight parametric assumptions along some dimensions with as much flexibility as possible in those aspects of the model in which we have less confidence. The potential benefits from these new methods are huge. This approach shares many lines of contact with Chen and Ludvigson (2008), a paper that has pioneered the use of more general classes of functions when estimating dynamic equilibrium models within the context of methods of moments. Also, I am intrigued by the possibilities of ideas like those in Álvarez and Jermann (2004), who use data from asset pricing to estimate the welfare cost of the business cycle without the need to specify particular preferences. From a more theoretical perspective, Kimball (2002) has worked out many implications of DSGE models that do not depend on parametric assumptions. Some of these implications are potentially usable for estimation.