1 Introduction

Recently developed business cycle models feature a recursive timing structure, according to which the decision rules of forward-looking, rationally optimizing agents reflect the presence of delayed information dissemination across the economy. Two main examples in the policy domain stand out: first, models where government spending entails planning lags and thus cannot respond to current economic developments—see e.g. Kormilitsina and Zubairy (2018), Schmitt-Grohé and Uribe (2012); second, monetary frameworks where slow-moving variables, such as consumption and wages, are bound not to respond on impact to unexpected changes in the policy rate, which only gradually propagate through the private sector—see e.g. Altig et al. (2011), Christiano et al. (2005).Footnote 1

In structural modeling, timing restrictions are micro-founded via informational constraints that relate agents’ expectation formation, and the ensuing decision rules, to increasing sequences of nested and temporarily asymmetric information sets—see Kormilitsina (2013). The timing mismatch between optimal decisions and the evolution of the state of the economy generates propagation delays for a subset of the exogenous forces (i.e. the structural shocks) driving short-run dynamics, with potentially significant implications for the model’s predictions about the time series properties of endogenous variables—see Angelini and Sorge (2021).

On the empirical front, a growing body of work in modern macroeconomics has underscored the role of informational frictions in reconciling the theoretical predictions of otherwise standard rational expectations frameworks with observed features of the data, e.g. the persistent and hump-shaped responses of inflation and output measures to unanticipated monetary shocks (Mankiw and Reis 2002), the relationship between inflation illusion and asset prices (Piazzesi and Schneider 2008), or the excess return predictability in financial markets (Bacchetta et al. 2009). While empirical work exploiting survey data has generally been supportive of imperfect information models, e.g. Branch (2007), Mankiw et al. (2003), the question of whether macroeconomic data favor the adoption of the recursive timing assumption over the common (unrestricted) timing protocol, conditional on a given DSGE structure, remains open to debate. We believe this modeling issue is a key concern for the specification of macroeconomic models and their empirical validation: failure to control for the informational transmission channel in estimated models can in principle distort inference on the relative contribution of aggregate shocks to business cycle fluctuations, and/or the assessment of competing policy measures whose effects are shaped, among other things, by agents’ expectations.

To tackle these issues, we provide a handy frequentist procedure, based on a bootstrap variant of the likelihood ratio (LR) test in state-space systems, to empirically assess the relevance of timing restrictions and the ensuing shock transmission delays in small-scale dynamic stochastic general equilibrium (DSGE) environments. Specifically, we submit to formal testing the null hypothesis that a subset of endogenous model variables of interest (e.g. the inflation rate) fails to simultaneously and/or fully respond to the state of the economy (e.g. movements in the nominal interest rate), against the alternative of contemporaneous timing. Upon estimating the model-implied set of endogenous responses across timing structures (restricted versus unrestricted), along with estimates of a given model’s structural parameters, information stemming from likelihood-based tests for the rational expectations cross-equation restrictions (CERs) placed by the model’s structure on its equilibrium (reduced form) representation can be exploited to evaluate the empirical plausibility of the recursive timing assumption in the DSGE context.

Operationally, we build on recent contributions by Angelini et al. (2022), Bårdsen and Fanelli (2015), Stoffer and Wall (1991) on estimation and hypothesis testing in state-space models. Stoffer and Wall (1991) propose a nonparametric Monte Carlo bootstrap that abstracts from distributional assumptions that are hardly valid in small to moderate samples. Bårdsen and Fanelli (2015) develop a frequentist approach to sequentially testing cointegration/common-trend restrictions along with conventional rational expectations CERs in DSGE models, arguing in favor of classical likelihood-based tests to handle both the long- and short-run restrictions placed by the model on its reduced form representation. Angelini et al. (2022) emphasize the role of bootstrap resampling as a conceptually simple diagnostic tool for asymptotic inference in estimated state-space models. Among other things, these authors show that, in the case of strong identification, the bootstrap maximum likelihood (ML) estimator of the structural parameters replicates the asymptotic distribution of the ML estimator, and prove formally that the restricted bootstrap (i.e. with the null hypothesis under investigation being imposed in estimation) is consistent. Under these circumstances, not only is the (standard or bootstrap) LR test asymptotically pivotal and chi-square distributed, but the bootstrap also tends to reduce the discrepancy between actual and nominal probabilities of type-I error. In fact, the bootstrap in DSGE models (and, more generally, in frameworks that admit a conventional state space representation) has the potential to mitigate the over-rejection phenomenon that characterizes tests of non-linear hypotheses relying on first-order asymptotic approximations in short time series, such as those usually employed for business cycle analysis. We indeed find that our resampling method improves upon the asymptotic LR test, as the empirical size of the bootstrap-based LR test tends to approach the chosen nominal level.

The computation of our bootstrap-based LR test for the timing-specific CERs associated with the DSGE model entails the estimation of the structural parameters, which in our setup is accomplished by maximizing the likelihood function of the (locally unique) reduced form equilibrium subject to the CERs. Model estimation via classical ML is not very common in the DSGE literature, and Bayesian methods are typically preferred given their inherent ability to deal with misspecification issues and small-sample inference—see Del Negro et al. (2007), Del Negro and Schorfheide (2009) among others. Bårdsen and Fanelli (2015) emphasize that classical statistical methods can also be useful for empirically evaluating small DSGE models, insofar as they offer a number of indications about the qualitative and/or quantitative features of the data that a given framework fails to adequately capture, with an explicit measure of adequacy being provided by the pre-fixed nominal probability of type-I error associated with the LR test. In a similar vein, the outcome of our testing strategy can be thought of as providing information about possible directions for enhancing coherence between theory and measurement in the class of small-scale DSGE models.

While linear Gaussian state space systems are in widespread use in macroeconometrics, e.g. Chan and Strachan (2023), likelihood-based inferential analysis in these models still remains relatively scant. A main obstacle to establishing the asymptotic properties of conventional likelihood-based tests is the well-known lack of identification of these models in the absence of further restrictions. In fact, since any similarity transform (or rotation) of the vector of latent variables by an arbitrary non-singular (conformable) matrix yields a state-space representation of the system with equivalent second-order properties, information on the autocovariance patterns of the observables fails to ensure identification of the state-space parameters; this in turn violates standard regularity conditions for likelihood-based inference, e.g. Komunjer and Ng (2011). Komunjer and Zhu (2020) ingeniously address this issue by locally re-parameterizing the state-space system in terms of a lower-dimensional canonical parameter which is identified by construction, without affecting the likelihood of the model; this in turn allows them to derive the asymptotic distribution, in conventional chi-squared form (with known degrees of freedom), of the LR test under several hypotheses of interest, which can therefore be used to assess the validity of DSGE model specifications.Footnote 2

We fully acknowledge the pervasiveness of weak identification (or lack thereof) of structural parameters in richly parameterized DSGE models, e.g. Canova and Sala (2009); Consolo et al. (2009); Mavroeidis (2010); Qu and Tkachenko (2012), and the fact that any direct test for CERs in dynamic macroeconomic models should preferably be set out when parameter (local) identifiability is ensured. Population identification of the deep parameters of the DSGE model, i.e. the existence of an injective mapping from the structural parameters to the reduced-form parameters under the CERs, can be numerically checked by means of appropriate rank conditions, e.g. Iskrev (2010); necessary and sufficient (rank/order) identification conditions in terms of equivalent spectral densities, which do not require the numerical evaluation of analytical moments, can also be invoked, e.g. Komunjer and Ng (2011). In order not to shift the focus onto identifiability issues, our investigation of the properties of the bootstrap-based LR test is therefore conducted under the assumption of strong identification, meaning that all the regularity conditions for standard asymptotic inference are at work. An advantage of our approach is that, as argued in Angelini et al. (2022), the asymptotic distribution of the bootstrap estimator of the structural parameters reveals possible identification failures. By the same token, failure of either version of the DSGE model (one with recursive timing, the other free of timing restrictions) to pass the LR test for the short-run CERs can be read as an indication to envision alternative structural frameworks and/or shock transmission mechanisms that may better capture some of the patterns observed in the data.

To showcase the validity of our testing procedure, we adopt the hybrid New Keynesian (NK) model popularized by Benati and Surico (2009), and formalize timing restrictions as follows: private sector variables (inflation and output gap) and expectations cannot respond on impact to monetary policy innovations, yet they can fully adjust to other sources of uncertainty and to the model’s states (e.g. past inflation). Since Rotemberg and Woodford (1997), NK structures embodying nominal rigidities and recursive timing with respect to the propagation of monetary policy surprises have been used to shed light on the origins of aggregate fluctuations and the historical evolution of the monetary transmission mechanism in the U.S. economy—see e.g. Altig et al. (2011), Boivin and Giannoni (2006), Christiano et al. (2005). In none of these studies is a formal test of the imposed timing restrictions performed, meaning that the estimated transmission mechanism necessarily reflects the (arbitrary) way timing restrictions are framed and embedded into the underlying model specification. Conditional on such restrictions being operative, the model’s general equilibrium dynamics is then evaluated in response to cyclical variation in the systematic component of the monetary policy rule as well as to unexpected changes in the Fed funds rate (policy surprises) in the US economy.

We first conduct a battery of Monte Carlo experiments in order to evaluate the empirical size properties of the proposed test, explicitly considering two distinct scenarios: one where information-based timing restrictions produce non-negligible variation in the dynamic adjustment paths of non-policy variables to the non-systematic component of monetary policy; and the other where (with the exclusion of the zero effect on impact) the dynamic properties of the model are almost identical across informational structures (restricted vs. unrestricted). In either case, simulation results robustly indicate that the bootstrap-based approach manages to counterbalance the tendency of the standard LR test to over-reject the hypothesis of structural timing restrictions in small samples, with rejection frequencies close to the \(5\%\) nominal level.

We then revisit the evidence on the transmission of monetary policy in the so-called Great Moderation period of U.S. macroeconomic history, using our likelihood-based testing approach. Characterized by a sharp decline in macroeconomic volatility that began in the mid-1980s, the Great Moderation has attracted a great deal of attention from macroeconomic analysts, interested in uncovering the deep causes of such phenomenon. One main view, supported by both system-based and reduced form evidence, has credibly attributed it to an active monetary policy behavior that managed to stabilize inflationary expectations via commitment to a strong response of the nominal interest rate to deviations of the inflation rate from the policy target—see, among others, Clarida et al. (2000); Fanelli (2012); Hirose et al. (2020); Lubik and Schorfheide (2004). Our estimation results appear to lend support to the conventional (unrestricted) timing protocol, whereby monetary policy shocks—i.e. unexpected exogenous changes in the Federal funds rate—have entailed contemporaneous effects on both inflation and output gap dynamics for the period running from 1985 to 2008. This finding calls for some caution in interpreting the responses of inflation and real economic activity to the conduct of monetary policy as estimated in the earlier NK literature that routinely adopted, without testing, timing restrictions on the observability of policy shocks, e.g. Altig et al. (2011), Boivin and Giannoni (2006).

The remainder of the paper is organized as follows. Section 2 reviews the general state space representation of the first-order approximate solution to DSGE models featuring timing restrictions. Section 3 introduces the testing problem and discusses the bootstrap algorithm used to test for the relevance of shock propagation delays in DSGE environments. Section 4 reports the outcome of our simulation experiments, whereas Section 5 presents an empirical application for the U.S. economy. Section 6 concludes.

2 Setup

2.1 General DSGE Model Representation

DSGE models are generally described by an \(n_f\)-dimensional stochastic difference system

$$\begin{aligned} E_t f \left( y_{t+1}, y_t, x_{t+1}, x_t; \theta , \sigma \right) =0 \end{aligned}$$
(1)

where the random processes \(\left( y_t \right)\) and \(\left( x_t \right)\) are defined on the same filtered probability space, and \(E_t\) is the conditional expectation associated with the underlying probability measure. The \(n_y\)-dimensional vector y collects the model’s endogenous jump variables, whereas the \(n_x\)-dimensional vector x contains \(n^1_x\) endogenous predetermined variables as well as \(n^2_x\) exogenous states (where \(n^1_x+n^2_x=n_x\), \(n_y+n_x=n_f\)). Finally, the vector \(\theta\) collects the structural parameters and the scalar \(\sigma \ge 0\) captures surrounding uncertainty, see Schmitt-Grohé and Uribe (2004).

Let the prime superscript denote one-step ahead variables. Under the common timing protocol, decision rules for all variables y depend on the whole set of states x. The linearly perturbed solution to (1) then reads as

$$\begin{aligned} y=g_x(\theta ) x,\quad x'=h_x(\theta ) x +\sigma \kappa (\theta ) \epsilon ' \end{aligned}$$
(2)

where the conformable matrix \(\kappa (\theta )\) loads the \(n^2_x\)-dimensional vector of structural economic shocks \(\epsilon \sim i.i.d.(0,I_{n^2_x})\) (e.g. preference shocks, supply-side shocks, policy innovations) on the state variables x, and the coefficient matrices \(g_x\) and \(h_x\) are evaluated at the non-stochastic steady state \(({\bar{y}}, {\bar{x}})\) solving (1) when \(\sigma =0\).
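For concreteness, the sketch below (in Python, with purely illustrative coefficient matrices that are not drawn from any particular model) shows how the perturbed solution (2) can be simulated forward once \(g_x(\theta )\), \(h_x(\theta )\) and \(\kappa (\theta )\) are available.

```python
import numpy as np

def simulate_first_order(g_x, h_x, kappa, sigma, T, rng=None):
    """Simulate y_t = g_x x_t, x_{t+1} = h_x x_t + sigma * kappa * eps_{t+1}
    from the first-order approximate solution (2); x_0 is set to zero,
    i.e. at the non-stochastic steady state in deviation form."""
    rng = np.random.default_rng() if rng is None else rng
    n_x, n_eps = kappa.shape
    x = np.zeros((T + 1, n_x))
    y = np.zeros((T + 1, g_x.shape[0]))
    for t in range(T):
        y[t] = g_x @ x[t]
        x[t + 1] = h_x @ x[t] + sigma * kappa @ rng.standard_normal(n_eps)
    y[T] = g_x @ x[T]
    return y, x

# purely hypothetical two-state, one-shock example
g_x = np.array([[0.5, -0.2]])
h_x = np.array([[0.9, 0.0], [0.1, 0.5]])
kappa = np.array([[1.0], [0.0]])
y_sim, x_sim = simulate_first_order(g_x, h_x, kappa, sigma=1.0, T=200)
```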

2.2 DSGE Models Under Timing Restrictions

Following Kormilitsina (2013), Sorge (2020), we are interested in a particular class of limited information DSGE models, namely those in which some state variables are unobserved in the current period or observed with some lag, and where some endogenous variables that adjust to observed states can serve as additional states forcing variation in other endogenous variables. This approach acknowledges the fact that it is not the date at which expectations are formed that matters, but rather the date and the structure of the information set upon which expectations are framed. In this context, restricted (or limited) information means that, in the face of exogenous shocks that do not occur simultaneously, agents’ expectation formation and the ensuing decision rules are to be conditioned on different information sets. Accordingly, timing restrictions are naturally formalized by means of fictitious informational sub-periods characterized by heterogeneous (across rational decision-makers) information sets and the associated process of expectations formation and the timing of decisions.Footnote 3

Remarkably, being rooted in the theory of perturbation of non-linear systems, this approach allows one to embed the assumed set of timing restrictions directly into the non-linear equilibrium conditions that fully characterize a given DSGE model; approximated (up to second-order) optimal decision rules can then be derived on the basis of the imposed informational structure. To this aim, let \(f = [f^y,\, f^x]'\) denote the set of \((n_y+n_x)\) equations of the model, and \({\mathcal {E}}_t\) the collection of (conditional) expectation operators accounting for the heterogeneous information sets, that is

$$\begin{aligned} {\mathcal {E}}_t\left[ f(y_{t+1}, y_t, x_{t+1}, x_t; \sigma ) \right] =\left( \begin{array}{c} E \left[ f^{(y, x)}_1 (y_{t+1}, y_t, x_{t+1}, x_t; \sigma ) \,\, \Big \vert \,\, \mathbb {I}_{1,t} \right] \\ \\ E \left[ f^{(y, x)}_2 (y_{t+1}, y_t, x_{t+1}, x_t; \sigma ) \,\,\Big \vert \,\, \mathbb {I}_{2,t} \right] \\ \vdots \\ E \left[ f^{(y, x)}_{n_y+n^1_x} (y_{t+1}, y_t, x_{t+1}, x_t; \sigma ) \,\,\Big \vert \,\, \mathbb {I}_{n_y+n^1_x, t} \right] \\ \\ f^{(x)}_{1}( x^2_{t+1}, x^2_t; \sigma ) \\ \\ f^{(x)}_{2}(x^2_{t+1}, x^2_t; \sigma ) \\ \vdots \\ f^{(x)}_{n^2_x}\left( x^2_{t+1}, x^2_t; \sigma \right) \end{array} \right) \end{aligned}$$
(3)

where \(f^{(y, x)}_k\) (\(k\le n_y+n^1_x\)) is the model’s equation used to pin down the k-th endogenous variable in \((y, x^1)\), conditional on the equilibrium values for the other endogenous variables and the relevant states, for which model-consistent expectations (optimal projections) at date t are determined on the basis of the restricted (and in principle different across these equations) information set \(\mathbb {I}_{k, t}\); and \(f^{(x)}_j\) (\(j \le n^2_x\)) is the possibly nonlinear equation that governs the dynamics of the j-th exogenous state variable \(x_j\). Evidently, one can make the DSGE model embody information-based timing restrictions simply by specifying the information sets \(\mathbb {I}_{i,t}\). We maintain, as assumed in the aforementioned literature, that all types of agents (indexed by the information sets \(\mathbb {I}_{i,t}\)) know the actual structure of the model and form expectations rationally. Differently from dynamic structures with persistently dispersed information, e.g. Kasa et al. (2014), the specification of information-based timing restrictions does not involve an infinite regress of expectations, and the underlying model’s representation will generically be finite dimensional, see Angelini and Sorge (2021).

As detailed in the Appendix, structural timing restrictions in the DSGE setting can be modeled via system partitions of the form

$$\begin{aligned} y=\left[ y_u; \, y_r \right] , \quad x=\left[ x_u; \, x_r \right] \end{aligned}$$
(4)

where \(y_u \in y\) collects endogenous variables which respond to the whole set of state variables x, and the \(n_{y_r}\)-dimensional vector \(y_r \in y\) includes variables that can respond to the observed states \(x_u \in x\) and to the best (minimum mean square error) forecast of the unobserved states \(x_r \in x\), whose dynamics obey

$$\begin{aligned} x'_{r} = P x_r +\sigma \epsilon '_{x_r},\quad \epsilon _{x_r} \sim i.i.d. N(0, 1) \end{aligned}$$
(5)

where P is a stable square matrix of autoregressive coefficients, and \(\epsilon _{x_r}\) collects the exogenous shocks associated with the states \(x_r\).

The non-linear recursive RE solution under timing restrictions is

$$\begin{aligned} y={\hat{g}}_{x}(\theta )\left( \begin{array}{c} x_u \\ x_r \\ x_{r,-1} \end{array} \right) , \quad x'={\hat{h}}_{x}(\theta )\left( \begin{array}{c} x_u \\ x_r \\ x_{r,-1} \end{array} \right) +\sigma \kappa (\theta ) \epsilon ' \end{aligned}$$
(6)

where the coefficient matrices

$$\begin{aligned} {\hat{g}}_{x}(\theta )=\left( \begin{array}{ccc} {\hat{g}}_{x_u}(\theta ) &{} {\hat{g}}_{x_r}(\theta ) &{} {\hat{g}}_{x_{r,-1}}(\theta ) \\ {\hat{j}}_{x_u}(\theta ) &{} 0_{n_{y_r} \times n_{x_r}} &{} {\hat{j}}_{x_{r,-1}} (\theta ) \end{array} \right) , \quad {\hat{h}}_{x}(\theta )=\left( \begin{array}{ccc} {\hat{h}}_{x_u}(\theta ) &{} {\hat{h}}_{x_r}(\theta ) &{} {\hat{h}}_{x_{r,-1}}(\theta ) \\ 0_{n_{x_r} \times n_{x_r}} &{} P(\theta ) &{} 0_{n_{x_r} \times n_{x_r}} \end{array} \right) \end{aligned}$$
(7)

are readily constructed via linear transformations of those entering (2)—please see the Appendix for full details on the solution method up to first-order of approximation, and Kormilitsina (2013) for a complete reference and examples.
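As a purely illustrative sketch, the block structure in (7) can be assembled as follows once the individual blocks have been computed; the block names mirror the notation in (7), and the inputs are placeholders rather than the output of any specific solution package.

```python
import numpy as np

def assemble_restricted_solution(g_xu, g_xr, g_xr_lag, j_xu, j_xr_lag,
                                 h_xu, h_xr, h_xr_lag, P):
    """Stack the blocks of (7): the restricted variables y_r load a zero
    block on the current unobserved states x_r, while x_r evolves through
    the autoregressive matrix P alone."""
    n_yr = j_xu.shape[0]     # number of restricted endogenous variables
    n_xr = P.shape[0]        # number of unobserved (restricted) states
    n_xu = h_xu.shape[1]     # number of observed states
    g_hat = np.block([
        [g_xu, g_xr, g_xr_lag],
        [j_xu, np.zeros((n_yr, n_xr)), j_xr_lag],
    ])
    h_hat = np.block([
        [h_xu, h_xr, h_xr_lag],
        [np.zeros((n_xr, n_xu)), P, np.zeros((n_xr, n_xr))],
    ])
    return g_hat, h_hat
```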

We remark that timing restrictions enrich the autocovariance patterns for the endogenous variables: matrices \({\hat{h}}_{x_r}\) and \({\hat{h}}_{x_{r,-1}}\) (and thereby \({\hat{g}}_{x_r}\) and \({\hat{g}}_{x_{r,-1}}\)) will generally differ from those implied by the counterpart model with unrestricted timing—see Angelini and Sorge (2021). Remarkably, dynamic impulse responses and other statistics will depend (among other things) on the structure of the matrix \({\hat{j}}_{x_{r,-1}}\) which maps the lagged exogenous states \(x_{r,-1}\) into the restricted endogenous variables \(y_r\), and that of the matrix \({\hat{g}}_{x_{r,-1}}\) which governs the dependence of the fully endogenous variables \(y_u\) on the lagged states \(x_{r,-1}\). Information contained in the likelihood function can therefore be exploited to derive (classical or Bayesian) inference about the relevance of delayed propagation for the shock(s) of interest.Footnote 4

To frame our testing procedure, we confront the structural form in (6), which embodies the timing restrictions, with the following state-space counterpart

$$\begin{aligned} y={\tilde{g}}_{x}(\phi )\left( \begin{array}{c} x_u \\ x_r \\ x_{r,-1} \end{array} \right) , \quad x'={\tilde{h}}_{x}(\phi )\left( \begin{array}{c} x_u \\ x_r \\ x_{r,-1} \end{array} \right) +\sigma \kappa (\phi ) \epsilon ' \end{aligned}$$
(8)

with the non-zero parameters in \({\tilde{h}}_{x}\) and \({\tilde{g}}_{x}\) collected in the vector \(\phi\) (i.e. \({\tilde{g}}_{x}={\tilde{g}}_{x}(\phi )\) and \({\tilde{h}}_{x}={\tilde{h}}_{x}(\phi )\)). Notice that, when the P matrix is non-empty, the time series representation for the endogenous variables y in (6) is in VARMA-type form, even when its unrestricted counterpart (2) admits a finite-order VAR representation. The testing strategy discussed below exploits the Kalman filter to evaluate the likelihood function associated with the minimal state-space representation of the system (6) under the implicit non-linear CERs embodied in (7), maintaining that the regularity (identification) conditions for standard asymptotic inference in the state-space representation of the DSGE model are valid both under the null and the alternative.Footnote 5
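The following minimal sketch illustrates the prediction-error decomposition of the Gaussian log-likelihood delivered by the Kalman filter for a generic state-space form \(y_t = G s_t\), \(s_{t+1}=H s_t + w_{t+1}\), \(w_t \sim N(0,Q)\); it assumes a stationary state vector and no measurement error, and the matrices G, H, Q are placeholders for the reduced-form system matrices discussed above.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def kalman_loglik(y, G, H, Q):
    """Gaussian log-likelihood of y_t = G s_t, s_{t+1} = H s_t + w_{t+1},
    w_t ~ N(0, Q), via the prediction-error decomposition; the filter is
    initialized at the unconditional moments of the stationary state."""
    T, n_y = y.shape
    s = np.zeros(H.shape[0])                 # predicted state s_{t|t-1}
    P = solve_discrete_lyapunov(H, Q)        # unconditional state covariance
    loglik = 0.0
    for t in range(T):
        v = y[t] - G @ s                     # innovation
        S = G @ P @ G.T                      # innovation covariance
        S_inv = np.linalg.inv(S)
        loglik -= 0.5 * (n_y * np.log(2 * np.pi)
                         + np.linalg.slogdet(S)[1] + v @ S_inv @ v)
        K = H @ P @ G.T @ S_inv              # Kalman gain
        s = H @ s + K @ v                    # one-step-ahead state prediction
        P = H @ P @ H.T - K @ G @ P @ H.T + Q
    return loglik
```

Maximizing this criterion over \(\theta\) subject to the CERs, and over \(\phi\) without them, delivers the two log-likelihoods entering the LR statistic introduced in the next section.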

2.3 On the Properties of the RE Equilibrium Under Timing Restrictions

As pointed out in Hespeler and Sorge (2019) and acknowledged in Kormilitsina (2019), different informational partitions typically generate different CERs in equilibrium representations of RE models. In the presence of informational constraints, solving for endogenous variables requires consistency between optimal projections determined on the basis of restricted information sets and conditional expectations which by contrast exploit full information. This in turn may over-constrain the RE forecast errors associated with the model exhibiting timing restrictions, and thus affect the latter’s dynamic stability properties. Recall that the first-order approximate RE solution can be fully characterized by a sequence of RE forecast errors under which the dynamics of the endogenous variables is non-explosive, see e.g. Sims (2002). As a consequence, different ways of restricting the informational structure may produce distinct effects on the ability of RE forecast errors to neutralize the model’s unstable behavior. Sorge (2020) provides technical conditions under which, on the assumption that the unrestricted (full information) RE model exhibits saddle-path stability, the model counterpart with timing restrictions admits a locally unique RE equilibrium (i.e. under which the determinacy property is preserved across informational structures). Such conditions hold generically (in the space of the model’s parameters) in nearly all cases of interest, and we explicitly check them in the simulation/estimation exercises reported below.

It is also worth emphasizing that, by their very nature, DSGE models with timing restrictions do not overlap with linear RE models where two types of agents permanently display asymmetric information, such as those examined in e.g. Lubik et al. (2023). In the presence of timing restrictions, fully informed agents observe the histories of all exogenous and endogenous variables, while less informed agents observe only a strict subset of the full information set and need to solve a simple signal extraction problem to gather information about the shocks/states that have not yet materialized and the endogenous variables that have not yet been decided upon. When, within any given unit of time, all the informational sub-periods have occurred, information sets are perfectly aligned, and filtering estimates of previously unobserved variables are replaced by the realized shocks/states/endogenous outcomes before the current optimal decisions (based on the current unfolding of informational sub-periods) are undertaken. One distinctive feature of this setting is that a rank condition—as formalized in Klein (2000)—that is necessary for the existence of RE equilibria may fail to hold, implying non-existence of dynamically stable solutions to the model exhibiting timing restrictions; however, when a RE solution exists in the restricted information environment, it is certainty equivalent and is generically (in a measure theoretic sense) unique provided the full information model admits a determinate RE equilibrium, see Sorge (2020).

3 Testing Procedure

We consider the testing problem

$$\begin{aligned} {\textsf{H}}_{0}:{\tilde{h}}_{x}(\phi )={\hat{h}}_{x}(\theta ) {\textbf { and }} {\tilde{g}}_{x}(\phi )={\hat{g}}_{x}(\theta ) \quad {\textbf {vs. }} {\textsf{H}}_{1}:{\tilde{h}}_{x}(\phi )\ne {\hat{h}}_{x}(\theta ) {\textbf { or }} {\tilde{g}}_{x}(\phi )\ne {\hat{g}}_{x}(\theta ) \end{aligned}$$
(9)

by a LR test. The null \({\textsf{H}}_{0}\) incorporates the timing restrictions encoded in the informational partition (4). Let \(\ell _{T}(\phi )\) and \(\ell _{T}(\theta )\) denote the log-likelihoods of the DSGE model under \({\textsf{H}}_{1}\) and \({\textsf{H}} _{0}\), respectively, and \({\hat{\phi }}_{T}=\arg \max _{\phi \in {\mathcal {P}} _{\phi }}\ell _{T}(\phi )\) and \({\hat{\theta }}_{T}=\arg \max _{\theta \in {\mathcal {P}}^{D}}\ell _{T}(\theta )\) be the ML estimators of \(\phi\) and \(\theta\). Estimation of the model under the null (\({\textsf{H}}_{0}\)) and under the alternative (\({\textsf{H}}_{1}\)) is a preliminary step to the computation of the LR test, which reads as

$$\begin{aligned} LR_{T}=-2[\ell _{T}({\hat{\theta }}_T)-\ell _{T}({\hat{\phi }}_{T})]. \end{aligned}$$
(10)

See the Appendix for details on the derivation of the LR statistic. The asymptotic properties of the test statistic \(LR_{T}\) are intimately related to the asymptotic properties of \({\hat{\theta }}_{T}\) and \({\hat{\phi }}_{T}\), which in turn crucially depend on whether the regularity conditions for inference are valid in the estimated DSGE model.

To improve inference in small samples, we employ a nonparametric ‘restricted bootstrap’ algorithm, according to which the bootstrap samples are generated using the parameter estimates \({\hat{\theta }}_{T}\) obtained under \({\textsf{H}}_{0}\). The LR test statistic, \(LR_{T}({\hat{\theta }}_{T})\), computed as in Eq. (10) in the main text, is stored along with \({\hat{\theta }}_{T}\). Our procedure is described by the following algorithm; a compact code sketch of the full loop is provided after the enumerated steps. Here, steps 1–4 define the bootstrap sample, the bootstrap parameter estimators and the related bootstrap LR statistic; steps 5–7 describe the numerical computation of the bootstrap p-value associated with the bootstrap LR test.

  1.

    Let the superscript ‘0’ denote any item obtained from the application of the Kalman filter to the state space representation of the DSGE model under the null \({\textsf{H}}_{0}\). Given the innovation residuals \({\hat{\epsilon }}_{t}^{0}=y_{t}-{\hat{g}}_{x}({\hat{\theta }}_T) {\hat{x}}_{t\mid t-1}\) and the estimated covariance matrices \({\hat{\Sigma }}_{\epsilon ^{0},t}\) produced by the estimation of the restricted DSGE model, construct the standardized innovations as

    $$\begin{aligned} {\hat{e}}_{t}^{0}={\hat{\Sigma }}_{\epsilon ^{0},t}^{-1/2}{\hat{\epsilon }}_{t}^{0,c}, \quad t=1,\ldots ,T, \end{aligned}$$
    (11)

    where \({\hat{\Sigma }}_{\epsilon ^{0},t}^{-1/2}\) is the inverse of the square-root matrix of \({\hat{\Sigma }}_{\epsilon ^{0},t}\) and \({\hat{\epsilon }} _{t}^{0,c}\), \(t=1,\ldots ,T\), are the centered residuals \({\hat{\epsilon }}_{t}^{0,c}={\hat{\epsilon }}_{t}^{0}-T^{-1}\sum \nolimits _{t=1}^{T}{\hat{\epsilon }}_{t}^{0}\);

  2.

    Sample, with replacement, T times from \({\hat{e}}_{1}^{0},{\hat{e}}_{2}^{0},\ldots ,{\hat{e}}_{T}^{0}\) to obtain the bootstrap sample of standardized innovations \(e_{1}^{*},e_{2}^{*},\ldots ,e_{T}^{*}\);

  3.

    Mimicking the innovation form representation of the DSGE model, the bootstrap sample \(y_{1}^{*},y_{2}^{*},\ldots ,y_{T}^{*}\) is generated recursively by solving, for \(t=1,\ldots ,T,\) the system

    $$\begin{aligned} \left( \begin{array}{c} {\hat{x}}_{t+1\mid t}^{*} \\ y_{t}^{*} \end{array} \right) =\left( \begin{array}{cc} {\hat{h}}_{x}({\hat{\theta }}_T) &{} 0_{n_{m}\times n_{y}} \\ {\hat{g}}_{x}({\hat{\theta }}_T) &{} 0_{n_{y}\times n_{y}} \end{array} \right) \left( \begin{array}{c} {\hat{x}}_{t\mid t-1}^{*} \\ y_{t-1}^{*} \end{array} \right) +\left( \begin{array}{c} K_{t}({\hat{\theta }}_T){\hat{\Sigma }}_{\epsilon ^{0},t}^{1/2} \\ {\hat{\Sigma }}_{\epsilon ^{0},t}^{1/2} \end{array} \right) e_{t}^{*}\, \end{aligned}$$
    (12)

    with initial condition \({\hat{x}}_{1\mid 0}^{*}={\hat{x}}_{1\mid 0}\);

  4.

    From the generated pseudo-sample \(y_{1}^{*},y_{2}^{*},\ldots ,y_{T}^{*}\), estimate the DSGE model under \({\textsf{H}}_{0}\) obtaining the bootstrap estimator \({\hat{\theta }}_{T}^{*}\) and the associated log-likelihood \(\ell _{T}^{*}({\hat{\theta }}_T^{*})\), and estimate the DSGE model under \({\textsf{H}}_{1}\) obtaining the bootstrap estimator \({\hat{\phi }}_{T}^{*}\) and the associated log-likelihood \(\ell _{T}^{*}({\hat{\phi }}_{T}^{*})\); the bootstrap LR test for the CERs is

    $$\begin{aligned} LR_{T}^{*}({\hat{\theta }}_{T}^{*})=-2[\ell _{T}^{*}({\hat{\theta }}_T^{*})-\ell _{T}^{*}({\hat{\phi }}_{T}^{*})]; \end{aligned}$$
    (13)
  5.

    Steps 2–4 are repeated B times in order to obtain B bootstrap realizations of \({\hat{\theta }}_{T}\) and \({\hat{\phi }}_{T}\), say \(\{{\hat{\theta }} _{T:1}^{*},\) \({\hat{\theta }}_{T:2}^{*},\ldots ,{\hat{\theta }}_{T:B}^{*}\}\) and \(\{{\hat{\phi }}_{T:1}^{*},\) \({\hat{\phi }}_{T:2}^{*},\ldots ,{\hat{\phi }}_{T:B}^{*}\}\), and the B bootstrap realizations of the associated bootstrap LR test, \(\{LR_{T:1}^{*}\), \(LR_{T:2}^{*},\ldots ,LR_{T:B}^{*}\}\), where \(LR_{T:b}^{*}=LR_{T}^{*}({\hat{\theta }}_{T:b}^{*})\), \(b=1,\ldots ,B\);

  6.

    The bootstrap p-value of the test of the timing restrictions is computed as

    $$\begin{aligned} {\widehat{p}}_{T,B}^{*}={\hat{G}}_{T,B}^{*}(LR_{T}({\hat{\theta }}_{T}))\quad , \quad {\hat{G}}_{T,B}^{*}(\delta )=B^{-1}\sum _{b=1}^{B} \mathbb {I\{}LR_{T:b}^{*}>\delta \}, \end{aligned}$$
    (14)

    \(\mathbb {I}\left\{ \cdot \right\}\) being the indicator function;

  7.

    The bootstrap LR test for the timing restrictions at the \(100\eta \%\) (nominal) significance level rejects \({\textsf{H}}_{0}\) if \({\widehat{p}}_{T,B}^{*}\le \eta\).
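As anticipated, the sketch below collects steps 1–7 in a single routine. It is a schematic rendering rather than the implementation used in the paper: `estimate_H0` and `estimate_H1` are hypothetical routines returning the maximized log-likelihood together with the filter output (innovations, innovation covariances, Kalman gains, system matrices and the initial predicted state) for a given sample, and the Cholesky factor is used as one admissible choice of square-root matrix.

```python
import numpy as np

def bootstrap_lr_test(y, estimate_H0, estimate_H1, B=499, rng=None):
    """Restricted (H0-based) nonparametric bootstrap of the LR test for the
    timing restrictions (steps 1-7). estimate_H0/estimate_H1 are assumed to
    return an object with attributes: loglik, eps (T x n_y innovations),
    Sigma (T x n_y x n_y covariances), K (T Kalman gains), g_hat, h_hat and
    the initial predicted state x10."""
    rng = np.random.default_rng() if rng is None else rng
    fit0, fit1 = estimate_H0(y), estimate_H1(y)
    LR_obs = -2.0 * (fit0.loglik - fit1.loglik)               # Eq. (10)

    # Step 1: centred, standardized innovation residuals under H0, Eq. (11)
    eps_c = fit0.eps - fit0.eps.mean(axis=0)
    e_std = np.stack([np.linalg.solve(np.linalg.cholesky(S), ec)
                      for S, ec in zip(fit0.Sigma, eps_c)])

    T = y.shape[0]
    LR_boot = np.empty(B)
    for b in range(B):
        # Step 2: resample the standardized innovations with replacement
        e_star = e_std[rng.integers(0, T, size=T)]
        # Step 3: regenerate data from the innovation-form recursion (12)
        x_pred, y_star = fit0.x10.copy(), np.empty_like(y)
        for t in range(T):
            shock = np.linalg.cholesky(fit0.Sigma[t]) @ e_star[t]
            y_star[t] = fit0.g_hat @ x_pred + shock
            x_pred = fit0.h_hat @ x_pred + fit0.K[t] @ shock
        # Step 4: re-estimate under H0 and H1 on the bootstrap sample, Eq. (13)
        f0, f1 = estimate_H0(y_star), estimate_H1(y_star)
        LR_boot[b] = -2.0 * (f0.loglik - f1.loglik)
    # Steps 5-6: bootstrap p-value of the observed statistic, Eq. (14)
    p_value = np.mean(LR_boot > LR_obs)
    return LR_obs, p_value
```

Rejection at the \(100\eta \%\) nominal level (step 7) then amounts to checking whether the returned bootstrap p-value does not exceed \(\eta\).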

4 Simulation Experiments

4.1 Model

We showcase the usefulness of our testing procedure by running a series of Monte Carlo simulation experiments based on the hybrid NK model put forward by Benati and Surico (2009) in their exploration of the causes of the so-called Great Moderation.

The canonical NK model with unrestricted timing is fully characterized by the following system of equations

$$\begin{aligned} g_{t}&=\gamma E_{t}g_{t+1}+(1-\gamma )g_{t-1}-\delta ^{-1} (i_{t}-E_{t}\pi _{t+1})+\omega ^g_{t} \end{aligned}$$
(15)
$$\begin{aligned} \pi _{t}&=\frac{\beta }{1+\beta \alpha }E_{t}\pi _{t+1}+\frac{\alpha }{1+\beta \alpha }\pi _{t-1}+\kappa g_{t}+\omega ^\pi _{t} \end{aligned}$$
(16)
$$\begin{aligned} i_{t}&=\rho i_{t-1}+(1-\rho )(\varphi _{\pi }\pi _{t}+\varphi _{g}g_{t})+\omega ^i_{t} \end{aligned}$$
(17)

where

$$\begin{aligned} \omega ^j_{t}=\rho _{j}\omega ^j_{t-1}+\epsilon ^j_{t},\quad |\rho _j|<1,\quad \epsilon ^j_{t}\sim \text {WN}(0,\sigma _{j}^{2}),\ \ j=g,\pi ,i \end{aligned}$$
(18)

The variables \(g_{t}\), \(\pi _{t}\), and \(i_{t}\) stand for the output gap, inflation, and the nominal interest rate, respectively; \(\gamma\) weights the forward-looking component in the dynamic IS curve; \(\alpha\) is price setters’ extent of indexation to past inflation; \(\delta\) is the intertemporal elasticity of substitution in consumption; \(\kappa\) is the slope of the Phillips curve; \(\rho\), \(\varphi _{\pi }\), and \(\varphi _{g}\) are the interest rate smoothing coefficient, the long-run coefficient on inflation, and that on the output gap in the monetary policy rule, respectively; finally, \(\omega ^g_{t}\), \(\omega ^\pi _{t}\) and \(\omega ^i_{t}\) in Eq. (18) are the mutually independent, asymptotically stable AR(1) exogenous shock processes and \(\epsilon ^g_{t}\), \(\epsilon ^\pi _{t}\) and \(\epsilon ^i_{t}\) are the structural innovations.
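To illustrate how the unrestricted system (15)–(18) can be handed to a gensys-type solver, the sketch below stacks it into the canonical linear RE form \(\Gamma _0 z_t = \Gamma _1 z_{t-1} + \Psi \epsilon _t + \Pi \eta _t\), with \(z_t=(g_t, \pi _t, i_t, \omega ^g_t, \omega ^\pi _t, \omega ^i_t, E_t g_{t+1}, E_t \pi _{t+1})'\) and \(\eta _t\) the expectational errors; the mapping is a standard construction and not the authors’ code.

```python
import numpy as np

def nk_canonical_form(gamma, delta, beta, alpha, kappa, rho, phi_pi, phi_g,
                      rho_g, rho_pi, rho_i):
    """Map the unrestricted model (15)-(18) into Gamma0 z_t = Gamma1 z_{t-1}
    + Psi eps_t + Pi eta_t, the canonical form used by gensys-type solvers."""
    g, pi, i, wg, wpi, wi, Eg, Epi = range(8)
    b = beta / (1 + beta * alpha)
    a = alpha / (1 + beta * alpha)
    G0, G1 = np.zeros((8, 8)), np.zeros((8, 8))
    Psi, Pi_ = np.zeros((8, 3)), np.zeros((8, 2))
    # (15) dynamic IS curve
    G0[0, [g, Eg, i, Epi, wg]] = [1.0, -gamma, 1 / delta, -1 / delta, -1.0]
    G1[0, g] = 1 - gamma
    # (16) hybrid Phillips curve
    G0[1, [pi, Epi, g, wpi]] = [1.0, -b, -kappa, -1.0]
    G1[1, pi] = a
    # (17) monetary policy rule
    G0[2, [i, pi, g, wi]] = [1.0, -(1 - rho) * phi_pi, -(1 - rho) * phi_g, -1.0]
    G1[2, i] = rho
    # (18) AR(1) exogenous shock processes
    for row, (w, r, e) in enumerate(zip([wg, wpi, wi],
                                        [rho_g, rho_pi, rho_i], range(3)), 3):
        G0[row, w], G1[row, w], Psi[row, e] = 1.0, r, 1.0
    # expectational-error definitions: x_t = E_{t-1} x_t + eta_t
    G0[6, g], G1[6, Eg], Pi_[6, 0] = 1.0, 1.0, 1.0
    G0[7, pi], G1[7, Epi], Pi_[7, 1] = 1.0, 1.0, 1.0
    return G0, G1, Psi, Pi_
```

The restricted-timing counterpart differs only in the information sets attached to the first two equations, as formalized in (19)–(21) below.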

Exploiting the general DSGE model representation under timing restrictions (3), the restricted NK model instead takes the form:

$$\begin{aligned} {\mathcal {E}}_{g,t} \left[ g_{t} - \gamma g_{t+1} - (1-\gamma )g_{t-1}+ \delta ^{-1} (i_{t}-\pi _{t+1})-\omega ^g_{t} \right] = 0 \end{aligned}$$
(19)
$$\begin{aligned} {\mathcal {E}}_{\pi ,t} \left[ \pi _{t} - \frac{\beta }{1+\beta \alpha } \pi _{t+1} - \frac{\alpha }{1+\beta \alpha }\pi _{t-1} - \kappa g_{t} - \omega ^\pi _{t} \right] = 0 \end{aligned}$$
(20)
$$\begin{aligned} {\mathcal {E}}_{i,t} \left[ i_{t} - \rho i_{t-1}-(1-\rho )(\varphi _{\pi }\pi _{t}+\varphi _{g}g_{t})- \omega ^i_{t} \right] =0 \end{aligned}$$
(21)

where \({\mathcal {E}}_{j,t}=E(\cdot \, | \, \mathbb {I}_{j,t})\), \(j=g,\pi ,i\) is the rational (model-consistent) expectation operator conditioned on the information set \(\mathbb {I}_{j,t}\), with

$$\begin{aligned} \mathbb {I}_{g,t}=\mathbb {I}_{\pi ,t}= \left\{ g_{t-\tau }, \pi _{t-\tau }, i_{t-1-\tau }, \omega ^g_{t-\tau }, \omega ^\pi _{t-\tau }, \omega ^i_{t-1-\tau }; \, \tau =0,1,2,\ldots \right\} \end{aligned}$$

i.e. private agents do not observe the current monetary policy shock \(\epsilon ^i_t\) and thus cannot infer the current value of the nominal interest rate \(i_t\) but only project it as a function of the observables in their information set; and

$$\begin{aligned} \mathbb {I}_{i,t}= \left\{ g_{t-\tau }, \pi _{t-\tau }, i_{t-1-\tau }, \omega ^g_{t-\tau }, \omega ^\pi _{t-\tau }, \omega ^i_{t-\tau }; \, \tau =0,1,2,\ldots \right\} \end{aligned}$$

i.e. the monetary policy authority observes at time t the entire history of all the endogenous and exogenous variables up to time t.Footnote 6

Clearly, the information partition supporting this set of timing restrictions is

$$\begin{aligned} f= \left[ f^0; \, f^1; \, f^{x_r} \right] \end{aligned}$$
(22)

where

$$\begin{aligned} f^0=\left[ \begin{array}{c} g - \gamma g' - (1-\gamma ) g_{-1} + \delta ^{-1} (i-\pi ')-\omega ^g\\ \\ \pi - \frac{\beta }{1+ \beta \alpha } \pi ' - \frac{\alpha }{1+\beta \alpha } \pi _{-1} - \kappa g - \omega ^\pi \end{array} \right] , \\ f^1=\left[ \begin{array}{c} i-\rho i_{-1} - (1-\rho ) (\varphi _\pi \pi + \varphi _g g) - \omega ^i\\ \\ \omega ^g - \rho _g \omega ^{g}_{-1} - \epsilon ^g \\ \\ \omega ^\pi - \rho _\pi \omega ^{\pi }_{-1} - \epsilon ^\pi \end{array} \right] \end{aligned}$$

and

$$\begin{aligned} f^{x_r}= \left[ \omega ^i - \rho _i \omega ^{i}_{-1} - \epsilon ^i \right] \end{aligned}$$

Accordingly, the computation of the RE equilibrium under timing restrictions requires the following assignment of variables:

$$\begin{aligned} \begin{aligned}&\quad y_u= i, \quad y_r =[g,\, \pi ]'\\ x_u&=[g_{-1}, \, \pi _{-1},\, i_{-1},\, \omega ^g,\, \omega ^{\pi }]', \quad x_r= \omega ^{i} \end{aligned} \end{aligned}$$
(23)

As is known, the model (15)–(18) can admit a continuum of asymptotically stable equilibria (equilibrium indeterminacy) depending on the strength of the monetary authority’s response to inflation. Under these circumstances, short-run dynamics for the endogenous variables can be arbitrarily driven by both structural and non-structural (sunspot) shocks, e.g. Lubik and Schorfheide (2003). We remark that the NK model under timing restrictions displays, generically in the space of the admissible parameters, a locally unique (determinate) RE solution insofar as its unrestricted counterpart does. In our Monte Carlo simulation experiments, we explicitly confine attention to the determinate equilibrium version of Benati and Surico (2009)’s model, so that variation in the likelihood across the two information structures (restricted vs. unrestricted) is to be ascribed solely to the presence of timing restrictions, on the assumption that the structural model is correctly specified.Footnote 7
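The determinacy checks mentioned above can be automated by counting the unstable generalized eigenvalues of the linearized system written in the form \(A\,E_t z_{t+1} = B z_t\); the sketch below is a minimal version of such a count, with A and B standing for whatever structural matrices the chosen solution package produces.

```python
import numpy as np
from scipy.linalg import eigvals

def is_determinate(A, B, n_jump):
    """Klein-type determinacy count for A E_t z_{t+1} = B z_t: the RE
    equilibrium is locally unique when the number of generalized eigenvalues
    of the pencil outside the unit circle equals the number of
    non-predetermined (jump) variables; homogeneous coordinates avoid
    dividing by zero when infinite eigenvalues are present."""
    alpha, beta = eigvals(B, A, homogeneous_eigvals=True)
    n_unstable = int(np.sum(np.abs(alpha) > np.abs(beta)))
    return n_unstable == n_jump
```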

4.2 Monte Carlo Simulations

In this section we investigate the empirical performance of the bootstrap test using the NK structure (15)–(18) as our data generating process (DGP). More specifically, we consider two DSGE-based equilibrium state space representations, denoted as DGP under timing restrictions and DGP with unrestricted timing, respectively. In the former, it is assumed that the data are generated by the determinate equilibrium representation that emerges in the presence of the structural timing restrictions embodied in (23); in the latter, artificial series are instead generated by allowing for contemporaneous effects of policy innovations on the inflation rate and the output gap (i.e. when no informational constraints are at work), again imposing equilibrium determinacy. Assuming Gaussian distributions for the structural shocks, and for given initial values, the ML estimation of the model’s parameters, as a preliminary step to the construction of the bootstrap-based LR test, is carried out iteratively by means of a standard BFGS quasi-Newton optimization method, as described in Bårdsen and Fanelli (2015). For either experiment, the nominal significance level is set to \(5\%\).

In a first experiment, we allow timing restrictions to produce quantitatively non-negligible differences in the propagation of the monetary policy shock relative to the unrestricted model. To this aim, we let the structural innovations display relatively high dispersion (\(\sigma _j=2\), \(j=g,\pi ,i\)), and assign a markedly larger persistence to the monetary shock process (\(\rho _i=0.9>0.1=\rho _j\), \(j=g,\pi\)). We then investigate the empirical size of the LR test, using the restricted model as the actual DGP (column DGP under timing restrictions), and its power, when the unrestricted model serves as the underlying DGP (column DGP with unrestricted timing). For either DGP we consider \(K=500\) simulations and a sample size \(T \in \left\{ 100,500\right\}\) with a burn-in of 200 observations.
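Schematically, the Monte Carlo design amounts to the following loop, where `simulate_dgp` and `bootstrap_lr_test` are placeholders for the data-generating and testing routines of Sections 2 and 3.

```python
import numpy as np

def rejection_frequency(simulate_dgp, bootstrap_lr_test, K=500, T=100,
                        burn_in=200, level=0.05, seed=0):
    """Fraction of Monte Carlo replications in which the bootstrap LR test
    rejects the timing restrictions at the chosen nominal level; the
    empirical size (power) obtains when the DGP does (does not) satisfy H0."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(K):
        y = simulate_dgp(T + burn_in, rng)[burn_in:]   # discard burn-in draws
        _, p_value = bootstrap_lr_test(y)
        rejections += (p_value <= level)
    return rejections / K
```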

We estimate five key structural parameters on artificial data: \(\delta\) (shaping the intertemporal channel of monetary policy transmission to non-policy variables); \(\kappa\) (governing the output-inflation trade-off faced by central banks); and the inertial parameters \(\gamma\), \(\alpha\) (both relevant for stabilization goals) and \(\rho\) (capturing policy persistence). Other parameters are calibrated to Benati and Surico (2009)’s posterior median estimates over the Great Moderation—see Table 1.

Table 1 Parameterization of Benati and Surico (2009)’s model in first simulation experiment (\(j=g, \pi , i\))
Table 2 First Monte Carlo experiment

Results are summarized in Table 2, which reports the estimates for the subset of structural parameters \(\theta ^s=(\kappa , \gamma , \alpha , \rho , \delta ^{-1})'\) from the state-space form (6), when the DGP complies with either information structure (restricted vs. unrestricted). We notice, first, that sample estimates across information structures are roughly equal, speaking in favour of identification of the population deep parameters (which, by definition, are invariant with respect to the timing of decisions). Second, the bootstrap tends to mitigate the discrepancy between actual and nominal probabilities of type-I error. Indeed, when asymptotic critical values taken from the \(\chi _{x}^{2}\) distribution (\(x=\dim (\phi )-\dim (\theta )\)) are employed, the rejection frequency of the LR test for the timing restrictions is \(7.2\%\) and \(11.2\%\) for \(T=500\) and 100 respectively. Therefore, in finite samples our bootstrap-based approach attenuates the tendency of the asymptotic LR test to over-reject the CERs associated with the restricted timing protocol, with rejection frequencies close to the \(5\%\) level. Remarkably, the bootstrap test also shows satisfactory power (column DGP with unrestricted timing).

On empirical grounds, a standard deviation \(\sigma _i=2\) for the monetary policy shock is implausibly large compared both to the historically observed 0.25 percentage point increments in central banks’ interest rates and to estimates from structural VARs, which tend to be much smaller (at least since the onset of the Great Moderation until the upsurge of aggregate prices in recent times). This observation motivates our second experiment, where the structural disturbances driving short-run dynamics are assumed to exhibit low volatility (standard deviation equal to 0.1), and all the structural parameters are calibrated to Benati and Surico (2009)’s posterior median estimates over the Great Moderation—see Table 3. In this scenario, with the exception of the (mechanically arising) zero on-impact effect of the monetary policy innovation on the inflation rate and the output gap when timing restrictions are at work, the dynamics of the impulse response functions of the model are almost identical across informational structures (restricted vs. unrestricted), making it relatively harder to distinguish between the two. This notwithstanding, the bootstrap LR test is able to detect the presence of timing restrictions in the underlying DGP (column DGP under timing restrictions), with rejection frequencies approaching the pre-fixed nominal level—see Table 4.

Table 3 Parameterization of Benati and Surico (2009)’s model in second simulation experiment (\(j=g, \pi , i\))
Table 4 Second Monte Carlo experiment

5 Empirical Application

In this section we employ our bootstrap-based testing strategy to evaluate the empirical relevance of timing restrictions in a particular historical juncture of the U.S. economy, i.e. the so-called Great Moderation. Given their focus on empirically evaluating the effectiveness of monetary policy in the US post-WWII macroeconomic history, several studies have developed small- to medium-scale frameworks with recursive timing, under which the model’s responses to a monetary innovation are zero on impact, in keeping with the recursive (Cholesky-type) identification scheme in sVAR systems, in order to pave the way for the implementation of a straightforward impulse response matching procedure—see e.g. Altig et al. (2011), Guerron-Quintana et al. (2017), Rotemberg and Woodford (1997).

In our application, we closely follow Bårdsen and Fanelli (2015) and perform ML estimation of Benati and Surico (2009)’s NK monetary business cycle model on U.S. quarterly data for the 1985Q1–2008Q3 window (\(T=95\) observations, not including initial lags); the observables include the natural rate of output (proxied by the official measure from the Congressional Budget Office), real GDP, the inflation rate (quarterly growth rate of the GDP deflator) and the short-term nominal interest rate (effective federal funds rate expressed in averages of monthly values). We intentionally disregard the zero-lower-bound phase, beginning in December 2008, which entailed non-standard policy measures by the Federal Reserve that are not consistent with the feedback rule embodied in the interest rate equation (17). The length of the selected sample also reflects our intention of focusing on a determinate (locally unique) RE equilibrium, see e.g. Castelnuovo and Fanelli (2015); and allows us to emphasize the empirical appeal of the bootstrap-based LR test in short data samples.

As mentioned, a main disadvantage of classical estimation methods for DSGE models compared to Bayesian techniques lies in the difficulty of handling identification failure for some of the structural parameters of the model \(\theta\), and for their reduced form analogs \(({\hat{g}}_x(\theta ), {\hat{h}}_x(\theta ))\), given the non-linear mapping induced by the CERs (identification in population); the relationship between the structural parameters and the sample objective function (here, the likelihood function) is also problematic, for strong identification can be precluded by the nature and size of the available data (sample identification).Footnote 8 Besides, given the tight set of (sign and bound) restrictions that theory typically imposes on structural parameters (e.g. the slope of the Phillips curve), estimation approaches that do not exploit prior information may well fail to safeguard against the generation of economically implausible estimates, see e.g. An and Schorfheide (2007). To partially address these concerns, the parameter vector \(\theta\) is split into two sub-vectors: \(\theta ^{ng}=(\gamma , \rho , \rho _g, \rho _{\pi }, \rho _i)'\), which is directly estimated via the ML algorithm, and \(\theta ^g=(\delta ^{-1}, \alpha , \kappa , \varphi _g, \varphi _{\pi }, \sigma _j)'\), which is fine-tuned via a numerical grid-search method with pre-specified ranges. Operationally, the log-likelihood function associated with the NK model featuring timing restrictions is maximized over the parameters in \(\theta ^{ng}\) conditional on random draws (15,000 points) from a uniform distribution for each of the parameters in \(\theta ^g\); optimal estimates for the latter are then selected as those that maximize the log-likelihood evaluated at the ML estimate for the free parameters \(\theta ^{ng}\) (see the Notes to Table 5). A rank condition for local identifiability (in population), based on the differentiation of the CERs, is then checked along the lines of Iskrev (2010).Footnote 9
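A schematic rendering of this two-tier strategy is given below, under the assumption that a (hypothetical) routine `conditional_ml` maximizes the log-likelihood over \(\theta ^{ng}\) for a fixed draw of \(\theta ^{g}\).

```python
import numpy as np

def grid_profile_ml(y, conditional_ml, theta_g_bounds, n_draws=15_000, seed=0):
    """Draw theta_g uniformly within pre-specified bounds, maximize the
    log-likelihood over theta_ng conditional on each draw, and retain the
    draw (and conditional ML estimate) attaining the highest likelihood."""
    rng = np.random.default_rng(seed)
    lo, hi = (np.array([b[k] for b in theta_g_bounds]) for k in (0, 1))
    best_ll, best_ng, best_g = -np.inf, None, None
    for _ in range(n_draws):
        theta_g = lo + (hi - lo) * rng.random(lo.size)
        theta_ng_hat, ll = conditional_ml(y, theta_g)   # conditional ML step
        if ll > best_ll:
            best_ll, best_ng, best_g = ll, theta_ng_hat, theta_g
    return best_ll, best_ng, best_g
```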

Three differences between the approach of Bårdsen and Fanelli (2015) and ours require further discussion. First, full-information ML estimation methods for DSGE models are intimately related to the system reduction method employed to derive the reduced form representation of the model under scrutiny; while Bårdsen and Fanelli (2015) adopt the method put forward in Binder and Pesaran (1999), we follow the algorithm outlined in Schmitt-Grohé and Uribe (2004) to compute first-order approximate solutions, which in turn exploits Klein (2000)’s package to determine the unknown coefficient matrices in the equilibrium dynamics for both the control and the state variables. Second, we assume that the exogenous structural innovations \(\epsilon _t^j\) (\(j=g, \pi , i\)) are orthogonal white noises, whose variances are determined numerically via grid search; Bårdsen and Fanelli (2015), instead, do not restrict the covariance matrix of the structural shocks, whose elements are then indirectly recovered by inverting the CERs associated with the law of motion for the exogenous state variables (evaluated at the ML estimates for the other structural parameters). Third, since the bootstrap version of the LR test in state-space models is computationally intensive, the values for \(\theta ^g\) in the grid that optimize the log-likelihood function given the observed sample are kept fixed in the bootstrap replications; taking advantage of the existence of a finite-order VAR representation for Benati and Surico (2009)’s model free of timing restrictions, Bårdsen and Fanelli (2015) compute the bootstrap version of their test of the implied CERs by drawing 1500 points from the grid for each bootstrap replication.

As argued in Angelini and Sorge (2021), timing restrictions generally induce moving average (MA) components in the equilibrium reduced-form representations of DSGE models. As a result, the model does not generically admit a finite-order VAR representation. The estimation of (6) and of its unrestricted analog (8) requires finding the minimal state-space representation associated with the specified NK model among the set of equivalent representations, see e.g. Guerron-Quintana et al. (2013). Provided this representation is at hand, the Kalman filter can be combined with the ML estimation algorithm and the bootstrap procedure to build and evaluate the log-likelihood function of the NK model under the Gaussian assumption, and then compute the LR test for the timing restrictions.

Table 5 ML estimation of the parameters \(\theta ^{ng} = \left( \gamma ,\rho ,\rho _g,{\rho }_{\pi },{\rho }_i \right)\), bootstrap standard error in parentheses

Estimates for the parameters of interest and bootstrap standard errors are reported in the second column of Table 5, along with the point estimates obtained by Bårdsen and Fanelli (2015) for exactly the same parameters entering Benati and Surico (2009)’s model under the assumption of unrestricted timing (third column). We notice that the inertial parameters \((\rho , \rho _j)\), \(j=g, \pi , i\), are not precisely estimated in the model embodying timing restrictions, and generally exhibit lower magnitudes relative to their counterparts in the unrestricted model, as the likelihood tends to ascribe a fraction of the persistence in the data to the informational channel (endogenous backward dependence). The bootstrap p-value of the LR test for the CERs implied by the conventional (unrestricted) timing protocol is 0.8, while it falls dramatically to 0.005 when recursive timing is imposed instead. Modulo the previously discussed caveat on the adverse impact of weak identification on the asymptotic and bootstrap distributions of estimators, this evidence indicates that information-induced timing restrictions played no significant role in shaping business cycle dynamics over the period of interest. Accordingly, we view the outcome of our test as suggesting that, as far as short-run macroeconomic fluctuations over the Great Moderation period are concerned, the recursive timing protocol adopted in the aforementioned literature is not favored by aggregate data, thereby calling for caution in the interpretation of the estimated responses of inflation and real economic activity to monetary policy shocks as reported in those studies.

6 Conclusion

This paper develops a simple bootstrap-based testing procedure for the relevance of timing restrictions and the ensuing shock transmission delays in small-scale DSGE model environments. Remarkably, the computer code is consistent with standard MATLAB packages—such as Sims (2002)’s gensys.m—that are routinely used to compute first-order approximate solutions to dynamic macroeconomic models; and it can be straightforwardly adapted to allow for relatively more sophisticated recursive timing structures than those considered herein, e.g. those involving multi-period informational partitions (Kormilitsina, 2013).