
Posterior Inference on Parameters in a Nonlinear DSGE Model via Gaussian-Based Filters


Abstract

This paper studies Gaussian-based filters within the pseudo-marginal Metropolis–Hastings (PM-MH) algorithm for posterior inference on parameters in nonlinear DSGE models. We implement two Gaussian-based filters to evaluate the likelihood of a DSGE model solved to second and third order and embed them in the PM-MH algorithm: the Central Difference Kalman filter (CDKF) and the Gaussian mixture filter (GMF). The GMF is adaptively refined by splitting a mixture component into new components based on a Binomial Gaussian mixture. The overall results indicate that the estimation accuracy of the CDKF and the GMF is comparable to that of the particle filter (PF), except that the CDKF generates biased estimates in the extremely nonlinear case. The proposed GMF generates the most accurate estimates of the three. We argue that the GMF with PM-MH can converge to the true invariant distribution when the likelihood constructed from infinitely many Gaussian mixture components weakly converges to the true likelihood. In addition, we show that the Gaussian-based filters are more efficient than the PF in terms of effective computing time. Finally, we apply the method to real data.


Notes

  1. We also tested other Sigma-Point Kalman filters (the UKF and the CKF) for likelihood evaluation and posterior inference on parameters in our neoclassical growth model solved to second and third order. The UKF, the CDKF, and the CKF show similar performance. The estimation results are available upon request.

  2. We can apply the GMF to models with non-Gaussian structural shocks and measurement errors by estimating the Gaussian mixture densities of the structural shocks and the measurement errors. The Gaussian mixture densities can be estimated using an Expectation Maximization (EM) algorithm (McLachlan and Peel 2004).

  3. Runnalls (2007) proposes the following KL-based discrimination measure:

    $$\begin{aligned} B_{ij}=\frac{1}{2}\Big [(w_i+w_j)\log \det P_{ij}-w_i\log \det P_i-w_j\log \det P_j\Big ],\quad P_{ij}=\frac{w_iP_i+w_jP_j}{w_i+w_j}+\frac{w_iw_j}{(w_i+w_j)^2}(\mu _i-\mu _j)(\mu _i-\mu _j)', \end{aligned}$$

    where \(\mu _i\) is the (predictive or filtered) mean for component i, \(w_i\) is the weight for mixture component i, and \(P_i\) is the (predictive or filtered) covariance matrix for component i.

  4. We are grateful to Martin M. Andreasen for making his codes for the optimized central difference particle filter publicly available.

  5. The higher-order approximations create explosive sample paths because the higher-order terms generate unstable steady states. To deal with this problem, it is recommended to apply a pruning scheme that omits terms of higher-order effects (Kim et al. 2008). In our case, however, the simulated sample paths without pruning are stable and differ little from those obtained with pruning. Moreover, the estimation results without pruning are consistent with those with pruning, so we report the results obtained without pruning. The estimation results obtained with pruning are available upon request.

  6. The CDKF might not satisfy these sufficient conditions, depending on how nonlinear the model is. According to the Monte Carlo exercises in Sect. 5, the CDKF seems to satisfy the conditions in the benchmark and nearly linear cases (except for a few cases in the benchmark case), but does not satisfy the conditions when the model is highly nonlinear.

  7. Mengersen and Tweedie (1996) and Roberts and Tweedie (1996) verify that a random-walk-based Metropolis sampler is geometrically ergodic when the target invariant distribution has exponentially decreasing tails (in one dimension) and behaves sufficiently smoothly in the tails (in higher dimensions). Jarner and Hansen (2000) provide more general conditions for geometric ergodicity. Although a random-walk-based MH sampler might fail to satisfy these conditions, it can be polynomially ergodic of all orders (Fort and Moulines 2000).

  8. We calculate the scores \(\hat{s}^k_t(\theta )\) and the Hessians \(\hat{h}^k_t(\theta )\) by numerically differentiating the conditional likelihoods \(\hat{l}^k_t(\theta )\).

  9. The effective sample size is used to assess the convergence of sums of MCMC samples in a heuristic way.

  10. The coefficient of variation is calculated as

    $$\begin{aligned} CV=\frac{var(h)}{E[h]^2}=exp\Bigg (\frac{\sigma _{\sigma _z}^2}{1-\rho _{\sigma _z}^2}\Bigg )-1. \end{aligned}$$
  11. When setting \(\rho _{\sigma _z}=0.9\) and \(\sigma _{\sigma _z}=0.135\), the CV is 0.1. In that case, the estimation result for \(\rho _{\sigma _z}\) becomes poorer, though the result for \(\sigma _{\sigma _z}\) is not bad. The estimation results are given in “Appendix E”.
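
As a quick check of the value used in Note 11, plugging \(\rho _{\sigma _z}=0.9\) and \(\sigma _{\sigma _z}=0.135\) into the formula in Note 10 gives

$$\begin{aligned} CV=\exp \Bigg (\frac{0.135^2}{1-0.9^2}\Bigg )-1=\exp \Bigg (\frac{0.018225}{0.19}\Bigg )-1\approx 0.10, \end{aligned}$$

consistent with the reported coefficient of variation of 0.1.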



Corresponding author

Correspondence to Sanha Noh.


I am deeply indebted to my advisor, Christopher Otrok, for his guidance at all stages of this research project. I am also grateful to Scott Holan, Shawn Ni, Isaac Miller, and Kyungsik Nam for useful feedback. I thank participants at CEF (2017).

Appendices

Central Difference Kalman Filter

We define the following matrices which make it possible to match the first-order terms,

$$\begin{aligned} S^{(1)}_{xx,t|t-1}= & {} \Bigg \{(h_l(\hat{\chi }^{x+}_{i,t-1},\hat{\chi }^{\eta }_{0,t}; \theta )-h_l(\hat{\chi }^{x-}_{i,t-1},\hat{\chi }^{\eta }_{0,t};\theta ))/2h\Bigg \}, \end{aligned}$$
(56)
$$\begin{aligned} S^{(1)}_{x\eta ,t|t-1}= & {} \Bigg \{(h_l(\hat{\chi }^x_{0,t-1},\hat{\chi }^{\eta +}_{i,t}; \theta )-h_l(\hat{\chi }^x_{0,t-1},\hat{\chi }^{\eta -}_{i,t};\theta ))/2h\Bigg \}, \end{aligned}$$
(57)
$$\begin{aligned} S^{(1)}_{yx,t|t-1}= & {} \Bigg \{((g_l(\bar{\chi }^{x+}_{i,t};\theta )-(g_l (\bar{\chi }^{x-}_{i,t};\theta ))/2h\Bigg \}, \end{aligned}$$
(58)

and the second-order terms in a Taylor series expansion of our nonlinear state space Eqs. (2) and (3) (Andreasen 2013),

$$\begin{aligned} S^{(2)}_{xx,t|t-1}= & {} \Bigg \{\frac{\sqrt{h^2-1}}{2h^2}(h_l (\hat{\chi }^{x+}_{i,t-1},\hat{\chi }^{\eta }_{0,t};\theta )+h_l(\hat{\chi }^{x-}_{i,t-1}, \hat{\chi }^{\eta }_{0,t};\theta )-2h_l(\hat{\chi }^x_{0,t-1}, \hat{\chi }^{\eta }_{0,t};\theta ))\Bigg \}, \end{aligned}$$
(59)
$$\begin{aligned} S^{(2)}_{x\eta ,t|t-1}= & {} \Bigg \{\frac{\sqrt{h^2-1}}{2h^2}(h_l (\hat{\chi }^x_{0,t-1},\hat{\chi }^{\eta +}_{i,t};\theta )+h_l(\hat{\chi }^x_{0,t-1}, \hat{\chi }^{\eta -}_{i,t};\theta )-2h_l(\hat{\chi }^x_{0,t-1},\hat{\chi }^{\eta }_{0,t}; \theta ))\Bigg \}, \end{aligned}$$
(60)
$$\begin{aligned} S^{(2)}_{yx,t|t-1}= & {} \Bigg \{\frac{\sqrt{h^2-1}}{2h^2}(g_l (\bar{\chi }^{x+}_{i,t};\theta )+g_l(\bar{\chi }^{x-}_{i,t};\theta )-2g_l (\bar{\chi }^x_{0,t};\theta ))\Bigg \}, \end{aligned}$$
(61)

where \(g_l(\cdot )\) and \(h_l(\cdot )\) are the lth equations of \(g(\cdot )\) and \(h(\cdot )\), respectively. The sigma-points are used to evaluate the mean and covariance matrix of the prediction density:

$$\begin{aligned} \hat{x}_{t|t-1}= & {} \frac{h^2-n_x-n_{\eta }}{h^2}h(\hat{\chi }^x_{0,t-1}, \hat{\chi }^{\eta }_{0,t};\theta )\nonumber \\&+\frac{1}{2h^2}\sum ^{n_x}_{i=1}{h(\hat{\chi }^{x+}_{i,t-1},\hat{\chi }^{\eta }_{0,t}; \theta )+h(\hat{\chi }^{x-}_{i,t-1},\hat{\chi }^{\eta }_{0,t};\theta )}\nonumber \\&+\frac{1}{2h^2}\sum ^{n_{\eta }}_{i=1}{h(\hat{\chi }^x_{0,t-1}, \hat{\chi }^{\eta +}_{i,t};\theta )+h(\hat{\chi }^x_{0,t-1},\hat{\chi }^{\eta -}_{i,t}; \theta )}, \end{aligned}$$
(62)
$$\begin{aligned} S^{x}_{t|t-1}= & {} \Phi ([S^{(1)}_{xx,t|t-1}\quad S^{(1)}_{x\eta ,t|t-1} \quad S^{(2)}_{xx,t|t-1}\quad S^{(2)}_{x\eta ,t|t-1}]), \end{aligned}$$
(63)
$$\begin{aligned} \hat{y}_{t|t-1}= & {} \frac{h^2-n_x}{h^2}g(\bar{\chi }^x_{0,t})+\frac{1}{2h^2} \sum ^{n_x}_{i=1}(g(\bar{\chi }^{x+}_{i,t};\theta )+g(\bar{\chi }^{x-}_{i,t};\theta )). \end{aligned}$$
(64)

The above prediction step gives the Gaussian-based prediction density \(p(x_t|y_{1:t-1};\theta )\approx N(\hat{x}_{t|t-1},P^x_{t|t-1})\) where \(P^x_{t|t-1}=S^x_{t|t-1}{S^x_{t|t-1}}'\). The predictions are then updated using the standard Kalman filter updating rule:

$$\begin{aligned} S^y_{t|t-1}= & {} \Phi ([S^{(1)}_{yx,t|t-1}\quad S_v\quad S^{(2)}_{yx,t|t-1}]), \end{aligned}$$
(65)
$$\begin{aligned} K_t= & {} S^x_{t|t-1}S^{(1)'}_{yx,t|t-1}[S^y_{t|t-1}{S^{y}_{t|t-1}}']^{-1}, \end{aligned}$$
(66)
$$\begin{aligned} \hat{x}_{t|t}= & {} \hat{x}_{t|t-1}+K_t(y_t-\hat{y}_{t|t-1}), \end{aligned}$$
(67)
$$\begin{aligned} S^x_{t|t}= & {} \Phi ([S^x_{t|t-1}-K_tS^{(1)}_{yx,t|t-1}\quad K_tS_v\quad K_tS^{(2)}_{yx,t|t-1}]), \end{aligned}$$
(68)

where \(S_v\) is the upper triangular Cholesky factor of \(R_v\). The updating step gives \(p(x_t|y_{1:t};\theta )\approx N(\hat{x}_{t|t},P^x_{t|t})\) where \(P^x_{t|t}=S^x_{t|t}{S^x_{t|t}}'\). Finally, we obtain the following conditional marginal likelihood,

$$\begin{aligned} \hat{p}^{\text {CDKF}}(y_t|y_{1:t-1};\theta )=N(\hat{y}_{t|t-1},P^y_{t|t-1}), \end{aligned}$$
(69)

where \(P^y_{t|t-1}=S^y_{t|t-1}{S^y_{t|t-1}}'\).
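
As a concrete illustration of the measurement update and likelihood contribution in Eqs. (65)–(69), the following is a minimal Python sketch. It assumes the square-root blocks \(S^x_{t|t-1}\), \(S^{(1)}_{yx,t|t-1}\), \(S^{(2)}_{yx,t|t-1}\), and \(S_v\) have already been formed from the sigma-points, and it implements \(\Phi (\cdot )\) by QR-based triangularization; the function names are illustrative and not taken from the paper's codes.

```python
import numpy as np
from scipy.stats import multivariate_normal

def tria(blocks):
    """Phi([...]): triangularize a compound matrix so that S @ S.T equals
    the sum of B_k @ B_k.T over the concatenated blocks."""
    M = np.hstack(blocks)                 # n x m compound matrix, m >= n
    _, R = np.linalg.qr(M.T)              # M.T = Q R  =>  M M' = R' R
    return R.T                            # lower-triangular square root

def cdkf_update(x_pred, S_x_pred, y_pred, S1_yx, S2_yx, S_v, y_obs):
    """One CDKF measurement update and likelihood contribution, Eqs. (65)-(69)."""
    S_y = tria([S1_yx, S_v, S2_yx])                          # Eq. (65)
    P_y = S_y @ S_y.T
    K = S_x_pred @ S1_yx.T @ np.linalg.inv(P_y)              # Eq. (66)
    x_filt = x_pred + K @ (y_obs - y_pred)                   # Eq. (67)
    S_x_filt = tria([S_x_pred - K @ S1_yx,
                     K @ S_v,
                     K @ S2_yx])                             # Eq. (68)
    log_lik_t = multivariate_normal.logpdf(y_obs, mean=y_pred, cov=P_y)  # Eq. (69)
    return x_filt, S_x_filt, log_lik_t
```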

Binomial Gaussian Mixture

Following Raitoharju et al. (2015), the Binomial Gaussian mixture splits a normally distributed mixture component of the prediction and filtering density into smaller ones using weights and transformed locations from the binomial distribution. The probability density function of the standardized binomial distribution with the smallest error is expressed using Dirac delta notation as

$$\begin{aligned} P_{B,m}(x)=\sum _{k=1}^{m}\left( {\begin{array}{c}m-1\\ k-1\end{array}}\right) \Bigg (\frac{1}{2}\Bigg )^{m-1} \delta \Bigg (x-\frac{2k-m-1}{\sqrt{m-1}}\Bigg ), \end{aligned}$$
(70)

where m denotes the number of composite components. By the Berry–Esseen theorem, \(P_{B,m}\) converges to the standard normal distribution as m increases. Based on this property, the Binomial Gaussian mixture is constructed as a mixture of normal distributions whose component means, covariance matrices, and weights are chosen from a scaled binomial distribution. A proper affine transformation allows the resulting mixture to preserve the mean and covariance of the original Gaussian distribution.
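
To illustrate Eq. (70), the following minimal Python sketch (with illustrative names) generates the support points and weights of the standardized binomial distribution and verifies that it has zero mean and unit variance:

```python
import numpy as np
from math import comb

def standardized_binomial_points(m):
    """Support points and weights of P_{B,m} in Eq. (70): a discrete distribution
    with mean 0 and variance 1 that approaches N(0, 1) as m grows."""
    k = np.arange(1, m + 1)
    weights = np.array([comb(m - 1, ki - 1) * 0.5 ** (m - 1) for ki in k])
    points = (2 * k - m - 1) / np.sqrt(m - 1)
    return points, weights

# e.g. m = 4: points (2k-5)/sqrt(3) with weights 1/8, 3/8, 3/8, 1/8
pts, w = standardized_binomial_points(4)
assert abs(w @ pts) < 1e-12 and abs(w @ pts**2 - 1.0) < 1e-12   # mean 0, variance 1
```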

A Binomial Gaussian mixture with \(m_{total}=\prod ^{n}_{i=1}m_i\) components has the following representation

$$\begin{aligned} p_{BinoGM}(x)=\sum ^{m_{total}}_{l=1}w_l p_N(x|\mu _l, P), \end{aligned}$$
(71)

where

$$\begin{aligned} w_l= & {} \prod ^n_{i=1}\left( {\begin{array}{c}m_i-1\\ C_{l,i}-1\end{array}}\right) \Bigg (\frac{1}{2}\Bigg )^{m_i-1}, \end{aligned}$$
(72)
$$\begin{aligned} \mu _l= & {} T \left[ \begin{array}{c} \sigma _1\frac{2C_{l,1}-m_1-1}{\sqrt{m_1 -1}} \\ \sigma _2\frac{2C_{l,2}-m_2-1}{\sqrt{m_2 -1}} \\ \vdots \\ \sigma _n\frac{2C_{l,n}-m_n-1}{\sqrt{m_n -1}} \end{array} \right] +\mu , \end{aligned}$$
(73)
$$\begin{aligned} P= & {} TT', \end{aligned}$$
(74)

\(m_i\) is the number of components in the ith binomial distribution, \(\sigma _i\) is the standard deviation of the ith scaled binomial distribution, T is the operator for the affine transformation, and C is the Cartesian product

$$\begin{aligned} C=\{1,\dots ,m_1\}\times \{1,\dots ,m_2\}\times \dots \times \{1,\dots ,m_n\}. \end{aligned}$$
(75)

Notation \(C_{l,i}\) is the ith component of the lth combination. If \(m_k=1\), the term \(\frac{2C_{l,k}-m_k-1}{\sqrt{m_k -1}}\) is set to 0.
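
To make the construction in Eqs. (71)–(75) concrete, the following is a minimal Python sketch of the component weights and means of a Binomial Gaussian mixture; the function name and argument layout are illustrative.

```python
import numpy as np
from math import comb
from itertools import product

def binogm_components(mu, T, sigmas, ms):
    """Weights w_l and means mu_l of a BinoGM split, Eqs. (72)-(73).
    mu: (n,) mean of the component being split; T: (n, n) affine transform;
    sigmas: (n,) scales sigma_i; ms: (n,) numbers of sub-components m_i.
    Every sub-component shares the covariance P = T @ T.T (Eq. 74)."""
    weights, means = [], []
    for combo in product(*[range(1, m + 1) for m in ms]):   # Cartesian product C, Eq. (75)
        w = np.prod([comb(m - 1, c - 1) * 0.5 ** (m - 1) for m, c in zip(ms, combo)])
        z = np.array([sig * (2 * c - m - 1) / np.sqrt(m - 1) if m > 1 else 0.0
                      for sig, m, c in zip(sigmas, ms, combo)])
        weights.append(w)
        means.append(T @ z + mu)
    return np.array(weights), np.array(means)
```

The returned weights sum to one, and when T and \(\Sigma \) are chosen as in Eqs. (80)–(82) the mixture reproduces the mean and covariance of the original component, consistent with Eqs. (77) and (78).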

Raitoharju et al. (2015) use the following notation to denote a random variable \({\varvec{x}}_{{\varvec{BinoGM}}}\) distributed according to the above Binomial Gaussian mixture:

$$\begin{aligned} {\varvec{x}}_{{\varvec{BinoGM}}}\sim BinoGM(\mu ,T,\Sigma ,m_1,\dots ,m_n)\;\;\text {where}\;\; \Sigma =\text {diag}(\sigma ^2_1,\dots ,\sigma ^2_n). \end{aligned}$$
(76)

They show that

$$\begin{aligned}&E({\varvec{x}}_{{\varvec{BinoGM}}})=\mu , \end{aligned}$$
(77)
$$\begin{aligned}&cov({\varvec{x}}_{{\varvec{BinoGM}}})=P_0=T(\Sigma +I)T', \end{aligned}$$
(78)
$$\begin{aligned}&\lim _{m_1,\dots ,m_n\rightarrow \infty }BinoGM(\mu ,T,\Sigma ,m_1,\dots ,m_n)=N(\mu ,T(\Sigma +I)T'). \end{aligned}$$
(79)

Equation (79) implies that the Binomial Gaussian mixture converges to the original Gaussian distribution in the limit. In splitting a mixture component, the parameters are chosen such that the mean and covariance are preserved. The mean is preserved by choosing \(\mu \) in (73) to be the mean of the mixture component. If the component covariance matrix is \(P_0\) and it is split into smaller components with covariance P, the matrices T and \(\Sigma \) must be chosen so that

$$\begin{aligned} TT'= & {} P, \end{aligned}$$
(80)
$$\begin{aligned} T\Sigma T'+P= & {} P_0. \end{aligned}$$
(81)

The parameters of the Binomial Gaussian mixture are chosen so that (i) it is a good approximation of the original mixture component, (ii) the nonlinearity is below a predetermined threshold \(\eta _{limit}\), (iii) two equally weighted adjacent components produce a unimodal pdf, and (iv) the number of components is minimized (see the Appendix in Raitoharju et al. 2015). Under these conditions, the matrix T is chosen as

$$\begin{aligned} T=L(P_0)V\text {diag}\left[ \frac{1}{\sqrt{\sigma _1^2+1}}, \frac{1}{\sqrt{\sigma _2^2+1}},\dots ,\frac{1}{\sqrt{\sigma _n^2+1}}\right] , \end{aligned}$$
(82)

where \(L(P_0)\) is the square root matrix of \(P_0\) and \(\sigma _i^2=m_i-1\). The \(m_i\) satisfy \(\frac{\lambda _i^2}{m_i^2}=\frac{\lambda _j^2}{m_j^2}\), subject to \(m_i\ge 1\) and to \(\frac{1}{R}\sum ^n_{i=1}\frac{\lambda _i^2}{m_i^2}\) not exceeding the predetermined nonlinearity threshold \(\eta _{limit}\), and \(\lambda _i\) is the ith eigenvalue in the eigendecomposition

$$\begin{aligned} V\Lambda V'=\frac{Q^{l}}{h^2}, \end{aligned}$$
(83)

where l indicates the lth measurement equation, h is a scaling parameter that determines the spread of the sigma-points around the predictive or filtered mean, and \(Q^{l}\) is defined as follows:

$$\begin{aligned} Q^{l}_{[c,d]}=\left\{ \begin{array}{ll} g_l(\chi ^a_{t|t-1}+h\text {S}^a_{c,t|t-1})+g_l(\chi ^a_{t|t-1}-h\text {S}^a_{c,t|t-1})-2g_l(\chi ^a_{t|t-1}) &{} c=d \\ \frac{1}{2}\big [g_l(\chi ^a_{t|t-1}+h\text {S}^a_{c,t|t-1}+h\text {S}^a_{d,t|t-1})+g_l(\chi ^a_{t|t-1}-h\text {S}^a_{c,t|t-1}-h\text {S}^a_{d,t|t-1})-2g_l(\chi ^a_{t|t-1})-Q^{l}_{[c,c]}-Q^{l}_{[d,d]}\big ] &{} c\ne d, \\ \end{array} \right. \end{aligned}$$

where the function g describes the measurement equation. All notation follows the definition of the sigma-points in Eq. (7) of Sect. 2.2.1.
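
A minimal sketch of how these quantities could be computed, assuming the lth measurement function \(g_l\) is available as a scalar-valued Python callable: `nonlinearity_matrix` evaluates \(Q^{l}\) and its eigendecomposition (Eq. 83), and `split_transform` forms T as in Eq. (82) once the \(m_i\) have been chosen. The names are illustrative.

```python
import numpy as np

def nonlinearity_matrix(g_l, chi, S, h):
    """Q^l from the second differences above; chi is the predictive/filtered mean,
    the columns of S are the sigma-point directions, h is the spread parameter."""
    n = chi.size
    Q = np.zeros((n, n))
    for c in range(n):
        Q[c, c] = g_l(chi + h * S[:, c]) + g_l(chi - h * S[:, c]) - 2.0 * g_l(chi)
    for c in range(n):
        for d in range(c + 1, n):
            Q[c, d] = Q[d, c] = 0.5 * (
                g_l(chi + h * S[:, c] + h * S[:, d])
                + g_l(chi - h * S[:, c] - h * S[:, d])
                - 2.0 * g_l(chi) - Q[c, c] - Q[d, d])
    eigvals, V = np.linalg.eigh(Q / h ** 2)    # V Lambda V' = Q^l / h^2, Eq. (83)
    return Q, eigvals, V

def split_transform(P0, V, ms):
    """T = L(P0) V diag(1/sqrt(sigma_i^2 + 1)) with sigma_i^2 = m_i - 1, Eq. (82),
    so each diagonal entry is 1/sqrt(m_i)."""
    L0 = np.linalg.cholesky(P0)                # one valid square-root matrix of P0
    return L0 @ V @ np.diag(1.0 / np.sqrt(np.asarray(ms, dtype=float)))
```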

Proof

Proof of Corollary 1

We show that the acceptance ratio of the pseudo-marginal MH algorithm converges to that of the ideal marginal MH algorithm:

$$\begin{aligned} \hat{r}^{\text {GMF}}(\theta ^*|\theta )= & {} \frac{\hat{p}^{\text {GMF}} (y_{1:T}|\theta ^{*})p(\theta ^{*})q(\theta |\theta ^{*})}{\hat{p}^{\text {GMF}} (y_{1:T}|\theta )p(\theta )q(\theta ^{*}|\theta )}\underset{J\rightarrow \infty }{\longrightarrow }r(\theta ^*|\theta )\nonumber \\= & {} \frac{p(y_{1:T}|\theta ^{*})p(\theta ^{*})q(\theta |\theta ^{*})}{p(y_{1:T}|\theta )p(\theta )q(\theta ^{*}|\theta )}, \end{aligned}$$
(84)

where J is the total number of Gaussian mixture components of the marginal likelihood. This follows directly from Theorem 12 of Ali-Löytty (2008), which concerns the convergence of the GMF. Under the assumption that the Gaussian mixture prediction and filtering densities weakly converge to the true densities, the marginal likelihood implied by the GMF weakly converges to the true marginal likelihood as the number of mixture components J goes to infinity:

$$\begin{aligned} \begin{aligned} \hat{p}^{\text {GMF}}(y_t|y_{1:t-1};\theta )&=\sum ^{L'}_{l'=1}\sum ^N_{n=1}w^{[l']}_{t}\gamma ^{[n]}_{t}N(\hat{y}^{[l'\times n]}_{t|t-1},P^{y,[l'\times n]}_{t|t-1})\\&=\sum ^{J}_{i=1}l^{[i]}_{t}N(\hat{y}^{[i]}_{t|t-1},P^{y,[i]}_{t|t-1}) \underset{J\rightarrow \infty }{\longrightarrow }p(y_t|y_{1:t-1};\theta ), \end{aligned} \end{aligned}$$
(85)

where \(J=L'N\) is the total number of Gaussian mixture components of the marginal likelihood and \(l^{[i]}_{t}=w^{[l']}_{t}\gamma ^{[n]}_{t}\). As a result, the realizations of the GMF with PM-MH converge to the true stationary density. \(\square \)
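
In computational terms, Eq. (85) is simply a finite Gaussian mixture evaluated at the observation; the following minimal Python sketch (with illustrative names) makes this explicit.

```python
import numpy as np
from scipy.stats import multivariate_normal

def gmf_conditional_likelihood(y_t, weights, means, covs):
    """p(y_t | y_{1:t-1}; theta) implied by the GMF, Eq. (85): a weighted sum of
    Gaussian densities over the J mixture components of the predicted observation."""
    return sum(w * multivariate_normal.pdf(y_t, mean=m, cov=P)
               for w, m, P in zip(weights, means, covs))
```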

Proof of Corollary 2

Assumption 2 ensures that the drift function V satisfies

$$\begin{aligned} \Vert \hat{\mathbf {P}}^{GMF,i}(A|\theta _0)-\pi (A)\Vert \le RV(\theta _0)\rho ^i \quad \text {for any}\ \theta _0\in \Theta ,\ \text {as}\ J\rightarrow \infty , \end{aligned}$$
(86)

where \(V(\cdot )\) denotes a drift function satisfying (41), \(\rho <1\), and \(R<\infty \). J is the total number of mixture components of the marginal likelihood and \(\Theta \) is the support set of the true posterior \(p(\theta |y_{1:T})\). The proof is a direct consequence of the Gaussian mixture approximation in Corollary 1 and Theorem 3.1 in Jarner and Hansen (2000). \(\square \)

Proof of Corollary 3

Under the regularity conditions of Theorem 2 in Müller (2013), the asymptotic posterior normality in part (a) for the Gaussian-based filters follows from that theorem. Part (b) follows from Corollary 1, which states that the realizations of the GMF with PM-MH converge to the true stationary density when the likelihood constructed from infinitely many Gaussian mixture components weakly converges to the true likelihood.

Second-Order Approximation for Calibrated Parameters

Benchmark Case

$$\begin{aligned} G_{x}= & {} {\begin{bmatrix} 0.55989&\quad 0.37427&\quad -0.8481 \\ -0.8026&\quad 4.08271&\quad 4.17808 \\ -0.1348&\quad 0.367&\quad 0.4974 \end{bmatrix}}\quad H_{x} = {\begin{bmatrix} 0.95494&\quad 0.10207&\quad 0.12945 \\ 8.99E-17&\quad 0.95&\quad 0 \\ 2.87E-17&\quad 1.25E-17&\quad 0.72 \end{bmatrix}}\\ G_{xx}= & {} {\begin{bmatrix} 0.02532&\quad -0.0501&\quad -0.0361&\quad -0.0501&\quad 0.06407&\quad 0.05009&\quad -0.0361&\quad 0.05009&\quad 0.04847 \\ -0.8661&\quad 2.26471&\quad 2.91292&\quad 2.26471&\quad -5.808&\quad -7.6907&\quad 2.91292&\quad -7.6907&\quad -10.328 \\ -0.0258&\quad 0.05932&\quad 0.06176&\quad 0.05932&\quad -0.119&\quad -0.1398&\quad 0.06176&\quad -0.1398&\quad -0.1781 \end{bmatrix}}\\ H_{xx}= & {} {\begin{bmatrix} 0.01795&\quad -0.0331&\quad -0.0409&\quad -0.0331&\quad 0.05795&\quad 0.06538&\quad -0.0409&\quad 0.06538&\quad 0.06859 \\ 0&\quad 0&\quad 0&\quad 0&\quad 0&\quad 0&\quad 0&\quad 0&\quad 0 \\ 0&\quad 0&\quad 0&\quad 0&\quad 0&\quad 0&\quad 0&\quad 0&\quad 0 \end{bmatrix}}\\ g_{\sigma \sigma }= & {} {\begin{bmatrix} 0.00209 \\ -0.0103 \\ -0.0012 \end{bmatrix}}\quad h_{\sigma \sigma } = {\begin{bmatrix} -0.0003 \\ 0 \\ 0 \end{bmatrix}} \end{aligned}$$

Extremely Nonlinear Case

$$\begin{aligned} G_{x}= & {} {\begin{bmatrix} 0.49684&\quad 0.50058&\quad -0.757 \\ -1.6118&\quad 6.81266&\quad 8.81103 \\ -0.4907&\quad 1.46888&\quad 2.22657 \end{bmatrix}}\quad H_{x} = {\begin{bmatrix} 0.93471&\quad 0.17032&\quad 0.24528 \\ -1.03E-17&\quad 0.95&\quad -2.22E-16 \\ -3.73E-18&\quad 9.42E-18&\quad 0.72 \end{bmatrix}}\\ G_{xx}= & {} {\begin{bmatrix} 0.03112&\quad -0.0737&\quad -0.0712&\quad -0.0737&\quad 0.12288&\quad 0.1261&\quad -0.0712&\quad 0.1261&\quad 0.15003 \\ -2.078&\quad 5.99445&\quad 8.61418&\quad 5.99445&\quad -16.805&\quad -24.773&\quad 8.61418&\quad -24.773&\quad -37.073 \\ -0.0968&\quad 0.23281&\quad 0.23351&\quad 0.23281&\quad -0.409&\quad -0.443&\quad 0.23351&\quad -0.443&\quad -0.5506 \end{bmatrix}}\\ H_{xx}= & {} {\begin{bmatrix} 0.03119&\quad -0.067&\quad -0.0969&\quad -0.067&\quad 0.14553&\quad 0.19527&\quad -0.0969&\quad 0.19527&\quad 0.24629 \\ 0&\quad 0&\quad 0&\quad 0&\quad 0&\quad 0&\quad 0&\quad 0&\quad 0 \\ 0&\quad 0&\quad 0&\quad 0&\quad 0&\quad 0&\quad 0&\quad 0&\quad 0 \end{bmatrix}}\\ g_{\sigma \sigma }= & {} {\begin{bmatrix} 0.00312 \\ -0.0363 \\ -0.0092 \end{bmatrix}}\quad h_{\sigma \sigma } = {\begin{bmatrix} -0.0009 \\ 0 \\ 0 \end{bmatrix}} \end{aligned}$$

Nearly Linear Case

$$\begin{aligned} G_{x}= & {} {\begin{bmatrix} 0.58795&\quad 0.32678&\quad -0.8758 \\ -0.5386&\quad 3.26686&\quad 2.94909 \\ -0.0102&\quad 0.02658&\quad 0.03458 \end{bmatrix}}\quad H_{x} = {\begin{bmatrix} 0.96154&\quad 0.08167&\quad 0.09873 \\ 3.06E-17&\quad 0.95&\quad 3.33E-16 \\ 9.29E-18&\quad 4.74E-18&\quad 0.72 \end{bmatrix}}\\ G_{xx}= & {} {\begin{bmatrix} 0.02472&\quad -0.0452&\quad -0.0291&\quad -0.0452&\quad 0.0553&\quad 0.0392&\quad -0.0291&\quad 0.0392&\quad 0.03611 \\ -0.5688&\quad 1.41942&\quad 1.74664&\quad 1.41942&\quad -3.4939&\quad -4.4351&\quad 1.74664&\quad -4.4351&\quad -5.7198 \\ -0.0011&\quad 0.00198&\quad 0.00141&\quad 0.00198&\quad -0.0027&\quad -0.0022&\quad 0.00141&\quad -0.0022&\quad -0.0023 \end{bmatrix}}\\ H_{xx}= & {} {\begin{bmatrix} 0.01463&\quad -0.0258&\quad -0.0304&\quad -0.0258&\quad 0.04272&\quad 0.04635&\quad -0.0304&\quad 0.04635&\quad 0.04707 \\ 0&\quad 0&\quad 0&\quad 0&\quad 0&\quad 0&\quad 0&\quad 0&\quad 0 \\ 0&\quad 0&\quad 0&\quad 0&\quad 0&\quad 0&\quad 0&\quad 0&\quad 0 \end{bmatrix}}\\ g_{\sigma \sigma }= & {} {\begin{bmatrix} 1.98E-05 \\ -6.67E-05 \\ -7.82E-07 \end{bmatrix}}\quad h_{\sigma \sigma } = {\begin{bmatrix} -1.67E-06 \\ 0 \\ 0 \end{bmatrix}} \end{aligned}$$

Benchmark with Stochastic Volatility

See Fig. 6 and Table 14.

Fig. 6 Posterior distribution with \(CA=0.1\). Notes: The vertical dashed line represents true parameter values.

Table 14 Posterior distribution with \(CA=0.1\)

Data Sources over 1991Q1–2015Q3

  1. Real Gross Domestic Product, BEA, NIPA Table 1.1.6, line 1.

  2. Gross Domestic Product, BEA, NIPA Table 1.1.5, line 1.

  3. Personal Consumption Expenditure on Nondurable Goods, BEA, NIPA Table 1.1.5, line 5.

  4. Personal Consumption Expenditure on Services, BEA, NIPA Table 1.1.5, line 6.

  5. Gross Private Domestic Investment, Fixed Investment, BEA, NIPA Table 1.1.5, line 8.

  6. GDP Deflator = Gross Domestic Product / Real Gross Domestic Product = \(\#\)2/\(\#\)1.

  7. Real Consumption = (Personal Consumption Expenditure on Nondurable Goods + Services)/GDP Deflator = (\(\#\)3 + \(\#\)4)/\(\#\)6.

  8. Real Investment = (Total private fixed investment + consumption expenditures on durable goods)/GDP Deflator = \(\#\)5/\(\#\)6.

  9. Total hours is measured as total hours in the non-farm business sector, available from the BLS.


Cite this article

Noh, S. Posterior Inference on Parameters in a Nonlinear DSGE Model via Gaussian-Based Filters. Comput Econ 56, 795–841 (2020). https://doi.org/10.1007/s10614-019-09944-5
