1 Introduction

The contribution of Vector Autoregressive (VAR) models to modern macroeconometrics can hardly be overstated: starting from the seminal article by Sims (1980), VARs have become an unrivaled tool for macroeconomic analysis (Christiano 2002; Del Negro et al. 2007; Giacomini 2013), with special attention devoted to monetary policy (Bernanke et al. 1997; Bernanke and Mihov 1998; Sims and Zha 2006a, b).

However, the classical or frequentist approach, despite its ease of use, exposes the practitioner to the threat of over-parametrization: the moderate size of macroeconomic datasets, coupled with the abundance of series to consider (including their lags), requires some form of regularization, which is naturally addressed within the Bayesian framework via prior distributions (Koop and Korobilis 2010). The celebrated Minnesota prior (Litterman 1986) is the best-known example, but it is also only the tip of the iceberg: over time, Bayesian analysis of VARs has grown and improved in the direction of i) new priors catering to specific needs (e.g., lag selection, as in George et al. (2008) and Korobilis (2013), or hyperparameter tuning, as in Giannone et al. (2015)); ii) forecasting (Karlsson 2013); iii) structural analysis (Sims and Zha 1998; Waggoner and Zha 2003; Rubio-Ramirez et al. 2010; Baumeister and Hamilton 2015); iv) accommodating the time-varying nature of parameters (Cogley and Sargent 2001, 2005; Primiceri 2005).

As a consequence, dedicated software for Bayesian VAR inference is essential for econometricians. Examples include MATLAB, with the BEAR toolbox (Dieppe et al. 2016) and the recent Empirical Macro Model toolkit by Canova and Ferroni (2022); other alternatives are available as well, such as the Bayesian functionalities provided in STATA from release 17 onwards, the EViews built-in commands, or the R packages bvar (Kuschnig and Vashold 2021) and bvartools (Mohr 2023).

gretl, for its part, provides many utilities for VARs, either via internal commands (e.g., var, vecm) and functions (e.g., vma, varsimul, irf) or via the more accessible graphical interface: the routines include, among others, lag selection procedures, Granger-causality tests, forecasting, structural analysis via short-run restrictions (Cholesky identification) and cointegration analysis. Additional functionalities can be retrieved via packages: the SVAR addon (Lucchetti and Schreiber 2023), for example, complements the native structural analysis with various identification schemes, such as long-run or sign restrictions, while the MSVAR package (Schreiber 2021) provides Markov-Switching VAR models. The common denominator is the predominant use of frequentist techniques; only recently has the BVAR package (Pedini and Schreiber 2024) brought attention to Bayesian techniques. This contribution, however, is still preliminary and its applicability to real data is little documented.

To offer more rigorous guidance to gretl users, this paper proposes a replication exercise of Kilian and Lütkepohl (2017), Chapter 5, in which a Bayesian VAR on the main U.S. macroeconomic variables (Koop and Korobilis 2010) is first estimated under different prior setups and Cholesky-identified impulse responses are then produced. Bayesian computation was originally performed via an independent software implementation, which has now become part of the BVAR package. For this reason, I refer to the package functionalities, but all computing and software technicalities will be minimized in favor of an applied and didactic perspective (Footnote 1). The reported exercise aims to offer first valuable guidance for gretl users in understanding and applying the Bayesian paradigm in simple, but often useful, contexts. A generalization to other applications or data is immediate.

The rest of the paper is organized as follows: Sect. 2 describes Bayesian VAR models alongside their main features in terms of prior distributions and posterior simulation; Sect. 3 reports the proposed replication exercise; Sect. 4 concludes.

2 Statistical background

Define a VAR(p) model as:

$$\begin{aligned} y_t = a_0 + \sum _{j=1}^p A_jy_{t-j} + \varepsilon _t, \end{aligned}$$
(1)

where \(y_t\) is an \(m \times 1\) vector of observable variables at time t and \(\varepsilon _t\) is the vector of disturbances; \(a_0\) and \(A_j\), respectively an \(m \times 1\) vector and an \(m \times m\) matrix, are the coefficients of interest. For notational convenience, define from now on \(A = [a_0, A_1, \ldots , A_p ]\). Under the assumption that \(\varepsilon _t \sim N({\textbf{0}}, \Sigma )\) i.i.d., i.e., normally distributed errors with zero mean and variance-covariance matrix \(\Sigma \), the likelihood \({\mathcal {L}}(y \vert A, \Sigma )\) is Gaussian and attains its maximum at the least squares estimator \({\hat{A}}\).

The Bayesian framework replaces the centrality of classical estimators with that of posterior distributions, namely \(P(A, \Sigma \vert y)\). Along these lines, posteriors are obtained by combining the likelihood with prior distributions \(P(A, \Sigma )\); in symbols,

$$\begin{aligned} P(A, \Sigma \vert y) \propto {\mathcal {L}}(y \vert A, \Sigma ) P(A, \Sigma ) \end{aligned}$$

Priors embody the subjective contribution of the practitioner and should reflect theoretical beliefs about the long-run properties of the data. However, to make posterior computation feasible, some standardized setups are generally applied, with the major distinction being whether the error covariance \(\Sigma \) is treated as deterministic or as a random variable.

In accordance with the replication exercise in Sect. 3, a brief summary of the priors involved is reported here: Sect. 2.1 considers the case of a deterministic \(\Sigma \) with normally distributed regressor coefficients under the so-called Minnesota prior; Sect. 2.2, instead, introduces a prior for \(\Sigma \), leading to two major scenarios, the conjugate prior setup and the independent one. Finally, Sect. 2.3 briefly describes the derivation of impulse responses (structural and otherwise) in a Bayesian framework.

2.1 Fixed-\(\Sigma \) case

Assume \(\Sigma \) is deterministic and known a priori. In particular, \(\Sigma \) may be replaced by a valid frequentist estimator \({\hat{\Sigma }}\) which, depending on the application at hand, corresponds to i) a diagonal matrix of residual variances from single-equation autoregressive (AR) models; ii) the diagonal elements of the VAR residual covariance matrix; iii) the full VAR residual covariance matrix.

Given this premise, the only parameter of interest becomes A; to simplify the exposition, suppose we work with the vectorized parameters, i.e., \(\alpha = vec(A)\). The canonical framework relies on

$$\begin{aligned} \underset{M \times 1}{\alpha }\ \sim N({\underline{\alpha }}, {\underline{V}}) \end{aligned}$$
(2)

where \(M = m(1 + mp)\) and \(N(\mu , V)\) denotes a multivariate Gaussian distribution with mean \(\mu \) and variance-covariance matrix V. The most popular choice for the prior parameters follows Litterman (1986), who proposes

$$\begin{aligned} {\underline{\alpha }}_{[i]}&= {\left\{ \begin{array}{ll} 1 &{} \text {for} \, \textrm{E}(A_{1[ll]}), \\ 0 &{} \text {otherwise} \end{array}\right. } \end{aligned}$$
(3)
$$\begin{aligned} {\underline{V}}_{[ii]}&= {\left\{ \begin{array}{ll} (\frac{\pi _1}{j^{\pi _3}})^2 &{} \text {for} \, \textrm{V}(A_{j[ll]}), \\ (\frac{\pi _1\pi _2\sigma _l}{j^{\pi _3}\sigma _q})^2 &{} \text {for} \, \textrm{V}(A_{j[lq]}), \quad l \ne q \\ (\pi _1\pi _4)^2\sigma _l^2 &{} \text {for} \, \textrm{V}(a_{0[l]}) \end{array}\right. } \end{aligned}$$
(4)

with \(l = 1, \ldots , m\) and \(i = 1, \ldots , M\). The notation \({\textbf{a}}_{[i]}\) and \(A_{[lq]}\) identifies, respectively, the i-th element of the vector \({\textbf{a}}\) and the lq-th element of the matrix A, while the operators \(\textrm{E}(\cdot )\) and \(\textrm{V}(\cdot )\) denote the expected value and the variance. The scalars \(\pi _1, \pi _2, \pi _3, \pi _4\) are left to the user, with \(\sigma _l^2\) commonly replaced by the least squares residual variance of an autoregressive model for variable l. Clearly, the matrix \({\underline{V}}\) is diagonal. The above specification is universally known as the Minnesota prior, and much of its success derives from a combination of two factors: its interpretation, in line with most macroeconometric stylized facts (unit-root series with a declining impact of more distant lags), and its great computational tractability.
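As a concrete illustration, the construction of \({\underline{\alpha }}\) and of the diagonal of \({\underline{V}}\) can be sketched in hansl as follows. This is a minimal sketch, not the BVAR package code, and it assumes the ordering \(\alpha = vec(A)\) with \(A = [a_0, A_1, \ldots , A_p]\), i.e., column-wise stacking of the \(m \times (1+mp)\) coefficient matrix; the vector hyp collects \((\pi _1, \pi _2, \pi _3, \pi _4)\) and sig2 the variances \(\sigma ^2_l\):

```
# Minnesota prior moments, Eqs. (3)-(4) (illustrative sketch)
function matrices minnesota_prior (int m, int p, const matrix sig2, \
                                   const matrix hyp)
    scalar M = m * (1 + m*p)
    matrix a = zeros(M, 1)    # prior mean of alpha
    matrix v = zeros(M, 1)    # diagonal of the prior variance
    loop l = 1..m
        v[l] = (hyp[1]*hyp[4])^2 * sig2[l]           # intercepts a_0[l]
    endloop
    loop j = 1..p                                    # lag j
        loop q = 1..m                                # shocked variable q
            scalar col = 1 + (j-1)*m + q             # column of A
            loop l = 1..m                            # equation l
                scalar idx = (col-1)*m + l
                if l == q
                    a[idx] = (j == 1) ? 1 : 0        # random-walk mean
                    v[idx] = (hyp[1] / j^hyp[3])^2
                else
                    v[idx] = (hyp[1]*hyp[2] / j^hyp[3])^2 * sig2[l]/sig2[q]
                endif
            endloop
        endloop
    endloop
    return defarray(a, v)
endfunction
```

For the exercise of Sect. 3, for instance, m = 3, p = 4 and hyp = {0.1, 0.5, 1, 1000} would reproduce the first hyperparameter setup.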

The final implication of the framework pertains to the posteriors: with \(\alpha \) as the sole parameter of interest, \(P(\alpha \vert y)\) is normal with easily derivable moments.
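For reference, stacking the sample as \(Y = [y_1, \ldots , y_T]'\) with X holding the corresponding regressor rows \((1, y'_{t-1}, \ldots , y'_{t-p})\), the moments take the standard form (see, e.g., Koop and Korobilis 2010; the precise expression depends on the ordering convention adopted for \(\alpha \)):

$$\begin{aligned} \alpha \vert y \sim N({\bar{\alpha }}, {\bar{V}}), \quad {\bar{V}} = \left( {\underline{V}}^{-1} + \Sigma ^{-1} \otimes X'X \right) ^{-1}, \quad {\bar{\alpha }} = {\bar{V}} \left( {\underline{V}}^{-1}{\underline{\alpha }} + (\Sigma ^{-1} \otimes X')\, vec(Y) \right) . \end{aligned}$$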

2.2 Priors for \(\Sigma \)

Assume this time that \(\Sigma \) is a random variable, in particular,

$$\begin{aligned} \underset{m \times m}{\Sigma }\ \sim IW({\underline{S}}, {\underline{\nu }}) \end{aligned}$$
(5)

where \(IW(S,\nu )\) denotes an Inverse Wishart distribution with scale S and \(\nu \) degrees of freedom. Once again, the prior hyperparameters are left to the user, with common choices for the scale being the identity matrix or a diagonal matrix holding the residual variances from single-equation AR models; \({\underline{\nu }}\), instead, is generally set to \(m + 2\).

Introducing a prior for \(\Sigma \) leads to defining a joint distribution \(P(\alpha , \Sigma )\), which could be further decomposed into the alternatives

  • \(P(\alpha \vert \Sigma )P(\Sigma )\);

  • \(P(\alpha )P(\Sigma )\).

The former case is often associated with the so-called conjugate Normal-Inverse Wishart (NIW) setup, the latter with the independent NIW. The conjugate NIW postulates the following prior specification:

$$\begin{aligned} \alpha \vert \Sigma&\sim N({\underline{\alpha }}, \Sigma \otimes \underline{{\underline{V}}}) \\ \Sigma&\sim IW({\underline{S}}, {\underline{\nu }}) \end{aligned}$$

where the parameters \({\underline{\alpha }}, {\underline{S}}, {\underline{\nu }}\) are to be interpreted as before, while \(\underline{{\underline{V}}}\) is a \((1 + mp) \times (1 + mp)\) diagonal matrix; \(\otimes \) denotes the Kronecker product. Unlike in Eq. (2), \(\underline{{\underline{V}}}\) now represents the variance of a single equation in the VAR, meaning that a simple transposition of the Minnesota structure is not viable. A solution is provided in Karlsson (2013), who adapts the Minnesota framework to the conjugate NIW, but an equally popular alternative relies on ridge priors, i.e., \(\underline{{\underline{V}}} = \eta \textrm{I}\), with \(\textrm{I}\) the identity matrix (Footnote 2) and \(\eta \) a shrinkage factor. Posteriors are again available in closed form, with \(\alpha \vert \Sigma , y\) normal and \(\Sigma \vert y\) Inverse Wishart.
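For completeness, with the sample stacked as above and the coefficients arranged as the \((1+mp) \times m\) matrix \(A'\) (with prior mean \({\underline{A}}\)), the standard moments read (see, e.g., Karlsson 2013):

$$\begin{aligned} \Sigma \vert y&\sim IW({\bar{S}}, {\underline{\nu }} + T), \qquad \alpha \vert \Sigma , y \sim N(vec({\bar{A}}), \Sigma \otimes {\bar{V}}), \\ {\bar{V}}&= \left( \underline{{\underline{V}}}^{-1} + X'X \right) ^{-1}, \qquad {\bar{A}} = {\bar{V}} \left( \underline{{\underline{V}}}^{-1}{\underline{A}} + X'Y \right) , \\ {\bar{S}}&= {\underline{S}} + Y'Y + {\underline{A}}'\, \underline{{\underline{V}}}^{-1} {\underline{A}} - {\bar{A}}'\, {\bar{V}}^{-1} {\bar{A}}. \end{aligned}$$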

The independent NIW configuration, instead, relaxes the Kronecker structure on the \(\alpha \)-prior by postulating its independence from \(\Sigma \). In other words,

$$\begin{aligned} P(\alpha , \Sigma )&= P(\alpha )P(\Sigma )\\ \alpha&\sim N({\underline{\alpha }}, {\underline{V}}) \\ \Sigma&\sim IW({\underline{S}}, {\underline{\nu }}) \end{aligned}$$

The improvement concerns the flexibility in eliciting priors: the parameter \(\alpha \) follows de facto Eq. (2), allowing for arbitrary choices of the covariance matrix. The price is that closed-form posteriors are no longer available: their derivation requires Markov Chain Monte Carlo (MCMC) methods, and a Gibbs sampler which loops through the conditional posteriors (\(\alpha \vert \Sigma , y\) is normally distributed and \(\Sigma \vert \alpha , y\) is Inverse Wishart) can easily be devised.
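To fix ideas, a minimal, self-contained hansl sketch of such a sampler is given below. It is an illustration of the algorithm, not the BVAR package's internal code, and it assumes \(\alpha \) is ordered as \(vec\) of the \((1+mp) \times m\) coefficient matrix (all coefficients of equation 1 first, then equation 2, and so on); the burn-in is expressed as a number of draws rather than as a percentage:

```
# Gibbs sampler for the independent NIW prior (illustrative sketch).
# Y: T x m data matrix; X: T x k regressor matrix, k = 1 + m*p;
# a0, V0: prior mean/variance of alpha; S0, v0: prior scale/dof of Sigma
# (v0 > m + 1 is assumed); iter: total draws; burn: draws to discard.
function bundle niw_gibbs (const matrix Y, const matrix X, \
                           const matrix a0, const matrix V0, \
                           const matrix S0, scalar v0, \
                           int iter, int burn)
    scalar T = rows(Y)
    scalar m = cols(Y)
    scalar k = cols(X)
    matrix XX = X'X
    matrix XY = X'Y
    matrix V0i = inv(V0)
    matrix Sigma = S0 / (v0 - m - 1)     # initialize at the prior mean
    matrices Adraw = array(iter - burn)
    matrices Sdraw = array(iter - burn)
    loop i = 1..iter
        # alpha | Sigma, y ~ N(abar, Vbar)
        matrix Si = inv(Sigma)
        matrix Vbar = inv(V0i + Si ** XX)
        matrix abar = Vbar * (V0i*a0 + vec(XY*Si))
        matrix alpha = abar + cholesky(Vbar) * mnormal(k*m, 1)
        matrix A = mshape(alpha, k, m)
        # Sigma | alpha, y ~ IW(S0 + E'E, v0 + T)
        matrix E = Y - X*A
        Sigma = iwishart(S0 + E'E, v0 + T)
        if i > burn
            Adraw[i-burn] = A
            Sdraw[i-burn] = Sigma
        endif
    endloop
    bundle ret
    ret.Adraw = Adraw
    ret.Sdraw = Sdraw
    return ret
endfunction
```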

2.3 Impulse response functions

Deriving posteriors for functions of \((\alpha , \Sigma )\) is almost universally accomplished via simulation, regardless of the availability of closed-form distributions: impulse response functions (IRFs) for dynamic analysis are a typical example.

The procedure is summarized in Algorithms 1-2: Algorithm 1 describes the Monte Carlo experiment for drawing the \((\alpha , \Sigma )\) parameters in the previously considered scenarios. Note that a generic draw at iteration i is expressed here with the superscript (i): \(\alpha ^{(i)}\), for example, is to be read as the sampled vector \(\alpha \) at iteration i.

Algorithm 1: Sampling from the posterior distributions

Recall that IRFs are defined as the response of \(y_{t+h}\) to a one-time shock in \(\varepsilon _{t}\),

$$\begin{aligned} C_{h} = \frac{\partial y_{t+h}}{\partial \varepsilon _t} \end{aligned}$$

where \(C_h\) contains the IRFs at horizon h. Note that \(C_{h[lq]}\) identifies the response of variable \(y_{t+h[l]}\) to a shock in \(\varepsilon _{t[q]}\), with \( l, \, q = 1, \ldots , m\).

A practical solution for computing \(C_{h[lq]}\) is to simulate a VAR process for h periods, setting as initial conditions \(\varepsilon _{0[q]} = 1, \, \varepsilon _{0[s]} = 0,\, s \ne q\) and \(y_{-1} = y_{-2} = \ldots = y_{-p} = {\textbf{0}}\). The details are reported in Algorithm 2, where the values for A are provided by the previous posterior simulations. Simple IRFs are rarely of practical interest per se: far more common is the use of structural IRFs (SIRFs). In the following, I briefly introduce one of the possible identification schemes for pursuing structural analysis, namely Cholesky identification, avoiding any unnecessary detail. For a scrupulous study of structural VARs and identification schemes, I refer the reader to specialized textbooks such as Kilian and Lütkepohl (2017).

Algorithm 2: Sampling impulse response functions

SIRFs with a Cholesky identification scheme are obtained by rewriting the error term as

$$\begin{aligned} \varepsilon _t&= Bu_t, \quad u_t \sim N({\textbf{0}}, \mathrm {I_m}) \\ D_h&= C_hB = \frac{\partial y_{t+h}}{\partial u_t} \end{aligned}$$

where \(u_t\) is the vector of structural shocks and B a lower-triangular matrix containing the simultaneous effects among the variables (note that \(\Sigma = BB'\), so B is the Cholesky factor of \(\Sigma \)); \(D_h\) is the structural counterpart of \(C_h\). Clearly, working with \(u_t\) instead of \(\varepsilon _t\) allows one to disentangle the shocks from the correlation effects, thereby achieving a more authentic measure of the responses. Algorithm 2 can be revised to account for this change by retrieving the Cholesky decomposition of the sampled \(\Sigma ^{(i)}\), namely \(B^{(i)}\), and post-multiplying each \(C_h^{(i)}\) by this term in the IRF step.
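As an illustration, the IRF step for a single posterior draw can be sketched in hansl as follows. One deliberate deviation from Algorithm 2: instead of literally simulating the process with a unit impulse, the sketch uses the algebraically equivalent recursion \(C_0 = \textrm{I}_m\), \(C_h = \sum _{j=1}^{\min (h,p)} A_j C_{h-j}\); the function name and interface are illustrative only:

```
# SIRFs for one posterior draw (illustrative; not the package code).
# Aj: array holding the p (m x m) lag matrices A_1, ..., A_p;
# Sigma: the corresponding draw of the error covariance; h: horizon.
function matrices sirf_draw (const matrices Aj, const matrix Sigma, int h)
    scalar m = rows(Sigma)
    scalar p = nelem(Aj)
    matrix B = cholesky(Sigma)        # lower triangular, Sigma = B*B'
    matrices C = array(h + 1)
    C[1] = I(m)                       # C_0 = identity
    loop i = 1..h
        matrix Ci = zeros(m, m)
        loop j = 1..xmin(i, p)
            Ci += Aj[j] * C[i-j+1]    # C_i = sum_j A_j * C_{i-j}
        endloop
        C[i+1] = Ci
    endloop
    matrices D = array(h + 1)
    loop i = 1..(h+1)
        D[i] = C[i] * B               # Cholesky identification: D_h = C_h B
    endloop
    return D
endfunction
```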

3 Replication exercise

In this section I replicate the Bayesian VAR examples proposed by Kilian and Lütkepohl (2017), Chapter 5. The exercise is to be intended as a how-to manual for performing simple Bayesian analyses, with particular attention to structural responses via the Cholesky identification scheme. The data come from Koop and Korobilis (2010) and include the quarterly US inflation rate from the GDP price deflator (inflation), the seasonally adjusted unemployment rate (unemployment) and the 3-month Treasury bill yield (interest) over the time span 1953q1-2006q3. The series are reported in Fig. 1.

Fig. 1: Quarterly US inflation, unemployment rate and Treasury bill interest rate over the period 1953q1-2006q3

The baseline specification is a VAR(4); three different prior setups are then introduced: the fixed-\(\Sigma \) case with a Minnesota prior for \(\alpha \), and a conjugate and an independent NIW with ridge priors. Each configuration explores various prior-hyperparameter choices, whose contribution can be observed by means of SIRFs. In particular, Kilian and Lütkepohl (2017) order the variables as inflation, unemployment, interest, and inspect the response of inflation to a shock in the interest rate (a contractionary monetary policy shock). For later comparison, the frequentist SIRF is reported in Fig. 2.

Fig. 2: Structural response of inflation to a shock in interest rate - classical VAR(4) estimates. The plot is obtained via the gretl GUI, following the path Model->Multivariate Time Series->Vector Autoregression

The replication code exploits the collection of functions provided by the BVAR package (Pedini and Schreiber 2024) (Footnote 3), but a clarification is necessary: even though I refer to its functionalities, I will avoid a comprehensive package showcase. The preliminary state of BVAR prevents any proper package presentation, which is postponed to a future contribution. The package can nonetheless be fruitfully used to introduce simple instructions for practitioners new to Bayesian VAR analysis, and the scripts can easily be extended to different data and applications.

BVAR usage hinges upon four main functions, namely BVAR_setup, BVAR_sigma_prior, BVAR_alpha_prior and BVAR_posterior. In particular, BVAR_setup(list y, lag p, string prior_type) is a setup function which computes some preliminary quantities given the list of endogenous variables y, the lag order p and the kind of prior prior_type (supported options: "fixed", "conj", "indep"). Prior customization is then delegated to BVAR_sigma_prior and BVAR_alpha_prior: both functions modify the bundle resulting from BVAR_setup with the help of dedicated options for the prior hyperparameters, and their functioning is illustrated directly in the examples reported below. Finally, BVAR_posterior incorporates the routines previously presented in Algorithms 1-2. The signature is BVAR_posterior(bundle setup, string simul, bundle opt), where setup is the previously defined setup bundle and simul governs the simulation procedure: "coeff" triggers the sole coefficient derivation, while "irf" augments the scheme with IRFs (and Cholesky-identified IRFs, following the variable order in the list y). The bundle opt allows the user to fine-tune computational details such as the number of replications (key iter, default: 10000), the burn-in percentage (key burn, default: \(10\%\) for MCMC experiments, 0 otherwise), the quantile range to report as a "confidence band" for posterior distributions (key q_range, default: 0.90) or the IRF time horizon (key irf_h, default: 24). If omitted, the default choices are used.

An auxiliary function for plotting the results is also available: BVAR_irf_plot(bundle out, list from, list to, bundle irfopt) (Footnote 4), which displays the posterior median of the sampled IRFs with a shaded area around it defined via q_range; the time horizon follows irf_h. The bundle out is the bundle returned by BVAR_posterior; the lists from and to act as selectors for the IRFs to plot: from stands for the variables whose error is perturbed, to for the outcome variables. If omitted, all combinations are shown on screen. The optional bundle irfopt controls the saving options (key save, default: display on screen), the gretl gridplot integration (key grid, default: 1), particularly useful when showing multiple IRFs, the printing of the posterior sample mean (key no_mean, default: 1, meaning the mean plot is suppressed), and the kind of IRFs to report, structural or not (key irf_type, default: s_irf).

To sum up, the workflow here adopted is the following:

Listing c
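Since the published listing is not reproduced here, the following is a minimal sketch of the same workflow, assembled only from the signatures documented above; the data file name KK2010.gdt is a placeholder for the Koop and Korobilis (2010) dataset, and the prior calls are shown in their simplest form:

```
include BVAR.gfn                     # load the package functions

open KK2010.gdt --quiet              # placeholder file name
list y = inflation unemployment interest

# 1) setup: endogenous list, lag order, prior family
bundle mod = BVAR_setup(y, 4, "fixed")

# 2) prior elicitation (specific options in the examples below)
BVAR_sigma_prior(&mod, "ADL")
BVAR_alpha_prior(&mod, "Minn")

# 3) posterior simulation, including structural IRFs
bundle opt = _(iter = 10000)         # burn, q_range, irf_h at default
bundle out = BVAR_posterior(mod, "irf", opt)

# 4) response of inflation to an interest-rate shock
list from = interest
list to = inflation
BVAR_irf_plot(out, from, to)
```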

The prior-function invocations are shown here only in their simplest form; as previously mentioned, all specific options will appear in the next examples. Note, however, how the functions in question modify the setup bundle mod in place via the pointer argument &mod (Footnote 5).

3.1 Fixed-\(\Sigma \) case

The first scenario assumes a known \(\Sigma \) obtained via single-equation Autoregressive Distributed Lag (ADL) models of order 4 (Footnote 6); the prior for \(\alpha \) is of the Minnesota type, as in Eqs. (2)-(4). Kilian and Lütkepohl (2017) distinguish two major cases, depending on how the persistence of the series is perceived. For an integrated nature, they propose a random-walk mean, i.e., \({\underline{\alpha }}\) is specified as in Eq. (3); conversely, if the long-run tendency is expected to be stationary, the white-noise mean \({\underline{\alpha }} = {\textbf{0}}\) is proposed. \({\underline{V}}\) follows Eq. (4), where the following combinations of hyperparameters are inspected:

  • \(\pi _1 = 0.1, \pi _2 = 0.5\);

  • \(\pi _1 = 0.1, \pi _2 = 1\);

  • \(\pi _1 = 1, \pi _2 = 1\).

The choices \(\pi _3=1\), \(\pi _4 = 1000\) and \(\sigma ^2_l\) from AR(4) models are adopted in all cases (Footnote 7), while posterior simulation is executed via 10000 Monte Carlo iterations.

Starting from the previous pseudo-code, the prior customization can be performed as follows:

Listing d

BVAR_sigma_prior, in the "fixed" case, is simply a utility function for computing \({\hat{\Sigma }}\): the second argument can be set to "ADL" to compute a diagonal matrix of ADL residual variances, or to "AR" for a diagonal matrix of AR variances. The lag order is the same as specified via BVAR_setup.

The invocation BVAR_alpha_prior(&mod, "Minn") recovers the Minnesota prior shape; a third, optional input can be supplied in the form of a bundle, intended for setting the hyperparameter values. The supported keys are p_mean for \(\textrm{E}(A_{1[ll]})\) in \({\underline{\alpha }}\) (default: 1), p_cov_p1 for \(\pi _1\) (default: 0.1), p_cov_p2 for \(\pi _2\) (default: 0.5), p_cov_p3 for \(\pi _3\) (default: 1) and p_cov_inter for \(\pi _4\) (default: 1000).
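Putting these pieces together, the customization for, e.g., the random-walk case with \(\pi _1 = 0.1\) and \(\pi _2 = 0.5\) can be sketched as follows, using only the keys documented above (unlisted hyperparameters keep their defaults):

```
BVAR_sigma_prior(&mod, "ADL")        # fixed Sigma from ADL(4) residual variances
bundle hyp = _(p_mean = 1, p_cov_p1 = 0.1, p_cov_p2 = 0.5)
BVAR_alpha_prior(&mod, "Minn", hyp)  # random-walk Minnesota prior
# white-noise variant: set p_mean = 0 in the bundle above
```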

Figure 3 reports the BVAR Cholesky-identified IRFs of inflation after an interest rate shock under the different prior hyperparameter setups: the dashed red line represents the posterior median, the blue one the posterior sample mean, and the shaded area is delimited by the 0.05 and 0.95 quantiles. Figure 4 illustrates the results obtained in MATLAB by means of the Kilian and Lütkepohl (2017) replication code (Footnote 8): the dashed red lines represent, respectively, the 0.05, 0.5 and 0.95 posterior quantiles, while the blue line is the posterior sample mean. As can be noted, the results are replicated in gretl.

Fig. 3: Structural impulse response of inflation to an interest rate shock. Bayesian VAR with fixed-\(\Sigma \): gretl script

Fig. 4: Structural impulse response of inflation to an interest rate shock. Bayesian VAR with fixed-\(\Sigma \): Kilian and Lütkepohl (2017) MATLAB script

3.2 Conjugate NIW

Assume now the conjugate NIW setup: Kilian and Lütkepohl (2017) consider \(\Sigma \sim IW(\textrm{I}_3, 4)\) and \(\alpha \vert \Sigma \sim N({\underline{\alpha }}, \Sigma \otimes \underline{{\underline{V}}})\). The expected value \({\underline{\alpha }}\) again follows either the random-walk scenario, with ones on the first own lag of each equation's endogenous variable, or the white-noise case, with zeros; \(\underline{{\underline{V}}}\) is defined via a ridge prior, \(\eta \textrm{I}_{13}\). Three possible values for \(\eta \) are analyzed:

  • \(\eta =0.01\);

  • \(\eta =1\);

  • \(\eta =100\).

under a common simulation design of 10000 Monte Carlo replications. The prior specification is obtained in gretl as follows:

Listing e

Fig. 5: Structural impulse response of inflation to an interest rate shock. Bayesian VAR with conjugate NIW: gretl script

Fig. 6: Structural impulse response of inflation to an interest rate shock. Bayesian VAR with conjugate NIW: Kilian and Lütkepohl (2017) MATLAB script

BVAR_sigma_prior with the "custom" modifier allows the user to directly express the scale and the degrees of freedom. The same modifier can be applied to \(\alpha \) with analogous consequences; note that \({\underline{\alpha }}\) has 39 elements, corresponding to the total number of VAR coefficients (param_tot), while \(\underline{{\underline{V}}}\) is a \(13 \times 13\) matrix (param_eq = 13, with m = 3, p = 4 and 1 intercept). Finally, it suffices to specify a vector of elements for both the scale of \(\Sigma \) and the variance of \(\alpha \), since diagonal solutions are employed. Figures 5 and 6 report the comparison between the two programs: gretl replicates the result exactly.
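For concreteness, the prior block of Listing e might be sketched as below. A caveat: the text documents the "custom" modifier but not its exact argument layout, so the bundle keys used here (scale, df, mean, var) are hypothetical stand-ins, as is the assumption that param_tot and param_eq are keys of the setup bundle:

```
scalar eta = 1    # ridge parameter; also 0.01 and 100 in the exercise
# hypothetical keys below: placeholders for the actual "custom" interface
BVAR_sigma_prior(&mod, "custom", _(scale = ones(3, 1), df = 4))
matrix am = zeros(mod.param_tot, 1)           # white-noise mean, 39 elements
BVAR_alpha_prior(&mod, "custom", _(mean = am, var = eta * ones(mod.param_eq, 1)))
```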

3.3 Independent NIW

In the independent NIW context, the \(\Sigma \)-specification exactly recalls the one used in the conjugate case, i.e., \(\Sigma \sim IW(\textrm{I}_3, 4)\); \(\alpha \) is again normal, with the random-walk/white-noise options for \({\underline{\alpha }}\). The previous \(\underline{{\underline{V}}}\) is replaced by \({\underline{V}} = \eta \textrm{I}_{39}\) with ridge parameter \(\eta \):

  • \(\eta =0.01\);

  • \(\eta =1\).

Since MCMC is employed, a burn-in period of \(10\%\) has been introduced. The gretl script is closely related to the previous one, but the dimensional mismatch introduced by \({\underline{V}}\) requires the following modification:

Listing f
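Under the same hypothetical key names as before, the modification amounts to supplying a variance vector covering all 39 coefficients (param_tot) rather than the 13 per-equation ones:

```
# variance of alpha now spans all coefficients: eta * I_39 in vector form
BVAR_alpha_prior(&mod, "custom", _(mean = am, var = eta * ones(mod.param_tot, 1)))
```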

The results of the replication are reported in Fig. 7 for gretl and in Fig. 8 for MATLAB. Again, the IRFs are successfully replicated.

Fig. 7: Structural impulse response of inflation to an interest rate shock. Bayesian VAR with independent NIW: gretl script

Fig. 8: Structural impulse response of inflation to an interest rate shock. Bayesian VAR with independent NIW: Kilian and Lütkepohl (2017) MATLAB script

4 Conclusion

This short paper has illustrated, via the replication of Kilian and Lütkepohl (2017), Chapter 5, the use of gretl in Bayesian VAR contexts, showing how the software can effectively be counted among the programs providing this kind of analysis. The available utilities and routines are still limited, but they can be equally valuable, especially for preliminary analyses or in a didactic context. Needless to say, additional functionalities and support will be provided in the future: packages addressing other aspects of Bayesian multivariate time series modeling (e.g., Bayesian structural VARs or time-varying VARs) are a must, and their interaction with the gretl ecosystem could bring enormous advantages to the whole software community.

5 Computational details

The replication exercise has been run on gretl 2023c and gretl 2024a on a macOS laptop, using the BVAR package, version 0.2.