Economic forecasting: editors’ introduction
This special issue of Empirical Economics contains, with some exceptions, papers presented at the First Vienna Workshop on Economic Forecasting, held at the Institute for Advanced Studies in Vienna in February 2018. The two-day workshop, organized by Robert M. Kunst and Martin Wagner, drew much more attention than originally expected. This reflects the increased, or regained, importance of forecasting, not only in practical terms but also as a research topic in the underlying scientific disciplines. Much of this growing interest may also be rooted in the increased importance of forecasting in fields such as management science, marketing, or supply chain management, and may well be driven by methodological developments in several disciplines that could be summarized under labels such as big data and machine learning; the workshop itself had a narrower focus on macroeconomic forecasting. Even with this specific focus on macroeconomics, the papers span a wide portfolio of approaches and applications, ranging from statistical theory to data-driven research. As noted above, some of the contributions in this special volume are not related to the workshop, as the submission of manuscripts was open.
The first article in the issue, “Assessing nowcast accuracy of US GDP growth in real time: the role of booms and busts” by Boriss Siliverstovs, is representative of the focus on nowcasting that has become very prominent over the most recent decade. This focus naturally responds to the urge of the public as well as of policy-makers to at least be able to assess the current situation, despite the delays inherent in data production and data revision, for example when relying upon national accounts data. Siliverstovs takes up a Bayesian mixed-frequency model from Carriero et al. (2015) and evaluates it with regard to point and also density forecasts for US macroeconomic data. He finds that the forecast performance of the investigated model relative to a simple univariate benchmark depends critically on the phase of the business cycle: in expansions, the sophisticated model is hardly better than the benchmark; in recessions, however, it outperforms the benchmark substantially.
Similarly, Katja Heinisch, João Claudio, and Oliver Holtemöller focus on nowcasting in “Nowcasting East German GDP Growth: A MIDAS Approach.” The MIDAS approach, where MIDAS is an acronym for “mixed data sampling,” was introduced by Ghysels et al. (2004, 2006a, b) in a series of papers. MIDAS has gained prominence in the forecasting literature because it allows modeling data sampled at different frequencies. This is clearly important for macroeconomic forecasting, where low-frequency macro-aggregates, typically available at quarterly frequency, coexist with variables sampled at monthly or even higher frequencies. At the beginning of the MIDAS literature, so-called Almon lags became prominent again, but more recent approaches remove the corresponding coefficient restrictions, for example the U(nrestricted)-MIDAS approach of Foroni et al. (2015). Heinisch et al. present an application of MIDAS and U-MIDAS to forecasting economic activity in East Germany. They find that the MIDAS-based forecasts, in either variant, outperform forecasts obtained from more traditional single-frequency time series models.
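The core MIDAS idea can be sketched in a few lines: a long vector of high-frequency lags is collapsed into a single low-frequency regressor through a parsimonious weight function, classically the exponential Almon lag. The sketch below is a minimal illustration of that weighting step, not the estimation procedure of Heinisch et al.; the parameter values are made up.

```python
import numpy as np

def exp_almon_weights(theta1, theta2, n_lags):
    """Exponential Almon lag weights used in MIDAS regressions:
    w_k proportional to exp(theta1*k + theta2*k**2), normalized to one."""
    k = np.arange(1, n_lags + 1)
    w = np.exp(theta1 * k + theta2 * k ** 2)
    return w / w.sum()

def midas_aggregate(x_monthly, theta1, theta2, n_lags):
    """Collapse the n_lags most recent monthly observations into a
    single quarterly regressor using the Almon weights."""
    w = exp_almon_weights(theta1, theta2, n_lags)
    return w @ x_monthly[-n_lags:]

# Toy example: twelve monthly lags, weights decaying with lag length,
# so recent months dominate the quarterly regressor.
w = exp_almon_weights(0.1, -0.05, 12)
```

In a full MIDAS regression the two theta parameters are estimated jointly with the slope by nonlinear least squares; U-MIDAS instead estimates one free coefficient per lag, dropping the Almon restriction.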
Forecasting is a discipline that quickly absorbs innovations from data science and statistics. A good example of this process is the contribution of Paolo Fornaro and Henri Luomaranta on “Nowcasting Finnish real economic activity: a machine learning approach.” Big data, in this application large micro-level datasets on the Finnish economy, require novel tools, such as boosting and tree-based machine learning methods. Fornaro and Luomaranta demonstrate that their machine learning methods deliver forecasts of the same quality as existing nowcasts, while being available much earlier. Obtaining nowcasts a month earlier than with existing rival procedures is a relevant achievement in this market segment.
Christian Glocker and Philipp Wegmueller contribute “Forecasting and recession dating using real-time Swiss GDP data.” They consider a small-scale dynamic factor model that integrates nonlinearities via a two-state Markov chain construction. Glocker and Wegmueller thereby allow for the possibility that policy realignments may lead to regime switches. They demonstrate that their approach dominates rival benchmarks based on expert judgment, both with respect to forecasting performance and with respect to dating recessionary episodes.
Ines Fortin, Sebastian P. Koch, and Klaus Weyerstrass provide an “Evaluation of economic forecasts for Austria.” They present a very detailed comparison of the forecast accuracy achieved by two major Austrian institutions forecasting the Austrian economy on a quarterly basis. Fortin et al. also compare the two domestically produced forecasts with the forecasts of the European Commission. They corroborate the received wisdom that forecasts from rival institutions tend to be rather similar, as they draw on the same information and are subject to the same assessment by their customers and the media. For specific aspects and variables, they find a slight performance advantage for their home institution, the Institute for Advanced Studies. Altogether, it is comforting to learn that forecast accuracy has improved over time.
Chris Heaton, Natalia Ponomareva, and Qin Zhang take up the challenge of macroeconomic forecasting for the Chinese economy in their contribution “Forecasting models for the Chinese macroeconomy: The simpler the better?”. They are mainly interested in whether forecast models that work well for developed market economies also perform well for China, a unique mixed and fast-developing economy. The slightly sobering news is that simple benchmark forecasts can be outperformed significantly only for some variables, such as inflation.
Simon Lineu Umbach contributes “Forecasting with supervised factor models,” an approach that becomes interesting in data-rich environments with vast sets of potentially useful predictor variables. More specifically, he takes up the PCovR (principal covariate regression) technique of de Jong and Kiers (1992) and applies it to US data. The PCovR models are a special type of supervised factor model, a construction that bridges traditional principal components and profligate least squares via a supervision parameter θ. Such supervision parameters are typically set ex ante to ensure that the resulting supervised model stays close to principal components. Umbach suggests determining the supervision parameter empirically. Generally, he finds supervised factor models to dominate simple principal components in terms of resultant forecasting performance.
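The bridging role of the supervision parameter can be made concrete with a stylized sketch. In the variant below (a simplification, not Umbach's or de Jong and Kiers's exact algorithm), the components are the leading eigenvectors of a convex combination of the predictor Gram matrix and the least-squares fit of the target; theta = 1 collapses to ordinary principal components, while theta = 0 leans fully toward fitting y. All data are synthetic.

```python
import numpy as np

def pcovr_components(X, y, n_comp, theta):
    """Stylized PCovR: component scores are top eigenvectors of a
    weighted combination of X's Gram matrix and the projection of y
    onto the column space of X (theta is the supervision parameter)."""
    H = X @ np.linalg.pinv(X)          # hat matrix, projects onto col(X)
    y_hat = H @ y                      # y as fitted by least squares
    G = (theta * (X @ X.T) / np.linalg.norm(X, 'fro') ** 2
         + (1 - theta) * np.outer(y_hat, y_hat) / (y @ y))
    vals, vecs = np.linalg.eigh(G)
    return vecs[:, np.argsort(vals)[::-1][:n_comp]]  # scores T

# Synthetic data: 50 observations, 10 predictors, y driven by the first.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
y = X[:, 0] + 0.1 * rng.standard_normal(50)
T = pcovr_components(X, y, n_comp=2, theta=0.5)
beta, *_ = np.linalg.lstsq(T, y, rcond=None)  # forecast regression on scores
```

Forecasts are then formed by regressing the target on the extracted scores, exactly as with unsupervised principal-component regression; the only change is how the components are chosen.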
Whereas most macroeconomic forecasts in this issue are in fact short-run projections related to business cycle chronology, Marek Chudy, Sayar Karmakar, and Wei Biao Wu target the long run in “Long-term prediction intervals of economic time series.” They build on Zhou et al. (2010) and show that an adjusted variant of that technique leads to considerable improvements in coverage probability. Chudy et al. compare their approach with two competing methods using both Monte Carlo simulation and a set of US and Japanese economic time series.
Guido Schultefrankenfeld studies the dynamics of decision making for individual and collective forecasts in the article “Appropriate monetary policy and forecast disagreement at the FOMC.” The Federal Open Market Committee is the main decision-making body of the US Federal Reserve System. Schultefrankenfeld finds that dissenting views of the committee members on the appropriate monetary policy translate into a parallel dissent in forecasts, in the sense that those who favor a more restrictive stance tend to emphasize the risk of inflation in their forecasts. The effect is measured by the sign of a statistic called “the skew” that summarizes the relative stance of members compared to the implemented policy rate and thus accounts for the direction of dissent. The author refrains, however, from concluding that this effect may reflect strategic action by committee members.
Gabe Jacob de Bondt, Arne Gieseck, and Zivite Zekaite consider “Thick modelling income and wealth effects: a forecast application to euro area private consumption.” Thick modeling refers to a combination approach that may be of interest in applications where no element of a set of rival forecasting models is likely to be the correct model or, put differently, where each of the models offers some distinctive information that can be useful for prediction. Household consumption is the largest component of aggregate demand, and its forecasts are valuable for any short-run outlook on the euro-area business cycle. The authors study in particular decompositions of household income and wealth into wage and non-wage parts.
Whereas most authors target point forecasts, Marcus Paul Andrew Cobb is concerned with “Aggregate density forecasting from disaggregate components using Bayesian VARs.” The approach is applied to GDP and CPI data from several European countries. Particularly during crises or other episodes that see an unusually strong interaction between components, the advantages of the Bayesian procedure become most obvious.
Nima Nonejad contributes another article that transcends point forecasts, “Does the price of crude oil help predict the conditional distribution of aggregate equity return?”. The forecast target is a quarterly series of index returns over the long period from 1859 to 2017. The author’s toolbox includes, on top of an oil price series, several nonlinear constructions separating negative and positive increments in variables or ranges to extrema. The preferred method is Bayesian modeling. Using this toolbox, some benefits for specific aspects of the target series are reported, although univariate models once again turn out to be quite difficult to outperform.
The South African researchers Rangan Gupta, Hylton Hollander, and Rudi Steinbach contribute “Forecasting Output Growth using a DSGE-Based Decomposition of the South African Yield Curve.” Gupta et al. implement a Bayesian estimation procedure for their DSGE model of the South African economy. They demonstrate that, in contrast to findings for other economies, the term spread proper fails to improve the real growth-rate forecast, but that the latent expected term spread, a result of a yield spread decomposition, is valuable for forecasting the real growth rate.
Empirical Economics is committed to a special emphasis on replication studies, as these tend to be overlooked by other publication outlets. In this issue, Byeong U. Park, Leopold Simar, and Valentin Zelenyuk contribute “Forecasting of recessions via dynamic probit for time series: replication and extension of Kauppi and Saikkonen (2008).” The Kauppi–Saikkonen approach applies dynamic probit analysis to US data, with the aim of forecasting US recessions. The response target is a binary indicator of recession versus expansion, and the main predictor is once more a lagged interest rate spread. Park et al. first replicate and corroborate the Kauppi–Saikkonen paper and then contrast it with a nonparametric technique that yields results similar to the parametric probit model.
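The dynamic probit mechanics can be illustrated in a few lines: the probability of a recession next period is the standard normal CDF of an index that mixes the lagged recession indicator with the lagged term spread. The coefficients below are illustrative placeholders, not estimates from Kauppi and Saikkonen or from Park et al.

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def dynamic_probit_prob(omega, alpha, beta, y_prev, spread_prev):
    """One-step recession probability in a dynamic probit of the
    Kauppi-Saikkonen type: the index combines the lagged binary
    recession indicator y_prev with the lagged term spread."""
    return phi(omega + alpha * y_prev + beta * spread_prev)

# An inverted yield curve (negative spread) during an expansion raises
# the recession probability above its unconditional baseline.
p = dynamic_probit_prob(omega=-1.2, alpha=1.5, beta=-0.8,
                        y_prev=0, spread_prev=-1.0)
```

The dynamic term alpha * y_prev is what distinguishes this model from a static probit: recessions are persistent, so last period's state carries predictive content on top of the spread.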
The widespread interest in economic forecasting documented by the workshop and the associated special volume has led us to organize the Second Vienna Workshop on Economic Forecasting, to be held again at the Institute for Advanced Studies, Vienna, on June 4–5, 2020.
- Ghysels E, Santa-Clara P, Valkanov R (2004) The MIDAS touch: mixed data sampling regression models. CIRANO working papers 2004s-20, CIRANO, Montreal, Canada