Review of World Economics, Volume 146, Issue 2, pp 241–261

Rose effect and the euro: is the magic gone?

Author: Havránek, T.

Affiliations:

    • Economic Research and Financial Stability Department, Czech National Bank
    • Institute of Economic Studies, Charles University in Prague

Original Paper

DOI: 10.1007/s10290-010-0050-1

Cite this article as:
Havránek, T. Rev World Econ (2010) 146: 241. doi:10.1007/s10290-010-0050-1

Abstract

This paper presents an updated meta-analysis of the effect of currency unions on trade, focusing on the euro area. Using meta-regression methods such as the funnel asymmetry test, evidence of strong publication bias is found. The estimated underlying effect for currency unions other than the eurozone reaches more than 60%. However, according to the meta-regression analysis, the euro’s trade-promoting effect corrected for publication bias is insignificant. The Rose effect literature shows signs of the economics research cycle: the reported t-statistic is a quadratic concave function of the publication year. Explanatory meta-regression (robust fixed effects and random effects), which can explain about 70% of the heterogeneity in the literature, suggests that results published by some authors might consistently differ from the mainstream output and that study outcomes are systematically dependent on study design (usage of panel data, short- or long-run nature, number of countries in the data set).

Keywords

Rose effect · Trade · Currency union · Euro · Meta-analysis · Publication bias

JEL Classification

C42 · F15 · F33

1 Introduction

Most of the Rose effect literature treats currency unions as magic wands—one touch and intra-currency-union trade flows rise between 5 and 1,400%. The only question is: How big is the magic? (Baldwin 2006, p. 36)

Since the pioneering work of Rose (2000) and his result that currency unions increase trade by more than 200%, a whole new stream of literature has emerged and thrived, focusing especially on the eurozone in recent years. How much does the euro boost trade among eurozone members? While some researchers are rather skeptical about searching for “the one number” (e.g., Richard Baldwin, as the opening quotation suggests), others keep seeking: in a narrative literature review, Frankel (2008a) estimates the euro’s Rose effect to lie between 10 and 15%. Even Baldwin (2006, p. 48) himself talks about 5–10% and expects the effect to double as the euro matures. This question is very attractive for welfare economists and policy makers: for instance, Frankel (2008b) uses his estimates to advise Central and Eastern European countries on the timing of their admission to the eurozone; and Masson (2008), employing the result that “currency unions double trade,” assesses the welfare effects of creating a monetary union in Africa.

There has been one meta-analysis1 on this subject. Rose and Stanley (2005), using a combined sample of studies on both the eurozone and other currency unions, report the general underlying effect to lie between 30 and 90%. The purpose of this paper is to extend the aforementioned work by including new studies and different meta-analysis methods, which enables us to concentrate on the effects of the euro and other currency unions separately. It is shown that the distinction between euro and non-euro studies is important since the two sub-samples tell very different stories. Twenty-seven new studies were added to the sample, 21 of which focus on the eurozone. Together, there are 61 studies, 28 on the eurozone and 33 on other currency unions (see Table 4 in the Appendix). We examine publication bias in the literature (Card and Krueger 1995; Stanley 2005a), using the meta-regression approach (Stanley and Jarrell 1989; Stanley et al. 2008) and graphical methods (funnel plots, Galbraith plots); the “true” underlying effect is estimated as well. The meta-regression analysis (MRA) of Rose and Stanley (2005) is augmented with several different techniques (robust estimators, multilevel methods). Explanatory meta-regression methods, including robust meta-regression (see, for example, Bowland and Beghin 2001) and random effects meta-regression (Abreu et al. 2005), are used to examine systematic dependencies of results on study design and thus to model the heterogeneity present in the sample. Moreover, a test for the “economics research cycle” is conducted (novelty and fashion in economics research, see Goldfarb 1995).

The paper is structured as follows: in Sect. 2, the essence of meta-analysis is briefly described and the basic properties of the sample of literature are discussed. Section 3 focuses on publication selection and search for the true Rose effect beyond publication bias. In Sect. 4, the explanatory MRA is conducted. Section 5 concludes.

2 Combining the literature

Meta-analysis has its roots in psychology and epidemiology where it has been employed extensively in the last 3 decades (for a thorough introduction, see Borenstein et al. 2009). Originally, it was used to increase the number of observations and thus statistical power in those fields of medical research where experiments were extremely costly and scarce, or to estimate the “true” effect when the findings were seemingly mixed. Subsequently, this method spread to social sciences, including economics (beginning with Stanley and Jarrell 1989). The essence of meta-analysis is to use all available studies since even biased and misspecified results may carry useful information which can be decoded by the meta-regression approach. Omitting some empirical papers on the Rose effect ex ante, as Baldwin (2006) suggests in his narrative review, is thus, in our opinion, the opposite of what meta-analysts should do.

He [Richard Baldwin] thinks he knows which of the studies are good and which are bad [...], and wants only to count the good ones. The problem with this is that other authors have other opinions as to what is good and what is bad. (Frankel 2006, p. 83).

Fortunately, the meta-regression methods are able to cope with some degree of misspecification bias (Stanley 2008).
The “Rosean” stream of literature usually employs a variation of the following regression to estimate the trade effect of currency unions, the so-called gravity equation (for a detailed discussion and criticism, see Baldwin 2006):
$$ \log T_{ijt} = \alpha_0 + \gamma CU_{ijt} + \chi_1 (\log Y_i \log Y_j)_t + \chi_2 \log D_{ij} + \sum^{K}_{k=1}{\eta_k X_{ijt}} + \epsilon_{ijt}, $$
(1)
where \(T_{ijt}\) stands for the trade flow between two countries (i and j) in period t, \(CU_{ijt}\) is a dummy which equals one if both countries are engaged in a currency union in period t, \(Y\) denotes the real GDP, \(D_{ij}\) is the distance between the two countries, and \(X_{ijt}\) denotes other control variables. The actual percent boost to trade due to the formation of a monetary union is thus given by \(\gimel \doteq e^{\gamma}-1\).
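For illustration, the pooled eurozone estimate reported in Sect. 2, \(\widehat{\gamma} = 0.038\), translates into
$$ \widehat{\gimel} = e^{0.038} - 1 \approx 0.039, $$
that is, a trade boost of roughly 3.9%, whereas Rose’s (2000) result of “more than 200%” corresponds to \(\gamma > \log 3 \approx 1.1\).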

The meta-analysis process starts with the selection of literature to be included in the analysis. Some meta-analyses use all point estimates (for instance, Abreu et al. 2005); sometimes it is advised to use only one estimate from each study since otherwise a single researcher could easily dominate the survey (Stanley 2001, 2005b; Krueger 2003). Moreover, most researchers report many different specifications, starting with benchmarks. If all those estimates were included in the meta-analysis, the influence of benchmark cases would be highly exaggerated (although this can be partly addressed by multilevel data analysis or clustering). Researchers themselves also assign very different weights to the particular specifications. Therefore, while including all estimates would increase the degrees of freedom, we prefer selecting the representative specifications.2 The present paper builds on the data set provided by Rose and Stanley (2005), which covers a sample of results taken from 34 papers on the trade effect of currency unions. The data set, however, contains only 7 studies on the eurozone, which does not make it possible to estimate the euro’s effect separately. For this reason, an additional search was conducted mainly in the EconLit, RePEc, and Google Scholar databases, concentrating especially on new studies estimating the effect of the euro.3 All papers on the Rose effect containing a quantitative estimate of γ were included, both published and unpublished, extending the sample to a total of 61 studies, including 28 studies on the eurozone. The authors’ preferred estimates were selected; where no preference was expressed, the model with the best fit was chosen. However, most authors in this sample reveal their preferences concerning the “best” estimate directly in the abstract or conclusion.

It is generally recognized that the reported Rose effect of the euro is significantly lower than that of other currency unions taken as a whole (Micco et al. 2003; Frankel 2008a). Frankel (2008a) tests three possible explanations (the euro’s youth, the bigger size of eurozone economies compared to the average members of other monetary unions, and reverse causality for the earlier studies), but rejects them one by one. The low estimates of the euro’s trade effect thus remain a puzzle. For policy recommendations concerning the euro, in any case, only the estimates derived from the eurozone studies should be taken into account. The results of the non-euro papers, however, are useful as well: on the one hand, these studies can serve as a control group; on the other hand, the “true” general Rose effect of other currency unions can be extracted from them.

The eurozone sample is depicted in Fig. 1; this type of figure is usually called a “forest plot” in medical research. Black dots symbolize individual estimates of γ, and horizontal lines show the respective 95% confidence intervals. The traditional method of combining estimates taken from various studies is the standard fixed effects estimator4 which weights each observation according to its precision, i.e., by the inverse of its variance. The weights constructed on the basis of the inverse-variance method are symbolized by gray squares in the forest plot. The pooled effect estimated by fixed effects is plotted as a vertical dashed line; the solid vertical line symbolizes no effect. Using fixed effects, the pooled estimate of the euro’s γ is very low: a mere 0.038 (\(\widehat{\gimel}=3.87\%\)) with a 95% confidence interval of CI = (3.36, 4.39%), although it is highly significant (z-stat. = 14.9). These results are not very useful for policy purposes, though, because—among other things, starting with heterogeneity and high sensitivity to outliers—they do not account for likely publication selection; i.e., the preference of editors, referees, or researchers themselves for significant or non-negative results (more on this topic in Sect. 3).
Fig. 1  Forest plot of individual estimates of γ, eurozone studies
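To make the inverse-variance pooling explicit, the following minimal Python sketch computes the fixed effects pooled estimate, its z-statistic, and the implied percentage trade effect. The inputs gamma and se (study-level estimates and standard errors) are assumed to be supplied by the reader; this is only an illustration of the standard estimator, not the exact code behind Fig. 1.

```python
import numpy as np

def fixed_effects_pool(gamma, se):
    """Inverse-variance (fixed effects) pooling of study estimates of gamma."""
    gamma, se = np.asarray(gamma, float), np.asarray(se, float)
    w = 1.0 / se**2                                  # inverse-variance weights
    pooled = np.sum(w * gamma) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    z = pooled / pooled_se
    lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
    to_pct = lambda g: 100 * (np.exp(g) - 1)         # gimel = exp(gamma) - 1
    return pooled, z, (to_pct(lo), to_pct(hi))

# hypothetical usage:
# pooled_gamma, z_stat, ci_pct = fixed_effects_pool(gamma_estimates, standard_errors)
```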

The forest plot of the results of non-euro studies (Fig. 4 in the Appendix) shows a different picture. The pooled fixed effects estimate is far from zero, namely 0.67 (\(\widehat{\gimel} = 95.42\%\)) with a 95% confidence interval of CI = (88.89, 102.18%). Assuming that currency unions double trade, as, e.g., Masson (2008) does when he assesses the welfare effects of forming currency unions in Africa, thus might appear plausible in this respect.

Based on these simple statistics, there is no doubt that the estimates of the Rose effect of the euro and other currency unions are indeed immensely different and that it is not very appropriate to pool them together. However, more advanced methods are needed to assess the problem of publication selection and estimate the genuine underlying effect.

3 Publication bias and the true effect

In his thorough and influential review of the Rose effect literature, Richard Baldwin comments on the meta-analysis of Rose and Stanley (2005):

The meta-analysis statistical techniques are fascinating, but I don’t believe it adds to our knowledge since deep down they are basically a weighted average of all point estimates. (Baldwin 2006, p. 36).

While this statement—or at least its last sentence—may apply to the very simple meta-analysis performed in Sect. 2, it disregards the most important part of Rose and Stanley (2005) as well as of the present study: the MRA, filtering out the publication bias, and modelling the heterogeneity (the search for “the one number” is not the only task of a meta-analyst).

In this section, the MRA is employed to test for publication bias and to estimate the true underlying Rose effect. Publication selection can take the following two forms (Stanley 2005a):

Type I bias: This form of publication bias occurs when editors, referees, or authors prefer a particular direction of results. Negative estimates of γ, for instance, might be disregarded; it would seem quite strange if the common currency hampered trade among the monetary union’s members. The problem is that even if the true effect were positive, a certain percentage of studies (due to the nature of their data sets, the methods used, and the laws of probability) should report negative numbers. Otherwise, the average taken from the literature can greatly exaggerate the true effect. For instance, Stanley (2005a) shows how the price elasticity of water demand is exaggerated fourfold due to publication bias.

Type II bias: The second type of bias arises when statistically significant results are preferred; i.e., when editors choose “good stories” for publication. In this way, many questionable effects may be “discovered” and further supported by subsequent research when other authors are trying to produce significant results as well. Intra-industry spillovers from inward foreign direct investment might serve as an example (Görg and Greenaway 2004).

The presence of type I publication bias is usually investigated employing the so-called funnel plot, which shows the estimated effect against its precision (the inverse of its standard error, Egger et al. 1997). The essence of this visual test is that, in the case of no bias, the shape of the cloud of observations should resemble an inverted funnel; observations with high precision should be concentrated close to the true effect, while those with lower precision should be more dispersed. Above all, in the absence of type I publication bias, the funnel must be symmetric.
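For readers who wish to reproduce this diagnostic on their own data, a funnel plot is simply a scatter of the estimates against their precision. A minimal sketch (assuming arrays gamma and se of study estimates and standard errors; not the code used for Fig. 2) could look as follows:

```python
import matplotlib.pyplot as plt

def funnel_plot(gamma, se):
    """Scatter estimates of gamma against their precision (1/SE)."""
    precision = [1.0 / s for s in se]
    plt.scatter(gamma, precision, color="black", s=15)
    plt.axvline(0, color="gray")                 # vertical line of no effect
    plt.xlabel("estimate of gamma")
    plt.ylabel("precision (1/SE)")
    plt.show()
```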

In Fig. 2, the funnel plot for all 61 studies is presented. It shows a perfect example of strong publication bias. While positive estimates clearly form one half of a funnel, the left half is almost completely missing as there are only four non-positive estimates. The eurozone and non-euro studies taken separately resemble an inverted funnel even less. This test can be formalized using a simple MRA (Ashenfelter et al. 1999):
$$ \widehat{\gamma_i} = \beta + \beta_0 SE_i + \mu_{i}, \quad i=1,\ldots, M, $$
(2)
where M is the number of studies, β denotes the true effect, and β0 measures the magnitude of publication bias. However, regression (2) is evidently heteroskedastic. The measure of heteroskedasticity is the standard error of the estimate of γ, thus weighted least squares can be performed by running a simple OLS on equation (2) divided by the standard error:
$$ {\frac{\widehat{\gamma_i}} {SE_i}} = t_i = \beta_0 + \beta \left({\frac{1} {SE_i}}\right) + \vartheta_{i}. $$
(3)
Fig. 2  Funnel plot, all studies

The meta-response variable changes to the t-statistic corresponding to the estimate of γ taken from the i-th study. A simple t-test on the intercept of (3) is then a test for publication bias: the funnel asymmetry test (FAT). However, meta-analysis is more vulnerable to data contamination than other fields of empirical economics since it is necessary to choose representative estimates from the literature and collect all data manually. As a robustness check on the basic fixed effects meta-regression, we employ the iteratively re-weighted least squares method (IRLS), which moreover does not assume normality for hypothesis testing (Hamilton 2006, pp. 239–256). Robust methods in meta-analysis using IRLS are employed, e.g., by Bowland and Beghin (2001) or Krassoi-Peach and Stanley (2009). In the third specification, we allow for dependence between studies written by the same author; this multilevel approach follows Doucouliagos and Stanley (2009) and uses the restricted maximum likelihood method. In this case, the random intercept model (RIM, only the intercept differs across authors) is preferred over the random coefficients model (RCM, both the intercept and the coefficient for precision can differ) based on the likelihood ratio (LR) test: the corrected p-value of the test is 0.257, so the hypothesis that the RIM is adequate cannot be rejected.5
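A minimal Python sketch of the first two specifications, the WLS regression (3) and an IRLS robust check, is given below. Variable names are hypothetical, the multilevel RIM specification is omitted, and the sketch is only meant to illustrate the mechanics rather than replicate the exact estimates in Table 1.

```python
import numpy as np
import statsmodels.api as sm

def fat_pet(gamma, se):
    """FAT-PET: regress t-statistics on a constant and precision, eq. (3)."""
    gamma, se = np.asarray(gamma, float), np.asarray(se, float)
    t = gamma / se
    X = sm.add_constant(1.0 / se)                    # intercept + precision
    ols = sm.OLS(t, X).fit(cov_type="HC1")           # Huber-White robust t-statistics
    # intercept tests publication bias (FAT); slope tests the true effect (PET)
    irls = sm.RLM(t, X, M=sm.robust.norms.TukeyBiweight()).fit()  # robust check
    return ols, irls
```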

The results of all three tests for the eurozone studies are summarized in Table 1. In all specifications, the intercept is highly significant (t-statistics vary from 2.37 to 4.04). Therefore, the hypothesis of no type I publication bias has to be strongly and robustly rejected, which is all the more remarkable given that these tests are usually believed to have relatively low power (Stanley 2005a). The fact that they all reject the null hypothesis at the 5% level of significance implies that publication bias is, in our opinion, a serious problem for the literature on the euro’s Rose effect.
Table 1  Tests of publication bias and the true effect, eurozone studies

                     FAT-PET            ROBUST             RIM
  prec (effect)      0.000667 (0.05)    0.0265 (1.52)      0.00899 (0.90)
  Constant (bias)    3.755 (4.04)***    2.451 (2.37)**     3.517 (3.93)***
  Observations       28                 27                 28
  RMSE               3.169              3.141

Meta-response variable: tstat
t-statistics in parentheses (Huber–White heteroskedasticity-robust for FAT-PET)
FAT-PET: funnel asymmetry test–precision effect test (fixed effects)
ROBUST: iteratively re-weighted least squares version of fixed effects
RIM: random intercept model computed using restricted maximum likelihood
*** and ** denote significance at the level of 1 and 5%, respectively

Type II bias can be assessed using the Galbraith plot (Galbraith 1988), which plots the t-statistics corresponding to the estimates of γ (given an assumed true effect) against the precision of those estimates. If the “true” effect were really true and there were no type II publication bias (selection of papers due to significant results), only about 5% of the studies’ t-statistics should exceed 2 in absolute value and the cloud of observations should not form any systematic pattern. Figure 3 shows the Galbraith plot for the eurozone studies (Galbraith plots for all studies or for non-euro studies yield similar results). If the true effect were 0.05, 13 studies out of 28 would report significant results. The goodness of fit test easily rejects the hypothesis of the expected distribution [\(\chi^2_{(1)}=96, p<0.001\)]; the null hypothesis is rejected even more strongly when the true effect is taken to be 0 or 0.1. The t-statistics also show an apparent tendency to decline with rising precision. Therefore, type II bias is clearly present among the eurozone studies.
Fig. 3  Galbraith plot, eurozone studies
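The goodness-of-fit calculation behind Fig. 3 can be sketched as follows; the counts (13 significant studies out of 28, about 5% expected when the assumed true effect is 0.05) are taken from the text above, but the exact test variant used in the paper may differ, so the statistic need not match the reported value precisely.

```python
from scipy.stats import chisquare

n_studies = 28
observed = [13, n_studies - 13]                   # significant vs. insignificant (Fig. 3)
expected = [0.05 * n_studies, 0.95 * n_studies]   # about 5% should exceed |t| = 2
stat, p_value = chisquare(observed, f_exp=expected)
```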

All three methods of detecting type I bias (Table 1) can also be used to test for the significance of the true effect beyond publication bias [recall (2)]. Specifically, running a t-test on the slope coefficient of (3) is denoted as the precision effect test (PET). For the eurozone studies, the corresponding t-statistic is only 0.05. When the robust or random intercept versions of this test of effect are used, the result does not change significantly.6 This means that, employing the meta-regression methodology, there is not even a slight trace of any true underlying Rose effect of the euro beyond publication bias—compared to the 5–10% estimate by Baldwin (2006) and the 10–15% estimate by Frankel (2008a). Using meta-regression analysis and the sample of available empirical studies, there is therefore no significant aggregate effect of the euro on trade.

An obvious objection to this approach arises: if the Rose effect of the euro is growing over time (Bun and Klaassen 2002; Baldwin 2006), it is questionable how one can pool together studies written in 2002, when the euro was still young, and papers published, for example, in 2008. This is a potential problem for any meta-analysis. However, as can be seen from Sect. 4, explanatory meta-regression does not find any significant relation between the results of eurozone studies and time. Also, for instance, Frankel (2008a) concludes that the euro’s trade effect stabilized after its first few years.

Table 2 summarizes the tests of publication bias and the true effect for non-euro studies. Contrary to the previous case, the random coefficients model is preferred over the random intercept model (p-value of the LR test: 0.0009) and is reported in the table—this basically means that we allow publication bias and the effect to vary across researchers. It is apparent that publication bias is weaker than in the previous case; the intercept is significant according to the basic FAT, but not significant in the RCM. However, as already mentioned, these tests of publication bias are known to have relatively low power. It therefore seems that there is some evidence of publication bias among non-euro studies, although significantly weaker than among the eurozone studies. The difference between euro and non-euro studies is the most important finding in this respect—whereas papers on the eurozone are plagued by publication bias, the problem is much less serious for the rest of the literature.
Table 2  Tests of publication bias and the true effect, non-euro studies

                     FAT-PET            PEESE              RCM
  prec (effect)      0.534 (4.08)***    0.634 (9.83)***    0.583 (3.52)***
  SE (bias)                             3.567 (1.3)
  Constant (bias)    1.712 (2.21)**                        1.167 (1.33)
  Observations       33                 33                 33
  RMSE               3.234              3.320

Meta-response variable: tstat
t-statistics in parentheses (Huber–White heteroskedasticity-robust for FAT-PET and PEESE)
FAT-PET: funnel asymmetry test–precision effect test (fixed effects)
PEESE: precision effect estimate with standard error
RCM: random coefficients model computed using restricted maximum likelihood
*** and ** denote significance at the level of 1 and 5%, respectively

PET rejects the null hypothesis of no underlying effect of currency unions other than the euro at the 1% level of significance. There is a caveat, though: Stanley (2005b) uses Monte Carlo simulations to show that PET is reliable only if \(\sigma^2_\vartheta \leq 2\). Otherwise, the estimate might be exaggerated by misspecification biases. In this case, H0: \(\sigma^2_\vartheta \leq 2\) is rejected [\(\chi^2_{(32)}=162, p<0.001\)]. For this reason, we should employ caution when interpreting the magnitude of the effect, even though the result of PET is supported by its robust version and the random coefficients model. When the “true effect” passes the test for effect, which is the case here, Stanley and Doucouliagos (2007) recommend employing the so-called precision effect estimate with standard error (PEESE) to estimate the magnitude of the effect in question. Contrary to the precision effect test, PEESE assumes that publication bias is related to the variance (not the standard error) of the estimates of γ. The weighted least squares version thus yields:
$$ {\frac{\widehat{\gamma_i}} {SE_i}} = t_i = \delta_0 SE_i + \delta \left({\frac{1} {SE_i}}\right) + \phi_{i}. $$
(4)

PEESE estimates the true Rose effect of currency unions other than the euro to lie between 65 and 115% with 95% probability. The result is probably somewhat exaggerated by misspecification biases, though. We therefore consider this number consistent with the previous meta-analysis by Rose and Stanley (2005), who estimate the effect to lie between 30 and 90% (Rose and Stanley, however, also used a few eurozone studies in their predominantly non-euro sample).
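A minimal sketch of the PEESE regression (4), again with hypothetical inputs and without the robustness checks discussed above, could look as follows:

```python
import numpy as np
import statsmodels.api as sm

def peese(gamma, se):
    """PEESE: regress t-statistics on SE and precision, without a constant, eq. (4)."""
    gamma, se = np.asarray(gamma, float), np.asarray(se, float)
    t = gamma / se
    X = np.column_stack([se, 1.0 / se])        # delta_0 * SE + delta / SE
    res = sm.OLS(t, X).fit(cov_type="HC1")
    delta = res.params[1]                      # bias-corrected estimate of gamma
    return 100 * (np.exp(delta) - 1)           # implied trade effect in percent
```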

Figure 5 in the Appendix presents the funnel plot of all studies corrected for publication bias [using the filtered-effect test, details on which can be found in Stanley (2005a) or the working paper version of this article; observations with corrected |γ| > 1 are cut from the figure]. In contrast to Fig. 2, this funnel plot is clearly symmetric—this is what the literature should look like.

4 Explanatory meta-regression

MRA can also be employed to determine possible dependencies of study results on study design. In fact, this has been the primary focus of most economic meta-analyses since the pioneering work of Stanley and Jarrell (1989). Economics research is usually much more heterogeneous than research in epidemiology and psychology, where the meta-analysis approach was originally developed. In this respect, MRA is used to assign a pattern to the heterogeneity.

We gathered 18 meta-explanatory variables that reflect study design and social and other attributes of the authors (see Table 5 in the Appendix); 6 of the regressors are assumed to affect publication bias, while the remaining 12 are expected to influence the estimates of γ directly. The former include the researchers’ nationality, ranking, gender, the panel nature of the data, and the year of publication and its square. The latter cover dummies for specific authors, the short- or long-run nature of the study, eurozone data, postwar data, the number of countries and years in the data set, and the impact factor of the journal in which the particular study was published.

All meta-explanatory variables were chosen ex ante. We included the meta-explanatory variables used by Rose and Stanley (2005) and added some commonly used variables which are thought to influence publication selection (gender and nationality, for example; for a list of possible regressors affecting publication bias, see Stanley et al. 2008), as well as a few experimental regressors. For instance, the impact factor was included to ascertain whether articles published in leading journals produce significantly different results from unpublished papers. Inclusion of variable topfive (at least one co-author ranks among top 5% economists listed on RePEc) follows a similar logic.

In contrast to the previous sections, the focus now rests on the whole sample because more degrees of freedom are needed; heterogeneity is less problematic here since it can be modeled to a large extent. There are 61 observations, which is enough for an explanatory meta-regression since sample size in meta-analysis is substantially more effective in increasing the power of hypothesis testing than the sample size of the original studies (Koetse et al. 2010). We employ the FAT-PET method augmented to the following multivariate version (Stanley et al. 2008):
$$ {\frac{\widehat{\gamma_i}} {SE_i}} = t_i = \underbrace{\beta_0 + \sum^{J}_{j=1}{\theta_j S_{ji}}}_{{ {\hbox {bias}}}} + \underbrace{\tilde{\beta}}_{{ {\hbox {pseudo}}\;{TE}}} \left({\frac{1} {SE_i}}\right) + \underbrace{\sum^{K}_{k=1}{{\frac{\delta_k Z_{ki}} {SE_i}}}}_{{{\hbox {controls}}}} + \vartheta_{i}, $$
(5)
where Sj is a set of variables influencing publication bias and Zk is a set of variables affecting the estimates of γ directly. We refer to this estimator as fixed effects, even though in the strict sense it is not the traditional fixed effects estimator used in meta-analysis: note that variables Sj are not divided by the standard error.
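In practice, (5) can be estimated by ordinary least squares once the regressors are constructed accordingly: the bias variables Sj enter in levels, while the intercept and the control variables Zk are divided by the standard error. A minimal sketch with hypothetical inputs (not the exact code behind Table 3):

```python
import numpy as np
import statsmodels.api as sm

def multivariate_fat_pet(gamma, se, S, Z):
    """Estimate eq. (5): regress t_i on bias terms S, precision, and controls Z/SE."""
    gamma, se = np.asarray(gamma, float), np.asarray(se, float)
    S = np.asarray(S, float).reshape(len(se), -1)   # bias variables, in levels
    Z = np.asarray(Z, float).reshape(len(se), -1)   # controls, to be divided by SE
    t = gamma / se
    X = np.column_stack([
        np.ones_like(se),        # beta_0: baseline publication bias
        S,                       # S_j terms, not divided by SE
        1.0 / se,                # precision: coefficient is the pseudo true effect
        Z / se[:, None],         # Z_k terms, divided by SE
    ])
    return sm.OLS(t, X).fit(cov_type="HC1")          # Huber-White robust t-statistics
```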
Fixed effects estimates are summarized in column 1 of Table 3. As a robustness check, we employ the IRLS version of the model (column 2). The most insignificant meta-regressors are excluded one by one to obtain a model that contains only variables significant at least at the 10% level. After the insignificant variables were excluded, the “economics research cycle hypothesis”7 was tested by adding the year of publication and its square. The hypothesis corresponds to the joint significance of these variables and a concave shape of the relationship. In this case, F(2,48) = 3.84, p < 0.05 and the relationship is indeed concave, hence the economics research cycle hypothesis is supported for this type of literature. This becomes even more apparent when IRLS is used [F(2,48) = 6.74, p < 0.01]. On the other hand, the research cycle hypothesis is rejected when each group of literature is considered separately: F(2,23) = 1.56, p > 0.05 for non-euro studies and F(2,20) = 0.21, p > 0.05 for the eurozone studies; there is therefore no apparent dependence on time (recall that the result that estimates of the euro’s Rose effect do not significantly depend on time was used in Sect. 3). This might suggest that the research cycle identified in the whole literature emerges partly because of the higher proportion of eurozone papers among the new studies.
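The research-cycle test then amounts to a joint F-test on the year of publication and its square in the fitted model. With the formula interface of statsmodels this could be sketched as follows; the column names are hypothetical and the moderator list is abbreviated, so this is an illustration of the test rather than the specification reported in Table 3.

```python
import statsmodels.formula.api as smf

def research_cycle_test(df):
    """Joint F-test on publication year and its square (economics research cycle).

    df is a hypothetical per-study DataFrame containing tstat, the constructed
    regressors of eq. (5) (here abbreviated to prec and panel), and the
    publication year plus its square.
    """
    res = smf.ols("tstat ~ prec + panel + year + year2", data=df).fit(cov_type="HC1")
    f_res = res.f_test("year = 0, year2 = 0")    # joint significance of the cycle terms
    concave = res.params["year2"] < 0            # a concave shape supports the hypothesis
    return f_res, concave
```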
Table 3  Explanatory meta-regression analysis

                FIXED                 ROBUST                RANDOM
  prec          0.780 (6.16)***       0.842 (8.15)***
  panel         1.606 (2.07)**        1.864 (2.88)***       2.053 (4.67)***
  rose          0.462 (3.45)***       0.328 (4.06)***       0.452 (3.62)***
  nitsch        −0.145 (−4.11)***
  baldwin       −0.0814 (−5.48)***    −0.359 (−2.90)***
  denardis      −0.0410 (−2.10)**
  taglioni                            0.299 (2.42)**
  euro          −0.700 (−5.99)***     −0.779 (−8.39)***     −0.563 (−5.53)***
  shortrun      0.0349 (2.22)**       0.0391 (2.61)**
  countries     −0.00241 (−3.21)***   −0.00209 (−4.12)***   −0.00108 (−1.74)*
  impact        −0.0590 (−2.79)***    −0.0413 (−2.33)**
  year          1.178 (1.77)*         1.822 (3.61)***       0.145 (2.07)**
  year2         −0.0801 (−1.08)       −0.183 (−3.32)***     −0.0122 (−1.73)*
  Constant      −1.497 (−1.15)        −2.964 (−2.71)***     0.278 (1.45)
  Observations  61                    60                    61
  R2            0.725                 0.828
  τ                                                         0.0316

Meta-response variable: tstat for FIXED and ROBUST, gamma for RANDOM
ROBUST: iteratively re-weighted least squares version of FIXED
t-statistics in parentheses (Huber–White heteroskedasticity-robust for FIXED)
Variables prec, rose, nitsch, baldwin, denardis, taglioni, euro, shortrun, countries, and impact are assumed to influence the estimates of γ directly. Variables panel, year, and year2 are assumed to influence publication bias
***, **, and * denote significance at the level of 1, 5, and 10%, respectively

The regression described in column 1 of Table 3 is not very well specified, however. The condition number is high (75), indicating possible multicollinearity, and Ramsey’s RESET rejects its null hypothesis [F(3,45) = 4.42, p < 0.05]; only normality is not rejected [skewness-kurtosis test: χ2(2) = 1.36, p > 0.05]. Nevertheless, the model would pass all specification tests if the variables panel, year, and year2 were excluded. It is apparent that fixed effects MRA was able to model a significant portion of the heterogeneity inside the sample—note the high R2s: 0.73 and 0.83 for fixed effects and their robust version, respectively.8 Nevertheless, a lot of heterogeneity still remains unexplained. Testing H0: \(\sigma^2_\vartheta = 1\) (fixed effects MRA explains the heterogeneity well) yields χ2(60) = 276, p < 0.001; for column 1, therefore, H0 is rejected—the result is qualitatively the same for the robust specification.

When this is the case, random effects explanatory MRA might be preferable (see, e.g., Abreu et al. 2005):9
$$ \widehat{\gamma_i} = \iota_0 + \sum^{J}_{j=1}{\theta_j S_{ji} SE_i} + \sum^{K}_{k=1}{\delta_k Z_{ki}} + \lambda_i + \rho_i, $$
(6)
where \(\lambda_i\) stands for a normal disturbance term with standard deviation assumed equal to \(SE_i\), and \(\rho_i\) is a normal disturbance term with unknown variance \(\tau^2\) assumed equal across all studies. This between-study variance is estimated using the restricted maximum likelihood method; t-values are computed employing the Knapp and Hartung (2003) modification. The results of the random effects MRA are summarized in the third column of Table 3; far fewer explanatory variables are significant than in the previous two specifications.
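The paper estimates \(\tau^2\) by restricted maximum likelihood and uses the Knapp and Hartung (2003) adjustment for the t-values. As a simpler illustration of the same idea, the sketch below uses a method-of-moments (DerSimonian–Laird type) estimate of \(\tau^2\) followed by weighted least squares with weights \(1/(SE_i^2 + \tau^2)\); it is an approximation with hypothetical inputs, not the procedure behind the third column of Table 3.

```python
import numpy as np
import statsmodels.api as sm

def random_effects_mra(gamma, se, X):
    """Random effects meta-regression, eq. (6), with a moment estimator of tau^2.

    gamma, se: study estimates and standard errors; X: moderator matrix
    (a column of ones plus the S_j*SE_i and Z_k terms of eq. 6).
    """
    gamma, se = np.asarray(gamma, float), np.asarray(se, float)
    X = np.asarray(X, float).reshape(len(se), -1)
    n, k = X.shape
    w = 1.0 / se**2                                   # fixed-effects weights
    fe = sm.WLS(gamma, X, weights=w).fit()
    q = np.sum(w * fe.resid**2)                       # weighted residual sum of squares
    trace_p = np.sum(w) - np.trace(
        np.linalg.inv(X.T @ (w[:, None] * X)) @ (X.T @ (w[:, None] ** 2 * X))
    )
    tau2 = max(0.0, (q - (n - k)) / trace_p)          # between-study variance
    re = sm.WLS(gamma, X, weights=1.0 / (se**2 + tau2)).fit()
    return re, tau2
```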

It is clear from the conducted tests that explanatory meta-regression is as sensitive to method and specification changes as any other field of empirical research. The most important meta-explanatory variables are those found significant in all specifications of both the fixed and random effects meta-regressions (the sign of the effect on \(\widehat{\gamma}\) is shown in parentheses): studies on the eurozone (−), Rose’s co-authorship (+), number of countries in the data set (−), and usage of panel data (+). Some other variables are significant in both the fixed effects explanatory MRA and its robust version: short-run nature of the study (+), Baldwin’s co-authorship (−), and impact factor (−).

The negative sign for studies on the eurozone was expected and is in accordance with the results reported by Rose and Stanley (2005), as is the influence of the number of countries in the data set and of the usage of panel data. However, contrary to the previous meta-analysis, short-run studies are found to report higher trade effects. Two dummies for authorship were found consistently significant. This does not mean, though, that those authors produce tendentious results in any way; their results only seem to be significantly different from the “mainstream” output. According to the fixed effects meta-regression and its robust version, articles published in leading journals are likely to report marginally lower Rose effects. The latter finding is provocative but should be treated with caution since it is not confirmed by the random effects meta-regression.

5 Conclusion

The empirical literature on the trade effect of currency unions is heterogeneous to a large extent. Studies estimating the trade effect of the euro find, on average, much smaller effects than articles concentrating on other currency unions. The present meta-analysis shows that it is more appropriate to consider these two groups separately in the search for the underlying “true” effect.

Evidence of publication selection—i.e., a preference for statistically significant and positive results—is robust among papers on the eurozone and much stronger than for non-euro studies. Narrative literature reviews discussing the trade effect of the euro that do not take publication selection into account are hence vulnerable to a substantial upward bias. Meta-regression methods show that, beyond publication bias, there is a large and significant Rose effect of currency unions other than the euro, of more than 60%; but no effect at all for the euro area. The absence of an economically important true effect is so robust that even some possible mistakes in the process of choosing the authors’ preferred estimates could not significantly change the outcome.

Employing explanatory meta-regression, we can model about 70% of the heterogeneity in the “Rosean” literature. The authorship of a particular study is important: papers co-authored by Rose tend to find higher effects, while papers co-authored by Baldwin are more likely to report smaller estimates. Papers on the eurozone find significantly lower effects, as do long-run studies and studies with a high number of cross-sectional units in their data sets. When panel data are used, the study tends to report higher effects. Studies published in journals with a high impact factor are likely to find a rather smaller Rose effect; unpublished manuscripts are likely to report higher estimates. The Rose effect literature taken as a whole shows signs of the economics research cycle (Goldfarb 1995; Stanley et al. 2008): the reported t-statistic is a quadratic concave function of the publication year. One might note that the literature seems to have almost come full circle and the results, especially on the eurozone, are getting close to those from “before Rose,” when exchange rate volatility was believed to have little influence on international trade (McKenzie 1999).

The present author does not dare to argue that the euro has no effect on trade whatsoever. The effects may indeed vary from country to country and from industry to industry, as Baldwin (2006) suggests. At the very least, however, there is something not entirely right with the present Rosean literature applied to the eurozone. The degree of publication bias is striking, and the trade effect of the euro (at least based on the available empirical studies) is probably much lower than we believed, even if “what we believed” was already twentyfold less than what Rose reported in his famous article.

Footnotes
1

For an excellent introduction to the methodology of meta-analysis and its application in economics, see Stanley (2001).

 
2

There is an obvious trade-off between the representativeness and the robustness of the data: selecting representative estimates increases the threat of mistakes and data contamination. For this reason, we employ robust estimation methods wherever possible.

 
3

The exact search query used in RePEc was (((currency | monetary) + union) | euro) + trade + (effect | rose) + estimate, abstract search since 2002. The “old” Rose and Stanley (2005) data were updated—for example, many of the then working papers have been published in a journal since 2005 and their estimates might have slightly changed.

 
4

Note that “fixed” and “random” effects estimators in meta-analysis do not correspond to the standard use of these terms in panel data econometrics. For a more detailed explanation, see Abreu et al. (2005) and Sutton et al. (2000).

 
5

As Rabe-Hesketh and Skrondal (2008, p. 159) note, the LR test is conservative in this case and the correct p-value can be obtained by dividing the original LR p-value by 2.

 
6

Other robustness checks are available from the author upon request or in the working paper version of this article.

 
7

A predictable pattern of novelty and fashion in economics; initial path-breaking results are confirmed by other highly significant estimates, but as time passes, skeptical results become preferred (Goldfarb 1995; Stanley et al. 2008).

 
8

However, because these are weighted least squares versions of the original equation, R2s have to be recomputed to reflect the actual determination of the estimates of γ. For example, in the case of the robust specification, the corrected R2 reaches 0.68.

 
9

Monte Carlo experiments suggest that random effects MRA is preferable if heterogeneity is caused by non-constant effect size variance or differences in the true underlying effect across studies. However, when heterogeneity arises due to omitted variable bias—which is realistic in economics—fixed effects estimators should be relied upon (Koetse et al. 2010). For this reason, fixed effects MRA is interpreted here alongside random effects.

 

Acknowledgments

This work was supported by the IES Research Institutional Framework 2005–2010 (MSMT 0021620841). I thank Roman Horváth, Zuzana Iršová, Tom Stanley, Katerina Šmídková, and participants of the ETPM seminar at the Charles University in Prague for valuable comments. I am especially grateful to an anonymous referee of this journal for very useful suggestions that led to a substantial improvement in the quality and readability of the article. All remaining errors and omissions are mine. The views expressed are those of the author and do not necessarily reflect the views of the Czech National Bank.

Copyright information

© Kiel Institute 2010