Erratum to: Behav Res

DOI 10.3758/s13428-012-0289-7

We thank Ray Koopman (personal communication) for noticing a problem with our computation of the t-test for comparing two independent ordinary least squares (OLS) regression coefficients. The method we used to compute the standard error of the difference between $b_1$ and $b_2$ (Equation 12 in the original article) does not assume equal variances. Therefore, we should have used Satterthwaite degrees of freedom (see Eq. 1 in this document), just as one does when using the unequal-variances version of the independent-groups t-test (see, e.g., Howell, 2013).

$$ \mathrm{Satterthwaite}\;df=\frac{\left( s_{b_1}^2+s_{b_2}^2 \right)^2}{\dfrac{\left( s_{b_1}^2 \right)^2}{n_1-m-1}+\dfrac{\left( s_{b_2}^2 \right)^2}{n_2-m-1}} $$
(1)
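As a minimal sketch (not the authors' revised SPSS/SAS code), Eq. 1 and the separate-variances t statistic could be computed in Python as follows; the function names and argument order are our own:

```python
import math

def satterthwaite_df(se_b1, se_b2, n1, n2, m):
    """Satterthwaite degrees of freedom (Eq. 1) for the unequal-variances
    t-test comparing two independent OLS regression coefficients.
    se_b1, se_b2: standard errors of b1 and b2; n1, n2: the two sample
    sizes; m: common number of predictors, not counting the constant."""
    v1, v2 = se_b1 ** 2, se_b2 ** 2
    return (v1 + v2) ** 2 / (v1 ** 2 / (n1 - m - 1) + v2 ** 2 / (n2 - m - 1))

def t_unequal(b1, b2, se_b1, se_b2):
    """t statistic using the separate-variances standard error
    (Equation 12 of the original article)."""
    return (b1 - b2) / math.sqrt(se_b1 ** 2 + se_b2 ** 2)
```

Note that when the two standard errors and residual degrees of freedom are equal, Eq. 1 reduces to $2(n - m - 1)$, the same df the pooled test would give for equal sample sizes.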

We have now revised our SPSS and SAS programs to correct this problem. The revised programs also compute the pooled-variance version of this t-test. Users can indicate which version of the test they want by setting an indicator variable called Pool (Pool = 1 for the pooled-variance test, Pool = 0 for the unequal-variances test). For the pooled-variance test, the standard error is computed as shown in Eq. 2 (in this document), and the degrees of freedom are equal to $n_1+n_2-2m-2$ (where $n_1$ and $n_2$ are the two sample sizes and $m$ is the common number of predictor variables, not including the constant). $MSE_1$ and $MSE_2$ are the MS error (or MS residual) terms from the two regression models, and $MSE_{pooled}$ is computed as shown in Eq. 3.

$$ s_{b_1-b_2}=\sqrt{MSE_{pooled}\left( \frac{s_{b_1}^2}{MSE_1}+\frac{s_{b_2}^2}{MSE_2} \right)} $$
(2)
$$ MSE_{pooled}=\frac{\left( n_1-m-1 \right)MSE_1+\left( n_2-m-1 \right)MSE_2}{n_1+n_2-2m-2} $$
(3)
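Eqs. 2 and 3 can likewise be sketched in a few lines of Python (again, a sketch under our own naming conventions, not the authors' program code):

```python
import math

def pooled_se_and_df(se_b1, se_b2, mse1, mse2, n1, n2, m):
    """Pooled-variance standard error of b1 - b2 (Eqs. 2 and 3) and its
    degrees of freedom, n1 + n2 - 2m - 2.
    se_b1, se_b2: standard errors of the two coefficients;
    mse1, mse2: MS error (MS residual) from the two regression models;
    n1, n2: sample sizes; m: number of predictors (excluding constant)."""
    df = n1 + n2 - 2 * m - 2
    # Eq. 3: weight each MSE by its residual degrees of freedom.
    mse_pooled = ((n1 - m - 1) * mse1 + (n2 - m - 1) * mse2) / df
    # Eq. 2: rescale each squared SE by the ratio MSE_pooled / MSE_i.
    se_diff = math.sqrt(mse_pooled * (se_b1 ** 2 / mse1 + se_b2 ** 2 / mse2))
    return se_diff, df
```

When $MSE_1 = MSE_2$, the pooled standard error reduces to $\sqrt{s_{b_1}^2+s_{b_2}^2}$, the same value used by the unequal-variances test.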

Note that it is the pooled-variances version of this t-test that corresponds to the Potthoff (1966) analysis carried out on the raw data. In the original article, we compared the results of the t-test for comparing two independent OLS regression coefficients to results from the Potthoff analysis, and reported that the two sets of results were very similar, differing only because of rounding error. In hindsight, they differed both because of rounding error and because we were not using the pooled-variance estimate of the standard error. When we repeat those comparisons using the corrected pooled-variance t-test, the results match more closely, now differing only because of rounding error.

Koopman also suggested that we could have used Steiger’s (1980) modification of the PF and ZPF tests for comparing two non-independent correlations with no variables in common (Equations 18 and 19 in the original article). When computing the standard errors for those tests, Steiger suggested replacing $r_{12}$ and $r_{34}$, the correlations that are assumed to be equal under the null hypothesis, with their average. (Note that this is also done when computing $k$, which is used in both Equations 18 and 19.) According to Steiger, this method yields an “improvement in Type I error rate control” (p. 247). Accordingly, we have modified our programs to compute Steiger’s modified tests in addition to the original versions. Users can choose the version they wish by setting an indicator variable called Steiger (Steiger = 0 for the original versions of the PF and ZPF tests, Steiger = 1 for Steiger’s modified versions).

Finally, Koopman noted that it is not necessary to take the absolute value in Eq. 2 of the original article, as $(1+r)/(1-r)$ cannot be negative. Therefore, Eq. 2 should have read as follows:

$$ r' = 0.5\,\log_e\left( \frac{1+r}{1-r} \right) $$
(4)
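Eq. 4 is the familiar Fisher r-to-z transformation; as a one-line illustration (ours, not the authors' code), it is equivalent to the inverse hyperbolic tangent:

```python
import math

def fisher_r_to_z(r):
    """Fisher's r-to-z transformation (Eq. 4): r' = 0.5 * ln((1+r)/(1-r)).
    No absolute value is needed: (1+r)/(1-r) >= 0 for all -1 < r < 1.
    Mathematically identical to math.atanh(r)."""
    return 0.5 * math.log((1 + r) / (1 - r))
```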

Corrected versions of the relevant programs can be downloaded from the authors’ websites (https://sites.google.com/a/lakeheadu.ca/bweaver/Home/statistics/spss/my-spss-page/weaver_wuensch for SPSS syntax files, and http://core.ecu.edu/psyc/wuenschk/W&W/W&W-SAS.htm for SAS code).