# On the statistical performance of Granger-causal connectivity estimators


## Abstract

In this article, we extend the statistical detection performance evaluation of linear connectivity begun in Sameshima et al. (in: Slezak et al. (eds.) Lecture Notes in Computer Science, 2014). Using new Monte Carlo simulations of three widely used toy models under different data record lengths, we compare a classic time-domain multivariate Granger causality test, information partial directed coherence, and the information directed transfer function, and additionally include conditional multivariate Granger causality, whose behaviour was found to be *anomalous*.

## Keywords

Partial directed coherence · Directed transfer function · Granger causality · Null hypothesis test performance · Conditional multivariate Granger causality

## 1 Introduction

This paper compares the statistical performance of linear connectivity detection [1] using four popular neural connectivity estimators. In addition to the classic Granger causality test (GCT) from [2], we employ our recently derived rigorous results [3, 4] about the asymptotic behaviour of information PDC (*i*PDC) and information DTF (*i*DTF) [5], which, respectively, generalize partial directed coherence (PDC) [6] and the directed transfer function (DTF) [7] and correctly address coupling effect size. A fourth method, included in this extended version of [8], is the proposal put forward by [9] (*c*MVGC) for detecting conditional Granger causality between time series pairs; it is applied here using their published MVGC package. There have been many recent papers [10, 11, 12, 13, 14] aimed at comparing contending connectivity estimation procedures. In fact, almost every new connectivity estimation procedure sports some form of appraisal by counting the number of correct detection decisions. What sets the present effort apart, as emphasized by using the word *statistical*, is that we focus on methods that have rigorous, theoretically derived asymptotic detection criteria.

In the comparisons, we used Monte Carlo simulations of three widely used toy models from the literature and verified the performance of null connectivity hypothesis rejection as a function of data record length, *K*. To complement the study, we also computed false positive (FP) and false negative (FN) test rates for each estimator alternative. In the MVGC package case, false detection rates were computed with and without the author-recommended corrections [9].
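The FP/FN bookkeeping described above can be sketched as follows. This is a minimal Python illustration of the rate computation (not the authors' MATLAB code); the helper `detection_rates` and its matrix conventions are our own assumptions:

```python
import numpy as np

def detection_rates(detected, truth):
    """Compare a boolean detection matrix against the ground-truth
    adjacency of a simulated model (hypothetical helper; the diagonal,
    i.e. self-connections, is ignored).

    detected[i, j] / truth[i, j]: True if a connection j -> i is
    flagged / actually present.
    Returns (false_positive_rate, false_negative_rate)."""
    detected = np.asarray(detected, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    off = ~np.eye(truth.shape[0], dtype=bool)  # off-diagonal entries only
    fp = np.sum(detected & ~truth & off)       # flagged but absent
    fn = np.sum(~detected & truth & off)       # missed but present
    n_absent = np.sum(~truth & off)
    n_present = np.sum(truth & off)
    fp_rate = fp / n_absent if n_absent else 0.0
    fn_rate = fn / n_present if n_present else 0.0
    return fp_rate, fn_rate
```

Averaging these rates over the 1000 Monte Carlo repetitions per data record length yields the per-connection FP/FN figures reported below.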

## 2 Methods and results

### 2.1 Monte Carlo simulations

Following our recently proposed information PDC and information DTF [5] and their corresponding rigorous asymptotic statistics (see [3] and [4] for details), we first gauged their statistical performance against that of the well-established time-domain GCT [2]. For added comparison, we also included results from conditional connectivity detection obtained via the MVGC package [9]. Monte Carlo simulations were performed in the MATLAB environment using its normally distributed pseudorandom number generator to simulate systems with uncorrelated, zero-mean, unit-variance innovation noise as model inputs. To test the performance of these four connectivity estimators, for each toy model we selected data record lengths *K* = {100, 200, 500, 1000, 2000, 5000, 10000}, repeating 1000 simulations for each case. For each simulation, a burn-in set of 5000 initial data points was discarded to eliminate possible transients before selecting the *K* values of interest. We used the Nuttall–Strand algorithm for multivariate autoregressive (MAR) model estimation and the Akaike information criterion (AIC) for model order selection [15] for GCT, *i*PDC and *i*DTF, while the Levinson–Wiggins–Robinson solution of the multivariate Yule–Walker equations was used, as is the default, for *c*MVGC [9]. The detection threshold was set in compliance with \(\alpha =1\,\%\). For *i*PDC and *i*DTF, *p* values were computed at 32 uniformly spaced normalized frequency points covering the whole interval, with a connection deemed detected for a given pair of structures if its *p* value was below \(\alpha\) at some frequency within the interval. This connectivity decision criterion is somewhat lax and tends to overestimate the presence of connectivity for *i*PDC and *i*DTF; in particular, *i*PDC should be expected to detect connectivity more often than GCT, i.e. more FPs are likely.
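The record-generation step of this protocol can be sketched in a few lines. The following Python fragment (standing in for the authors' MATLAB environment; the bivariate coefficient matrix is illustrative and is not one of the paper's toy models, and model order selection via AIC is omitted) draws unit-variance Gaussian innovations and discards a burn-in segment before keeping the *K* samples of interest:

```python
import numpy as np

def simulate_var(A, K, burn_in=5000, rng=None):
    """Simulate x[t] = sum_k A[k] @ x[t-1-k] + w[t] with w ~ N(0, I),
    discarding `burn_in` initial samples to remove transients.
    A: array of shape (p, n, n) holding the MAR coefficient matrices."""
    rng = np.random.default_rng() if rng is None else rng
    p, n, _ = A.shape
    x = np.zeros((burn_in + K, n))
    for t in range(p, burn_in + K):
        for k in range(p):
            x[t] += A[k] @ x[t - 1 - k]
        x[t] += rng.standard_normal(n)  # unit-variance innovation noise
    return x[burn_in:]                  # keep only the last K samples

# illustrative stable bivariate VAR(1): x2 drives x1
A = np.array([[[0.5, 0.3],
               [0.0, 0.5]]])
x = simulate_var(A, K=2000, rng=np.random.default_rng(0))
```

Repeating such a simulation 1000 times per value of *K*, and fitting a MAR model to each record, reproduces the shape of the experiment described above.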

The reader may access our open MATLAB codes for GCT and for both *i*PDC and *i*DTF asymptotic statistics used in this study at http://www.lcs.poli.usp.br/~baccala/BIHExtension2014/.

The Web site, furthermore, contains the datasets of the employed simulation results and a copy of the exact version of the MVGC package used in the present comparisons [9]. This allows full disclosure of the data and procedures, so readers can cross-check and replay all results. Additional graphs and results are available there and may be consulted for details; only the overall representative behaviour is summed up here.

Next we describe the toy models and the associated simulation results.

### 2.2 Model 1: Closed-loop model

Results from a single-trial example of *i*DTF and *i*PDC connectivity estimations in the frequency domain are depicted in Fig. 2a, b, respectively, with significant values, at \(\alpha =0.01\), represented by red solid lines. The corresponding connectivity graph diagrams are contained in Fig. 2c, d, where arrow thickness represents estimate magnitude. Note that *i*PDC reflects adjacent connections, Fig. 2b, d, while *i*DTF, Fig. 2a, c, represents graph reachability aspects of the directed structure [17, 18]. The notion of reachability refers to the net influence from a time series onto another through various signal pathways, i.e. it measures how much of one series ends up influencing another.
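The adjacency-versus-reachability distinction is visible directly in how the two measures are built from the fitted MAR coefficients: PDC uses columns of \(\bar{A}(f) = I - \sum_k A_k e^{-i2\pi f k}\), while DTF uses rows of its inverse \(H(f)=\bar{A}(f)^{-1}\). A minimal numpy sketch of the original (not the information-theoretic) variants, with normalization conventions as in [6, 7]:

```python
import numpy as np

def pdc_dtf(A, f):
    """Original PDC and DTF at normalized frequency f (0 <= f <= 0.5)
    from MAR coefficients A of shape (p, n, n).
    Returns (pdc, dtf), each n x n, entry [i, j] = influence j -> i."""
    p, n, _ = A.shape
    Abar = np.eye(n, dtype=complex)
    for k in range(p):
        Abar -= A[k] * np.exp(-2j * np.pi * f * (k + 1))
    H = np.linalg.inv(Abar)
    # PDC: column-normalized |Abar|; DTF: row-normalized |H|
    pdc = np.abs(Abar) / np.sqrt((np.abs(Abar) ** 2).sum(axis=0, keepdims=True))
    dtf = np.abs(H) / np.sqrt((np.abs(H) ** 2).sum(axis=1, keepdims=True))
    return pdc, dtf

# illustrative chain x3 -> x2 -> x1 with no direct x3 -> x1 link
A = np.array([[[0.5, 0.4, 0.0],
               [0.0, 0.5, 0.4],
               [0.0, 0.0, 0.5]]])
pdc, dtf = pdc_dtf(A, f=0.1)
# pdc[0, 2] vanishes (no direct link), yet dtf[0, 2] does not: x3
# reaches x1 through x2, which is exactly the reachability notion above.
```

The chain example shows why the two estimators answer different questions: PDC flags only the adjacent links, while DTF also flags the indirect pathway.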

#### 2.2.1 Granger causality test for Model 1

Boxplots of \(-\log_{10}\)(*p* value) in Fig. 3 summarize GCT performance for Model 1 and *K* = {100, 200, 500, 1000, 2000, 5000, 10000} data record lengths. As expected, for *K* > 200, it properly detects connectivity between adjacent structures, with zero observed FNs for all pairs of existing connections.
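The time-domain test evaluated here amounts to comparing nested VAR fits. The following is a textbook sketch of that nested-regression form in Python (not the authors' MATLAB implementation; `granger_stat` and its arguments are our own naming): under the null of no connection, \(K\ln(\mathrm{RSS}_\text{restricted}/\mathrm{RSS}_\text{full})\) is asymptotically \(\chi^2\) with *p* degrees of freedom [2].

```python
import numpy as np

def granger_stat(x, src, dst, p):
    """Time-domain Granger causality statistic for src -> dst in a
    multivariate record x of shape (K, n), via nested least-squares
    VAR(p) fits. Asymptotically chi-square with p dof under the null."""
    K, n = x.shape
    y = x[p:, dst]
    # full regressors: p lags of every channel (lag-k block at columns
    # (k-1)*n .. k*n-1, so column c belongs to channel c % n)
    lags = np.hstack([x[p - k: K - k, :] for k in range(1, p + 1)])
    keep = [c for c in range(lags.shape[1]) if c % n != src]
    X_full = np.hstack([np.ones((K - p, 1)), lags])
    X_rest = np.hstack([np.ones((K - p, 1)), lags[:, keep]])
    rss = lambda X: np.sum((y - X @ np.linalg.lstsq(X, y, rcond=None)[0]) ** 2)
    return (K - p) * np.log(rss(X_rest) / rss(X_full))

# illustrative bivariate VAR(1): x2 Granger-causes x1 but not vice versa
rng = np.random.default_rng(1)
x = np.zeros((1000, 2))
for t in range(1, 1000):
    x[t, 0] = 0.5 * x[t - 1, 0] + 0.4 * x[t - 1, 1] + rng.standard_normal()
    x[t, 1] = 0.5 * x[t - 1, 1] + rng.standard_normal()
s_21 = granger_stat(x, src=1, dst=0, p=1)  # large: x2 -> x1 present
s_12 = granger_stat(x, src=0, dst=1, p=1)  # small: x1 -> x2 absent
```

Comparing each statistic against the \(\chi^2_p\) quantile at \(\alpha = 1\,\%\) yields the accept/reject decisions whose FP/FN rates are tabulated in this study.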

#### 2.2.2 *i*PDC performance for Model 1

Figure 4 summarizes the asymptotic *i*PDC statistical performance for the same data and record lengths as for GCT in Fig. 3, with similar performance (Figs. 5, 6). Closer comparison on identical trials for each estimator leads to Fig. 7, depicting *i*PDC versus GCT performance (*K* = 2000) and further revealing a pattern of consistently higher FP values for *i*PDC, as expected from how the test was performed, with the *i*PDC decision dictated by a single maximum frequency above threshold. In Fig. 7, the average slopes are above 45°, consistent with the larger number of FPs for *i*PDC.

#### 2.2.3 *c*MVGC behaviour for Model 1

For Model 1, *c*MVGC results differ from those of both GCT and *i*PDC. These differences are easier to appreciate in the trial-by-trial comparison with respect to GCT (Fig. 8), which shows that *c*MVGC's FP rates are sometimes well below the imposed \(\alpha =1\,\%\) and become even more extreme after the authors' recommended corrections [9] (*K* = 2000). Note how the point distributions in Fig. 8 hardly ever cluster round the 45° line for connections reaching the \(x_1\) and \(x_6\) oscillators; for connections leaving \(x_6\), the pattern is reversed. It is this failure to meet the preset \(\alpha =1\,\%\), irrespective of which connection is under consideration, that we call *anomalous* here.

#### 2.2.4 *i*DTF performance for Model 1

Analogous boxplots summarize the performance of *i*DTF. They clearly show that, for larger sample sizes, *i*DTF correctly detects the reachability structure shown in Fig. 2c. Note that the weakest, and in this case farthest, connection (\(x_2 \rightarrow x_1\)) requires longer record lengths for proper detection.

### 2.3 Model 2: Five-variable model

Model 2 was simulated with *K* = {100, 200, 500, 1000, 2000} long records over 1000 Monte Carlo repetitions.

#### 2.3.1 GCT performance

Already at *K* = 200, GCT performs well, with FN rates below 5 %, and reaches overall FN rates below 2 % for *K* = 2000.

#### 2.3.2 *i*PDC performance

Performance is good for *K* > 200 for all measures: GCT, *i*PDC and *c*MVGC (see Figs. 10–12). Overall, the pattern of *i*PDC performance is similar to GCT's, yet *i*PDC's FP rates are slightly higher than GCT's; for example, the FP rate for *K* = 2000 is between 2.7 and 5.6 % (Fig. 13).

#### 2.3.3 *c*MVGC asymptotic behaviour for Model 2

*c*MVGC performance for *K* = {100, 200, 500, 1000, 2000} is shown in Fig. 12. When taken with respect to GCT (Fig. 14), FPs are consistently lower than GCT's for *K* = 2000 and, as in the case of the previous model, *c*MVGC does not conform to the preset \(\alpha =1\,\%\) FP rate. This is also easy to appreciate for other values of \(K\) in Fig. 12, as most outliers (red crosses) lie below the \(-\log_{10}\)(*p* value) = 2 line for nonexisting connections.

Trial-by-trial comparisons against GCT for *i*PDC and *c*MVGC, respectively, confirm the pattern of higher FPs for the former, compared with FP rates below 1 % for *c*MVGC with or without correction (see Figs. 13, 14). This also suggests possible problems in how the MVGC package handles the FP rate, which may be fortuitously benign to MVGC in this example but does not represent the general case, since it does not hold for Model 1.

### 2.4 Model 3: Modified five-variable model

To further probe GCT, *i*PDC and *c*MVGC, we simulated the modified five-channel toy Model 3, originally introduced in [6], under the formulation variant proposed by [19] and reproduced here for reference (Fig. 15). Here, to assess the impairment that the extra exogenous/latent variables may inflict on null-hypothesis testing, we repeated the procedure not just under the same conditions of [19], but also using a broader range of data record sizes.

#### 2.4.1 GCT performance in the presence of exogenous noise, Model 3

GCT's FP rates grow with record length and are worst in the *K* = 10000 case. For *K* = 500, the overall FP rates are between 2.3 and 7.9 % with a median of 3.8 %. At \(K=10000\), the latter rates grow to between 20.8 and 40.4 % with a median value of \(26.6\,\%\). FN rates are negligible.

#### 2.4.2 *i*PDC performance in the presence of exogenous noise

*i*PDC performance in detecting connectivity is similar to GCT's (see Figs. 16, 17). As noted before, *i*PDC tends to have higher FP rates than GCT, possibly due to the chosen frequency-domain detection criterion of taking a single frequency with a significant *p* value as indicative of a valid connection. Overall, FP rates range between 6.7 and 11.7 % (median \(8.5\,\%\)) at \(K=100\), increasing to \((30.8, 49.6\,\%)\) (median \(40.1\,\%\)) at *K* = 10000.

#### 2.4.3 *c*MVGC performance for Model 3

## 3 Discussion

This study presents simulation evidence about the performance of statistical connectivity tests: two in the time domain and two using new frequency-domain measures.

We remind the reader that the frequency-domain tests, *i*DTF and *i*PDC, measure different aspects of connectivity and are not immediately comparable, as discussed at length in [17, 18]. This contrasts with GCT, *i*PDC and *c*MVGC, which are geared towards describing the same aspect of connectivity between adjacent structures [17]. Among the tests in the latter class, GCT proved to be the one most in accord with the expected Neyman–Pearson behaviour, in the sense that observed FP rates agree with the preset value of \(\alpha\), justifying its employment as the reference in the trial-by-trial comparisons between methods summed up herein.

Qualitatively, *i*PDC closely mirrors GCT behaviour and predictably produces higher FP rates as a consequence of how *i*PDC connectivity was detected, by deeming just one frequency above threshold as significant. Whereas one may conceivably improve on how *i*PDC is employed for testing, its use is recommended when there is frequency content of physiological interest.

Added for comparison, *c*MVGC detection proved to be biased towards a reduction of the FP rates in many cases. By contrast, examination of its behaviour for other \(K\) (available in more detail from our Web site) suggests that, for small \(K\), it tends to miss existing connections more often than the other methods.

Perhaps more striking, and more important in the sense of Neyman–Pearson detection, is \(\alpha\) compliance: procedures are usually constructed to impart control over FP decisions, a condition that, according to the present observations, fails to be met by the *c*MVGC implementation from [9], which was used here without modification. It is also important to note that employing the author-recommended decision corrections [9] usually aggravates matters. It is this lack of compliance with Neyman–Pearson criteria that we termed *anomalous*. Whether this happens due to a software glitch or reflects a more fundamental issue is unknown. One should note that in many instances *c*MVGC produced fewer FPs, something good in itself. This apparent quality is counterbalanced by much worse performance for some links, as in Model 1, in sharp contrast to the other methods, whose results attain the prescribed \(\alpha\) and are balanced across all connections to within the attainable accuracies of the Monte Carlo simulations.

Based on its good asymptotic control of FP observations, it is fair to suggest that, at least provisionally, GCT, as proposed by [2], be taken as a gold standard for detecting connectivity between adjacent structures and that *i*PDC and *c*MVGC should be used taking into account adequate forewarning of their present observed limitations.

The present Monte Carlo simulations showed good large sample fit and robustness for Models 1 and 2. In the presence of large exogenous/latent variables (Model 3), we observed poor performance for large samples possibly due to the poor performance of the MAR model estimation algorithms under low signal-to-noise ratio regardless of the statistical procedure (\(K>5000\)). In this regard, Model 3 deserves the special comment that its comparatively worse performance is not surprising since, strictly speaking, it violates the usual assumptions behind the development of all the test detection procedures discussed herein.

Finally, we propose that the present methodology represents the seed of a potential tool for systematically comparing connectivity estimators. The reason for this is twofold: (a) the framework provides a standardized approach whereby comparisons can be made systematically, and (b) it may be used even in the absence of formally rigorous statistical criteria, i.e. even if only ad hoc decision rules are available, and is therefore not restricted to methods with theoretically well-established detection criteria. We plan to include bootstrap-based connectivity detection schemes under the same standardized framework for future comparisons.

## Notes

### Acknowledgments

CNPq Grant 307163/2013-0 to L.A.B. and CNPq 309381/2012-6 and FAPESP 2014/12907-3 Grants to K.S. are also gratefully acknowledged, and the authors offer thanks to NAPNA - Núcleo de Neurociência Aplicada from the University of São Paulo. Part of this work was carried out during FAPESP Grant 2005/56464-9 (CInAPCe).

## References

- 1. Baccalá LA, Sameshima K (2014) Brain connectivity. In: Sameshima K, Baccalá LA (eds) Methods in brain connectivity inference through multivariate time series analysis. CRC Press, Boca Raton, pp 1–9
- 2. Lütkepohl H (2005) New introduction to multiple time series analysis. Springer, New York
- 3. Baccalá L, De Brito C, Takahashi D, Sameshima K (2013) Unified asymptotic theory for all partial directed coherence forms. Philos Trans R Soc A 371:1–13
- 4. Baccalá LA, Takahashi DY, Sameshima K (2015) Consolidating a link centered neural connectivity framework with directed transfer function asymptotic. arXiv: q-bio.nc/1166340
- 5. Takahashi D, Baccalá L, Sameshima K (2010) Information theoretic interpretation of frequency domain connectivity measures. Biol Cybern 103:463–469
- 6. Baccalá LA, Sameshima K (2001) Partial directed coherence: a new concept in neural structure determination. Biol Cybern 84:463–474
- 7. Kamiński M, Blinowska KJ (1991) A new method of the description of the information flow in brain structures. Biol Cybern 65:203–210
- 8. Sameshima K, Takahashi DY, Baccalá LA (2014) On the statistical performance of connectivity estimators in the frequency domain. In: Slezak D, Tan AH, Peters JF, Schwabe L (eds) Lecture Notes in Computer Science. Springer, Heidelberg, pp 412–423
- 9. Barnett L, Seth AK (2014) The MVGC multivariate Granger causality toolbox: a new approach to Granger-causal inference. J Neurosci Methods 223:50–68
- 10. Haufe S, Nikulin VV, Müller KR, Nolte G (2013) A critical assessment of connectivity measures for EEG data: a simulation study. NeuroImage 64:120–133
- 11. Wu MH, Frye RE, Zouridakis G (2011) A comparison of multivariate causality based measures of effective connectivity. Comput Biol Med 41:1132–1141
- 12. Florin E, Gross J, Pfeifer J, Fink GR, Timmermann L (2011) Reliability of multivariate causality measures for neural data. J Neurosci Methods 198:344–358
- 13. Fasoula A, Attal Y, Schwartz D (2013) Comparative performance evaluation of data-driven causality measures applied to brain networks. J Neurosci Methods 215:170–189
- 14. Astolfi L, Cincotti F, Mattia D, Marciani MG, Baccalà LA, De Vico Fallani F, Salinari S, Ursino M, Zavaglia M, Ding L, Edgar JC, Miller GA, He B, Babiloni F (2007) Comparison of different cortical connectivity estimators for high-resolution EEG recordings. Hum Brain Mapp 28:143–157
- 15. Marple SL Jr (1987) Digital spectral analysis: with applications. Prentice Hall, Englewood Cliffs
- 16. Baccalá LA, Sameshima K (2001) Overcoming the limitations of correlation analysis for many simultaneously processed neural structures. Prog Brain Res Adv Neural Popul Coding 130:33–47
- 17. Baccalá LA, Sameshima K (2014) Causality and influentiability: the need for distinct neural connectivity concepts. In: Slezak D, Tan AH, Peters JF, Schwabe L (eds) Lecture Notes in Computer Science. Springer, Heidelberg, pp 424–435
- 18. Baccalá LA, Sameshima K (2014) Multivariate time series brain connectivity: a sum up. In: Sameshima K, Baccalá LA (eds) Methods in brain connectivity inference through multivariate time series analysis. CRC Press, Boca Raton, pp 245–251
- 19. Guo S, Wu J, Ding M, Feng J (2008) Uncovering interactions in the frequency domain. PLoS Comput Biol 4:e1000087

## Copyright information

**Open Access**This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.