Abstract
The flow of information between different areas of the brain can be studied using the so-called partial directed coherence (PDC). This measure is usually evaluated by first identifying a multivariate autoregressive model, then taking Fourier transforms of the identified impulse responses and applying appropriate normalizations. Here, we present another way to evaluate PDCs in multivariate time series. The proposed method is nonparametric and relies on a strong spectral factorization of the inverse of the spectral density matrix of the multivariate process. To perform the factorization, we have recourse to an algorithm developed by Davis and his collaborators. We present simulations as well as an application to a real data set (local field potentials in a sleeping mouse) to illustrate the methodology, and we give a detailed comparison with the usual approach in terms of computational complexity. The proposed approach is of particular interest when long autoregressive models are needed.
References
Amblard PO, Michel OJJ (2011) On directed information theory and Granger causality graphs. J Comput Neurosci 30(1):7–16. doi:10.1007/s10827-010-0231-x
Amblard PO, Michel OJJ (2013) The relation between Granger causality and directed information theory: a review. Entropy 15(1):113–143
Anderson BDO, Moore JB (1979) Optimal filtering. Prentice Hall, London
Baccalá LA, Sameshima K (2001) Partial directed coherence: a new concept in neural structure determination. Biol Cybern 84:463–474
Barnett L, Seth AK (2014) The MVGC multivariate Granger causality toolbox: a new approach to Granger-causal inference. J Neurosci Methods 223:50–68
Boyd SP, Vandenberghe L (2004) Convex optimization. Cambridge University Press, Cambridge
Brillinger DR (2001) Time series: data analysis and theory. SIAM, Philadelphia
Chicharro D (2011) On the spectral formulation of Granger causality. Biol Cybern 105:331–347
Davis JH, Dickinson RG (1983) Spectral factorization by optimal gain iteration. SIAM J Appl Math 43(2):289–301
Dhamala M, Rangarajan G, Ding M (2008) Analysing information flow in brain networks with nonparametric Granger causality. Neuroimage 41:354–362
Dickinson RG (1978) Iterative methods for matrix spectral factorization. Unpublished master’s thesis, Queen’s University, Kingston, Ontario, Canada
Efron B (2010) Large-scale inference: empirical Bayes methods for estimation, testing, and prediction. Cambridge University Press, Cambridge
Eichler M (2006) On the evaluation of information flow in multivariate systems by directed transfer function. Biol Cybern 94:469–482
Eichler M (2011) Graphical modeling of multivariate time series. Probab Theory Relat Fields. doi:10.1007/s00440-011-0345-8
Geweke J (1982) Measurement of linear dependence and feedback between multiple time series. J Am Stat Assoc 77:304–313
Geweke J (1984) Measures of conditional linear dependence and feedback between time series. J Am Stat Assoc 79(388):907–915
Gourévitch B, Le Bouquin-Jeannès R, Faucon G (2006) Linear and nonlinear causality between signals: methods, examples and neurophysiological applications. Biol Cybern 95(4):349–369
Granger CWJ (1980) Testing for causality: a personal viewpoint. J Econ Dyn Control 2:329–352
Hall P (1992) The bootstrap and Edgeworth expansion. Springer, New York
Harris TJ, Davis JH (1992) An iterative method for matrix spectral factorization. SIAM J Sci Stat Comput 13(2):531–540
Lütkepohl H (2005) New introduction to multiple time series analysis. Springer, Berlin
Morf M, Vieira A, Lee D, Kailath T (1978) Recursive multichannel maximum entropy spectral estimation. IEEE Trans Geosci Electron 16(2):85–94
Percival DB, Walden AT (1993) Spectral analysis for physical applications: multitaper and conventional univariate techniques. Cambridge University Press, Cambridge
Rozanov YA (1967) Stationary random processes. Holden Day, San Francisco
Sameshima K, Baccalá LA (1999) Using partial directed coherence to describe neuronal ensemble interactions. J Neurosci Methods 94:93–103
Schelter B, Timmer J, Eichler M (2009) Assessing the strength of directed influences among neural signals using renormalized partial directed coherence. J Neurosci Methods 179:121–130
Schelter B, Winterhalder M, Eichler M, Peifer M, Hellwig B, Guschlbauer B, Lücking CH, Dahlhaus R, Timmer J (2005) Testing for directed influences among neural signals using partial directed coherence. J Neurosci Methods 152:210–219
Sporns O (2010) Networks of the brain. MIT Press, Cambridge
Whittaker J (1989) Graphical models in applied multivariate statistics. Wiley, New York
Wilson GT (1972) The factorization of matricial spectral densities. SIAM J Appl Math 23(4):420–426
Acknowledgments
P.O. Amblard is supported by a Marie Curie International Outgoing Fellowship of the European Union. P.O.A. gratefully acknowledges J. Davis for the discussions on his algorithm and S. Crochet for making his data available.
Additional information
Part of this work was performed while P.O.A. was affiliated with the University of Melbourne, Math & Stat Dept, Australia.
Appendices
Appendix 1: Spectral factorization
The aim of this appendix is to present the main steps of the derivation of the Davis and Dickinson algorithm; the code is given in the next appendix. The algorithm relies on the equivalence between Kalman filtering and Wiener filtering. The complete proof is lengthy and requires a considerable amount of algebraic manipulation; it is given in some detail in Dickinson’s master’s thesis but does not appear in other publications, which is the main reason for including its main steps here. The only difference from Dickinson’s proof is the faster method we use to obtain Eq. (9) in what follows.
The proof consists of expressing the spectral factors used in Wiener filtering in terms of the elements of the solution of the Kalman filtering problem. The spectral factor then essentially depends on the error covariance, which is given by the solution of a Riccati equation. This equation has no closed-form solution (except in very rare cases). Solving the Riccati equation with a Newton–Raphson recursion yields the Davis algorithm as a byproduct.
Suppose we have the following state and observation equations, \({\varvec{x}}_{k+1} = {\varvec{A}}{\varvec{x}}_k + {\varvec{B}}{\varvec{u}}_k\) and \({\varvec{y}}_k = {\varvec{C}}{\varvec{x}}_k + {\varvec{v}}_k\),
where \({\varvec{u}}\) and \({\varvec{v}}\) are independent white sequences with zero mean and corresponding covariance matrices \({\varvec{Q}}\) and \({\varvec{R}}\). The aim in filtering is to estimate \({\varvec{x}}_k\) from the observation \({\varvec{y}}\) up to time \(k\).
The spectral matrix of the observations is \({\varvec{S}}_{yy}(z) = {\varvec{C}}{\varvec{S}}_{xx}(z) {\varvec{C}}^\top + {\varvec{R}}\) and can be written as \({\varvec{S}}_{yy}(z) = {\varvec{C}}({\varvec{I}}z-{\varvec{A}})^{-1}{\varvec{B}}{\varvec{Q}}{\varvec{B}}^\top ({\varvec{I}}z^{-1}-{\varvec{A}}^\top )^{-1}{\varvec{C}}^\top + {\varvec{R}}\).
The Wiener filter necessitates having a strong spectral factorization of this spectral matrix in the form \({\varvec{S}}_{yy}(z) = {\varvec{F}}(z)\,{\varvec{W}}\,{\varvec{F}}(z^{-1})^\top \), where \({\varvec{F}}(z)\) is causal and causally invertible.
Note that in this sketch of the proof we adopt for convenience a convention other than that in Eq. (5), the derivation here being done in accordance with the common definition of correlation. The difference is simply a transposition. Then the spectral factor is given by \({\varvec{F}}(z) = {\varvec{I}}+ {\varvec{C}}({\varvec{I}}z-{\varvec{A}})^{-1} {\varvec{K}}\), where the steady-state Kalman gain reads \({\varvec{K}}= {\varvec{A}}{\varvec{P}}{\varvec{C}}^\top {\varvec{W}}^{-1}\) and \({\varvec{W}}= {\varvec{R}}+ {\varvec{C}}{\varvec{P}}{\varvec{C}}^\top \). Here \({\varvec{P}}\) is the solution of the Riccati equation \({\varvec{P}}= {\varvec{A}}{\varvec{P}}{\varvec{A}}^\top - {\varvec{A}}{\varvec{P}}{\varvec{C}}^\top {\varvec{W}}^{-1} {\varvec{C}}{\varvec{P}}{\varvec{A}}^\top + {\varvec{B}}{\varvec{Q}}{\varvec{B}}^\top \).
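As a numerical illustration of this construction, the Python sketch below solves the Riccati equation by simply iterating it to a fixed point and then checks the factorization \({\varvec{S}}_{yy}(z)={\varvec{F}}(z){\varvec{W}}{\varvec{F}}(z^{-1})^\top \) on the unit circle. The system matrices are arbitrary illustrative choices, not taken from the paper.

```python
import numpy as np

# Arbitrary stable system matrices (illustrative assumptions only)
A = np.array([[0.5, 0.2], [0.0, 0.3]])
B = np.eye(2)
C = np.eye(2)
Q = np.eye(2)
R = 0.5 * np.eye(2)
I = np.eye(2)

# Solve the steady-state Riccati equation by fixed-point iteration
P = np.eye(2)
for _ in range(500):
    W = R + C @ P @ C.T
    K = A @ P @ C.T @ np.linalg.inv(W)
    P = A @ P @ A.T - K @ W @ K.T + B @ Q @ B.T
W = R + C @ P @ C.T
K = A @ P @ C.T @ np.linalg.inv(W)

# Compare S_yy(z) with F(z) W F(1/z)^T on the unit circle
err = 0.0
for w_ in np.linspace(0.0, 2 * np.pi, 64, endpoint=False):
    z = np.exp(1j * w_)
    F = I + C @ np.linalg.inv(z * I - A) @ K          # spectral factor
    Sxx = np.linalg.inv(z * I - A) @ B @ Q @ B.T @ np.linalg.inv(I / z - A.T)
    Syy = C @ Sxx @ C.T + R
    # for real coefficients, F(1/z)^T on |z|=1 is the conjugate transpose
    err = max(err, np.max(np.abs(F @ W @ F.conj().T - Syy)))
print(err)  # close to machine precision
```

Note that \({\varvec{K}}{\varvec{W}}{\varvec{K}}^\top \) equals \({\varvec{A}}{\varvec{P}}{\varvec{C}}^\top {\varvec{W}}^{-1}{\varvec{C}}{\varvec{P}}{\varvec{A}}^\top \), so the update in the loop is exactly the Riccati recursion above.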
The problem reduces to one of obtaining the solution of the Riccati equation, which is far from being obvious. For this, Davis proposed using a Newton–Raphson algorithm to solve \({\varvec{P}}- {\varvec{A}}{\varvec{P}}{\varvec{A}}^\top + {\varvec{A}}{\varvec{P}}{\varvec{C}}^\top ({\varvec{R}}+{\varvec{C}}{\varvec{P}}{\varvec{C}}^\top )^{-1}{\varvec{C}}{\varvec{P}}{\varvec{A}}^\top - {\varvec{B}}{\varvec{Q}}{\varvec{B}}^\top = {\varvec{0}}\).
The Newton–Raphson iteration for solving this is the Stein equation \({\varvec{P}}_{n+1} - \bar{{\varvec{A}}}_n {\varvec{P}}_{n+1} \bar{{\varvec{A}}}_n^\top = {\varvec{B}}{\varvec{Q}}{\varvec{B}}^\top + {\varvec{K}}_n {\varvec{R}}{\varvec{K}}_n^\top \), where \(\bar{{\varvec{A}}}_n = {\varvec{A}}-{\varvec{K}}_n{\varvec{C}}\), \({\varvec{K}}_n = {\varvec{A}}{\varvec{P}}_n{\varvec{C}}^\top {\varvec{W}}_n^{-1}\), and \({\varvec{W}}_n = {\varvec{R}}+{\varvec{C}}{\varvec{P}}_n{\varvec{C}}^\top \).
A first trick is to use the series representation \({\varvec{X}}=\sum _{n\ge 0} {\varvec{A}}^{n} {\varvec{\Gamma }}{\varvec{A}}^{\top n}\) for the solution of \({\varvec{X}}- {\varvec{A}}{\varvec{X}}{\varvec{A}}^\top ={\varvec{\Gamma }}\). If \({\varvec{\Gamma }}\) is positive-definite, the series defines an inner product, and we can use Parseval’s equality to obtain the equivalent form in the \(z\)-domain. Apply this to \({\varvec{P}}_{n+1}\) to obtain
Then pre- and postmultiplying \({\varvec{P}}_{n+1}\) by \({\varvec{C}}\) and \({\varvec{C}}^\top \), using some algebra, and recalling the definitions of \({\varvec{S}}_{yy}\) and \({\varvec{F}}\) and the fact that \((2i\pi )^{-1}\oint {\varvec{F}}_n(z)^{-1} \hbox {d}z/z={\varvec{I}}\), allows us to obtain
which constitutes the first part of the algorithm.
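To make the Newton step concrete: each iteration amounts to solving a linear Stein (discrete Lyapunov) equation. The Python sketch below is a minimal illustration of such a Hewer-type recursion under arbitrarily chosen system matrices (all names and values are mine); the Stein equation is solved by brute-force vectorization for clarity.

```python
import numpy as np

def stein_solve(Abar, Gamma):
    """Solve X - Abar @ X @ Abar.T = Gamma by vectorization (Kronecker form)."""
    m = Abar.shape[0]
    M = np.eye(m * m) - np.kron(Abar, Abar)
    return np.linalg.solve(M, Gamma.reshape(-1)).reshape(m, m)

# Arbitrary stable system matrices (illustrative assumptions only)
A = np.array([[0.5, 0.2], [0.0, 0.3]])
B = np.eye(2)
C = np.eye(2)
Q = np.eye(2)
R = 0.5 * np.eye(2)

# Newton-Raphson iteration: each step solves a Stein equation
# with the closed-loop matrix A - K_n C
P = 10.0 * np.eye(2)
for _ in range(25):
    W = R + C @ P @ C.T
    K = A @ P @ C.T @ np.linalg.inv(W)
    P = stein_solve(A - K @ C, B @ Q @ B.T + K @ R @ K.T)

# Residual of the Riccati equation at the limit point
W = R + C @ P @ C.T
res = A @ P @ A.T - A @ P @ C.T @ np.linalg.inv(W) @ C @ P @ A.T + B @ Q @ B.T - P
print(np.max(np.abs(res)))  # close to machine precision
```

The quadratic convergence of the Newton iteration is visible here: a couple of dozen steps drive the Riccati residual to machine precision.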
To obtain the iteration on the spectral factor, Dickinson proposes examining \(\Delta {\varvec{P}}_n := {\varvec{P}}_{n+1}-{\varvec{P}}_n\). Subtracting two successive Newton–Raphson iterations leads to
This last matrix \( {\varvec{T}}_n\) is positive-definite since \({\varvec{W}}_{n-1} \) is positive-definite.
Since \(\Delta {\varvec{P}}_{n}\) satisfies an equation of the type \({\varvec{X}}- {\varvec{A}}{\varvec{X}}{\varvec{A}}^\top ={\varvec{\Gamma }}\), we use the series representation for \(\Delta {\varvec{P}}_{n}\),
and since \( {\varvec{T}}_n\) is positive-definite, we can obtain an equivalent form in the \(z\)-domain,
Then we must solve the following equation, whose solution can be verified by direct evaluation:
Inserting (9), a lengthy calculation leads to
an expression that can be linked with the causal projection.
If \(H(z)\) is the \(z\)-transform of a sequence \(h_k, k\in \mathbb {Z}\), recall that \(h_k = (2i\pi )^{-1}\oint H(z)\, z^{k}\, \hbox {d}z/z\), so that the causal projection reads \(\left[ H(z)\right] _+ = \sum _{k\ge 0} h_k z^{-k}\).
Thus, note that the integrals appearing in expression (11) are of the form
The first integral in (11) concerns an anticausal quantity with no constant term and is therefore equal to zero. Noting that \({\varvec{W}}_{n-1}\) does not depend on \(v\), and therefore its causal part is equal to itself, we finally get the beautiful result
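Numerically, the causal projection is conveniently carried out on a frequency grid with FFTs: transform to lag coefficients, zero the negative lags, and transform back. The illustrative Python snippet below (the helper name is mine, using the convention \([H]_+=\sum _{k\ge 0}h_k z^{-k}\)) also checks the two facts used above: the causal part of a purely anticausal quantity with no constant term vanishes, while the causal part of a constant is the constant itself.

```python
import numpy as np

def causal_part(H):
    """[H]_+ on a frequency grid: keep only the nonnegative-lag coefficients."""
    N = len(H)
    h = np.fft.ifft(H)   # lag coefficients; indices N//2.. hold the negative lags
    h[N // 2:] = 0.0
    return np.fft.fft(h)

N = 64
w = 2 * np.pi * np.arange(N) / N
# Purely anticausal with no constant term: H(z) = z, i.e. h_{-1} = 1
H_anti = np.exp(1j * w)
# A constant does not depend on z: its causal part is itself
H_const = 3.0 * np.ones(N)

print(np.max(np.abs(causal_part(H_anti))))       # ~ 0
print(np.max(np.abs(causal_part(H_const) - 3)))  # ~ 0
```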
Appendix 2: Code for spectral factorization
The following is MATLAB code for the spectral factorization. It uses three-dimensional arrays. No test for positive definiteness is included; if the assumptions on the matrix \(S\) are not met, the algorithm may fail to converge.
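As a stand-in sketch of the same class of fixed-point factorization schemes operating on three-dimensional arrays, here is a Python implementation of Wilson's iteration (Wilson 1972, cited above). This is an illustration under my own conventions, not the author's Davis-based MATLAB code: given samples of \(S\) on an FFT frequency grid, it returns a causal factor \(\psi \) with \(S \approx \psi \psi ^*\).

```python
import numpy as np

def wilson_factorize(S, n_iter=100):
    """Strong spectral factorization S(w) ~= psi(w) psi(w)^H, with psi causal,
    by Wilson's fixed-point iteration. S: (N, m, m) array on the FFT grid."""
    N, m, _ = S.shape
    # Initialize with the Cholesky factor of the spectral average
    psi = np.tile(np.linalg.cholesky(np.real(S.mean(axis=0))),
                  (N, 1, 1)).astype(complex)
    for _ in range(n_iter):
        psiH = np.conj(np.transpose(psi, (0, 2, 1)))
        # G = psi^{-1} S psi^{-H} + I at every frequency
        G = np.linalg.solve(psi, S @ np.linalg.inv(psiH)) + np.eye(m)
        # causal projection [G]_+ via the lag coefficients
        g = np.fft.ifft(G, axis=0)
        g[0] *= 0.5        # keep half of the zero-lag coefficient
        g[N // 2:] = 0.0   # drop the negative lags
        psi = psi @ np.fft.fft(g, axis=0)
    return psi

# Test spectrum built from a known causal factor H(w) = I + B1 exp(-iw)
N = 128
w = 2 * np.pi * np.arange(N) / N
B1 = np.array([[0.5, 0.2], [0.0, 0.3]])
H = np.eye(2)[None, :, :] + B1[None, :, :] * np.exp(-1j * w)[:, None, None]
S = H @ np.conj(np.transpose(H, (0, 2, 1)))

psi = wilson_factorize(S)
err = np.max(np.abs(psi @ np.conj(np.transpose(psi, (0, 2, 1))) - S))
print(err)  # small: psi psi^H reproduces S
```

As with the MATLAB listing described above, no positive-definiteness check on \(S\) is performed before the loop; one could be added to fail early when the assumptions are not met.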
Cite this article
Amblard, PO. A nonparametric efficient evaluation of partial directed coherence. Biol Cybern 109, 203–214 (2015). https://doi.org/10.1007/s00422-014-0636-0