
A nonparametric efficient evaluation of partial directed coherence

Original Paper · Biological Cybernetics

Abstract

The flow of information between different areas of the brain can be studied using the so-called partial directed coherence (PDC). This measure is usually evaluated by first identifying a multivariate autoregressive model, then taking Fourier transforms of the identified impulse responses, and finally applying appropriate normalizations. Here, we present another way to evaluate PDCs in multivariate time series. The proposed method is nonparametric and relies on a strong spectral factorization of the inverse of the spectral density matrix of a multivariate process. To perform the factorization, we have recourse to an algorithm developed by Davis and his collaborators. We present simulations as well as an application to a real data set (local field potentials in a sleeping mouse) to illustrate the methodology, and we give a detailed comparison with the common approach in terms of computational complexity. For long autoregressive models, the proposed approach is of particular interest.


References

  • Amblard PO, Michel OJJ (2011) On directed information theory and Granger causality graphs. J Comput Neurosci 30(1):7–16. doi:10.1007/s10827-010-0231-x

  • Amblard PO, Michel OJJ (2013) The relation between Granger causality and directed information theory: a review. Entropy 15(1):113–143

  • Anderson BDO, Moore JB (1979) Optimal filtering. Prentice Hall, London

  • Baccalá LA, Sameshima K (2001) Partial directed coherence: a new concept in neural structure determination. Biol Cybern 84:463–474

  • Barnett L, Seth AK (2014) The MVGC multivariate Granger causality toolbox: a new approach to Granger-causal inference. J Neurosci Methods 223:50–68

  • Boyd SP, Vandenberghe L (2004) Convex optimization. Cambridge University Press, Cambridge

  • Brillinger DR (2001) Time series: data analysis and theory. SIAM, Philadelphia

  • Chicharro D (2011) On the spectral formulation of Granger causality. Biol Cybern 105:331–347

  • Davis JH, Dickinson RG (1983) Spectral factorization by optimal gain iteration. SIAM J Appl Math 43(2):289–301

  • Dhamala M, Rangarajan G, Ding M (2008) Analyzing information flow in brain networks with nonparametric Granger causality. Neuroimage 41:354–362

  • Dickinson RG (1978) Iterative methods for matrix spectral factorization. Master's thesis, Queen's University, Kingston, Ontario, Canada

  • Efron B (2010) Large-scale inference: empirical Bayes methods for estimation, testing, and prediction. Cambridge University Press, Cambridge

  • Eichler M (2006) On the evaluation of information flow in multivariate systems by the directed transfer function. Biol Cybern 94:469–482

  • Eichler M (2011) Graphical modeling of multivariate time series. Probab Theory Relat Fields. doi:10.1007/s00440-011-0345-8

  • Geweke J (1982) Measurement of linear dependence and feedback between multiple time series. J Am Stat Assoc 77:304–313

  • Geweke J (1984) Measures of conditional linear dependence and feedback between time series. J Am Stat Assoc 79(388):907–915

  • Gourévitch B, Bouquin-Jeannès RL, Faucon G (2006) Linear and nonlinear causality between signals: methods, examples and neurophysiological applications. Biol Cybern 95(4):349–369

  • Granger CWJ (1980) Testing for causality: a personal viewpoint. J Econ Dyn Control 2:329–352

  • Hall P (1992) The bootstrap and Edgeworth expansion. Springer, New York

  • Harris TJ, Davis JH (1992) An iterative method for matrix spectral factorization. SIAM J Sci Stat Comput 13(2):531–540

  • Lütkepohl H (2005) New introduction to multiple time series analysis. Springer, Berlin

  • Morf M, Vieira A, Lee D, Kailath T (1978) Recursive multichannel maximum entropy spectral estimation. IEEE Trans Geosci Electron 16(2):85–94

  • Percival DB, Walden AT (1993) Spectral analysis for physical applications: multitaper and conventional univariate techniques. Cambridge University Press, Cambridge

  • Rozanov YA (1967) Stationary random processes. Holden-Day, San Francisco

  • Sameshima K, Baccalá LA (1999) Using partial directed coherence to describe neuronal ensemble interactions. J Neurosci Methods 94:93–103

  • Schelter B, Timmer J, Eichler M (2009) Assessing the strength of directed influences among neural signals using renormalized partial directed coherence. J Neurosci Methods 179:121–130

  • Schelter B, Winterhalder M, Eichler M, Peifer M, Hellwig B, Guschlbauer B, Lücking CH, Dahlhaus R, Timmer J (2005) Testing for directed influences among neural signals using partial directed coherence. J Neurosci Methods 152:210–219

  • Sporns O (2010) Networks of the brain. MIT Press, Cambridge

  • Whittaker J (1989) Graphical models in applied multivariate statistics. Wiley, New York

  • Wilson GT (1972) The factorization of matricial spectral densities. SIAM J Appl Math 23(4):420–426


Acknowledgments

P.O. Amblard is supported by a Marie Curie International Outgoing Fellowship of the European Union. P.O.A. gratefully acknowledges J. Davis for discussions on his algorithm and S. Crochet for making his data available.

Author information


Corresponding author

Correspondence to Pierre-Olivier Amblard.

Additional information

Part of this work was performed while P.O.A. was affiliated with the Department of Mathematics and Statistics, University of Melbourne, Australia.

Appendices

Appendix 1: Spectral factorization

The aim here is to present the main steps in the derivation of the Davis and Dickinson algorithm; the code is presented in the next appendix. The algorithm relies on the equivalence between Kalman filtering and Wiener filtering. The complete proof is lengthy and requires a considerable amount of algebraic manipulation; it is given in some detail in Dickinson's master's thesis but does not appear in other publications, which is the main reason for including its main steps here. The only difference from Dickinson's proof is the faster method we use to obtain Eq. (9) below.

The proof consists of expressing the spectral factors used in Wiener filtering in terms of the elements of the solution of the Kalman filtering problem. The spectral factor then essentially depends on the covariance of the error, which is given by the solution of a Riccati equation. This equation has no closed-form solution (except in very rare cases). Solving the Riccati equation with a Newton–Raphson recursion yields the Davis algorithm as a byproduct.

Suppose we have the following state and observation equations:

$$\begin{aligned} {\varvec{x}}_k&= {\varvec{A}}{\varvec{x}}_{k-1} + {\varvec{B}}{\varvec{u}}_k, \\ {\varvec{y}}_k&= {\varvec{C}}{\varvec{x}}_k + {\varvec{v}}_k, \end{aligned}$$

where \({\varvec{u}}\) and \({\varvec{v}}\) are independent white sequences with zero mean and corresponding covariance matrices \({\varvec{Q}}\) and \({\varvec{R}}\). The aim in filtering is to estimate \({\varvec{x}}_k\) from the observation \({\varvec{y}}\) up to time \(k\).
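Such a state-space model is straightforward to simulate. The sketch below uses illustrative matrices (not taken from the paper), with \({\varvec{Q}}={\varvec{R}}={\varvec{I}}\):

```python
import numpy as np

rng = np.random.default_rng(1)

A = np.array([[0.6, 0.2], [0.0, 0.4]])  # stable state matrix (illustrative)
B = np.eye(2)
C = np.array([[1.0, 0.5]])              # scalar observation of a 2-d state
n = 1000

x = np.zeros(2)
ys = []
for _ in range(n):
    x = A @ x + B @ rng.standard_normal(2)       # state equation, Q = I
    ys.append(C @ x + rng.standard_normal(1))    # observation equation, R = I
y = np.array(ys)                                 # shape (n, 1)
```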

The spectral density matrix of the observations is \({\varvec{S}}_{yy}(z) = {\varvec{C}}{\varvec{S}}_{xx}(z) {\varvec{C}}^\top + {\varvec{R}}\) and can be written as

$$\begin{aligned} {\varvec{S}}_{yy}(z) = {\varvec{R}}+ {\varvec{C}}({\varvec{I}}z - {\varvec{A}})^{-1} {\varvec{B}}{\varvec{Q}}{\varvec{B}}^\top ({\varvec{I}}z^{-\star } - {\varvec{A}})^{-\dagger }{\varvec{C}}^\top . \end{aligned}$$

The Wiener filter requires a strong spectral factorization of this spectral matrix in the form

$$\begin{aligned} {\varvec{S}}_{yy}(z) = {\varvec{F}}(z) {\varvec{W}}{\varvec{F}}^{\dagger }(z^{-\star }). \end{aligned}$$

Note that in this sketch of the proof we adopt, for convenience, a convention different from that of Eq. (5): the derivation here follows the common definition of correlation, the difference being simply a transposition. The spectral factor is then given by \({\varvec{F}}(z) = {\varvec{I}}+ {\varvec{C}}({\varvec{I}}z-{\varvec{A}})^{-1} {\varvec{K}}\), where the steady-state Kalman gain reads \({\varvec{K}}= {\varvec{A}}{\varvec{P}}{\varvec{C}}^\top {\varvec{W}}^{-1}\) with \({\varvec{W}}= {\varvec{R}}+ {\varvec{C}}{\varvec{P}}{\varvec{C}}^\top \), and \({\varvec{P}}\) is the solution of the Riccati equation \({\varvec{P}}= {\varvec{A}}{\varvec{P}}{\varvec{A}}^\top - {\varvec{A}}{\varvec{P}}{\varvec{C}}^\top {\varvec{W}}^{-1} {\varvec{C}}{\varvec{P}}{\varvec{A}}^\top + {\varvec{B}}{\varvec{Q}}{\varvec{B}}^\top \).

The problem thus reduces to obtaining the solution of the Riccati equation, which is far from obvious. For this, Davis proposed using a Newton–Raphson algorithm to solve

$$\begin{aligned} 0&= f({\varvec{P}}) \\&= -{\varvec{P}}+ {\varvec{A}}{\varvec{P}}{\varvec{A}}^\top - {\varvec{A}}{\varvec{P}}{\varvec{C}}^\top {\varvec{W}}^{-1} {\varvec{C}}{\varvec{P}}{\varvec{A}}^\top + {\varvec{B}}{\varvec{Q}}{\varvec{B}}^\top . \end{aligned}$$

The Newton–Raphson iteration for solving this is

$$\begin{aligned}&{\varvec{P}}_{n+1} - ({\varvec{A}}- {\varvec{K}}_n {\varvec{C}}) {\varvec{P}}_{n+1} ({\varvec{A}}- {\varvec{K}}_n {\varvec{C}})^\top \\&\quad ={\varvec{K}}_n {\varvec{R}}{\varvec{K}}_n^\top + {\varvec{B}}{\varvec{Q}}{\varvec{B}}^\top . \end{aligned}$$
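Each Newton–Raphson step is therefore a discrete Lyapunov (Stein) equation in \({\varvec{P}}_{n+1}\), which standard solvers handle directly. A minimal Python sketch, with illustrative matrices (not from the paper), iterates this step and checks that the limit satisfies the Riccati equation:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A = np.array([[0.5, 0.1], [0.0, 0.3]])  # stable A (illustrative)
B = np.eye(2)
C = np.array([[1.0, 0.0]])
Q = np.eye(2)
R = np.array([[1.0]])

P = np.eye(2)  # initial guess
for _ in range(50):
    W = R + C @ P @ C.T
    K = A @ P @ C.T @ np.linalg.inv(W)          # current Kalman gain K_n
    M = A - K @ C
    # Newton-Raphson step: P_{n+1} = M P_{n+1} M^T + K R K^T + B Q B^T,
    # a Stein equation solved by scipy's discrete Lyapunov solver.
    P = solve_discrete_lyapunov(M, K @ R @ K.T + B @ Q @ B.T)

# The limit P satisfies the Riccati equation: the residual is ~0.
W = R + C @ P @ C.T
resid = (-P + A @ P @ A.T
         - A @ P @ C.T @ np.linalg.inv(W) @ C @ P @ A.T
         + B @ Q @ B.T)
assert np.max(np.abs(resid)) < 1e-8
```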

A first trick is to use the series representation \({\varvec{X}}=\sum _{n\ge 0} {\varvec{A}}^{n} {\varvec{\Gamma }}{\varvec{A}}^{\top n }\) for the solution of \({\varvec{X}}- {\varvec{A}}{\varvec{X}}{\varvec{A}}^\top ={\varvec{\Gamma }}\). If \({\varvec{\Gamma }}\) is positive-definite, then the series defines an inner product, and Parseval's equality gives the equivalent form in the \(z\)-domain. Applying this to \({\varvec{P}}_{n+1}\) gives
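As a quick numerical check (with an illustrative stable \({\varvec{A}}\) and positive-definite \({\varvec{\Gamma }}\)), the truncated series \({\varvec{X}}=\sum_{n\ge 0}{\varvec{A}}^{n}{\varvec{\Gamma }}{\varvec{A}}^{\top n}\) indeed solves the Stein equation \({\varvec{X}}-{\varvec{A}}{\varvec{X}}{\varvec{A}}^\top ={\varvec{\Gamma }}\):

```python
import numpy as np

A = np.array([[0.4, 0.2], [0.0, 0.5]])   # spectral radius < 1
G = np.array([[1.0, 0.3], [0.3, 2.0]])   # positive-definite Gamma

# X = sum_{n>=0} A^n Gamma (A^T)^n, truncated (terms decay geometrically)
X = np.zeros((2, 2))
An = np.eye(2)
for _ in range(200):
    X += An @ G @ An.T
    An = An @ A

assert np.allclose(X - A @ X @ A.T, G)
```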

$$\begin{aligned} {\varvec{P}}_{n+1}&= \frac{1}{2i\pi } \oint (z{\varvec{I}}- {\varvec{A}}+ {\varvec{K}}_n {\varvec{C}})^{-1} \big ( {\varvec{K}}_n {\varvec{R}}{\varvec{K}}_n^\top + {\varvec{B}}{\varvec{Q}}{\varvec{B}}^\top \big ) \\&(z^{-\star } {\varvec{I}}- {\varvec{A}}+ {\varvec{K}}_n {\varvec{C}})^{-\dagger } \frac{\hbox {d}z}{z}. \end{aligned}$$

Pre- and postmultiplying \({\varvec{P}}_{n+1}\) by \({\varvec{C}}\) and \({\varvec{C}}^\top \), using some algebra, and recalling the definitions of \({\varvec{S}}_{yy}\) and \({\varvec{F}}\) together with the fact that \((2i\pi )^{-1}\oint {\varvec{F}}_n(z)^{-1} \hbox {d}z/z={\varvec{I}}\), we obtain

$$\begin{aligned} {\varvec{R}}+{\varvec{C}}{\varvec{P}}_{n+1} {\varvec{C}}^\top&= \frac{1}{2i\pi } \oint {\varvec{F}}_n(z)^{-1} {\varvec{S}}_{yy}(z) {\varvec{F}}_n(z^{-\star })^{-\dagger }\frac{\hbox {d}z}{z}\\&:= {\varvec{W}}_n, \end{aligned}$$

which constitutes the first part of the algorithm.

To obtain the iteration on the spectral factor, Dickinson proposes examining \(\Delta {\varvec{P}}_n := {\varvec{P}}_{n+1}-{\varvec{P}}_n\). Subtracting two successive Newton–Raphson iterations leads to

$$\begin{aligned}&\Delta {\varvec{P}}_{n} - ({\varvec{A}}- {\varvec{K}}_n {\varvec{C}}) \Delta {\varvec{P}}_n ({\varvec{A}}- {\varvec{K}}_n {\varvec{C}})^\top \\&\quad = -({\varvec{K}}_n- {\varvec{K}}_{n-1}){\varvec{W}}_{n-1}({\varvec{K}}_n- {\varvec{K}}_{n-1})^\top :=- {\varvec{T}}_n. \end{aligned}$$

This last matrix \( {\varvec{T}}_n\) is positive-definite since \({\varvec{W}}_{n-1} \) is positive-definite.

Since \(\Delta {\varvec{P}}_{n}\) satisfies an equation of the type \({\varvec{X}}- {\varvec{A}}{\varvec{X}}{\varvec{A}}^\top ={\varvec{\Gamma }}\), we use the series representation for \(\Delta {\varvec{P}}_{n}\),

$$\begin{aligned} \Delta {\varvec{P}}_{n} = -\sum _{k\ge 0}({\varvec{A}}- {\varvec{K}}_n {\varvec{C}})^k {\varvec{T}}_n({\varvec{A}}- {\varvec{K}}_n {\varvec{C}})^{k\top }, \end{aligned}$$

and since \( {\varvec{T}}_n\) is positive-definite, we can have an equivalent form in the \(z\)-domain,

$$\begin{aligned} \Delta {\varvec{P}}_{n}&= \frac{-1}{2i\pi } \oint \big ({\varvec{I}}- z^{-1}({\varvec{A}}- {\varvec{K}}_n {\varvec{C}})\big )^{-1}{\varvec{T}}_n \nonumber \\&\times \Big (z^{-\star } \big ( {\varvec{I}}- z^{\star }({\varvec{A}}- {\varvec{K}}_n {\varvec{C}})\big )\Big )^{-\dagger } \frac{\hbox {d}z}{z}. \end{aligned}$$
(9)

Next, we need the following identity, which can be verified by direct evaluation:

$$\begin{aligned}&{\varvec{C}}(z {\varvec{I}}- {\varvec{A}})^{-1}({\varvec{A}}-{\varvec{K}}_n {\varvec{C}})^\top \Delta {\varvec{P}}_n {\varvec{C}}^\top \nonumber \\&\quad = ({\varvec{F}}_{n+1}(z) -{\varvec{F}}_n(z)) {\varvec{W}}_{n}. \end{aligned}$$

Inserting (9), a lengthy calculation leads to

$$\begin{aligned}&-2i\pi ({\varvec{F}}_{n+1}(z) -{\varvec{F}}_n(z)) {\varvec{W}}_{n}\\&\quad = ({\varvec{F}}_n(z)- {\varvec{F}}_{n-1}(z)) {\varvec{W}}_{n-1} \nonumber \\&\qquad \times \oint \frac{\hbox {d}v}{v-z} ( {\varvec{F}}_n(v^{-\star })- {\varvec{F}}_{n-1}(v^{-\star }))^{\dagger } {\varvec{F}}_n(v^{-\star })^{-\dagger }\nonumber \\&\qquad - {\varvec{F}}_n(z) \nonumber \\&\qquad \times \oint \frac{\hbox {d}v}{v-z} {\varvec{F}}_n(v)^{-1} \Big ( {\varvec{F}}_n(v) {\varvec{W}}_{n-1} {\varvec{F}}_n^{\dagger }(v^{-\star }) - {\varvec{S}}_{yy}(v) \Big )\nonumber \\&\qquad \times {\varvec{F}}_n(v^{-\star })^{-\dagger }, \nonumber \end{aligned}$$
(10)

an expression that can be linked with the causal projection.

If \(H(z)\) is the \(z\)-transform of a sequence \(h_k, k\in \mathbb {Z}\), recall that

$$\begin{aligned} (P_+H)(z)&= \sum _{k\ge 0} h_k z^{-k}\\&= \frac{1}{2i\pi }\oint \frac{\hbox {d}v}{v} \frac{z}{z-v} H(v). \end{aligned}$$
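On a finite frequency grid, the causal projection \(P_+\) simply amounts to dropping the negative-lag Fourier coefficients. A small Python check with the hypothetical Laurent polynomial \(H(z) = 4z + 2 + 3z^{-1}\), whose causal part is \(2 + 3z^{-1}\):

```python
import numpy as np

N = 64
w = 2 * np.pi * np.arange(N) / N
z = np.exp(1j * w)                 # points on the unit circle

# H(z) = 4 z + 2 + 3 z^{-1}:  h_{-1} = 4, h_0 = 2, h_1 = 3
H = 4 * z + 2 + 3 / z

c = np.fft.ifft(H)                 # c[k] = h_k; negative lags wrap to the end
c[N // 2 + 1:] = 0                 # causal projection: zero negative-lag terms
Hplus = np.fft.fft(c)              # samples of (P_+ H)(z)

assert np.allclose(Hplus, 2 + 3 / z)
```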

Note that the integrals appearing in expression (10) are of the form

$$\begin{aligned} \oint \frac{\hbox {d}v}{v} \frac{v}{v-z} H(v)&= \oint \frac{\hbox {d}v}{v} \frac{v+z-z}{v-z}H(v) \\&= \oint \frac{\hbox {d}v}{v} H(v) -2i\pi (P_+H)(z). \end{aligned}$$

The first integral in (10) concerns an anticausal quantity with no constant term and is therefore equal to zero. Noting that \({\varvec{W}}_{n-1}\) does not depend on \(v\), so that its causal part is equal to itself, we finally get the beautiful result

$$\begin{aligned}&{\varvec{W}}_{n} = \frac{1}{2i\pi }\oint \frac{\hbox {d}v}{v}{\varvec{F}}_n(v)^{-1}{\varvec{S}}_{yy}(v){\varvec{F}}_n(v^{-\star })^{-\dagger }\\&{\varvec{F}}_{n+1}(z) = {\varvec{F}}_n(z) \Big (P_+\big [ {\varvec{F}}_n^{-1}{\varvec{S}}_{yy}{\varvec{F}}_n^{-\dagger }\big ]\Big )(z) {\varvec{W}}_{n}^{-1}. \end{aligned}$$

Appendix 2: Code for spectral factorization

The following is MATLAB code for the spectral factorization; it uses three-dimensional arrays. No test for positive definiteness is included, and if the assumptions on the matrix \(S\) are not met, the algorithm will generally fail to converge.

[The MATLAB listing appears as an image (figure a) in the published version.]
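Since the listing itself is reproduced only as an image here, the following Python sketch (not the author's MATLAB code) implements the same iteration, i.e., the final updates for \({\varvec{W}}_n\) and \({\varvec{F}}_{n+1}\) derived in Appendix 1, with an FFT-based causal projection. The function name, grid size, and the bivariate MA(1) test spectrum are illustrative choices:

```python
import numpy as np

def spectral_factor(S, n_iter=40):
    """Iterative strong spectral factorization S = F W F^dagger.

    S : (N, m, m) samples of a Hermitian positive-definite spectral density
        of a real process on N uniformly spaced frequencies.
    Returns F (monic causal factor, F(infinity) = I) sampled on the same
    grid, and the constant positive-definite matrix W.
    """
    N, m, _ = S.shape
    F = np.tile(np.eye(m, dtype=complex), (N, 1, 1))
    W = np.eye(m)
    for _ in range(n_iter):
        Finv = np.linalg.inv(F)
        G = Finv @ S @ Finv.conj().transpose(0, 2, 1)  # F^{-1} S F^{-dagger}
        c = np.fft.ifft(G, axis=0)       # matrix Fourier coefficients g_k
        c[N // 2 + 1:] = 0               # causal projection P_+: drop k < 0
        W = c[0].real                    # zero-lag coefficient W_n (real process)
        F = F @ np.fft.fft(c, axis=0) @ np.linalg.inv(W)
    return F, W

# Example: factor the spectrum of a bivariate MA(1) process,
# S = (I + Theta z^{-1}) (I + Theta^T z), whose true monic factor is known.
N = 128
u = np.exp(-2j * np.pi * np.arange(N) / N)        # z^{-1} on the unit circle
Theta = np.array([[0.5, 0.2], [0.0, 0.3]])
Ftrue = np.eye(2)[None] + Theta[None] * u[:, None, None]
S = Ftrue @ Ftrue.conj().transpose(0, 2, 1)

F, W = spectral_factor(S)
rec = F @ W @ F.conj().transpose(0, 2, 1)
assert np.allclose(rec, S, atol=1e-6)             # S is recovered as F W F^dagger
```

The causal projection keeps the full zero-lag coefficient and divides by \({\varvec{W}}_n\), which preserves the normalization \({\varvec{F}}_n(\infty )={\varvec{I}}\) at every iteration.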


Cite this article

Amblard, PO. A nonparametric efficient evaluation of partial directed coherence. Biol Cybern 109, 203–214 (2015). https://doi.org/10.1007/s00422-014-0636-0
