Abstract
A recently proposed iterated alternating sequential (IAS) MEG inverse solver algorithm, based on the coupling of a hierarchical Bayesian model with a computationally efficient Krylov subspace linear solver, has been shown to perform well for both superficial and deep brain sources. However, a systematic study of its ability to correctly identify active brain regions is still missing. We propose novel statistical protocols to quantify the performance of MEG inverse solvers, focusing in particular on their accuracy and precision in identifying active brain regions. We use these protocols for a systematic study of the performance of the IAS MEG inverse solver, comparing it with three standard inversion methods: wMNE, dSPM, and sLORETA. To avoid the bias of anecdotal tests towards a particular algorithm, the proposed protocols are based on Monte Carlo sampling, generating an ensemble of activity patches in each brain region identified in a given atlas. The performance in correctly identifying the active areas is measured by how much, on average, the reconstructed activity is concentrated in the brain region of the simulated active patch. The analysis is based on Bayes factors, interpreting the estimated current activity as data for testing the hypothesis that the active brain region is correctly identified, against the hypothesis of any erroneous attribution. The methodology allows for a single active region or several simultaneously active regions, without assuming that the number of active regions is known. The testing protocols suggest that the IAS solver performs well with both cortical and subcortical activity estimation.
References
Aine CJ, Sanfratello L, Ranken D, Best E, MacArthur JA, Wallace T, Gilliam K, Donahue CH, Montano R, Bryant JE et al (2012) MEG-SIM: a web portal for testing MEG analysis methods using realistic simulated and empirical data. Neuroinformatics 10(2):141–158
Algorri ME, Flores-Mangas F (2004) Classification of anatomical structures in MR brain images using fuzzy parameters. IEEE Trans Biomed Eng 51:1599–1608
Attal Y, Schwartz D (2013) Assessment of subcortical source localization using deep brain activity imaging model with minimum norm operators: a MEG study. PLoS ONE 8:e59856
Attal Y, Maess B, Friederici A, David O (2012) Head models and dynamic causal modeling of subcortical activity using magnetoencephalographic/electroencephalographic data. Rev Neurosci 23:141–158
Auranen T, Nummenmaa A, Hämäläinen MS, Jääskeläinen IP, Lampinen J, Vehtari A, Sams M (2005) Bayesian analysis of the neuromagnetic inverse problem with \(\ell _p\)-norm priors. NeuroImage 26:870–884
Baillet S, Mosher JC, Leahy RM (2001) Electromagnetic brain mapping. IEEE Signal Proc Mag 18:14–30
Bernardo JM, Smith AFM (2004) Bayesian theory. Wiley, New York
Brette R, Destexhe A (2012) Handbook of neural activity measurement. Cambridge University Press, Cambridge
Calvetti D, Pascarella A, Pitolli F, Somersalo E, Vantaggi B (2015) A hierarchical Krylov-Bayes iterative inverse solver for MEG with physiological preconditioning. Inverse Probl 31:125005
Calvetti D, Pitolli F, Somersalo E, Vantaggi B (2018) Bayes meets Krylov: preconditioning CGLS for underdetermined systems. SIAM Rev. https://doi.org/10.1137/15M1055061
Calvetti D, Hakula H, Pursiainen S, Somersalo E (2009) Conditionally Gaussian hypermodels for cerebral source localization. SIAM J Imaging Sci 2:879–909
Ciofolo C, Barillot C (2009) Atlas-based segmentation of 3D cerebral structures with competitive level sets and fuzzy control. Med Image Anal 13:456–470
Dale AM, Fischl B, Sereno MI (1999) Cortical surface-based analysis: I. Segmentation and surface reconstruction. NeuroImage 9:179–194
Dale AM, Liu AK, Fischl BR, Buckner RL, Belliveau JW, Lewine JD, Halgren E (2000) Dynamic statistical parametric mapping: combining fMRI and MEG for high-resolution imaging of cortical activity. Neuron 26:55–67
de Munck JC, Vijn PCM, Lopes da Silva FH (1992) A random dipole model for spontaneous brain activity. IEEE Trans Biomed Eng 39:791–804
Desikan RS, Ségonne F, Fischl B, Quinn BT, Dickerson BC, Blacker D, Buckner RL, Dale AM, Maguire RP, Hyman BT et al (2006) An automated labeling system for subdividing the human cerebral cortex on MRI scans into gyral based regions of interest. NeuroImage 31:968–980
Gorodnitsky IF, Rao BD (1997) Sparse signal reconstruction from limited data using FOCUSS: a re-weighted minimum norm algorithm. IEEE Trans Signal Process 45:600–616
Gramfort A, Papadopoulos T, Olivi E, Clerc M (2010) OpenMEEG: open source software for quasistatic bioelectromagnetics. Biomed Eng Online 9:45
Gramfort A, Luessi M, Larson E, Engemann DA, Strohmeier D, Brodbeck C, Parkkonen L, Hämäläinen MS (2014) MNE software for processing MEG and EEG data. NeuroImage 31:446–460
Hedrich T, Pellegrino G, Kobayashi E, Lina JM, Grova C (2017) Comparison of the spatial resolution of source imaging techniques in high-density EEG and MEG. NeuroImage 175:531–544
Henson RN, Mattout J, Phillips C, Friston KJ (2009) Selecting forward models for MEG source-reconstruction using model-evidence. NeuroImage 46:168–176
Henson RN, Flandin G, Friston KJ, Mattout J (2010) A parametric empirical Bayesian framework for fMRI-constrained MEG/EEG source reconstruction. Hum Brain Mapp 31:1512–1531
Hämäläinen MS, Hari R, Ilmoniemi RJ, Knuutila J, Lounasmaa OV (1993) Magnetoencephalography-theory, instrumentation, and applications to noninvasive studies of the working human brain. Rev Mod Phys 65:413–498
Hämäläinen MS, Ilmoniemi RJ (1984) Interpreting measured magnetic fields of the brain: estimates of current distributions. Report TKK-F-A559
Huizenga HM, De Munck JC, Waldorp LJ, Grasman RPPP (2002) Spatiotemporal EEG/MEG source analysis based on a parametric noise covariance model. IEEE Trans Biomed Eng 49:533–539
Kass RE, Raftery AE (1995) Bayes factors. J Am Stat Assoc 90:773–795
Kiebel SJ, Daunizeau J, Phillips C, Friston KJ (2008) Variational Bayesian inversion of the equivalent current dipole model in EEG/MEG. NeuroImage 39:728–741
Kybic J, Clerc M, Abboud T, Faugeras O, Keriven R, Papadopoulos T (2005) A common formalism for the integral formulations of the forward EEG problem. IEEE Trans Med Imaging 24:12–28
Lin FH, Witzel T, Ahlfors SP, Stufflebeam SM, Belliveau JW, Hämäläinen MS (2006a) Assessing and improving the spatial accuracy in MEG source localization by depth-weighted minimum-norm estimates. NeuroImage 31:160–171
Lin FH, Belliveau JW, Dale AM, Hämäläinen MS (2006b) Distributed current estimates using cortical orientation constraints. Hum Brain Mapp 27:1–13
Lopez JD, Litvak V, Espinosa JJ, Friston K, Barnes GR (2014) Algorithmic procedures for Bayesian MEG/EEG source reconstruction in SPM. NeuroImage 84:476–487
Lucka F, Pursiainen S, Burger M, Wolters CH (2012) Hierarchical Bayesian inference for the EEG inverse problem using realistic FE head models: depth localization and source separation for focal primary currents. NeuroImage 61:1364–1382
Mattout J, Phillips C, Penny WD, Rugg MD, Friston KJ (2006) MEG source localization under multiple constraints: an extended Bayesian framework. NeuroImage 30:753–767
Molins A, Stufflebeam SM, Brown EN, Hämäläinen MS (2008) Quantification of the benefit from integrating MEG and EEG data in minimum \(\ell _2\)-norm estimation. NeuroImage 42(3):1069–1077
Mosher JC, Spencer ME, Leahy RM, Lewis PS (1993) Error bounds for EEG and MEG dipole source localization. Electroenceph clin Neurophysiol 86:303–321
Nagarajan SS, Portniaguine O, Hwang D, Johnson C, Sekihara K (2006) Controlled support MEG imaging. NeuroImage 33:878–885
Nummenmaa A, Auranen T, Hämäläinen MS, Jääskeläinen IP, Lampinen J, Sams M, Vehtari A (2007a) Hierarchical Bayesian estimates of distributed MEG sources: theoretical aspects and comparison of variational and MCMC methods. NeuroImage 35:669–685
Nummenmaa A, Auranen T, Hämäläinen MS, Jääskeläinen IP, Lampinen J, Sams M, Vehtari A (2007b) Automatic relevance determination based hierarchical Bayesian MEG inversion in practice. NeuroImage 37:876–889
Owen JP, Wipf DP, Attias HT, Sekihara K, Nagarajan SS (2012) Performance evaluation of the Champagne source reconstruction algorithm on simulated and real M/EEG data. NeuroImage 60:305–323
Parkkonen L, Fujiki N, Mäkelä JP (2009) Sources of auditory brainstem responses revisited: contribution by magnetoencephalography. Hum Brain Mapp 30:1772–1782
Pascual-Marqui RD (1999) Review of methods for solving the EEG inverse problem. Int J Bioelectromagn 1:75–86
Sato M, Yoshioka T, Kajihara S, Toyama K, Goda N, Doya K, Kawato M (2004) Hierarchical Bayesian estimation for MEG inverse problem. NeuroImage 23:806–826
Stephan KE, Penny WD, Daunizeau J, Moran RJ, Friston KJ (2009) Bayesian model selection for group studies. NeuroImage 46:1004–1017
Tadel F, Baillet S, Mosher JC, Pantazis D, Leahy RM (2011) Brainstorm: a user-friendly application for MEG/EEG analysis. Comp Intell Neurosci 2011:879716
Trujillo-Barreto NJ, Aubert-Vázquez E, Valdés-Sosa PA (2004) Bayesian model averaging in EEG/MEG imaging. NeuroImage 21:1300–1319
Uutela K, Hämäläinen MS, Somersalo E (1999) Visualization of magnetoencephalographic data using minimum current estimates. NeuroImage 10:173–180
Vorwerk J, Cho JH, Rampp S, Hamer H, Knösche TR, Wolters CH (2014) A guideline for head volume conductor modeling in EEG and MEG. NeuroImage 100:590–607
Wagner M, Fuchs M, Kastner J (2004) Evaluation of sLORETA in the presence of noise and multiple sources. Brain Topogr 16:277–280
Wipf D, Nagarajan S (2009) A unified Bayesian framework for MEG/EEG source imaging. NeuroImage 44:947–966
Wipf DP, Owen JP, Attias HT, Sekihara K, Nagarajan SS (2010) Robust Bayesian estimation of the location, orientation, and time course of multiple correlated neural sources using MEG. NeuroImage 49:641–655
Acknowledgements
This work was completed during the visit of DC and ES at University of Rome “La Sapienza” (Visiting Researcher/Professor Grant 2015). The hospitality of the host university is kindly acknowledged. The work of ES was partly supported by NSF Grant DMS-1312424. The work of DC was partially supported by grants from the Simons Foundation (#305322 and #246665) and by NSF Grant DMS-1522334.
Handling Editor: Christophe Grova.
Electronic supplementary material
Supplementary material Figure S1 Mapping of the brain activity to 85 different BRs over 100 simulations using synthetic data corresponding to randomly generated activity patches in the right frontal pole, indicated in red in the list of the BRs reconstructed with, respectively, IAS (a), wMNE (b), dSPM (c) and sLORETA (d). The histograms bin the average activity in each BR: in red the BRs of the left hemisphere and in black the ones of the right hemisphere. (pdf 79.7KB)
Supplementary material Figure S2 Mapping of the brain activity to 85 different BRs over 100 simulations using synthetic data corresponding to randomly generated activity patches in the left amygdala, indicated in red in the list of the BRs reconstructed with, respectively, IAS (a), wMNE (b), dSPM (c) and sLORETA (d). The histograms bin the average activity in each BR: in red the BRs of the left hemisphere and in black the ones of the right hemisphere. (pdf 79.6KB)
Appendix
Interpretation of Hyperparameters
In Calvetti et al. (2015), the interpretation of the hyperparameters \(\beta \in {\mathbb {R}}\) and \(\theta ^*\in {\mathbb {R}}^N\) was discussed: it was shown that \(\beta \) allows the user to control the sparsity of the IAS solution, while the empirical Bayesian approach provides a way to relate \(\theta ^*\) to the sensitivity scaling. We summarize here the analysis of the hyperparameters, developing the discussion of \(\theta ^*\) further so that the parameter tuning can be done easily and semi-automatically.
Parameter \(\beta \) and Control of Sparsity
In Calvetti et al. (2015), it was proved (Theorem 2.1) that the sequential minimization that constitutes the core of the IAS algorithm can be interpreted as a fixed point iteration to find a minimizer \(\hat{Q} = [\hat{\mathbf {q}}_1,\hat{\mathbf {q}}_2,\ldots ,\hat{\mathbf {q}}_n]^{\mathsf {T}}\in {\mathbb {R}}^{3n}\) of the energy functional (3),
where \(\hat{\varTheta } = [\hat{\theta }_1;\hat{\theta }_2;\ldots ;\hat{\theta }_n]\in {\mathbb {R}}^n\), and \(S:{\mathbb {R}}^{3n} \rightarrow {\mathbb {R}}^n\) is defined componentwise as
Furthermore, it was shown that if we write \(\beta _j = 5/2 + \eta \), \(1\le j\le n\), then, as \(\eta \rightarrow 0^+\), we have the asymptotic expression
In particular, the penalty term in the above expression is a weighted \(\ell ^1\)-norm of the dipole amplitudes, measured in the metric defined by the anatomical prior matrices \({{\mathsf {C}}}_j\). This argument demonstrates that, in the limit, the IAS algorithm provides an effective method for finding a weighted minimum current estimate (MCE), with the modification given by the anatomical prior (Uutela et al. 1999). In conclusion, we see that the role of the hyperparameter \(\beta \) is to control the sparsity of the IAS estimate. In Calvetti et al. (2015), this effect was demonstrated using numerical simulations.
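To make the alternating structure concrete, the following Python sketch implements a simplified scalar-source analogue of the IAS iteration: a Tikhonov-type least-squares step in the dipole amplitudes \(q\) for fixed variances \(\theta \), alternating with a closed-form update of \(\theta \) derived from the gamma hyperprior. The function name, the constant \(\eta = \beta - 3/2\) (the scalar counterpart of \(\beta - 5/2\)), and the identity anatomical prior are our simplifying assumptions, not the paper's implementation.

```python
import numpy as np

def ias_sketch(M, b, theta_star, beta, n_iter=50, sigma=1.0):
    """Simplified IAS alternating scheme (scalar sources, C_j = I).

    Alternates (i) a whitened least-squares step in q for fixed theta
    and (ii) a closed-form theta update; the update constants are
    illustrative, following the gamma-hyperprior structure in the text.
    """
    m, n = M.shape
    theta = theta_star.astype(float).copy()
    eta = beta - 3.0 / 2.0  # scalar-source analogue of beta - 5/2
    for _ in range(n_iter):
        # q-step: minimize the data misfit plus sum_j q_j^2 / (2 theta_j),
        # via the change of variables q = sqrt(theta) * w
        D = np.sqrt(theta)
        A = (M * D) / sigma                      # column scaling by sqrt(theta)
        w = np.linalg.solve(A.T @ A + np.eye(n), A.T @ (b / sigma))
        q = D * w
        # theta-step: componentwise minimizer for the gamma hyperprior
        theta = theta_star * (eta / 2.0
                              + np.sqrt(eta**2 / 4.0 + q**2 / (2.0 * theta_star)))
    return q, theta
```

In this sketch the sparsity-promoting effect of \(\beta \) appears directly: as \(\eta \rightarrow 0^+\), the theta-step drives the variances of inactive dipoles toward zero.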
Parameter \(\theta ^*\) and Sensitivity
The asymptotic expression (7) is indicative also from the point of view of the interpretation of \(\theta _j^*\). It is well known that MEG algorithms based on penalized minimization of the data fidelity tend to favor superficial sources, and to compensate for this effect, a sensitivity weight is often introduced (see, e.g., Lin et al. 2006a). From the Bayesian point of view, sensitivity weighting is a problematic practice, since traditionally the prior should be independent of the observation model, a condition that the sensitivity weight does not satisfy. However, it is possible to find a satisfactory Bayesian interpretation for \(\theta _j^*\) so that it effectively works as a sensitivity weight. The connection between sensitivity and hypermodels is built through the analysis of the signal-to-noise ratio as follows. Consider the linear forward model
To estimate the expected power of the noiseless signal appearing in the signal-to-noise ratio defined in (4), observe that from the prior model, conditional on \(\varTheta \in {\mathbb {R}}^n\), we have
where the subscript refers to the Frobenius norm of the matrix. Furthermore, by using the gamma hyperprior model \(\theta _j \sim \varGamma (\beta ,\theta _j^*)\) for the vector \(\varTheta \), we arrive at
The choice of the hyperparameters \(\theta _j^*\) must therefore be compatible with what we assume a priori about the distribution of the activity and the resulting SNR. To begin with, assume that we have a reason to believe that only one source is active, but we do not know which one. Then, by the definition (4) of SNR, the active source \(j_1\) must satisfy
Finally, assume that we only have a prior idea of how many non-zero sources may be active, and we express this belief in a form of a probability density
where \(\sum _k p_k = 1\). If it is reasonable to expect that, out of the n dipoles, \(\overline{k} = sn\) are active, a binomial distribution can be used for \(p_k\), \(k\sim {\mathrm{Binom}}(n,s)\), \(0<s<1\); in practice, the binomial can be approximated by a Poisson distribution, \(k\sim {\mathrm{Poisson}}(\overline{k})\), with mean \(\overline{k}\) provided by the user. Using the previous result, conditioned on k, we arrive at the scaling law
with
This argument confirms that, in order to match the model with the SNR, the parameters \(\theta _j^*\) should indeed be chosen to be inversely proportional to the sensitivity.
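As a rough numerical illustration, the scaling can be sketched as follows in Python. Here the sensitivity of dipole \(j\) is taken as the squared Frobenius norm of its three lead field columns, and the overall constant is chosen so that the expected power of \(\overline{k}\) active dipoles matches a target SNR; the function name and the exact proportionality constant are our assumptions (in the paper the constant also involves \(\beta \)).

```python
import numpy as np

def theta_star_from_sensitivity(M, snr, kbar, sigma=1.0):
    """Hypothetical sensitivity-based choice of the theta_star vector.

    Each dipole j occupies three consecutive columns of the lead field M.
    theta_star_j is inversely proportional to the squared sensitivity
    ||M_j||_F^2, with a constant matching the expected signal power of
    kbar active dipoles to snr times the expected noise power.
    """
    m, N = M.shape
    n = N // 3
    sens2 = np.array([np.sum(M[:, 3 * j:3 * j + 3] ** 2) for j in range(n)])
    c = snr * m * sigma**2 / kbar   # illustrative proportionality constant
    return c / sens2
```

Doubling all lead field entries quadruples the sensitivities and hence quarters \(\theta _j^*\), which is exactly the depth-compensating behavior the argument above requires.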
Construction of the Activity Patch
To select the vertices in the activity patch \({\mathcal {P}}\), we first randomly pick a seed vertex from the BR of interest, then grow the patch by iteratively adding the neighboring vertices, pruning off at each step those that fall outside the pertinent BR, and stopping the process as soon as the desired number \(N_{\mathcal {P}}\) of vertices has been included. The selected nodes, along with the edges of the triangular mesh, form a local graph. To generate the activity in the patch, we start by computing a positive graph Laplacian of the patch, which is the matrix \({\mathsf {L}}\in {\mathbb {R}}^{N_{\mathcal {P}}\times N_{\mathcal {P}}}\) with entries
where \({\mathrm{deg}}(v_i)\) is the number of the edges that terminate at the vertex \(v_i\).
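In code, the assembly of this matrix from the patch edge list can be sketched as follows; the sketch assumes the standard combinatorial graph Laplacian (\(L_{ii} = {\mathrm{deg}}(v_i)\), \(L_{ij} = -1\) when vertices \(i\) and \(j\) share an edge, and zero otherwise), with a simple undirected edge list and our own function name.

```python
import numpy as np

def patch_laplacian(edges, n_vertices):
    """Positive graph Laplacian of the patch mesh.

    edges: list of (i, j) pairs, each undirected mesh edge listed once.
    Returns L with L[i, i] = deg(v_i) and L[i, j] = -1 for mesh edges.
    """
    L = np.zeros((n_vertices, n_vertices))
    for i, j in edges:
        L[i, j] = L[j, i] = -1.0   # off-diagonal entries for the edge
        L[i, i] += 1.0             # each edge endpoint gains one degree
        L[j, j] += 1.0
    return L
```

By construction the rows of L sum to zero, so constant vectors lie in its null space.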
After defining a correlation length \(\lambda \), given in units of the number of steps, we draw a random amplitude vector by setting
where \({\mathsf {I}}_{N_{\mathcal {P}}}\in {\mathbb {R}}^{N_{\mathcal {P}}\times N_{\mathcal {P}}}\) is the identity matrix and \(W\in {\mathbb {R}}^{N_{\mathcal {P}}}\) is a standard Gaussian random vector, that is, \(W\sim {{\mathcal {N}}}(0,{\mathsf {I}}_{N_{\mathcal {P}}})\). The amplitudes are scaled so that the amplitude of the dipole at the seed vertex is one. Finally, we draw the dipole moment directions from the anatomical prior, making sure that adjacent dipoles do not point to opposite sides of the cortex patch.
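The amplitude draw can be sketched as follows. Since the paper's formula appears as an equation we do not reproduce, the smoothing operator \(({\mathsf {I}} + \lambda ^2 {\mathsf {L}})^{-1}\) used here is our assumption of one plausible form consistent with the description: \(\lambda \) plays the role of the correlation length in mesh steps, \(W\) is standard Gaussian, and the result is normalized so that the seed vertex has amplitude one.

```python
import numpy as np

def draw_patch_amplitudes(L, seed, lam, rng=None):
    """Draw smooth correlated amplitudes on the patch (a sketch).

    L is the positive graph Laplacian of the patch, seed the index of
    the seed vertex, lam the correlation length in mesh steps. Applies
    the assumed smoother (I + lam^2 L)^{-1} to a standard normal vector
    and normalizes so the seed amplitude is one.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = L.shape[0]
    W = rng.standard_normal(n)                        # W ~ N(0, I)
    z = np.linalg.solve(np.eye(n) + lam**2 * L, W)    # correlated amplitudes
    return z / z[seed]                                # seed amplitude = 1
```

Larger values of lam spread the amplitude more evenly over the patch, mimicking a longer spatial correlation length.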
Calvetti, D., Pascarella, A., Pitolli, F. et al. Brain Activity Mapping from MEG Data via a Hierarchical Bayesian Algorithm with Automatic Depth Weighting. Brain Topogr 32, 363–393 (2019). https://doi.org/10.1007/s10548-018-0670-7