Brain Topography, Volume 31, Issue 4, pp 546–565

Localizing Event-Related Potentials Using Multi-source Minimum Variance Beamformers: A Validation Study

  • Anthony T. Herdman
  • Alexander Moiseev
  • Urs Ribary
Original Paper


Adaptive and non-adaptive beamformers have become prominent neuroimaging tools for localizing neural sources of electroencephalographic (EEG) and magnetoencephalographic (MEG) data. In this study, we investigated single-source and multi-source scalar beamformers with respect to their performance in localizing and reconstructing source activity for simulated and real EEG data. We compared a new multi-source search approach (multi-step iterative approach; MIA) to our previous multi-source search approach (single-step iterative approach; SIA) and a single-source search approach (single-step peak approach; SPA). In order to compare performance across these beamformer approaches, we manipulated various simulated source parameters, such as the signal-to-noise ratio (0.1–0.9), inter-source correlations (0.3–0.9), number of simultaneously active sources (2–8), and source locations. Results showed that localization performance followed the order MIA > SIA > SPA regardless of the number of sources, source correlations, and signal-to-noise ratios. In addition, SIA and MIA were significantly better than SPA at localizing four or more sources. Moreover, MIA was better than SIA and SPA at identifying the true source locations when signal characteristics were at their poorest. Source waveform reconstructions were similar between MIA and SIA, but both were significantly better than those for SPA. A similar trend was also found when applying these beamformer approaches to a real EEG dataset. Based on our findings, we conclude that multi-source beamformers (MIA and SIA) are an improvement over single-source beamformers for localizing EEG sources. Importantly, our new search method, MIA, had better localization performance, localization precision, and source waveform reconstruction than SIA or SPA. We therefore recommend its use for improved source localization and waveform reconstruction of event-related potentials.


Many commercial (e.g., BESA) and open-source (e.g., MNE-Python, NutMEG, SPM, FieldTrip, Brainstorm) software packages for conducting source modeling of electroencephalographic (EEG) and magnetoencephalographic (MEG) data include some type of single-source beamformer method. Some packages also include a type of multi-source beamformer (e.g., NutMEG). Validating these software pipelines has become an important goal in neuroimaging because many of these methods use different mathematical approaches in an attempt to answer some basic questions: “Where are the neural sources of activity located within the brain?”, “What are the temporal and spectral characteristics of these source activities?”, and, more recently, “How do these sources communicate information in a functionally connected network?”. Because each source-modeling method has multiple steps, different assumptions, and different mathematical approaches, each method has its own strengths and weaknesses when attempting to answer the questions stated above.

One method that has been taking hold in the neuroimaging literature is the minimum-variance beamformer approach (Herdman and Cheyne 2009; Moiseev et al. 2011; Moiseev and Herdman 2013; Robinson and Vrba 1999; Sekihara and Nagarajan 2015; Sekihara et al. 2001, 2002, 2004, 2005; Van Veen and Buckley 1988; Van Veen et al. 1997). Minimum-variance beamformers have become prominent neuroimaging tools for localizing neural sources of EEG and MEG. This is in part due to evidence showing that beamformers have good face and concurrent validity (Brookes et al. 2009; Darvas et al. 2004; Jonmohamadi et al. 2014; Moiseev et al. 2011; Van Veen et al. 1997; Van Hoey et al. 1999; Dalal et al. 2008, 2009). For instance, Dalal and colleagues demonstrated that single-source beamformers performed well in localizing patients’ sources for finger-tapping (Dalal et al. 2008) and reading-related activity (Dalal et al. 2009), consistent with their intracranial EEG findings. In a simulation study by Murzin et al. (2011), a single-source minimum-variance beamformer showed good localization performance for up to three uncorrelated EEG sources. More recently, Jonmohamadi et al. (2014) compared localization performance and time-course reconstruction errors for eight different source-modeling methods that generally fall under two classes: minimum-norm methods and minimum-variance (single-source) beamformers. They showed that, for a single simulated source, minimum-variance beamformers outperformed minimum-norm methods with respect to source localization and time-domain reconstruction. However, performance degraded more for minimum-variance beamformers when simulated source amplitude, depth, and frequency were changed. Given this evidence, beamformers are highly useful for localizing source activity, but they still have some limitations. Overall, these studies speak to the validity of using minimum-variance beamformers to localize up to three simultaneously active sources.

As a good starting point, the aforementioned simulation studies investigated beamformer performance for mostly uncorrelated sources. However, previous evidence by Van Hoey et al. (1999) showed that inter-source correlations can significantly affect minimum-variance beamformers’ ability to localize sources. Thus, inter-source correlation is an important independent variable that needs to be more fully investigated with respect to multiple (more than three) simultaneously active sources. This is because animal and human neurophysiology has revealed that many (more than two or three) sources are often simultaneously active during even simple sensory processes. As a means to improve localization of multiple simultaneously active and correlated sources, multi-source (a.k.a. multiple constrained minimum variance; MCMV) beamformers have been investigated for their validity (Diwakar et al. 2011; Moiseev et al. 2011). Multi-source beamformers (e.g., dual-core and MCMV) improved localization error as compared to single-source beamformers, especially when two sources were highly correlated, with correlation coefficients (r) exceeding 0.8 (Diwakar et al. 2011; Moiseev et al. 2011). Our previous work also demonstrated that iteratively searching for source locations using MCMV beamformers can accurately find highly correlated (r = 0.8) sources within 5 mm of their true locations, even under low to moderate signal-to-noise ratios (SNR; Moiseev et al. 2011). Although these studies provided validation for using MCMV beamformers to localize up to four highly correlated MEG sources, we still need to validate the MCMV method for localizing more than two simultaneously active EEG sources.

In addition, most studies that used MCMV beamformer methods applied them to MEG data (Dalal et al. 2006; Diwakar et al. 2011; Hui et al. 2010; Moiseev et al. 2011; Moiseev and Herdman 2013); yet no studies, to our knowledge, have attempted to validate MCMV methods for localizing EEG data. Previous studies have, however, validated single-source beamformer methods for localizing up to two simulated-EEG source locations (e.g., Brookes et al. 2009; Van Hoey et al. 1999) or inter-source coherence (e.g., Haufe and Ewald 2016). Based on validation studies showing good performance for single-source beamformers in localizing simulated and real EEG data, and for MCMV beamformers with MEG data, we set out to further validate the MCMV beamformers for multiple simultaneously active EEG sources. Thus, the intent of this study was to validate the MCMV beamformer approaches by comparing their performance to single-source beamformer performance in localizing simulated and real EEG data. Because we used simulated-EEG data, we knew the ground truth of the source activity (locations and waveforms), and this provided a true estimate of a beamformer’s localization performance. Results from the real EEG experiment provided evidence for the face validity of using these beamformer methods.

In this paper, we show the localization performances of a single-source beamformer and two multi-source beamformers using simulated and real EEG data. Importantly, we show how these localization performances are altered by increases in the combinations and number of sources (up to eight). Because of the challenges placed on source localization with increasing number of simultaneously active sources, we present a new iterative search approach for MCMV beamformers. This new approach may help improve source localization where there are more than two simultaneously active and highly correlated sources, which is often the case for early sensory evoked responses.

In addition to more accurate source localization, MCMV beamformers inherently improve source waveform reconstruction by reducing, and in some instances eliminating, source mixing (the combination of activity among sources). This is achieved by the nulling constraints implemented in our multi-source beamformer methods (Moiseev et al. 2011). Single-source beamformers have no such nulling constraints; therefore, an identified source may contain significant amounts of source activity from other nearby or even distant unidentified sources (i.e., source mixing/leakage). For example, if source 1 receives source-mixing contributions from source 2, and source 2 becomes more active under condition B (e.g., attending to a sound) than condition A (passive listening), then source 1 will also show this increase in activity. This could lead to the misinterpretation that the effect of condition B (attention) also occurs at source 1, and thus to improper conclusions about where in the brain particular activities/functions are generated. In addition, contrasts between conditions might be improperly interpreted to occur in locations that are simply contaminated by source mixing, thereby leading to errors in the functional localization of particular sensory or cognitive functions. In this study, we also investigated our MCMV beamformers’ performance in reconstructing source waveforms. This provided information on how well our MCMV beamformer methods can represent the time-domain waveforms that are often used to interpret when and where certain perceptual and cognitive events occur. In other words, we investigated how our MCMV beamformers perform in representing the spatiotemporal dynamics of function.

In order to test the performance of the single-source and multi-source beamformers we manipulated various simulated source parameters, such as the amount of SNR, inter-source correlations, number of simultaneously active sources, source locations and configurations. We performed a simulation study to be able to statistically compare the beamformers’ performances across the selected source parameters. We also applied the beamformer approaches to a real EEG dataset to provide evidence of face validity.


Head Model

We used the OpenMEEG software to generate the head model for our EEG simulations. We constructed one head model with 8803 dipole locations (5 × 5 × 5 mm grid) bounded by the cortical surface of the Colin27 default MRI from Brainstorm (Tadel et al. 2011). The head model was constructed using 128 EEG channel locations based on a slightly modified 10–10 system (Herdman and Takai 2013). A boundary-element model was used with scalp, skull, and brain conductivity ratios equal to 1, 0.0125, and 1, respectively. Channel number was used as an independent variable to assess localizer performance with respect to the number of channels (64 or 128) in the forward solution. To obtain the 64-channel gain matrix, we simply selected the sub-index of the 128-channel gain matrix for the 64 channels that had electrode locations in the 10–20 system.
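The channel-subset step amounts to row-indexing the 128-channel gain matrix. A minimal numpy sketch (the study's analyses were run in Matlab; the gain values and the electrode index mapping below are placeholders, and only the array sizes follow the text):

```python
import numpy as np

# Placeholder 128-channel gain matrix: rows = channels, columns = source
# components (8803 grid locations x 3 orientation components).
rng = np.random.default_rng(0)
n_voxels = 8803
gain_128 = rng.standard_normal((128, n_voxels * 3))

# Hypothetical index of the 64 electrodes whose labels match 10-20 positions
# (illustrative every-other-channel selection; the real mapping is by label).
idx_64 = np.arange(0, 128, 2)
gain_64 = gain_128[idx_64, :]
```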


Derivations for the single- and multiple-source beamformer algorithms used in this study can be found in Moiseev et al. (2011). Because we chose to simulate evoked activity (see above), we decided to limit the beamformer localizers to the event-related (ZER) and multi-source event-related (MER) localizers that were previously shown to be good localizers for this case (Moiseev et al. 2011; Moiseev and Herdman 2013). For brevity, we do not restate their derivations here but define the localizers and highlight the differences in the methods used in this study.

Localizers (ZER and MER) and Waveform Reconstruction

We investigated the performance of the ZER and MER localizers, which were defined as (Moiseev et al. 2011):
$$ZER=\frac{{{\varvec{h}}^{iT}}{{\varvec{R}}^{-1}}\bar{{\varvec{R}}}{{\varvec{R}}^{-1}}{{\varvec{h}}^{i}}}{{{\varvec{h}}^{iT}}{{\varvec{R}}^{-1}}{\varvec{N}}{{\varvec{R}}^{-1}}{{\varvec{h}}^{i}}}\tag{1}$$
$$MER=\operatorname{Tr}\left[\left({{\varvec{H}}^{T}}{{\varvec{R}}^{-1}}\bar{{\varvec{R}}}{{\varvec{R}}^{-1}}{\varvec{H}}\right){{\left({{\varvec{H}}^{T}}{{\varvec{R}}^{-1}}{\varvec{N}}{{\varvec{R}}^{-1}}{\varvec{H}}\right)}^{-1}}\right]\tag{2}$$

Here \({{\varvec{h}}^i}={\varvec{h}}\left( {i,{{\varvec{u}}^i}} \right)\) is the M-dimensional gain vector of a single source in voxel i with orientation \({{\varvec{u}}^i}\), where M is the number of channels, and the symbol “T” denotes transposition. \({\varvec{H}}=\{ {{\varvec{h}}^{{i_1}}}, \ldots ,{{\varvec{h}}^{{i_n}}}~\}\) is the (M × n) forward-solutions matrix of the n-source scalar beamformer. \({\varvec{R}}\) is the (M × M) covariance matrix computed for each trial’s active interval and then averaged across trials, \(\bar {{\varvec{R}}}\) is the (M × M) 2nd-moment matrix of the averaged potentials, and \({\varvec{N}}\) is the (M × M) covariance matrix for the control interval.
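With these definitions in hand, localizers (1) and (2) can be evaluated directly. A hedged numpy sketch with random positive-definite matrices standing in for \({\varvec{R}}\), \(\bar{{\varvec{R}}}\), and \({\varvec{N}}\) (toy sizes, not real EEG quantities):

```python
import numpy as np

rng = np.random.default_rng(1)
M, n = 16, 3                          # channels and number of modeled sources

def spd(m):
    """Random symmetric positive-definite matrix (toy covariance stand-in)."""
    a = rng.standard_normal((m, m))
    return a @ a.T + m * np.eye(m)

R, Rbar, N = spd(M), spd(M), spd(M)   # stand-ins for R, R-bar, and N
Rinv = np.linalg.inv(R)

h = rng.standard_normal(M)            # single-source gain vector h^i
H = rng.standard_normal((M, n))       # (M x n) multi-source gain matrix

# Eq. (1): single-source event-related localizer
zer = (h @ Rinv @ Rbar @ Rinv @ h) / (h @ Rinv @ N @ Rinv @ h)

# Eq. (2): multi-source event-related localizer
S = H.T @ Rinv @ Rbar @ Rinv @ H
D = H.T @ Rinv @ N @ Rinv @ H
mer = np.trace(S @ np.linalg.inv(D))
```

Both quantities are positive here because all three matrices are positive definite; in practice they are ratios of evoked power to control-interval noise power.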

Using localizers (1) and (2), estimated source locations \({i_1}, \ldots ,{i_n}\) are found as follows. In the single-source case they are assumed to correspond to local maxima of \(ZER({{\varvec{h}}^i})\) when \(i\) spans the brain volume. Thus, all the maxima are easily found simply by calculating (1) for each voxel, with source orientations found as described in Sekihara et al. (2004). In the multi-source case, locations \({i_1}, \ldots ,{i_n}\) correspond to a global maximum of \(MER({{\varvec{h}}^{{i_1}}}, \ldots ,{{\varvec{h}}^{{i_n}}})\) over all possible combinations of \({i_k}\) and orientations \({{\varvec{u}}^{{i_k}}}\). As testing all such combinations is computationally prohibitive for most labs, approximate ways to reach the global maximum must be used. Two such approximations, discussed below, are the single-step iterative approach (SIA) and the multi-step iterative approach (MIA). Both methods replace a brute-force search for a global maximum in 5n-dimensional parameter space with an iterative search in three-dimensional brain space. In all cases, orientations for the probe source locations \({i_k}\) are obtained using known expressions summarized in the Online Appendix.

Source ERP waveforms \({s_i}(t)\) for both single- and multi-source cases are reconstructed using the expression \({s_i}\left( t \right)={{\varvec{w}}^{iT}}{\varvec{b}}(t)\), where \({\varvec{b}}\) is the M-dimensional column vector of sensor readings and \({{\varvec{w}}^i}\) is an M-dimensional weight vector. The latter is found as \({{\varvec{w}}^i}={{\varvec{R}}^{ - 1}}{{\varvec{h}}^i}/({{\varvec{h}}^{iT}}{{\varvec{R}}^{ - 1}}{{\varvec{h}}^i})\) in the single-source case and as the i-th column of the matrix \({\varvec{W}}={{\varvec{R}}^{ - 1}}{\varvec{H}}{\left( {{{\varvec{H}}^T}{{\varvec{R}}^{ - 1}}{\varvec{H}}} \right)^{ - 1}}\) in the multi-source case.
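A minimal numpy sketch of these weight expressions (toy random matrices, not the authors' implementation) also makes the nulling property of the multi-source weights explicit:

```python
import numpy as np

rng = np.random.default_rng(2)
M, n, nt = 16, 3, 50                  # channels, sources, time samples

a = rng.standard_normal((M, M))
R = a @ a.T + M * np.eye(M)           # toy data covariance matrix
Rinv = np.linalg.inv(R)

h = rng.standard_normal(M)            # single-source gain vector
H = rng.standard_normal((M, n))       # (M x n) multi-source gain matrix
b = rng.standard_normal((M, nt))      # sensor readings b(t), one column per sample

# Single-source weights and reconstructed waveform s_i(t) = w^iT b(t)
w_single = Rinv @ h / (h @ Rinv @ h)
s_single = w_single @ b               # shape (nt,)

# Multi-source (MCMV) weights: the i-th column of W reconstructs source i
W = Rinv @ H @ np.linalg.inv(H.T @ Rinv @ H)
s_multi = W.T @ b                     # shape (n, nt)
```

The multi-source weights satisfy \(W^T H = I\): unit gain on each modeled source and exact nulls on the other modeled sources, which is what suppresses mixing among them.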

Source Location and Search Approaches

To find source locations for the single-source localizer (ZER) we applied a single-step peak approach (which we refer to as SPA; Fig. 1). We first constructed a localizer image for the active interval (0–300 ms) by using Eq. (1). We then thresholded the localizer image to determine which voxels had ZER values for the active interval greater than a threshold estimated from the ZER values within a control interval between − 600 and 0 ms. To estimate the threshold, we first calculated the localizer values for the control interval by using half of the control-interval samples to determine the covariance matrix \({{\varvec{R}}_{{\varvec{c}}{\varvec{t}}{\varvec{r}}{\varvec{l}}}}\) and the other half to determine the covariance matrix \({{\varvec{N}}_{{\varvec{c}}{\varvec{t}}{\varvec{r}}{\varvec{l}}}}\). We then substituted \({{\varvec{R}}_{{\varvec{c}}{\varvec{t}}{\varvec{r}}{\varvec{l}}}}\) and \({{\varvec{N}}_{{\varvec{c}}{\varvec{t}}{\varvec{r}}{\varvec{l}}}}\) for \({\varvec{R}}\) and \({\varvec{N}}\) in Eq. (1) defined above. This yielded a ZER image for the control interval that was used as a null distribution against which to compare the ZER values for the active interval. The ZER threshold was defined as the 95th percentile of this control image in order to estimate a 5% alpha level for finding a significant ZER in the active interval. We then applied this threshold to the active image and searched for maximal values separated by at least 15 mm in any of the x, y, and z Cartesian planes (Herdman and Cheyne 2009). The maximal voxel location(s) were identified as “observed” source locations and compared to the “true” source locations for each condition in the simulations.
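The threshold-estimation step can be sketched as follows (numpy, with random data standing in for the control interval; treating the control-interval covariance as the numerator's 2nd-moment matrix is a simplifying assumption of this sketch):

```python
import numpy as np

rng = np.random.default_rng(3)
M, n_voxels, n_ctrl = 16, 200, 300    # channels, toy voxel count, control samples

ctrl = rng.standard_normal((M, n_ctrl))      # control-interval data (toy noise)
gains = rng.standard_normal((n_voxels, M))   # one gain vector per voxel (toy)

# Split the control samples in half: one half gives R_ctrl, the other N_ctrl.
half = n_ctrl // 2
R_ctrl = np.cov(ctrl[:, :half])
N_ctrl = np.cov(ctrl[:, half:])
Rinv = np.linalg.inv(R_ctrl)
# Simplifying assumption for this sketch: R_ctrl also stands in for the
# 2nd-moment matrix of the averaged potentials in the numerator of Eq. (1).
num_core = Rinv @ R_ctrl @ Rinv
den_core = Rinv @ N_ctrl @ Rinv

# Control-interval ZER image: one localizer value per voxel (null distribution)
zer_ctrl = np.array([(h @ num_core @ h) / (h @ den_core @ h) for h in gains])

# Threshold = 95th percentile of the control (null) image, i.e. alpha = 0.05
threshold = np.percentile(zer_ctrl, 95)
```

Active-interval voxels exceeding `threshold` would then be screened for peaks separated by at least 15 mm.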

Fig. 1

Graphical depiction of the source simulation and source reconstruction scheme used in this study. This is an example of a simulation run with a signal-to-noise ratio equal to 0.4 for four simulated sources. True source waveforms were generated (top left graph) at the selected source locations with randomized orientations (brain image below). Colored source locations correspond to the colored source waveforms above. Auditory (green and blue) and visual (cyan and red) sources were simulated to have 4- and 8-Hz oscillations, respectively. Simulated source waveforms were forward projected to the electrodes using a forward model in order to obtain event-related potentials (ERPs). Two hundred trials of real EEG brain noise were scaled to yield a specified SNR and then added to the forward-projected EEG signal. The bottom left panel shows 200 trials of real-brain noise plus the single-trial ERPs at electrode PO9. The averaged ERP at PO9 is shown by a black line. These EEG data were then used for source reconstruction using the SPA (blue pathway), SIA (green pathway), and MIA (red pathway) beamformers.

Source reconstruction results regarding “observed” locations and waveforms were compared to the “true” source locations and waveforms. See main text for more detailed descriptions of the steps for the SPA, SIA, and MIA pathways and of the comparisons between “true” and “observed” source waveforms and locations

For the multi-source localizer (MER), the locations and orientations of all n sources should be simultaneously optimized to achieve the localizer’s maximum value. This can be performed iteratively in several ways; we chose to evaluate two approaches. For both, in each iteration we applied the same threshold-estimation algorithm as for the ZER beamformer.

The first approach was a SIA similar to what we previously presented in Moiseev et al. (2011). SIA was conducted as follows (see SIA loop in green in Fig. 1):

1. Find the first source location (i) by taking the maximum in the ZER localizer image.

2. Place a null constraint at this location by setting \({\varvec{H}_{\varvec{ref}}}={\varvec{h}^{i}}\), where \({\varvec{h}^i}\) is the gain vector of a dipole at location (i) with the optimal orientation \({\varvec{u}^i}\).

3. Calculate the beamformer image for the MER localizer for each remaining voxel (k) using the multi-source gain vectors \({\varvec{H}}=\left\{{{\varvec{H}_{\varvec{ref}}},{\varvec{h}^k}} \right\}=\{ {\varvec{h}^i},~{\varvec{h}^k}\}\) (see Appendix B in Moiseev et al. 2011 for details).

4. Find the next observed source location (j) as the voxel with the maximum MER-localizer value.

5. Add the gain vector \({\varvec{h}^j}\) for this voxel to \({\varvec{H}_{\varvec{ref}}}\), so that \(\varvec{H}_{{\varvec{ref}}} = \{ \varvec{h}^{i} ,~\varvec{h}^{j} \}\), to be used in the next iterative loop.

6. Repeat steps 2–5 until the maximum of the MER-localizer output (across the remaining voxels) increases by no more than 1% of the previous iteration’s maximum localizer output.

7. Accept only those source locations whose localizer values exceed the control-interval threshold (as calculated for the ZER localizer).


SIA is an MCMV approach that searches for peak activity by iteratively placing null constraints (optimized gain vectors for the reference voxels) and recalculating the MER localizer to find the next largest peak source (described in detail in Moiseev et al. 2011). SPA, on the other hand, calculates the ZER localizer only once and then finds all peak activity that is spatially separated by 15 mm.
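The SIA loop above can be sketched as follows (a simplified numpy implementation on toy data; the statistical threshold acceptance of step 7 is omitted, and the matrices are random stand-ins):

```python
import numpy as np

def mer(H, Rinv, Rbar, N):
    """Multi-source event-related localizer (Eq. 2) for an (M x n) gain matrix."""
    S = H.T @ Rinv @ Rbar @ Rinv @ H
    Tm = H.T @ Rinv @ N @ Rinv @ H
    return float(np.trace(np.linalg.solve(Tm, S)))

def sia(gains, Rinv, Rbar, N, tol=0.01, max_sources=8):
    """Single-step iterative approach, steps 1-6 (threshold acceptance of
    step 7 omitted). `gains` is (n_voxels x M), one optimally oriented gain
    vector per voxel; each found voxel stays fixed as a null constraint."""
    found, prev_max = [], -np.inf
    while len(found) < max_sources:
        remaining = [v for v in range(gains.shape[0]) if v not in found]
        # score each remaining voxel jointly with the already-found (nulled) ones
        scores = [mer(gains[found + [v]].T, Rinv, Rbar, N) for v in remaining]
        best = int(np.argmax(scores))
        # step 6: stop once the localizer maximum grows by less than 1%
        if found and scores[best] <= prev_max * (1 + tol):
            break
        found.append(remaining[best])
        prev_max = scores[best]
    return found

# Toy demonstration with random positive-definite matrices
rng = np.random.default_rng(5)
M = 12
def spd(m):
    a = rng.standard_normal((m, m))
    return a @ a.T + m * np.eye(m)
R, Rbar, N = spd(M), spd(M), spd(M)
gains = rng.standard_normal((30, M))
found = sia(gains, np.linalg.inv(R), Rbar, N, max_sources=4)
```

Note that the first accepted voxel is simply the single-source (ZER-like) maximum; every later voxel is scored with nulls at all previously accepted locations.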

Because SIA sets the source locations and orientations based on the first beamformer localizer output, there is a possibility that the first observed source is mislocalized in position and orientation, especially when two or more “true” sources are highly correlated (r > .9). This initial mislocalization can then lead to improper discovery of all subsequent source locations and orientations. In order to compensate for this, we implemented the MIA. MIA was conducted as follows (see MIA loop in red in Fig. 1):

1. Find the first source location (i) by taking the maximum in the ZER localizer image.

2. Place a null constraint at this location by setting \(\varvec{H}_{{\varvec{ref}}} = \{ \varvec{h}^{i} \}\), where \({\varvec{h}^i}\) is the gain vector of a dipole at location (i) with the optimal orientation \({\varvec{u}^i}\).

3. Calculate the MCMV beamformer image for the MER localizer for each remaining voxel (k) using the multi-source gain vectors \(\varvec{H}=\left\{ {{\varvec{H}_{\varvec{ref}}},{\varvec{h}^k}} \right\}=\{ {\varvec{h}^i},~{\varvec{h}^k}\}\) (see Appendix B in Moiseev et al. 2011 for details).

4. Find the next observed source location (j) as the voxel with the maximum MCMV localizer value.

5. Replace the \({{\varvec{H}}_{{\varvec{r}}{\varvec{e}}{\varvec{f}}}}\) matrix with this voxel’s gain vector so that \({\varvec{H}_{\varvec{ref}}}=\{ {\varvec{h}^j}\}\).

6. Repeat steps 2–5 (inside loop) until the maximum of the MER localizer output (across the remaining voxels) increases by no more than 1% of the previous iteration’s maximum localizer output for three consecutive iterations (i.e., three loops of steps 2–5).

7. Select the voxel location (k) that had the largest localizer value in the final three iterations of the inside loop. (Note: if the maximum localizer value for the first inside loop is less than threshold for three consecutive iterations, both the inside and outside loops stop and the MIA returns that no sources were found.)

8. Add the gain vector for this voxel (k) to the reference matrix, \({\varvec{H}_{\varvec{ref}}}=\{ {\varvec{h}^n},{\varvec{h}^k}\}\), to be used in the next outside loop.

9. Accept only those source locations whose localizer values exceed the control-interval threshold (as calculated for the ZER localizer).

10. Repeat steps 2–9 (outside loop) until the maximal localizer output, after inclusion of all found sources (if any) in \({\varvec{H}_{ref}}\), is less than threshold for three inside loops (steps 2–5).
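A simplified numpy sketch of this procedure on toy data (the statistical threshold tests of steps 7 and 9 are omitted, and all matrices are random stand-ins). The key difference from SIA is the inside loop, where the reference location is repeatedly replaced rather than committed at once, so an initially mislocalized peak can drift toward a true source before being added to the constraint set:

```python
import numpy as np

rng = np.random.default_rng(6)
M, n_voxels = 10, 20

def spd(m):
    a = rng.standard_normal((m, m))
    return a @ a.T + m * np.eye(m)

R, Rbar, N = spd(M), spd(M), spd(M)   # toy covariance stand-ins
Rinv = np.linalg.inv(R)
gains = rng.standard_normal((n_voxels, M))

def mer(idx):
    """MER localizer (Eq. 2) for the sources at voxel indices `idx`."""
    H = gains[list(idx)].T
    S = H.T @ Rinv @ Rbar @ Rinv @ H
    Tm = H.T @ Rinv @ N @ Rinv @ H
    return float(np.trace(np.linalg.solve(Tm, S)))

def mia(tol=0.01, patience=3, max_sources=2):
    """Multi-step iterative approach, simplified (no threshold tests)."""
    accepted = []
    while len(accepted) < max_sources:
        free = [v for v in range(n_voxels) if v not in accepted]
        ref = max(free, key=lambda v: mer(accepted + [v]))       # step 1
        prev, stall, best_ref = -np.inf, 0, ref
        while stall < patience:                                   # inside loop
            others = [k for k in free if k != ref]
            k = max(others, key=lambda k: mer(accepted + [ref, k]))  # steps 3-4
            val = mer(accepted + [ref, k])
            if val <= prev * (1 + tol):
                stall += 1            # step 6: < 1% improvement on this pass
            else:
                prev, stall, best_ref = val, 0, k
            ref = k                   # step 5: replace the reference location
        accepted.append(best_ref)     # step 8: commit the refined location
    return accepted

sources = mia()
```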


Simulated EEG Experiments

Source Locations and Orientations

Sources were bilaterally located in dipole pairs, up to a maximum of four pairs (total of eight dipoles). The four pairs of dipoles were constrained to be near the auditory (Aud), primary visual (Vis), inferior temporal (ITC), and motor (MC) cortices. We chose the number and locations of the dipole pairs to be as follows: two sources = Aud; four sources = Aud + Vis; six sources = Aud + ITC + MC; eight sources = Aud + Vis + ITC + MC. We conducted 15 simulation runs for each source configuration by randomly shifting dipole locations between − 10 and 10 mm in the x, y, and z directions (Fig. 2). The bilateral dipole pairs were not symmetrically bound to each other. The number of sources (Source Number) was considered one of the independent variables for the simulations. Source orientations were randomly selected for each dipole pair, with a random shift of up to 90° in azimuth and elevation allowed for the second dipole of the pair. In other words, the intra-pair orientation differences were within 90° of each other in azimuth and elevation. We used 15 simulation runs in order to be representative of typical participant sample sizes in many neuroimaging studies.

Fig. 2

Spatial distribution of simulated “true” source locations for the 15 Monte-Carlo runs (black dots) plotted within the common MRI’s tessellated cortical image

Source Waveforms

Using Matlab (R2013b), we generated sinusoidal source waveforms with a duration of 300 ms (sample rate = 512 Hz) that were Gaussian windowed using Matlab’s gausswin.m function (samples = 154, alpha = 2.5) (Fig. 1). The sinusoidal frequency for each dipole pair was as follows: Aud pair = 4 Hz, Vis pair = 8 Hz, ITC pair = 12 Hz, and MC pair = 16 Hz (see Fig. 1 for an example). These frequencies were chosen in order to change the correlation of each dipole pair’s source activity while minimally affecting inter-pair correlations (e.g., correlations among the Aud and Vis pairs) and keeping them within the typical bandwidth of the real EEG noise spectrum. The inter-pair correlations did not exceed 0.11 for any of the possible combinations. Intra-pair correlations were manipulated to be 0.3, 0.6, and 0.9 by altering the initial phase differences between each bilaterally located dipole pair (i.e., phase differences = 72.5°, 53.1°, and 25.8°, respectively). If there was more than one pair of dipoles in the source configuration, all pairs had the same intra-pair source correlation for each run. Source amplitudes were either fixed at 30 nA for all sources (Simulation 1) or varied for each source to have equal scalp-recorded SNRs across all sources (Simulation 2).
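The stated phase differences follow from the fact that, for two equal-frequency sinusoids observed over whole cycles, the Pearson correlation equals the cosine of the phase difference (the Gaussian windowing used in the study shifts these values only slightly). A quick numpy check on unwindowed sinusoids:

```python
import numpy as np

fs, f = 512, 4.0                 # sample rate (Hz) and a dipole-pair frequency
t = np.arange(fs) / fs           # 1 s, i.e. an integer number of 4-Hz cycles

for phase_deg, target_r in [(72.5, 0.3), (53.1, 0.6), (25.8, 0.9)]:
    x = np.sin(2 * np.pi * f * t)
    y = np.sin(2 * np.pi * f * t + np.deg2rad(phase_deg))
    r = np.corrcoef(x, y)[0, 1]
    # For equal-frequency sinusoids over whole cycles, r = cos(phase difference)
    assert abs(r - target_r) < 0.005
```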


The source waveforms were forward projected to the electrodes (64 or 128 channels) using the gain matrices defined above (Fig. 1). The forward-projected electrode waveforms were then situated at time point 0 within 200 epochs of real EEG noise spanning − 600 to 400 ms. Thus, the source waveforms were time-locked across trials and represented evoked activity. We used real EEG recordings from one participant who performed a visual detection task of simple letters (Herdman and Takai 2013). The 128-channel continuous EEG data were first filtered from 0.1 to 58 Hz using a FIR filter. We then extracted 200 trials of 1-s-long epochs from the prestimulus intervals that had amplitudes less than ± 50 µV. The stimulus-onset asynchrony in the Herdman and Takai (2013) study was randomly selected between 1750 and 2250 ms; therefore, time-locked activity within this 1 s prestimulus interval would be negligible. We visually inspected each trial and the average of all trials to be confident that there were no apparent time-locked signals.

In order to evaluate how SNR affects beamformer performance, we conducted two simulations that used different methods of setting the signal and noise amplitudes. For Simulation 1 (constant source amplitudes), we set all source amplitudes to a constant value of ± 30 nA, which represented the same amount of cortical tissue being activated regardless of source location. We then multiplied the scalp-recorded brain noise by a gain factor and added it to the forward-projected source activity in order to achieve the required SNRs, averaged across electrodes. The SNRs were set between 0.1 and 0.7 in steps of 0.1. These loosely represent weak (0.1) to strong (0.7) SNRs that are typically found in ERP studies. SNRs at each electrode were calculated as the root-mean-squared amplitude of the summed source EEG waveform (without EEG noise) between 0 and 300 ms divided by the root-mean-squared amplitude of the EEG noise between − 300 and 0 ms. Because source locations had different depths (i.e., distances from the electrodes), they contributed differently to the overall scalp-recorded SNRs in Simulation 1. Thus, a caveat to using constant source amplitudes was that deeper sources contributed less to the scalp-recorded EEG when determining the overall SNR. In addition, deeper source activity projected to the electrodes might have been below the EEG noise floor at lower SNRs and thus likely not seen by the beamformers. This could reduce the beamformers’ performance in identifying the number of sources or their contributions to the scalp EEG. For example, transient auditory evoked responses of approximately 20 nA generated deep in the primary auditory cortices would contribute less to the overall scalp-recorded SNRs than somatosensory evoked responses of approximately 20 nA generated in the superficial somatosensory cortices.
We therefore conducted Simulation 2 (variable source amplitudes), in which we set sources to have variable amplitudes in order to yield the same scalp-recorded SNR for each source. To do this, we varied each source’s amplitude between 5 and 70 nA so that its scalp-recorded SNR would equal 0.1 to 0.7 in steps of 0.1. After adjusting each source’s amplitude by a specific gain factor, it was forward projected to the electrodes and added to the EEG noise. This meant that the source amplitudes differed from each other, but each source had an equal SNR as measured at the electrodes.
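The noise-scaling logic of Simulation 1 (one gain factor applied to the noise so that the electrode-averaged SNR hits a target) can be sketched in numpy, with random arrays standing in for the projected signal and the real-EEG noise:

```python
import numpy as np

rng = np.random.default_rng(4)
n_ch = 128
signal = rng.standard_normal((n_ch, 154))     # forward-projected ERP, 0-300 ms (toy)
noise = rng.standard_normal((n_ch, 154)) * 5  # real-EEG noise stand-in, -300-0 ms

def rms(x):
    """Root-mean-squared amplitude per electrode (rows)."""
    return np.sqrt(np.mean(x ** 2, axis=1))

# Per-electrode SNR = RMS(signal) / RMS(noise); one gain factor on the noise
# brings the electrode-averaged SNR to the target value.
target_snr = 0.4
current_snr = np.mean(rms(signal) / rms(noise))
noise_gain = current_snr / target_snr
achieved = np.mean(rms(signal) / rms(noise * noise_gain))
```

Because the gain scales every electrode's noise RMS by the same factor, the achieved electrode-averaged SNR equals the target exactly.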

Dependent Variables

We chose three dependent variables to evaluate the performance of each Search Method (SPA, SIA, and MIA): (1) the Matthews correlation coefficient for classifiers (MCC; Matthews 1975); (2) localization error (LocError); and (3) waveform reconstruction error (WaveError).

MCC was used to evaluate a beamformer’s localization performance. We chose MCC over other classifier metrics because it takes into account the true positives (hits), true negatives (correct rejections), false positives (false alarms), and false negatives (misses) of a classifier (in our case the beamformer localizers) even when the true positives and true negatives have largely unequal sample sizes (Matthews 1975). For our simulations, the true-positive sample size was the number of simulated sources (i.e., 2–8), and our true negatives were the number of voxels (excluding the simulated sources) that were arbitrarily defined by the head model (i.e., 8803 in our simulations). The MCC accounted for such large discrepancies in sample sizes and thus provided a reasonable measure of a beamformer’s localization performance. The MCC was a correlation between the observed classifications and the true classifications of sources being present. Thus, an MCC value of + 1 indicated perfect agreement between observed and true classification, 0 indicated agreement no better than chance, and − 1 indicated total disagreement between observed and true classification. We set two criteria to determine whether an observed beamformer source was a true positive (hit) or a false positive (false alarm). The first criterion was that a true positive must be within 15 mm in any x, y, z direction (i.e., a maximum of 26 mm Euclidian distance) from a true source. In situations where more than one observed source satisfied the first criterion, a second criterion was used to determine which of the observed sources was the true positive and which was(were) false positive(s): among these candidates, the observed source whose waveform had the lowest WaveError (described below) was identified as the true positive (i.e., hit). Observed source(s) that satisfied the first criterion but failed the second were designated as false positive(s).
False negatives (misses) were true source locations that failed to meet the first criterion above (i.e., no observed sources within 26 mm Euclidean distance). True negatives (correct rejections) were all other voxel locations that were not designated as true positives, false positives, or false negatives. Furthermore, we calculated a hit rate as the number of hits divided by the number of true sources and a false-alarm rate as the number of false alarms divided by the number of true sources.
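Given these confusion counts, the MCC reduces to its standard formula. A minimal Python sketch (illustrative only, not the study’s analysis code) shows how the large class imbalance between true sources and candidate voxels is absorbed by the denominator:

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from confusion counts."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Hypothetical run with the imbalance typical of our grids (8 true sources
# vs. 8803 candidate voxels): 6 hits, 2 misses, 3 false alarms.
score = mcc(tp=6, tn=8800, fp=3, fn=2)
```

Unlike raw accuracy, which would be near 1.0 here simply because almost every voxel is a correct rejection, the MCC penalizes the misses and false alarms.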

Localization error (LocError) was used to evaluate a beamformer’s precision in localizing true sources. LocError was calculated as the absolute (Euclidean) distance between an observed source location and the nearest true source location. We chose absolute error for localization because the difference between true and observed locations is a single-valued quantity, and absolute error weights large and small differences equally when estimating error (Chai and Draxler 2014). Only the LocErrors of observed sources considered to be true positives (i.e., hits) were used in further statistical analyses.
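The nearest-true-source distance can be sketched as follows (a hypothetical helper, not the authors’ code; coordinates in mm):

```python
import math

def loc_error(observed_xyz, true_xyz_list):
    """Euclidean distance (mm) from an observed source to its nearest true source."""
    return min(math.dist(observed_xyz, t) for t in true_xyz_list)

# An observed source at (10, 20, 30) mm against two true sources:
err = loc_error((10, 20, 30), [(13, 24, 30), (60, 20, 30)])  # nearest is 5 mm away
```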

WaveError was used to evaluate a beamformer’s precision in reconstructing true source waveforms. It was calculated as the root-mean-square amplitude difference between an observed source waveform and the nearest true source’s waveform across the sample points within the active interval (0–300 ms). We chose a root-mean-square error statistic to reduce the likelihood that large errors confined to one time interval would be washed out by noise-level average error in another; for this reason, mean-squared error can be preferable to other error statistics (Chai and Draxler 2014). Only the WaveErrors of observed sources considered to be true positives (i.e., hits) were used in further statistical analyses.
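The sample-wise RMS waveform error over the active window can be sketched as (illustrative Python; waveforms are assumed to share the same sample points):

```python
import math

def wave_error(observed_wave, true_wave):
    """Root-mean-square amplitude difference (nA) between two source waveforms."""
    if len(observed_wave) != len(true_wave):
        raise ValueError("waveforms must share the same sample points")
    n = len(true_wave)
    return math.sqrt(sum((o - t) ** 2 for o, t in zip(observed_wave, true_wave)) / n)
```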

Statistical Analyses

For both simulations, we measured the dependent variables (MCC, LocError, and WaveError) as a function of the independent variables: channel number (64 and 128), source number (2, 4, 6, and 8), source correlation (0.3, 0.6, and 0.9), and SNR (0.1 to 0.7). We tested four main hypotheses: (1) localizer performance is best for MIA, followed by SIA and then SPA, across all changes in the independent variables; (2) increasing the number of EEG electrodes (channel number) improves localizer performance for all search methods (SPA, SIA, and MIA); (3) increasing intra-pair source correlations reduces localization performance for all localizers; and (4) increasing the number of sources degrades the localization performance of all localizers. To test these hypotheses, we performed multi-factor analysis of variance (ANOVA) tests for each dependent variable (MCC, LocError, and WaveError) with the factors Search Method (SPA, SIA, and MIA), Channel Number (64 and 128), Source Correlation (0.3, 0.6, and 0.9), SNR (0.1 to 0.7), and Source Number (2, 4, 6, and 8). We considered ANOVA effects significant at p < .05 after adjusting for sphericity using the Greenhouse–Geisser correction. We conducted Tukey–Kramer post-hoc tests for significant ANOVA results and considered these significant at p < .01. Multi-factor ANOVA with post-hoc testing of the significance of the effects accounts for multiple comparisons across factors (Howell 2009).

Real EEG Experiment

To evaluate the beamformer approaches on real EEG, we applied SPA, SIA, and MIA to EEG collected from ten participants performing a visual recognition task with single letters. These data were part of a previously published study (Herdman and Takai 2013); for brevity, we restate only the methods relevant to the analyses performed here.

Stimuli and Task

Visual stimuli were upper-case Roman alphabetic letters and pseudoletters (see Herdman and Takai 2013). In the task analyzed here (the Orthography Task), participants pressed one of two buttons to discriminate between randomly presented single letters and pseudoletters. Two hundred stimuli of each type were presented at central fixation for 500 ms, with a random interstimulus interval between 1500 and 2000 ms.

EEG Analyses

EEG was recorded using a BIOSEMI system from 128 channels placed over the scalp in a 10–5 electrode configuration, with four additional electrooculogram electrodes. For beamformer analyses, the 128 scalp channels were re-referenced to their average reference. Event-related potentials (ERPs) were obtained by extracting epochs between − 500 and 500 ms relative to stimulus onset. Trials with ERPs exceeding ± 100 µV were excluded from further analyses. In this study we did not apply the PCA artefact correction described in our previous paper (Herdman and Takai 2013) because it would have created a degenerate covariance matrix, which can be problematic when conducting beamformer analyses. We combined the trials from letters and pseudoletters into the same dataset. Because the true source locations and waveforms underlying real-EEG data are unknown, we compared the SIA, MIA, and SPA source locations and waveforms to those calculated from dipole modeling (see below for procedures). Although dipole modeling has its own limitations, it has the longest-standing use and validation among source-modeling methods; we therefore considered it a good means of evaluating the concurrent validity of the beamformer approaches in the current experiment.

Beamformer Analyses

We used SPA, SIA, and MIA (as described above) to localize the sources of the visual ERPs. We chose to localize the early visual sensory components (P1–N1) because their generator locations have previously been established using dipole modeling (Hillyard and Anllo-Vento 1998). Thus, for beamforming, the control interval was − 400 to 0 ms and the active interval was 0 to 200 ms. Although other active intervals could easily be analyzed, we limited the analysis to the early sensory components because our intent was to demonstrate the differences among SPA, SIA, and MIA performances, not to fully investigate the localization of longer-latency visual ERPs; we leave such investigations to future studies. The head model used for the real-EEG analyses was calculated in the same manner as for the simulated-EEG experiment (described above).

Dipole Modeling

Because we do not know the ground truth regarding the source locations, orientations, and time courses of our real-EEG data, we needed an estimate of where the true sources might be located. To this end, we performed spatiotemporal dipole modeling using BESA software (BESA GmbH), as motivated above. Although dipole modeling methods have their own limitations and assumptions, dipole modeling has a long-standing history of good reliability in localizing visual ERP sources. Thus, we considered the dipole modeling results to be good estimates of the true locations of the visual ERP sources, and we used the dipole locations, orientations, and waveforms as the ground truths for comparing results among the beamformer approaches (SPA, SIA, and MIA).

We performed dipole modeling on the grand-averaged visual ERPs between 0 and 200 ms for the combined letter and pseudoletter conditions in order to improve SNR and thereby obtain a good source-model fit. We began by fitting two symmetrically constrained dipoles to an interval surrounding the largest peak (i.e., the N170, from 100 to 200 ms); residual variance was 22.5% for this fit. These dipoles fitted to the bilateral fusiform gyri (Talairach coordinates = ± 38.7, − 72.3, − 10.7 mm). A peak in the residual data remained between 110 and 140 ms, at which point we attempted to fit two more symmetrically constrained dipoles. However, this fit failed because the dipoles fused into a local minimum in the centre of the occipital lobe. We instead fitted a single dipole to this interval, which localized to the medial aspect of the occipital lobe near the cuneus (Talairach coordinates = − 6.6, − 83.9, 0.8 mm). Including this third dipole yielded an average residual variance of 3.9% between 100 and 200 ms, which met our stopping criterion for fitting more dipoles. These three sources’ locations and waveforms were used as the ground truth for estimating the dependent variables (MCC, LocError, and WaveError) for SPA, SIA, and MIA. The dependent variables for the real-EEG experiment were calculated in the same manner as for the simulated EEG, with one exception. Because the ground truth is unknown when localizing real EEG data, we needed to consider that beamforming might find true sources that were not observed with dipole source modeling. Unfortunately, there is no way for us to know which method (beamforming or dipole modeling) reflects the ground truth, so we chose to be conservative and classified beamformer locations farther than 26 mm from any dipole source as false positives.
The other beamformer sources that were not classified as true positives were classified as “possible true positives” but they were not included in the calculations for MCC, Localization Error, nor WaveError. Future studies comparing SPA, SIA, and MIA results to those obtained from concurrent intracranial recordings could help elucidate whether or not such beamformer-found source locations are true positives or false positives.
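The residual-variance stopping criterion used in the dipole fits above is the fraction of measured field variance left unexplained by the model. A minimal sketch (hypothetical, not BESA’s implementation; `measured` and `modeled` are field values across channels and time samples):

```python
def residual_variance(measured, modeled):
    """Percent of measured field variance unexplained by the dipole model."""
    unexplained = sum((d - m) ** 2 for d, m in zip(measured, modeled))
    total = sum(d ** 2 for d in measured)
    return 100.0 * unexplained / total

# A model explaining most of the measured field gives a small residual variance:
rv = residual_variance([1.0, -2.0, 3.0], [0.9, -1.9, 3.1])
```

In this framing, the 22.5% and 3.9% values above correspond to progressively better fits as dipoles are added.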


Results

Simulated EEG Experiment

Simulation 1: Constant Source Amplitudes

Figure 3 shows an example of the localization performances of the SPA, SIA, and MIA search methods for a single simulation run in Simulation 1, in which all source amplitudes were fixed at 30 nA and SNR = 0.2. For four sources (left column), SPA and SIA localized three of the four sources (green circles paired with green baskets) with reasonably small LocError (< 3.5 mm) and missed one true source location (blue basket). MIA, however, found all four sources with minimal LocError (< 1.5 mm). For six sources (middle column), SPA found only three of the six sources, with a LocError of 7.1 mm, whereas SIA found four sources with an average LocError of 5.6 mm. SIA also found two nearby false positives (red circles), and thus the MCC for SIA (MCC = 0.67) was smaller than that for SPA (MCC = 0.71). MIA again performed very well, finding five of the six sources with no false positives and an average LocError of 2.5 mm. For eight sources (right column), SPA performed poorly (MCC = 0.50), finding only two of the eight sources with an average LocError of 10 mm. SIA performed better, localizing five of the eight sources with an average LocError of 6.0 mm. MIA maintained higher localization performance than the other two search methods, localizing six of the eight sources with an average LocError of 5.7 mm. The results from this single run, showing that localization performance for MIA was superior to SIA, which in turn was superior to SPA, were consistent with the results we obtained for all other simulated runs, as described below.

Fig. 3

Examples of localization performances (MCC values) for Search Method by Source number (4, 6, and 8) for Monte-Carlo run #1 with an SNR = 0.2 and intra-pair correlation = 0.6. “True” and “observed” source locations are designated by cross-hairs in a square (baskets) and filled circles, respectively. Hits and False Alarms for observed sources are designated by green and red colors, respectively. Misses for true sources are designated by blue baskets. True source locations are colored as green baskets if they had a corresponding Hit for an observed source within 15 mm. LocError, WaveError, and localization performance (MCC) for each condition are listed in parentheses above the images. MCC equal to 1 represents perfect localization performance, 0 represents chance performance, and − 1 represents complete failure

Localizer Performance

ANOVA testing on MCC scores revealed significant main effects and interactions; here we report those relevant to our main hypotheses. A significant Channel Number effect revealed that localization performance (MCC) was significantly greater for 128 than for 64 channels (F(1,14) = 457.42, p < .0001; Fig. 4A, top left graph). In addition, a significant Search Method effect showed progressively greater MCC scores across search methods, in that MIA > SIA > SPA, as we predicted (F(2,28) = 718.42, p < .0001). As predicted, the Search Method by Source Correlation interaction revealed that localization performance significantly decreased as intra-pair source correlations increased, with MIA outperforming SIA and SIA outperforming SPA (Fig. 4A, top middle graph; F(4,56) = 14.56, p < .0001). Moreover, increasing the number of simulated sources caused localization performance to decline across all search methods, but to a lesser extent for MIA and SIA than for SPA (Fig. 4A, top right graph; Source Number by Search Method interaction, F(6,84) = 23.80, p < .0001). In other words, increasing the number of simulated sources from 2 to 8 resulted in SPA having the greatest decline in localization performance (MCC = 0.89–0.59) compared to SIA (0.94–0.70) and MIA (0.97–0.74). SPA simply performed poorly at finding true sources as the number of simulated sources increased, as is evident from its low hit rates compared to SIA and MIA (Fig. 4B, lower right graph).

Fig. 4

A MCC showing localizer performances for search method by channels (left panel), search method by source correlation (middle panel) and search method by source number interaction (right panel). MCC equal to 1 represents perfect localization performance, 0 represents chance performance, and − 1 represents complete failure. Error bars reflect ± one standard deviation. See text for significant differences among comparisons. B Hit rates (presented as positive values) and false-alarm rates (presented as negative values) that were used to calculate the MCC in (A)

ANOVA and post-hoc testing of MCC scores also revealed significant Search Method by SNR by Source Number interactions (p < .05). Figure 5A shows these interactions plotted as a function of Search Method by SNR for each Source Number. Overall, as SNR increased, so did the MCC scores for each Search Method. However, across all Source Number plots, SPA performed significantly worse (lower MCC scores) than SIA and MIA when SNRs were ≤ 0.5, which is in the range of typical SNRs for most perceptual ERP studies. Strikingly, the hit rates for SPA with SNRs ≤ 0.5 declined dramatically as the number of simulated sources increased. For example, SPA had low hit rates (0.17–0.35) for SNRs ≤ 0.3 when there were eight sources. This means that many sources were missed and SPA had poor localization performance (low MCC scores). In contrast, SIA and MIA had significantly better hit rates (0.20–0.70) for these conditions and thus higher MCC scores than SPA. Notably, MIA had significantly higher hit rates and lower false-alarm rates than SIA when there were more sources; therefore, MCC scores were significantly higher for MIA than for SIA in these conditions.

Fig. 5

Localizer performance. A MCC showing localizer performances for search method (SPA, SIA, MIA) by signal-to-noise ratio (SNR) for the ER-localizers by number of “true” sources. MCC equal to 1 represents perfect localization performance, 0 represents chance performance, and − 1 represents complete failure. Error bars reflect ± one standard deviation. Asterisks above the line graphs designate significant differences at p < .01 for the contrasts of SPA vs. SIA (cyan *), SPA vs. MIA (purple *), and SIA vs. MIA (orange *) at each SNR. B Hit rates (presented as positive values) and false-alarm rates (presented as negative values) that were used to calculate the MCC in (A)

Across all variables, SIA and MIA had greater hit rates than SPA, but they also had greater false-alarm rates. Most of these false alarms were additionally found source locations within 26 mm of true sources (i.e., nearby false alarms). Summed across all variables and trials, there were proportionally more nearby than distant false alarms for SIA and MIA than for SPA; the ratios of nearby to distant false alarms for SPA, SIA, and MIA were 1.4 (66/47), 16.9 (1271/75), and 10.8 (843/78), respectively. Thus, nearby false alarms were mainly responsible for reducing the MCC values for SIA and MIA, whereas low hit rates were mainly responsible for reducing MCC values for SPA (Figs. 4B, 5B). This helped identify which types of errors contributed to the lower MCC scores: misses were the prominent errors for SPA, whereas false alarms were the prominent errors for SIA and MIA.
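The nearby-versus-distant split follows directly from the 26 mm matching radius. A simplified sketch of the labeling logic (hypothetical; for brevity it breaks ties by distance, whereas the study used the lowest WaveError):

```python
import math

NEARBY_MM = 26.0  # Euclidean matching radius for hits and "nearby" false alarms

def classify_sources(observed, true_sources):
    """Label each observed source as 'hit', 'nearby_fa', or 'distant_fa'."""
    labels = ["distant_fa"] * len(observed)
    claimed = set()
    for t in true_sources:
        best_i, best_d = None, float("inf")
        for i, o in enumerate(observed):
            d = math.dist(o, t)
            if d <= NEARBY_MM:
                if labels[i] == "distant_fa":
                    labels[i] = "nearby_fa"  # within radius but not (yet) the hit
                if i not in claimed and d < best_d:
                    best_i, best_d = i, d
        if best_i is not None:
            labels[best_i] = "hit"  # closest unclaimed observed source wins
            claimed.add(best_i)
    return labels
```

A true source with no observed source inside the radius is simply unmatched, i.e., a miss.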

In summary, MCC scores revealed that localization performance followed the order of MIA > SIA > SPA regardless of the Channel Number, Source Correlation, and Source Number (Figs. 4A, 5A). In addition, the SIA and MIA methods were significantly better than the SPA method when trying to localize four or more sources for a majority of the SNRs, especially between SNRs of 0.1 and 0.5 (Fig. 5A, cyan and purple asterisks). These results showed that localization performance for MIA was significantly better than SIA which was significantly better than SPA.

Localization Error

Localization error for hits was largest (i.e., poorest) for SPA (8.19 ± 2.07 mm), followed by SIA (6.21 ± 1.30 mm) and then MIA (4.95 ± 0.98 mm), averaged across all other independent variables (F(2,28) = 1021.20, p < .0001). ANOVA results also revealed a significant Channel Number effect (F(1,14) = 1520.60, p < .0001), whereby LocError was larger for 64 channels (7.60 ± 1.79 mm) than for 128 channels (5.30 ± 1.48 mm) (Fig. 6A, left graph). Other expected findings were significant increases in LocError as intra-pair source correlations increased (Fig. 6A, middle graph; F(2,28) = 822.76, p < .0001) and as Source Number increased (Fig. 6A, right graph; F(3,42) = 1155.90, p < .0001). Expectedly, LocError decreased as SNR increased (F(6,84) = 1418.90, p < .0001; Fig. 6B). What is evident from the SNR by Source Number graphs (Fig. 6B) is that MIA had significantly smaller (better) localization errors than SPA and SIA for many of the SNRs (F(12,168) = 50.02, p < .0001). Notably, SIA had significantly smaller LocError than SPA for SNRs of 0.1 and 0.2 when localizing 6 and 8 sources. Interestingly, the greatest differences in LocError among the search methods occurred for low SNRs (< 0.4) and higher source numbers (6 and 8). Thus, MIA appeared to have an advantage over SIA and SPA in identifying true source locations when signal characteristics were at their poorest (e.g., low SNRs, high correlations, high source numbers).

Fig. 6

A Localization errors for the constant source amplitude method for search method by channels (left panel), search method by source correlation (middle panel) and search method by source number interaction (right panel). Error bars reflect ± one standard deviation. See text for significant differences among comparisons. B Localization errors for search method (SPA, SIA, MIA) by signal-to-noise ratio (SNR) for the ER-localizers by number of “true” sources. Error bars reflect ± one standard deviation. Asterisks above the line graphs designate significant differences at p < .01 for the contrasts of SPA vs. SIA (cyan *), SPA vs. MIA (purple *), and SIA vs. MIA (orange *) at each SNR

Wave Error

For hits, source waveform reconstructions were similar between MIA (3.04 ± 0.18 nA) and SIA (3.16 ± 0.19 nA) but significantly worse for SPA (4.05 ± 0.26 nA; F(2,28) = 2108.0, p < .0001; Fig. 7A). Wave error differed significantly between the 64- and 128-channel conditions (F(12,168) = 50.02, p < .0001; Fig. 7A, left upper graph), with the channel number effect being driven mainly by a larger difference for SPA than for SIA and MIA. Wave error, expectedly, increased as source correlation increased, but this effect occurred mostly for SPA (F(2,28) = 175.50, p < .0001; Fig. 7A, middle upper graph). Correspondingly, wave error significantly increased as source number increased (F(3,42) = 3372.0, p < .0001; Fig. 7A, right panel); again, this effect was driven mostly by greater increases in wave error for SPA than for SIA or MIA as the number of sources increased. Wave error decreased as SNR increased (F(6,84) = 1198.8, p < .0001; Fig. 7B). MIA and SIA had significantly lower (better) wave error than SPA at all SNR values and across all source numbers (F(48,912) = 12.33, p < .0001). We found very small differences in wave error between SIA and MIA at only a few SNR values (SNR = 0.2 for 2 sources; SNR = 0.5 and 0.6 for 6 sources; and SNR = 0.7 for 8 sources) that did not follow any logical trend in the data; we therefore considered these to be possible statistical false discoveries or irrelevantly small differences that happened to reach significance. Wave error for SPA for 2 sources increased as SNR increased from 0.4 to 0.7 (Fig. 7B, left bottom graph), a result also observed in our previous work (Moiseev et al. 2011; Moiseev and Herdman 2013). One possible explanation is that at lower SNRs, source correlations affect single-source beamformer performance less; at mid-range SNRs, the influence of correlation is felt more strongly, degrading the performance of both single- and multi-source beamformers. At higher SNRs, multi-source beamformers become increasingly accurate because mislocalization is smaller and nulls are placed more accurately. Conversely, for the single-source beamformer at very high SNRs, performance may break down entirely if correlations are strong.

Fig. 7

A Waveform errors for the constant source amplitude method for search method by channels (left panel), search method by source correlation (middle panel) and search method by source number interaction (right panel). Error bars reflect ± one standard deviation. See text for significant differences among comparisons. B Waveform errors for search method (SPA, SIA, MIA) by signal-to-noise ratio (SNR) for the ER-localizers by number of “true” sources. Error bars reflect ± one standard deviation. Asterisks above the line graphs designate significant differences at p < .01 for the contrasts of SPA vs. SIA (cyan *), SPA vs. MIA (purple *), and SIA vs. MIA (orange *) at each SNR

Simulation 2 (Variable Source Amplitudes) Versus Simulation 1: Constant Source Amplitudes

We also compared the beamformers’ performances (MCC scores) for variable (5–70 nA) versus constant (30 nA) source amplitudes (Fig. 8). Setting each source to a constant 30 nA resulted in poorer performance (smaller MCC scores) than allowing each source amplitude to vary so as to maintain the same contribution to the SNR measured at the electrode array (F(1,14) = 50.24, p < .0001). Interestingly, though, post-hoc analyses of the significant interaction between Source Number and Amplitude Type (F(3,42) = 4.00, p = .0084) revealed a significant difference between variable and constant amplitudes only for 8 sources (p < .05; Fig. 8). Differences between the variable- and constant-amplitude methods were not significant for 2–6 sources, although a similar trend was present.

Fig. 8

A MCC showing localizer performances for constant and variable source amplitude methods by Source Number. MCC equal to 1 represents perfect localization performance, 0 represents chance performance, and − 1 represents complete failure. Error bars reflect ± one standard deviation. See text for significant differences among comparisons. B Hit rates (presented as positive values) and False-Alarm Rates (presented as negative values) that were used to calculate the MCC in (A)

Real EEG Experiment

Localizer Performance

Beamformer localizations for real EEG are depicted in Fig. 9 for SPA, SIA, and MIA. Each participant’s beamformer locations are overlaid on a tessellated cortical hull, with hits as green circles, false alarms as red circles, and “possible true sources” as blue circles. The three dipole locations are presented as black baskets. All beamformer localizers (SPA, SIA, and MIA) showed clustering of sources within the occipital regions, except that SPA had more scattered false positives across the rest of the brain space. Interestingly, SIA and MIA had fairly tight clustering of hits and “possible true sources” within the occipital cortices, whereas SPA had more spread-out clustering and fewer “possible true sources”. This might reflect that SIA and MIA found additional true sources because of their better sensitivity when the number of true sources increases. Alternatively, these might be additional false positives for SIA and MIA, as was seen in the simulated-EEG experiment. However, ANOVA results for beamformer localization performance (i.e., MCC) for the real EEG data showed no significant differences among SPA, SIA, and MIA (F(2,29) = 1.08, p = .353). Although there were no significant differences in localization performance, the mean MCC values for the real-EEG results showed a trend consistent with the simulated-EEG results, in that MIA > SIA > SPA (Fig. 10).

Fig. 9

Real-EEG data localizations for SPA, SIA, and MIA beamformers relative to locations for dipole fitted sources. Dipole source locations are designated by black baskets. Hits and false alarms for beamformer localized sources are designated by green circles and red filled circles, respectively. A third class of “possible true sources” found using beamformers are designated by blue filled circles

Fig. 10

A MCC showing localizer performances for real-EEG experiment. MCC equal to 1 represents perfect localization performance, 0 represents chance performance, and − 1 represents complete failure. Error bars reflect ± one standard deviation. No significant differences were found for MCC among SPA, SIA, and MIA. B Hit rates (presented as positive values) and false-alarm rates (presented as negative values) that were used to calculate the MCC in (A)

Localization Error

Localization errors (i.e., distances between beamformer hits and dipole locations) appeared greater for SPA than for SIA and MIA (Fig. 9). The average LocErrors were SPA = 16.5 ± 4.7 mm, SIA = 15.5 ± 3.7 mm, and MIA = 11.9 ± 4.4 mm. ANOVA results, however, revealed no significant differences among SPA, SIA, and MIA (F(2,29) = 3.16, p = .059). This effect approached significance, but given the small sample size (n = 10) of this previously collected dataset, we had insufficient evidence to reject the null hypothesis and determine whether the difference was real; future studies with larger sample sizes are therefore recommended. That said, the trend was similar to what we observed for simulated-EEG localization error, in that SPA > SIA > MIA.

Wave Error

We did not find evidence for differences among SPA, SIA, and MIA in our measure of wave error (F(2,29) = 1.8, p = .185). The average wave errors were SPA = 12.3 ± 2.1, SIA = 18.6 ± 13.0, and MIA = 13.5 ± 3.6. These results did not show a trend similar to that found in the simulated-EEG experiment.


Discussion

In this paper, we provided a proof of principle that MCMV beamforming (SIA and MIA) has advantages over traditional beamforming (SPA) when localizing multiple sources underlying simulated and real EEG data. This was to be expected based on previous studies that investigated MCMV beamforming of MEG data (Diwakar et al. 2011; Moiseev et al. 2011; Moiseev and Herdman 2013), but it had yet to be confirmed for EEG data and for more than four simulated sources. Evidence from the current study demonstrated that MCMV beamforming performs similarly for EEG source modeling as for MEG source modeling (Moiseev et al. 2011) for two and four sources. Thus, the evidence indicates that MCMV beamforming (particularly MIA) is a useful method for localizing EEG activity. In the following, we discuss the search methods’ (SPA, SIA, and MIA) performances with respect to number of EEG channels, source correlations, number of sources, SNR, and source amplitudes for the simulated-EEG experiment, followed by a discussion of the results from the real-EEG experiment.

Simulated EEG Experiment

Number of EEG Channels

Using 128 EEG channels versus 64 EEG channels marginally improved localization performance (higher MCC scores) for all search methods. This was to be expected because increasing the number of EEG electrodes improves the spatial sampling of the EEG field. Because the EEG spatial field distribution is relatively smooth across the scalp, there is an optimal spatial sampling of the electrical potential across the scalp (i.e., an optimal number of equidistantly placed electrodes). Although our results indicated that increasing the number of EEG electrodes improves localization, we did not evaluate the optimal number of EEG electrodes needed for best performance. Future research investigating larger numbers of electrodes would be needed to determine this optimum.

Source Correlation

By their design, single-source beamformer algorithms inherently suppress correlated source activities: increasing the correlation between two sources has previously been shown to significantly impair their localization, especially for correlations > 0.8 (Sekihara and Nagarajan 2008; Van Veen et al. 1997). This is particularly apparent when trying to localize auditory evoked responses using single-source beamformers, because auditory responses from the two hemispheres can be correlated in the range of 0.8–0.95. Several methods have been suggested to overcome this problem: half-sensor beamforming (Herdman et al. 2003; Herdman and Cheyne 2009) and MCMV-type beamforming (Dalal et al. 2006; Diwakar et al. 2011; Hui et al. 2010; Moiseev et al. 2011; Moiseev and Herdman 2013). The first method is crude and is mostly useful only when sources are known to be generated within each hemisphere; we will not discuss it further because it has largely been replaced by the more sophisticated and robust methods based on MCMV beamforming. To our knowledge, only the beamformers presented in Moiseev et al. (2011) and Moiseev and Herdman (2013) use unbiased localizers based on multi-source gain vectors. Interestingly, the effects of source correlation on MCMV beamformers follow the same trend as those found for traditional single-source beamformers (current paper; Van Veen et al. 1997), albeit to a lesser extent for the MCMV beamformers (SIA and MIA). We found that increasing source correlation reduced localization performance, increased localization error, and minimally increased waveform error, but these effects were more pronounced for the single-source (SPA) beamformer than for the MCMV (SIA and MIA) beamformers.
Because many perceptual and cognitive events generate correlated activities within and across hemispheres, our results indicate that MCMV beamformer methods (especially MIA) should identify more true underlying sources and provide more precise localization of EEG signals than single-source beamformer methods.
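To illustrate how intra-pair correlations like those manipulated in our simulations arise, two Gaussian time courses with a target correlation rho can be generated by linear mixing. This is a hypothetical sketch, not the study’s simulation code; the function name and parameters are illustrative:

```python
import math
import random

def correlated_pair(n_samples, rho, seed=0):
    """Two zero-mean Gaussian time courses with expected correlation rho."""
    rng = random.Random(seed)
    a = [rng.gauss(0.0, 1.0) for _ in range(n_samples)]
    noise = [rng.gauss(0.0, 1.0) for _ in range(n_samples)]
    # Mix: b = rho*a + sqrt(1 - rho^2)*noise, so corr(a, b) ≈ rho.
    b = [rho * x + math.sqrt(1.0 - rho ** 2) * e for x, e in zip(a, noise)]
    return a, b

# e.g., an intra-pair correlation of 0.6, the mid-level simulated condition:
s1, s2 = correlated_pair(5000, 0.6)
```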

Number of Sources

We further showed, in this paper, that beamformer performance decreased as the number of underlying sources increased. Most previous simulation studies investigated beamformer localization of four or fewer simultaneously active sources, for which beamformers show fairly high localization performance. In this paper, however, we showed that localization performance continued to decline as the number of underlying sources increased, at least up to eight sources. Thus, there was a cost to localization of increasing the number of sources. For experiments that evoke activity from multiple brain regions (> 4) within discrete time intervals (< 300 ms), traditional beamformers will have a hard time localizing the true sources, both in number and in location. For example, if auditory, visual, and motor cortices are all activated by a stimulus (e.g., pressing a button to an audiovisual target), then single-source beamforming will perform very poorly at finding the sources (see Fig. 3, upper plots). The SIA method will improve localization performance in such conditions (see Fig. 3, middle plots), but MIA will provide even greater improvements (see Fig. 3, lower plots). One likely factor contributing to this decline in performance with increasing number of sources is the overlapping spatial distributions of the far-field EEG recorded at the scalp. As more sources are added, more inter-source correlations can exist. Correlations affect the single-source beamformer directly; however, to some degree they also affect multi-source beamformers through LocError: localization errors result in imperfect removal (“nulling”) of already-found sources, and the residual signals, especially correlated ones, can deteriorate the localization of the next source. This most likely accounts for the LocError found for the MCMV beamformers in this paper.
Note that, in real life, similar consequences may also result from source modeling errors because the true sources are only approximately represented by current dipoles. Therefore, perfect nulling may not be possible in principle.
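The nulling idea can be made concrete with a small numerical sketch (our own illustration, not the authors' implementation). With the target leadfield and the leadfields of already-found sources stacked into a matrix H, the MCMV weights give unit gain on the target and exactly zero gain on each found source; a mislocalized found source would make its nulled column slightly wrong, leaving residual signal.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors = 32

# Illustrative leadfield columns for a target source and one already-found source
h_target = rng.standard_normal(n_sensors)
h_found = rng.standard_normal(n_sensors)

# Sensor covariance: identity noise plus activity from both sources
C = (np.eye(n_sensors)
     + 2.0 * np.outer(h_target, h_target)
     + 2.0 * np.outer(h_found, h_found))

# MCMV weights: W = C^{-1} H (H^T C^{-1} H)^{-1}, so that W^T H = I
H = np.column_stack([h_target, h_found])
Ci_H = np.linalg.solve(C, H)
W = Ci_H @ np.linalg.inv(H.T @ Ci_H)

w_target = W[:, 0]
print(w_target @ h_target)  # unit gain on the target source (~1)
print(w_target @ h_found)   # nulled gain on the found source (~0)
```

Because W^T H equals the identity by construction, the target weight vector passes the target source at unit gain while exactly cancelling the found source's leadfield; imperfect localization breaks this cancellation only to the extent that the estimated leadfield differs from the true one.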

Another interesting and unexpected result for the MCMV beamformers (SIA and MIA) was the increased false-alarm rate with increasing source number, which was not evident for the SPA beamformer (see Fig. 4B). This might largely be due to residual signal from one source being localized again as an additional source, especially for the MIA method. We found that the majority (91%) of the false alarms for MIA were situated within 26 mm of the hit locations. Thus, we suspect that the false alarms for MIA were simply a re-localization of the residual signal left over after nulling a previously found source. Improving the way in which gain-matrix nulling is performed, or the criteria for accepting and rejecting true sources, might help improve localization performance by reducing the false-alarm rate. Future work in this area is therefore warranted.
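The hit/false-alarm bookkeeping underlying such an evaluation can be sketched as follows (our own illustration; the 10 mm matching threshold and the coordinates are arbitrary assumptions, not the paper's criteria):

```python
import numpy as np

def score_peaks(found, true, max_dist=10.0):
    """Classify each found location as a hit (within max_dist mm of an
    as-yet-unmatched true source) or a false alarm. Coordinates in mm.
    Illustrative scoring only; the paper's exact criteria may differ."""
    true = list(map(np.asarray, true))
    hits, false_alarms = [], []
    for f in map(np.asarray, found):
        dists = [np.linalg.norm(f - t) for t in true]
        if dists and min(dists) <= max_dist:
            hits.append(f)
            true.pop(int(np.argmin(dists)))  # each true source matched at most once
        else:
            false_alarms.append(f)
    return len(hits), len(false_alarms)

true_srcs = [(0, 0, 50), (30, 20, 40)]
found_srcs = [(2, 1, 49), (31, 20, 41), (60, -10, 30)]
print(score_peaks(found_srcs, true_srcs))  # (2, 1): two hits, one false alarm
```

A residual peak re-localized near an already-matched source would, under this scheme, count as a false alarm close to a hit location, which is consistent with the 26 mm clustering noted above.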

Signal-to-Noise Ratio

Not surprisingly, we found that localization performance improved as SNR increased for all search methods. For two sources, all beamformer methods (SPA, SIA, and MIA) performed relatively well (MCC > 0.8) even at low SNRs of less than 0.3 (Figs. 3, 5). They also had reasonable source LocError (error < 5 mm) at SNR > 0.3 (Fig. 6B). This is consistent with previous literature showing good to excellent source localization for two correlated sources (Diwakar et al. 2011; Hui et al. 2010; Moiseev et al. 2011; Van Veen et al. 1997). However, within the SNR range of 0.2–0.5 (the typical range for ERP studies), the single-source beamformer (SPA) performed rather poorly at localizing four or more underlying sources (see Figs. 3, 5). In addition, localization and wave errors were particularly high (poor) for SPA as compared to the SIA and MIA methods. This is understandable because single-source beamformers are prone to mislocalizing, or altogether missing, sources that have low SNRs and that are highly correlated with another source (Van Veen et al. 1997). Again, this mainly has to do with the inherent tendency of beamformers to suppress correlated signals. The SPA method has no way to compensate for this inherent attribute, whereas MCMV beamforming compensates by eliminating mixing of the reconstructed signals so that correlations no longer matter. Thus, our MCMV beamformers (SIA and MIA) should have, and did have, improved localization performance and precision at low SNRs as compared to the single-source beamformer (SPA).
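The MCC values cited above combine hits, misses, false alarms, and correct rejections into a single score (Matthews 1975). A minimal sketch, with illustrative counts rather than values from this study:

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient (Matthews 1975): +1 is perfect
    agreement, 0 is chance level, -1 is total disagreement."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# e.g. 4 true sources: 3 localized (hits), 1 missed, 1 false alarm,
# with 95 candidate locations correctly left empty (counts are made up)
print(round(mcc(tp=3, tn=95, fp=1, fn=1), 2))  # -> 0.74
```

Unlike a plain hit rate, MCC penalizes false alarms as well as misses, which is why it is a useful single summary of localization performance here.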

Constant Versus Variable Source Amplitudes

Localizer performance degraded through an interaction between the number of sources and the SNR when the amplitudes of the sources were fixed to 30 nA (Fig. 8). We showed that varying the source amplitudes so as to maintain the same scalp-level SNR for each source improved localization performance for 2–8 sources. The degraded localization performance for the constant source-amplitude simulation is most likely a result of superficial sources having greater SNR at the scalp and thus contributing more to the overall covariance matrix used by the beamforming technique. Thus, the more superficial sources will be better localized than the deeper sources if their amplitudes are set to a constant value. In the case of variable source amplitudes, the covariance matrix contained equal contributions from superficial and deep sources, which were therefore similarly well localized by any beamforming technique. We compared these two methods because the constant source-amplitude simulation (simulation 1) tested beamformer localization performance under more realistic neurophysiological conditions, whereas the variable source-amplitude simulation tested beamformer performance without the caveat of SNR differences between deep and superficial sources. The constant source-amplitude method is more likely to represent realistic neural activation because a similar number of sensory cortical neurons is generally recruited regardless of depth and location. Thus, superficial visual cortical activations will have greater SNR at the scalp level than deeper primary auditory cortical activations. This means that in audiovisual experiments, for example, visual activations will be better localized by beamformers than auditory activations because of this depth relationship. We did not, however, perform an exhaustive parameterization of source-amplitude differences across numbers of sources and measure the beamformers' performances. This could be done in future studies, but theory dictates that localization performance for each source depends on its own SNR as measured at the scalp and on the other sources' SNRs.
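The scalp-level SNR equalization described above can be sketched numerically. This toy example assumes a simple definition of scalp SNR (mean sensor signal power over noise power) and random illustrative leadfields; the paper's exact definitions and forward model may differ.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sensors = 32

# Illustrative leadfields: the "deep" source projects more weakly to the scalp
L_superficial = rng.standard_normal(n_sensors)
L_deep = 0.3 * rng.standard_normal(n_sensors)

noise_power = 1.0
target_snr = 0.5  # desired scalp-level SNR for every source

def amplitude_for_snr(leadfield, snr, noise_power):
    """Source moment giving scalp signal power = snr * noise power,
    with scalp power taken as the mean squared sensor signal."""
    gain = np.mean(leadfield ** 2)  # scalp power per unit squared moment
    return np.sqrt(snr * noise_power / gain)

a_sup = amplitude_for_snr(L_superficial, target_snr, noise_power)
a_deep = amplitude_for_snr(L_deep, target_snr, noise_power)
print(a_deep > a_sup)  # the deep source needs a larger moment: True
```

With constant amplitudes, by contrast, the deep source's scalp SNR would be far lower than the superficial source's, and its contribution to the sensor covariance correspondingly smaller, which is the depth bias discussed above.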

Search Method (SPA, SIA, and MIA)

Results from this study confirmed that our new search algorithm (MIA) improved the performance and precision of localizing multiple sources over the SPA and SIA methods. The main reason for this improvement is likely that MIA repeatedly nulls one source while it searches the beamformer map for the next largest source activity before accepting a source location. The MIA method only accepts a source location once it has found an optimal location and orientation. SIA, on the other hand, accepts each newly found source location while keeping all previously found sources unchanged; that is, it does not attempt to optimize the whole set. Interestingly, once SIA or MIA found a true source, waveform reconstruction was similar between the two search methods; they had similar wave error (see Fig. 7). However, MIA had the advantages of (1) finding more sources with fewer false positives, resulting in higher MCC scores; and (2) finding sources that were closer to the true source locations (i.e., smaller localization error). Overall, MIA was the better search method compared to SIA, which was better than SPA.
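The difference between the two search loops can be caricatured with a 1-D toy "beamformer map" (purely schematic; `toy_map`, the Gaussian peaks, and the multiplicative nulling rule are our own stand-ins, not the authors' localizer):

```python
import numpy as np

def toy_map(grid, found, true_locs=(2.0, 7.0)):
    """Hypothetical 1-D localizer map: pseudo-power at each grid point,
    with already-found sources crudely nulled (zeroed) out."""
    power = sum(np.exp(-(grid - t) ** 2) for t in true_locs)
    for f in found:
        power = power * (1 - np.exp(-(grid - f) ** 2))
    return power

grid = np.linspace(0, 10, 201)

def sia(n_sources):
    """SIA-like greedy search: accept each new peak and never revisit it."""
    found = []
    for _ in range(n_sources):
        found.append(grid[np.argmax(toy_map(grid, found))])
    return found

def mia(n_sources, n_passes=3):
    """MIA-like search: after the greedy pass, repeatedly null all sources
    but one and re-locate that one, refining the whole set."""
    found = sia(n_sources)
    for _ in range(n_passes):
        for i in range(len(found)):
            others = found[:i] + found[i + 1:]
            found[i] = grid[np.argmax(toy_map(grid, others))]
    return found

print(sorted(round(x, 1) for x in mia(2)))  # near the true sources at 2.0 and 7.0
```

In this easy two-source example both loops succeed; the MIA-style refinement pass matters when an early, slightly mislocalized acceptance would otherwise distort the map used to find the remaining sources.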

An important distinction between single-source (SPA) and multiple-source (MIA and SIA) beamformers is that the single-source beamformer does not account for leakage among the found sources, whereas the multiple-source beamformers do. This is because the MCMV constraints prevent leakage from any one found source into all other found sources. This has important implications when attempting to associate behavioural function with underlying brain activity. If sources from spatially nearby, but functionally separate, brain regions contain mixed signals, then a behavioural function could be incorrectly linked to both sources when only one of those source locations is truly associated with the behaviour. In addition, source mixing/leakage can cause problems in functional connectivity analyses, which are often based on phase-coherence measures (e.g., the phase-locking value, PLV). Phase coherence will be incorrectly identified between two brain regions whose reconstructed sources are mixed. Because MCMV beamformers prevent source mixing, phase-coherence measures would provide more valid results for interpreting functional connectivity among sources, even those near each other.
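The leakage problem for connectivity can be sketched with a simplified phase-locking value computed at a single frequency (an illustration under our own assumptions; real analyses typically use Hilbert- or wavelet-derived phases): mixing one source into another inflates PLV even when the true sources are phase-independent.

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_samples, fs, freq = 100, 256, 256.0, 10.0
t = np.arange(n_samples) / fs

def plv(x, y, freq, t):
    """Phase-locking value at one frequency, estimated per trial by
    projecting onto a complex exponential. 0 = no phase coupling,
    1 = perfect phase locking across trials."""
    carrier = np.exp(-2j * np.pi * freq * t)
    phx = np.angle(np.sum(x * carrier, axis=1))
    phy = np.angle(np.sum(y * carrier, axis=1))
    return np.abs(np.mean(np.exp(1j * (phx - phy))))

# Two truly independent 10 Hz sources with random phases per trial
pha = rng.uniform(0, 2 * np.pi, (n_trials, 1))
phb = rng.uniform(0, 2 * np.pi, (n_trials, 1))
src_a = np.cos(2 * np.pi * freq * t + pha)
src_b = np.cos(2 * np.pi * freq * t + phb)

print(round(plv(src_a, src_b, freq, t), 2))  # small: no true coupling
leaky_b = src_b + 0.8 * src_a                # leakage of source A into B
print(plv(src_a, leaky_b, freq, t) > plv(src_a, src_b, freq, t))  # True
```

The spuriously elevated PLV in the leaky case is exactly the artifact that the MCMV nulling constraints are meant to prevent.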

Real-EEG Experiment

To provide evidence of face and concurrent validity for using SPA, SIA, and MIA to localize perceptually evoked sources, we applied these methods to a visual ERP study. Overall, all localizers did a reasonably good job of localizing the three dipole source locations, with MCC values of 0.6 or higher. Although differences in localization performance among SPA, SIA, and MIA did not reach traditional statistical significance levels, our results showed that MIA > SIA > SPA. This is consistent with what we found in the simulated-EEG experiment. Furthermore, localization error for the real-EEG experiment was not significantly different among localizers, but the average values again showed that MIA had better (smaller) LocError than SIA and SPA. These findings indicate that MIA is at least as good as, if not better than, the other localizers at localizing both simulated- and real-EEG data.

Localization results from the real-EEG experiment had more false positives than we had observed in the simulated-EEG experiment and than we had expected. This might be due to the variability in EEG noise covariance across the ten participants in the real-EEG experiment, as compared to the single EEG noise covariance that we used for the simulated-EEG experiment. A next step in validating these localizers would be to add different noise covariances to the simulated sensor data so as to mimic simulated-EEG recordings from more than one participant. The simulations would then resemble more realistic experiments in which EEG is collected from many participants. That said, our results from the real-EEG experiment do provide some evidence that MIA, SIA, and SPA perform similarly even when EEG noise varies across participants.

Taken together, results from the simulated- and real-EEG experiments showed that MIA is as good as, and in some situations will outperform, SPA and SIA for localizing EEG sources. MIA and SIA also have the advantage over SPA of limiting source mixing/leakage by including the nulling constraints. A major challenge in the neuroimaging community is to validate ill-posed inverse solutions. Simulation studies can help provide some evidence for this validity because the ground truth regarding source locations, orientations, and waveforms is known. However, simulation studies might not represent real-life recordings. Thus, it is important that the community continues to provide evidence of internal validity through simulation studies, where the ground truth is known, and evidence of concurrent and face validity through real recordings (e.g., extracranial and intracranial measures), where the ground truth can be approximated from observing consistent findings.


Conclusions

Based on our findings, we conclude that MCMV beamformers are an improvement over single-source beamformers for localizing ERPs. Because MIA had better localization performance and precision than SIA or SPA for simulated-EEG data, and similar performance for real-EEG data, we expect that MIA will provide improved localization and waveform reconstruction of ERPs. We hope that our research will help guide future investigations into the needed area of validating methods for localizing sources of ERPs.



Acknowledgements

Funding was provided by the Natural Sciences and Engineering Research Council of Canada and the University of British Columbia.

Supplementary material

Supplementary material 1: 10548_2018_627_MOESM1_ESM.docx (16 KB)


References

  1. Brookes MJ, Vrba J, Mullinger KJ, Geirsdóttir GB, Yan WX, Stevenson CM, Morris PG (2009) Source localisation in concurrent EEG/fMRI: applications at 7T. Neuroimage 45(2):440–452
  2. Chai T, Draxler RR (2014) Root mean square error (RMSE) or mean absolute error (MAE)?—Arguments against avoiding RMSE in the literature. Geosci Model Dev 7(3):1247–1250
  3. Dalal SS, Sekihara K, Nagarajan SS (2006) Modified beamformers for coherent source region suppression. IEEE Trans Biomed Eng 53(7):1357–1363
  4. Dalal SS, Guggisberg AG, Edwards E, Sekihara K, Findlay AM, Canolty RT, Nagarajan SS (2008) Five-dimensional neuroimaging: localization of the time–frequency dynamics of cortical activity. Neuroimage 40(4):1686–1700
  5. Dalal SS, Baillet S, Adam C, Ducorps A, Schwartz D, Jerbi K, Lachaux JP (2009) Simultaneous MEG and intracranial EEG recordings during attentive reading. Neuroimage 45(4):1289–1304
  6. Darvas F, Pantazis D, Kucukaltun-Yildirim E, Leahy RM (2004) Mapping human brain function with MEG and EEG: methods and validation. NeuroImage 23:S289–S299
  7. Diwakar M, Tal O, Liu TT, Harrington DL, Srinivasan R, Muzzatti L, Huang MX (2011) Accurate reconstruction of temporal correlation for neuronal sources using the enhanced dual-core MEG beamformer. Neuroimage 56(4):1918–1928
  8. Haufe S, Ewald A (2016) A simulation framework for benchmarking EEG-based brain connectivity estimation methodologies. Brain Topogr
  9. Herdman AT, Cheyne D (2009) A practical guide for MEG and beamforming. In: Brain signal analysis: advances in neuroelectric and neuromagnetic methods, p 99
  10. Herdman AT, Takai O (2013) Paying attention to orthography: a visual evoked potential study. Front Hum Neurosci 7:199
  11. Herdman AT, Wollbrink A, Chau W, Ishii R, Ross B, Pantev C (2003) Determination of activation areas in the human auditory cortex by means of synthetic aperture magnetometry. NeuroImage 20:995–1005
  12. Hillyard SA, Anllo-Vento L (1998) Event-related brain potentials in the study of visual selective attention. Proc Natl Acad Sci 95(3):781–787
  13. Howell DC (2009) Statistical methods for psychology. Nelson Education
  14. Hui HB, Pantazis D, Bressler SL, Leahy RM (2010) Identifying true cortical interactions in MEG using the nulling beamformer. NeuroImage 49(4):3161–3174
  15. Jonmohamadi Y, Poudel G, Innes C, Weiss D, Krueger R, Jones R (2014) Comparison of beamformers for EEG source signal reconstruction. Biomed Signal Process Control 14:175–188
  16. Matthews BW (1975) Comparison of the predicted and observed secondary structure of T4 phage lysozyme. Biochim Biophys Acta 405(2):442
  17. Moiseev A, Herdman AT (2013) Multi-core beamformers: derivation, limitations and improvements. NeuroImage 71:135
  18. Moiseev A, Gaspar JM, Schneider JA, Herdman AT (2011) Application of multi-source minimum variance beamformers for reconstruction of correlated neural activity. NeuroImage
  19. Murzin V, Fuchs A, Kelso JS (2011) Anatomically constrained minimum variance beamforming applied to EEG. Exp Brain Res 214(4):515
  20. Robinson SE, Vrba J (1999) Functional neuroimaging by synthetic aperture magnetometry (SAM). In: Yoshimoto T, Kotani M, Kuriki S, Karibe H, Nakasato N (eds) Recent advances in biomagnetism. Tohoku University Press, Sendai, pp 302–305
  21. Sekihara K, Nagarajan SS (2008) Adaptive spatial filters for electromagnetic brain imaging. Springer Science & Business Media, New York
  22. Sekihara K, Nagarajan SS (2015) Electromagnetic brain imaging: a Bayesian perspective, 1st edn. Springer, Switzerland
  23. Sekihara K, Nagarajan SS, Poeppel D, Marantz A, Miyashita Y (2001) Reconstructing spatio-temporal activities of neural sources using an MEG vector beamformer technique. IEEE Trans Biomed Eng 48(7):760–771
  24. Sekihara K, Nagarajan SS, Poeppel D, Marantz A, Miyashita Y (2002) Application of an MEG eigenspace beamformer to reconstructing spatio-temporal activities of neural sources. Hum Brain Mapp 15(4):199–215
  25. Sekihara K, Nagarajan S, Poeppel D, Marantz A (2004) Asymptotic SNR of scalar and vector minimum-variance beamformers for neuromagnetic source reconstruction. IEEE Trans Biomed Eng 51(10):1726–1734
  26. Sekihara K, Sahani M, Nagarajan SS (2005) Localization bias and spatial resolution of adaptive and non-adaptive spatial filters for MEG source reconstruction. NeuroImage 25(4):1056–1067
  27. Tadel F, Baillet S, Mosher JC, Pantazis D, Leahy RM (2011) Brainstorm: a user-friendly application for MEG/EEG analysis. Comput Intell Neurosci 2011:8
  28. Van Veen BD, Buckley KM (1988) Beamforming: a versatile approach to spatial filtering. IEEE ASSP Mag 5:4–24
  29. Van Veen BD, van Drongelen W, Yuchtman M, Suzuki A (1997) Localization of brain electrical activity via linearly constrained minimum variance spatial filtering. IEEE Trans Biomed Eng 44(9):866–880
  30. Van Hoey G, Van de Walle R, Vanrumste B, D'Havse M, Lemahieu I, Boon P (1999) Beamforming techniques applied in EEG source analysis. Proc ProRISC99 10:545–549

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  • Anthony T. Herdman (1, 2)
  • Alexander Moiseev (2)
  • Urs Ribary (2, 3)

  1. Faculty of Medicine, School of Audiology and Speech Sciences, University of British Columbia, Vancouver, Canada
  2. Behavioral and Cognitive Neuroscience Institute (BCNI), Simon Fraser University, Burnaby, Canada
  3. Department of Psychology, Simon Fraser University, Burnaby, Canada
