A comparative experimental study of visual brain event-related potentials to a working memory task: virtual reality head-mounted display versus a desktop computer screen

Virtual reality head mounted display (VR HMD) systems are increasingly utilised in combination with electroencephalography (EEG) in the experimental study of cognitive tasks. The aim of our investigation was to determine the similarities/differences between VR HMD and the computer screen (CS) in response to an n-back working memory task by comparing visual electrophysiological event-related potential (ERP) waveforms (N1/P1/P3 components). The same protocol was undertaken for VR HMD and CS with participants wearing the same EEG headcap. ERP waveforms obtained with the VR HMD environment followed a similar time course to those acquired in CS. The P3 mean and peak amplitudes obtained in VR HMD were not significantly different to those obtained in CS. In contrast, the N1 component was significantly higher in mean and peak amplitudes for the VR HMD environment compared to CS at the frontal electrodes. For VR HMD, significantly higher P1 mean and peak amplitudes were found at the occipital region compared to the temporal region. Our results show that successful acquisition of ERP components to a working memory task is achievable by combining VR HMD with EEG. In addition, the higher amplitude N1/P1 components seen in VR HMD indicate the potential utility of this VR modality in the investigation of early ERPs. In conclusion, the combination of VR HMD with EEG/ERP would be a useful approach to advance the study of cognitive function in experimental brain research.


Introduction
Recent advances in the technical specifications of virtual reality (VR) systems have enabled their utility in the study of cognitive load and function (Cipresso et al. 2018;Luong et al. 2019;Kourtesis et al. 2019;Radianti et al. 2020). Researchers have taken advantage of VR systems in a head-mounted display (VR HMD) configuration in their investigation of cognitive tasks as such systems offer the ability to create/control the visual surround and deliver complex stimuli (Harjunen et al. 2017;Rupp et al. 2019;Dey et al. 2019;Tauscher et al. 2019;Tremmel et al. 2019). Electroencephalography (EEG) is a non-invasive method which conveniently acquires brain neuronal activity in human participants by recording voltage differences at the scalp surface in the millisecond range of temporal resolution (Gevins 1998). Interestingly, studies have shown the feasibility of combining VR HMD with EEG to acquire brain responses to various cognitive tasks: visual oddball (Tauscher et al. 2019), n-back working memory (Dey et al. 2019;Luong et al. 2019;Tremmel et al. 2019), 3-choice vigilance and image recognition (Rupp et al. 2019) and bimodal oddball (Harjunen et al. 2017) tasks. In terms of signal-to-noise levels, Harjunen et al. (2017) demonstrated that, despite their a priori concerns regarding signal interference from the VR HMD system upon EEG signals, there was no difference between the results obtained using a VR HMD system and a desktop computer screen (CS).

Communicated by Bill J Yates.
EEG is also used to acquire event-related potentials (ERPs) which are positive or negative voltage deflections in response to specific time-locked cognitive stimuli or events (Blackwood and Muir 1990;Sur and Sinha 2009). ERP studies have been undertaken in the investigation of working memory using the n-back task which requires the continuous maintenance, updating and manipulation of information (Barrouillet et al. 2004;Chen and Huang 2016). The visual n-back task involves participants processing stimuli that are presented in a sequence on a screen whilst being asked to determine whether the current stimulus matches a pre-specified n-back target (Watter et al. 2001). For example, in the 1-back condition, participants are required to match the current stimulus with the one immediately prior (Watter et al. 2001;Pelegrina et al. 2015;Scharinger et al. 2015).
The most reported and prominent visual ERP component in response to the n-back task is a waveform at ~ 300-800 ms post-stimulus, termed the P300 or P3 component (Brouwer et al. 2012;Dong et al. 2015). The P3 component has been implicated in attention, working memory, task performance, predictability, and relevance judgment (Sutton et al. 1965;Kahneman 1973;Ruchkin and Sutton 1978;Picton 1992;Polich 2007;Brouwer et al. 2012;Zahedi et al. 2020). P3 visual ERP responses have been reported with the Oculus VR system in a bimodal oddball task (Harjunen et al. 2017), a decision-making game paradigm (Spapé et al. 2019), a brain-computer interface (BCI) speller (Du et al. 2019) and in a BCI training paradigm (Amaral et al. 2018). There are only a few investigations which have reported and analysed the early post-stimulus positive and negative ERP components in response to working memory tasks. For example, early responses at ~ 40-150 ms post-stimulus, termed the P1 and N1 components, have been reported (Pratt et al. 2011;Liu et al. 2017;Zhao et al. 2020). The N1 and P1 components are thought to reflect early attentional processing and low-level features (Hillyard and Anllo-Vento 1998;Woodman 2010;Pratt et al. 2011;Hinojosa et al. 2015;Shalchy et al. 2020).
The use of ERPs to examine the influence of VR HMD or desktop display on measures of cognitive workload may offer potential insights into the underlying cognitive components affecting memory workload. Although the investigations by Dey et al. (2019) and Tremmel et al. (2019) reported responses to the n-back task using a combination of VR HMD and EEG, they undertook only a power spectral analysis of the data, which combines non-time-locked and time-locked EEG activity rather than isolating responses time-locked to the presented n-back stimuli. As far as we are aware, there are no reported studies which have presented an analysis of ERP responses (time-locked to the stimuli) to a classic n-back working memory task using a combined VR HMD with EEG approach and compared the ERP waveforms to those obtained using a desktop CS. Since the visual surround in VR HMD is relatively more controlled compared to the desktop CS, differences may be expected in the N1/P1/P3 ERP responses between the two environments for the presented working memory task (there was no a priori hypothesis on the direction of change). The aim of our comparative electrophysiological brain study is to determine the similarities and differences between VR HMD (without any physical modification or customisation to the headset) and the traditional desktop CS environment in response to a classic visual n-back working memory task. Specifically, we compared N1/P1/P3 ERP waveforms using a within-group design, where the VR HMD and the control desktop CS protocols were undertaken in the same experimental session day with participants wearing the same EEG electrode headcap.

Participants
In previous EEG/ERP studies, authors have typically utilised a range of 10-30 (average ~ 20) participants in their investigations (see Larson and Carbine 2017). In our study, we recruited 21 participants who took part voluntarily and received no payment (7 females and 14 males, ages 20-28, Mean = 23.6, SD = 2.1). Recruitment was achieved through word-of-mouth. All participants were right-handed according to the Edinburgh Handedness Inventory (Oldfield 1971). Each participant was asked to complete the Spielberger State-Trait Anxiety Inventory (Marteau and Bekker 1992) and a paper-based health questionnaire which served to exclude pre-existing conditions or medications that could affect cognitive function. No participants reported neurological disorders or psychological discomfort, and all participants had normal or corrected-to-normal visual acuity. The study was carried out in conformity with the Declaration of Helsinki (World Medical Association 2013). Local ethical approval was given by the Hull York Medical School Ethics Committee. Informed written consent was gained from each participant before partaking in the study. Participants were free to withdraw from the study at any time without having to provide an explanation. Participants were instructed to verbally report any discomfort experienced whilst wearing the HTC Vive VR HMD and the EEG headcap during the experimental session (Fig. 1). No participant reported any discomfort, except one (of 21) who experienced motion sickness during the trial period in the VR HMD environment; that session was terminated.

Desktop CS and VR HMD
A Dell UltraSharp 2408WFP 24-inch LCD monitor was used to present cognitive tasks in the CS environment (resolution: 1920 × 1200; pixel-response rate: 6 ms; contrast ratio: 3000:1 dynamic; visual angle: 9.8° × 9.8°; distance: ~ 55 cm from the participant's nasion). An unmodified HTC Vive VR HMD (first version) was used for generating the virtual environment (display: OLED; resolution: 2160 × 1200; refresh rate: 90 Hz; platform: SteamVR, Viveport; field of view: 110°; weight: 470 g). The HTC Vive VR HMD was calibrated to the experimental room using two HTC Vive base stations (version 1.0) which track and trace the VR HMD headset and hand controllers.
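The stated visual angle and viewing distance are related by the standard conversion θ = 2·arctan(s/2d). As a minimal sketch (the function names are ours; the 9.8° and ~55 cm values are from the text), the on-screen extent of the stimulus region can be recovered:

```python
import math

def visual_angle_deg(size_cm: float, distance_cm: float) -> float:
    """Visual angle subtended by a stimulus of a given size at a given viewing distance."""
    return math.degrees(2 * math.atan(size_cm / (2 * distance_cm)))

def size_for_angle_cm(angle_deg: float, distance_cm: float) -> float:
    """Inverse: on-screen size needed to subtend a given visual angle."""
    return 2 * distance_cm * math.tan(math.radians(angle_deg) / 2)

# The 9.8 deg x 9.8 deg region at ~55 cm corresponds to roughly:
print(round(size_for_angle_cm(9.8, 55.0), 1))  # 9.4 (cm on screen)
```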
For the VR HMD headset, the distance between the left and right lenses, and the distance between the participant's face and the lenses, were adjusted for each participant to ensure comfortable vision of the VR environment. The n-back task was presented in the HTC Vive VR environment via Bigscreen Beta software (version 0.16.1), which enables proportional mirroring of the CS. The Bigscreen software parameters for curvature and brightness were fixed for all participants at 0% and 75%, respectively. The field of view in the VR HMD environment was expanded to the extent that participants could see the faint outline of the screen border at the edge of their sight, with a black void beyond it. For each participant, we adjusted the parameters for size and distance (depth); the Mean ± SEM for size = 36.1 ± 2.5%, and distance = 37.3 ± 2.1%.
In a separate experimental session (without any participant involvement), we assessed the intensity of light of the presented stimuli using the probe of a calibrated light meter (RS PRO ILM1335, RS Components), as light intensity could affect the interpretation of data between the two environments. The probe was placed into the cut-out left eye area of a polystyrene mannequin head model, and the VR HMD headset attached. The light meter measured the intensity of light on the lux scale. In our EEG laboratory, with lights off and blinds closed, the light level range was ~ 0.5-10 lx depending upon the location of measurement. For the CS environment, the light level during the presentation of each n-back trial (white colour on black background) was 0.62 lx (0.61 lx for the interval screen between each trial), and for the VR HMD headset 0.14 lx (0.13 lx for the interval screen).

EEG system
The data were recorded using a smartphone-based EEG system (Clewett et al. 2016;Bateson 2018;Bateson and Asghar 2021) with a sampling rate of 250 Hz, which was affixed to the backrest of the chair upon which the participant was sitting. EEG signals were acquired from 19 Ag/AgCl scalp electrodes (Fp1, Fp2, F7, F3, Fz, F4, F8, T3, C3, Cz, C4, T4, T5, P3, Pz, P4, T6, O1, O2) configured in the international 10-20 arrangement using an EEG headcap (Sleepcap, electrodes encased in a soft sponge material, cables located internally, Spes Medica). A support headcap net (Spes Medica) was placed on top of the EEG Sleepcap to fix the Sleepcap electrodes in position. Two linked Ag/AgCl ear electrodes (clipped to the earlobes) served as the reference in each participant.

Cognitive task
An n-back task was designed and run by employing the Psychology Experiment Building Language (PEBL, version 0.14) open-source software (Mueller and Piper 2014). A single letter appeared in white font on a black background for 500 ms, followed by a fixation cross which was jittered 700-1100 ms (Fig. 2).
The 1-back and 2-back cognitive task conditions were presented to the participants. An initial practice session was undertaken which presented both the 1-back and 2-back tasks. The participants received instructions informing them which n-back task (1-back or 2-back) was to be undertaken in the experimental session. In each session (CS or VR HMD), participants completed the 1-back task before continuing onto the 2-back task, as undertaken by Brouwer et al. (2012), Scharinger et al. (2017) and Liu et al. (2017). The 1-back task comprised 101 trials and the 2-back task comprised 102 trials (English alphabet letters). We sought to balance the requirement for an adequate number of target stimuli for the ERP analysis with the recording session duration in which participants could comfortably complete the experimental protocol in CS and VR HMD environments. To ensure that the ERP protocol was ~ 50 min in duration, we selected 40 target stimuli per condition. A participant break of 10-15 min was provided between the change-over of the screen environments to reduce the potential for fatigue.

Fig. 1 Participant wearing the EEG headcap (blue colour), EEG net (white colour) and the VR HMD system
The sequence of letter stimuli was pseudo-randomised (to prevent repeats of the same letter sequence and to ensure that lure trials were not present) for each of the two conditions. During each block, participants were required to respond by pressing the space bar key on a computer keyboard with their dominant hand when the current trial matched the stimulus seen n-back steps ago (1-back or 2-back). To enable participants to feel and easily locate the space bar key whilst wearing the HTC Vive VR HMD, this key was fully covered in self-adhesive loop strip tape (VELCRO®). The same keyboard with the loop strip on the spacebar key was used for the n-back task presented in the CS environment. Participants were instructed to sit in the same posture for the CS and VR HMD experimental sessions.
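The exact randomisation algorithm is not specified in the text; one plausible sketch of a pseudo-randomised n-back letter sequence with a fixed target count, no immediate letter repeats and no lure trials (here taken to mean n±1-back matches, which is an assumption about how "lure" was defined) is:

```python
import random
import string

def make_nback_sequence(n: int, n_trials: int, n_targets: int, seed: int = 0) -> list[str]:
    """Pseudo-random n-back letter sequence with a fixed target count and no lures.

    A target repeats the letter from n positions back; a lure would repeat the
    letter from n-1 or n+1 positions back and is excluded for non-target trials.
    """
    rng = random.Random(seed)
    letters = string.ascii_uppercase
    # Choose which positions (>= n) will be deliberate targets.
    target_pos = set(rng.sample(range(n, n_trials), n_targets))
    seq: list[str] = []
    for i in range(n_trials):
        if i in target_pos:
            seq.append(seq[i - n])  # deliberate n-back match
            continue
        # Forbid accidental targets (n back) and lures (n-1 / n+1 back).
        forbidden = {seq[i - k] for k in (n - 1, n, n + 1) if 0 < k <= i}
        seq.append(rng.choice([c for c in letters if c not in forbidden]))
    return seq

seq = make_nback_sequence(n=2, n_trials=102, n_targets=40)
print(sum(seq[i] == seq[i - 2] for i in range(2, 102)))  # 40
```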

Experimental procedure
The electrophysiological experiments took place in a dimly lit room, at room temperature. Each participant was provided with earplugs and completed the n-back task both in VR HMD and CS environments. The order in which the participants completed the n-back task was pseudo-randomised (VR first and then CS, or CS first and then VR), and counterbalanced at 50:50. The participant wore the same headcap (without removal) for both the CS and VR experimental sessions, and EEG electrode impedance was checked at the start of each session (each electrode impedance was kept below 5 kΩ).

EEG data pre-processing
EEG data were processed and analysed using EEGLAB (Delorme and Makeig 2004) and ERPLAB (Lopez-Calderon and Luck 2014) toolboxes running on MATLAB (Matlab R2015a, Mathworks, Inc.). Previous investigations of ERP components during VR HMD have used a range of high pass filters from 0.1 to 3 Hz (Harjunen et al. 2017;Rupp et al. 2019;Tauscher et al. 2019;Du et al. 2019;Lier et al. 2020). In our study, we applied a 2nd order infinite impulse response (IIR) Butterworth filter for bandpass filtering to the continuous data with a lower cutoff of 0.5 Hz and higher cutoff of 30 Hz. All data sets were segmented into epochs from − 200 to 600 ms relative to the stimulus onset. A 200 ms pre-stimulus time window was used for baseline correction.
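As a rough sketch of this pre-processing pipeline (the 0.5-30 Hz 2nd-order Butterworth bandpass, −200 to 600 ms epochs and 200 ms baseline window are from the text; the zero-phase `filtfilt` choice and channels-by-samples array layout are our assumptions):

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250             # sampling rate (Hz), as in the recording system
LOW, HIGH = 0.5, 30  # bandpass cutoffs (Hz)

# 2nd-order Butterworth bandpass; applied zero-phase here, though the paper
# does not state whether filtering was forward-only or zero-phase.
b, a = butter(2, [LOW / (FS / 2), HIGH / (FS / 2)], btype="bandpass")

def epoch_and_baseline(eeg, event_samples, fs=FS, tmin=-0.2, tmax=0.6):
    """Cut -200 to 600 ms epochs around stimulus onsets and subtract the
    mean of the 200 ms pre-stimulus window from each channel.

    eeg: channels x samples array; event_samples: stimulus-onset indices.
    """
    filtered = filtfilt(b, a, eeg, axis=-1)
    pre = int(round(-tmin * fs))   # 50 samples pre-stimulus
    post = int(round(tmax * fs))   # 150 samples post-stimulus
    epochs = []
    for s in event_samples:
        ep = filtered[:, s - pre : s + post].copy()
        ep -= ep[:, :pre].mean(axis=1, keepdims=True)  # baseline correction
        epochs.append(ep)
    return np.stack(epochs)  # trials x channels x samples
```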
Artifact detection was applied to the epoched data (target and non-target trials) by employing a moving window peak-to-peak function with a 100 µV voltage threshold. Next, a visual inspection was performed to detect and remove epochs with residual artifacts. We selected for inclusion only those trials in which participants made correct responses to the target trials. Previous investigations have indicated that a minimum of 6-8 trials is necessary for accurate and reasonably stable ERP analysis (Olvet and Hajcak 2009;Pontifex et al. 2010;Boudewyn et al. 2018). Across the participants, the mean ± SEM and range of correct target stimuli epochs selected for the data analysis for the CS and VR HMD environments were 17 ± 1.47 (range 10-34) and 21 ± 1.49 (range 11-35), respectively. Target ERP waveforms obtained from each participant were averaged for each environment across the 1-back and 2-back conditions.
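The moving-window peak-to-peak criterion can be sketched as follows (the 100 µV threshold is from the text; the window and step lengths are illustrative assumptions, as they are not reported):

```python
import numpy as np

def peak_to_peak_reject(epochs, threshold_uv=100.0, win=50, step=25):
    """Flag epochs whose peak-to-peak voltage within any moving window on any
    channel exceeds the threshold.

    epochs: trials x channels x samples, in microvolts.
    Returns a boolean mask of epochs to KEEP.
    """
    keep = np.ones(len(epochs), dtype=bool)
    n = epochs.shape[-1]
    for i, ep in enumerate(epochs):
        for start in range(0, max(n - win, 0) + 1, step):
            w = ep[:, start : start + win]
            # max-min per channel inside the window = peak-to-peak voltage
            if (w.max(axis=1) - w.min(axis=1)).max() > threshold_uv:
                keep[i] = False
                break
    return keep
```

Epochs surviving this mask would then go to visual inspection, as described above.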

Measurement of ERPs
The ERP responses acquired in both CS and VR HMD environments were visually inspected for components and their durations (Fig. 3). Based upon this visualisation, and previous ERP investigations using working memory tasks (Pratt et al. 2011;Liu et al. 2017;Zhao et al. 2020), the anterior N1, posterior P1 and P3 components were selected for the ERP analysis. The first negative waveform that peaked between 120 and 200 ms after stimulus onset at the frontal and central regions was identified and named as N1. The first positive peak within 120-200 ms after stimulus onset at the temporal and occipital regions was named as P1. A positive amplitude waveform, observed between 300 and 500 ms after stimulus onset, was labelled as P3.

Fig. 2 The n-back task and illustration of 1-back and 2-back trials, and fixation cross (+). ISI Inter-stimulus interval
The peak amplitude (local peak approach) and peak latency are commonly used metrics which investigators measure in their analysis of a given ERP component. Peak amplitude and latency of the ERP component at a single timepoint have been measured in n-back studies (Watter et al. 2001;Hajcak et al. 2010;Daffner et al. 2011;Brouwer et al. 2012;Chen et al. 2019) including an investigation using VR HMD (Harjunen et al. 2017). An alternative to using peak metrics, which is increasingly utilised, is the mean amplitude measure (Nielsen and Gonzalez 2020). The mean amplitude measure has the advantage of taking into account the selected ERP component over a given time period instead of at a single timepoint only, although there could be shortcomings in terms of time window selection, as narrower windows may increase the noise level (Hutman et al. 2016;Canette et al. 2020). In our ERP analysis, we utilised both peak and mean amplitude measures to take advantage of both metrics, as has been done by other investigators (Hutman et al. 2016;Canette et al. 2020).
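The two metrics can be illustrated with a short sketch (the 300-500 ms P3 window follows the component definitions above; the local-peak rule, requiring the peak to exceed its immediate neighbours, is one common convention and not necessarily the authors' exact implementation):

```python
import numpy as np

def component_measures(erp, fs=250, t0=-0.2, window=(0.30, 0.50), polarity=+1):
    """Mean amplitude, local-peak amplitude and peak latency of an ERP
    component within a search window (here the 300-500 ms P3 window; pass
    window=(0.12, 0.20) and polarity=-1 for the anterior N1).

    erp: 1-D averaged waveform for one channel/region, in microvolts.
    """
    i0 = int(round((window[0] - t0) * fs))
    i1 = int(round((window[1] - t0) * fs))
    seg = polarity * erp[i0:i1]
    mean_amp = polarity * seg.mean()
    # Local peaks must be strictly greater than both neighbours, guarding
    # against taking a window-edge value as the peak.
    inner = np.flatnonzero((seg[1:-1] > seg[:-2]) & (seg[1:-1] > seg[2:])) + 1
    k = inner[np.argmax(seg[inner])] if inner.size else int(np.argmax(seg))
    peak_amp = polarity * seg[k]
    peak_latency_ms = (t0 + (i0 + k) / fs) * 1000
    return mean_amp, peak_amp, peak_latency_ms
```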

Statistical analysis of behavioural and electrophysiological data
For the behavioural analyses, response times (correct responses) and accuracy (% of correct responses) of the participants were calculated for each participant separately. Behavioural data (response time or accuracy) were analysed using a 2 × 2 [environment (CS, VR HMD) × cognitive workload (1-back, 2-back)] repeated-measures analysis of variance (ANOVA).
The statistical analysis of the data was performed using IBM SPSS Statistics software (version 24, SPSS Inc., Chicago, IL, USA). Bonferroni adjustment (p < 0.05) was used for the post-hoc pairwise comparisons and paired-samples t tests. The Greenhouse-Geisser epsilon correction was applied when the sphericity assumption was violated (Mauchly's test of sphericity).
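For a 2 × 2 fully within-subjects design such as this one, each effect has one numerator degree of freedom, so the ANOVA F values can equivalently be obtained as squared paired t statistics on per-participant difference scores. A hypothetical sketch of this equivalence (the function name and inputs are ours, not the SPSS procedure used in the study):

```python
import numpy as np
from scipy import stats

def rm_anova_2x2(cs1, cs2, vr1, vr2):
    """2 x 2 (environment x workload) repeated-measures ANOVA via contrasts:
    with two levels per factor, each effect has 1 numerator df and
    F = t^2 for the paired t-test on the corresponding difference scores.

    Inputs are per-participant score arrays (e.g. accuracy), same length.
    Returns {effect: (F, p)}.
    """
    cs1, cs2, vr1, vr2 = map(np.asarray, (cs1, cs2, vr1, vr2))
    contrasts = {
        "environment": (cs1 + cs2) - (vr1 + vr2),
        "workload":    (cs1 + vr1) - (cs2 + vr2),
        "interaction": (cs1 - cs2) - (vr1 - vr2),
    }
    out = {}
    for name, diff in contrasts.items():
        t, p = stats.ttest_1samp(diff, 0.0)
        out[name] = (float(t) ** 2, float(p))
    return out
```

Sphericity corrections are unnecessary here because every factor has only two levels.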

Behavioural results
Behavioural results of mean correct response rates (accuracy) and reaction times of participants are presented in Table 1. For the accuracy of correct responses, a two-way repeated-measures ANOVA was conducted and a significant main effect found for cognitive workload F(1, 19) = 45.425, p < 0.001, η p 2 = 0.71 but not for environment F(1, 19) = 1.902, p = 0.184, η p 2 = 0.09 or the interaction effect between environment and cognitive workload F(1, 19) = 0.339, p = 0.567, η p 2 = 0.02. Post-hoc pairwise t tests with Bonferroni adjustment showed that the accuracy differed significantly between 1-back and 2-back levels of cognitive workload. The accuracy level, irrespective of environment, in the 1-back condition [Mean ± SEM (M) = 89.00 ± 0.03] was significantly higher compared to the 2-back [M = 69.00 ± 0.04, t(19) = 6.71, p < 0.001, d = 1.50].
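The reported effect size is consistent with computing Cohen's d for a paired contrast directly from the t statistic, d = t/√n, with n = 20 participants contributing to the df = 19 tests:

```python
import math

def cohens_d_paired(t: float, n: int) -> float:
    """Cohen's d for a paired comparison from the t statistic:
    d = t / sqrt(n), where n is the number of participants (pairs)."""
    return t / math.sqrt(n)

# Reported 1-back vs 2-back accuracy contrast: t(19) = 6.71, n = 20
print(round(cohens_d_paired(6.71, 20), 2))  # 1.5
```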
For the mean reaction times, the results of the two-way repeated-measures ANOVA indicated that there was a significant main effect of cognitive workload F(1, 19) = 12.295, p = 0.002, η p 2 = 0.40, during the n-back tasks.

ERP results
The grand average ERP responses to both 1-back and 2-back conditions in the CS and VR HMD environments are presented in Fig. 4. We measured the mean amplitude, peak amplitude and latency of the N1, P1 and P3 ERP components in response to the presentation of the 1-back and 2-back tasks.

N1 component
In the frontal (F3 + Fz + F4) and central (C3 + Cz + C4) regions an N1 component (between 120 and 200 ms) was observed. Measurements for mean amplitude, peak amplitude and peak latency of N1 for CS or VR HMD, with 1-back and 2-back workloads in frontal and central regions, are presented in Table 2. All the statistical analyses are summarised in Table 3.
There was a significant interaction effect between environment and region F(1, 19) = 14.187, p = 0.001, η p 2 = 0.43. A subsequent paired t test revealed that the N1 mean amplitude

N1 peak amplitude
The three-way repeated-measures ANOVA revealed a significant main effect of environment F(1, 19) = 6.020,

N1 peak latency
The three-way repeated-measures ANOVA showed no significant effects of any factors or their interactions (all values of p > 0.05) for the N1 peak latency at the frontal and central regions.

P1 component
Mean amplitude, peak amplitude, and peak latency of the P1 component (between 120 and 200 ms) were calculated at the temporal (T5 + T6) and occipital (O1 + O2) regions (Table 2). All the statistical analyses are summarised in Table 4.

P3 peak latency
The results of the three-way repeated-measures ANOVA for the P3 peak latencies revealed that there were significant main effects of cognitive workload F(1, 19)

Discussion
In our experimental investigation, we successfully acquired ERP waveforms in response to a visual n-back cognitive task in participants wearing an unmodified HTC Vive VR HMD system on top of the EEG headcap. We used a within-group design, where the VR HMD and the control desktop CS experimental protocols were held in the same session with participants wearing the same electrode headcap. In our CS control environment, the ERP responses to the n-back task contained the P3, N1 and P1 waveform components of interest. The peak amplitudes and latencies of these components were similar to those shown and reported previously in ERP studies of the n-back task using traditional CS setups (Brouwer et al. 2012;Chen et al. 2019;Daffner et al. 2011;Hajcak et al. 2010;Kwon et al. 2018;Scharinger et al. 2015;Watter et al. 2001). In our investigation, we found that the P3, N1 and P1 waveform components followed a similar time course in both VR HMD and CS environments at all EEG electrode locations.
The most prominent amplitude ERP waveform feature we observed in response to the n-back task for both CS and VR environments was the P3 component. No main effect of environment was found for this component (mean/peak amplitudes or peak latency), indicating the similarity of the P3 waveforms obtained in VR HMD and CS environments. Our results revealed that the mean and peak P3 amplitudes were significantly higher in the parietal region for both VR and CS environments compared to the other brain regions. Similarly, Spapé et al. (2019) found the highest P3 amplitude at the parietal Pz electrode using the Oculus Rift VR HMD system with a decision-making game task. In addition, the comparative experimental study of Harjunen et al. (2017) also revealed the highest P3 peak amplitude to a cross-modal paradigm at the parietal Pz electrode using either Oculus Rift VR HMD or desktop CS environments. Our results and those of Harjunen et al. (2017) show that the higher amplitude at the parietal region seen for the P3 component in the CS environment remains high when the task is also presented in the VR HMD environment. Our ERP results indicate that usage of the VR HMD does not impact upon the P3 waveform, which is similar to that obtained in the CS environment.
In our study, a main effect of environment was found for the N1 peak amplitude, where it was significantly higher in the VR HMD environment compared to CS. Furthermore, a significant interaction effect between environment and region revealed a higher N1 peak amplitude at the frontal region compared to the central one. For the P1 mean amplitude, a significant interaction effect between environment and region was also found, showing that there was a higher P1 mean amplitude for VR HMD compared to CS at the occipital region versus the temporal. One explanation for the higher N1/P1 amplitudes in VR HMD may be related to the relatively more controlled environment in this modality (VR screen display expanded to fit the full field of view with a black void in the periphery) compared to CS (real life laboratory setting with various peripheral objects such as a table and walls). The higher N1/P1 components in VR HMD compared to CS may be related to differences in attention levels between these two screen environments. Higher amplitude anterior N1 and posterior P1 components are associated with elevated early attentional processes (Hillyard and Anllo-Vento 1998;Woodman 2010;Hinojosa et al. 2015). Increases in the focus of attention in the visual field are thought to lead to an increase in the anterior N1 amplitude (Rugg et al. 1987;Luck et al. 2000). It may be the case that in the VR HMD environment there is a restricted field of view compared to CS, and participants thereby have increased focus upon the presented stimuli leading to higher amplitude N1/P1 components.
Another possibility for the higher amplitude N1/P1 values in VR HMD could be related to light intensity (a low-level feature of the visual stimuli), which has been shown to affect early N1/P1 ERP components and influence perception of stimuli (Hillyard and Kutas 1983;Johannes et al. 1995;Wijers et al. 1997;Busch et al. 2004;Alorda et al. 2007;Lakens et al. 2013;Schettino et al. 2016;Schindler et al. 2018). Since the intensity of light during the presentation of n-back trials for CS and VR HMD was similar (< 0.5 lx difference), a change in light intensity between the two environments is unlikely to explain the higher amplitude N1/P1 components in VR HMD. Our results suggest that researchers investigating N1/P1 components could consider utilising VR HMD, as N1/P1 waveforms, based upon our n-back task results, are more discernible in amplitude compared to CS.
We cannot exclude the possibility that there could be low-level features which account for the N1/P1 component differences between CS and VR HMD. For example, the size of the n-back letter stimuli presented may not have been the same for CS and VR HMD despite our attempts to give each participant similar screen experiences in both environments (VR HMD lens adjustments, proportional mirroring and adjustments in the parameters for size/depth). Ideally, the size of the letters presented in VR HMD should be measured, but unlike for CS, we could not identify a technically feasible method to measure the size of the letter stimuli presented in the VR HMD screen. Other low-level features which could account for the differences in N1/P1 components between the two environments may be related to the screen type (LCD versus OLED, which could impact upon levels of contrast), screen resolutions and refresh rates.
The anterior N1 and posterior P1 ERP components appear to have a close temporal coincidence in the desktop CS environment (see Fig. 4) which may reflect similar dipole generators (Hillyard and Anllo-Vento 1998;Joyce and Rossion 2005). For the VR HMD environment we also found a close temporal coincidence for the anterior N1 and posterior P1 components. Further investigation using source localisation techniques could reveal the localisation of the neuronal sources for the CS and VR HMD environments.
In our group of participants, physical discomfort was not reported using the HTC Vive system. In contrast, discomfort and pain were reported in the study by Tauscher et al. (2019), which they attributed to the physical pressure placed upon EEG electrodes of the headcap by the overlying holding strap of the HTC Vive VR HMD. One likely explanation for this difference between the studies is that we used an EEG headcap where the electrodes are encased in a soft sponge material (Sleepcap, Spes Medica), whereas Tauscher et al. (2019) utilised a headcap with electrodes encased in a hard plastic surround. Although Tauscher et al. (2019) were able to overcome participant discomfort by making cut-outs in the HTC Vive strap, we suggest that it may instead be more practical and comfortable, when combining EEG headcaps with VR HMD systems, to utilise headcaps with electrodes encased in soft materials to absorb pressure from the overlying holding strap of the VR HMD system. In addition, since the EEG headcap used in our investigation has all the wires from each electrode located internally within the cap, this is advantageous in terms of the practicality of donning the VR HMD system, keeping it unmodified and minimising EEG artifacts arising from electrode wire disturbance.
We used an experimental design in our study where participants donned the same EEG headcap (without removal) for the CS and VR HMD experimental sessions, which were completed within ~ 25 min of each other. This design minimises any participant discomfort from wearing the relatively bulky HTC Vive system, and minimises potential differences in electrode location and ERP responses to stimuli that may arise if long periods elapse between sessions, such as when the CS and VR HMD sessions are undertaken on different days. None of our participants reported tiredness or discomfort when undergoing data acquisition using the EEG/HTC Vive system combination. In contrast, Rupp et al. (2019) mentioned discomfort and neck strain in some participants and attributed this to the VR HMD exerting pressure on the face due to its weight and bulk. The discomfort may have been due to the relatively longer experimental session (~ 50 min) in the investigation by Rupp et al. (2019) versus our shorter session (~ 25 min). Future studies should consider the weight of the VR HMD, and use lighter-weight VR HMDs for longer duration protocols to minimise participant discomfort.
In our current investigation, participants were required to look at each presented stimulus at the centre of the screen and to refrain from making head movements during EEG recordings whilst wearing the HTC Vive VR HMD to minimise eye movement and head movement-induced artifacts. Given the immersive multidimensional nature of a VR HMD system, it would be desirable to acquire EEG data during stimulus presentations, where the participant is able to orientate the head/eyes to any location in the visual field. Tauscher et al. (2019) undertook a comparative investigation of recording ERP visual oddball task responses both without participant head motion (static) and with horizontal and vertical head motions using an EEG system with acquisition electronics mounted at the back of the head. The quality of the ERP waveforms was found to be substantially reduced by horizontal, and especially by vertical, head motions in VR HMD compared to participants looking at a computer screen whilst maintaining a fixed position.
To minimise movement-induced artifacts in the EEG signals, we did not make use of the associated HTC Vive hand controllers as an input method for participants to select target stimuli in our ERP investigation. Hand controllers with motion tracking were utilised in a visual n-back task study conducted by Tremmel et al. (2019) using the HTC Vive system, where participants used their dominant hand to place target/non-target balls upon a receptacle. However, an ERP analysis approach was not employed by these authors; instead, they used a power spectral analysis of the various EEG frequency bands. It would be valuable in future studies of VR HMD and ERP if the motion activity and button press responses of hand controllers could be time-stamped onto the EEG recording.
In our investigation, we used Bigscreen Beta software to mirror the desktop CS, which was running PEBL presentation software, to enable time-locked delivery of each stimulus. A similar setup was used by Rupp et al. (2019), who combined an HTC Vive VR HMD with an EEG system and demonstrated the feasibility of acquiring ERPs in response to visual attention and memory tasks using Unreal Engine 4 software for time-locked stimulus delivery. In our study, we duplicated the contents of the desktop CS screen on the HTC Vive, whereas Rupp et al. (2019) generated a virtual computer screen placed upon a virtual table. Notwithstanding this difference, our ERP waveforms with the HTC Vive were similar in profile to those found by Rupp et al. (2019); the peak amplitudes of the P3 component in our study and in that of Rupp et al. (2019) were both in the range 5-10 μV, with peak latencies of ~ 400 ms.
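For readers unfamiliar with how such P3 peak measures are derived from an averaged ERP, a minimal sketch is shown below; the synthetic waveform, sampling rate and 250-600 ms search window are illustrative assumptions only, not parameters from our study or from Rupp et al. (2019):

```python
# Illustrative sketch: peak amplitude and latency of a P3-like component
# from an averaged ERP. Waveform, sampling rate and search window are
# synthetic assumptions for demonstration only.
import math

fs = 250.0  # assumed sampling rate (Hz); one sample every 4 ms
# Synthetic averaged ERP (microvolts): positive deflection peaking at 400 ms
erp = [7.0 * math.exp(-((i * 1000.0 / fs - 400.0) / 80.0) ** 2)
       for i in range(int(fs))]  # samples spanning 0-996 ms post-stimulus

def peak_in_window(wave, fs, t_min_ms, t_max_ms):
    """Return (peak amplitude, peak latency in ms) within a time window."""
    i0 = int(t_min_ms * fs / 1000.0)
    i1 = int(t_max_ms * fs / 1000.0)
    i_peak = max(range(i0, i1 + 1), key=lambda i: wave[i])
    return wave[i_peak], i_peak * 1000.0 / fs

amp, lat = peak_in_window(erp, fs, 250.0, 600.0)
print(round(amp, 1), lat)  # peak amplitude (µV) and latency (ms)
```

Mean amplitude over the same window is computed analogously by averaging the samples between the two indices rather than taking their maximum.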
Adverse effects termed cybersickness, simulator sickness, motion sickness, nausea and disorientation have been reported in participants using VR HMD systems (Moss and Muth 2011; Arafat et al. 2018; Saredakis et al. 2020; Heo and Yoon 2020). Such adverse symptoms may arise from various factors, including the content presented in VR, the amount of visual motion, participant locomotion, the time spent in the VR environment, user demographic profiles and the technical specifications of the hardware used (see Kourtesis et al. 2019; Saredakis et al. 2020). Only one participant (of 21) in our visual ERP investigation reported a sensation of motion sickness when wearing the HTC Vive system. In their ERP study, Harjunen et al. (2017) reported a single participant (of 12) who experienced nausea using the Oculus Rift DK2. Other recent visual ERP investigations have not reported motion sickness/nausea in participants using VR HMD sets (Tromp et al. 2018; Rupp et al. 2019; Tauscher et al. 2019; Stolz et al. 2019; Du et al. 2019; Spapé et al. 2019; Sun et al. 2019). We recommend that all EEG studies integrating VR HMD systems explicitly report the number of participants who experience motion sickness/nausea, including when none is experienced. Given the rapid pace of improvements in system technical specifications, including higher display resolutions and faster image refresh rates (see Kourtesis et al. 2019), the incidence of reported motion sickness/nausea can be expected to decrease in future studies using VR HMD systems.

Conclusion
We have demonstrated the experimental feasibility of acquiring visual electrophysiological ERPs in response to a cognitive working memory task whilst a VR HMD is worn over an EEG headcap, without the need for any modifications or customisations. The ERP waveforms obtained using the VR HMD followed a similar time course to those acquired in the CS environment. The P3 mean and peak amplitudes obtained in the VR environment were not significantly different from those acquired in the CS environment. However, we did find significant differences between the VR HMD and CS responses for the early ERP components. The N1 component was significantly higher in mean and peak amplitude for the VR HMD environment compared to CS at the frontal electrodes. For VR HMD, but not for CS, P1 mean and peak amplitudes were significantly higher at the occipital region compared to the temporal region. Based upon the n-back task, our results indicate that researchers can take advantage of VR HMD systems to acquire ERP waveforms from participants undertaking cognitive tasks in experimental brain research investigations. Although requiring further study, the higher amplitude N1/P1 components found using VR HMD indicate the potential usefulness of this modality in the investigation of early ERP components.