A total of 24 participants (10 female), aged between 40 and 71 years (M = 57.0; SD = 9.5), took part in this study. All were right-handed, as verified by the Edinburgh Handedness Inventory (EHI; Oldfield, 1971), and had normal or corrected-to-normal vision. On average, participants had received 11.5 years of education (SD = 1.8) and had an estimated IQ of 107.1 (SD = 9.9), as assessed with a German vocabulary test (MWT-B; Lehrl, 1999). No participant had a history of neurological or psychiatric disease. The study was approved by the Ethics Committee of the Jena University Hospital, and all participants gave written informed consent prior to participation, in accordance with the Declaration of Helsinki. Each participant received a reimbursement of €30.
Participants underwent a single session lasting approximately 2 h, of which 40 min were used for questionnaires and screening tests; the remaining time was allotted to the experimental conditions, with breaks taken as needed.
The tapping task used a simple sequence: the index finger of the dominant hand pressed the “1” key, and the middle finger of the same hand pressed the “2” key on a separate numeric keyboard. This “1, 2” sequence was tapped repetitively at a subjectively preferred speed. As all participants were right-handed, the tapped sequence was identical for every participant. Following the methodology described by Kane and Engle (2000), the tapping task consisted of three blocks. The first block, which lasted 30 s, familiarised the participant with the sequence; if poorly performed, it could be repeated. Once it was successfully executed, the second block commenced, during which the average tapping speed was calculated over a duration of 60 s. If the wrong key was pressed, auditory feedback in the form of a beep was provided. If this block was also completed successfully, the participant went on to the final block. Here, a tolerance buffer of 150 ms was added to the average tapping speed calculated in the second block, and the resulting value served as the cut-off for the participant’s subsequent performance. If the participant took longer than this cut-off to press a key, or pressed the wrong key, auditory feedback was again provided. This final block lasted for 3 min, a time-span chosen because 3 min reflects the average length of a block in the whole report task. All participants were asked whether they could tap without discomfort for this period, and none of them reported any problems. Each tap made in this final block was recorded in a text file, along with its time stamp, the key that was pressed, the correct response, and the time taken to press the key.
This allowed error rates and tapping speeds to be established for each participant post hoc, and made it possible to compare the time stamps of the responses given in each task.
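The cut-off logic for the final tapping block can be sketched as follows. This is a minimal illustration, not the original Matlab implementation; the function names and the representation of taps as (key, inter-tap interval) pairs are assumptions for the sake of the example.

```python
def tapping_cutoff(calibration_intervals_ms, tolerance_ms=150):
    """Cut-off speed: mean inter-tap interval from the 60-s calibration
    block plus the 150-ms tolerance buffer described in the text."""
    mean_interval = sum(calibration_intervals_ms) / len(calibration_intervals_ms)
    return mean_interval + tolerance_ms

def classify_taps(taps, cutoff_ms, sequence=("1", "2")):
    """Flag each tap as an error if the wrong key was pressed or the
    inter-tap interval exceeded the cut-off (both triggered a beep).
    `taps` is a list of (key, interval_ms) pairs."""
    errors = []
    for i, (key, interval_ms) in enumerate(taps):
        expected = sequence[i % len(sequence)]  # alternating "1, 2" sequence
        errors.append(key != expected or interval_ms > cutoff_ms)
    return errors
```

For example, a participant averaging 500 ms per tap in the calibration block would have a cut-off of 650 ms in the final block.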
Whole report task
The whole report task was run in Matlab (MathWorks, 2012) using Psychtoolbox (Brainard, 1997). Participants received task instructions on-screen, along with two examples to elucidate the instructions. Following this, a pre-test consisting of 12 triples of trials, divided into 4 blocks with 12 trials per block, was run. This pre-test familiarised the participant with the task and identified the appropriate exposure durations for each participant using an adaptive staircase procedure. Each triple consisted of two trials that were not used for adjustment; these were either unmasked with an exposure duration of 200 ms or masked with an exposure duration of 250 ms. The remaining trial in each triple was critical for adjustment; it was masked and initially displayed for 100 ms. If at least one letter in such a critical trial was reported correctly, the exposure duration was decreased by 10 ms in the following critical trial. This was repeated until an exposure duration was reached at which the participant could no longer report even one letter correctly. This value was taken as the lowest exposure duration and was combined with four longer exposure durations during the remainder of the experiment, picked from a pre-defined list based on the value of the lowest exposure duration. In 18 participants, the exposure durations used were 10, 20, 40, 90, and 200 ms. A further three participants had exposure durations of 20, 40, 60, 120, and 210 ms, whilst one participant had exposure durations of 30, 50, 80, 130, and 220 ms. Finally, two participants were tested using exposure durations of 40, 60, 100, 150, and 230 ms. In five of the conditions, stimuli were followed by a mask to avoid visual persistence effects. The mask consisted of red-and-blue scattered squares of 1.3° size appearing at each stimulus location for 500 ms.
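The adaptive staircase on the critical trials can be summarised in a few lines. This is a hypothetical sketch: `report_one_correct` stands in for the participant's performance on a critical trial at a given exposure duration, and the 10-ms floor is an assumption (the shortest duration actually used in the experiment).

```python
def staircase_lowest_duration(report_one_correct, start_ms=100, step_ms=10, floor_ms=10):
    """Adaptive staircase: starting at 100 ms, decrease the exposure
    duration by 10 ms after every critical trial in which at least one
    letter was reported correctly; stop at the first duration with zero
    correct reports (or at the floor) and return it as the lowest
    exposure duration."""
    duration = start_ms
    while duration > floor_ms and report_one_correct(duration):
        duration -= step_ms
    return duration
```

For instance, a participant who can report at least one letter at any duration above 40 ms would end the staircase with a lowest exposure duration of 40 ms.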
Furthermore, to enhance the variability of exposure durations, two unmasked conditions were additionally used, i.e., the second-shortest and the longest exposure durations were each presented both masked and unmasked. In unmasked trials, visual persistence increases the duration of information uptake by several hundred milliseconds (Sperling, 1960; Dick, 1974). This additional duration is estimated by the parameter µ in TVA-based fitting of whole report performance; here, this parameter only serves the valid estimation of the remaining parameters and is of no further interest for this study. This resulted in seven effective exposure conditions, with 20 trials per condition; the whole experiment thus consisted of 140 trials, divided into 4 blocks. Such variability of exposure durations allowed a broad range of whole report performance to be measured. Lower exposure durations allow valid estimation of the perceptual threshold t0, which in turn is decisive for estimating the rate of information uptake at t0, i.e., the visual processing speed C. Higher exposure durations are necessary for obtaining precise estimates of the asymptotic level of performance, i.e., of the VSTM storage capacity K. An example of a trial sequence is given in Fig. 1.
As can be seen from Fig. 1, a fixation point was presented on the screen for 1000 ms. Following this, six different isoluminant letters were presented equidistantly in a circle around the fixation point. These target letters were either all red or all blue [CIE red = (0.49, 0.515, 0.322); CIE blue = (0.49, 0.148, 0.068)] and were selected randomly from a pre-specified set of letters (excluding I, Q, and Y). The letters measured 1.5 cm by 1.5 cm, with the luminance set to 0.49 cd/m², thereby ensuring that red and blue targets were of the same task difficulty. In masked trials, the masks consisted of 2.0 cm by 2.0 cm squares of overlapping blue [CIE L*a*b* blue = (17.95, 45.15, −67.08)] and red [CIE L*a*b* red = (28.51, 46.06, 41.28)] flecks. After this, the screen went blank, and the participant had to verbally report as many target letters as possible, in any order. It was emphasised that this was not a speeded task, so each participant could take as much time as necessary to respond. The researcher, seated to the side and slightly behind the participant, then entered the reported letters via a keyboard before proceeding to the next trial. The reported letters, as well as the time stamps of each trial, were exported to a text file. After each block, participants received visual, on-screen feedback on their accuracy for the letters they actually reported. To avoid both overly liberal and overly conservative responses, participants were encouraged to aim for an accuracy of 70–90%, indicated by a green area on the accuracy bar. If their accuracy was below 70%, participants were asked to report only those letters they were fairly confident of having seen. If their accuracy was above 90%, participants were encouraged to be less conservative and report more target letters, even if they did not feel entirely confident.
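The block-wise feedback rule reduces to a simple threshold check. The sketch below is illustrative only; the function name and the wording of the messages are assumptions, and the on-screen feedback was in fact presented graphically via an accuracy bar.

```python
def accuracy_feedback(accuracy):
    """Block-wise feedback rule: participants were encouraged to keep
    their report accuracy within the 70-90 % target range."""
    if accuracy < 0.70:
        # Too liberal: reporting letters that were not actually seen.
        return "Report only letters you are fairly confident of having seen."
    if accuracy > 0.90:
        # Too conservative: withholding letters that were likely seen.
        return "Report more letters, even if you are not entirely confident."
    return "Accuracy in the target range; keep responding as before."
```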
The task order was counterbalanced: 12 participants completed the single-task condition before the dual-task condition, and 12 completed it afterwards. In the dual-task condition, all participants started with the training and speed-adjustment blocks of the tapping task before the whole report paradigm was started. During the dual-task, it was ensured that participants did not visually monitor their tapping on the keyboard, but instead constantly fixated on the screen. The screen was adjusted for each participant so that the central fixation point was at eye level. Due to the set-up of the apparatus, participants’ hands were located below the periphery of their visual field; thus, to visually monitor their tapping, they would have had to move their heads to see their hands (merely shifting their gaze downwards would not have been sufficient). The experimenter specifically monitored this and ensured that no participant looked away from the central fixation point throughout the dual-task condition.
Data obtained through the whole report paradigm were analysed using the LIBTVA script developed by Dyrholm (2012), run in Matlab (MathWorks, 2012), to obtain a TVA-based maximum likelihood fit for each participant’s data. This fitting method uses the observed data points to estimate the model parameters under the fixed-capacity independent race model (see Shibuya & Bundesen, 1988). Moreover, to restrict the analysis to data in which both tasks were executed successfully, dual-task trials in which a tapping error had occurred were excluded. The fitting yielded the goodness of fit and the visual attentional parameters of each participant, and thereby how these parameters were affected by motor-cognitive dual-tasking.
In addition to the exact parameter estimates, 200 bootstrapping estimates were derived (Efron & Tibshirani, 1993) to obtain quantitative estimates of the robustness of the maximum likelihood estimates produced by the TVA fitting (see Habekost & Bundesen, 2003). To that end, the original dataset was resampled by drawing 140 “new” trials at random, with replacement, from the original sample of 140 trials. This procedure was repeated 200 times (for each experimental condition and participant), and a TVA-based maximum likelihood fit was computed for each of the resulting 200 bootstrapping samples. The standard deviations of these bootstrapping estimates may be taken as quantitative estimates of the standard errors of the original parameter estimates (Habekost & Bundesen, 2003). Note that, because each original trial can be drawn 0, 1, 2, …, or up to n times during resampling, resampling an original mixture of trials with fluctuating (“normal” and “0”) rates of information uptake should result in increased standard errors of the bootstrapping estimates. Conversely, rather constant rates of information uptake across the dual-task condition should lead to a probability of producing extreme deviations from the mean during the bootstrapping process that is as low as in the standard, single-task condition. Note also that the same arguments apply to the estimation of the whole set of parameters (i.e., also to the t0 and K estimates).
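The resampling scheme itself is generic and can be sketched independently of the TVA model. In this minimal illustration, `fit_parameters` is a hypothetical stand-in for the TVA maximum likelihood fit (it simply needs to map a sample of trials to a dictionary of parameter estimates); the actual analysis used the LIBTVA fitting routines.

```python
import random

def bootstrap_se(trials, fit_parameters, n_boot=200, seed=0):
    """Draw len(trials) trials with replacement, refit, repeat n_boot
    times, and return the standard deviation of the bootstrap estimates
    per parameter as an estimate of its standard error."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_boot):
        sample = [rng.choice(trials) for _ in trials]  # resample with replacement
        estimates.append(fit_parameters(sample))
    se = {}
    for name in estimates[0]:
        vals = [e[name] for e in estimates]
        mean = sum(vals) / len(vals)
        # sample standard deviation across the n_boot refits
        se[name] = (sum((v - mean) ** 2 for v in vals) / (len(vals) - 1)) ** 0.5
    return se
```

With a toy "fit" that just returns the sample mean of 140 binary trials, the bootstrap standard error comes out close to the analytic standard error of the mean, which is the behaviour the procedure relies on.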
Calculation of dual-task costs
To normalise the dual-task costs (see Boisgontier et al., 2013), the following formula was used when an increase in the metric indicated a dual-task cost (as for the t0 parameter): DTC = [(DT − ST)/ST] × 100; when a decrease in the metric indicated a dual-task cost (as for the C and K parameters), DTC = [(ST − DT)/ST] × 100 was used instead (where DTC = dual-task cost, ST = single-task performance, and DT = dual-task performance).
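The two formulas above can be combined into a single function; the function name and the boolean flag are illustrative choices, not part of the original analysis script.

```python
def dual_task_cost(single, dual, higher_is_cost):
    """Normalised dual-task cost in percent (Boisgontier et al., 2013).
    higher_is_cost=True when an increase in the metric indicates a cost
    (e.g. t0); False when a decrease does (e.g. C and K)."""
    if higher_is_cost:
        return (dual - single) / single * 100
    return (single - dual) / single * 100
```

For example, a perceptual threshold t0 rising from 100 ms (single task) to 120 ms (dual task) yields a cost of 20%, as does a storage capacity K dropping from 5.0 to 4.0 letters.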
To minimise distractions, the tests were administered in a dimly lit and sound-attenuated room. The entire experiment was run on a Fujitsu Lifebook E series laptop, with a separate numeric keyboard used for the tapping task. Stimuli, however, were presented on an ASUS VG248 17-inch monitor with a refresh rate of 100 Hz and a resolution of 1024 × 768 pixels. To ensure a viewing distance of 60 cm, neither the chair on which the participant sat nor the table on which the screen was placed was moveable. Furthermore, the distance between the participant and the screen was demarcated with tape.