In this experiment, we investigated prioritization and attention-allocation schemas during a demanding time-sharing situation. Participants monitored four subtasks with different event rates (and hence different priorities) while their eye movements were recorded. The event rates (priorities) of the subtasks remained constant across trials throughout the session to allow participants to learn and adapt to the subtasks’ individual attention demands. According to threaded cognition (Salvucci & Taatgen, 2011), attention allocation was expected to improve from trial to trial as the attention-allocation policy is gradually adjusted to meet task requirements, until attention is eventually distributed across the subtasks according to their individual priorities. At first, when participants face an unfamiliar task, both attention allocation and time-sharing performance should be far from optimal. In fact, threaded cognition predicts that initially there should be no significant differences in the amount of attention allocated to each subtask. By contrast, an approach involving tighter executive control, such as the SAS of Norman and Shallice (1986), would allow fast and effective adaptation to the task by assuming that task priorities are fully perceived, understood, and formulated into a higher-level control strategy or schema. If task priorities are not correctly understood or a control strategy cannot be implemented, this approach predicts generally weak time-sharing performance, with only slow improvement in individual subtask performance through learning.
Method
Participants
Nineteen psychology students at the University of Turku were recruited for the experiment (18 female, one male; mean age 26 years). All participants had normal or corrected-to-normal vision. Participants were asked about their active computer-gaming experience; one reported being an active gamer.
Apparatus
Eye movements were recorded with a desktop-mounted EyeLink 1000 system (SR Research Ltd., Ontario, Canada) at a sampling frequency of 1,000 Hz. The stimuli were presented on a 21-in. CRT monitor with a resolution of 800 × 600 pixels and a 75-Hz refresh rate. Participants were seated 57 cm from the screen, and a chin rest was used to stabilize the head. Stimuli were created with E-Prime software (Schneider et al., 2002a, 2002b).
Stimuli
Stimuli consisted of four progress indicators presented on a white background (see Fig. 1). The screen (29.2° VA horizontally and 21.4° VA vertically) was divided into four quarters by a black line, with one indicator in each quarter. Each indicator comprised a blue frame (275 pixels/10.2° VA horizontally and 40 pixels/1.4° VA vertically), a moving blue pointer (40 pixels/1.4° VA vertically), and a stationary red target bar (40 pixels/1.4° VA vertically) above the frame. A reset button (150 pixels/5.6° VA horizontally and 50 pixels/1.8° VA vertically) was placed under each indicator. Each blue pointer moved horizontally within its frame at a constant speed (20.25 pixels/0.70° VA per second), independently of the others. At the beginning of a trial, the pointer started moving automatically from the left edge of the frame and stopped once it reached the right edge. Participants could return the pointer to the starting position by pressing the respective reset button with the computer mouse, after which the pointer started moving again. The red target bar above the frame was positioned in one of four possible positions: 60, 120, 180, or 240 pixels (2.2, 4.5, 6.7, or 8.9° VA) from the left edge of the frame. The target positions determined the frequency of pointer-target encounters (i.e., the subtask event rate): 0.34, 0.17, 0.11, and 0.08 Hz, respectively.
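For reference, these event rates follow directly from the pointer speed and the target distances. The short sketch below (in Python, not part of the original materials) reproduces the arithmetic under the assumption that an indicator is reset as soon as the pointer reaches the target.

```python
# Each subtask's event rate follows from the pointer speed and the target
# position, assuming the pointer is reset as soon as it reaches the target.
pointer_speed_px_s = 20.25  # constant pointer speed (pixels per second)

for target_px in (60, 120, 180, 240):  # target bar distance from the left edge
    cycle_s = target_px / pointer_speed_px_s  # time for one pointer-target encounter
    print(f"{target_px} px -> {1 / cycle_s:.2f} Hz")

# Prints: 60 px -> 0.34 Hz, 120 px -> 0.17 Hz, 180 px -> 0.11 Hz, 240 px -> 0.08 Hz
```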
Experimental task
Participants’ task was to monitor all four indicators and to perform the task as accurately as possible by avoiding early and late resets. To motivate participants to perform at their maximum level, a feedback system was used. If the reset button was pressed while the pointer was within ±2 pixels of the target bar position (the reward zone), the participant was rewarded with 10 points, signaled by a distinct reward tone. If the pointer was outside the reward zone at the moment of reset, the participant was penalized 2 points, signaled by a distinct penalty tone. Finally, if the pointer passed the reward zone, the participant was penalized 2 points per second until the indicator was reset. Rewards and penalties were summed into a composite score that was displayed on a counter in the middle of the screen. Participants were instructed to pay attention to all subtasks and to collect as many points as possible. No information about task priorities or allocation strategies was given to the participants.
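To make the feedback rules concrete, the following sketch summarizes the scoring logic as we read it. The function and its arguments are hypothetical; the ±2-pixel reward zone, the 10-point reward, the 2-point penalty, and the 2-points-per-second overrun penalty are taken from the description above, and whether a late reset also incurs the one-time penalty is left as an assumption.

```python
def score_reset(pointer_px: float, target_px: float, overrun_s: float = 0.0) -> float:
    """Points gained or lost by a single reset (hypothetical helper).

    pointer_px -- pointer position at the moment of reset
    target_px  -- position of the red target bar
    overrun_s  -- seconds the pointer spent past the reward zone before the reset
    """
    reward_zone_px = 2  # reward zone extends 2 pixels on either side of the target

    if overrun_s > 0:
        return -2.0 * overrun_s                  # late: -2 points per second until reset
    if abs(pointer_px - target_px) <= reward_zone_px:
        return 10.0                              # reset inside the reward zone
    return -2.0                                  # early reset outside the reward zone


# Example: a reset 1 px short of the target earns +10; resetting about 3 s late costs 6 points.
assert score_reset(pointer_px=119, target_px=120) == 10.0
assert score_reset(pointer_px=181, target_px=120, overrun_s=3.0) == -6.0
```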
Dependent variables
Eye-movement data were parsed into fixations, and each fixation was assigned to one of five areas of interest (the four subtasks and the score counter). Fixations shorter than 80 ms were excluded. The main dependent measures derived from the eye-movement data were the average percentage of trial time spent looking at each subtask, the visual sampling rate, and the dwell time. Other dependent measures included the reset error rate, the time to first action, and the composite score of each trial.
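As an illustration of how these measures can be derived from parsed eye-movement data, the sketch below computes per-AOI total dwell time, number of gaze visits, mean dwell duration, and percentage of trial time from a toy fixation list. The column names and sample values are hypothetical; the 80-ms exclusion threshold and the 60-s trial duration come from the Method.

```python
import pandas as pd

TRIAL_DURATION_S = 60.0  # each trial lasted one minute

# Hypothetical parsed fixation list: one row per fixation, already assigned to an
# area of interest (one of the four subtasks or the score counter).
fixations = pd.DataFrame({
    "aoi":         ["task_034", "task_034", "task_017", "counter", "task_034"],
    "duration_ms": [250, 180, 60, 320, 210],
})

# Exclude fixations shorter than 80 ms.
fixations = fixations[fixations["duration_ms"] >= 80]

# A dwell (gaze visit) is a run of consecutive fixations on the same AOI.
visit_id = (fixations["aoi"] != fixations["aoi"].shift()).cumsum().rename("visit")
dwells = (fixations.groupby([visit_id, "aoi"])["duration_ms"]
          .sum().reset_index(name="dwell_ms"))

# Aggregate per AOI: total dwell time, number of visits, mean dwell duration.
summary = dwells.groupby("aoi")["dwell_ms"].agg(
    total_dwell_ms="sum", n_visits="count", mean_dwell_ms="mean")
summary["pct_trial_time"] = 100 * summary["total_dwell_ms"] / (TRIAL_DURATION_S * 1000)
print(summary)
```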
Design
Two independent variables were manipulated: subtask event rate (0.34, 0.17, 0.11, and 0.08 Hz) and amount of practice. Participants’ time-sharing performance (the composite trial score) was included as a factor of interest.
Procedure
Participants were first shown a short PowerPoint presentation outlining the general procedure and explaining the trial sequence. The instructions were presented again on the computer display prior to the experiment. At the beginning of each session, the eye tracker was calibrated using a nine-point calibration, and drift correction was performed after every trial. A chin rest was used to reduce head movements and to control the viewing distance. Participants were encouraged to practice the task for two trials, after which they were given a chance to ask for further clarification. Each trial, involving all four subtasks, lasted 1 min, and there were 15 trials in the session. A 15-s rest period separated the trials, after which the next trial started automatically. During the rest period, the remaining time and the ordinal number of the next trial were displayed on the screen. After the session was completed, participants were asked to describe the performance strategy (if any) they had employed during the task. Eight participants (42.1 % of all participants) reported that they had focused on the subtask with the highest event rate, three (15.8 %) reported other strategies, and seven (36.8 %) reported having used no particular strategy; for one participant (5.3 %) this information was missing. The entire session lasted about 25 min.
Results
Effect of practice on the composite score
We analyzed the effect of practice on participants’ task performance by calculating the average score for each trial. Figure 2 shows the mean scores as a function of completed trials for all participants. Score data were submitted to a linear mixed effects analysis using SPSS 26.0.0.1 (SPSS, Inc., Chicago, IL, USA). Amount of practice (number of completed trials) was entered in the model as a fixed effect and participant as a random effect. A significant main effect of practice on average trial score was found, F(16, 1257.000) = 133.338, p < .001; participants improved their performance across trials. As is evident in Fig. 2, participants varied both in their level of time-sharing performance and in the effect of practice. All but two participants started with a negative trial score on the practice trials (p1 and p2) but then improved rather quickly. Some irregularities can be seen in participants’ level of performance from trial to trial: a high trial score could be followed by a considerable drop on the next trial, and vice versa.
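For readers who prefer an explicit model specification, the analysis above corresponds to a mixed model with trial number (amount of practice) as a categorical fixed effect and a by-participant random intercept. A rough Python/statsmodels equivalent is sketched below; the analysis itself was run in SPSS, and the file and column names here are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format table: one row per participant x trial,
# with the composite score obtained on that trial.
scores = pd.read_csv("trial_scores.csv")  # columns: participant, trial, score

# Amount of practice (trial number) as a categorical fixed effect,
# participant as a random intercept.
model = smf.mixedlm("score ~ C(trial)", data=scores, groups=scores["participant"])
print(model.fit().summary())
```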
Average percentage of trial time spent looking at different subtasks
The allocation of visual attention was analyzed by calculating the percentage of trial time participants spent fixating on each subtask. The data were submitted to a linear mixed effects analysis. Amount of practice (number of completed trials), subtask event rate, and trial score, as well as their interactions, were entered in the model as fixed effects. Participant and its interaction with subtask event rate were entered as random effects. The results are presented in Table 1. The estimated marginal means (EMMs) of the trial time percentages and their standard errors as a function of practice and trial score for the four subtasks are presented in Fig. 3. To illustrate the differences between lower and higher levels of time-sharing performance, the results are plotted for the lower and upper quartiles of the trial score distribution (36 and 132 reward points, respectively).
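The model reported in Table 1 crosses practice with subtask event rate and trial score and adds a by-participant random term for event rate. Under the same assumptions as the previous sketch (hypothetical column names, statsmodels standing in for SPSS), that structure could be specified as follows.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format table: one row per participant x trial x subtask.
# Columns: participant, trial, event_rate, score, pct_trial_time
gaze = pd.read_csv("gaze_measures.csv")

model = smf.mixedlm(
    # Fixed effects: practice, subtask event rate, trial score, and all interactions.
    "pct_trial_time ~ C(trial) * C(event_rate) * score",
    data=gaze,
    groups=gaze["participant"],      # by-participant random intercept
    re_formula="~C(event_rate)",     # by-participant random effect of subtask event rate
)
print(model.fit().summary())
```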
Table 1 The results of the linear mixed effects analyses for eye measures, reset accuracy, and time to first action
There was a significant main effect of subtask event rate on the percentage of trial time: the higher the event rate, the higher the percentage, indicating participants’ tendency to allocate more attention to subtasks with higher event rates. The main effects of amount of practice and trial score were not significant. The interaction between subtask event rate and trial score was significant, but the interaction between amount of practice and trial score was not. However, the interaction between subtask event rate and amount of practice and the three-way interaction between subtask event rate, amount of practice, and trial score were both significant.
To break apart the three-way interaction, a linear mixed effects analysis was calculated separately for each subtask, with amount of practice, trial score, and their interaction entered as fixed effects and participant as a random effect. The interaction between amount of practice and trial score was significant only for the subtask with the highest event rate (0.34 Hz), F(16, 271.893) = 2.467, p = .002 (for all other subtasks, Fs < 1). As is apparent from Fig. 3, when the subtask event rate was high, the lower-performing individuals increased the amount of visual attention across trials, whereas the higher-performing individuals set the level of attention higher from the start and kept it relatively unchanged throughout the session.
Visual sampling rate
To further investigate attention allocation during task performance, we calculated the rate at which participants visually sampled the subtasks throughout the session, that is, the number of gaze visits (enter and leave) to a subtask divided by the trial duration. Amount of practice, subtask event rate, and trial score, as well as their interactions, were entered in the model as fixed effects. Participant and its interaction with subtask event rate were entered as random effects. Figure 4 shows the EMMs of the visual sampling rates and their standard errors as a function of practice and trial score for the four subtasks.
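A minimal sketch of this computation is given below, with hypothetical visit counts for a single 60-s trial; for later reference it also lists the Senders (1964) optimum of twice the event rate and the Moray (1986)/Wickens (2015) optimum of the event rate itself, which are compared with the observed rates further down.

```python
TRIAL_DURATION_S = 60.0
EVENT_RATES_HZ = [0.34, 0.17, 0.11, 0.08]

# Hypothetical number of gaze visits to each subtask during one 60-s trial.
VISITS = [30, 19, 17, 14]

for event_rate, n_visits in zip(EVENT_RATES_HZ, VISITS):
    observed = n_visits / TRIAL_DURATION_S  # observed sampling rate (visits per second)
    senders = 2 * event_rate                # Senders (1964): twice the event rate
    moray = event_rate                      # Moray (1986) / Wickens (2015): the event rate
    print(f"{event_rate:.2f}-Hz subtask: observed {observed:.2f} Hz, "
          f"Senders optimum {senders:.2f} Hz, Moray optimum {moray:.2f} Hz")
```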
There were significant main effects of subtask event rate, amount of practice, and trial score on the visual sampling rate (see Table 1). The interaction between subtask event rate and amount of practice as well as between amount of practice and trial score were significant. The interactions between subtask event rate and trial score and the three-way interaction between subtask event rate, amount of practice, and trial score were not significant.
Breaking apart the interactions with linear mixed effects analyses revealed that the effect of practice was significant for the subtasks with the lowest and highest event rates, F(16, 273.943) = 2.323, p = .003, and F(16, 272.314) = 4.461, p < .001, respectively. The effect of practice for the other subtasks was non-significant: F(16, 272.976) = 1.253, p = .228 (0.11 Hz), and F < 1 (0.17 Hz). The visual sampling rate of the subtask with the highest event rate increased across trials, that of the subtask with the lowest event rate decreased slightly, and the sampling rate remained steady for the other subtasks (see Fig. 4).
The optimal sampling rates suggested by Senders (1964), two times the task event rate (i.e., 0.68, 0.34, 0.22, and 0.16 Hz for the four subtasks), are plotted in Fig. 4 for each subtask. Participants sampled the subtasks with event rates of 0.08 Hz and 0.11 Hz excessively, sampled the 0.17-Hz subtask at close to the optimal rate, and sampled the subtask with the highest event rate too infrequently. Thus, the aforementioned changes in sampling rates across trials for the 0.08-Hz and 0.34-Hz subtasks reflect participants’ attempts to adjust their sampling behavior to the subtasks’ requirements. Note, however, that according to the optimal sampling rates suggested by Moray (1986) and Wickens (2015), equal to the display’s event rate, the observed sampling rates would indicate excessive oversampling of every subtask. In this case, Senders’ estimate seems to fit the data better.
Mean dwell duration
The efficiency of information acquisition was analyzed by calculating the mean dwell duration for each subtask; shorter durations are assumed to reflect higher efficiency. Duration data were submitted to a linear mixed effects analysis. Amount of practice, subtask event rate, and trial score, as well as their interactions, were entered in the model as fixed effects. Participant and its interaction with subtask event rate were entered as random effects. The dwell durations and their standard errors as a function of amount of practice and trial score (at the first and third quartiles of the trial score distribution) for the four subtasks are shown in Fig. 5.
The main effect of subtask event rate on average dwell duration was significant (see Table 1); dwells were longer on subtasks with higher event rates. As with the trial time percentage and the visual sampling rate, participants thus differentiated between the subtasks in their dwell durations as well. The main effects of amount of practice and trial score were also significant. As is evident from Fig. 5, the dwell durations during the first few trials differed considerably from those of the later trials on all subtasks. For the subtasks with higher event rates (0.34 Hz and 0.17 Hz), dwell durations were almost twice as long during the first practice trial as on the rest of the trials, suggesting that participants initially examined these two subtasks more intensely. The two-way interaction between subtask event rate and trial score was significant; no other significant interactions were found.
The interaction was broken apart by calculating a linear mixed effects analysis separately for each subtask, with amount of practice, trial score, and their interaction entered as fixed effects and participant as a random effect. The effect of trial score was significant only for the subtask with the 0.34-Hz event rate, F(1, 271.970) = 10.897, p = .001. The effect of trial score for the other subtasks was non-significant: F < 1 (0.08 Hz), F(1, 281.309) = 1.566, p = .212 (0.11 Hz), and F(1, 261.006) = 3.743, p = .054 (0.17 Hz). As is evident from Fig. 5, high performers’ dwell durations on the subtask with the highest event rate were longer than those of low performers for the first third of the session. This may reflect high performers’ stronger effort to adapt to the features of the most important subtask.
Reset accuracy
The level of performance on each subtask was analyzed by computing the accuracy of resets. We calculated the mean absolute reset error in pixels for each subtask in each trial; a small reset error reflects high accuracy. Error data were submitted to a linear mixed effects analysis. Amount of practice, subtask event rate, and trial score, as well as their interactions, were entered as fixed effects in the model. Participant and its interaction with subtask event rate were entered as random effects. For illustrative purposes, the EMMs of the reset error and their standard errors as a function of amount of practice and trial score (the lower and upper quartiles of the trial score distribution) for the four subtasks are shown in Fig. 6.
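As a concrete illustration of this measure, the sketch below computes the mean absolute reset error per participant, trial, and subtask from a toy log of reset actions. The target positions match those given in the Stimuli section; everything else (column names, sample values) is hypothetical.

```python
import pandas as pd

# Hypothetical log of reset actions: one row per reset.
resets = pd.DataFrame({
    "participant": [1,    1,    1,    2],
    "trial":       [1,    1,    2,    1],
    "event_rate":  [0.34, 0.17, 0.34, 0.34],
    "target_px":   [60,   120,  60,   60],   # target bar position for that subtask
    "pointer_px":  [58,   125,  61,   59],   # pointer position at the moment of reset
})

# Mean absolute reset error (in pixels) per participant, trial, and subtask.
resets["abs_error_px"] = (resets["pointer_px"] - resets["target_px"]).abs()
mean_error = (resets.groupby(["participant", "trial", "event_rate"])["abs_error_px"]
              .mean().rename("mean_abs_reset_error_px"))
print(mean_error)
```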
The main effects of subtask event rate and amount of practice on reset accuracy were not significant (see Table 1), whereas the main effect of trial score was significant. The interactions between subtask event rate and amount of practice and between subtask event rate and trial score were not significant. However, the interaction between amount of practice and trial score and the three-way interaction between subtask event rate, amount of practice, and trial score were both significant.
To break apart the three-way interaction, a linear mixed effects analysis was calculated separately for each subtask, with amount of practice, trial score, and their interaction entered as fixed effects and participant as a random effect. The interaction between amount of practice and trial score was significant only for the subtask with the highest event rate (0.34 Hz), F(16, 271.944) = 2.235, p = .005. The interaction terms for the other subtasks were as follows: 0.17 Hz, F(16, 270.159) = 1.664, p = .053; 0.11 Hz, F < 1; 0.08 Hz, F < 1. As can be seen in Fig. 6, reset accuracy on the subtask with the highest event rate was initially poor among the lower-performing participants but improved with practice, whereas the higher-performing participants achieved a higher level of accuracy that remained relatively stable throughout the trials. Note that the negative reset errors in the first trials result from extrapolation when calculating the model-based EMMs; the original reset error values were all positive.
Time to first action (TFA)
The efficiency of time-sharing was analyzed by calculating the average time to first action on each subtask (following the idea presented in Rantanen, 2009). TFA was defined as the time between the moment participants’ gaze entered a subtask and the moment of the following reset action (i.e., from the onset of the dwell to the response). Shorter TFAs were assumed to indicate more precise attention shifting and thus more efficient time-sharing. TFA data were submitted to a linear mixed effects analysis. Amount of practice, subtask event rate, and trial score, as well as their interactions, were entered as fixed effects in the model. Participant and its interactions with subtask event rate and amount of practice were entered as random effects. For illustrative purposes, the EMMs of TFA and their standard errors as a function of amount of practice and trial score (the lower and upper quartiles of the trial score distribution) for the four subtasks are shown in Fig. 7.
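A minimal sketch of this calculation is given below, under the assumption that each reset is paired with the most recent gaze entry into the same subtask; the timestamps are hypothetical.

```python
# Hypothetical event times (in seconds) for one subtask within a single trial.
dwell_onsets = [2.1, 9.8, 17.4]   # moments the gaze entered the subtask
reset_times = [3.0, 18.1]         # moments the subtask's reset button was pressed

# Time to first action: latency from the most recent dwell onset preceding each reset.
tfas = []
for reset in reset_times:
    preceding = [onset for onset in dwell_onsets if onset <= reset]
    if preceding:
        tfas.append(reset - preceding[-1])

print([round(t, 2) for t in tfas])   # [0.9, 0.7]
mean_tfa = sum(tfas) / len(tfas)     # average TFA for this subtask and trial
print(round(mean_tfa, 2))            # 0.8
```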
There were significant main effects of subtask event rate and amount of practice on time to first action (see Table 1). The main effect of trial score was not significant. No significant two-way interactions were found between any of the variables. The TFAs were generally longer on subtasks with higher event rates. TFAs were also longer during the first few trials but they rapidly dropped to a relatively constant level for the rest of the session.
Discussion
The results of Experiment 1 suggest that all participants were able to recognize the subtasks’ different attentional requirements rapidly and allocated their attention accordingly. Participants invested more attention, in terms of percentage of trial time, visual sampling rate, dwell duration, and time to first action, in subtasks with higher event rates than in subtasks with lower event rates. This distinction was apparent from the very first trials of the session, clearly indicating that participants prioritized the subtasks.
In terms of the rate of visual sampling, the amount of allocated attention was excessive on all subtasks except the one with the highest event rate, on which it was insufficient compared with the optimum. However, participants increased the sampling rate on this subtask across trials, suggesting that they recognized this particular subtask as the most important and strived to optimize their performance on it. Participants’ strategy reports also indicate that almost half of them consciously recognized the priority of this subtask. The analysis of times to first action revealed that the attention shift preceding the reset took place earlier on subtasks with higher event rates than on subtasks with lower event rates, which further supports the idea of prioritization. Evidently, participants executed the attention shift early enough to make sure that they did not miss the moment of the reset action on the subtasks they perceived as most important.
Practice did not have a substantial effect on participants’ attention allocation. Participants determined the level of attention given to each subtask almost instantly and made only minor adjustments over the trials. The effect of practice was observed most clearly on the subtask with the highest event rate, for which all participants increased their visual sampling rate during the session. In addition, the lower-performing participants increased the percentage of trial time spent on the subtask with the highest event rate across trials.
The results of the dwell duration analysis do not support the prediction that higher performers’ superior performance is due to an ability to process information faster than lower performers. The analyses showed that higher-performing participants’ dwell durations on the subtask with the highest event rate were actually longer than those of lower performers during the first part of the session, suggesting that the speed of information acquisition is not a crucial factor behind efficient time-sharing. Instead, the longer dwells may indicate that higher performers initially analyzed the most important subtask more intensively in order to create an accurate mental model of the required actions. They may also reflect a capability to resist premature attention shifts, enabling higher performers to absorb more information about the subtask status and thus allowing more accurate performance control, which in turn resulted in more accurate resets.
Participants’ behavior is largely inconsistent with the predictions drawn from the assumptions of threaded cognition. All participants established, almost instantly, a clear distinction between the subtasks with regard to attention allocation, which is not in line with the gradual optimization proposed by threaded cognition. Although some gradual adjustment was observed during the first few trials, the amount of attention allocated to each subtask was clearly differentiated from the very beginning.