Media multitasking and implicit learning

Abstract

Media multitasking refers to the simultaneous use of different forms of media. Previous research comparing heavy media multitaskers and light media multitaskers suggests that heavy media multitaskers have a broader scope of attention. The present study explored whether these differences in attentional scope would lead to a greater degree of implicit learning for heavy media multitaskers. The study also examined whether media multitasking behaviour is associated with differences in visual working memory, and whether visual working memory differentially affects the ability to process contextual information. In addition to comparing extreme groups (heavy and light media multitaskers), the study included analysis of people who media multitask in moderation (intermediate media multitaskers). Ninety-four participants were divided into groups based on responses to the media use questionnaire, and completed the contextual cueing and n-back tasks. Results indicated that implicit learning developed more slowly in heavy media multitaskers than in both light and intermediate media multitaskers. There was no relationship between working memory performance and media multitasking group, and no relationship between working memory and implicit learning. There was also no evidence for superior performance of intermediate media multitaskers. The implicit learning deficit observed in heavy media multitaskers is consistent with previous literature suggesting that heavy media multitaskers perform more poorly than light media multitaskers on attentional tasks due to their wider attentional scope.

In a world of competing demands, it can be difficult to focus on one thing at a time. Recent improvements in the portability and accessibility of technology have resulted in more time with devices and the ability to perform more digital tasks at once (Carrier, Rosen, Cheever, & Lim, 2015; Kononova & Chiang, 2015; Rideout, Foehr, & Roberts, 2010). Media multitasking is defined as the use of different forms of media simultaneously (Ophir, Nass, & Wagner, 2009) and can occur on a single device (e.g. checking emails while browsing the internet), or across multiple devices (e.g. watching TV while browsing social media on a phone).

There has been a dramatic increase in media multitasking behaviour within the past few decades. For instance, an American study reported that during the last 10 years, the amount of time that young people spend media multitasking has increased by 120% (Rideout et al., 2010). In Australia, it was found that media multitasking occurred during 70% of computer sessions logged in a university system (Judd, 2013). In a study of everyday self-control, the desire to use media ranked fourth behind essential physical needs such as eating, drinking, and sleeping (Hofmann, Baumeister, Förster, & Vohs, 2012). Paradoxically, while people seem to be increasingly driven to multitask, most theories of human cognition posit that we are not particularly well suited to it, as our attentional systems have a limited capacity to process multiple streams of information (Broadbent, 1958; Deutsch & Deutsch, 1963; Salvucci & Taatgen, 2011; Treisman, 1960). Research also suggests that we may be oblivious to these limitations, as we are poor judges of our own multitasking ability (Sanbonmatsu, Strayer, Medeiros-Ward, & Watson, 2013).

In a seminal study, Ophir et al. (2009) used responses on a media multitasking questionnaire to examine habitual multitasking. The questionnaire asked participants to report the number of hours per week they engaged in 12 different media activities. For each type of media, participants were also asked how frequently they used the other 11 forms of media at the same time. The responses were used to compute a media multitasking index—the weighted average for the number of media used per hour. Essentially, this score demonstrates the frequency with which a respondent reports engaging in media multitasking.

A growing number of laboratory studies have used the media multitasking index to explore the relationship between media multitasking behaviour and cognitive processes. Studies comparing heavy and light media multitaskers have generally found that increased media multitasking is associated with poorer performance on various cognitive tasks. These include poorer control of attention (Cardoso-Leite et al., 2016; Ralph, Thomson, Cheyne, & Smilek, 2014), difficulty filtering out distracting information (Cain & Mitroff, 2011; Moisala et al., 2016; Ophir et al., 2009), difficulties encoding and retrieving information in memory (Uncapher, Thieu, & Wagner, 2016), and poorer performance on fluid intelligence tests (Minear, Brasher, McCurdy, Lewis, & Younggren, 2013). There is some evidence that those who multitask frequently also perform worse on measures of actual multitasking behaviour, such as switching attention between tasks (Ophir et al., 2009; for contrasting results, see Alzahabi & Becker, 2013; Minear et al., 2013). Furthermore, research examining personality and affective features indicates that heavy media multitasking is linked to higher trait impulsivity and sensation seeking (Minear et al., 2013; Sanbonmatsu et al., 2013), and to reduced well-being, including depression (Becker, Alzahabi, & Hopwood, 2013; Reinecke et al., 2016).

Media multitasking and breadth-biased attention

Research has explored the mechanism underlying these differences in performance between heavy and light media multitaskers. A number of studies have converged on the idea that the differences observed between heavy and light media multitaskers are driven by differences in their scope of attention (Cain & Mitroff, 2011; Lui & Wong, 2012; Ophir et al., 2009; Uncapher et al., 2016). Heavy media multitaskers are theorised to have a broader scope of attention, compared to light media multitaskers, and distribute attention more widely, attending to information regardless of whether it is pertinent to the goals of a given task (Ophir et al., 2009). For instance, heavy media multitaskers have been shown to take in more visual information relative to light media multitaskers. In Cain and Mitroff (2011), participants searched for a target shape (a green circle) within an array of nontarget shapes (green squares). In some trials, either the target (a circle) or one of the distractors (squares) was presented in red (an additional colour singleton). Participants were told that in the ‘sometimes’ condition the target could sometimes be red. In the ‘never’ condition they were told the target would never be red. Notably, heavy media multitaskers attended to the red colour singleton across both conditions, even when directed to ignore it (Cain & Mitroff, 2011). Having a breadth-biased scope of attention meant heavy media multitaskers attended to more of the existing visual information than was necessary to complete the task.

This attentional bias in heavy media multitaskers has also been observed in working memory processing. Ophir et al. (2009) found that compared to light media multitaskers, heavy media multitaskers were more affected by the presence of distractors during an n-back task. During the task, participants viewed a stream of shapes and had to indicate whether the current shape was the same as the stimulus presented two, three, or four items earlier. As working memory load increased from two-back to four-back, performance decreased for all participants; however, heavy media multitaskers showed a significantly greater increase in false alarm rates (i.e. a tendency to misidentify nontarget stimuli as targets) relative to light media multitaskers. Ophir and colleagues (2009) concluded that heavy media multitaskers have difficulty filtering out irrelevant information in memory due to their wider breadth of attention.

Further evidence that heavy media multitaskers are prone to encoding task irrelevant information has been demonstrated in Uncapher et al. (2016). In a working memory task, participants were required to encode the orientation of target shapes while filtering out distractors. After a delay period, they had to determine whether the shapes had changed orientation. Heavy media multitaskers had difficulty determining whether a change in orientation had actually occurred, and also demonstrated a greater tendency toward false alarms (incorrectly reporting a change when none had occurred). It was suggested that heavy media multitaskers maintained fewer and less defined representations of the target in working memory. In a long-term memory task, participants were then shown target objects from the working memory task, along with new objects. They had to indicate whether the object was old or new. Relative to light media multitaskers, heavy media multitaskers demonstrated more false alarms (i.e. a tendency to indicate that they recognised an object regardless of whether this was truly the case). In addition, when reviewing distractor objects, heavy media multitaskers had more difficulty remembering which distractor objects they had already seen. Taken together, these results suggest that for heavy multitaskers, a broader attentional scope resulted in task-irrelevant information competing with relevant information during both encoding and retrieval.

Despite the drawbacks discussed in previous studies, a broader attentional scope could also be advantageous in some situations. Given that heavy media multitaskers are experienced in processing information from a variety of sources, Lui and Wong (2012) explored whether heavy media multitaskers demonstrate superior performance in a multisensory integration task. A pip-and-pop paradigm was used, wherein participants searched for a target (a vertical or horizontal line) within an array of distractor lines of different orientations. The colours of the lines changed throughout the trial, with target and distractor lines alternating at different frequencies. In some trials, an auditory tone occurred in synchrony with the changing of the target line. This tone provided a cue, which could be used to boost search speed. In the tone-absent condition, the performance of heavy media multitaskers was worse than that of light multitaskers, possibly due to difficulties with filtering out distracting information. However, in the tone-present condition, heavy media multitaskers were better able to utilise the unexpected auditory cue, resulting in a large improvement in visual search performance. It was argued that because heavy media multitaskers were attending to information from multiple channels, they utilised additional information more successfully than light media multitaskers, who focused on a single channel.

The studies reviewed so far suggest that heavy media multitaskers may take in more information, due to their broader scope of attention. Most laboratory tasks are simple and clearly defined, with participants instructed to attend to specific targets and ignore distractors. In daily life, however, extraneous stimuli are not necessarily meaningless, and may provide important information. For instance, Lin (2009) argues that reviewing information about a concept from several decentralised sources may result in a more complex understanding of the topic, enabling novel distinctions to be made. Consequently, it is possible that by processing extraneous information, heavy media multitaskers may demonstrate superior performance in some types of learning. For example, the tendency of heavy media multitaskers to scan information broadly could specifically enhance their implicit learning of spatial configurations.

Implicit learning and media multitasking

Implicit learning taps aspects of human cognition that operate outside of conscious awareness (Chun & Jiang, 1998; Gluck & Bower, 1988; McGeorge & Burton, 1990; Nissen & Bullemer, 1987; Reber, 1993; Reber & Lewis, 1977). Implicit learning can be defined as learning that occurs irrespective of any intention to learn, largely without explicit awareness of the knowledge that has been attained (Reber, 1993). In contrast to explicit learning, implicit learning generally operates independently of psychometric intelligence (Gebauer & Mackintosh, 2007; Merrill, Conners, Yang, & Weathington, 2014) and occurs automatically, without deliberate effort or conscious reflection (Goujon, Didierjean, & Thorpe, 2015). Although there is strong evidence that explicit and implicit learning are distinct processes, these two learning systems do interact, as seen in proceduralisation, where declarative knowledge becomes automatic (Sun, Slusarz, & Terry, 2005).

There are various methods used to study implicit learning. The first implicit learning task to be developed was the artificial grammar task, where participants learned the underlying structure of an invented language without any conscious knowledge (Reber & Lewis, 1977). Two other commonly used implicit learning tasks are the serial reaction time task, where participants implicitly learn a repeating sequence (Nissen & Bullemer, 1987), and the contextual cueing task, where participants incidentally learn the association between a target image and the surrounding visual context (Chun & Jiang, 1998). All implicit learning tasks assess the ability to detect underlying patterns, or regularities, in the environment without conscious knowledge of the learning that has taken place. However, the majority of media multitasking behaviour is visual in nature, particularly when using screen-based technology. Therefore, the implicit learning paradigm best suited to studying this behaviour is the contextual cueing paradigm, in which implicit learning of spatial or scene information facilitates the detection of a target (Jiang & Chun, 2003). In daily life, we routinely exploit visual regularities in our environment (Chun & Jiang, 1998; Jiang & Chun, 2003). For instance, when entering the kitchen we do not randomly hunt for our coffee cup; we search in the most likely places, such as on the bench-top. Acquired knowledge and visual cues guide our attention to the most plausible location (Jiang & Chun, 2003).

In Chun and Jiang’s (1998) study of spatial contextual cueing, participants were instructed to search for a target, a letter T rotated to the left or right, among distractors, letter Ls in varying orientations. There were 12 ‘old’ arrays where the arrangement of the target and distractors repeated across different blocks, and 12 ‘new’ arrays, where the position of the target and distractors varied and did not repeat across blocks. A significant improvement in search speed for the repeated displays emerged after the fifth block—the contextual cueing effect (Chun & Jiang, 1998). Participants quickly learned the relationship between the target and distractors in the repeating displays and made use of this invariant information. Crucially, the learning that occurred during the contextual cueing task was entirely implicit. Participants were unaware of the association between the spatial context and the target. When questioned, less than 20% of participants noticed that some configurations repeated (Chun & Jiang, 1998). On a recognition test, participants were unable to determine whether an array was repeated or novel at above chance levels (Chun & Jiang, 1998).

Within the literature, there are two opposing explanations for the contextual cueing effect. These two accounts predict two distinct patterns of results for heavy and light media multitaskers in the contextual cueing task. According to the global hypothesis, it is the global arrangement of distractors in the array that guides attention to the target location (Brockmole, Castelhano, & Henderson, 2006; Kunar, Flusberg, & Wolfe, 2006). The global hypothesis predicts that heavy media multitaskers will benefit from a broader attentional scope, as a tendency to attend to the entire array would result in better performance relative to light or intermediate media multitaskers. In contrast, the local hypothesis proposes that contextual cueing occurs based on the position of a small subset of distractors neighbouring the target, rather than the entire visual display (Brady & Chun, 2007; Jiang & Wagner, 2004; Olson & Chun, 2002). If these local effects drive contextual cueing, the broader attentional scope of heavy media multitaskers will impede implicit learning, as they are less efficient in attending to the local context.

To date, only one previous study has explored the relationship between media multitasking behaviour and a measure of implicit learning. Cain, Leonard, Gabrieli, and Finn (2016) used a measure of probability learning, the weather prediction task, where adolescent participants were presented with various cards and asked whether a given combination of cards was more likely to be related to sun or rain. Feedback appeared after each response in the form of smiling or frowning faces. However, as there was no absolutely correct answer, this feedback was probabilistic. Over time, participants learned to correctly classify the stimuli at above chance levels based on this feedback. The resultant learning is considered to be implicit as participants are unable to explain any underlying relationship between the stimuli and outcome. While the study found no relationship between implicit learning and media multitasking behaviour, the weather prediction task has been criticised on the grounds that task performance can be influenced by explicit memorisation strategies (Gluck, Shohamy, & Myers, 2002; Price, 2008). In addition, those with better executive function demonstrate increased explicit learning on the task, making it difficult to disentangle the role of implicit and explicit processes in learning (Price, 2005).

Research aims

The present study aims to extend this line of enquiry by assessing implicit learning using the contextual cueing task. Previous research indicates that, due to a broader attentional scope, heavy media multitaskers may be more prone to attending to irrelevant distractors (Cain & Mitroff, 2011). However, this tendency could be conducive to task performance if the extra information is in fact useful (Lui & Wong, 2012), especially if the contextual cueing effect is driven by the global context (Brockmole et al., 2006; Kunar et al., 2006). This characteristic may be advantageous in the contextual cueing task, enabling heavy media multitaskers to implicitly learn the spatial configuration of targets and distractors faster than light media multitaskers. Therefore, the purpose of the present study is to explore whether heavy, intermediate, and light media multitaskers differ on a measure of implicit learning.

The present study also aims to determine whether the ability to process contextual information is affected by individual differences in visual working memory. While some studies propose that working memory has little to no role in implicit learning (Unsworth & Engle, 2005; Vickery, Sussman, & Jiang, 2010), other research suggests that visual working memory processes are involved in contextual cueing (Travis, Mattingley, & Dux, 2013). Furthermore, heavy and light media multitaskers have been suggested to differ in visual working memory (Ophir et al., 2009). However, results have been somewhat contradictory, as different working memory tasks have yielded different outcomes. Heavy media multitaskers showed poorer performance than light media multitaskers on the n-back task (Ophir et al., 2009; see also Uncapher et al., 2016, for consistent results), whereas no difference was observed on the recent probes task, a widely used measure of interference in working memory (Minear et al., 2013). Therefore, the current study includes a measure of visual working memory, the n-back task, to determine whether the results obtained by Ophir et al. (2009) can be replicated.

The majority of studies have explored extremes of media multitasking behaviour by comparing heavy and light media multitaskers (Alzahabi & Becker, 2013; Cain & Mitroff, 2011; Lui & Wong, 2012; Minear et al., 2013; Moisala et al., 2016; Ophir et al., 2009; Ralph et al., 2015; Uncapher et al., 2016). The underlying assumption is that the performance of intermediate media multitaskers would fall somewhere between these two extreme groups. However, recent studies argue that by employing an extreme groups design, we may be discarding valuable information from the middle of the distribution (about 68%; e.g. Unsworth, McMillan, Hambrick, Kane, & Engle, 2015). Extreme group comparisons have also been criticised on the grounds that there may be substantial variability in scores among participants within the extreme groups, and the design could increase the risk of Type I error (Unsworth et al., 2015).

To date, only one study has included intermediate media multitaskers, and results suggest the relationship between media multitasking habits and cognitive performance could be more complex than previously suggested (Cardoso-Leite et al., 2016). The performance of intermediate media multitaskers did, in fact, fall between that of heavy and light media multitaskers on a measure of proactive cognitive control. However, on two tasks that assess susceptibility to distraction and the monitoring and updating of information in working memory, intermediate media multitaskers out-performed both heavy and light media multitaskers (Cardoso-Leite et al., 2016). It is possible that the association between some aspects of cognitive control and media multitasking follows an inverted U curve. Moderate levels of media multitasking may be associated with an optimal level of cognitive control (Cardoso-Leite et al., 2016). Interestingly, the few studies that have considered media multitasking as a continuous variable and examined linear relationships between media multitasking and cognition (Uncapher et al., 2016; Cain et al., 2016) do not support the inverted U-curve relationship between media multitasking and cognitive control. The present study included both an extreme group approach and a correlational approach in an attempt to reconcile this difference.

Method

Participants

A sample of 94 participants (47 male, 47 female) was recruited for the study. All participants were 18–35 years of age (M = 25.46, SD = 5.54). This age range was selected because the study specifically targeted people who frequently engage in media multitasking, and previous research suggests that younger generations may combine media tasks more frequently (Carrier, Cheever, Rosen, Benitez, & Chang, 2009). Additionally, the age cut-off aimed to prevent confounding effects of age-related changes, as some evidence indicates that selective attention gradually deteriorates after the age of 50 (Tales, Muir, Bayer, & Snowden, 2002).

Materials and procedure

Participants were asked to complete the media use questionnaire, followed by two computerised cognitive tasks: the contextual cueing task and n-back task. In total, completion of all tasks took approximately one hour. Counterbalancing of the cognitive tasks was used to control for order effects, such as fatigue, on task performance.

Media use questionnaire

Media multitasking behaviour was assessed using the media use questionnaire developed by Ophir et al. (2009). The questionnaire measured the use of 12 different forms of media: print media, television, computer-based video, music, nonmusic audio, video or computer games, telephone and mobile phone calls, instant messaging, text messaging, e-mail, Web surfing, and other computer-based applications. For each form of media, participants were asked to report the total number of hours per week they spent using that medium. Participants were then asked to indicate how often they used each of the other 11 forms of media at the same time as the primary medium: ‘most of the time,’ ‘some of the time,’ ‘a little of the time,’ or ‘never.’ Participants’ responses to the questionnaire were quantified using the media multitasking index. As per the procedure used in Ophir and colleagues’ (2009) study, numerical values were assigned to participants’ responses: ‘most of the time’ (= 1), ‘some of the time’ (= 0.67), ‘a little of the time’ (= 0.33), and ‘never’ (= 0). The media multitasking index was then computed as described in Ophir et al. (2009).
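In essence, the index is a weighted average of the number of media used concurrently per hour of media use: for each medium, the concurrency weights across the other 11 media are summed and scaled by that medium's share of total media time. The following is a minimal sketch of this computation; the data structures and names are our own illustrations, not taken from the original scoring materials:

```python
# Sketch of the media multitasking index (MMI) described by Ophir et al. (2009).
# Data structures and key names are illustrative assumptions.
WEIGHTS = {"most": 1.0, "some": 0.67, "little": 0.33, "never": 0.0}

def media_multitasking_index(hours, concurrency):
    """hours: weekly hours for each of the 12 media, e.g. {"tv": 10, ...}.
    concurrency: for each primary medium, responses for the other media,
    e.g. {"tv": {"music": "most", ...}, ...}.
    Returns the weighted average number of media used per hour."""
    total_hours = sum(hours.values())
    mmi = 0.0
    for medium, h in hours.items():
        # m_i: summed concurrency weights for medium i across the other media
        m_i = sum(WEIGHTS[response] for response in concurrency[medium].values())
        mmi += m_i * h / total_hours  # weight by this medium's share of media time
    return mmi
```

For example, a respondent who spends 10 hours watching TV while ‘most of the time’ also listening to music, and 10 hours listening to music with no concurrent media, would score (1 × 10 + 0 × 10) / 20 = 0.5.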

Participants were divided into groups based on their media multitasking index score. This resulted in groups of 13 light media multitaskers, 62 intermediate media multitaskers, and 19 heavy media multitaskers. Media multitaskers were classified using the same cut-off values as Ophir et al. (2009) and Cardoso-Leite et al. (2016). Ophir's cut-off scores were selected for several reasons. First, Ophir's original sample more closely approximates the distribution of media multitasking index scores in the wider population, because it was larger (n = 262) than the present sample. Second, Cardoso-Leite et al. (2016) also used these cut-off scores, and given that theirs is the only other study that considers intermediate media multitaskers, using these values allows a more direct comparison with their results. Previous research has established the advantage of using the same cut-off scores to compare samples; for instance, Minear et al. (2013) chose to use Cain and Mitroff's (2011) cut-off scores when analysing their data.

Contextual cueing task

Implicit learning was measured using a computerised contextual cueing task, as described in Chun and Jiang (1998). Using Inquisit software (Version 4), participants viewed a series of visual arrays which appeared within an invisible 8 × 6 grid (subtending approximately 28.5° × 21.6°) on a grey background (see Fig. 1a). Each visual array contained a target, a letter T rotated 90° to the left or the right, and 11 distractors, letter Ls rotated 0°, 90°, 180°, or 270°. Stimuli were presented in red, green, blue, or purple; colours were assigned randomly, with an equal number of each colour in every array. On each trial, a fixation dot appeared, followed by a 500-ms pause, and then the visual array was presented. Participants were asked to locate the T in each array, and to press the ‘z’ key if the bottom of the T was pointing to the left and the ‘/’ key if it was pointing to the right.

Fig. 1 Example of the visual arrays in the contextual cueing task (a) and example trial schematic of the three-back task (b). (Colour figure online)

After completing a practice session (12 trials), participants performed the actual task consisting of a total of 480 trials. Performance was measured across four epochs, and each of these epochs consisted of five blocks of trials. A block contained 24 trials, and 12 of these trials included new arrays (random spatial configurations which did not repeat across blocks) while the remaining 12 trials included old arrays (which repeated across blocks). There was a 10-s break after completion of each block. Accuracy and reaction times were measured throughout the task. The present study used a shortened version of the original task, as Chun and Jiang (1998) tested participants for six epochs (720 trials), while the current study contained four epochs (480 trials). Previous research has established that abbreviating the task has no adverse impacts on the magnitude of spatial context learning demonstrated, or on the implicit nature of the learning (Bennett, Barnes, Howard, & Howard, 2009).
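The trial structure described above (4 epochs × 5 blocks × 24 trials = 480 trials, with 12 repeated and 12 novel configurations per block) can be sketched as a schedule generator. The configuration labels below are purely illustrative; the original experiment script is not reproduced here:

```python
import random

def build_schedule(n_epochs=4, blocks_per_epoch=5, n_old=12, n_new=12, seed=0):
    """Build a contextual cueing schedule: every block mixes the same set of
    'old' (repeated) configurations with freshly generated 'new' ones."""
    rng = random.Random(seed)
    old_configs = [f"old_{i}" for i in range(n_old)]  # repeat across all blocks
    schedule, new_counter = [], 0
    for _ in range(n_epochs * blocks_per_epoch):
        new_configs = [f"new_{new_counter + i}" for i in range(n_new)]
        new_counter += n_new  # new configurations never repeat across blocks
        block = old_configs + new_configs
        rng.shuffle(block)  # randomise trial order within the block
        schedule.append(block)
    return schedule

schedule = build_schedule()
total_trials = sum(len(block) for block in schedule)  # 4 * 5 * 24 = 480
```

The contextual cueing effect is then the reaction time advantage for the `old_*` configurations, which accumulates as their layouts are implicitly learned.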

After completing the contextual cueing task, participants also completed a brief interview to assess whether they had explicit knowledge of the repeating displays. Only 4% of participants reported explicit awareness of the underlying spatial regularity. This is fairly consistent with the rates reported in previous studies; for instance, other research with a shortened version of the task found that around 10%–11% of participants reported explicit awareness (Bennett et al., 2009). Moreover, such awareness does not usually allow participants to identify repeated configurations at above-chance levels (Chun & Jiang, 2003). Therefore, the learning demonstrated on the task is considered to be implicit (see also Colagiuri & Livesey, 2016).

N-back task

Participants were asked to complete a computerised single n-back task as described in Jaeggi et al. (2010). Using Inquisit software (Version 4), participants were shown a sequence of eight random shapes. The shapes were yellow and appeared on a black background (see Fig. 1b). Participants viewed each shape for 500 ms, followed by a 2,500-ms interstimulus interval. After completing a practice session, participants were tested on three levels. In the two-back condition, the target was an image that matched the shape seen two trials earlier. In the three- and four-back conditions, the target was an image that matched the shape seen three and four trials earlier, respectively. Participants were instructed to press a key when they identified a target shape. No response was required for nontarget shapes. Each level had three consecutive blocks, and there were nine blocks in total. A block contained 20 + n stimuli, consisting of six targets and 14 + n nontargets. Performance on the task was measured as the hit rate minus the false alarm rate, averaged over the total number of blocks and n-back levels. Participants were divided into three groups (low, medium, high) based on their working memory performance, as in Kane and Engle (2003). This resulted in groups of 25 participants with low working memory performance, 43 with medium working memory performance, and 26 with high working memory performance. The thresholds for the low and high working memory groups were the first and third quartiles of the sample, 0.30 and 0.54, respectively.
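The scoring and grouping steps can be sketched as follows. The function names are our own, the quartile thresholds are the 0.30 and 0.54 values reported above, and the handling of scores falling exactly on a quartile boundary is an assumption:

```python
def nback_score(blocks):
    """blocks: one (hits, n_targets, false_alarms, n_nontargets) tuple per block.
    Returns the mean (hit rate - false alarm rate) across blocks."""
    rates = [hits / n_t - fas / n_nt for hits, n_t, fas, n_nt in blocks]
    return sum(rates) / len(rates)

def working_memory_group(score, q1=0.30, q3=0.54):
    """Assign a low/medium/high group using the sample quartiles reported
    in the text; boundary handling here is an assumption."""
    if score < q1:
        return "low"
    if score > q3:
        return "high"
    return "medium"
```

For instance, a two-back block with all six targets hit and no false alarms scores 1.0, whereas three hits and four false alarms out of 16 nontargets scores 0.5 − 0.25 = 0.25.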

Results

Prior to examining the reaction time data, incorrect responses (1.8%) and responses longer than 2,000 ms (2.4%) in the contextual cueing task were excluded from the data set. For the main analysis, a mixed-design factorial analysis of variance (ANOVA) with a Greenhouse–Geisser correction was conducted, with display (new, old) and epoch (1–4) as within-subjects variables, and multitasking group (light, intermediate, heavy) and working memory (low, medium, high) as between-subjects variables.
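The exclusion step amounts to a simple filter over the trial-level data. A minimal sketch, with field names assumed rather than taken from the original analysis scripts:

```python
def clean_rt_data(trials, rt_cutoff_ms=2000):
    """Drop incorrect responses and responses slower than the cutoff,
    as described above. Each trial is assumed to be a dict with
    'rt' (reaction time in ms) and 'correct' (bool) keys."""
    return [t for t in trials if t["correct"] and t["rt"] <= rt_cutoff_ms]
```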

Analysis of reaction time data

The mixed ANOVA showed a significant main effect of display type, F(1, 85) = 28.27, p < .001, \( {\upeta}_p^2 \) = .250. Participants responded faster to old displays (M = 898.20 ms, SE = 14.46 ms) compared to new displays (M = 942.25 ms, SE = 14.25 ms)—the contextual cueing effect. There was a significant main effect of epoch, F(2.22, 188.46) = 141.07, p < .001, \( {\upeta}_p^2 \) = .624. Response times became faster as the task progressed: The mean response times were 1005.82 ms (SE = 15.48 ms), 938.03 ms (SE = 15.23 ms), 882.55 ms (SE = 13.67 ms), and 854.50 ms (SE = 13.79 ms) from Epochs 1 to 4.

Crucially, there was also a significant two-way interaction between display type and epoch, F(2.76, 234.43) = 10.63, p < .001, \( {\upeta}_p^2 \) = .111 (see Fig. 2). Although the speed with which participants detected the target gradually increased from Epoch 1 to 4 for both old and new displays (all ps < .001), improvements in speed were much greater for old displays than for new displays (all ps < .003).

Fig. 2 Contextual cueing effect across the four epochs. Error bars represent 95% confidence intervals (Cousineau, 2005)

Of particular interest was whether the different multitasking groups showed variation in the contextual cueing effect. There was a significant three-way interaction between display type, epoch, and media multitasking group, F(5.52, 234.43) = 2.41, p = .032, \( {\upeta}_p^2 \) = .054, indicating that the contextual cueing effect did vary among different groups of multitaskers. To further explore this three-way interaction, a 4 (epoch: 1–4) × 3 (media multitasking group: light, intermediate, heavy) mixed ANOVA was conducted, with the difference in reaction times between new and old displays (i.e. the contextual cueing effect) as the dependent variable. There was a significant interaction between epoch and media multitasking group, F(6, 273) = 2.95, p = .008, \( {\upeta}_p^2 \) = .06 (see Figs. 3 and 4). The contextual cueing effect grew quite rapidly across the four epochs for light and intermediate media multitaskers (see Fig. 3): for both groups, the increase from Epoch 1 to 2 was significant (all ps < .014), but there was no significant change from Epochs 2 to 3 or 3 to 4 (all ps > .138). However, for heavy media multitaskers, the contextual cueing effect did not increase significantly across the four epochs—the changes from Epochs 1 to 2, 2 to 3, and 3 to 4 were all nonsignificant (p = 1). The increase in the contextual cueing effect from Epochs 1 to 4 was much greater for light media multitaskers (p = .039) and intermediate media multitaskers (p < .001) than for heavy media multitaskers (p = 1), who showed a nonsignificant mean difference between Epochs 1 and 4 of 6.18 ms (see Fig. 4).

Fig. 3

Development of the contextual cueing effect across epochs for light, intermediate, and heavy media multitaskers. Error bars represent the standard error of the mean (Franz & Loftus, 2012)

Fig. 4

Increase in the contextual cueing effect from Epoch 1 to 4 for light (LMM), intermediate (IMM), and heavy (HMM) media multitaskers. Error bars represent the standard error of the mean (Franz & Loftus, 2012)

To investigate whether heavy media multitaskers showed a rapid improvement within Epoch 1, we compared the contextual cueing effect of the three media multitasking groups between the first and second halves of Epoch 1. There was no significant interaction between block (first vs. second half of Epoch 1) and media multitasking group (p = .173). We also analysed Epoch 4 separately to examine whether the media multitasking groups differed within that epoch. The effect of display type was significant (p = .001) but did not interact with media multitasking group (p = .25). In other words, differences between the media multitasking groups emerged only when analysed across epochs. This is consistent with the results of the regression analysis: the difference in the contextual cueing effect between Epochs 1 and 4 was negatively correlated with media multitasking score (r = -.247, p = .017), yet media multitasking score did not significantly predict contextual cueing task performance during Epoch 1, β = .11, t(91) = 1.06, p = .29, or during Epoch 4, β = -.17, t(91) = -1.68, p = .10, when controlling for the effect of age.

Working memory performance was used to control for the impact of individual differences in working memory on the measure of implicit learning. There was a main effect of working memory, F(2, 85) = 3.29, p = .042, \( {\upeta}_p^2 \) = .072, indicating that those with high working memory (M = 865 ms) were faster overall in the contextual cueing task than those with low (M = 935 ms) or intermediate (M = 948 ms) working memory. However, working memory did not interact with performance on the contextual cueing task: the interactions between working memory and epoch, F(4.43, 188.46) = 1.53, p = .190, and between working memory and display type, F(2, 85) = 0.11, p = .898, were both nonsignificant. These results suggest that the observed differences in contextual cueing between multitasking groups are unrelated to underlying differences in working memory.

Working memory was also included in the analysis to determine whether media multitasking behaviour is related to differences in visual working memory. Although it was predicted that heavy media multitaskers would perform more poorly on the working memory task, working memory did not interact with multitasking group, F(4, 85) = 0.15, p = .964. In Ophir et al. (2009), group differences in working memory performance emerged only as the difficulty of the n-back task increased. A further analysis of working memory performance at each n-back level (a mixed ANOVA with n-back level and media multitasking group as factors) indicated that this was not the case in the present study (working memory performance of the three media multitasking groups at each n-back level is shown in Appendix 1, Table 1). There were no significant differences between multitasking groups at the three-back and four-back levels (ps > .84). A regression analysis of the relationship between media multitasking score and working memory performance showed that working memory performance did not significantly predict media multitasking score, β = -.044, t(91) = -.401, p = .69, when controlling for the effect of age.

Additionally, the working memory performance of the three media multitasking groups at different n-back levels was analysed using d’ as in Ophir et al. (2009). The results were consistent with the main analysis, showing no interaction between multitasking group and working memory (p = .91). Hit rate and false alarm rates for the three media multitasking groups at each n-back level are shown in Appendix 1, Figs. 5 and 6.
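The d′ sensitivity measure combines hit and false-alarm rates through the inverse normal CDF, d′ = z(HR) − z(FAR). A minimal sketch; the 0.5/N correction for extreme rates is one common convention assumed here, not necessarily the one used by Ophir et al. (2009), and the trial counts are hypothetical:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate).

    Rates of exactly 0 or 1 make z undefined, so a standard correction
    replaces them with 0.5/N and 1 - 0.5/N (an assumed convention here;
    practices vary across studies).
    """
    z = NormalDist().inv_cdf

    def rate(successes, failures):
        n = successes + failures
        r = successes / n
        return min(max(r, 0.5 / n), 1 - 0.5 / n)

    hr = rate(hits, misses)
    far = rate(false_alarms, correct_rejections)
    return z(hr) - z(far)

# Hypothetical 2-back block: 20 targets, 60 non-targets
print(round(d_prime(hits=17, misses=3, false_alarms=6,
                    correct_rejections=54), 2))
# -> 2.32
```

Because d′ separates sensitivity from response bias, it distinguishes the two failure patterns discussed below: rising false alarms (Ophir et al., 2009) and falling hit rates (Cain et al., 2016) both lower d′, but for different reasons.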

Analysis of accuracy data

The mixed-design ANOVA for accuracy revealed a main effect of epoch, F(3, 255) = 2.66, p = .049, \( {\upeta}_p^2 \) = .030, indicating that participants became more accurate with an increased number of trials (i.e. a practice effect). No other significant effects were observed. This makes it unlikely that the reaction time results were confounded by a speed versus accuracy trade-off (i.e. the increase in response speed across epochs was not accompanied by a corresponding increase in errors).

Discussion

This study aimed to investigate whether there are group differences in the performance of heavy, intermediate, and light media multitaskers on a measure of implicit learning. It was predicted that heavy media multitaskers would demonstrate a greater degree of implicit learning in the contextual cueing paradigm, as they are theorised to have a wider breadth of attention (Cain & Mitroff, 2011; Lui & Wong, 2012; Ophir et al., 2009; Uncapher et al., 2016). However, the results showed the opposite: the speed at which implicit learning occurred was slower for heavy media multitaskers than for intermediate or light media multitaskers. The current study also aimed to explore whether differences in visual working memory performance affect performance in the contextual cueing task in heavy, intermediate, and light media multitaskers. Working memory performance was found to be unrelated to performance in the contextual cueing task and, more importantly, unrelated to media multitasking behaviour. Finally, the study aimed to investigate the performance of intermediate media multitaskers, a group that has received little research attention to date. One previous study found that on some laboratory tasks, intermediate multitaskers outperformed both heavy and light media multitaskers (Cardoso-Leite et al., 2016). There was no evidence for superior performance of intermediate multitaskers in the current study.

Implicit learning and media multitasking behaviour

Initially, we sought to explore whether there may be hidden benefits to being a heavy media multitasker. One possibility was that heavy media multitaskers might actually pick up more perceptual information than other groups, even unintentionally (Lin, 2009; Lui & Wong, 2012), which would manifest as a greater degree of implicit learning during the contextual cueing task. Surprisingly, the results were in the opposite direction: heavy media multitaskers performed worse than the other groups. Light and intermediate media multitaskers showed a steady increase in contextual cueing throughout the task, whereas for heavy media multitaskers the degree of contextual cueing did not increase during the task. While unexpected, this result does conform to a growing body of literature indicating that increased media multitasking behaviour is associated with poorer performance on a variety of cognitive tasks (Cain & Mitroff, 2011; Cardoso-Leite et al., 2016; Minear et al., 2013; Moisala et al., 2016; Uncapher et al., 2016).

Previously, it has been argued that the main underlying difference between heavy and light media multitaskers is their scope of attention (Cain & Mitroff, 2011; Lui & Wong, 2012; Ophir et al., 2009; Uncapher et al., 2016). Heavy media multitaskers are theorised to have a broader scope of attention relative to light and intermediate media multitaskers. To date, only one study has specifically examined the impact of attentional scope on performance in the contextual cueing task (Bellaera, von Mühlenen, & Watson, 2014). Bellaera et al. (2014) examined whether a tendency to process visual information using a broad or narrow attentional scope affects performance in the contextual cueing task. Scope of attention was measured using a shape detection task, wherein participants searched for a target shape (such as a triangle), which might appear as a local shape (e.g. small triangles arranged in the shape of a square) or a global shape (e.g. a large triangle made up of small squares). Reaction times to detect global targets were subtracted from reaction times for local targets for each participant in order to determine a preference for either broadly distributed or narrowly focused attention. Participants subsequently completed the contextual cueing task. Results showed that those with a broader scope of attention displayed a significantly reduced contextual cueing effect; on average those with broadly distributed attention showed a contextual cueing effect of 114 ms, compared to a contextual cueing effect of 213 ms for those with a narrow attentional scope.

Taken together, the results of the current study and those of Bellaera et al. (2014) indicate that a broader scope of attention is associated with reduced contextual cueing. In other words, the heavy media multitaskers in our study were impaired on the contextual cueing task because their broad attentional scope was more of a hindrance than a help during the task. The results of the current study are consistent with the local hypothesis: a narrower scope of attention meant that attending to the local context facilitated search performance for light media multitaskers, whereas attending to the global display impaired the performance of heavy media multitaskers.

A number of previous studies illustrate that it is not necessary to attend to the entire display to obtain a contextual cueing effect; in fact, focusing on the local context can facilitate search. It has been demonstrated that repeating only half of the display, just one quadrant of the display (Olson & Chun, 2002), or just two distractors in the same quadrant as the target produces a significant contextual cueing effect (Brady & Chun, 2007). Interestingly, adding empty space between the distractors and the target does not eliminate this local effect, suggesting that when there is no information close to the target, selective attention may extend further in order to find useful context (Olson & Chun, 2002). The use of local context also explains why contextual cueing survives in experiments where entire displays are rescaled (Jiang & Wagner, 2004) but is extinguished when ‘new’ distractors are inserted between the target and ‘old’ distractors (Olson & Chun, 2002). It has been suggested that during the contextual cueing task, the repeated presentation of the target and a subset of distractors forms a visual ‘chunk’ (Gobet et al., 2001; Manelis & Reder, 2012). If local context drives the contextual cueing effect, this implies that heavy media multitaskers may be less proficient in chunking visual information and in using these perceptual chunks to guide visual search.

This use of local context in the contextual cueing task also explains why Cain et al. (2016) did not observe any correlation between media multitasking behaviour and performance on a probabilistic classification task (i.e. the weather prediction task), as the implicit learning involved in that task is less dependent on local contextual learning. The current study indicates that frequent media multitasking behaviour is related to a specific deficit in implicit learning of spatial context. This is a novel finding, and it extends previous laboratory studies of media multitasking, which have primarily examined conscious cognitive processes.

While it has been argued that the differences in implicit learning between media multitasking groups occurred due to differences in attentional scope, other explanations may also be considered. Overall, it is unlikely that other cognitive factors can account for reduced contextual cueing in heavy media multitaskers. For example, the observed differences cannot be explained by variability in psychometric intelligence, given that implicit learning is theorised to be independent of intelligence (Gebauer & Mackintosh, 2007; Kaufman et al., 2010). Moreover, differences in working memory were controlled for in the experiment, and did not interact with implicit learning. Previous research has found a small correlation between cognitive processing speed and implicit learning (Kaufman et al., 2010). However, this association has only been demonstrated using a measure of implicit sequence learning, the serial reaction time task, and has never been replicated within the contextual cueing paradigm (Bennett, Romano, Howard, & Howard, 2008). Furthermore, previous research has failed to find a relationship between multitasking behaviour and processing speed (Cain et al., 2016), so it is unlikely that our three groups of media multitaskers would vary systematically on a measure of processing speed.

Another possibility is that features of the contextual cueing task itself influenced the result. It could be contended that shortening the contextual cueing task affected the expression of implicit learning in this sample. However, this is unlikely, because the contextual cueing effect emerged fairly rapidly for all groups during the experiment. In addition, previous research has found a greater magnitude of implicit learning with a shortened version of the task (Bennett et al., 2009). Another potential concern is that the stagnant contextual cueing effect observed in heavy media multitaskers could reflect a ceiling effect for this group. This too is improbable, as there was no indication of particularly rapid response times for heavy media multitaskers: they showed somewhat slower mean reaction times overall (952.1 ms) than light (909.6 ms) and intermediate (912 ms) media multitaskers, although these group differences in reaction time were nonsignificant (p > .05). Therefore, there was no indication that heavy media multitaskers failed to improve during the task because they had quickly reached the upper limit of attainable response speed.

Alternatively, there is also some evidence that the use of different search strategies during the task can influence the development of contextual cueing. Lleras and von Mühlenen (2004) found that participants instructed to be as receptive as possible during the search, and to just allow the target to ‘pop’ into their mind, demonstrated a robust contextual cueing effect. In contrast, those instructed to deliberately and actively direct their attention to search for the target displayed no contextual cueing, and even negative cueing effects (increased search times for repeated displays; Lleras & von Mühlenen, 2004). One possibility is that the heavy media multitaskers in our sample were more prone to using an active search strategy, which impaired their performance on the task. However, this is doubtful, because substantial literature indicates that heavy multitaskers demonstrate reduced attentional control compared to other groups (Cain & Mitroff, 2011; Lui & Wong, 2012; Ophir et al., 2009; Ralph et al., 2015; Uncapher et al., 2016), which suggests they would actually be less likely to adopt a top-down search strategy. In addition, the pattern of our results differed from that of Lleras and von Mühlenen (2004) in that heavy media multitaskers did show a contextual cueing effect, but to a lesser extent than other groups. While differences in search strategy probably do not explain this result, future studies could question participants about any strategies used during the task in order to further investigate this possibility.

Aside from cognitive and task-related factors, it is also possible that affective factors affected the results. There is some evidence that depressed individuals are uniquely impaired on contextual cueing, to the extent that they do not show a contextual cueing effect (Lamy, Goshen-Kosover, Aviani, Harari, & Levkovitz, 2008). Interestingly, there is also evidence that frequent media multitasking behaviour is associated with psychological distress, including depression (Becker et al., 2013; Reinecke et al., 2016). Given the proposed link between media multitasking and depression, there may have been a higher incidence of subclinical symptoms of depression in the group of heavy media multitaskers, and this could have impaired their contextual cueing performance. However, it is unlikely that all of our heavy media multitaskers were depressed, because they did exhibit a contextual cueing effect overall, albeit a smaller effect than the other groups. Nonetheless, the impact of subclinical depression on contextual cueing remains untested. Future research could screen participants for psychiatric distress in order to better control for this possibility.

Working memory and media multitasking behaviour

Individual differences in working memory can influence the ability to control attention (Fukuda & Vogel, 2009; Vogel, McCollough, & Machizawa, 2005; Vogel, Woodman, & Luck, 2001). The n-back task was included in the current study to control for the possibility that these individual differences could be a confounding factor when measuring implicit learning. However, no relationship was found between working memory and implicit learning. It is worth noting that the n-back task used in the current study involved objects rather than spatial configurations, which may limit the conclusions that can be drawn regarding the relationship between visual working memory and implicit learning (of spatial context). However, performance on n-back tasks using objects and n-back tasks using spatial context are suggested to be highly correlated (Jaeggi et al., 2010). Overall, the results suggest that the differences in implicit learning observed between heavy media multitaskers and other groups are unlikely to have occurred due to underlying differences in working memory.

Intriguingly, there was also no relationship between performance on the n-back task and media multitasking behaviour. This is at odds with previous studies using the n-back task. Originally, Ophir et al. (2009) concluded that while heavy and light media multitaskers do not fundamentally differ on the measure of working memory, heavy media multitaskers showed a disproportionate increase in false alarm rates during the more difficult three- and four-back levels of the n-back task (i.e. they were misidentifying nontarget stimuli as targets). Ophir et al. (2009) interpreted this as evidence for poor inhibitory control in heavy media multitaskers, as it represented difficulty managing the intrusion of irrelevant information into working memory. Using a standard visual working memory task, Uncapher et al. (2016) obtained a conceptual replication of this result, finding that heavy media multitaskers had more difficulty screening out irrelevant information, and that this extraneous information placed an additional burden on working memory during both encoding and retrieval.

Conversely, a study that directly measured interference in working memory found no differences in performance between heavy and light media multitaskers (Minear et al., 2013). There was no evidence that heavy media multitaskers were impaired on the recent probes task, a widely used measure of the ability to regulate information in working memory in accordance with task goals (Jonides & Nee, 2006). One possible explanation for these conflicting results is task differences in cognitive load. In Ophir et al. (2009), group differences only emerged at a higher cognitive load as demands on working memory increased. Minear et al. (2013) may have failed to reproduce Ophir's result because the recent probes task did not impose sufficient cognitive load. A second possibility is that these different results reflect additional demands on the executive function component of working memory assessed during the n-back task. Both the recent probes task and the n-back task measure proactive interference, a reduction in accuracy and an increase in response time due to intrusion from previously relevant stimuli (Jonides & Nee, 2006). However, the n-back task used by Ophir et al. (2009) also assesses cognitive updating, the process of maintaining relevant information and deleting or replacing irrelevant information in memory (Carretti, Cornoldi, De Beni, & Romanò, 2005). It is possible that heavy media multitaskers differ from light media multitaskers in the cognitive updating of information, rather than in the ability to inhibit distractors per se.

Further evidence that heavy media multitaskers may have difficulty with cognitive updating was seen in a recent study that used the n-back task (Cain et al., 2016). Increased media multitasking was linked to poorer performance on the n-back task. However, Cain and colleagues' (2016) result was driven by a declining hit rate for heavy media multitaskers as the task difficulty increased (i.e. they were failing to identify targets). A tendency for heavy media multitaskers to miss targets suggests that increased media multitasking behaviour is linked to difficulties in the updating of information within working memory rather than difficulties managing interference. However, the present study failed to find evidence for either of these explanations, as there was no evidence that heavy media multitaskers performed poorly on the task overall, and there was also no evidence that heavy media multitaskers were specifically impaired as task difficulty increased in the three- and four-back conditions. Notably, one other recent study also failed to find group differences between heavy and light media multitaskers on the n-back task (Cardoso-Leite et al., 2016).

One explanation for these mixed results is that the current study included a more heterogeneous sample. While Ophir et al. (2009) administered the n-back task to a relatively homogenous population of university students, and Cain et al. (2016) studied adolescents in middle school, the current study was not limited to students. This has important implications, because the media use questionnaire measures the frequency of media multitasking behaviour and does not distinguish between choosing to media multitask and a requirement to media multitask due to circumstance. For instance, in many office roles, workers may be expected to be responsive to incoming e-mails and instant messages in combination with other computer tasks, even if this is not their preferred style of work. These individuals may indeed media multitask very frequently, yet differ fundamentally from those who multitask frequently as a personal preference. As an example, various studies have suggested that heavy media multitasking is linked to impulsivity and sensation seeking (Minear et al., 2013; Sanbonmatsu et al., 2013); however, these relationships may not hold for heavy media multitaskers whose multitasking behaviour is more driven by necessity.

Intermediate media multitaskers

The majority of media multitasking studies that use the media use questionnaire employ an extreme groups design, comparing the performance of heavy and light multitaskers and excluding the middle of the distribution. As a result, people who media multitask in moderation have received little research attention to date. However, one recent study included analysis of intermediate media multitaskers, and found that this group actually performed better than other groups on measures of working memory and proactive cognitive control (Cardoso-Leite et al., 2016).

The current study found no evidence for superior performance of intermediate media multitaskers. While Cardoso-Leite and colleagues (2016) found that intermediate media multitaskers performed better than other groups on the n-back task, in the current study the performance of intermediate media multitaskers on the n-back task did not reliably differ from that of the other groups. On the measure of implicit learning, the performance of intermediate media multitaskers tended to fall between the two extreme groups. Unlike heavy media multitaskers, intermediate media multitaskers demonstrated a steady increase in the magnitude of the contextual cueing effect throughout the task.

One reason why these results differed from previous research may be that Cardoso-Leite et al. (2016) also targeted people who frequently play action video games, resulting in an increased proportion of gamers in their sample. This is of interest because unlike media multitasking, frequent use of action video games has been linked to beneficial outcomes for visual processing, attention, and decision making (Green & Bavelier, 2012; Spence & Feng, 2010). Therefore, it could be that superior performance in intermediate media multitaskers is only seen in association with increased use of action video games. Inclusion of intermediate media multitaskers in future studies is needed to clarify these findings.

Theoretical and practical implications

While previous studies have proposed that heavy media multitasking is often associated with poorer performance in various conscious cognitive processes, the finding that increased media multitasking is also linked to a disruption of implicit learning is novel. This finding is important because the ability to quickly and implicitly learn associations between visual stimuli, as measured within the contextual cueing paradigm, is fundamental to efficient visual processing, including identifying relevant information, scene learning, navigation, and prediction (Bennett et al., 2009; Brady & Chun, 2007; Brockmole et al., 2006; Chun & Jiang, 1998; Couperus, Hunt, Nelson, & Thomas, 2010; Goujon et al., 2015; Jiang & Chun, 2003).

The current result is also of theoretical importance, as it affirms an established trend seen in a number of studies for heavy media multitaskers to demonstrate breadth-biased attention (Cain & Mitroff, 2011; Lui & Wong, 2012; Moisala et al., 2016; Ophir et al., 2009; Uncapher et al., 2016). As the research evidence to date is cross-sectional and correlational, it is impossible to infer causality from these results or to determine the direction of the relationship. If increased media multitasking behaviour is linked to the development of breadth-biased attention, then our findings suggest that this tendency to allocate attention more widely is also related to a reduced ability to incidentally learn and make use of perceptual regularities. On the other hand, if the differences observed in heavy media multitaskers reflect preexisting individual differences, this suggests that people who are already less sensitive to detecting visual regularities in the environment are choosing to multitask more often. Ironically, these heavy media multitaskers may be particularly ill equipped to manage multiple streams of information, given that they show a deficit in implicit learning, which plays a fundamental role in visual processing.

In conclusion, the results of the current study indicate that frequent media multitasking behaviour is associated with a reduced magnitude of implicit learning, as measured within the contextual cueing paradigm. In contrast to several previous studies, media multitasking behaviour was not found to be associated with differences in working memory performance. The present study extends previous research in three important ways: it lends further support to the theory that media multitaskers differ in their scope of attention, it broadens media multitasking research to the study of nonconscious processes, and it contributes to a small number of studies exploring individual differences in implicit learning.

Notes

  1.

    It is worth noting that, as a result of using an extreme groups approach, some of the subgroups (3 levels of working memory × 3 levels of media multitasking group) had fairly small sample sizes, which may raise concerns about data stability. However, results from a mixed ANOVA using Display (new, old) × Epoch (1–4) × Media Multitasking Group (heavy, intermediate, light), excluding working memory (low, intermediate, high), were consistent with the main analysis reported above, indicating that the three-way interaction between display, epoch, and multitasking group was stable.

References

  1. Alzahabi, R., & Becker, M. W. (2013). The association between media multitasking, task-switching and dual-task performance. Journal of Experimental Psychology, 39(5), 1485–1495. doi:10.1037/a0031208

  2. Becker, M. W., Alzahabi, R., & Hopwood, C. J. (2013). Media multitasking is associated with symptoms of depression and anxiety. Cyberpsychology, Behavior and Social Networking, 16(2), 132–135. doi:10.1089/cyber.2012.0291

  3. Bellaera, L., von Mühlenen, A., & Watson, D. G. (2014). When being narrow minded is a good thing: Locally biased people show stronger contextual cueing. The Quarterly Journal of Experimental Psychology, 67(6), 1242–1248. doi:10.1080/17470218.2013.858171

  4. Bennett, I. J., Romano, J. C., Howard, J. H., & Howard, D. V. (2008). Two forms of implicit learning in young adult dyslexics. Annals of the New York Academy of Sciences, 1145, 184–198. doi:10.1196/annals.1416.006

  5. Bennett, I. J., Barnes, K. A., Howard, J. H., & Howard, D. V. (2009). An abbreviated implicit spatial context learning task that yields greater learning. Behavior Research Methods, 41(2), 391–395. doi:10.3758/BRM.41.2.391

  6. Brady, T. F., & Chun, M. M. (2007). Spatial constraints on learning in visual search: Modeling contextual cueing. Journal of Experimental Psychology, 33(4), 798–815. doi:10.1037/0096-1523.33.4.798

  7. Broadbent, D. E. (1958). Perception and communication. New York, NY: Pergamon Press.

  8. Brockmole, J. R., Castelhano, M. S., & Henderson, J. M. (2006). Contextual cueing in naturalistic scenes: Global and local contexts. Journal of Experimental Psychology, 32(4), 699–706. doi:10.1037/0278-7393.32.4.699

  9. Cain, M. S., & Mitroff, S. R. (2011). Distractor filtering in media multitaskers. Perception, 40, 1183–1192. doi:10.1068/p7017

  10. Cain, M. S., Leonard, J. A., Gabrieli, J. D. E., & Finn, A. S. (2016). Media multitasking in adolescence. Psychonomic Bulletin and Review, 23(2), 483–490. doi:10.3758/s13423-016-1036-3

  11. Cardoso-Leite, P., Kludt, R., Vignola, G., Ma, W. J., Green, C. S., & Bavelier, D. (2016). Technology consumption and cognitive control: Contrasting action video game experience with media multitasking. Attention, Perception and Psychophysics, 78(1), 218–241. doi:10.3758/s13414-015-0988-0

  12. Carretti, C., Cornoldi, C., De Beni, R., & Romanò, M. (2005). Updating in working memory: A comparison of good and poor comprehenders. Journal of Experimental Child Psychology, 91(1), 45–66. doi:10.1016/j.jecp.2005.01.005

  13. Carrier, L. M., Cheever, N. A., Rose, L. D., Benitez, S., & Chang, J. (2009). Multitasking across generations: Multitasking choices and difficulty ratings in three generations of Americans. Computers in Human Behavior, 25(2), 483–489. doi:10.1016/j.chb.2008.10.012

  14. Carrier, L. M., Rosen, L. D., Cheever, N. A., & Lim, A. L. (2015). Causes, effects, and practicalities of everyday multitasking. Developmental Review, 35, 64–78. doi:10.1016/j.dr.2014.12.005

  15. Chun, M. M., & Jiang, Y. (2003). Implicit, long-term spatial contextual memory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 29(2), 224–234. doi:10.1037/0278-7393.29.2.224

  16. Chun, M. M., & Jiang, Y. (1998). Contextual cueing: Implicit learning and memory of visual context guides spatial attention. Cognitive Psychology, 36(1), 28–71. doi:10.1006/cogp.1998.0681


  17. Colagiuri, B., & Livesey, E. J. (2016). Contextual cuing as a form of nonconscious learning: Theoretical and empirical analysis in large and very large samples. Psychonomic Bulletin & Review, 20, 1–14. doi:10.3758/s13423-016-1063-0

  18. Couperus, J. W., Hunt, R. H., Nelson, C. A., & Thomas, K. M. (2010). Visual search and contextual cueing: Differential effects in 10-year-old children and adults. Attention, Perception, & Psychophysics, 73(2), 334–348. doi:10.3758/s13414-010-0021-6

  19. Cousineau, D. (2005). Confidence intervals in within-subject designs: A simpler solution to Loftus and Masson’s method. Tutorials in Quantitative Methods for Psychology, 1(1), 42–45. doi:10.20982/tqmp.01.1.p042


  20. Deutsch, J. A., & Deutsch, D. (1963). Attention: Some theoretical considerations. Psychological Review, 70, 80–90. doi:10.1037/h0039515


  21. Franz, V. H., & Loftus, G. R. (2012). Standard errors and confidence intervals in within-subjects designs: Generalizing Loftus and Masson (1994) and avoiding the biases of alternative accounts. Psychonomic Bulletin & Review, 19, 395–404. doi:10.3758/s13423-012-0230-1

  22. Fukuda, K., & Vogel, E. K. (2009). Human variation in overriding attentional capture. Journal of Neuroscience, 29(27), 8726–8733. doi:10.1523/JNEUROSCI.2145-09.2009


  23. Gebauer, G. F., & Mackintosh, N. J. (2007). Psychometric intelligence dissociates implicit and explicit learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 33(1), 34–54. doi:10.1037/0278-7393.33.1.34

  24. Gluck, M. A., & Bower, G. H. (1988). From conditioning to category learning: An adaptive network model. Journal of Experimental Psychology: General, 117(3), 227–247. doi:10.1037/0096-3445.117.3.227

  25. Gluck, M. A., Shohamy, D., & Myers, C. (2002). How do people solve the ‘weather prediction’ task?: Individual variability in strategies for probabilistic category learning. Learning & Memory, 9(6), 408–418. doi:10.1101/lm.45202

  26. Gobet, F., Lane, P. C. R., Croker, S., Cheng, P. C.-H., Jones, G., Oliver, I., & Pine, J. M. (2001). Chunking mechanisms in human learning. Trends in Cognitive Sciences, 5(6), 236–242. doi:10.1016/S1364-6613(00)01662-4


  27. Goujon, A., Didierjean, A., & Thorpe, S. (2015). Investigating implicit statistical learning mechanisms through contextual cueing. Trends in Cognitive Sciences, 19(9), 524–533. doi:10.1016/j.tics.2015.07.009


  28. Green, C. S., & Bavelier, D. (2012). Learning, attentional control, and action video games. Current Biology, 22(6), R197–R206. doi:10.1016/j.cub.2012.02.012


  29. Hofmann, W., Baumeister, R. F., Forster, G., & Vohs, K. D. (2012). Everyday temptations: An experience sampling study of desire, conflict, and self-control. Journal of Personality and Social Psychology, 102(6), 1318–1335. doi:10.1037/a0026545


  30. Jaeggi, S. M., Studer-Luethi, B., Buschkuehl, M., Su, Y., Jonides, J., & Perrig, W. J. (2010). The relationship between n-back performance and matrix reasoning—Implications for training and transfer. Intelligence, 38(6), 625–635. doi:10.1016/j.intell.2010.09.001


  31. Jiang, Y., & Chun, M. M. (2003). Contextual cueing: Reciprocal influences between attention and implicit learning. In L. Jimenez (Ed.), Attention and implicit learning. Philadelphia, PA: John Benjamins.

  32. Jiang, Y., & Wagner, L. C. (2004). What is learned in spatial contextual cueing—Configuration or individual locations? Perception & Psychophysics, 66(3), 454–463. doi:10.3758/BF03194893


  33. Jonides, J., & Nee, D. E. (2006). Brain mechanisms of proactive interference in working memory. Neuroscience, 139(1), 181–193. doi:10.1016/j.neuroscience.2005.06.042


  34. Judd, T. (2013). Making sense of multitasking: Key behaviours. Computers & Education, 63, 358–367. doi:10.1016/j.compedu.2012.12.017

  35. Kane, M. J., & Engle, R. W. (2003). Working-memory capacity and the control of attention: The contributions of goal neglect, response competition, and task set to Stroop interference. Journal of Experimental Psychology: General, 132(1), 47–70. doi:10.1037/0096-3445.132.1.47

  36. Kaufman, S. B., DeYoung, C. G., Gray, J. R., Jimenez, L., Brown, J., & Mackintosh, N. (2010). Implicit learning as an ability. Cognition, 116(3), 321–340. doi:10.1016/j.cognition.2010.05.011


  37. Kononova, A., & Chiang, Y. (2015). Why do we multitask with media? Predictors of media multitasking among Internet users in the United States and Taiwan. Computers in Human Behavior, 50, 31–41. doi:10.1016/j.chb.2015.03.052

  38. Kunar, M. A., Flusberg, S. J., & Wolfe, J. M. (2006). Contextual cueing by global features. Perception & Psychophysics, 68(7), 1204–1216. doi:10.3758/BF03193721

  39. Lamy, D., Goshen-Kosover, A., Aviani, N., Harari, H., & Levkovitz, H. (2008). Implicit memory for spatial context in depression and schizophrenia. Journal of Abnormal Psychology, 117(4), 954–961. doi:10.1037/a0013867


  40. Lin, L. (2009). Breadth-biased versus focused cognitive control in media multitasking behaviors. Proceedings of the National Academy of Sciences of the United States of America, 106(37), 15521–15522. doi:10.1073/pnas.0908642106


  41. Lleras, A., & von Mühlenen, A. (2004). Spatial context and top-down strategies in visual search. Spatial Vision, 17(4), 465–482. doi:10.1163/1568568041920113


  42. Lui, K. F., & Wong, A. C. (2012). Does media multitasking always hurt? A positive correlation between multitasking and multisensory integration. Psychonomic Bulletin & Review, 19(4), 647–653. doi:10.3758/s13423-012-0245-7

  43. Manelis, A., & Reder, L. M. (2012). Procedural learning and associative memory mechanisms contribute to contextual cueing: Evidence from fMRI and eye-tracking. Learning & Memory, 19(11), 527–534. doi:10.1101/lm.025973.112


  44. McGeorge, P., & Burton, A. M. (1990). Semantic processing in an incidental learning task. The Quarterly Journal of Experimental Psychology, 42, 597–609. doi:10.1080/14640749008401239


  45. Merrill, E. C., Conners, F. A., Yang, Y., & Weathington, D. (2014). The acquisition of contextual cueing effects by persons with and without intellectual disability. Research in Developmental Disabilities, 35(10), 2341–2351. doi:10.1016/j.ridd.2014.05.026


  46. Minear, M., Brasher, F., McCurdy, M., Lewis, J., & Younggren, A. (2013). Working memory, fluid intelligence, and impulsiveness in heavy media multitaskers. Psychonomic Bulletin & Review, 20, 1274–1281. doi:10.3758/s13423-013-0456-6

  47. Moisala, M., Salmela, V., Hietajärvi, L., Salo, E., Carlson, S., Salonen, O., … Alho, K. (2016). Media multitasking is associated with distractibility and increased prefrontal activity in adolescents and young adults. NeuroImage, 134, 113–121. doi:10.1016/j.neuroimage.2016.04.011

  48. Nissen, M. J., & Bullemer, P. (1987). Attentional requirements of learning: Evidence from performance measures. Cognitive Psychology, 19(1), 1–32. doi:10.1016/0010-0285(87)90002-8

  49. Olson, I. R., & Chun, M. M. (2002). Perceptual constraints on implicit learning of spatial context. Visual Cognition, 9(3), 273–302. doi:10.1080/13506280042000162


  50. Ophir, E., Nass, C., & Wagner, A. D. (2009). Cognitive control in media multitaskers. Proceedings of the National Academy of Sciences of the United States of America, 106(37), 15583–15587. doi:10.1073/pnas.0903620106


  51. Price, A. L. (2005). Cortico-striatal contributions to category learning: Dissociating the verbal and implicit systems. Behavioral Neuroscience, 119(6), 1438–1447. doi:10.1037/0735-7044.119.6.1438

  52. Price, A. L. (2008). Distinguishing the contributions of implicit and explicit processes to performance of the weather prediction task. Memory & Cognition, 37(2), 210–222. doi:10.3758/MC.37.2.210


  53. Ralph, B. C., Thomson, D. R., Cheyne, J. A., & Smilek, D. (2014). Media multitasking and failures of attention in everyday life. Psychological Research, 78, 661–669. doi:10.1007/s00426-013-0523-7


  54. Ralph, B. C., Thomson, D. R., Seli, P., Carriere, J. S., & Smilek, D. (2015). Media multitasking and behavioural measures of sustained attention. Attention, Perception, & Psychophysics, 77(2), 390–401. doi:10.3758/s13414-014-0771-7

  55. Reber, A. S. (1993). Implicit learning and tacit knowledge. London, UK: Oxford University Press.


  56. Reber, A. S., & Lewis, S. (1977). Implicit learning: An analysis of the form and structure of a body of tacit knowledge. Cognition, 5(4), 333–361. doi:10.1016/0010-0277(77)90020-8


  57. Reinecke, L., Aufenanger, S., Beutel, M. E., Quiring, O., Stark, B., Wölfling, K., & Müller, K. W. (2016). Digital stress over the life span: The effects of communication load and internet multitasking on perceived stress and psychological health impairments in a German probability sample. Media Psychology, 19, 1–26. doi:10.1080/15213269.2015.1121832


  58. Rideout, V., Foehr, U., & Roberts, D. (2010). Generation M2: Media in the lives of 8–18 year olds. Retrieved from http://kff.org/other/event/generation-m2-media-in-the-lives-of/

  59. Salvucci, D. D., & Taatgen, N. A. (2011). The multitasking mind. New York, NY: Oxford University Press.


  60. Sanbonmatsu, D. M., Strayer, D. L., Medeiros-Ward, N., & Watson, J. M. (2013). Who multi-tasks and why? Multi-tasking ability, perceived multi-tasking ability, impulsivity, and sensation seeking. PLoS ONE, 8(1), 1–8. doi:10.1371/journal.pone.0054402


  61. Spence, I., & Feng, J. (2010). Video games and spatial cognition. Review of General Psychology, 14(2), 92–104. doi:10.1037/a0019491


  62. Sun, R., Slusarz, P., & Terry, C. (2005). The interaction of the explicit and the implicit in skill learning: A dual-process approach. Psychological Review, 112(1), 159–192. doi:10.1037/0033-295X.112.1.159


  63. Tales, A., Muir, J. L., Bayer, A., & Snowden, R. J. (2002). Spatial shifts in visual attention in normal ageing and dementia of the Alzheimer type. Neuropsychologia, 40(12), 2000–2012. doi:10.1016/S0028-3932(02)00057-X


  64. Travis, S., Mattingley, J., & Dux, P. (2013). On the role of working memory in spatial contextual cueing. Journal of Experimental Psychology: Learning, Memory, and Cognition, 39(1), 208–219. doi:10.1037/a0028644

  65. Treisman, A. M. (1960). Contextual cues in selective listening. Quarterly Journal of Experimental Psychology, 12, 243–248. doi:10.1080/17470216008416732


  66. Uncapher, M. R., Thieu, M. K., & Wagner, A. D. (2016). Media multitasking and memory: Differences in working memory and long-term memory. Psychonomic Bulletin & Review, 23(2), 483–490. doi:10.3758/s13423-015-0907-3

  67. Unsworth, N., & Engle, R. W. (2005). Individual differences in working memory capacity and learning: Evidence from the serial reaction time task. Memory & Cognition, 33(2), 213–220. doi:10.3758/BF03195310


  68. Unsworth, N., McMillan, B. D., Hambrick, D. Z., Kane, M. J., & Engle, R. W. (2015). Is playing video games related to cognitive abilities? Psychological Science, 26(6), 759–774. doi:10.1177/0956797615570367

  69. Vickery, T. J., Sussman, R. S., & Jiang, Y. V. (2010). Spatial context learning survives interference from working memory. Journal of Experimental Psychology: Human Perception and Performance, 36(6), 1358–1371. doi:10.1037/a0020558

  70. Vogel, E. K., Woodman, G. F., & Luck, S. J. (2001). Storage of features, conjunctions, and objects in visual working memory. Journal of Experimental Psychology: Human Perception and Performance, 27, 92–114. doi:10.1037/0096-1523.27.1.92


  71. Vogel, E. K., McCollough, A. W., & Machizawa, M. G. (2005). Neural measures reveal individual differences in controlling access to working memory. Nature, 438(7067), 500–503. doi:10.1038/nature04171



Author information


Corresponding author

Correspondence to Myoungju Shin.

Appendix

Table 1 Working memory performance for each n-back level for heavy, intermediate, and light media multitaskers
Fig. 5

Hit rates for heavy, intermediate, and light media multitaskers at each n-back level

Fig. 6

False alarm rates for heavy, intermediate, and light media multitaskers at each n-back level


About this article


Cite this article

Edwards, K.S., Shin, M. Media multitasking and implicit learning. Atten Percept Psychophys 79, 1535–1549 (2017). https://doi.org/10.3758/s13414-017-1319-4


Keywords

  • Implicit learning
  • Media multitasking
  • Attention
  • Working memory