Psychonomic Bulletin & Review, Volume 20, Issue 6, pp 1274–1281

Working memory, fluid intelligence, and impulsiveness in heavy media multitaskers

  • Meredith Minear
  • Faith Brasher
  • Mark McCurdy
  • Jack Lewis
  • Andrea Younggren
Brief Report

Abstract

Ophir, Nass, and Wagner (Proceedings of the National Academy of Sciences 106:15583–15587, 2009) reported that individuals who routinely engage in multiple forms of media use are actually worse at multitasking, possibly due to difficulties in ignoring irrelevant stimuli, both from external sources and from internal representations in memory. Using the media multitasking index (MMI) developed by Ophir et al., we identified heavy media multitaskers (HMMs) and light media multitaskers (LMMs) and tested them on measures of attention, working memory, task switching, and fluid intelligence, as well as self-reported impulsivity and self-control. We found that people who reported engaging in heavy amounts of media multitasking reported being more impulsive and performed more poorly on measures of fluid intelligence than did those who did not frequently engage in media multitasking. However, we could find no evidence to support the contention that HMMs are worse in a multitasking situation such as task switching or that they show any deficits in dealing with irrelevant or distracting information, as compared with LMMs.

Keywords

Task switching or executive control · Attention · Working memory · Media multi-tasking

The last 30 years have seen remarkable changes in the availability and use of various forms of media. Increases in the speed of and access to technology, accompanied by decreasing costs, have allowed the development of new forms of media consumption such as media multitasking, in which an individual engages in the simultaneous use of different forms of media. This behavior appears to be on the rise in children, teens, and young adults (Rideout, Foehr, & Roberts, 2010; Roberts, Foehr, & Rideout, 2005). Researchers have grown increasingly interested in the extent to which the use of modern technology may alter how individuals process information. Recent work has demonstrated that distraction while new information is learned leads to poorer retention and may even alter the neural systems involved (Foerde, Knowlton, & Poldrack, 2006). Studies of real-world behaviors such as driving while using cell phones (Strayer & Johnston, 2001), the effects of instant messaging on academic performance (Levine, Waite, & Bowman, 2007), and the effects of students using laptops (Fried, 2008) or texting during a lecture (Ellis, Daniels, & Jauregui, 2010) are not encouraging for the efficacy of media multitasking.

Ophir, Nass, and Wagner (2009) studied the question of whether individuals who report engaging in heavy media multitasking are systematically different from those who do not. Specifically, they asked, “Are chronic multi-taskers more attentive to irrelevant stimuli in the external environment and irrelevant representations in memory?” (Ophir et al., 2009, p. 15583). They investigated this question by first developing the Media Use Questionnaire to measure an individual’s preference for media multitasking. Participants are asked about their use of 12 forms of media and, for each form, how often they simultaneously engage in any of the other 11 forms. From these data, an MMI was developed and used to distinguish between HMMs and LMMs on the basis of the top and bottom quartiles of the MMI distribution. Ophir and colleagues then compared the performance of Stanford students who scored as HMMs and LMMs on a series of cognitive tasks. They found worse performance for HMMs than for LMMs under conditions of distraction, both for a version of the AX-CPT task when distractors were present and on a visual working memory task with a large number of distractors. They also reported a significantly higher false alarm rate for HMMs on a three-back letter task and worse performance by HMMs on task switching as measured by switch cost. They found no differences between the groups in measures of response inhibition and working memory capacity. They concluded that HMMs may be more susceptible to irrelevant information from both external and internal sources.

The work that has followed the publication of Ophir et al. (2009) has focused more on simple attentional tasks. Cain and Mitroff (2011) identified HMM and LMM individuals using the MMI and tested them on an additional singleton detection task. They found that HMMs appeared to take in more information from the environment than was necessary to perform the task, which suggests that HMMs may have a bias toward a broader focus of attention even when a narrower filter would improve performance. Lui and Wong (2012) reported evidence that this possible broader attentional bias in HMMs may lead to improved multisensory integration. These studies focused on fairly low-level processing differences between the two groups, with Cain and Mitroff deliberately choosing an attentional paradigm with low working memory demands in order to focus on attention. However, there are no published studies following up on what are perhaps the most startling and widely cited findings of the Ophir et al. work—that is, the proposed differences in cognitive control and reduced ability to deal with interference in working memory.

Therefore, in two studies, we tested for differences between HMMs and LMMs on measures of working memory, fluid intelligence, attention, and the resolution of interference in working memory. We also attempted a direct replication of the Ophir et al. (2009) finding that HMMs were worse at switching tasks. However, it is important to note that like Ophir et al., our studies cannot establish any causal relationships between media multitasking and cognition.

Study 1

In our first study, we focused on whether there are any group differences between HMMs and LMMs on standard measures of working memory capacity and fluid intelligence, as well as task switching ability, using the switching paradigm originally described in Ophir et al. (2009). In addition, we collected survey data on self-reported impulsivity and self-control in relation to preference for media multitasking.

Method

Participants

Two hundred twenty-one College of Idaho students (18–25 years of age, M = 19.8; 151 female) took the Media Use Questionnaire (Ophir et al., 2009), administered online for course research participation credit or extra credit. Thirty-three participants (10 males) with an MMI score greater than 5.36 were identified as HMMs, and 36 participants (15 males) with MMIs less than 3.18 were classified as LMMs. We used the same cutoffs reported by Cain and Mitroff (2011). The mean MMI score for the HMM group was 6.6 (SD = 1.3), and for the LMM group, 2.1 (SD = 0.67).

Materials

Survey measures

We administered three online surveys. The first survey was the Media Use Questionnaire developed by Ophir et al. (2009). Participants were asked to estimate how many hours a week they spent using 12 different media forms: print media, television, computer-based video (e.g., YouTube), music, nonmusical audio, video or computer games, telephone and mobile phone voice calls, instant messaging, SMS (text messaging), e-mail, Web surfing, and other computer-based applications (such as word processing). For each medium, they estimated how often they simultaneously used each of the other 11 media. Surveys took an average of 20 min to complete and were scored using the MMI described in Ophir et al. The second survey was the Barratt Impulsiveness Scale (BIS-11; Patton, Stanford, & Barratt, 1995), a frequently used self-report instrument for impulsivity (Stanford et al., 2009). It consists of 30 questions and yields an overall measure plus three subscales: attentional, motor, and nonplanning impulsiveness.
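For concreteness, the MMI scoring described above can be sketched in code. This is a minimal illustration, not the authors' scoring script: the function name, the data layout, and the 0–1 coding of the concurrency responses are our assumptions, though the index follows the formula given in Ophir et al. (2009), where each medium's concurrent-media count is weighted by the proportion of total media hours spent with that medium.

```python
def media_multitasking_index(hours, concurrency):
    """Sketch of an MMI in the style of Ophir et al. (2009).

    hours[i]          -- weekly hours spent using medium i.
    concurrency[i][j] -- how often medium j is used while using medium i,
                         coded 0 (never), 0.33 (a little), 0.67 (some),
                         or 1 (most of the time); this coding is our assumption.
    """
    total_hours = sum(hours)
    mmi = 0.0
    for i, h_i in enumerate(hours):
        # m_i: mean number of other media used concurrently with medium i
        m_i = sum(concurrency[i][j] for j in range(len(hours)) if j != i)
        # weight by the share of total media time spent with medium i
        mmi += m_i * h_i / total_hours
    return mmi
```

Under this formula, a participant who pairs every medium with exactly one other medium "most of the time" scores 1.0 regardless of total hours; the HMM cutoff used here (5.36) thus implies routinely combining several media at once.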

The third survey was the Self-Control Scale, consisting of 36 items measuring an individual’s perceived self-control (Tangney, Baumeister, & Boone, 2004). Higher scores indicate more self-control.

Laboratory tasks

Participants who came into the testing lab completed three computerized tasks. All were programmed in E-Prime 2 (Schneider, Eschman, & Zuccolotto, 2002).

Fluid intelligence

This was measured using 30 problems from Raven’s standard progressive matrices (RPM; Raven, 1998). The RPM is a nonverbal reasoning task in which participants are shown a pattern matrix with its final piece missing and must choose the item that completes the pattern. In the computerized version of this task, participants clicked on the item with the mouse and were given an unlimited amount of time to finish the task.

Working memory

To measure working memory capacity, we used the automated reading span, a standard and widely used task, described in Conway et al. (2005).

Task switching

We used the switching task exactly as described in Ophir et al. (2009).

Procedure

Participants completed the survey measures online and then were invited to be tested in the laboratory. Participants did not know that the laboratory measures were related to the online surveys they had taken earlier until debriefing. The order of the tasks was randomized across participants.

Results

Survey results

Two hundred twenty-one participants completed the Media Use Questionnaire online. The mean MMI score was 4.3, with a standard deviation of 1.9. We found that the MMI was positively correlated with impulsivity, r = .29, p < .01, and negatively correlated with self-control, r = −.16, p < .05. Impulsivity and self-control were correlated, r = −.47, p < .0001. For impulsivity, we further analyzed the data by examining the relationship between each of the second-order factors and found that MMI was significantly correlated with all three factors (attentional impulsivity, r = .25, p < .01; nonplanning impulsivity, r = .13, p < .05; and motor impulsivity, r = .33, p < .001; see Fig. 1).
Fig. 1

The relationship between the scores on the Media Multitasking Index and the motor impulsivity component of the Barratt Impulsiveness Scale

Laboratory task results

The data from 1 HMM participant on the RPM and data from 2 HMM participants on task switching were lost due to computer error. The mean scores and standard deviations are shown in Table 1. A two-tailed t-test revealed a significant difference between the HMM and LMM groups on the Raven’s, with better performance by the LMM group, t(66) = −2.18, p < .05, Cohen’s d = −.54. However, there was no significant difference between the groups on the reading span task, t(67) = 0.48, p = .63.
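The effect size reported above is Cohen's d for two independent groups: the mean difference divided by the pooled standard deviation. A minimal sketch of the standard formula (our own illustration, not code from this study):

```python
import math

def cohens_d(group1, group2):
    """Cohen's d for two independent groups, using the pooled SD."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    # unbiased sample variances
    var1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    var2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd
```

A negative d, as in the Raven's comparison above, simply means the first group (here, HMMs) scored lower than the second.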
Table 1

Means and standard deviations for Study 1 results

        Raven's Matrices  Reading Span  Single-Task Trials  Nonswitch Trials  Switch Trials    Mixing Cost    Switching Cost
HMMs    22.2 (4.7)        35.8 (17.3)   681.1 (126.7)       951.2 (207.3)     1,035.9 (254.0)  308.8 (181.3)  85.9 (99.6)
LMMs    24.7 (3.5)        34.8 (19.2)   673.9 (88.1)        997.0 (324.1)     1,099.7 (366.0)  355.1 (287.6)  107.5 (110.5)

Note. HMMs, heavy media multitaskers; LMMs, light media multitaskers

Task-switching performance was tested using a mixed factorial ANOVA with two specified contrasts: a mixing cost contrast testing the mean reaction time (RT) of nonswitch trials against single-task trials and a switch-cost contrast testing the mean of the switch trials against the mean of the nonswitch trials. Only RTs from correct trials were used. While there was a main effect of trial type, F(2, 130) = 126.2, p < .0001, we found no main effect of group, F < 1, and no evidence of any group difference in task-switching performance as measured either by mixing or switch cost, both Fs < 1.
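The two contrasts can be made concrete with a small sketch of how mixing and switch costs are derived from correct-trial RTs (our own illustration; the study's analysis used ANOVA contrasts over the same quantities):

```python
def switching_costs(single_rts, nonswitch_rts, switch_rts):
    """Mixing and switch costs (ms) from correct-trial RTs.

    Mixing cost: nonswitch trials in mixed blocks vs. single-task trials.
    Switch cost: switch trials vs. nonswitch trials within mixed blocks.
    """
    mean = lambda xs: sum(xs) / len(xs)
    mixing_cost = mean(nonswitch_rts) - mean(single_rts)
    switch_cost = mean(switch_rts) - mean(nonswitch_rts)
    return mixing_cost, switch_cost
```

Note that when costs are computed per participant and then averaged, the group-level cost columns in Table 1 need not equal simple differences between the table's mean RT columns.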

Discussion

The data from this study provide mixed support for Ophir et al. (2009). The lack of group differences in reading span supports Ophir et al.’s contention that there are no differences in working memory capacity between HMMs and LMMs. The poorer performance of HMMs on the matrix reasoning measure may support the hypothesis that HMMs have greater difficulty inhibiting distracting information. However, we did not find a group difference in switching performance. Switching performance appears to be sensitive to both lab-based training (Karbach & Kray, 2009; Minear & Shah, 2008) and real-world experiences such as being bilingual (Prior & MacWhinney, 2010) and playing video games (Cain, Landau, & Shimamura, 2012; Strobach, Frensch, & Schubert, 2012), as well as to group differences such as ADHD status (Cepeda, Cepeda, & Kramer, 2000). Ophir et al. reported a larger switch cost for HMMs and proposed that HMMs’ poor performance on switching may result from breadth-biased cognitive control, which is less effective at suppressing the irrelevant task set on switch trials. However, we were unable to replicate this finding.

Our survey data indicated a positive relationship between self-reported media multitasking and self-reported impulsivity. Other studies have reported relationships between personality traits and measures of multitasking. Jeong and Fishbein (2007) found sensation seeking to be predictive of media multitasking behavior in teenagers, and König, Oberacher, and Kleinmann (2010) reported a relationship between impulsivity and multitasking behavior at work, although impulsivity was not correlated with a self-reported preference for polychronicity. Therefore, there may be preexisting differences that contribute to a preference for media multitasking, and one possible explanation for the poor performance on the Raven’s is that our HMMs were more impatient and more likely to give up on difficult problems. However, we did not collect RT data on this measure, so we conducted a follow-up study in which we used a similar fluid intelligence task and measured both accuracy and RT.

Study 2

This study attempted to replicate the group difference on measures of fluid intelligence seen in Study 1 and to collect additional RT information.

Method

Participants

Fifty-seven participants were recruited: 27 HMMs (7 male; mean MMI = 6.41, range 5.31–8.37) and 30 LMMs (9 male; mean MMI = 2.1, range 0.66–2.89). Three HMMs and 4 LMMs had previously participated in Study 1. Participants received $10 or course credit for participating.

Materials and procedure

Participants completed the BIS and a computerized fluid reasoning task designed to collect RTs to individual items, as well as accuracy. The program included 24 items from the advanced RPM (ARPM), either the even- or the odd-numbered problems, counterbalanced across participants. Participants were asked to click on the alternative that best solved the matrix problem; there was no time limit on completing the task.

Results

One HMM participant failed to complete the BIS. Means and standard deviations are shown in Table 2. Two-tailed t-tests revealed significant group differences in ARPM accuracy and RT, with HMMs showing lower accuracy, t(55) = −2.14, p < .05, Cohen’s d = −0.56, and shorter RTs, t(55) = −2.69, p < .01, Cohen’s d = −0.72. On the BIS, the only significant difference between the two groups was on the motor subscale, t(54) = 2.86, p < .01, Cohen’s d = 0.77. We conducted two ANCOVAs to assess whether motor impulsivity could explain the group differences in accuracy and RT. For accuracy, the effect of group remained marginally significant when controlling for motor impulsivity, F(1, 53) = 3.83, p = .056, while for RT, the group difference was no longer significant, F(1, 53) = 3.1, p = .08. We also ran two ANOVAs to assess whether there was a differential effect of MMI status on the more difficult problems. To do this, we divided the problems into three sets: easy, medium, and difficult. Note that problem difficulty is confounded with time, since difficulty increases with problem order. For RT, we found main effects of both group, F(1, 55) = 7.2, p < .01, and problem difficulty, F(1, 55) = 97.1, p < .001, with both groups taking more time as the task progressed. We also found a significant interaction between difficulty and group, with LMMs showing a much larger increase in RT for the final third of the task, F(1, 55) = 9.9, p < .01 (see Fig. 2a). For the accuracy data, there were main effects of both group, F(1, 55) = 4.4, p < .05, and problem difficulty, F(1, 55) = 28.1, p < .001, but the interaction was not significant, F(1, 55) = 1.4, p = .24 (see Fig. 2b).
Table 2

Means and standard deviations for Study 2 results

        ARPM Accuracy  ARPM RT              BIS Score    Attentional  Motor       Nonplanning
HMMs    .58 (.18)      30,297.5 (10,882.6)  71.4 (10.6)  19.2 (3.9)   21.1 (4.1)  26.2 (4.6)
LMMs    .68 (.16)      40,780.1 (17,413.1)  66.0 (12.2)  18.2 (3.8)   17.8 (4.3)  24.7 (5.5)

Note. ARPM, advanced Raven’s progressive matrices; RT, reaction time; BIS, Barratt Impulsiveness Scale; HMMs, heavy media multitaskers; LMMs, light media multitaskers

Fig. 2

a Group reaction time (RT) differences on advanced Raven’s progressive matrices problems broken down by problem difficulty. b Group differences in accuracy. Error bars reflect standard errors. HMM, heavy media multitasker; LMM, light media multitasker

Discussion

These data replicated the difference between HMMs and LMMs on a fluid intelligence task. We also found that HMMs reported higher motor impulsivity, and this may have been reflected in shorter RTs as the task progressed. On the basis of Ophir et al.’s (2009) initial findings, it may be that HMMs experience greater interference from previous problems as the task progresses or may find it more difficult to suppress distracting information, such as incorrect alternatives, as the difficulty of the items increases.

Therefore, in our third study, we focused on testing the hypotheses that HMMs (1) have “greater difficulty filtering out irrelevant stimuli from their environment” and (2) “are less likely to ignore irrelevant representations in memory” (Ophir et al., 2009, p. 15585).

Study 3

Here, we tested for group differences in attention, in the ability to resolve interference in working memory, and again in task switching. To measure attention, we chose the attention network task (ANT), since it delivers three different measures of attention: alerting, orienting, and executive. This task has also shown sensitivity to changes in attentional functioning with experience, such as the restorative effects of nature (Berman, Jonides, & Kaplan, 2008), attentional training (Rueda, Rothbart, McCandliss, Saccomanno, & Posner, 2005), and meditation (Jha, Krompinger, & Baime, 2007). We tested for difficulties in dealing with interference in working memory by using the recent probes item recognition task. This task has been widely used to study interference in working memory (Jonides & Nee, 2006) and is sensitive to group differences such as age (Jonides et al., 2000) and individual differences such as fluid intelligence (Braver, Gray, & Burgess, 2007). Finally, we tested again for differences in task switching. This time, we made the switches predictable, since this may be more sensitive to differences in proactive cognitive control, such as keeping track of a sequence. We predicted that if HMMs have greater difficulty with irrelevant information from both internal and external sources, they would show worse performance on all three tasks.

Method

Participants

Fifty-three College of Idaho students participated, with 27 HMMs (9 male) and 26 LMMs (11 male). The HMMs had a mean MMI score of 7.03 (SD = 1.3), and the LMMs a mean of 2.01 (SD = 0.72). Ten HMMs and 8 LMMs also had participated in Study 1.

Tasks

Task switching

We used the same task as before, with only one change: The switches were predictable, with the task switching every fourth trial.

Recent probes item recognition

On each trial, participants were shown an array of four letters for 1,500 ms and were asked to remember them over a delay of 3,000 ms. A probe was then presented for 1,500 ms, and participants indicated, yes or no, whether the probe was a member of the current studied array. On half of the trials, the probe was a member of the current set. However, on the remaining trials, two thirds of the foils were recent foils; that is, they appeared as a part of the target array within the past 2 trials, so that they had recently been a target item. The remaining trials were nonrecent foils. There were three 48-trial blocks, for a total of 144 trials.
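The recent/nonrecent foil distinction described above amounts to simple trial bookkeeping, which can be sketched as follows (a hypothetical helper, not the actual E-Prime script; names and data structures are ours):

```python
def classify_foil(probe, current_set, previous_sets, lookback=2):
    """Classify a negative-probe trial as 'recent' or 'nonrecent'.

    A recent foil is absent from the current memory set but appeared
    in a target array within the past `lookback` trials.
    """
    assert probe not in current_set, "foil trials use probes outside the current set"
    # pool the items shown in the most recent `lookback` target arrays
    recent_items = set().union(*previous_sets[-lookback:]) if previous_sets else set()
    return "recent" if probe in recent_items else "nonrecent"
```

Interference shows up as slower, less accurate "no" responses on recent foils, since the probe's lingering familiarity must be overridden.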

ANT

The ANT combines a flanker task, in which a center arrow is flanked by two arrows on either side, with a spatial cuing paradigm to produce estimates of the three attention networks proposed by Posner and colleagues: alerting attention, orienting attention, and executive attention. The task is described in detail in Fan, McCandliss, Sommer, Raz, and Posner (2002). Participants used their index and ring fingers on the arrow keys of the number keypad to indicate the direction of the center arrow.

Results

All the means and standard deviations are presented in Table 3. The task-switching results were analyzed using the same mixed factorial ANOVA and contrasts as described in Study 1. There was a main effect of trial type, F(2, 102) = 86.2, p < .0001, with the longest RTs for switch trials and the shortest RTs for single-task trials, but no main effect of group, F < 1, and no difference between the groups on either mixing or switch costs, Fs < 1.
Table 3

Means and standard deviations for Study 3 results

Predictable Switching
        Single-Task Trials  Nonswitch Trials  Switch Trials   Mixing Cost    Switch Cost
HMMs    641.5 (18.5)        823.3 (33.9)      1,183.7 (92.7)  181.8 (150.5)  360.3 (333.3)
LMMs    626.9 (16.2)        809.7 (41.7)      1,138.8 (74.9)  182.8 (175.6)  329.1 (231.6)

Recent Probes Task
        Recent Probe RT  Recent Probe ACC  Nonrecent Probe RT  Nonrecent Probe ACC
HMMs    735.5 (93.5)     .86 (.10)         701.1 (93.5)        .94 (.07)
LMMs    749.1 (86.1)     .85 (.10)         706.5 (97.1)        .93 (.09)

ANT
        AA           OA           EA
HMMs    48.5 (26.4)  43.5 (22.6)  100.4 (28.2)
LMMs    43.9 (24.9)  39.3 (21.9)  98.3 (39.4)

Note. ANT, attention network task; AA, alerting attention; OA, orienting attention; EA, executive attention; RT, reaction time; ACC, accuracy; HMMs, heavy media multitaskers; LMMs, light media multitaskers

RT and accuracy data from the recent probes task were analyzed using a mixed factorial ANOVA with group (HMM vs. LMM) as a between-subjects factor and trial type (recent vs. nonrecent probe) as a within-subjects factor. The data from 2 HMM participants were lost due to computer error. There was a main effect of trial type for both RT and accuracy, with greater accuracy on nonrecent probes, F(1, 49) = 46.04, p < .0001, and shorter RTs for nonrecent probe trials, F(1, 49) = 16.8, p < .0001. There was no main effect of group for either accuracy or RT, Fs < 1, and no interaction between group and trial type for either accuracy, F(1, 49) = 1.2, p = .29, or RT, F < 1. Given Ophir et al.’s (2009) report that HMMs showed an increased false alarm rate in the three-back task, we ran an additional analysis including block (first vs. third) to make sure that no differences between the groups developed over the course of the task. There was a significant interaction between trial type and block, with accuracy on nonrecent probe trials increasing from the first to the last block while accuracy on recent probes decreased, F(1, 49) = 6.3, p < .05. However, no other effects were significant, all Fs < 1. Finally, we tested whether there were any group differences on the ANT for alerting, orienting, or executive attention; there were none, all Fs < 1.

Discussion

We were surprised to see no differences between a group of HMMs and LMMs on measures of attention, interference in working memory, and task switching. Cain and Mitroff (2011), in contrast, did report differences in attention, with HMMs attending to irrelevant information even in a situation where they could safely ignore it. One possible reason for the difference between our results and theirs may lie in the structure of the tasks. In the additional singleton paradigm, the different conditions are blocked, so that one can apply a particular attentional strategy based on the task instructions to an entire block. In the ANT, all the conditions are mixed together, which does not allow the employment of a particular attentional filter based on top-down knowledge.

We also did not see any differences in performance in the recent probes task. Ophir et al. (2009) proposed that HMMs are more susceptible to interference from familiar items in working memory on the basis of their increased false alarm rate on the three-back task. Therefore, we predicted that HMMs would show worse performance on recent negative probes. However, we found no group differences in either accuracy or RT on recent versus nonrecent negative probes. One explanation may lie in the differences between the n-back task and the recent probes task. While both tasks are used to study interference in working memory and share neural overlap in the left inferior frontal cortex, the n-back task includes other executive processes, such as updating (Jonides & Nee, 2006). Additionally, in the Ophir et al. study, cognitive load appears to be a consideration. For both the filtering and the n-back tasks, group differences were apparent only at the highest load. For the filtering task, this meant that group differences emerged only at six items, and for the n-back, only at the three-back level. We did not manipulate load in our task; therefore, it is possible that differences between the two groups may emerge with memory sets larger than four items. Finally, we again found no differences between HMMs and LMMs on a measure of task switching.

General discussion

The Ophir et al. (2009) study was an important and widely cited initial examination of the possible effects of media multitasking. However, it is worth noting that the studies reported had small sample sizes, with two of the three behavioral studies using an N of 15 in each group and the third using groups of 22 and 19. In three studies, we attempted both direct and conceptual replications of the original findings and, to our surprise, were unable to find much support for Ophir et al.’s hypotheses. Three possible explanations may lie in (1) differences in defining HMMs and LMMs, (2) task differences in cognitive processes and load, and (3) differences in participants.

Ophir et al. (2009) were the first to quantify media multitasking and based definitions of heavy and light on their own distribution of 262 scores using one SD above and below the mean, resulting in cutoff scores of 2.86 for LMMs and 5.9 for HMMs. Cain and Mitroff (2011) used the top and bottom quartiles from their distribution of 85 individuals, which gave them cutoffs of 3.18 for LMMs and 5.38 for HMMs. Our mean and SD gave us cutoffs of 5.07 for HMMs and 2.7 for LMMs. In Study 1, this would have led to an N of 37 HMMs and 30 LMMs. If we had used values based on Ophir et al.’s distribution, our Ns would have been 20 HMMs and 30 LMMs. We decided to adopt the Cain and Mitroff criterion, since it served as an independently established middle ground resulting in Ns of at least 30 per group. We conducted follow-up analyses for all three studies, using both Ophir et al.’s cutoff values and our own, and this did not change the results. However, establishing a standardized definition of what constitutes heavy and light usage would assist in future comparison across studies.

In our attempted conceptual replications, we used different tasks to measure the ability to inhibit distracting information and no longer relevant information in working memory. Had we found group differences, this would have strongly supported the original findings. However, the failure to find group differences is harder to interpret, since most tasks do not measure any one cognitive process. For example, while both n-back and recent probes are commonly used to measure interference in working memory, there are processes such as updating that are not shared between tasks. Another explanation may lie in the role of cognitive load. For three of Ophir et al.’s (2009) tasks, group differences emerged only under higher load (more distractors/higher n-back level). We did not explicitly manipulate load in our study, except perhaps in the Raven’s matrices, in which task difficulty increased across trials, and we did find some evidence (shorter RTs for more difficult items) that HMMs performed worse on these items. Therefore, task variations in processes and load may provide boundary conditions for the original differences proposed.

Finally, most puzzling was our inability to replicate a switch cost difference between LMMs and HMMs. Using larger sample sizes and the same task, we were unable to replicate this result and know of at least one other reported replication failure (Alzahabi & Becker, 2011). The biggest difference we can see between Ophir et al. (2009) and our study (other than sample size) is the populations from which participants were recruited. It may be that individuals who are able to engage in frequent media multitasking behaviors and attend a very selective university may employ different strategies, such as a broader attentional focus, than do other groups. In our sample from a small liberal arts college, we found greater impulsivity, worse self-control, and worse performance on measures of fluid intelligence in our HMMs. Whether these same relationships would be found in a sample of Stanford students is unknown. However, Ophir et al. did report no relationship between MMI in their sample and personality variables such as need for cognition and conscientiousness. Therefore, MMI status may be quite heterogeneous, with different populations of individuals who engage in media multitasking for varying reasons and degrees of success.

In conclusion, more research is needed to understand how media multitasking may be related to cognitive performance in different populations of teenagers and young adults.

Notes

Author Note

We thank Dr. Randall Engle’s Attention and Working Memory Lab for the automated reading span task and Dr. Patricia Reuter-Lorenz for sending us the recent probes letter recognition task.

Correspondence concerning this article should be addressed to Meredith Minear, Department of Psychology, The College of Idaho, Caldwell, ID 83605. Phone: (208) 459-517. E-mail: mereditheminear@gmail.com

References

  1. Alzahabi, R., & Becker, M. W. (2011). In defense of media multi-tasking: No increase in task-switch or dual-task costs. Journal of Vision, 11, article 102. doi:10.1167/11.11.102
  2. Berman, M. G., Jonides, J., & Kaplan, S. (2008). The cognitive benefits of interacting with nature. Psychological Science, 19, 1207–1212.
  3. Braver, T. S., Gray, J. R., & Burgess, G. C. (2007). Explaining the many varieties of working memory variation: Dual mechanisms of cognitive control. In A. R. A. Conway, C. Jarrold, M. J. Kane, A. Miyake, & J. Towse (Eds.), Variation in working memory (pp. 76–106). New York: Oxford University Press.
  4. Cain, M. S., Landau, A. N., & Shimamura, A. P. (2012). Action video game experience reduces the cost of switching tasks. Attention, Perception, & Psychophysics, 74, 641–647. doi:10.3758/s13414-012-0284-1
  5. Cain, M. S., & Mitroff, S. R. (2011). Distractor filtering in media multitaskers. Perception, 40, 1183–1192.
  6. Cepeda, N. J., Cepeda, M. L., & Kramer, A. F. (2000). Task switching and attention deficit hyperactivity disorder. Journal of Abnormal Child Psychology, 28, 213–226.
  7. Conway, A. R. A., Kane, M. J., Bunting, M. F., Hambrick, D. Z., Wilhelm, O., & Engle, R. W. (2005). Working memory span tasks: A methodological review and user’s guide. Psychonomic Bulletin & Review, 12, 769–786.
  8. Ellis, Y., Daniels, B. W., & Jauregui, A. (2010). The effect of multitasking on the grade performance of business students. Research in Higher Education Journal, 8, 1–10.
  9. Fan, J., McCandliss, B. D., Sommer, T., Raz, A., & Posner, M. I. (2002). Testing the efficiency and independence of attentional networks. Journal of Cognitive Neuroscience, 14, 340–347.
  10. Foerde, K., Knowlton, B. J., & Poldrack, R. A. (2006). Modulation of competing memory systems by distraction. Proceedings of the National Academy of Sciences, 103, 11778–11783.
  11. Fried, C. B. (2008). In-class laptop use and its effects on student learning. Computers in Education, 50, 906–914.
  12. Jeong, S. J., & Fishbein, M. (2007). Predictors of multitasking with media: Media factors and audience factors. Media Psychology, 10, 364–384.
  13. Jha, A. P., Krompinger, J., & Baime, M. J. (2007). Mindfulness training modifies subsystems of attention. Cognitive, Affective, & Behavioral Neuroscience, 7, 109–119.
  14. Jonides, J., Marshuetz, C., Smith, E. E., Reuter-Lorenz, P. A., Koeppe, R. A., & Hartley, A. (2000). Age differences in behavior and PET activation reveal differences in interference resolution in verbal working memory. Journal of Cognitive Neuroscience, 12, 188–196.
  15. Jonides, J., & Nee, D. E. (2006). Brain mechanisms of proactive interference in working memory. Neuroscience, 139, 181–193.
  16. Karbach, J., & Kray, J. (2009). How useful is executive control training? Age differences in near and far transfer of task-switching training. Developmental Science, 12, 978–990.
  17. König, C. J., Oberacher, L., & Kleinmann, M. (2010). Personal and situational determinants of multitasking at work. Journal of Personnel Psychology, 9, 99–103.
  18. Levine, L. E., Waite, B. M., & Bowman, L. L. (2007). Electronic media use, reading and academic distractibility in college youth. Cyberpsychology & Behavior, 10, 560–566.
  19. Lui, K. F. H., & Wong, A. C. N. (2012). Does media multitasking always hurt? A positive correlation between multitasking and multisensory integration. Psychonomic Bulletin & Review, 19, 647–653.
  20. Minear, M., & Shah, P. (2008). Training and transfer effects in task switching. Memory & Cognition, 36, 1470–1483.
  21. Ophir, E., Nass, C., & Wagner, A. (2009). Cognitive control in media multitaskers. Proceedings of the National Academy of Sciences, 106, 15583–15587.
  22. Patton, J. H., Stanford, M. S., & Barratt, E. S. (1995). Factor structure of the Barratt impulsiveness scale. Journal of Clinical Psychology, 51, 768–774.
  23. Prior, A., & MacWhinney, B. (2010). A bilingual advantage in task switching. Bilingualism: Language and Cognition, 13, 253–262.
  24. Raven, J. (1998). Manual for Raven’s progressive matrices and vocabulary scales. Oxford: Oxford Psychologists Press.
  25. Rideout, V. J., Foehr, U. G., & Roberts, D. F. (2010). Generation M2: Media in the lives of 8- to 18-year-olds (No. 8010). Menlo Park, CA: The Kaiser Family Foundation. Retrieved from http://www.kff.org/entmedia/8010.cfm
  26. Roberts, D. F., Foehr, U. G., & Rideout, V. J. (2005). Generation M: Media in the lives of 8–18 year-olds. Menlo Park, CA: Kaiser Family Foundation. Retrieved from http://www.kff.org/entmedia/upload/Generation-M-Media-in-the-Lives-of-8-18-Year-olds-Report.pdf
  27. Rueda, M. R., Rothbart, M. K., McCandliss, B. D., Saccomanno, L., & Posner, M. I. (2005). Training, maturation, and genetic influences on the development of executive attention. Proceedings of the National Academy of Sciences, 102, 14931–14936.
  28. Schneider, W., Eschman, A., & Zuccolotto, A. (2002). E-Prime user’s guide. Pittsburgh: Psychology Software Tools.
  29. Stanford, M. S., Mathias, C. W., Dougherty, D. M., Lake, S. L., Anderson, N. E., & Patton, J. H. (2009). Fifty years of the Barratt impulsiveness scale: An update and review. Personality and Individual Differences, 47, 385–395.
  30. Strayer, D. L., & Johnston, W. A. (2001). Driven to distraction: Dual-task studies of simulated driving and conversing on a cellular telephone. Psychological Science, 12, 462–466.
  31. Strobach, T., Frensch, P. A., & Schubert, T. (2012). Video game practice optimizes executive control skills in dual-task and task switching situations. Acta Psychologica, 140, 13–24.
  32. Tangney, J. P., Baumeister, R. F., & Boone, A. L. (2004). High self-control predicts good adjustment, less pathology, better grades, and interpersonal success. Journal of Personality, 72, 271–322.

Copyright information

© Psychonomic Society, Inc. 2013

Authors and Affiliations

  • Meredith Minear (1)
  • Faith Brasher (2)
  • Mark McCurdy (2)
  • Jack Lewis (2)
  • Andrea Younggren (2)
  1. Department of Psychology, University of Wyoming, Laramie, USA
  2. Department of Psychology, The College of Idaho, Caldwell, USA
